Some Takeaways from Interop 2011
When industry heavyweights get together, you can be assured of some insights on where the industry is and where it is going. The same is true for cloud computing, and we have already covered a few such events on this site, including some scheduled to be held in the near future.
Interop 2011 is one such event that brought together several notables from the cloud computing world. The event was held at Mandalay Bay in Las Vegas from May 7 to May 12, 2011, and, in the words of the organizers, provided “a comprehensive and unbiased understanding of all the latest innovations – including cloud computing, virtualization, security, mobility and data center advances – that help position” a company for growth.
The list of attendees read like a veritable who's-who of the high-technology world, with representation from Alcatel-Lucent, Amazon, Avaya, Broadcom, Cisco, Citrix, Dell, D-Link, EMC, HP, IBM, Intel, Microsoft, Novell, Rackspace, Terremark, VeriSign, VMware, and many, many more. Besides traditional promotion, the organizers made optimum use of social media to publicize the event, including a Facebook page, a LinkedIn profile and a Twitter feed.
We present some of the interesting takeaways from the event, and tie them in with some of the articles on this website.
The comparison of doing business on cloud computing with traveling on airplanes drew a lot of media attention. Simon Crosby, Chief Technology Officer of Citrix, drew this interesting parallel during a panel discussion. It has been discussed at length in an earlier article (See: The Similarities between Airplane Travel and Cloud Computing).
Andy Schroepfer, vice president of enterprise strategy at Rackspace, was of the opinion that operators should invest in backup, redundancy and resiliency to prevent an outage like the recent Amazon cloud fracas from destabilizing their business. In other words, cloud computing is not infallible.
Randy Rowland, senior vice president of product development at Terremark, had some harsh words for the uninformed customer. "I don't feel sorry for the application writer that doesn't understand the infrastructure they're writing to," Rowland said. "They need to understand how it's built, do their due diligence." He was referring to the fact that some companies managed to weather the Amazon crisis through clever tweaks, a point mentioned in an earlier article here (See: Did Human Error Cause The Amazon Cloud Computing Outage?).
Personally speaking, I cannot agree with Rowland and shift the entire weight of responsibility onto the customer. The all-important question here is: Did Amazon outline the possibilities of failure with its existing setup and advise customers accordingly? And if such load balancing was so successful in solving problems of this magnitude, why didn't the provider do it itself?
Many of the industry speakers mentioned that Service Level Agreements (SLAs) need to be defined in contracts, but failed to address the issue that contracts in this field are themselves prone to vagueness and loopholes (See: The Small Print in Cloud Computing Contracts).
Moreover, service providers seem to continue to be non-committal on clearing up this issue, a fact earlier mentioned here (See: Cloud Based Service Level Agreements – Two Different Worlds). In my opinion, a lot of work needs to be done here. Perhaps the IEEE can help show the way (See: The Similarities between Airplane Travel and Cloud Computing).
Kirk Skaugen, vice president and general manager of Intel's Data Center Group, came up with some interesting numbers on cloud computing. He said that by 2015 cloud computing could save $25 billion in IT spending, reduce energy consumption by 45 gigawatts and have a monumental impact on $100 billion of industry services. While these figures are quite optimistic, they are not impossible, going by earlier estimates released by several reputed organizations.
By Sourya Biswas