The Big Yellow Book

Seeing the World from Both Oculars-- a Bananaslug's Journal


Is cloud computing really in the fog?
bigbananaslug
Yesterday, I got a press release from advisory and consulting firm Ovum about the deflation of the "cloud computing" hype balloon.

Here's the press release, and then I will make comments:

OVUM COMMENT
David Mitchell, SVP IT Research at Ovum
Getting lost in the clouds

Ma.gnolia, the social bookmarking service that many have relied on to share and exchange bookmarks, has reported serious operational problems over the past two weeks. The result is that a substantial amount of user data has been lost to hardware failures, and hopes of recovering it are slim – despite the endeavours of those running the service.


The general assumption that cloud equates to global scale is fallacious

There is a general presumption that organisations that offer cloud-based services have access to near-infinite pools of computing resources and that these resources are operated to ‘best in class’ standards.

Of course, the cloud service provider might not actually own all of the equipment. It might work with one of the major data centre experts globally or use an infrastructure-as-a-service provider such as Amazon EC2. Evidently, this has not been the case with Ma.gnolia and we should not expect this to be an isolated case. The assumption of near-infinite scale should never really have been accepted in the first place.

Cloud providers have the same investment criteria as traditional businesses, in that they are only able to afford the capital investment required to build out their infrastructure when there are revenues to justify the outlay. Ma.gnolia, for example, was a service delivered on a veritable shoestring, with low levels of IT hardware investment and operational procedures that did not prevent data loss.
Buyer beware: conduct thorough due diligence of cloud services

Buyers of cloud services need to treat their purchases with the same seriousness as they would treat mainstream IT purchases, unless it is acceptable for those cloud services to have unknown service levels and to lose the data that customers entrust to them.

Obviously, there are some cloud services where these caveats are entirely acceptable, as not all services need mission-critical robustness. Part of the traditional buying process for enterprise software and services sees the buyer undertake a degree of technical and commercial due diligence on the provider.

For software purchases this has generally focused on the functional fit of the software and the verification that the supplier has the financial strength to continue to deliver product support and further enhancements. For IT services the recipe is slightly different, focusing more on the actual service delivery infrastructure and the processes used to deliver the service. For enterprises, buying cloud services should become more akin to purchasing IT services than to acquiring software products.

At present, too many treat the acquisition of cloud services as though they are acquiring disposable commodities.

Cloud services need to encompass the corporate architecture

The real ‘take away’ from the Ma.gnolia problems is that CIOs should treat cloud services the same way as they treat other IT assets that they use.

They need to ensure that they have effective backup and recovery plans for the data held in cloud services, in the same way as they would for on-premise services – whether those backup services are provided by the cloud provider or by the CIO. They also need to test these regularly.
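To make that concrete: here is a minimal sketch, in Python, of the kind of backup being asked for here, pulling your own copy of the data a cloud service holds to a dated local file and then checking that the snapshot can actually be read back. The endpoint, token, and response shape are hypothetical stand-ins, not any real service's API.

    # Minimal sketch: export data from a (hypothetical) cloud bookmarking API to a
    # local, dated snapshot, then "test the backup" by reloading it.
    import datetime
    import json
    import pathlib
    import urllib.request

    API_URL = "https://bookmarks.example.com/api/v1/export"  # hypothetical endpoint
    API_TOKEN = "replace-with-your-token"                     # hypothetical credential
    BACKUP_DIR = pathlib.Path("backups")

    def export_bookmarks() -> pathlib.Path:
        """Fetch a full export and write it to a dated local file."""
        request = urllib.request.Request(
            API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}
        )
        with urllib.request.urlopen(request) as response:
            data = json.load(response)
        BACKUP_DIR.mkdir(exist_ok=True)
        snapshot = BACKUP_DIR / f"bookmarks-{datetime.date.today().isoformat()}.json"
        snapshot.write_text(json.dumps(data, indent=2))
        return snapshot

    def verify_snapshot(snapshot: pathlib.Path) -> bool:
        """Reload the snapshot and confirm it parses and is not empty."""
        restored = json.loads(snapshot.read_text())
        return bool(restored)

    if __name__ == "__main__":
        path = export_bookmarks()
        print(f"wrote {path}, readable: {verify_snapshot(path)}")

The point is less the script itself than the habit: the copy you control is the backup, and a backup you have never restored is only a hope.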

They need to have considered the disaster recovery and business continuity provisions – so that the business can continue when a catastrophic failure occurs. For the CIO to be able to include the cloud services into the broader corporate architecture, the real need is around interoperability.

At the simplest level, this means that all cloud services should expose functionality through services, so that they offer integration points. At a richer level, vendors of different cloud services should work together, so that potentially competing offerings can work together and support each other – e.g. two storage cloud services acting as a mirroring service. Ultimately, interoperability around cloud needs to be taken more seriously and offer progressively richer functionality, so that cloud-to-cloud and cloud-to-on-premise integration is seamless and can become part of the standard corporate architecture. Rather than expecting consumers to change what they want and to live with the more fragile parts of the cloud, cloud providers must change to encompass traditional IT thinking.
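To illustrate the mirroring idea, here is a toy Python sketch: every write goes to two storage backends, and a read falls back to the second when the first has lost the object. The StorageClient class is a stand-in for a provider's API, not any vendor's real SDK.

    # Toy illustration of two storage clouds mirroring each other: writes go to
    # both backends, reads fall back if one backend has lost the object.
    class StorageClient:
        """Stand-in for a cloud storage API; a real client would call a provider over HTTP."""
        def __init__(self, name: str):
            self.name = name
            self._objects: dict[str, bytes] = {}

        def put(self, key: str, data: bytes) -> None:
            self._objects[key] = data

        def get(self, key: str) -> bytes:
            return self._objects[key]  # raises KeyError if the object is gone

    class MirroredStore:
        """Writes to every backend; reads from the first backend that still has the key."""
        def __init__(self, *backends: StorageClient):
            self.backends = backends

        def put(self, key: str, data: bytes) -> None:
            for backend in self.backends:
                backend.put(key, data)

        def get(self, key: str) -> bytes:
            for backend in self.backends:
                try:
                    return backend.get(key)
                except KeyError:
                    continue
            raise KeyError(key)

    store = MirroredStore(StorageClient("cloud-a"), StorageClient("cloud-b"))
    store.put("bookmarks.json", b"{}")
    print(store.get("bookmarks.json"))

Real cloud-to-cloud mirroring would also need versioning and conflict handling, but the shape of the integration point is the same: each service exposes simple put/get operations that another service can call.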


So what does this mean for us mere mortals? I think Ovum's Mitchell is absolutely right. Cloud computing is not ready for prime time except in very low-risk applications where unplanned downtime at any time is acceptable.

Ummm....not really. I'll grant, readily, that depending on its implementation, cloud computing can be a very low-availability model. What it boils down to, though, is that cloud computing is really dependent on (a) the business model, and (b) the back end. As a way for an organization to harness desktop computational cycles that are otherwise being frittered away, it's a great idea. If you operate the way Ma.gnolia did, well, you risk being burned.

One thing you've got to remember is that "unplanned downtime" means a completely different thing in cloud computing vs. a data center. In the former, it's much more "unplanned degradation of service" than anything else.

