Opening the cloud computing kimono

One of the central tenets of cloud hosting (e.g. Amazon EC2) is that application owners no longer need to be concerned with the lower-level underpinnings of the infrastructure.  Theoretically, app owners can focus on their application code and let the cloud provider handle the configuration and scaling of everything underneath (VMs, servers, network, storage).  This sounds great, but in practice it's just not the case, particularly in the enterprise world.

When I was running infrastructure operations, we used a managed services provider (MSP) for a variety of our hosting needs.  They provided certain services to us on a “cloud” basis, which meant that our servers could access these resources freely but that we had very limited visibility into the infrastructure providing them.  This made it very hard to debug issues related to a service, since the MSP exposed limited monitoring information for the systems behind it.  I remember fighting very hard to get additional visibility into the service, and ultimately the MSP built some tools to support that need.

Fast forward to 2008: if I’m an enterprise thinking about deploying anything remotely business-critical to a cloud hosting provider, I will want to know (in gory detail) how the systems and networking supporting the infrastructure are configured and architected.  Why?  Because if something in the infrastructure breaks (and as we know, something always breaks), I am going to need to fix the issue as soon as possible.  With little visibility into the underlying infrastructure, I’m going to have a very hard time isolating and ultimately solving or working around the problem.

Where am I going with this?  For the larger clouds (like Amazon), there is a somewhat limited amount of information publicly available about how the underpinnings of their systems work.  If I’m Amazon, for example, I probably have a great deal of know-how and trade secrets related to my cloud infrastructure that I’d like to protect.

The conundrum (I think) is that to see true success in the enterprise, cloud providers will need to reveal a good deal of this information to potential customers in order to get them comfortable enough to move significant workloads and applications to the cloud.  Is it worth it for the cloud provider to give up some of that competitive advantage in exchange for more enterprise traction?



1 comment
  1. Hi,

    Well, it would definitely help users gain confidence in cloud providers.

    But providers can also take the other path.  They can ignore the distrust and simply provide the best service for the buck.  If they do it long enough and reliably, they will have a track record to point to when convincing users that the cloud is the (safe) way.
    At least for Amazon, this seems to be the strategy.

    Much the same as users today don’t demand access to CPU architecture, hard disk design, etc. to be confident the components are solid when buying servers.

    Both paths are possible, but the fact is that users aren’t really comfortable right now.

    Andraz Tori, Zemanta
