Yet Another (ex-)VC Blog

Cloud computing, entrepreneurship, and venture capital

Opening the cloud computing kimono

with one comment


One of the central tenets of cloud hosting (e.g. Amazon EC2) is that application owners no longer need to be concerned with the lower-level underpinnings of the infrastructure.  Theoretically, the app owners can focus on their application code and let the cloud provider handle the configuration and scaling of everything underneath (VM, server, network, storage).  This sounds great, but in practice it's just not the case, particularly in the enterprise world.
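To make that promise concrete, here's a minimal sketch of the abstraction using the boto Python library for EC2.  To be clear, this is my illustration, not anything a provider publishes: the AMI ID and instance type are placeholders, and credentials are assumed to come from the environment.

    import time
    import boto

    # Connect to EC2; boto reads AWS credentials from the environment.
    conn = boto.connect_ec2()

    # One API call provisions a virtual machine.  The physical server,
    # network, and storage behind it are entirely the provider's concern.
    reservation = conn.run_instances("ami-12345678", instance_type="m1.small")
    instance = reservation.instances[0]

    # Wait for the VM to come up, then deploy application code to it.
    while instance.state != "running":
        time.sleep(5)
        instance.update()  # refresh state from the EC2 API
    print(instance.public_dns_name)

Notice that nothing in this snippet names a server, a switch, or a disk.  That is exactly the abstraction being sold.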

When I was running infrastructure operations at FOXSports.com, we used a managed services provider for a variety of our hosting needs.  They provided certain services on a “cloud” basis, which meant that our servers could access these resources freely, but that we had very limited visibility into the infrastructure behind them.  This made it very hard to debug issues, since the MSP exposed only limited monitoring information for the systems providing the service.  I remember fighting very hard to get additional visibility, and ultimately the MSP built some tools to support that need.

Fast forward to 2008: if I am an enterprise thinking about deploying anything remotely business-critical to a cloud hosting provider, I will want to know (in gory detail) how the systems and networking supporting the infrastructure are configured and architected.  Why?  Because if something in the infrastructure breaks (and as we know, something always breaks), I am going to need to fix the issue as soon as possible.  With little visibility into the underlying infrastructure, I'm going to have a very hard time isolating and ultimately solving or working around the problem.
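To see how shallow the outside view is, consider what the provider's API actually exposes when an instance misbehaves.  Another hedged sketch, again assuming boto (the instance ID is a placeholder):

    import boto

    conn = boto.connect_ec2()  # credentials from the environment

    # Ask EC2 about a (placeholder) instance we suspect is unhealthy.
    reservations = conn.get_all_instances(instance_ids=["i-12345678"])
    instance = reservations[0].instances[0]

    # This is roughly the extent of the visibility the API grants:
    print(instance.state)          # "running" -- but not *why* it is slow
    print(instance.instance_type)  # the VM size, not the physical host
    print(instance.placement)      # an availability zone, not a rack or switch

    # No call reveals the hypervisor, the rack, or the network path,
    # so fault isolation from the customer side stops about here.

When the fault is in the layers below those three attributes, the customer is stuck waiting on the provider.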

Where am I going with this?  For the larger clouds (like Amazon), there is only a limited amount of information publicly available about how the underpinnings of their systems work.  If I'm Amazon, for example, I probably have a great deal of know-how and trade secrets related to my cloud infrastructure that I'd like to protect.

The conundrum (I think) is that to see true success in the enterprise, cloud providers will need to reveal a good deal of this information to potential customers in order to get them comfortable enough to move significant workloads and applications to the cloud.  Is it worth it for the cloud provider to give up some of that competitive advantage in exchange for more enterprise traction?


Written by John Gannon

October 31, 2008 at 6:30 pm

One Response


  1. Hi,

    Well, it would definitely help users gain confidence in cloud providers.

    But providers can also take the other path. They can ignore the distrust and simply provide the best service for the buck. If they do it long enough and reliably, they will have a track record to point to when convincing users that the cloud is the (safe) way.
    At least for Amazon, this seems to be the strategy.

    Much the same way that users today don’t demand access to CPU architecture, hard disk design, etc., to be confident the components are solid when buying servers.

    Both paths are possible, but the fact is that users aren’t really comfortable right now.

    bye
    Andraz Tori, Zemanta


    Andraz Tori

    November 1, 2008 at 10:09 am

