Venture Capital Jobs Blog

Curated by John Gannon and Team

Posts Tagged ‘Cloud computing’

Cloudslam panel now available for download


Wanted to quickly post to let you know that if you missed the Cloudslam panel on cloud adoption, you can download it here. I was a member of the panel along with a variety of industry and investor folks.

If you listen, I’d love to get your comments on the discussion.


Written by John Gannon

April 21, 2009 at 9:17 pm


The Cloud is kicking Mike D’s butt, and we are all better off because of it!



Mike D, one of the smartest guys I’ve ever worked with, has been blogging about cloud computing and has a unique perspective as one of VMware’s cloud architects (the cloud is kicking his butt!). I’m very happy he’s blogging, and I’m looking forward to some strong opinions and stories from the trenches.

Mike, good to ‘see’ you again!


Written by John Gannon

April 11, 2009 at 10:13 pm


Feld Thoughts spurring my thoughts on cloud computing



Brad Feld wrote a couple of great posts over the last week about some of the challenges with cloud computing around coordinating infrastructure-level services in complex cloud environments.  Namely, that most cloud computing (hosting) providers have APIs for primitive functions like creating and starting/stopping instances, but lack coordination or error-handling functionality at a granular (operating system) level.
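To make those primitives concrete, here is roughly what the create/start/stop surface looks like through boto, a Python interface to EC2; the AMI ID is a placeholder and credentials are assumed to be configured in the environment. Note there is nothing here about ordering or dependencies between machines, which is exactly Brad's point:

```python
import boto.ec2

# Connect to the EC2 API.
conn = boto.ec2.connect_to_region('us-east-1')

# Primitive: create (launch) an instance from a machine image.
reservation = conn.run_instances('ami-12345678', instance_type='m1.small')
instance = reservation.instances[0]

# Primitives: stop and start an existing instance by ID.
conn.stop_instances(instance_ids=[instance.id])
conn.start_instances(instance_ids=[instance.id])
```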

These posts struck a chord with me, thinking back to my days as a UNIX systems administrator at a Fortune 100 financial services company.  We ran a three-tier architecture with a large number of web servers, application servers, and a pair of beefy database boxes at the bottom of the stack.  Whenever we needed to reset the environment, we had to go through a very specific process to get things online again.

Here’s what a reset looked like:

  • Restart the database servers and make sure they are working
  • Restart the application servers and make sure they are talking to the database
  • Restart the web servers and make sure the plugins are talking to the app servers
  • Flush the load balancers and firewalls

This whole process took about 15 minutes, was entirely manual, and was therefore quite error prone.  For example, if you started the app servers before the database servers – oops, do not pass go, do not collect $200.
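For illustration, here is a minimal sketch of that runbook as a script, assuming SSH access to each box; the hostnames and health-check commands are hypothetical stand-ins, but the strict tier ordering is the point:

```python
import subprocess
import time

# Tiers must come up in strict order: database -> app -> web.
# Hostnames and health-check commands below are hypothetical examples.
TIERS = [
    ('database', ['db1', 'db2'], 'pgrep -x oracle'),
    ('app', ['app1', 'app2', 'app3'], 'pgrep -x java'),
    ('web', ['web1', 'web2', 'web3'], 'pgrep -x httpd'),
]

def restart_tier(name, hosts, health_cmd, timeout=300):
    """Restart every host in a tier, then block until each passes its health check."""
    for host in hosts:
        subprocess.check_call(['ssh', host, 'sudo', 'reboot'])
    deadline = time.time() + timeout
    for host in hosts:
        while subprocess.call(['ssh', host, health_cmd]) != 0:
            if time.time() > deadline:
                raise RuntimeError('%s tier: %s failed its health check' % (name, host))
            time.sleep(10)

# The ordering lives in the script rather than in an operator's head;
# starting the app tier before the database tier is no longer possible.
for name, hosts, health_cmd in TIERS:
    restart_tier(name, hosts, health_cmd)
```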

Although that example comes from a traditional hosting environment in 2000, I imagine there will be similar problems for cloud-hosted applications of medium to high complexity.

Vendors like VMware have tried to address this kind of issue by providing a workflow engine that can be used to automate sophisticated processes within the infrastructure, and I know RightScale has the RightScripts framework as well, but there is still a major gap.

These systems presuppose that a developer will know and care that there are lower-level dependencies beneath the APIs through which they interact with the cloud environment.

The beauty of the cloud is that the developers shouldn’t need to know or care about which server starts before which other server, or that sshd needs to start on box A before box B can run a batch job.
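One way to push that knowledge out of developers’ heads and into the platform would be a declarative dependency graph that the provider executes on the app owner’s behalf. A toy sketch of the idea, with hypothetical service names:

```python
# Each service declares only what it depends on;
# the platform derives the start order itself.
DEPENDENCIES = {
    'web': ['app'],
    'app': ['database'],
    'batch': ['sshd_box_a'],  # box B's batch job needs sshd up on box A first
    'database': [],
    'sshd_box_a': [],
}

def start_order(deps):
    """Topologically sort the services so every dependency starts first."""
    order, seen = [], set()
    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps[svc]:
            visit(dep)
        order.append(svc)
    for svc in deps:
        visit(svc)
    return order

print(start_order(DEPENDENCIES))
# -> ['database', 'app', 'web', 'sshd_box_a', 'batch']
```

The developer writes the dictionary; the platform owns the ordering, retries, and error handling. That is the gap I don't think the workflow engines have closed yet.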

But as Brad implies in his post, I think we’ve got a long way to go…


Written by John Gannon

March 18, 2009 at 4:44 pm

Are Cloud Based Memory Architectures the Next Big Thing? | High Scalability



I’ve probably said this before, but the cloud is a new computing platform that some have learned to exploit, others are scrambling to master, but most people will see as nothing but a minor variation on what they’re already doing. This is not new. When time sharing was invented, the batch guys considered it as remote job entry, just a variation on batch. When departmental computing came along (VAXes, et al), the timesharing guys considered it nothing but timesharing on a smaller scale. When PCs and client/server computing came along, the departmental computing guys (i.e. DEC), considered PCs to be a special case of smart terminals. And when the Internet blew into town, the client server guys considered it as nothing more than a global scale LAN. So the batch guys are dead, the timesharing guys are dead, the departmental computing guys are dead, and the client server guys are dead. Notice a pattern?

via Are Cloud Based Memory Architectures the Next Big Thing? | High Scalability.


Written by John Gannon

March 17, 2009 at 2:33 pm


Systems management or cloud management?



With the cloud computing market growing so rapidly, there has been a rise of cloud computing management vendors like RightScale, Enomaly, Scalr, Elastra, and others.  These companies have developed solutions that work very well for the early adopters in the cloud computing market, but I wonder if these tools are getting an equally warm reception from the enterprise.  After all, most medium to large IT shops use systems management suites like Opsware, Tivoli, etc., and don’t have expertise in this new breed of tools.  Not to mention, most decent-sized IT shops don’t want yet another management tool because they probably already have three too many!

In the long run, I believe we are going to see hybrid datacenters where enterprise customers are able to run workloads simultaneously in local and cloud datacenters and manage them in a seamless way.  The question is: How will these hybrid datacenters be managed?

I think there are three possible outcomes:

  1. Systems management vendors add cloud functionality to existing tools: Opsware, BMC, and the rest of the usual systems management suspects will make their products cloud-aware, just as they have made them virtualization-aware.
  2. Cloud management vendors add traditional systems management functionality to their toolkit: This would be a very tough nut to crack, but the emerging cloud vendors could take a stab at developing more traditional IT management functionality, allowing them to handle management of both traditional and cloud-hosted datacenters.
  3. Proxy or glueware: Startups build tools that allow both the cloud management vendors and the systems management vendors to interact with non-native environments.  In effect, they glue together these two separate worlds.  These glueware startups would almost act as proxies, allowing, say, a systems management tool to manage cloud hosts seamlessly and with minimal retraining of staff, while letting a cloud management tool manage legacy IT infrastructure.

I think we’ve seen #1 come to pass in the virtualization market to date, but in the cloud management space I think #3 is the more likely outcome.  Supporting a hybrid datacenter configuration would require major re-architecture of most enterprise IT management products, so some sort of proxy or glueware would enable much faster and easier integration.
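To make outcome #3 concrete, here is a rough sketch of what such glueware might look like: a thin adapter that presents cloud instances to a legacy systems management tool through whatever host-inventory interface that tool already expects. The class and method names are hypothetical; the boto calls are real EC2 API wrappers:

```python
import boto.ec2

class CloudHostAdapter:
    """Presents EC2 instances as ordinary managed hosts so a legacy
    systems management suite can inventory them without a rewrite."""

    def __init__(self, region='us-east-1'):
        # Credentials are assumed to be configured in the environment.
        self.conn = boto.ec2.connect_to_region(region)

    def list_hosts(self):
        # Translate cloud-native objects into the flat host records
        # a traditional agent-based management tool expects to discover.
        hosts = []
        for reservation in self.conn.get_all_instances():
            for inst in reservation.instances:
                hosts.append({
                    'hostname': inst.public_dns_name or inst.id,
                    'state': inst.state,
                    'platform': 'ec2',
                })
        return hosts
```

The legacy suite polls the adapter exactly as it would poll any other discovery source, so the staff retraining cost stays near zero, which is the whole appeal of the proxy approach.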

Your thoughts?


Written by John Gannon

March 13, 2009 at 5:17 pm


Why cloud-based load testing is a killer app



Cloud hosting providers with any degree of scale should have an application load testing product offering.  In fact, I think it has the potential to be a killer app for cloud hosting companies.

Load testing has traditionally been a painful and inefficient process from both a time and cost perspective, due to challenges around traffic generation and scaling.

There have typically been two options for the traffic generation side of a load test:

  1. Buy a bunch of servers to make into a traffic generation farm, buy some software (e.g. Mercury Interactive) to automate the testing, and tie it together by hand.  This is no fun, no matter whose software you are using.
  2. Pay a load testing managed service provider (e.g. Mercury or Keynote) to provide both the testing server capacity and the construction of the test scripts.  This is less painful than #1, although you are not able to test on demand, and it’s likely that the price you’re paying is inflated by the large fixed-cost infrastructure load testing providers tend to own.

Let’s also look at a customer’s objectives during a load test.

  1. See how much traffic of various types the application as-configured (hardware, software, etc) can handle.
  2. See how much more traffic the application can handle if configuration changes are made or additional infrastructure components (i.e. more servers, bigger servers, etc) are added.

#2 is painful in a non-cloud world because a customer has to purchase or rent servers just to see how their site responds to scaling.  And it’s even more painful if you work with a hosting provider whose business model depends on long-term usage of server capacity to cover a large fixed cost base.

Here’s where the cloud comes in…

For a customer whose app is hosted with a cloud hosting company, scaling is not an issue, as long as that customer is willing to take the variable-cost hit of spinning up more servers to see how their app handles higher loads.  No need to reconfigure existing servers or purchase new hardware!  A customer with a traditional hosting environment but the capability to cloudburst could see similar benefits.

Also, a cloud-based load testing platform would by definition be available on demand, and it would be less expensive to operate than a platform that lives on physical servers, since the fixed cost base would be drastically reduced.
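If I were sketching the burst-capacity piece of such a platform, it might look roughly like this using boto, a Python interface to EC2; the AMI ID and the run_load_test driver are hypothetical placeholders:

```python
import time
import boto.ec2

# Spin up a temporary farm of traffic generators, run the test, tear it down.
conn = boto.ec2.connect_to_region('us-east-1')
reservation = conn.run_instances('ami-12345678',  # hypothetical load-generator image
                                 min_count=20, max_count=20,
                                 instance_type='m1.small')
instances = reservation.instances

# Wait until every generator is running before starting the test.
while any(i.update() != 'running' for i in instances):
    time.sleep(15)

try:
    run_load_test([i.public_dns_name for i in instances])  # hypothetical test driver
finally:
    # The farm exists only for the duration of the test -- this is the
    # variable-cost hit, versus owning a fixed traffic-generation farm.
    conn.terminate_instances(instance_ids=[i.id for i in instances])
```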

And here’s the best part: if I’m a cloud hosting provider who offers a load testing service (or resells one), I’m getting paid for the extra capacity the application owner uses during the load test, AND, if the load testing platform itself lives on my cloud, I’m getting paid for its capacity usage during the test as well!

Seems like a beautiful thing to me.  App owners get cheaper, more flexible load testing capabilities, load testing companies get paid, and the cloud hosters get paid twice!


Written by John Gannon

March 4, 2009 at 11:55 am

Layer 1: An important piece of the cloud computing puzzle?



(Disclosure: My employer is an investor in OnPath Technologies, a Layer 1 switching company)

Due to the broad adoption of virtualization technology in the server, network, and storage sectors, it is easier than ever for IT admins to apply business policy to the infrastructure and automate any number of menial tasks that used to suck up man-hours.  Servers, storage, and even networking are treated as logical data objects and can be manipulated with ease.

However, to realize the industry vision of the hands-off, “lights out” cloud computing datacenter, all parts of the IT infrastructure will need automation capabilities, including the physical network.  Just as network engineers don’t have to be physically present in the datacenter to modify routing tables or switch configurations, and server admins don’t need to sit at a server’s keyboard to manage it, datacenter ops teams shouldn’t have to send someone into the datacenter, the wiring closet, or under the raised floor to make physical cabling adds, moves, and changes.
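To make the idea concrete, here is a hypothetical sketch of what a programmable Layer 1 cross-connect could expose; this is not OnPath’s (or any vendor’s) actual API, just the shape such automation might take:

```python
class Layer1Switch:
    """Hypothetical API for a physical-layer switch: cabling adds, moves,
    and changes become software calls instead of wiring-closet visits."""

    def __init__(self):
        # Map of port -> port: the cross-connect table in software form.
        self.cross_connects = {}

    def connect(self, port_a, port_b):
        """The software equivalent of patching a cable between two ports."""
        self.cross_connects[port_a] = port_b
        self.cross_connects[port_b] = port_a

    def disconnect(self, port):
        """Pull the virtual patch cable."""
        peer = self.cross_connects.pop(port, None)
        if peer is not None:
            self.cross_connects.pop(peer, None)

# A re-cabling "move" that used to mean a ticket and a datacenter visit:
switch = Layer1Switch()
switch.connect('server42:eth0', 'san-fabric:port7')
switch.disconnect('server42:eth0')
```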

Seems logical to me, but I’m not an IT buyer so my vote doesn’t really count :) Wondering what folks out there are thinking (or not thinking) about this topic of Layer 1 automation…


Written by John Gannon

February 11, 2009 at 4:46 pm

2009 Predictions



Every blogger is doing a 2009 predictions post at this time of year, so why shouldn’t I?  In fact, this is the only time I’ll be able to say my predictions for the previous year were perfect (because I was zero-for-zero), so I’m going to enjoy my unblemished record while I can…

VMware gets acquired:  Whoever owns VMW will own the 21st-century datacenter.  Cisco and Microsoft have enough cash on the balance sheet to foot the bill, but will they?  A Cisco/EMC merger would be pretty interesting, too.

Someone purchases Citrix: Again, Cisco and Microsoft would be logical acquirers.  Cisco is focused on growing the application side of their business and Microsoft can always use more ammo against VMware.

Continued slow growth of cloud computing in the enterprise:  Due to concerns about security, cloud computing adoption in the enterprise will still severely lag that of the SMB/SME markets.

Social media works its way into enterprise IT:  I think there is huge value in leveraging social media to help IT professionals do their jobs better.  New entrants to the enterprise IT market that lack the baggage of legacy product lines will integrate social media into their products and use rich internet application technologies to enable that integration.

M&A frenzy begins in Q2: Right around the middle of 2009 I predict that investors and operating companies will perceive a (near) bottom in the market and will go shopping in earnest for assets and companies.

Feel free to comment on these predictions or add some of your own!


Written by John Gannon

December 17, 2008 at 2:48 pm

Opening the cloud computing kimono


One of the central tenets of cloud hosting (e.g. Amazon EC2) is that application owners no longer need to be concerned with the lower-level underpinnings of the infrastructure.  Theoretically, the app owners can focus on their application code and let the cloud provider handle the configuration and scaling of everything underneath (VM, server, network, storage).  This sounds great, but in practice it’s just not the case, particularly in the enterprise world.

When I was running infrastructure operations at FOXSports.com, we used a managed services provider for a variety of our hosting needs.  They provided certain services on a “cloud” basis, which meant that our servers could access those resources freely, but we had very limited visibility into the infrastructure providing them.  This made it very hard to debug issues, since the MSP exposed only limited monitoring information for the systems behind the service.  I remember fighting very hard to get additional visibility, and ultimately the MSP built some tools to support that need.

Fast forward to 2008…if I am an enterprise thinking about deploying anything remotely business critical to a cloud hosting provider, I will want to know (in gory detail) how the systems and networking supporting the infrastructure are configured and architected.  Why?  Because if something in the infrastructure breaks (and as we know, something always breaks), I am going to need to fix the issue as soon as possible.  With little visibility into the underlying infrastructure, I’m going to have a very hard time isolating and ultimately solving or working around the problem.

Where am I going with this?  For the larger clouds (like Amazon), there is a somewhat limited amount of information publicly available about how the underpinnings of their systems work.  If I’m Amazon, for example, I probably have a great deal of know-how and trade secrets related to my cloud infrastructure that I’d like to protect.

The conundrum (I think) is that to see true success in the enterprise, the cloud providers will need to reveal a good deal of this information to potential customers in order to get them comfortable enough to move significant workloads and applications to their cloud.  Is it worth it for the cloud provider to give up some of that competitive advantage in exchange for more enterprise traction?


Written by John Gannon

October 31, 2008 at 6:30 pm

Three questions that need answers before enterprises will go to the Cloud


Amazon has been making a variety of moves to make their AWS platform more palatable for enterprise hosting deployments.  New features like Oracle support and block-level storage will cause enterprises to consider Amazon’s cloud in their list of potential vendors, but I think it will be hard for Amazon to get on the enterprise ‘short list’ of hosting vendors any time soon.  This is because it takes enterprise customers a long time to get comfortable enough to try a new technology, and still longer to move from pilots and proof-of-concept exercises to full-blown production deployment.

While at VMware, I often discussed customer concerns related to migrating to our technology platform.  These concerns typically boiled down to three key questions.  I believe these are the same questions that enterprise customers will ask before going into the cloud…

1.  “Will my current and future applications function, perform well, and scale in the cloud?”

Applications must be 100% guaranteed to work in the cloud, and to perform just as well as they would in a traditional datacenter environment.  Regardless of the savings or additional efficiencies gained from cloud services, it is a non-starter if a customer’s applications lose functionality or do not perform as well in the cloud as in the datacenter.  No amount of business case development or ROI/TCO analysis will get past this objection.

2.  “Which applications should I move to the cloud?”

Enterprises will need to identify which systems and processes can safely be moved to the cloud.  They will likely take a phased approach, where non-critical apps are moved first and, after a trial period, more important business applications are migrated.  A proper assessment process will be needed so enterprise customers don’t stumble in their move to the cloud.  Just as server consolidation initiatives can grind to a halt if critical systems are migrated without proper planning, enterprises won’t move to cloud services if they encounter problems with early deployments.  New technology always gets blamed first, even if it wasn’t the root cause of a problem.  So, if you’re championing cloud hosting at your company, make sure you plan meticulously and minimize any business impact.

There are software tools like CIRBA and VMware Capacity Planner that help customers figure out which applications will play nicely in virtualized environments, but to my knowledge nothing of the sort exists in the cloud world.  This seems like an area that a startup might be able to address since the aforementioned vendors are focused squarely on virtualization as a software component within an infrastructure versus an outsourced cloud service.

3.  “Who can help me get my people, processes, and technologies to be ‘cloud ready’?”

Any time a new, disruptive technology comes on the market, there is huge disparity between the supply of skilled technologists with expertise in that technology and the demand for those technologists.  To fill the gap, enterprises tend to hire subject matter experts (in the form of consultants).  Although cloud computing certainly makes aspects of managing IT infrastructure easier, there is still no substitute for experts who can smoothly guide an enterprise through the migration process.  It will be interesting to see if the major consulting shops start to build practices around outsourced cloud services to address this need.

There is also an issue of “operational readiness”, a term coined by VMware to describe the challenges inherent in forcing traditionally siloed IT departments to work together in the management of a shared infrastructure where responsibilities overlap.  For example, is there a separate ‘cloud team’ or center of excellence within an IT department that is responsible for the performance of the cloud environment, or does that responsibility fall to the server team?  What about storage?  Are the SAN guys going to be expected to debug problems that might be related to Amazon S3?  And don’t get me (or Hoff) started on security…

These questions were often asked by my customers at VMware, and I think the cloud providers are going to face the same queries as they push further into the enterprise.


Written by John Gannon

September 29, 2008 at 5:20 am
