
Yet Another (ex-)VC Blog


Archive for March 2009

Systems management or cloud management?

with 3 comments


With the cloud computing market growing so rapidly, there has been a rise of cloud management vendors like RightScale, Enomaly, Scalr, Elastra, and others.  These companies have developed solutions that work very well for the early adopters in the cloud computing market, but I wonder whether these tools are getting an equally warm reception from the enterprise.  After all, most medium to large IT shops use systems management suites like Opsware, Tivoli, etc., and don’t have expertise in this new breed of tools.  Not to mention, most decent-sized IT shops don’t want yet another management tool because they probably already have three too many!

In the long run, I believe we are going to see hybrid datacenters where enterprise customers are able to run workloads simultaneously in local and cloud datacenters and manage them in a seamless way.  The question is: How will these hybrid datacenters be managed?

I think there are three possible outcomes:

  1. Systems management vendors add cloud functionality to existing tools: Opsware, BMC, and the rest of the usual systems management suspects will make their products cloud-aware, just as they have made them virtualization-aware.
  2. Cloud management vendors add traditional systems management functionality to their toolkit: This would be a very tough nut to crack, but the emerging cloud vendors could take a stab at developing more traditional IT management functionality, allowing them to handle management of both traditional and cloud hosted datacenters.
  3. Proxy or glueware: Startups build tools that allow both the cloud management vendors and the systems management vendors to interact with non-native environments.  In effect, they glue together these two separate worlds.  These glueware startups would almost act as proxies, allowing, say, a systems management tool to manage cloud hosts seamlessly with minimal retraining of staff, while letting a cloud management tool manage legacy IT infrastructure.

I think we’ve seen #1 come to pass in the virtualization market to date, but in the cloud management space I think #3 is the more likely outcome.  Most enterprise IT management products would require major rearchitecting to support a hybrid datacenter configuration, so some sort of proxy or glueware would enable much faster and easier integration.
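
To make #3 a little more concrete, here is a rough sketch of what the glueware idea might look like in code.  Everything below is hypothetical (the class names, the adapters, the stubbed-out calls); the point is simply that a thin adapter layer can present local and cloud hosts through one interface, so the tool driving it does not have to care where a workload actually runs.

    # Hypothetical glueware sketch -- every name and call below is made up
    # for illustration and does not correspond to any real vendor API.
    from abc import ABC, abstractmethod

    class ManagedHost(ABC):
        """A host as the management tool sees it, wherever it actually runs."""

        @abstractmethod
        def provision(self, cpu, ram_gb):
            """Bring up a host and return an identifier for it."""

        @abstractmethod
        def decommission(self, host_id):
            """Tear the host down."""

    class LocalDatacenterAdapter(ManagedHost):
        """Delegates to the legacy systems management suite (stubbed out here)."""

        def provision(self, cpu, ram_gb):
            # In practice this would drive the legacy tool's API or CLI.
            return f"local-host-{cpu}cpu-{ram_gb}gb"

        def decommission(self, host_id):
            print(f"Releasing {host_id} back to the local pool")

    class CloudAdapter(ManagedHost):
        """Delegates to a cloud provider's provisioning API (stubbed out here)."""

        def provision(self, cpu, ram_gb):
            # In practice this would call the cloud provider's API.
            return f"cloud-instance-{cpu}cpu-{ram_gb}gb"

        def decommission(self, host_id):
            print(f"Terminating {host_id} via the cloud API")

    def provision_capacity(adapter, cpu, ram_gb):
        # The calling tool only ever sees the ManagedHost interface, so local
        # and cloud hosts are handled identically -- that is the "glue."
        return adapter.provision(cpu, ram_gb)

    for adapter in (LocalDatacenterAdapter(), CloudAdapter()):
        print("Provisioned", provision_capacity(adapter, cpu=4, ram_gb=8))

A real glueware product would obviously need to handle monitoring, configuration, credentials, and so on, but the adapter shape is the basic idea.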

Your thoughts?


Written by John Gannon

March 13, 2009 at 5:17 pm

Posted in Uncategorized


Debt paydowns versus equity repurchases

leave a comment »


I’ve been reading a bunch of earnings call transcripts this morning and found this bit about debt versus equity repurchases interesting.  Check out the link below for the full context:

Courtesy of Andrew Watts – Oaktree Capital on the ADCT F1Q09 earnings call

If you’d bought back bonds, your tangible book would be over $5 right now and instead it’s at $2.70, which is where the stock is trading. So I think the shareholders should be really more cognizant of that kind of situation. And you guys are certainly not alone, but this repurchase of equity has just been a horrible, horrible plague on our markets speaking from my perspective. I think its something people really need to start to – it’s an Emperor has no clothes situation to me. I think people really need to start to look at that.

But I would really encourage you revisit the whole concept of equity repurchases, particularly at significant premiums to tangible book. They’ve just been huge wealth destroyers and, if you have the opportunity on the debt side, you can really create a lot of value there.

via ADC Telecommunications, Inc. F1Q09 (Qtr End 1/30/09) Earnings Call Transcript — Seeking Alpha.
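
The mechanics behind that comment are worth spelling out.  Retiring debt that trades at a discount to par adds the discount straight to equity, while buying back stock at a premium to tangible book per share dilutes tangible book per share.  A toy calculation (all numbers invented, not ADC’s actual figures) shows the asymmetry:

    # Toy numbers, purely illustrative -- not ADC's actual balance sheet.
    shares = 100.0           # millions of shares outstanding
    tangible_book = 300.0    # $ millions of tangible equity ($3.00/share)
    cash = 50.0              # $ millions available to deploy

    # Option 1: retire bonds trading at 60 cents on the dollar.
    # $50M of cash retires $83.3M of face value; the $33.3M discount is a
    # gain that flows into equity (taxes ignored for simplicity).
    face_retired = cash / 0.60
    gain = face_retired - cash
    tbv_after_debt_paydown = (tangible_book + gain) / shares

    # Option 2: buy back stock at $5 when tangible book is $3.00/share.
    # Equity falls by the full $50M spent; the share count falls by 10M.
    buyback_price = 5.0
    shares_retired = cash / buyback_price
    tbv_after_buyback = (tangible_book - cash) / (shares - shares_retired)

    print(f"Tangible book/share after the debt paydown: ${tbv_after_debt_paydown:.2f}")
    print(f"Tangible book/share after the buyback:      ${tbv_after_buyback:.2f}")

Same $50 million of cash, but one path grows tangible book per share and the other shrinks it, which is essentially the point being made on the call.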


Written by John Gannon

March 12, 2009 at 11:19 am

Posted in Uncategorized


Firefox add-on overload

with 4 comments


Firefox generally runs pretty well for me, but a week ago I started having problems.  Google Reader was behaving weirdly and was basically unusable, and any use of GMail would make my Mac slow down (the CPU would get pegged).

My suspicion (which was confirmed) was that one of the many Firefox add-ons I had installed was causing the problem.  At the time I had quite a few installed, Zemanta and Mashlogic among them.

It turns out that Zemanta had a bug that was affecting GMail users and Mashlogic had a bug that caused the problem with Google Reader.  Both companies were very responsive to my bug report (and in both cases had already started looking at the issue due to other user complaints), but during the process I ended up doing quite a bit of debugging: installing this add-on, uninstalling that one, disabling, re-enabling, etc.

I didn’t mind all of this work that much because I get a good deal of value out of both of the add-ons that were causing the problems.  But I was thinking about how I would have reacted if I were a non-technical user.  My guess is that I would just give up on Firefox and go back to IE (or Safari on the Mac) instead.

Another topic that came to mind was add-on interoperability.  Thankfully Zemanta and Mashlogic were not conflicting with one another, though originally I thought they might have been.  Given that many add-ons these days alter how pages are rendered, there is certainly an opportunity for weird interoperability issues to crop up.  One interesting thing about the Zemanta issue was that the fix happened on the server side: I did have to clear my cache and restart Firefox once Zemanta pushed the fix, but I didn’t need to download a new version of the add-on (thanks, Andraz!).

Are there any standards coming from the Firefox add-on community that will help to address some of the issues I outlined above?

I think the add-on ecosystem is awesome, but I feel like issues like these could hold back the mainstream adoption and acceptance that I imagine most of the companies in the add-on space are after.


Written by John Gannon

March 11, 2009 at 6:07 pm

Posted in Uncategorized


Why cloud-based load testing is a killer app

with 18 comments


Cloud hosting providers with any degree of scale should have an application load testing product offering.  In fact, I think it has the potential to be a killer app for cloud hosting companies.

Load testing has traditionally been a painful and inefficient process from both a time and a cost perspective, due to challenges around traffic generation and scaling.

There have typically been two options for the traffic generation aspect of a load test:

  1. Buy a bunch of servers to make into a traffic generation farm, buy some software (e.g. Mercury Interactive) to automate the testing, and tie it all together by hand (a bare-bones sketch of that kind of hand-rolled script follows this list).  This is no fun, no matter whose software you are using.
  2. Pay a load testing managed service provider (e.g. Mercury or Keynote) to provide both the testing server capacity and the construction of the test scripts.  This is less painful than #1, although you are not able to test on demand, and it’s likely that the price you’re paying is inflated by the large fixed-cost infrastructure these providers tend to own.
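
To give a flavor of the hand-rolled scripting in option #1, here is a bare-bones traffic generator that uses only the Python standard library.  The target URL, worker count, and request counts are placeholders, and a real test harness would ramp load, model user sessions, and classify errors far more carefully:

    # Minimal, illustrative load generator -- the URL and the concurrency
    # numbers are placeholders, not a real test plan.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    TARGET = "http://example.com/"   # placeholder target URL
    WORKERS = 20                     # concurrent "virtual users"
    REQUESTS_PER_WORKER = 50

    def run_worker(_):
        latencies = []
        for _ in range(REQUESTS_PER_WORKER):
            start = time.time()
            try:
                with urlopen(TARGET, timeout=10) as resp:
                    resp.read()
                latencies.append(time.time() - start)
            except Exception:
                latencies.append(None)   # a real harness would classify failures
        return latencies

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = [lat for batch in pool.map(run_worker, range(WORKERS)) for lat in batch]

    ok = [lat for lat in results if lat is not None]
    if ok:
        print(f"{len(ok)}/{len(results)} requests succeeded, "
              f"average latency {sum(ok) / len(ok):.3f}s")
    else:
        print("all requests failed")

The script itself is the easy part; the pain described above is buying, racking, and coordinating enough machines to run something like this at meaningful scale.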

Let’s also look at a customer’s objectives during a load test.

  1. See how much traffic of various types the application as configured (hardware, software, etc.) can handle.
  2. See how much more traffic the application can handle if configuration changes are made or additional infrastructure (e.g. more servers, bigger servers) is added.

#2 is painful in a non-cloud world because a customer has to purchase or rent servers just to see how their site responds to scaling.  And it’s even more painful if you work with a hosting provider whose business model depends on long-term usage of server capacity to cover a large fixed cost base.

Here’s where the cloud comes in…

For a customer whose app is hosted with a cloud hosting company, scaling is not an issue, as long as that customer is willing to take the variable-cost hit of spinning up more servers to see how their app handles higher loads.  No need to reconfigure existing servers or purchase new hardware!  A customer with a traditional hosting environment but the capability to cloudburst could see similar benefits.

Also, a cloud-based load testing platform would by definition be available on demand and less expensive to operate than a platform that lives on physical servers since the fixed cost base would be drastically reduced in the cloud case.
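
As a back-of-the-envelope illustration of that fixed-versus-variable cost point (every number below is invented):

    # Made-up numbers, just to show the shape of the comparison.
    servers = 20                  # machines needed to generate the test load
    server_cost = 3000.0          # $ per dedicated server, amortized over 36 months
    monthly_fixed = servers * server_cost / 36

    hourly_rate = 0.40            # $ per cloud instance-hour (placeholder)
    test_hours_per_month = 16     # say, two full-day test cycles
    monthly_cloud = servers * hourly_rate * test_hours_per_month

    print(f"Dedicated test farm: ~${monthly_fixed:,.0f}/month, testing or not")
    print(f"Cloud test capacity: ~${monthly_cloud:,.0f}/month, only while testing")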

And here’s the best part: if I’m a cloud hosting provider who offers a load testing service (or resells one), I’m getting paid for the extra capacity the application owner spins up during the load test, AND, if the load testing platform itself lives on my cloud, I’m getting paid for its capacity usage during the test as well!
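
A quick sketch of that “paid twice” math, again with invented prices and instance counts:

    # Illustrative only -- the rate, duration, and instance counts are placeholders.
    hourly_rate = 0.40          # $ per instance-hour on the provider's cloud
    test_hours = 4

    extra_app_servers = 30      # capacity the customer adds to absorb the test load
    load_gen_servers = 20       # capacity the load testing platform itself consumes

    app_revenue = extra_app_servers * hourly_rate * test_hours
    loadgen_revenue = load_gen_servers * hourly_rate * test_hours

    print(f"From the customer's extra app capacity:  ${app_revenue:.2f}")
    print(f"From the load testing platform's usage:  ${loadgen_revenue:.2f}")
    print(f"Total for one test run:                  ${app_revenue + loadgen_revenue:.2f}")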

Seems like a beautiful thing to me.  App owners get cheaper, more flexible load testing capabilities, load testing companies get paid, and the cloud hosters get paid twice!


Written by John Gannon

March 4, 2009 at 11:55 am