Yet Another (ex-)VC Blog

Cloud computing, startups, and venture capital

Archive for June 2009

Amen!


Using techniques perfected by consumer web companies, a generation of enterprise IT companies will emerge that deliver a vastly superior user experience to IT professionals and employees alike. Improvements in ease of use will liberate employees from terrible software, server-side software development will increase the pace of product improvements and lower support costs, and the self-service model will change the economics of enterprise IT sales forever. Google’s success is as much about AdWords offering advertising services to the SMB segment as it is about serving consumers search services.

via Self-Service Nation: Why Targeting Small Business Is Good Business.


Written by John Gannon

June 29, 2009 at 10:26 am


Vertical integration as a cloud competitive weapon



When will the cloud providers begin acquiring product companies in order to keep the technology out of the hands of their competition?

Is that a strategy worth pursuing?

Many clouds are using open source technology to power their solutions, but I assume many also rely on commercial products.  Could one cloud provider get the upper hand by bringing a leading-edge commercial product in house, integrating it tightly into its own cloud, and then end-of-lifing the standalone commercial version?

If you believe the cloud game will be won by the firms best able to control and continually shrink OPEX, then maybe it is worthwhile for the big cloud players to bid against the typical acquirers of infrastructure technology (MSFT, Symantec, etc.) in order to keep that technology out of the hands of the other clouds.

Are there any commercial products that are so clearly differentiated and powerful that they’d be worth a cloud provider paying up to acquire?


Written by John Gannon

June 26, 2009 at 4:23 pm

Jurvetson on Moore’s Law and innovation


Notice that the pace of innovation is exogenous to the economy. The Great Depression and the World Wars and various recessions do not introduce a meaningful change in the long-term trajectory of Moore’s Law. Certainly, the adoption rates, revenue, profits and economic fates of the computer companies behind the various dots on the graph may go through wild oscillations, but the long-term trend emerges nevertheless.

via Transcending Moore’s Law on Flickr – Photo Sharing!

(Thanks to Larry Cheng for pointing out this post in his recent blog entry.)

So don’t fret, folks… everything’s gonna be alright.

Written by John Gannon

June 26, 2009 at 12:19 pm


Technologies I wish we had in 2001



The best and worst thing about working in the technology industry is that you constantly build custom solutions to problems, sometimes quite expensively, and then years later see the same problems get solved through affordable (or free) off-the-shelf products.

Recently I’ve been thinking about solutions that we could have really used during my stint at FOXSports.com (2001-2003), but that didn’t exist at the time.

Amazon Web Services (EC2 and S3): It was exciting to support interactive polls during major FOX broadcasts like the Super Bowl and the World Series, but it was a huge challenge for the technology organization, particularly in the areas of capacity planning and scaling.  We literally had our hosting provider bring in additional servers for these events, and then decommission them after the events ended.  If we’d had EC2, we could have scaled much more flexibly.  We also had loads of static content stored in our Oracle database and served up by our web servers.  S3 would have let us serve this content more effectively while reducing our reliance on a homegrown caching system.
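To make the capacity-planning pain concrete, here’s a toy calculation of the kind that EC2-style elasticity makes cheap to act on: size the fleet for a forecast event peak, then shrink it back afterward. This is a sketch in Python for illustration only; all the numbers are hypothetical, not actual FOXSports.com figures.

```python
import math

def instances_needed(peak_rps, rps_per_instance, headroom=0.25):
    """Instances required to serve a forecast peak request rate,
    with some spare headroom on top (default 25%)."""
    if peak_rps <= 0:
        return 0
    return math.ceil(peak_rps * (1 + headroom) / rps_per_instance)

# Baseline traffic vs. a Super Bowl-style spike (hypothetical numbers):
baseline = instances_needed(peak_rps=500, rps_per_instance=100)     # 7
event    = instances_needed(peak_rps=20000, rps_per_instance=100)   # 250
```

With an API-driven cloud, the gap between those two numbers is a launch call before the broadcast and a terminate call after it, rather than a hosting provider physically racking and decommissioning servers.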

Cloud integration (a la Boomi, CastIron Systems): As a sports website, we had a whole bunch of data and content feeds that we’d get from third parties.  Each feed was a custom integration using different protocols and authentication methods, and each required specialized operations support.   If solutions like Boomi or CastIron had been available to us, we could have saved ourselves and our partners a whole lot of development time, and the end result would have been a more operationally supportable set of systems, with more flexibility to onboard new business partners quickly.
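The shape of what an integration tool standardizes can be sketched as an adapter layer: each partner feed gets a small adapter that owns its own protocol and auth quirks and emits records in one normalized shape. This is a minimal illustrative sketch; the class names and feed formats are hypothetical, and the raw strings stand in for FTP pulls or HTTP posts.

```python
class FeedAdapter:
    """Base class: each subclass owns one partner's protocol details."""
    def fetch(self):
        raise NotImplementedError

class CsvScoresAdapter(FeedAdapter):
    def __init__(self, raw_csv):
        self.raw_csv = raw_csv  # stand-in for, say, an FTP pull
    def fetch(self):
        rows = [line.split(",") for line in self.raw_csv.strip().splitlines()]
        return [{"team": t, "score": int(s)} for t, s in rows]

class PipeScoresAdapter(FeedAdapter):
    def __init__(self, raw_text):
        self.raw_text = raw_text  # stand-in for a partner's HTTP push
    def fetch(self):
        rows = [line.split("|") for line in self.raw_text.strip().splitlines()]
        return [{"team": t, "score": int(s)} for t, s in rows]

def ingest(adapters):
    """Downstream code sees one record shape regardless of source."""
    records = []
    for adapter in adapters:
        records.extend(adapter.fetch())
    return records

feeds = [CsvScoresAdapter("NYG,21\nNE,17"), PipeScoresAdapter("LAD|4\nNYY|2")]
print(ingest(feeds))
```

Onboarding a new partner then means writing one small adapter instead of a whole bespoke pipeline, which is essentially the value proposition of the integration vendors named above.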

Application caching layer (e.g. memcached): We built our own caching platform within our app so that we wouldn’t hit our Oracle database so often with reads.  The cache logic lived in our app, and the cache storage was an NFS shared volume sitting on a NetApp NAS device.  If we were building the site today, we could leverage memcached (or one of its commercial derivatives) and save a bunch of dev, testing, and debugging time.
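The pattern our homegrown layer implemented is what’s now called cache-aside: check the cache first, and only on a miss read the database and populate the cache. Here’s a sketch with a plain dict standing in for the memcached cluster and a function standing in for an expensive Oracle read; in production you’d use a real memcached client instead.

```python
cache = {}           # stand-in for the memcached cluster
db_reads = {"n": 0}  # counts how often we actually hit the database

def query_database(key):
    db_reads["n"] += 1
    return f"row-for-{key}"  # stand-in for an expensive Oracle read

def get(key):
    if key in cache:             # cache hit: skip the database entirely
        return cache[key]
    value = query_database(key)  # cache miss: read through...
    cache[key] = value           # ...then populate the cache
    return value

get("standings")  # miss -> 1 database read
get("standings")  # hit  -> still only 1 database read
```

The point of adopting memcached over NFS-backed storage isn’t just saved development time: the cache lives in memory across a shared pool of machines, so hits never touch a disk at all.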

Google Analytics: We spent a ton of money on web analytics solutions back in the day.  Google Analytics would have given us much of the same functionality, for free.  Enough said :)

All of these solutions would have addressed big pain points for our tech team, and consequently for our business as a whole.

Would love to hear any of your war stories related to this topic in the comments.


Written by John Gannon

June 24, 2009 at 11:39 am

Using Metrics to Vanquish the Fail Whale (Twitter)



“You really want to instrument everything you have,” Adams told an audience of 700 operations professionals. “The best thing you can do is have more information about your system. We’ve built a process around using these metrics to make decisions. We use science. The way we find the weakest point in our infrastructure is by collecting metrics and making graphs out of them.”

via Using Metrics to Vanquish the Fail Whale « Data Center Knowledge.

This makes high-volume datacenter ops sound fairly straightforward: as long as you have volumes of data and a well-thought-out process, you can make informed decisions.  I certainly practiced this methodology when I was involved in datacenter operations, but I’m not so sure it remains practical as computing environments become harder to debug through multiple layers of abstraction and virtualization.
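The quoted approach — instrument everything, then let the graphs point at the weakest link — can be sketched in miniature: record a latency sample per operation, then rank operations by observed slowness. The operation names and timings below are fabricated for illustration.

```python
from collections import defaultdict

metrics = defaultdict(list)  # operation name -> latency samples (ms)

def record(op, latency_ms):
    """Instrumentation hook: every timed operation reports here."""
    metrics[op].append(latency_ms)

def slowest(n=1):
    """Operations ranked by average latency -- the graph-worthy signal."""
    avgs = {op: sum(xs) / len(xs) for op, xs in metrics.items()}
    return sorted(avgs, key=avgs.get, reverse=True)[:n]

record("db.query", 120)
record("db.query", 180)
record("cache.get", 2)
record("render", 35)
print(slowest(1))  # -> ['db.query']
```

My skepticism above amounts to this: the ranking is only as good as the instrumentation points, and virtualization layers add latency in places where you may not have a `record()` call at all.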

Thoughts?


Written by John Gannon

June 23, 2009 at 8:55 pm

I especially like the last sentence…


Convenience is hugely attractive in organizations because it is easy to defend and easy to approve. You don’t need to call a meeting to try something new, because the convenient option has already been approved. The problem is that convenient approaches rarely break through or generate extraordinary returns.

via Seth’s Blog: Circles of Convenience.

Written by John Gannon

June 18, 2009 at 11:18 am


Cloud supply chain? Quite interesting…


I think the cloud computing industry will borrow some of the best practices of previous generations of tech partnerships to solve today’s revenue sharing challenge. The rapidly evolving cloud environment is creating a new set of supply-chain relationships which will be governed by the same partnering principles of the past, but with a different set of revenue tracking requirements and economic parameters. This means new tools and techniques will have to be employed to automate the monitoring and billing processes so they are cost-effective in this price-competitive market.

via Where is the Revenue Stream in Cloud Computing? – ebizQ Forum.

Written by John Gannon

June 16, 2009 at 1:12 pm

