
Venture Capital Jobs Blog

Curated by John Gannon and Team

Posts Tagged ‘Cloud computing’

Datadog and DevOps


Based on what I see in the video embedded below, the Datadog team has developed a dashboard that development and infrastructure operations teams can use jointly to get a holistic view of everything going on in the stack. This challenge (the interaction between application developers and infrastructure operations, or rather, the lack of it) is very real, and I see it every day in my day job. So I always like to see a startup (especially a NYC-based one) building tooling to help bridge this gap.

It is hard to tell from the video how much of the product is focused on analysis versus aggregation and presentation of statistics. Certainly there is value in correlating statistics from different web services (e.g. money spent on AWS against a spike in site traffic captured by Google Analytics), but the greater value would be in crunching the data and making predictions (or taking preemptive action). Maybe the idea is that the dashboard is provided for free, with analytics available for a fee?
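
To make the correlation point concrete, here is a toy Python sketch with invented numbers (my own illustration, not anything from Datadog's actual product) of the kind of cross-service relationship such a dashboard could surface:

```python
# Toy illustration of cross-service metric correlation. The numbers are
# made up: daily AWS spend (USD) and daily site visits, aligned by day.
aws_spend = [120, 125, 130, 210, 340, 335, 150]
site_visits = [10000, 10400, 10900, 19500, 33000, 31800, 12200]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(aws_spend, site_visits)
print(f"correlation between AWS spend and site traffic: {r:.2f}")
# A value near 1.0 says the spend spike tracks the traffic spike, which is
# exactly the kind of relationship a cross-service dashboard could flag.
```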

Another question that came to mind while watching the video was: “Who is the customer?” Is it the application developer, the system administrator, or both? My gut tells me infrastructure folks would be the ones to get excited about this product, as they generally spend lots of time looking at various metrics, reports, and charts, trying to correlate behaviors between different areas of the infrastructure. Anything that helps them pick up on correlations or trends is of value. However, application developers running their applications on an IaaS platform might appreciate this data as well, since they may not have system administration staff to support them.

This dual focus leads to some questions around the customer acquisition strategy and positioning.  Even in a world where DevOps is beginning to see traction, there is still a line of demarcation between application developers and infrastructure operations people.  Each group has a distinct set of needs, and a product that tries to cater to both groups might not catch on with either one.   I’m sure Datadog will address this question in the coming months as they move from Alpha into wider distribution.


Written by John Gannon

October 17, 2010 at 11:12 am

VMware is definitely going big, not going home – Google PaaS play


We are committed to making Spring the best language for cloud applications, even if that cloud is not based on VMware vSphere.

via VMware: The Console: Google and VMware’s “Open PaaS” Strategy.


Written by John Gannon

May 19, 2010 at 12:56 pm

Seeking panelists for Virtualization and Private Cloud session


I’m going to be moderating a panel for the virtual conference Cloud Lab ’10 (formerly known as Cloud Slam) the week of April 19. The abstract of the talk is below. If you would like to be a panelist, please leave me a comment and let me know who you are and why you want to be on the panel. As you can see in the abstract, I’m hoping to get a mix of investors, vendors, and end users. Keep in mind this is a virtual conference, so everything will be done via teleconference and WebEx.

The path from Virtualization 1.0 to Private Cloud: Risks, challenges, and opportunities

Server virtualization has gained widespread acceptance, with most IT organizations obtaining a substantial 1st wave of savings and operational efficiencies. However, obtaining the next wave of savings and business agility is predicated on building internal (private) cloud capabilities. To be successfully deployed, these private cloud environments must be equipped with a new breed of automation tools and management processes that can scale without increasing operational cost and complexity.

In this panel, a group of experienced end users, vendors, and investors discuss the risks, challenges, and opportunities associated with moving from Virtualization 1.0 to Private Clouds. Attendees will walk away with actionable recommendations that they can apply in their move to the Private Cloud.


Written by John Gannon

March 11, 2010 at 11:06 pm

Is there a business in Physical to Cloud (P2C) conversions?


I have been thinking about whether there is a place for cloud computing vendors to come on the scene and handle what I call the ‘P2C’ conversion process: taking a physical machine and converting it to an image that can run on a cloud. If we look at the virtualization market, P2V (physical to virtual) was clearly an enabling technology that helped people migrate existing physical hosts into virtual machines without having to completely rebuild systems from scratch. VMware had a product in the space (and still does), and there were also some popular products from 3rd parties like PlateSpin (which had a nice exit to Novell for ~$200MM).

Do we have the potential for the same story in the cloud?

Well, what’s the same this time around? You have huge existing deployments of physical machines and virtual machines, some of which IT managers would like to move to the cloud, just as you had IT managers who wanted to consolidate physical hosts by converting them to VMs.

But what’s different? As I understand it, most cloud deployments are Linux based, and you’ve got a series of tools (Puppet, Chef, and the like) that let administrators deploy cookie-cutter system templates very quickly. So the cost of migrating an existing system may be much higher than simply rebuilding through one of these tools and migrating the data.

Maybe small environments are the sweet spot for a P2C product. They are unlikely to have invested time and effort in deploying a configuration management system like Chef or Puppet, but may still want to move their physical systems into a cloud environment. A consultant I know was recently asked to do exactly this for a customer’s small LAMP infrastructure. That is just one data point, but I have a hard time believing there wouldn’t be other SMBs willing to pay for this kind of service.
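
For concreteness, here is a rough sketch of the mechanical core of a manual P2C conversion using stock tools; the paths and upload target are placeholders of my own, and a real conversion would also have to fix up kernels, drivers, and network settings, which is exactly where a product could add value:

```python
# Rough sketch of a manual P2C path, assuming you have already captured a
# raw disk image of the physical host (e.g. with dd). Paths and the upload
# target are placeholders, not any vendor's real API.
import subprocess

RAW_IMAGE = "/images/web01.raw"      # dd image of the physical disk
CLOUD_IMAGE = "/images/web01.qcow2"  # a format many clouds can boot

# 1. Convert the raw dump into a cloud-friendly disk format.
subprocess.run(
    ["qemu-img", "convert", "-f", "raw", "-O", "qcow2", RAW_IMAGE, CLOUD_IMAGE],
    check=True,
)

# 2. Ship the image to the provider (scp stands in for any upload mechanism).
subprocess.run(["scp", CLOUD_IMAGE, "upload@cloud-provider:/incoming/"], check=True)

# What a P2C product would add on top: injecting cloud-compatible kernels
# and drivers, rewriting fstab and network config, and registering the
# image so it can actually be launched.
```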

Is there anything like this out there today?  Agree or disagree with my thesis?   Is there a business here?


Written by John Gannon

January 13, 2010 at 10:04 pm

A Cloud Computing Wish List for 2010 « John Savageau’s Technology Innovation Topics


Content delivery networks/CDNs want to provide end users the best possible performance and quality – often delivering high volume video or data files. Traditionally CDNs build large storage arrays and processing systems within data centers, preferably adjacent to either a carrier hotel meet-me-room or Internet Exchange Point/IXP, sometimes supported by bundles of 10Gigabit ports connecting their storage to networks and the IXP.

Lots of recent discussion on topics such as Fiber Channel over Ethernet/FCoE and Fiber Channel over IP/FCoIP. Not good enough. I want the SSD manufacturers and the switch manufacturers to produce an SSD card with a form factor that fits into a slot on existing Layer 2 switches. I want a Petabyte of storage directly connected to the switch backplane allowing unlimited data transfer rates from the storage card to network ports.

Now a cloud storage provider does not have to buy 50 cabinets packed with SAN/NAS systems in the public data center, only slots in the switch.

via A Cloud Computing Wish List for 2010 « John Savageau’s Technology Innovation Topics.


Written by John Gannon

December 17, 2009 at 11:09 pm

Using Metrics to Vanquish the Fail Whale (Twitter)



“You really want to instrument everything you have,” Adams told an audience of 700 operations professionals. “The best thing you can do is have more information about your system. We’ve built a process around using these metrics to make decisions. We use science. The way we find the weakest point in our infrastructure is by collecting metrics and making graphs out of them.”

via Using Metrics to Vanquish the Fail Whale « Data Center Knowledge.

This makes high-volume datacenter ops sound fairly straightforward: as long as you have volumes of data and a well-thought-out process, you can make informed decisions. I certainly practiced this methodology when I was involved in datacenter operations, but I’m not so sure it stays practical as computing environments become harder to debug through multiple layers of abstraction and virtualization.
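
The collection side of “instrument everything” can be very lightweight. Here is a minimal sketch using a statsd-style line protocol over UDP; the host, port, and metric names are placeholders of my own, not anything from Twitter’s stack:

```python
# Minimal sketch of the "instrument everything" pattern: fire-and-forget
# counters and timers sent over UDP in a statsd-style line protocol.
# Host, port, and metric names below are placeholders.
import socket
import time

METRICS_HOST = ("127.0.0.1", 8125)  # wherever the aggregator listens
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def count(name, value=1):
    """Increment a counter metric."""
    sock.sendto(f"{name}:{value}|c".encode(), METRICS_HOST)

def timed(name, start):
    """Record elapsed milliseconds since `start` as a timer metric."""
    ms = int((time.time() - start) * 1000)
    sock.sendto(f"{name}:{ms}|ms".encode(), METRICS_HOST)

# Instrument a request handler: one counter, one timer, no blocking I/O.
start = time.time()
count("web.requests")
# ... handle the request ...
timed("web.request_latency", start)
```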

Thoughts?


Written by John Gannon

June 23, 2009 at 8:55 pm

Cloud supply chain? Quite interesting…


I think the cloud computing industry will borrow some of the best practices of previous generations of tech partnerships to solve today’s revenue sharing challenge. The rapidly evolving cloud environment is creating a new set of supply-chain relationships which will be governed by the same partnering principles of the past, but with a different set of revenue tracking requirements and economic parameters. This means new tools and techniques will have to be employed to automate the monitoring and billing processes so they are cost-effective in this price-competitive market.

via Where is the Revenue Stream in Cloud Computing? – ebizQ Forum.

Written by John Gannon

June 16, 2009 at 1:12 pm


Say goodbye to bad UI (thanks to the cloud)



I don’t think anyone will argue with me that the typical IT management tool user interface (UI) is just plain awful.  There are several reasons for this, but the most obvious one is that an enterprise software product is loaded with hundreds of features, functions, and configurations, all of which need to be accessible to an end user.

As cloud computing aggregates formerly disparate functions and resources into logical groups, it stands to reason that user interfaces will not need as much complexity as is required today, simply because there is more abstraction of the resources that make up the application (code, servers, network, database, etc). If there are fewer things that are user configurable in a software package, you can simply eliminate numerous menu items, configuration toggles, buttons, etc.

It will be hard for the incumbents to change their UIs to fit this new model. Customers who are used to a certain UI from a vendor or product are going to want it to stay the same (or close to it), since they know how it looks – even if it looks horrible :)

New entrants, however, have a great opportunity to leverage UI and user experience to make their management apps stickier and to appeal to a broader market. For example, the Bluebear guys have built a snazzy, intuitive multi-hypervisor virtualization management tool written in Adobe AIR. If I’m an SMB dipping my toe into the waters of virtualization, maybe a tool like this makes it easier for me to get started. Or take a company like Cloudkick, which is looking to “make the cloud easier to use and accessible to everyone.” That’s a great mission statement, and one that’s certainly achievable given the software development technologies available today.

Maybe (hopefully?) we end up in a world where the idea of sending one’s IT staff to “training” class for several thousand dollars a pop will be a thing of the past.  The IT guys will just be able to sit down and drive whatever software you put in front of them.

The UI will be that good…


Written by John Gannon

May 29, 2009 at 4:49 pm

Cloud app store hype



With the rise of virtual appliances as a software delivery and deployment model, people are beginning to talk about the idea of cloud computing app stores (a la iTunes) where admins can find virtual appliances and then easily deploy them onto a cloud or a server in their data center.

Although this idea sounds cool (“Hey, I can search for apps like I’d search for songs on iTunes and then deploy them almost instantly!”), I’m not convinced it is something that is going to create a dramatic market shift within the enterprise.

Why not?

First let’s think about why customers would be inclined to use a virtual appliance or app store:

  • Easily demo software on their own environment or in the cloud:  The virtual appliance model is clearly a great way for an IT guy or developer to test new apps.  You can try before you buy, and you don’t need to requisition any hardware to test.
  • Pay-per-appliance instead of pay-per-physical server: A pay-per-appliance model makes more sense in the virtual world than does the old licensing model of per-CPU or per-server.
  • Choice: App stores are a place where the big vendors’ marketing muscle won’t matter as much.  Customers will be exposed to new vendors and solutions.

And some reasons why customers wouldn’t want to use virtual appliances or app stores:

  • Lack of Control: Larger companies will have strict standards on what kind of applications and OS’s go into their environment.  Typically, they are going to want control of the hardware, the application, and everything in between.  Using a virtual appliance means giving up much of the control enterprise IT is used to having on the entire stack.
  • Good config management and deployment tools beat virtual appliances any day of the week: The virtual appliance value proposition is eliminated if you’ve got robust config and deployment systems (think Opsware, Puppet, etc) that let you deploy fully customized app stacks (w/custom OS) in minutes; see the sketch after this list. Why sacrifice the ability to customize when you don’t have to?
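
To illustrate that second point, here is a toy Python sketch of the template-driven approach; the hostnames and settings are invented, and real tools like Puppet or Chef obviously do far more than render text files:

```python
# Toy illustration of the config-management value proposition: one template,
# many fully customized instances. Hostnames and settings are made up.
from string import Template

APP_CONFIG = Template(
    "server_name $host\n"
    "db_host $db\n"
    "workers $workers\n"
)

fleet = [
    {"host": "web01", "db": "db01", "workers": 8},
    {"host": "web02", "db": "db01", "workers": 8},
    {"host": "batch01", "db": "db02", "workers": 32},  # customized, not cloned
]

for node in fleet:
    config = APP_CONFIG.substitute(node)
    print(f"would push to {node['host']}:\n{config}")
    # A real tool (Opsware, Puppet, etc) would also converge packages,
    # services, and OS settings, which is the part a frozen appliance
    # can't match.
```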

Why are the appliances and app stores good for vendors?

  • Lead gen: Download of virtual appliance = sales lead for appliance vendor
  • Makes software pre-sales process easier: Instead of putting a sales engineer onsite for a couple of days to help set up a customer demo, give the customer a virtual appliance that they can get up and running in an hour or less.
  • Best practices: The vendor can ensure the configuration of the appliance conforms to best practices. This will keep some folks from shooting themselves in the foot by straying from the manufacturer’s suggested default settings. (Although the ‘suggested’ settings are certainly a really bad idea for certain use cases – a longer story which I won’t dig into here.)
  • Makes cloud more useful:  Helps cloud customers deploy apps faster.
  • Long tail:  Exposes lesser known or upcoming vendors to IT buyers.

Seems to me like virtual appliances are a great sales/marketing tool for vendors large and small, but not something that will fundamentally change how enterprise IT is delivered.  SMBs on the other hand…maybe there is a play there.

Thoughts?


Written by John Gannon

May 21, 2009 at 11:08 am

What the cloud is all about



IBM is fixing the one thing it really needed to get its head around to do well in the cloud – simple pricing models, without needing to call a third party expert to unravel them. Forget technology – the cloud is about simple pricing and billing.

via James Governor’s Monkchips » IBM in the Amazon Cloud: on pricing and billing innovation.


Written by John Gannon

May 11, 2009 at 2:08 pm

