Yet Another (ex-)VC Blog

Cloud computing, startups, and venture capital

Posts Tagged ‘Cloud computing’

Datadog and DevOps

Based on what I see in the video embedded below, the Datadog team has developed a dashboard that development and infrastructure operations teams can use jointly to get a holistic view of everything going on in the stack. This challenge (the interaction between application developers and infrastructure operations, or rather, the lack of it) is very real, and I see it every day in my day job. So I’m always glad to see a startup (especially a NYC-based one) building tooling to help bridge this gap.

It is hard to tell from the video how much of the product is focused on analysis versus aggregation and presentation of statistics. Certainly there is value in correlating various statistics from different web services (e.g., money spent on AWS correlated with a spike in site traffic captured by Google Analytics), but the greater value would be in crunching the data and making predictions (or taking pre-emptive action). Maybe the idea is that the dashboard is provided for free, with analytics available for a fee?
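To make the correlation idea concrete, here is a minimal sketch (not Datadog’s actual product; the data and column names are made up) of the kind of cross-service correlation such a dashboard might compute:

```python
# Hypothetical sketch (not Datadog's actual product): correlate daily AWS
# spend with daily site traffic. Data and column names are made up.
import pandas as pd

metrics = pd.DataFrame(
    {
        "aws_cost_usd": [310, 295, 340, 510, 495, 330, 315],
        "site_visits": [12000, 11500, 13100, 25400, 24800, 13000, 12400],
    },
    index=pd.date_range("2010-10-10", periods=7),
)

# Pearson correlation across the two series: a value near 1.0 suggests
# spend is tracking traffic, as you'd expect on an elastic IaaS platform.
print(metrics["aws_cost_usd"].corr(metrics["site_visits"]))
```

The actionable case is when the correlation breaks down, e.g. spend spikes while traffic stays flat.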

Another question that came to mind while watching the video was: “Who is the customer?” Is it the application developer, the system administrator, or both? My gut tells me infrastructure folks would be the ones to get excited about this product, as they generally spend lots of time looking at various metrics, reports, and charts, trying to correlate behaviors between different areas of the infrastructure. Anything that can help them pick up on correlations or trends is of value. However, application developers who run their applications on an IaaS platform might appreciate this data as well, since they may not have system administration staff to support them.

This dual focus leads to some questions around the customer acquisition strategy and positioning.  Even in a world where DevOps is beginning to see traction, there is still a line of demarcation between application developers and infrastructure operations people.  Each group has a distinct set of needs, and a product that tries to cater to both groups might not catch on with either one.   I’m sure Datadog will address this question in the coming months as they move from Alpha into wider distribution.


Written by John Gannon

October 17, 2010 at 11:12 am

VMware is definitely going big, not going home – Google PaaS play

We are committed to making Spring the best language for cloud applications, even if that cloud is not based on VMware vSphere.

via VMware: The Console: Google and VMware’s “Open PaaS” Strategy.

Written by John Gannon

May 19, 2010 at 12:56 pm

Seeking panelists for Virtualization and Private Cloud session

I’m going to be moderating a panel for the virtual conference Cloud Lab ’10 (formerly known as Cloud Slam) the week of April 19. The abstract of the talk is below. If you would like to be a panelist, please leave me a comment and let me know who you are and why you want to be on the panel. As you can see in the abstract, I’m hoping to get a mix of investors, vendors, and end users. Keep in mind this is a virtual conference, so everything will be done via teleconference and WebEx.

The path from Virtualization 1.0 to Private Cloud: Risks, challenges, and opportunities

Server virtualization has gained widespread acceptance, with most IT organizations obtaining a substantial 1st wave of savings and operational efficiencies. However, obtaining the next wave of savings and business agility is predicated on building internal (private) cloud capabilities. To be successfully deployed, these private cloud environments must be equipped with a new breed of automation tools and management processes that can scale without increasing operational cost and complexity.

In this panel, a group of experienced end users, vendors, and investors discuss the risks, challenges, and opportunities associated with moving from Virtualization 1.0 to Private Clouds. Attendees will walk away with actionable recommendations that they can apply in their move to the Private Cloud.

Written by John Gannon

March 11, 2010 at 11:06 pm

Is there a business in Physical to Cloud (P2C) conversions?

I have been thinking about whether there is a place for some cloud computing vendors to come on the scene and handle what I call the ‘P2C’ conversion process: taking a physical machine and converting it to an image that can run on a cloud. If we look at the virtualization market, clearly P2V (physical-to-virtual) was an enabling technology that helped people migrate existing physical hosts into virtual machines without having to completely rebuild systems from scratch. VMware had a product in the space (and still does), and there were also some popular products from 3rd parties like PlateSpin (who had a nice exit to Novell for ~$200MM).

Do we have the potential for the same story in the cloud?

Well, what’s the same this time around? You have huge existing deployments of physical and virtual machines, some of which IT managers would like to move to the cloud, just as you had IT managers who wanted to consolidate physical hosts by converting them to VMs.

But what’s different? As I understand it, most cloud deployments are Linux-based, and you’ve got a set of tools (Puppet, Chef, and the like) that let administrators deploy cookie-cutter system templates very quickly. So the cost of migrating an existing system may be much higher than simply rebuilding through one of these tools and migrating the data.
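For readers who haven’t used these tools: the core idea is declarative, idempotent configuration; you describe the end state and the tool converges the machine to it. Puppet and Chef use Ruby-based DSLs, but here is a toy sketch of the model in Python (package names and paths are illustrative):

```python
# Toy illustration of the declarative, idempotent model behind tools like
# Puppet and Chef: each "resource" checks current state before acting, so
# re-running the template is safe. Package names/paths are illustrative.
import os
import subprocess

def ensure_package(name):
    """Install a package only if it isn't already present (Debian-style)."""
    present = subprocess.run(["dpkg", "-s", name],
                             capture_output=True).returncode == 0
    if not present:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_file(path, contents):
    """Write a file only when it is missing or differs from the target."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == contents:
                return
    with open(path, "w") as f:
        f.write(contents)

# A "cookie-cutter" LAMP template is then just a list of desired states,
# and running it twice is harmless.
for pkg in ["apache2", "mysql-server", "php5"]:
    ensure_package(pkg)
ensure_file("/etc/motd", "Provisioned from template\n")
```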

Maybe small environments are the sweet spot for a P2C product. They are unlikely to have invested time and effort in deploying a configuration management system like Chef or Puppet, but may still want to move their physical systems into a cloud environment. A consultant I know was recently asked to do exactly this for a customer’s small LAMP infrastructure. This is just one data point, but I have a hard time believing there wouldn’t be other SMBs willing to pay for this kind of service.
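Mechanically, the heart of a bare-bones P2C conversion is imaging the disk and converting it into a format the target cloud can boot. A hedged sketch of those steps (paths and filenames are placeholders; a real tool would also handle driver injection, fstab and network fixups, and incremental sync):

```python
# Bare-bones sketch of the imaging/conversion steps of a P2C migration.
# Paths are placeholders; a production tool would also inject cloud-friendly
# kernels/drivers, fix fstab and network config, and handle incremental sync.
import subprocess

SOURCE_DISK = "/dev/sda"      # physical machine's disk (placeholder)
RAW_IMAGE = "server.raw"
CLOUD_IMAGE = "server.qcow2"  # target format depends on the cloud

# 1. Capture the physical disk as a raw image (run from a rescue environment).
subprocess.run(["dd", f"if={SOURCE_DISK}", f"of={RAW_IMAGE}", "bs=4M"],
               check=True)

# 2. Convert the raw image to a cloud-bootable format with qemu-img.
subprocess.run(["qemu-img", "convert", "-f", "raw", "-O", "qcow2",
                RAW_IMAGE, CLOUD_IMAGE], check=True)

# 3. Upload to the target cloud (provider-specific image-import API).
```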

Is there anything like this out there today?  Agree or disagree with my thesis?   Is there a business here?

Written by John Gannon

January 13, 2010 at 10:04 pm

A Cloud Computing Wish List for 2010 « John Savageau’s Technology Innovation Topics

Content delivery networks/CDNs want to provide end users the best possible performance and quality – often delivering high volume video or data files. Traditionally CDNs build large storage arrays and processing systems within data centers, preferably adjacent to either a carrier hotel meet-me-room or Internet Exchange Point/IXP. Sometimes supported by bundles of 10Gigabit ports connecting their storage to networks and the IXP.

Lots of recent discussion on topics such as Fiber Channel over Ethernet/FCoE and Fiber Channel over IP/FCoIP. Not good enough. I want the SSD manufacturers and the switch manufacturers to produce an SSD card with a form factor that fits into a slot on existing Layer 2 switches. I want a Petabyte of storage directly connected to the switch backplane allowing unlimited data transfer rates from the storage card to network ports. Now a cloud storage provider does not have to buy 50 cabinets packed with SAN/NAS systems in the public data center, only slots in the switch.

via A Cloud Computing Wish List for 2010 « John Savageau’s Technology Innovation Topics.

Written by John Gannon

December 17, 2009 at 11:09 pm

Using Metrics to Vanquish the Fail Whale (Twitter)

(Image: Office Fail Whale, by CC Chapman via Flickr)

“You really want to instrument everything you have,” Adams told an audience of 700 operations professionals. “The best thing you can do is have more information about your system. We’ve built a process around using these metrics to make decisions. We use science. The way we find the weakest point in our infrastructure is by collecting metrics and making graphs out of them.”

via Using Metrics to Vanquish the Fail Whale « Data Center Knowledge.

This makes high-volume datacenter ops sound fairly straightforward: as long as you have volumes of data and a well-thought-out process, you can make informed decisions. I certainly practiced this methodology when I was involved in datacenter operations, but I’m not so sure it stays practical as computing environments become harder to debug through multiple layers of abstraction and virtualization.
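That said, the basic discipline is cheap to adopt. Here is a minimal sketch of metrics-driven alerting (the metric and the 3-sigma threshold are illustrative choices, not Twitter’s actual process):

```python
# Minimal sketch of metrics-driven decision making: sample a metric,
# keep a history, and flag readings that deviate sharply from the norm.
# The 3-sigma threshold is an illustrative choice, not a universal rule.
import statistics
import psutil  # third-party; `pip install psutil`

history = []
for _ in range(60):  # one sample per second for a minute
    history.append(psutil.cpu_percent(interval=1))

mean = statistics.mean(history)
stdev = statistics.stdev(history)

latest = psutil.cpu_percent(interval=1)
if stdev and abs(latest - mean) > 3 * stdev:
    print(f"anomaly: cpu={latest:.1f}% vs mean={mean:.1f}%")
```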

Thoughts?

Written by John Gannon

June 23, 2009 at 8:55 pm

Cloud supply chain? Quite interesting…

I think the cloud computing industry will borrow some of the best practices of previous generations of tech partnerships to solve today’s revenue sharing challenge. The rapidly evolving cloud environment is creating a new set of supply-chain relationships which will be governed by the same partnering principles of the past, but with a different set of revenue tracking requirements and economic parameters. This means new tools and techniques will have to be employed to automate the monitoring and billing processes so they are cost-effective in this price-competitive market.

via Where is the Revenue Stream in Cloud Computing? – ebizQ Forum.
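The “new tools” point is worth making concrete: once usage is metered, the revenue-sharing math itself is simple bookkeeping. A toy sketch (all parties, rates, and usage figures are made up):

```python
# Toy revenue-share calculation for a cloud "supply chain": metered usage
# is billed to the customer and split across partners by fixed percentages.
# All parties, rates, and usage figures are hypothetical.
RATE_PER_GB_HOUR = 0.05
REVENUE_SPLIT = {"reseller": 0.20, "platform": 0.30, "infrastructure": 0.50}

usage_records = [  # (customer, GB-hours consumed this period)
    ("acme", 1200),
    ("globex", 800),
]

total = sum(gb_hours * RATE_PER_GB_HOUR for _, gb_hours in usage_records)
for party, share in REVENUE_SPLIT.items():
    print(f"{party}: ${total * share:.2f}")
```

The hard part, as the quote suggests, is doing this monitoring and billing cheaply enough to survive in a price-competitive market.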

Written by John Gannon

June 16, 2009 at 1:12 pm
