
Venture Capital Jobs Blog

Curated by John Gannon and Team

Posts Tagged ‘cloud’

IDEs belong in the cloud



A few weeks ago I decided I wanted to play around with Ruby and naively thought that it would be easy to get started. I’d download Eclipse and a few Ruby packages that weren’t shipped with MacOS, and then be off to the races. Fat chance. What I thought was going to be a leisurely evening of writing sample Ruby apps turned out to be a marathon debugging session, wrestling with dependencies, error messages, and anything else that could have possibly gone wrong, all before I had the chance to write a lick of my own code. It reminded me of being a sysadmin, poring over log files and reading cryptic StackOverflow posts trying to figure out what the heck was wrong with my setup. It just should not be this hard for someone moderately technical to start coding up a simple app.

Fortunately there are some companies building cloud-hosted IDEs (definitely should have started there in retrospect), like eXo, Cloud9 and Kodingen, that will take a lot of the “getting started” pain out of your coding experience. Spend less time monkeying around with libraries and dependencies and more time developing cool stuff. I like it.

The other cool thing about cloud-hosted IDEs is that they open up a ton of room for innovation, because 3rd party developers can expose their libraries to those IDEs as web services. Individual users of the IDE could simply select modules and libraries from a service catalog of 3rd party devs, and it would just work, with no hacking or tearing out of one’s hair :) I can also see someone making a play to develop some kind of middleware layer where 3rd party devs could upload their libraries as-is, and that middleware would make the libraries available as a web service to the cloud IDEs. Maybe GitHub is the right home for that middleware-as-a-service?
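To make that idea a bit more concrete, here is a minimal sketch (in Python, with invented names, since the actual interface would be up to the IDE vendor) of how a 3rd party library function could be wrapped as a web service that a cloud IDE calls instead of installing the library locally:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def slugify(text):
        # Stand-in for "some 3rd party library function" a developer wants to reuse.
        return "-".join(text.lower().split())

    class LibraryEndpoint(BaseHTTPRequestHandler):
        def do_POST(self):
            # The IDE posts JSON arguments and gets the library's result back as JSON.
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            body = json.dumps({"result": slugify(payload.get("text", ""))}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LibraryEndpoint).serve_forever()

A catalog entry in the IDE would then just point at a URL like this one rather than at a tarball the user has to build locally.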

I think this is a really interesting space but I’m sure I’m missing some of the nuances. Let me know what I may have overlooked in the comments. I appreciate any and all feedback.


Written by John Gannon

January 1, 2012 at 10:51 pm

Posted in Uncategorized


Netflix’s Advice on Moving to Amazon Web Services


“You have to assume that the hardware and underlying services are ephemeral, unreliable and may be broken or unavailable at any point, and that the other tenants in the multi-tenant public cloud will add random congestion and variance. In reality you always had this problem at scale, even with the most reliable hardware, so cloud ready architecture is about taking the patterns you have to use at large scale, and using them at a smaller scale to leverage the lowest cost infrastructure.”

via Netflix’s Advice on Moving to Amazon Web Services – ReadWriteCloud.
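The quote is abstract, but the kind of pattern Netflix is pointing at is easy to sketch. Here is a minimal illustration (my own Python sketch, not Netflix code) of treating every dependency as something that can fail at any moment and retrying with exponential backoff plus jitter:

    import random
    import time

    def call_with_retries(operation, max_attempts=5, base_delay=0.5):
        # Assume the call can fail for transient reasons; retry a bounded number of times.
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts:
                    raise
                # Exponential backoff with jitter keeps callers from retrying in lockstep
                # and hammering a service that is already struggling.
                time.sleep(base_delay * (2 ** (attempt - 1)) * random.random())

    # Usage: call_with_retries(lambda: fetch_from_flaky_service())  # hypothetical call

The same pattern works at small scale, which is exactly the point of the quote.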

Written by John Gannon

April 1, 2011 at 10:21 am

Posted in Uncategorized


3 reasons why a public cloud-focused MSP makes sense


The high tech industry raves about how “the cloud” will save the world, give us completely automated IT infrastructure, and obviate the need for talented system administrators.  However, I would make a contrarian claim that this move to IaaS public clouds (e.g. EC2) will generate more demand for system administrators within the organizations using these clouds, and maybe even spawn a new breed of MSP that solely focuses on management of public cloud infrastructure on behalf of end customers.  Here’s my reasoning:

  • If we need 3rd party software to manage it…it ain’t there yet – I have no idea what kind of revenue companies like Rightscale generate, but the fact that they and other companies like them have filled a market need for an additional monitoring and management layer on top of public clouds suggests that this cloud stuff is actually harder to operate than the hype would make one believe.
  • People still make decisions, not computers – Monitoring and management tools rely on user-defined thresholds in order to execute pre-determined actions.  CPU of a server goes over 70%?  Spin up another instance.  CPU goes below 30%?  Spin it down.  (A minimal sketch of this kind of rule follows the list.)  No amount of additional monitoring data or additional monitoring tools will change that core limitation.  You still need a person to define those thresholds and a person to define those actions.  And that person needs to be able to troubleshoot when those thresholds and actions don’t produce the desired result.  There is still no substitute for a smart sysadmin who knows their sh*t.
  • IaaS is still new, and there is a significant knowledge gap – Let’s face it…these cloud platforms are still in the initial phases of adoption.  This means there is going to be a big skills gap for many years ahead (look at virtualization: a good VMware admin can demand top dollar because there are still relatively few of them, and ESX has been around for at least 7 years), and that gap means that companies will need to look outside for help.  I also regularly see job postings for developers as well as sysadmins who have EC2 experience.
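As promised above, here is a minimal sketch of the threshold logic from the second bullet. The numbers and actions are placeholders; the point is that a person, not the tool, has to pick them:

    def scaling_decision(cpu_percent, scale_up_at=70, scale_down_at=30):
        # The thresholds are human choices; the tool only executes them.
        if cpu_percent > scale_up_at:
            return "launch another instance"
        if cpu_percent < scale_down_at:
            return "terminate an instance"
        return "do nothing"

    print(scaling_decision(85))  # -> launch another instance
    print(scaling_decision(20))  # -> terminate an instance

When the workload does not behave the way those two numbers assume, you are right back to needing a sysadmin who understands the system.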

Would you work with an MSP to help manage your EC2 deployment?  Are there any well-known companies with large EC2 deployments who have farmed out sysadmin duties to MSPs?

 

Written by John Gannon

November 13, 2010 at 3:18 pm

Posted in Uncategorized


Where is cloud computing dragging IT operations?


Well, if the inevitable outcome of reduced friction is to increase demand for IT resources, someone is going to have to do the capacity planning. In a sense, the impact of cloud computing will be to shift the tasks for IT operations from tactical resource provisioning to strategic resource planning — with an emphasis on achieving the most efficient, lowest cost infrastructure possible. This is a far cry from the “your mess for less” outsourcing that has previously been the outcome of cost focus — this is about creating an automated, immediate search for the lowest cost, most available, most appropriate computing resources needed to fulfill a provisioning request.

via Page 2 Cloud computing will create three revolutions for CIOs – Debate – CIO UK Magazine.
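To picture what “an automated, immediate search for the lowest cost, most appropriate computing resources” might look like in miniature, here is a small Python sketch with a made-up instance catalog; real pricing APIs and placement constraints are obviously far richer:

    CATALOG = [
        {"name": "small",  "cpus": 1, "ram_gb": 2,  "usd_per_hour": 0.05},
        {"name": "medium", "cpus": 2, "ram_gb": 4,  "usd_per_hour": 0.10},
        {"name": "large",  "cpus": 4, "ram_gb": 16, "usd_per_hour": 0.40},
    ]

    def cheapest_fit(min_cpus, min_ram_gb, catalog=CATALOG):
        # Keep only the instance types that satisfy the request, then take the cheapest.
        candidates = [c for c in catalog
                      if c["cpus"] >= min_cpus and c["ram_gb"] >= min_ram_gb]
        return min(candidates, key=lambda c: c["usd_per_hour"]) if candidates else None

    print(cheapest_fit(2, 4))  # -> the "medium" entry

Someone still has to do the strategic work of deciding what “appropriate” means, which is the shift in IT operations the excerpt is describing.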

Written by John Gannon

March 30, 2010 at 11:22 am

Posted in Uncategorized


A Cloud Computing Wish List for 2010 « John Savageau’s Technology Innovation Topics


Content delivery networks/CDNs want to provide end users the best possible performance and quality – often delivering high volume video or data files. Traditionally CDNs build large storage arrays and processing systems within data centers, preferably adjacent to either a carrier hotel meet-me-room or Internet Exchange Point/IXP. Sometimes supported by bundles of 10 Gigabit ports connecting their storage to networks and the IXP. Lots of recent discussion on topics such as Fiber Channel over Ethernet/FCoE and Fiber Channel over IP/FCoIP. Not good enough. I want the SSD manufacturers and the switch manufacturers to produce an SSD card with a form factor that fits into a slot on existing Layer 2 switches. I want a Petabyte of storage directly connected to the switch backplane allowing unlimited data transfer rates from the storage card to network ports. Now a cloud storage provider does not have to buy 50 cabinets packed with SAN/NAS systems in the public data center, only slots in the switch.

via A Cloud Computing Wish List for 2010 « John Savageau’s Technology Innovation Topics.


Written by John Gannon

December 17, 2009 at 11:09 pm

Technologies I wish we had in 2001

leave a comment »


The best and worst thing about working in the technology industry is that you constantly build custom solutions to problems, sometimes quite expensively, and then years later see the same problems get solved through affordable (or free) off-the-shelf products.

Recently I’ve been thinking about solutions that we could have really used during my stint at FOXSports.com, but didn’t exist at the time (2001-2003).

Amazon Web Services (EC2 and S3): It was exciting to support interactive polls during major FOX broadcasts like the Super Bowl and the World Series, but it was a huge challenge for the technology organization, particularly in the areas of capacity planning and scaling.  We literally had our hosting provider bring in additional servers for these events, and then decommission them after the events ended.  If we had EC2 we might have been able to scale more flexibly during these events.  Also, we had loads of static content stored in our Oracle database and served up by our web servers.  S3 would have allowed us to serve this content more effectively while reducing our reliance on a homegrown caching system.
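For the S3 piece, here is a minimal sketch of what “serve static content from object storage instead of Oracle” looks like today; it assumes the boto3 package and AWS credentials, and the bucket name is made up:

    import boto3

    s3 = boto3.client("s3")

    def publish_static_asset(local_path, key, bucket="foxsports-static-assets"):
        # Push the file to S3 once; the web tier then links to the object URL
        # instead of pulling the content out of the database on every request.
        s3.upload_file(
            local_path, bucket, key,
            ExtraArgs={"ContentType": "text/html", "CacheControl": "max-age=3600"},
        )
        return f"https://{bucket}.s3.amazonaws.com/{key}"

    # publish_static_asset("standings.html", "mlb/standings.html")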

Cloud integration (a la Boomi, CastIron Systems): As a sports website, we had a whole bunch of data and content feeds that we’d get from third parties.  Each feed was a custom integration using different protocols and authentication methods, and each required specialized operations support.  If we had solutions like Boomi or CastIron available to us, we could have saved ourselves and our partners a whole lot of development time, and the end result would have been a more operationally supportable set of systems, with more flexibility to onboard new business partners quickly.
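The shape of the problem is easy to sketch. Each partner feed needed its own adapter that mapped its format onto one internal representation; the partner names and fields below are invented:

    import csv
    import io
    import json

    def parse_scores_json(raw):
        # Partner A delivers JSON with its own field names.
        return [{"home": g["home"], "away": g["away"], "score": g["score"]}
                for g in json.loads(raw)["games"]]

    def parse_scores_csv(raw):
        # Partner B delivers CSV with different field names again.
        reader = csv.DictReader(io.StringIO(raw))
        return [{"home": r["home_team"], "away": r["away_team"], "score": r["final"]}
                for r in reader]

    ADAPTERS = {"partner_a": parse_scores_json, "partner_b": parse_scores_csv}

    def ingest(partner, raw):
        return ADAPTERS[partner](raw)

Multiply that by protocols, authentication schemes, and retry handling for a dozen partners and you can see why a packaged integration layer would have been worth paying for.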

Application caching layer (e.g. memcached): We built our own caching platform within our app so that we wouldn’t hit our Oracle database so often with reads.  The cache logic was built in our app, and the storage for the cache was an NFS shared volume sitting on a Netapp NAS device.  If we built the site today, we could leverage memcached (or one of its commercial derivatives) and save a bunch of dev, testing and debugging time.
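What memcached buys you is the standard cache-aside pattern. A minimal sketch, assuming the pymemcache package, a local memcached instance, and a hypothetical Oracle read behind load_article_from_oracle:

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def load_article_from_oracle(article_id):
        # Stand-in for the expensive database read the cache is protecting.
        return f"<html>article {article_id}</html>"

    def get_article(article_id, ttl_seconds=300):
        key = f"article:{article_id}"
        cached = cache.get(key)
        if cached is not None:
            return cached.decode()
        html = load_article_from_oracle(article_id)
        cache.set(key, html, expire=ttl_seconds)
        return html

That is roughly what our homegrown NFS-backed cache did, except we had to write, test and debug all of it ourselves.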

Google Analytics: We spent a ton of money on web analytics solutions back in the day.  Google Analytics would have given us much of the same functionality, for free.  Enough said :)

All of these solutions would have addressed big pain points for our tech team, and consequently for our business as a whole.

Would love to hear any of your war stories related to this topic in the comments.


Written by John Gannon

June 24, 2009 at 11:39 am

The cloud cash cow: planning & implementation services



Sun Microsystems (NSDQ: JAVA) has announced the Cloud Strategic Planning Service to provide cloud know-how to companies of various sizes that want to implement a form of cloud computing.

The planning service will be provided through Sun’s consulting arm, Sun Professional Services. It will evaluate a customer for cloud-readiness, determine whether a public or private cloud is appropriate, and identify opportunities in the cloud in terms of the nature of the business, the corporate culture, and the existing IT environment.

via Sun Shows Off Vendor Support For Sun Cloud — Cloud Computing — InformationWeek.

What percentage of enterprise workloads are in the cloud today?  My swag is less than 1%.

Think about why the x86 virtualization services market is so big.  Because you’ve got less than 10% of workloads virtualized.  There are plenty of workloads out there that companies need help figuring out where/how/when to virtualize.  We’ll never get to 100% virtualized, but even if we hit 20% or 30%, there is still a huge services opportunity.

Going from 1% to even 5% or 10% is meaningful to the whole ecosystem of cloud services vendors, including the professional services guys who are going to help customers make the transition.  Since the cloud professional services ‘experts’ haven’t really been identified yet, I bet we’ll see a lot of people hanging the cloud shingle and following the money.

Getting back to the post referenced here, this is an obvious move for Sun, and is bound to put a ton of juice into their services business.  Cloud consulting is going to be very high margin and high volume at the same time.

Buckle up, folks.


Written by John Gannon

June 2, 2009 at 4:02 pm

Posted in Uncategorized


Backup your data passively, or don’t backup at all



I’ve finally come to the realization that the idea of an end home/SOHO/SMB user being actively involved in backing up their data is a losing proposition.

Foundry Group just funded Cloud Engines, a company that makes a product (Pogoplug) which allows you to passively share data from your hard drive and make it available as a cloud-like service.  Certainly automated backups would be a logical next step.

There are also some other guys out there (whose names are escaping me – and by all means please add them to the comments) who take a similar approach of placing a device in the network path to perform backups with bare minimum user configuration or intervention.

If the backup service/software has to prompt a user for files and directories to be backed up, it has already failed.  The user will underutilize it, or won’t use it at all (sadly I’m in the latter bucket).

A device inline on the network will miss some stuff, but it’s certainly better than nothing (which is what you get with backup software which sits uninstalled/unconfigured).

I wonder if the network card makers could create a backup offload engine (BOE?) chip that would grab file-related network I/Os and replicate them into the cloud.  We have TCP offload engines (TOE) and iSCSI offload chips, so why not BOE?
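The chip does not exist, but the “passive, no prompting” behavior is easy to sketch in software. Here is a minimal Python illustration that polls a directory and copies anything that changed to a backup location without ever asking the user which files matter; the paths and interval are placeholders:

    import os
    import shutil
    import time

    def passive_backup(source_dir, backup_dir, interval_seconds=60):
        seen = {}  # path -> last modification time we copied
        while True:
            for root, _dirs, files in os.walk(source_dir):
                for name in files:
                    src = os.path.join(root, name)
                    mtime = os.path.getmtime(src)
                    if seen.get(src) != mtime:
                        # Mirror the changed file into the backup tree, preserving structure.
                        dst = os.path.join(backup_dir, os.path.relpath(src, source_dir))
                        os.makedirs(os.path.dirname(dst), exist_ok=True)
                        shutil.copy2(src, dst)
                        seen[src] = mtime
            time.sleep(interval_seconds)

A BOE chip would effectively do the same thing in the network path, which is why the inline-device approach appeals to me even if it misses some stuff.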



Written by John Gannon

May 13, 2009 at 5:56 pm

Posted in Uncategorized


Come visit me (online) at Cloudslam on Monday April 20 2p ET


I’m sitting on an online panel on Monday, April 20 at 2pm ET, where I’ll be discussing cloud computing adoption with a group of vendors, investors, and practitioners.  I hope you can join us online.

Written by John Gannon

April 16, 2009 at 10:54 am

Posted in Uncategorized


Where are all the enterprise cloud success stories?



There are way too many vendors talking and not enough practitioners. That is not a good thing. If all of our education is coming from vendors we will be in a bad place. This is SOA all over again from 3-4 years ago. When I do see a practitioner, most of them have not done anything significant yet. If I have to listen to the NY Times success story one more time I will puke. I keep hearing the same case studies over and over which tells me that we are talking about the cloud a heck of a lot more than we are doing the cloud. Most of the good case studies are from startups which is an obvious sweet spot for the cloud. I haven’t heard many case studies where an established enterprise built something significant in the cloud.

via My thoughts on Cloud Computing after the Cloud Computing Expo (Mike Kavis’s blog)

Mike made a great post reviewing the cloud computing expo, and I encourage you to read it.  The post resonated with me because I’ve seen a similar dynamic in the past with other enterprise IT companies, and the dynamic keeps repeating itself.  Let me draw on some of my experience at VMware as an example.

I started working at VMware in 2003 when ESX Server was v1.5, and spent the better part of the first year or so jetting around the US, helping customers learn how to use what was, at the time, a very new product without wide adoption.

Besides getting a great education on bad takeout food and figuring out which hotels would give me double miles for staying with them, I also witnessed an interesting phenomenon related to how customers perceived our product.

Here are some conversations from various customer visits (with a little bit of hyperbole sprinkled in, and the names and faces changed) to give you a sense of what I’m talking about.  Although I’m exaggerating with regard to customer firewalls communicating with Mars, the general flavor of the scenarios is quite real.  Here goes…

  • Customer ABC: “Hey, have you ever tried to use ESX with super-bleeding-edge-storage-array-XYZ that was released last week?”
  • Me: “Nope”
  • Customer EFG: “Hey, we’re running funky Cisco VLAN configuration options #752, #781, and oh yeah, we’ve got the firewall configured to communicate with Mars on alternate Tuesdays through a virtual machine proxy server.  You have any other customers in our region who are running this configuration?”
  • Me: “Uhh…”
  • Customer 123: “John, I see you guys support our tape backup software.  We have it configured in this really specific way that cost us about $100k of consulting work from the tape backup vendor to get it to work just right.  Any chance you guys tried this configuration out in your QA labs at VMware?”
  • Me: (frantically calls product management)

I was trying (perhaps unsuccessfully) to be funny with these examples, but hopefully I’m illustrating a point…

When a startup releases a new product, everyone in the market knows the darned thing is so new that there is no reference architecture, there are no production customer references, and there is probably some risk that the technology will not work as advertised or designed.  Expectations are not low, but it’s clear to everyone in the room (or the datacenter) that the vendor and the customer are going to need to be tied at the hip to make the implementation successful, and likely everyone is going to be learning and taking lumps along the way.

It’s a different story when a company or a market sector has received strong coverage (and hype) from the press and has the appearance of being more established.  All of this mindshare, marketing, and PR causes customers to assume that a startup will be able to support any and all configurations and use cases, even in newly released products.

And that’s where we are with cloud computing.  Conversations like the ones I had at VMware are the same kinds of conversations the cloud vendors are having with their customers and prospects.  Cloud computing has been receiving so much press and hype, but frankly (and to Mike’s point) there are not many referenceable examples that show enterprise adoption.

Over time, we’ll start to see greater adoption in the enterprise, but it will take some cutting-edge enterprises assuming some risk and partnering with emerging cloud computing vendors to develop this next generation of success stories.


Written by John Gannon

April 7, 2009 at 10:14 am

Posted in Uncategorized

