Tuesday, October 22, 2013

The real Slim Shady

Historically, when an application team needed compute and storage resources, they would kick off a workflow that pulled in several teams to design, procure, and deploy the required infrastructure (compute, storage, and network).  The whole process generally took a few months from request to delivery.

The reason for this onerous approach was that application groups generally dictated their choice of compute technology.  Since most applications scaled vertically, the systems and storage scaled likewise.  When an application needed more horsepower, the answer was bigger, more powerful computers and faster storage.  The new hardware was then staged, followed by a less-than-optimal migration onto it.

The subtlety that gets lost regarding server virtualization is that a virtualization cluster is built on [near] identical hardware.  The first machines to be virtualized were the ones whose compute and storage requirements could be met by the hardware the cluster was based on.  These tended to be the applications that were not vertically scaled.  The business-critical, vertically scaled applications continued to demand special treatment, driving the overall infrastructure deployment model used by the enterprise.

The data center of the past is littered with technology of varying kinds.  In such an environment, technology idiosyncrasies change faster than the ability to automate them -- hence the need for operators at consoles with a library of manuals.  Yesterday's data center network was correspondingly built to cater to this technology consumption model.  A significant part of the cost of procuring and operating the infrastructure was on account of this diversity.  Obviously, meaningful change would not be possible without addressing this fundamental problem.

Large web operators had to solve the issue of horizontal scale-out several years ahead of the enterprise and essentially paved the way for the horizontal approach to application scaling.  HPC had been using the scale-out model before the web, but the platform technology was not consumable by the common enterprise.  As enterprises began to leverage web-driven technology as the platform for their applications, they gained its side benefits, one of which is horizontal scale-out.

With the ability to scale horizontally, it became possible to break an application into smaller pieces that could run across smaller "commodity" compute hardware.  Along with this came the ability to build homogeneous, easily scaled compute pools that could meet the growth needs of horizontally scaling applications simply by adding more nodes to the pool.  The infrastructure delivery model shifted from reactive, application-driven, custom dedicated infrastructure to a proactive, capacity-driven infrastructure-pool model.  In the latter model, capacity is added to the pool when it runs low.  Applications are entitled to pool resources based on a "purchased" quota.
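As a rough illustration of that pool-and-quota model, here is a minimal sketch.  The class, node names, and quota numbers are hypothetical, not any particular vendor's API; the point is simply that capacity is grown in bulk when the free pool runs low, while tenants draw from it up to a purchased limit.

```python
# Minimal sketch of a capacity-driven compute pool with per-tenant quotas.
# All names and numbers are illustrative only.

class ComputePool:
    def __init__(self, low_watermark_nodes=10):
        self.nodes = []                    # homogeneous nodes, added in bulk
        self.quotas = {}                   # tenant -> max nodes "purchased"
        self.allocations = {}              # tenant -> nodes currently in use
        self.low_watermark = low_watermark_nodes

    def add_capacity(self, count):
        """Grow the pool proactively, e.g. when a prefab rack comes online."""
        start = len(self.nodes)
        self.nodes.extend(f"node-{start + i}" for i in range(count))

    def set_quota(self, tenant, max_nodes):
        self.quotas[tenant] = max_nodes

    def allocate(self, tenant, count):
        """Hand out nodes up to the tenant's quota; refuse beyond it."""
        used = len(self.allocations.get(tenant, []))
        if used + count > self.quotas.get(tenant, 0):
            raise RuntimeError(f"{tenant} exceeds purchased quota")
        if count > len(self.nodes):
            raise RuntimeError("pool exhausted -- add capacity")
        granted = [self.nodes.pop() for _ in range(count)]
        self.allocations.setdefault(tenant, []).extend(granted)
        return granted

    def needs_capacity(self):
        """Capacity is added when the free pool runs low, not per request."""
        return len(self.nodes) < self.low_watermark


pool = ComputePool()
pool.add_capacity(40)                      # e.g. a new prewired rack
pool.set_quota("billing-app", 8)
print(pool.allocate("billing-app", 4))     # within quota: granted
print(pool.needs_capacity())               # False until the free pool drains
```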

When homogeneity is driven into the infrastructure, it becomes possible to build out the physical infrastructure in groups of units.  Many companies are now consuming prefabricated racks with computers that are prewired to a top-of-rack switch, and even prefabricated containers.  When the prefabricated rack arrives, it is taken to its designated spot on the computer room floor and the power and network uplinks are connected.  In some cases the rack brings itself online within minutes with the help of a provisioning station.
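To make that "brings itself online" step concrete, here is a rough sketch assuming a DHCP/PXE-style zero-touch provisioning station.  Every name and function in it (the rack identifiers, discover_new_racks, push_base_config, register_with_pool) is hypothetical and mocked; a real station would listen for the new top-of-rack switch announcing itself and then push config and images over the uplink.

```python
# Hypothetical zero-touch provisioning loop; all functions are stand-ins.
import time

def discover_new_racks():
    """Stand-in for hearing a new top-of-rack switch announce itself
    (e.g. via DHCP/LLDP); here we just return one canned example."""
    yield {"rack_id": "rack-42", "tor_switch": "tor-42", "nodes": 40}

def push_base_config(rack):
    """Stand-in for pushing switch config and node images over the uplink."""
    print(f"configuring {rack['tor_switch']} and imaging {rack['nodes']} nodes")

def register_with_pool(rack):
    """Stand-in for adding the rack's nodes to the shared compute pool."""
    print(f"{rack['rack_id']} online: {rack['nodes']} nodes added to the pool")

for rack in discover_new_racks():
    push_base_config(rack)
    time.sleep(0.1)          # real imaging takes minutes, not milliseconds
    register_with_pool(rack)
```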

As applications transitioned to horizontal scaling models and physical infrastructure could be built out in large homogeneous pools, some problems remained.  In a perfect world, applications would be inherently secure and would be deployed to compute nodes based on availability of CPU and memory, without the need for virtualization of any kind.  In this world, the network and server would be very simple.  The reality, on one side, is that an application's dependencies on shared libraries do not allow it to co-exist with another application that needs a different version of those libraries.  This, among other things, forces the need for server virtualization.  On the other side, since today's applications are not inherently secure, they depend on the network to create virtual sandboxes and to enforce rules within and between these sandboxes.  Hence the need for network virtualization.
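The sandbox idea is easy to picture as a policy table: traffic inside a segment is allowed, traffic between segments must match an explicit rule, because the applications themselves cannot be trusted to enforce this.  The segment names, ports, and rules below are made up purely for illustration.

```python
# Toy illustration of "rules within and between sandboxes".
# Segment names and allowed flows are invented for the example.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {8080},
    ("app-tier", "db-tier"): {5432},
}

def flow_permitted(src_segment, dst_segment, port):
    """Allow traffic inside a segment; between segments, require an
    explicit rule, since the applications are not inherently secure."""
    if src_segment == dst_segment:
        return True
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

print(flow_permitted("web-tier", "app-tier", 8080))   # True: rule exists
print(flow_permitted("web-tier", "db-tier", 5432))    # False: no direct path
```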

Although server and network virtualization have the spotlight, the real revolution in the data center is simple, homogeneous, easily scalable physical resource pools and applications that can use them effectively.  Let's not lose sight of that.


[Improvements in platform software will secure applications and allow them to co-exist on the same logical machine within logical containers, significantly reducing the need for virtualization technologies in many environments.  This is already happening.]
