Managing Virtualized Infrastructure Will Continue to Be Too Complex in 2013 - Executive Viewpoint 2013 Prediction: Tegile Systems

Rob Commins
Wednesday, January 16th 2013

Many large firms and small startups have been using the phrase “Converged Infrastructure” to describe pools of assets that deliver storage, server and networking resources to applications. These assets are to be managed as a single entity that can be provisioned, monitored and managed from a single vantage point.

Today it is pretty common to hear that a virtualized environment became so overtaxed that the latencies it was hitting would induce a disaster-recovery-level failover. This scenario hardly shows that datacenters are self-healing and self-adjusting. It is incredibly clear that the notion of Converged Infrastructure has plenty of room to grow.

Just as the real estate market runs on “location, location, location,” it is clear that the virtualized datacenter runs on “reporting, reporting, reporting.” But has reporting gotten out of hand? Why in the world does a datacenter administrator need all of these reports? What happened to easy fault or bottleneck isolation, and to orchestration tools that make it all better?

There are certainly some pretty cool tools and reports available to IT administrators. Being able to see a graphical representation of the balance between two active sites is nice. But we all know that the devil is in the details. At a high level, two data centers may be in balance while a key component under the hood is in a high-speed wobble. Getting at these issues quickly and concisely is key, and no virtualization administrator I speak with feels he or she is getting automated out of a job any time soon.