2010 Prediction: Lee Caswell, Pivot3
Today’s virtualization implementations and the very use of the term “virtualization” may blind newer IT professionals to the fact that virtualization technology was first developed and productized by IBM more than thirty years ago. Sure enough, a quick glance through Wikipedia confirms the extent of IBM’s hypervisor innovations and shows that today’s virtualization solutions are not the first to offer hardware consolidation and high-availability benefits to advanced IT users.
“In the early 1970s...multiple copies of MVS (or other IBM operating systems) could share the same machine if that machine was controlled by VM/370 - in this case VM/370 was the real operating system and regarded the "guest" operating systems as applications with unusually high privileges.” - Wikipedia
With the knowledge that today’s virtualization solutions are the second wave of innovations, not the first, it is interesting to think about how this wave is different and why the third wave of virtualization will come more quickly and affect our daily lives even more directly.
So, what’s different about this wave and why did it take 30 years? The key difference this time around is that virtualization software innovations are being applied to high-volume commodity hardware platforms provided by open-system providers. Consider that unit shipments of x86 servers are three orders of magnitude greater than IBM mainframe shipments. The volume economies of x86 server hardware platforms and the plethora of suppliers ensure a multiplier effect for the benefits of virtualization.
It doesn’t stop there. The inexorable march of Moore’s Law has encouraged non-server vendors, such as network and storage providers, to abandon proprietary silicon in favor of x86 platforms and blades. In storage, IDC calls this trend the “serverization of storage,” and in 2010 we will see a massive shift to x86 platforms running new storage software that creates virtual arrays from discrete x86 storage appliances. Embedded server virtualization in these new non-server x86 platforms will be a key technical differentiator in making them fast and reliable. Cisco, for example, is applying virtualization to its switch platforms as part of its Unified Computing Initiative. In storage, Pivot3 deploys server virtualization to embed hosted applications in scale-out x86 storage appliances.
It is my prediction that the shift to x86-based non-server platforms with embedded server virtualization will be no less seismic than the mid-’90s shift in which x86 servers displaced RISC architectures, because these x86-based platforms address the fastest-growing market segments: rich media, video, medical imaging, and disk-based backup.
How about beyond 2010? The lengthy thirty-year gap between the first two virtualization adoption waves is a testament to the two substantive obstacles that any virtualization wave must overcome: performance taxes and power budgets. On performance, every virtualization technology steals a portion of CPU resources to manage multiple loads, and the tipping point seems to come when CPUs can support four or more guests. Similarly, there is a power tradeoff: a powerful virtualization host becomes more economical than separate physical devices only when multiple guests can share its power load. Again, a ratio of roughly 4x seems to be the practical hurdle for widespread adoption, whether for consolidation or for failover purposes.
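The 4x power hurdle can be illustrated with a back-of-the-envelope calculation. The sketch below is not based on any measured figures; the wattages are hypothetical assumptions chosen only to show how the tipping point falls out of the tradeoff described above.

```python
# Illustrative break-even math for virtualization power economics.
# All wattage figures below are hypothetical assumptions, not benchmarks.

def breakeven_guests(host_watts: float, standalone_watts: float) -> int:
    """Return the smallest number of guests at which one virtualized
    host draws less power than running each workload on its own
    separate physical server."""
    guests = 1
    while host_watts >= guests * standalone_watts:
        guests += 1
    return guests

# Assume a powerful virtualization host draws ~400 W, while a small
# standalone server for a single workload draws ~120 W (both invented
# numbers for illustration).
print(breakeven_guests(400, 120))  # prints 4
```

Under these assumed numbers, the host only beats separate physical boxes once it carries four guests, which matches the rough 4x adoption hurdle noted above. Different hardware assumptions would of course move the break-even point.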