Virtual I/O: Bringing Data Center Connectivity Into the Virtualization Era
While server virtualization has significantly improved data center economics by driving higher server utilization, it has also pushed server I/O to its breaking point. Virtualized servers demand more connections to networks and storage, and require more I/O bandwidth to sustain that utilization, making server connectivity in today’s data center more complex and costly than ever.
Contrasted with the powerful elegance of virtual machine management, today’s I/O management looks like an anachronism. The myriad of cards, cables, and switch ports – none of which were originally designed for virtualized servers – drives up capital costs and limits application flexibility and scalability. Time-consuming tasks, such as de-racking servers to add cards, installing cables, and adding switch ports, remain antiquated reminders of our non-virtualized roots. Far from being quaint customs, these management tasks often take days or weeks to complete, are error-prone, and may consume resources from multiple IT teams. Worse yet, the resulting infrastructure often delivers sub-optimal results that may ultimately limit server performance.
How Virtualization Drives the Need for I/O
The term “I/O” simply refers to the transfer of information from one device to another. In the case of “server I/O”, we are looking specifically at the “first hop”: the interconnect from a server to a switch, to a storage device, or to another server.
Connecting a server to a switch or two may not sound very complicated – and in fact when servers run only one application, it usually isn’t. In that case, the application defines the connectivity requirements, and those needs rarely change.
But server virtualization requires much more from I/O. When a server can run a dozen or more applications, and those applications are subject to change, three things happen:
More I/O ports: Data centers commonly have anywhere from two to ten physically separate networks. At various points in time, a virtualized server could require connectivity to any or all of them, driving up the number of server connections needed.
More bandwidth: As server utilization increases from today’s norm (<10%) to the levels possible with virtualization (>50%), I/O usage naturally rises as well. When servers were underutilized, I/O was underutilized too. Now more applications drive more traffic, increasing the risk of bottlenecks, particularly on modern multi-core CPUs whose high processing capacity and fast memory can generate far more traffic than a single application ever did.
More changes: A virtualized environment is by its nature dynamic. Since requirements are likely to change, connectivity needs will change as well.
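To make the bandwidth point concrete, here is a minimal back-of-the-envelope sketch using hypothetical per-application traffic figures (the function name and the 2 Gbps-per-application assumption are illustrative, not from the text; only the <10% and >50% utilization figures come from the discussion above):

```python
# Illustrative sketch (hypothetical numbers): estimate aggregate server I/O
# demand before and after virtualization, using the utilization figures
# cited above (<10% unvirtualized, >50% virtualized).

def io_demand_gbps(num_apps, avg_gbps_per_app, utilization):
    """Rough aggregate I/O demand for a server running num_apps workloads."""
    return num_apps * avg_gbps_per_app * utilization

# One application on an underutilized server (~10% busy).
before = io_demand_gbps(num_apps=1, avg_gbps_per_app=2.0, utilization=0.10)

# A dozen virtual machines on a well-utilized server (~50% busy).
after = io_demand_gbps(num_apps=12, avg_gbps_per_app=2.0, utilization=0.50)

print(f"Unvirtualized: {before:.1f} Gbps")  # prints "Unvirtualized: 0.2 Gbps"
print(f"Virtualized:   {after:.1f} Gbps")   # prints "Virtualized:   12.0 Gbps"
```

Even with these rough assumptions, aggregate demand grows by well over an order of magnitude, which is why consolidation that ignores I/O capacity invites bottlenecks.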
Limping Along with Your Daddy's I/O
Traditional I/O can obviously work with virtualized servers; after all, the majority of today's deployments work this way. Three "brute force" options are commonly employed:
Option 1: Connect everything: Physically connect every server to every network