FAQ: Virtual Desktop Infrastructure (VDI)
This FAQ helps organizations better understand how to deploy desktop virtualization, the associated considerations, drivers, user behaviors, and applications, as well as how Riverbed solutions play a critical role in ensuring the best possible user experience.
Is VDI a Viable Option for All Companies?
Companies of all sizes are increasingly replacing, or considering replacing, their traditional thick client PC deployments with virtual desktop infrastructure (VDI). A convergence of trends is responsible. For example, server virtualization has not only given organizations greater comfort with centralization and virtualization models, but has also prompted them to find additional ways to maximize the corresponding cost savings and management simplification. Additionally, the increased adoption of new operating systems like Windows 7 has led to operating system and client hardware upgrade cycles as organizations look to take advantage of more advanced features. Finally, improvements in desktop virtualization offerings are edging closer to delivering “high definition,” local-like desktop experiences.
The combination of these factors provides a powerful draw to VDI for almost any organization. Unfortunately, scaling out to remote locations, branch offices, and mobile workers brings challenges such as degraded end-user experience and increased bandwidth utilization. Without careful preparation, the internal benefits VDI provides to IT organizations will be outweighed by a sub-par experience for users, who expect at least as good an experience as they are accustomed to with traditional desktops.
What Are the Inhibitors to VDI Adoption?
Despite the benefits, a number of complicating factors have slowed wider adoption of VDI. In essence, VDI represents a truly centralized architecture, and that prospect gives some IT organizations pause as they consider moving to this form of desktop and application delivery. The combination of server design, hypervisor capability, virtual machine density, display protocols, and bandwidth has a direct effect on the performance of desktop virtualization clients. The added cost and complexity of a VDI project can dwarf the seemingly simpler choice of continuing with thick client PCs running a local OS.
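As a rough illustration of how these factors interact, the sketch below shows how virtual machine density on a single host falls out of per-desktop resource assumptions. All function names, ratios, and figures here are illustrative assumptions for planning discussions, not vendor sizing guidance.

```python
# Back-of-envelope VDI host sizing: density is bounded by whichever
# resource (RAM or CPU) runs out first. Every figure is illustrative.

def desktops_per_host(host_ram_gb, host_cores, ram_per_vm_gb,
                      vcpu_per_vm=1, vcpu_ratio=6, hypervisor_ram_gb=8):
    """Return (max desktop VMs, limiting resource) for one host.

    host_ram_gb      -- physical RAM in the host
    host_cores       -- physical CPU cores
    ram_per_vm_gb    -- RAM assigned to each desktop VM
    vcpu_per_vm      -- vCPUs per desktop
    vcpu_ratio       -- assumed vCPU:core oversubscription ratio
    hypervisor_ram_gb-- RAM reserved for the hypervisor itself
    """
    ram_limit = (host_ram_gb - hypervisor_ram_gb) // ram_per_vm_gb
    cpu_limit = (host_cores * vcpu_ratio) // vcpu_per_vm
    limiting = "RAM" if ram_limit <= cpu_limit else "CPU"
    return int(min(ram_limit, cpu_limit)), limiting

# Example: a 256 GB, 32-core host with 4 GB desktops at 6:1 oversubscription.
print(desktops_per_host(256, 32, 4))  # -> (62, 'RAM')
```

Note how a change to a single assumption, such as moving desktops from 4 GB to 8 GB of RAM, roughly halves density and with it the per-desktop economics.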
In localized deployments, administrators can exercise some control over these components to maximize performance. With a VDI deployment, IT administrators risk losing visibility into application performance, limiting their ability to resolve problems as latency and bandwidth constraints impact every element of a user’s computing experience. The network itself can single-handedly derail the success of a VDI deployment, as it is often the hardest element to control.
Are There Any Hidden Costs to a VDI Deployment?
A measurable return on desktop virtualization can come in many forms: exchanging a fleet of disparate heavy clients for more efficient centralized servers and thin clients is one; the lower total cost of ownership of IT services such as desktop support is another. But these savings should be weighed against VDI startup costs. Re-architecting an entire deployment is not a trivial exercise: it includes learning new skills, planning, implementation, and end-user training. The scope of these projects often involves multiple groups, such as network, server, application, and desktop teams, which can obscure what might otherwise be recognized as significant downstream costs. Provisioning the correct amount of bandwidth is one example of a hidden cost. The sudden spike in network activity that results from introducing multiple desktop display streams frequently requires additional capacity, adding a significant recurring operational expense to deliver virtual desktops alongside everything else on the network.
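To make the bandwidth point concrete, a back-of-envelope estimate of the extra WAN capacity a branch might need once its desktops become remote display streams could look like the following. Every parameter and figure (per-stream bandwidth, concurrency, headroom) is an assumption chosen for illustration; real display-protocol consumption varies widely with workload.

```python
# Rough estimate of the additional WAN bandwidth a branch office needs
# when its desktops are delivered as remote display streams.
# All figures are illustrative assumptions, not vendor numbers.

def branch_bandwidth_mbps(users, kbps_per_stream, concurrency=0.8, headroom=1.3):
    """Estimate required WAN capacity in Mbps for a branch running VDI.

    users           -- desktops homed in the data center
    kbps_per_stream -- average display-protocol bandwidth per active session
    concurrency     -- fraction of users active at peak
    headroom        -- safety margin for bursts (printing, video, etc.)
    """
    active_sessions = users * concurrency
    return active_sessions * kbps_per_stream * headroom / 1000.0

# Example: a 100-user branch, assuming ~150 kbps average per session.
print(round(branch_bandwidth_mbps(100, 150), 1))  # -> 15.6
```

Even under these modest assumptions, a single 100-user branch consumes capacity that previously carried only file, web, and email traffic, which is why the resulting circuit upgrades become a recurring operational expense.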