Recently I wrote a guest editorial for Virtual Strategy Magazine, although I have to admit I wasn’t made aware of my goofy picture – look away, I’m hideous – until the article was published. You can find the full contents at Virtual Strategy Magazine.
From the article (here)…
As I pondered the challenges of current desktop management, researched the latest and greatest from the desktop virtualization vendors, and talked to a number of large organizations, I was trying to find the one thing that explains the operational benefits of client-side virtualization. It really does come down to the need for standardization in our end-user computing platforms. Consolidation is the “killer app” for server/data center virtualization, and standardization is the major benefit for client-side virtualization.
Computing environments have changed radically over the years. Static, tethered devices residing behind a well-protected perimeter, requiring only minimal protection and maintenance, have given way to complex, highly distributed environments with large populations of remote, intermittently connected devices. These devices access not only centrally managed corporate resources, but also corporate resources managed by third-party SaaS or cloud computing providers.
This evolution to distributed computing is occurring in parallel with the most hostile threat environment we have ever experienced. With law enforcement officials suggesting that cybercrime is at an all-time high and has become more profitable than even the international black market for illicit drugs, nation-states opining on cyberwar, and of course the prevalence of breach disclosures and identity theft, most of the country has been left numb to the dangers of online transactions.
There is no question that poorly managed systems are less secure, cost more, and lead to reactionary spending, with different groups implementing disparate management and security tools, none of which align easily with those of adjacent IT departments within the organization.
Gartner publishes research on the total cost of ownership (TCO) for PCs, which finds that locked-down, well-managed PCs can cost up to 42% less than unmanaged systems. It also found that somewhat-managed and mismanaged PCs were only slightly less costly than unmanaged PCs, because of the costs incurred by the management systems themselves.
The volume of technical support calls that result from deviations from IT standards or common operating environments is not only significant but consumes a disproportionate amount of time to troubleshoot and resolve.
Desktop management is so costly in large part because of a lack of standardization. IT has not been able to maintain a common operating environment (COE) that enables it to effectively manage and secure its end-user population. For every application deployed or updated; for every patch release, AV data file update or system modification; for every downloaded widget and system reboot, there is some segment of the user population that experiences downtime, conflicts or other technical issues resulting from variability in their computing environment.
These challenges, coupled with the success of server virtualization, have driven a lot of attention toward desktop virtualization. The problem is that even though standardization is good, the majority of companies will find desktop virtualization an exceedingly difficult and unacceptably costly proposition.
With VDI, virtual desktop images are stored in a data center and provided to a client via the network. The virtual machines will include the entire desktop stack, from operating system to applications to user preferences, and management is provided centrally through the backend virtual desktop infrastructure.
The promise is that VDI will replace the need for myriad systems management and security tools that are currently deployed. No more demands for traditional desktop management tools for OS deployment, patch management, anti-virus, personal firewalls, encryption, software distribution and so on. In fact, many are suggesting that we can return to thin client computing models.
Standardization has major implications for security management as well. In the latest Verizon Data Breach Investigations Report, forensic analysis found that almost 80% of all data breaches came from an external source, and in 98% of those external breaches the attacker exploited poorly administered or misconfigured systems. In other words, roughly three-quarters of all breaches traced back to systems that were not properly managed.
Before embarking on a desktop virtualization project, organizations will need to understand the impact:
- Does your organization support remote, intermittently connected mobile computing devices?
- Have you considered the cost of the backend VDI/HVD infrastructure (network, storage, hardware, etc.)?
- Will the project require specialized FTEs in addition to current IT staff?
- How will the organization manage and secure the virtualization infrastructure?
- How will the organization maintain the health and security of roaming virtual desktops or desktops in use?
- What are the licensing costs for the operating systems, applications and other end-user components aside from the virtual software itself?
In some select situations, VDI or server-hosted virtual desktops hold promise for improved efficiencies, lower management costs and improved security. But their effectiveness is limited to environments that can adopt thin-client computing models, do not require offline or mobile support, and can enforce draconian usage policies on their user population so that the loss of personal computing power does not impact productivity or end-user satisfaction.
VDI has additional problems as well. First is the inherent cost and complexity of simply implementing VDI. In many cases the backend requirements for storage, networking, connection brokers and management systems can be 4-10 times as expensive as traditional solutions.
Second, the reality is that regardless of the marketing hype, media frenzy, and vendor misinformation, these systems still require real-time systems management and security solutions. Centralizing the desktop image does not magically protect it from viruses, intrusion attempts, system compromises, or operational failures.
Third, even if one could implement virtual desktops efficiently and with limited investment, the user population would still be unable to work offline or in a disconnected fashion while enjoying the same integrity and protection provided while tethered to the corporate network. Additionally, most users would never allow themselves to be deprived of personal computing power, so a thin-client model would only work where the user population requires little more than access to a single application or a small set of corporate applications, and the devices themselves have ‘always-on’ static network connectivity.
Fourth, and most importantly, VDI introduces a single point of compromise: an attacker needs only to attack the central data center servers to bring down the entire end-user population.
As cumbersome and inappropriate as VDI models may be, there are alternatives that provide the benefits of desktop virtualization while maintaining the integrity of distributed computing models. A PC hypervisor is a software layer between the operating system and the PC hardware that allows hardware resources (CPU, RAM, disk, etc.) to be shared between multiple execution environments.
PCs have very different requirements than servers; therefore, PC hypervisors have very specific attributes that are not available in server-based hypervisors such as Hyper-V or ESX. The most important of these is support for device pass-through: the ability for a guest operating system to access hardware and peripherals directly, without emulation or para-virtualization.
To illustrate the difference, consider video card support. In the server environment there is no requirement for high-end video processing, such as 3D modeling, since this is generally done at the client. Server-based hypervisors rely on “emulation” or “para-virtualization,” which mimics the hardware itself but cannot take advantage of the video card’s GPU (graphics processing unit). In end-user computing, however, there is a real requirement to take advantage of the computer’s hardware, such as high-end video cards.
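As a concrete aside, whether a PC hypervisor can offer these hardware-assisted features at all depends on the CPU advertising virtualization extensions. The minimal sketch below is illustrative only (the helper name is mine, not from any vendor’s tooling) and assumes the standard Linux /proc/cpuinfo flag names: vmx/svm for Intel VT-x/AMD-V, and ept/npt for hardware-assisted nested paging.

```python
# Illustrative sketch: check which CPU virtualization features a client-side
# hypervisor could rely on, given the "flags" field from Linux /proc/cpuinfo.
# The function name and structure are hypothetical, for explanation only.

def virtualization_support(cpuinfo_flags: str) -> dict:
    """Report virtualization-related CPU features found in a
    space-separated /proc/cpuinfo flags string."""
    flags = set(cpuinfo_flags.split())
    return {
        # Intel VT-x ("vmx") or AMD-V ("svm"): baseline hardware
        # virtualization extensions a hypervisor builds on
        "cpu_virtualization": bool(flags & {"vmx", "svm"}),
        # Intel EPT ("ept") or AMD NPT ("npt"): hardware-assisted
        # nested paging, which avoids costly software page-table shadowing
        "nested_paging": bool(flags & {"ept", "npt"}),
    }

if __name__ == "__main__":
    # Example flags excerpt from an Intel CPU with VT-x and EPT enabled
    sample = "fpu vme msr sse2 ssse3 vmx ept"
    print(virtualization_support(sample))
```

Note that full device pass-through (e.g. handing a GPU to a guest) additionally requires chipset-level I/O virtualization such as Intel VT-d or AMD-Vi, which is not reported in the cpuinfo flags parsed here.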
PC hypervisors provide another very important benefit: the abstraction of management outside of the OS. Information security and operational management of end-user computing devices becomes a more challenging, even untenable, problem by the day. The reality is that we continue to build on top of inherently and fundamentally weak computing foundations. We need an alternative to the current computing paradigm, and we need it to support the growing demands of personal computing power and mobile computing.
The real issue is not just the computing paradigm but the reliance on the OS itself, which is the root of all Internet evil.
- The majority of IT’s deployed tools exist to manage and secure the operating system
- All these security and systems management tools rely on the integrity of the operating system
- The majority of commercial operating systems are inherently insecure and carry a lot of legacy baggage
- Operational failures and compromise render traditional management tools useless
Computing has evolved from a centralized, tethered model, highly reliant on perimeter security and data center management, to highly distributed, complex, globally interconnected networks supporting remote, intermittently connected mobile devices. That evolution will make VDI models extremely unpalatable for most organizations.
Although real-world desktop virtualization deployments have not lived up to the market hyperbole, the development of PC hypervisors offers a radical change to desktop management. They provide desktop standardization, support for distributed computing, and the ability to abstract management outside of the OS itself, delivering all the purported benefits of desktop virtualization without the infrastructure costs and management headaches.