Client-Side Virtualization Episode II: Standardization, Attack of the Clones and Desktops Reloaded

The matrix

Consolidation is the major benefit or “killer app” for server/data center virtualization. Standardization is the major benefit or “killer app” for client-side virtualization.

As I was pondering the challenges of current systems management processes, researching the latest and greatest from the client-side virtualization vendors, and talking to a lot of large organizations, I was trying to find the one thing that explained the operational benefits of client-side virtualization. There is more than one, but it really does come down to standardization; allow me to explain…

Computing environments have changed radically over the years, from static, tethered devices that required only minimal protection and maintenance to highly distributed computing environments with a large population of remote, intermittently connected devices accessing not only centrally managed resources but also 3rd-party applications and resources, all under constant threat from increasingly hostile actors. I discuss this evolution in IT environments in some detail in an earlier blog post, “How Cloud, Virtualization, and Mobile Computing Impact Endpoint Management in the Enterprise.”

IT Enterprise Architecture Circa 2012 – Organizations must manage and secure a large, complex, globally distributed, remote, and mobile computing environment, all accessing corporate assets housed within the corporate network as well as corporate assets/resources housed and maintained in a 3rd-party service provider’s infrastructure.


Gartner publishes TCO research for PCs, in which they find that locked-down and well-managed PCs can cost up to 42% less than unmanaged systems. They also found that “somewhat managed” and mismanaged PCs were only slightly less costly than unmanaged PCs due to the cost incurred from the management systems themselves.

Why is systems and security management of client computing environments so difficult and costly?

The answer is simple: there is no real standardization. The problem is that IT has not been able to maintain a common operating environment (COE) that enables them to effectively manage and secure their user population. For every application deployment or upgrade, for every patch release, DAT file update, or system modification, for every downloaded widget and system reboot, there is some segment of the population that experiences downtime, conflicts, or other technical issues resulting from variability within their computing environment. The volume of tech support calls that result from deviations from the COE (or COEs) is not only significant; those calls take a disproportionate amount of time to troubleshoot and resolve.

Standardization has major implications for security management as well. In the latest Verizon Data Breach Investigations Report (here), the forensic analysis found that almost 80% of all data breaches were from an external source, and in 98% of those breaches the attacker exploited some mistake by the victim. When I was with Gartner we had similar statistics showing that the majority (>99%) of external attacks took advantage of poorly administered, misconfigured, and mismanaged systems.*

There is no question that poorly managed systems are less secure, cost more, and lead to reactionary spending, with different groups implementing disparate management and security tools, none of which are aligned with adjacent IT groups within the organization.

These challenges, coupled with the success of server/data center virtualization, have driven a lot of attention towards client-side or desktop virtualization technologies. Of course they are not a silver bullet. Although they hold promise for certain environments, the majority of companies, dealing with the majority of the professional user population, in the majority of use cases, will find desktop virtualization an exceedingly difficult and costly proposition.

So before you leap head first into client virtualization you will need to understand the impact:

  • Do you need to support remote, intermittently connected mobile computing devices?
  • Have you considered the cost of the back-end VDI/HVD infrastructure (network, storage, hardware, etc.)?
  • Will it require specialized FTEs in addition to current staff?
  • How will you manage and secure the virtual infrastructure?
  • How will you maintain the health and security of virtual desktops “in use”?
  • Have you looked at the licensing costs?
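To make the back-end cost question concrete, here is a minimal back-of-envelope sketch. Every dollar figure below is a hypothetical placeholder, not vendor pricing; substitute your own quotes for hardware, infrastructure, staffing, and licensing before drawing any conclusion.

```python
# Back-of-envelope annual per-user cost comparison: traditional PCs vs. VDI.
# All numbers are hypothetical placeholders -- replace with your own quotes.

def per_user_annual_cost(capex, amortization_years, opex_per_year, users):
    """Amortized capital cost plus yearly operating cost, per user."""
    return (capex / amortization_years + opex_per_year) / users

# Traditional fleet: endpoint hardware plus desk-side support labor.
pc_cost = per_user_annual_cost(
    capex=1_000 * 500,         # 500 PCs at a hypothetical $1,000 each
    amortization_years=4,
    opex_per_year=150_000,     # support, imaging, patching labor
    users=500,
)

# VDI: thin clients at the desk plus servers, storage, and network in the
# data center, plus specialized virtualization FTEs and licensing on top.
vdi_cost = per_user_annual_cost(
    capex=(300 * 500) + 400_000,  # 500 thin clients + back-end infrastructure
    amortization_years=4,
    opex_per_year=250_000,        # virtualization admins, licensing, power/cooling
    users=500,
)

print(f"Per-user/year -- PC fleet: ${pc_cost:,.0f}, VDI: ${vdi_cost:,.0f}")
# → Per-user/year -- PC fleet: $550, VDI: $775
```

In this made-up scenario VDI comes out more expensive per user; with different assumptions the comparison can easily flip, which is exactly why the questions above need real answers before the leap.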

In the next post we will focus on the Hosted Virtual Desktop and VDI models currently offered, which use cases and situations they support, and which environments should look to alternative solutions.

* Note: There have been some arguments made that diversity in computing environments provides natural defenses against widespread ownage. This would be true if an organization could maintain the health and security of a diverse computing population, but in reality this is logistically infeasible for most organizations, which struggle simply to keep their AV DAT files or patches up to date. All that computing diversity would provide most IT organizations is a richer population of prey for the predators to attack. Put another way: diversity in computing environments leads to diversity in attacks.
