Top 10 Most Overhyped Technology Terms

We have entered a new era of information technology, an era where the clouds are moist, the data is obese and incontinent, and the threats are advanced, persistent, and the biggest ever. Of course, with all the paradigm-shifting, next-generation, FUD-vs.-ROI marketing, it’s important to remember that we sometimes need to balance innovation against misunderstood expectations, vendor double-speak, and relentless enterprise sales guys.

Because, contrary to the barrage of marketing, these technologies won’t make you rich, teach you how to invest in real estate, help you lose weight, or grow you a full head of hair. They won’t make you attractive to the opposite sex, nor will they solve all your problems. In some cases they can improve the efficiency and effectiveness of your operating environment, but only with proper planning, expectation setting, and careful deployment…and on that note, I give you the top 10 most overhyped technology terms of the last decade.


Virtual Strategy Magazine: PC Hypervisors Virtually Change Everything

Recently I wrote a guest editorial for Virtual Strategy Magazine, although I have to admit I wasn’t made aware of my goofy picture – look away, I’m hideous – until the article was published. You can find the full contents at Virtual Strategy Magazine.


Client Hosted Virtual Desktops Part II; Back to Basics

Back to Basics: What is a Client Hosted Virtual Desktop (CHVD)?

Client hosted virtual desktops refer to the combination of a management system and a hypervisor on a client PC, utilizing the local resources to execute the operating system.


Figure 1. Different desktop virtualization models, segmented by central vs. distributed computing environment support and reliance on the operating system.
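To make the moving parts concrete, here is a minimal, purely illustrative sketch of the CHVD stack described above; the class and field names are my own invention, not any vendor’s API. The point is simply that the management system lives centrally while the hypervisor and guest OS execute on the client’s own hardware.

```python
from dataclasses import dataclass, field

@dataclass
class GuestOS:
    """The corporate desktop image executed locally on the client PC."""
    name: str
    version: str

@dataclass
class ClientHypervisor:
    """Hypervisor installed on the client PC; uses local CPU, RAM, and disk."""
    vendor: str
    guests: list[GuestOS] = field(default_factory=list)

    def run(self, guest: GuestOS) -> None:
        # Execution happens on local resources, not in the data center.
        self.guests.append(guest)
        print(f"Running {guest.name} {guest.version} locally via {self.vendor}")

@dataclass
class ManagementSystem:
    """Central system that distributes images and policy to client hypervisors."""
    url: str

    def provision(self, hypervisor: ClientHypervisor, image: GuestOS) -> None:
        # The central system only manages; the client does the computing.
        hypervisor.run(image)

# Usage: a CHVD is the combination of the two pieces.
mgmt = ManagementSystem(url="https://chvd-mgmt.example.internal")
laptop = ClientHypervisor(vendor="ExampleHypervisor")
mgmt.provision(laptop, GuestOS(name="CorporateDesktop", version="1.0"))
```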

50th “Beyond The Perimeter” Podcast Highlights


Not too long ago I embarked on creating a podcast series that would provide more regularity than the blog. Beyond the Perimeter has been a tremendous amount of fun, and as we just posted our 50th podcast I wanted to reflect on some of the highlights and the wonderful guests we have been honored to have join us.

Beyond the Perimeter iTunes subscription

Beyond the Perimeter Direct XML Feed


Client-Side Virtualization Part III: HAL 9000, Hosted Virtual Desktops, and the Death Star


Systems and security management is difficult, ineffective, costly and becoming ever more so in increasingly distributed, heterogeneous, complex, and mobile computing environments…

  • 98% of all external attacks take advantage of poorly administered, misconfigured, and unmanaged systems (Source: Verizon Data Breach Investigations Report 2009)
  • A locked down and well managed PC can cost 42% less than an unmanaged one (Source: Gartner – The Total Cost of Ownership: 2008 Update)
  • The direct costs incurred in a “somewhat managed” PC are only slightly lower than the direct costs of an unmanaged PC, because of expenses to maintain underutilized or dysfunctional management systems (Source: Gartner – The Total Cost of Ownership: 2008 Update)

The benefits provided by server virtualization are being realized as server consolidation has enabled cost reduction and efficiencies in data center/server management. This is, of course, leading many to ask: “why can we not virtualize our desktops as well?”

Client-Side Virtualization Episode II: Standardization, Attack of the Clones and Desktops Reloaded


Consolidation is the major benefit or “killer app” for server/data center virtualization. Standardization is the major benefit or “killer app” for client-side virtualization.

As I was pondering the challenges of current systems management processes, researching the latest and greatest from the client-side virtualization vendors, and talking to a lot of large organizations, I was trying to find the one thing that explained the operational benefits of client-side virtualization. There is more than one, but it really does come down to standardization; allow me to explain…

Desktop Virtualization Overview; The Good, The Bad, and The Reality – VDI is DOA!


To address the increasing cost and complexity of managing dynamic IT environments, organizations are trying to understand how to adopt virtualization technologies. The value proposition and “killer app” are quite clear in the data center; however, less attention has been given to the opportunities for endpoint virtualization. There are multiple methods to address client-side virtualization: hosted virtual desktops (HVD), bare-metal hypervisors, local and streaming virtual workspaces, and a range of options that layer on top of and between them all, such as application virtualization, portable personalities, and virtual composite desktops. Even so, there is still a tremendous amount of confusion, and even more misconceptions about the benefits of client-side virtualization than there were with server virtualization. The major architectural flaw in almost all of these solutions is that they remain very back-end and infrastructure heavy, which undercuts the promised benefits of lower cost and reduced complexity.

Unlike server virtualization, where adoption was driven from the bottom up (from the hypervisor and then up through the rest of the stack), adoption of endpoint virtualization technologies is moving top down, starting with single applications within an existing OS. Application virtualization adoption will accelerate over the next 12-18 months, with Gartner life cycle management analysts suggesting that it will be included in the majority of PC life cycle RFPs in 2010 and beyond. Workspace/desktop virtualization will follow over the next 24-36 months, as will the endpoint virtualization infrastructures. The adoption of both will align with organizations’ desktop refresh cycles. Considering that the average refresh cycle is 3-5 years, and that many organizations are looking at a desktop refresh to support Vista (which probably has only about 10% market adoption) and Windows 7, it is conceivable that we will begin seeing accelerated adoption of desktop and infrastructure virtualization over the next 24-36 months as organizations rethink their current systems management processes and technologies.

Let’s look at the 4 client/desktop virtualization models I believe will become the most prevalent over the next 3-5 years…

How Cloud, Virtualization, and Mobile Computing Impact Endpoint Management in the Enterprise

I had an interesting conversation with a peer recently that started with his statement that “innovation was all but dead in security”. The implication was that we had done all we could do and that there was very little more to be accomplished. Of course, I felt this was an overly simplistic and narrow view, not to mention that it completely ignores the rather dramatic impact that changes in computing infrastructures will have over the next 5-10 years and beyond.

How have enterprise architectures evolved over the past 10 years, and how will they continue to evolve? Simply put, we are pushing more of our computing assets, and the infrastructure that supports them, out into the Internet/cloud. It began with mobile computing devices, remote offices, and telecommuters, and is now moving into aspects of the traditional internal infrastructure, such as storage, application/service delivery, and data management. This has forced IT to, in some cases, radically redefine the technologies and processes it implements just to provide the basics of availability, maintenance, and security. How does an IT organization maintain the health and availability of the evolving enterprise while securing the environment? How does it ensure visibility into and control over an increasingly complex and opaque infrastructure?

Moving Security through Visibility to Implementing Operational Controls


Quick thought for the day. Most technologies in the security world move through a predictable cycle of adoption. First, an organization implements a solution to gain visibility into the scope of the problem (VA, IDS, DLP/CMF, SIEM). Then, once it becomes apparent that the problem is vast and overwhelming, it moves to operationally implement technical controls to protect the environment and to enforce organizational policies. When this switch-over occurs, adoption of the pure visibility tools becomes eclipsed by the control tools. This doesn’t mean the visibility tools are ineffective; it generally means that the scope of the problem is understood well enough that an organization can effectively implement controls, and it also means that the problem has successfully moved from the security team to the operations team. You can apply this same logic to any segment of security and to any new technology, including cloud computing, virtualization, and all the little shiny objects in between.

Examples of this movement from visibility to control include intrusion detection, vulnerability assessment, and content monitoring and filtering. Let’s look at VA. Its initial use was to determine the scope of the ‘exposure’ problem, that is, to scan the environment against a database of known vulnerabilities to determine the extent of exposure. Unfortunately, the volume of output was very high and was presented in a format that was not easily consumable or actionable by the IT operations team. What exactly does one expect the server admin to do with 300 pages of vulnerability data? There were also inherent issues of fidelity. The use of VA tools moved into targeted scans to determine what needed to be patched, which resulted in the operational implementation of patch management technologies, which soon overtook the market adoption of vulnerability assessment tools. There was also the pressure of auditors looking for the implementation of technical controls, and although vulnerability assessments were viewed as an important first step, without the work-flow and controls to address the volume of vulnerability data they proved less effective in improving operational security than originally thought.

It became clear that vulnerability management needed to cross the chasm to become an operationally actionable tool; without remediation capabilities the organization would always be under a mountain of vulnerabilities, and the use of the technology would linger in the trough of disillusionment. Security configuration management met that need. It allowed an organization to define the desired configuration state of an environment against industry best practices (NIST, DISA, CIS, etc.) and then to operationally implement technical controls to identify non-compliant devices and enforce policy. Security configuration management also had the benefit of providing a common language between the security, audit, and operations teams. I wrote about this in a series of posts (here), (here), and (here).
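For readers who haven’t seen this in practice, here is a minimal, hypothetical sketch of the idea: a desired-state baseline is defined up front (the setting names and values below are invented for illustration, not drawn from any specific NIST/DISA/CIS benchmark), each device’s actual settings are compared against it, and non-compliant devices are flagged for the operations team to remediate.

```python
# Hypothetical security configuration management check: compare each device's
# actual settings against a desired-state baseline and flag deviations.

# Desired configuration state (illustrative values only).
baseline = {
    "min_password_length": 12,
    "firewall_enabled": True,
    "auto_updates": True,
}

# Actual settings as reported by each device (normally gathered by an agent or scan).
devices = {
    "web-01": {"min_password_length": 12, "firewall_enabled": True, "auto_updates": True},
    "hr-laptop-07": {"min_password_length": 8, "firewall_enabled": False, "auto_updates": True},
}

def find_noncompliant(baseline: dict, devices: dict) -> dict:
    """Return, per device, the settings that deviate from the baseline."""
    report = {}
    for name, settings in devices.items():
        deviations = {
            key: (settings.get(key), expected)
            for key, expected in baseline.items()
            if settings.get(key) != expected
        }
        if deviations:
            report[name] = deviations
    return report

# The output feeds a remediation workflow rather than a 300-page report.
for device, deviations in find_noncompliant(baseline, devices).items():
    print(f"{device} is non-compliant: {deviations}")
```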

Cloud computing: Swarm Intelligence and Security in a Distributed World

Reading through my blog feeds I came across something Hoff wrote in response to Reuven Cohen’s “Elastic Vapor: Life In the Cloud” blog; in particular, I wanted to respond to the following comment (here):

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

I also wrote about this concept in a series of posts on swarm intelligence…

Evolving Information Security Part 1: The Herd Collective vs. Swarm Intelligence (here)

The only viable option for collective intelligence in the future is through the use of intelligent agents, which can perform some base level of analysis against internal and environmental variables and communicate that information to the collective without the need for centralized processing and distribution. Essentially the intelligent agents would support cognition, cooperation, and coordination among themselves built on a foundation of dynamic policy instantiation. Without the use of distributed computing, parallel processing and intelligent agents there is little hope for moving beyond the brittle and highly ineffective defenses currently deployed.
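As a purely illustrative sketch of what such an agent might share with the collective (the message fields and peer-gossip approach below are my own assumptions, not a defined protocol), the idea is that each node performs some base-level analysis locally and then broadcasts a compact summary of its state rather than shipping raw data to a central processor.

```python
import json
import time

# Hypothetical telemetry message an intelligent agent might broadcast to its peers.
def build_state_message(node_id: str, local_findings: list[str]) -> str:
    """Summarize locally analyzed state into a compact message for the collective."""
    message = {
        "node": node_id,
        "timestamp": time.time(),
        "health": "degraded" if local_findings else "nominal",
        "findings": local_findings,  # e.g. suspicious processes, policy violations
    }
    return json.dumps(message)

def gossip(message: str, peers: list[str]) -> None:
    """Share the message directly with peer agents; no central collector required."""
    for peer in peers:
        # In a real system this would be an authenticated network send.
        print(f"-> {peer}: {message}")

# Usage: a node that noticed something odd tells its neighbors directly.
msg = build_state_message("laptop-042", ["unexpected outbound connection to 203.0.113.9"])
gossip(msg, peers=["laptop-041", "laptop-043", "gateway-07"])
```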

Evolving Information Security Part 2: Developing Collective Intelligence (here)

Once the agent is fully aware of the state of the device it resides on, physical or virtual, it will need to expand its knowledge of the environment it resides in and its relative positioning to others. Knowledge of self, combined with knowledge of the environment, expands the context in which agents can effect change. In communication with other agents, the response to threats or other problems would be more efficiently identified, regardless of location.

As knowledge of self moves to communication with others there is the foundation for inter-device cooperation. Communication and cooperation between seemingly disparate devices, or device clusters, creates collective intelligence. This simple model creates an extremely powerful precedent for dealing with a wide range of information technology and security problems.

Driving the intelligent agents would be a lightweight and adaptable policy language that could be easily interpreted by each agent’s policy engine. New policies would be created and shared between the agents, and the system would move from simply responding to changes to beginning to adapt on its own. The collective and the infrastructure would learn. This would enable a base level of cognition in which seemingly benign events or state changes, coupled with similarly insignificant data, could be used to lessen the impact of disruptions or incidents, sometimes before they even occur.
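To make the policy idea more tangible, here is a minimal sketch of what a lightweight, shareable policy and an agent-side engine could look like; the rule format, metrics, and thresholds below are invented for illustration and are not tied to any existing policy language.

```python
import operator

# Hypothetical lightweight policy format: plain data that agents can create,
# interpret, and share with peers; no central decision point is required.
OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

policies = [
    {"name": "quarantine-on-beaconing", "metric": "outbound_denied",
     "op": ">", "value": 10, "action": "isolate_host"},
    {"name": "alert-on-config-drift", "metric": "baseline_compliant",
     "op": "==", "value": False, "action": "notify_peers"},
]

def evaluate(state: dict, policies: list[dict]) -> list[str]:
    """Run every shared policy against the agent's local state; return triggered actions."""
    triggered = []
    for p in policies:
        observed = state.get(p["metric"])
        if observed is not None and OPS[p["op"]](observed, p["value"]):
            triggered.append(p["action"])
    return triggered

# Usage: each agent evaluates locally and only shares results or new policies.
local_state = {"outbound_denied": 14, "baseline_compliant": True}
print(evaluate(local_state, policies))  # ['isolate_host']
```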

The concept of distributed intelligence and self-healing infrastructure will have a major impact on a highly mobile world of distributed computing devices; it will also form the foundation for how we deal with the loss of visibility and control over the “in the cloud” virtual storage and data centers that service them.

Myths, Misconceptions, Half-Truths and Lies about Virtualization

Thanks to VMware, you can barely turn around today without someone using the V-word, and with every aspect of the English language (and some of ancient Sumerian) now beginning with V, it will only get worse. There is no question that virtualization holds a lot of promise for the enterprise, from decreased cost to increased efficiency, but between the ideal and the reality is a chasm of broken promises, mismatched expectations, and shady vendors waiting to gobble up your dollars and leave a trail of misery and despair in their wake. To help you avoid the landmines, I give you the top myths, misconceptions, half-truths, and outright lies about virtualization.

Virtualization reduces complexity (I know what server I am. I’m the server, playing a server, disguised as another server)

It seems counter-intuitive that virtualization would introduce management complexity, but the reality is that all the security and systems management requirements facing enterprises today do not disappear simply because an OS is a guest within a virtual environment; in fact, they increase. Not only does one need to continue to maintain the integrity of the guest OS (configuration, patch, security, application, and user management and provisioning), one also needs to maintain the integrity of the virtual layer. The problem is that this is done through disparate tools managed by FTEs (full-time employees) with disparate skill sets. Organizations also move from a fairly static environment in the physical world, where it takes time to provision a system and deploy the OS and associated applications, to a very dynamic environment in the virtual world, where managing guest systems – VMsprawl – becomes an exercise in whack-a-mole. Below are some management capabilities that VMware shared/demoed at VMworld.

  • VDDK (Virtual Disk Development Kit) – allows one to apply updates by mounting an offline virtual machine as a file system and then performing file operations against the mounted file system. This ignores the fact that file operations are a poor replacement for systems management, such as applying patches. The method won’t work with Windows patch executables, nor will it work with RPM patches, which must execute to apply.
  • Offline VDI – a virtual machine can be checked out to a mobile computer in anticipation of a user going on the road and being disconnected from the data center. Unfortunately, the data transfers, including the diffs, are very large, and one needs to be aware of the impact on the network.
  • Guest API – allows one to inspect the properties of the host environment, but this is limited to the hardware assigned to the virtual machine.
  • vCenter – a management framework for viewing and managing a large set of virtual machines across a large set of hardware, and a separate management framework from what IT will use to manage physical environments.
  • Linked Clones – among other things, this allows multiple virtual machine images to serve as a source for a VM instance; however, without a link to the parent, clones won’t work.
  • Virtual Machine Proliferation – since it is so easy to make a snapshot of a machine and to provision a new machine simply by copying another and tweaking a few key parameters (like the computer name), there are tons of machines that get made. Keeping track of the resulting virtual machines – VMsprawl – is a huge problem, and disk utilization is often underestimated as the number of these machines and their snapshots grows very quickly (see the inventory sketch after this list).
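As a purely hypothetical illustration of why VMsprawl tracking matters (the VM records below are invented, and this is not VMware’s API), even a trivial inventory that groups machines by source template and sums snapshot storage makes the growth, and the stale clones, visible.

```python
from collections import defaultdict

# Hypothetical VM inventory records; in practice this data would come from the
# virtualization management layer rather than being hand-written like this.
vms = [
    {"name": "web-01",       "template": "rhel5-base", "snapshots_gb": 12, "last_powered_on_days": 2},
    {"name": "web-01-copy",  "template": "rhel5-base", "snapshots_gb": 30, "last_powered_on_days": 190},
    {"name": "test-sandbox", "template": "winxp-gold", "snapshots_gb": 55, "last_powered_on_days": 400},
]

# Group by source template to see how many clones each image has spawned.
by_template = defaultdict(list)
for vm in vms:
    by_template[vm["template"]].append(vm)

for template, machines in by_template.items():
    snapshot_total = sum(m["snapshots_gb"] for m in machines)
    stale = [m["name"] for m in machines if m["last_powered_on_days"] > 90]
    print(f"{template}: {len(machines)} VMs, {snapshot_total} GB of snapshots, stale: {stale}")
```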

Want to guess how many start-ups will be knocking on your door to solve one or more of the above management issues?

Virtualization increases security (I’m trying to put tiger balm on these hackers’ nuts)

Customers drawn to virtualization should be aware that it adds another layer that needs to be managed and secured. Data starts moving around in ways it never did before, as virtual machines are simply files that can be moved wherever. Static security measures like physical security and network firewalls don’t apply in the same way and need to be augmented with additional security measures, which will increase both cost and complexity. Network operations, security operations, and IT operations will inherit management of both the physical and the virtual systems, so their jobs get more complicated in some ways and simpler in others.

Again, it would seem counter-intuitive that virtualization doesn’t increase security, but the reality is that virtualization adds a level of complexity to organizational security, marked by new attack vectors in the virtual layer as well as the lack of security built into virtual environments, and made even more difficult by the expertise required to secure those environments, skills that are sadly lacking in the industry.

The Hoff has written extensively about virtualization security and securing virtual environments (here) – they are different, yet equally complex and hairy – and nowhere will you find a better overall resource to help untangle the Tet offensive of virtualization security or securing virtual environments than from the Hoff.

Virtualization will not require specialization (A nutless monkey could do your job)

What is really interesting about the current state of virtualization technology in the enterprise is the amount of specialization required to effectively manage and secure these environments. Not only will one need to understand, at least conceptually, the dynamics of systems and security management, but one will also need to understand the technical implementations of the various controls, the use and administration of the management tools, and, of course, to follow what is a very dynamic evolution of technology in a rapidly changing market.

Virtualization will save you money today (That’s how you can roll. No more frequent flyer bitch miles for my boy! Oh yeah! Playa….playa!)

Given the current economic climate, the CFO is looking for hard-dollar savings today. Virtualization has shown itself to provide more efficient use of resources and faster time to value than traditional environments; however, the reality is that reaching the promised land requires an initial investment in time, resources, and planning if one is to realize the benefits. Here are some areas where virtualization may provide cost savings, and some realities about each of them:

  • Infrastructure consolidation – Adding big iron and removing a bunch of smaller machines may look like an exercise in cost-cutting, but remember that you still have to buy the big iron, hire consultants to help with the implementation, acquire new licenses, and deploy stuff, and of course no one is going to give you money for the machines you no longer use (a rough back-of-the-envelope sketch follows this list).
  • FTE reduction – Consolidating infrastructure should allow one to realize a reduction in FTEs, right? The problem is that you now need FTEs with different skill sets, such as how to actually deploy, manage, and secure these virtual environments, which now require separate management infrastructures.
  • Decrease in licensing costs – Yes, well, no; it depends on whether you want to pirate software or not, which is actually easier in virtual environments. With virtual sprawl, software asset and license management just jumped the complexity shark.
  • Lower resource consumption – See the above references to complexity, security, and FTEs. One area where virtualization will have immediate impact, however, is power consumption and support of green IT initiatives, though being green can come at a cost.
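For the consolidation bullet above, here is a rough, entirely hypothetical back-of-the-envelope sketch; every figure is invented to show the shape of the calculation (up-front costs pushing payback out), not to reflect real pricing.

```python
# Hypothetical first-year consolidation math; all figures are invented examples.
servers_retired = 25
annual_cost_per_old_server = 3_500      # power, space, maintenance per legacy box

big_iron_purchase = 90_000              # new consolidated hardware
consulting_and_migration = 35_000       # implementation help
new_licenses_and_mgmt_tools = 25_000    # hypervisor plus management suite

annual_savings = servers_retired * annual_cost_per_old_server
upfront_investment = big_iron_purchase + consulting_and_migration + new_licenses_and_mgmt_tools

first_year_net = annual_savings - upfront_investment
payback_years = upfront_investment / annual_savings

print(f"Annual run-rate savings: ${annual_savings:,}")
print(f"Up-front investment:     ${upfront_investment:,}")
print(f"First-year net:          ${first_year_net:,}")   # negative: you pay first
print(f"Payback period:          {payback_years:.1f} years")
```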

Virtualization won’t make you rich, teach you how to invest in real estate, help you lose weight, or grow you a full head of hair. It won’t make you attractive to the opposite sex, nor will it solve all your problems. It can improve the efficiency of your operating environment, but it requires proper planning, expectation setting, and careful deployment. There will be an initial, in some cases substantial, investment of capital, time, and resources, as well as an ongoing effort to manage the environment with new tools and to train employees in new skills. Many will turn to consulting companies, systems integrators, and service providers that will help them implement solutions that generate a quick payback with virtually no risk and position your organization to take advantage of available and emerging real-time infrastructure enablers designed to closely align your business needs with IT resources.

As Les Grossman said in Tropic Thunder, “The universe….is talking to us right now. You just gotta listen.”