It has been a while since we had a good old-fashioned, highly publicized, hysteria-inducing, globally distributed, mass-infecting worm. The AV vendors (here) and (here) must be ecstatic that 2009 is really turning out to be the year of the largest security incidents since the beginning of forever, as I predicted it would be back in January (here). Of course, you could make that prediction every year for the next 20-30 years and pretty much experience an 80%+ success rate. It’s like predicting that as social media becomes ubiquitous we will experience more social-media-related security threats, or that as economic conditions worsen they will drive even more financially motivated cybercrime, buoying an already burgeoning digital black market, or that there will be more high-profile data breaches – all no-brainers.
Verizon Business Services posted a well-thought-out response to the hype around Conficker, “Risk, Group Think, and the Conficker Worm” (here), in which they stated…
A very large proportion of systems we have studied, which were infected with Conficker in enterprises, were “unknown or unmanaged” devices. Infected systems were not part of those enterprise’s configuration, maintenance, or patch processes. In one study a large proportion of infected machines were simply discarded because a current user of the machines did not exist. This corroborates data from our DBIR which showed that a significant majority of large impact data breaches also involved “unknown, unknown” network, systems, or data.
Richard Bejtlich used the Verizon posting to drive greater awareness of network security monitoring technologies on his TaoSecurity blog (here)…
This my friends is the reality for anyone who defends a live network, rather than those who break them, dream up new applications for them, or simply talks about them. If a “very large proportion of systems” that are compromised are beyond the reach of the IT team to even know about them, what can be done? The answer is fairly straightforward: watch the network for them. How can you do that? Use NSM.
I don’t disagree that network security monitoring is an important tool for IT organizations to gain visibility into events that occur in their environments. My issue with NSM as a response to Conficker infections or “unknown / unmanaged systems” is that it can really only be used to monitor activity within an organization’s own environment – and the problem is that your network now includes Starbucks, Marriott Hotels, and Virgin America. The majority of the “unknown” or “unmanaged” systems that IT is unable to consistently include in its standard configuration, maintenance, or patch processes are the large number of remote, intermittently connected, mobile computing assets that can make up 40% or more of many modern organizations’ computing base.
- Do we simply surrender management of the “very large proportion of systems” that are beyond the reach of the IT team to even know about to fate and luck?
- Do we only try to “manage” them once they attempt to return to the computing environment – à la NAC or NSM?
- How come no one is asking the really tough question: how does an IT organization manage the seemingly unmanageable – its remote, intermittently connected, mobile computing base?
This problem is only exacerbated as we move to adopt cloud-computing services, since internal and external systems may be accessing not only corporate-owned resources but also resources stored, managed, and maintained by an external 3rd party. NSM cannot provide visibility into a remote, non-corporately routed device accessing an application or service provided by a 3rd party.
There are reportedly over 10 million corporate computers infected by Conficker worldwide, and although the payload has been less than the digital apocalypse some are now predicting will occur on the 1st, the fact remains that most (almost all) organizations are still unable to implement even a base level of security controls. That is, they lack the ability to answer even the most basic security and IT questions about their entire computing environment – fixed or mobile, physical or virtual, located anywhere in the world – such as…
- Asset Discovery/Inventory: How many assets do we own and how many of those are actively connected to the network or corporate resources right now?
- Security Configuration Management/Patch Management/Endpoint Protection: Of those, how many conform to corporate IT and security policies? That is, do they meet the organization’s COE (common operating environment), stay up to date with patches and security configuration baselines (such as DISA STIGs, FDCC, CIS, NIST, etc.), and have the latest endpoint security software installed, configured, and running with all appropriate updates (DATs, signatures, etc.) applied, regardless of how they connect to the internet or where they are located?
- Network Monitoring/Application Access Auditing: How many non-corporate assets are actively connecting, or attempting to connect, to the corporate network or corporate assets?
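The asset discovery/inventory question above boils down to simple set reconciliation: compare the assets IT *thinks* it owns against the hosts actually seen on the wire. A minimal sketch in Python, using hypothetical sample data (the MAC addresses and asset records are illustrative, not from any specific product or feed):

```python
# Sketch: reconcile the known-asset inventory against hosts observed on the
# network (e.g., from parsed DHCP leases or ARP tables) to surface the
# "unknown, unknown" devices the Verizon post describes. All data below is
# hypothetical, for illustration only.

def reconcile_assets(inventory, observed):
    """Split hosts into managed, unknown, and missing sets.

    inventory -- dict mapping MAC address -> asset record (the CMDB view)
    observed  -- set of MAC addresses actually seen on the network
    """
    known = set(inventory)
    managed = observed & known   # seen on the wire and in the inventory
    unknown = observed - known   # on the network, but IT has no record of it
    missing = known - observed   # inventoried, but not currently connected
    return managed, unknown, missing

# Hypothetical sample data.
inventory = {
    "00:1a:2b:3c:4d:5e": {"owner": "alice", "patched": True},
    "00:1a:2b:3c:4d:5f": {"owner": "bob", "patched": False},
}
observed = {"00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"}

managed, unknown, missing = reconcile_assets(inventory, observed)
print(sorted(unknown))  # the devices no one can patch, configure, or discard
```

The catch, of course, is the same one raised above: this only works for devices that show up on a network you can observe, which the remote, intermittently connected, mobile computing base largely does not.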
There are more basic questions that need to be answered, and many tools that IT departments can deploy to maintain the health of their environments and improve their security posture, but most are less than effective if an organization cannot implement even a base level of IT hygiene, especially as we move to more agile and dynamic computing infrastructures such as cloud computing environments.
BTW – The most involved and publicly available analysis of the worm, “An Analysis of Conficker’s Logic and Rendezvous Points,” has been posted by SRI (here) – a great read and very well done. I definitely recommend you read it for more information on the ins and outs of Conficker – fascinating!