Vulnerability assessment scanning has been the primary means by which most organizations attempt to determine their security posture against an external threat environment. Essentially, the security group scans the environment against a database of known vulnerabilities and then requests that the operations team resolve the vulnerable conditions.
Many companies I talk to are still stuck with the never-ending, non-actionable, false-positive-laden, non-environmentally-aware, slow, cumbersome, disruptive, snapshot-in-time approach to improving their security: attempting to understand what their security posture looks like against an ever-changing threat environment. The problem is that information security must evolve beyond simply having a catalog of the tens of thousands of unique vulnerable conditions that the tens of thousands of organizational assets possess. Vulnerability assessment scanning has many limitations and certainly needs to evolve, as I discussed in an earlier post (here). Honestly, what does a large organization do with 600 pages of unique, distinct vulnerabilities?
Generally they do one or more of the following:
– Nothing; they simply scan periodically, note the results, and move on.
– Focus their efforts only on critical vulnerabilities. There are, of course, problems with this, most notably that the list refreshes on a fairly regular basis, and truly knowing what is critical in a large, complex, globally distributed environment facing a dynamic and increasingly hostile threat environment requires a tremendous amount of foresight.
– Struggle through the list, with the security team coercing operations to fix this, patch that, disable this, and uninstall that. Of course the list changes, the network changes, and the threats change; most organizations are far too dynamic for this to be even remotely effective.
– Scan only for a small set of new critical vulnerabilities, say on a given Tuesday or when exploit code appears in the wild, and then attempt to rapidly patch systems. But in this case what exactly is the role of vulnerability assessment scanning? A patch validation tool? That seems like an expensive and inefficient way to give the security team a warm fuzzy.
Effective vulnerability management requires organizations to move beyond the endless cycle of vulnerability assessment scanning and patching and gain control of their environment by defining the desired configuration state of environmental assets against a security configuration standard, auditing the environment to identify non-compliant elements, and enforcing compliance by remediating non-compliant systems (here).
Define policy -> audit against policy -> enforce policy = elimination of a significant percentage of vulnerabilities and exposures
Any system that is deployed, or will be deployed in the future, should adhere to a common security configuration baseline, for which organizations like NIST, the NSA, and CIS, as well as vendors such as Microsoft and Cisco, have already defined templates with settings for common operating environments and network elements. With the introduction and adoption of XCCDF, an XML specification for instantiating security configuration baselines and checks (here), adopting a security configuration management approach is becoming increasingly easy.
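To make the idea concrete, an XCCDF-style rule might look roughly like the fragment below. This is an illustrative sketch only: the benchmark and rule identifiers, the description text, and the OVAL check reference are all invented for the example, and the fragment is not a validated benchmark document.

```xml
<!-- Hypothetical XCCDF-style fragment; identifiers and check reference are invented -->
<Benchmark id="example-workstation-baseline"
           xmlns="http://checklists.nist.gov/xccdf/1.1">
  <title>Example Workstation Security Baseline</title>
  <Rule id="rule-ie-minimum-version" severity="high">
    <title>Internet Explorer must be version 7 or later</title>
    <description>Older IE releases carry hundreds of known vulnerabilities;
      enforcing the version requirement resolves them all at once.</description>
    <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
      <check-content-ref href="example-oval.xml" name="oval:example:def:1"/>
    </check>
  </Rule>
</Benchmark>
```

Because the baseline and its checks are machine-readable, the same content can be consumed by any compliant audit tool rather than being locked into one vendor's scanner.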
Security configuration management, unlike vulnerability assessment scanning, provides operationally useful and actionable output, since the orientation is toward maintaining system integrity by ensuring system compliance with a defined gold standard. This ability to describe deviations from policy in terms of remediation activities, as opposed to a big list of unique, distinct vulnerable conditions, provides a level of efficiency that cannot be obtained through vulnerability assessment scanning. For example, if you perform a vulnerability assessment against a system running an old version of IE, the result would be hundreds of vulnerabilities. Do these matter? Is it important to understand all of these conditions? What exactly would the operations team be expected to do in response to such a list? If the organization has a policy that states all systems running IE must be running version 7, then it is immediately clear what action the operations team should take, and coincidentally the hundreds of vulnerabilities are resolved in the process. Extrapolate this out to other system attributes, such as ports, protocols, services, and patches, as well as applications, and it becomes clear that tens of thousands of vulnerabilities can more easily be expressed as resolutions in the form of security baselines.
Organizations looking to achieve effective vulnerability management (and honestly, who isn’t?) should move away from the outdated scan-and-patch approach and implement security configuration management. Then, if required or desired, vulnerability assessment can focus on those conditions that are outside the SCM scope. This greatly reduces the excessive noise VA creates and supports an organization’s move toward a higher level of operational and security maturity.
You would still need to run VA scans for audit purposes (perhaps less frequently) to make sure your SCM actually works. If SCM were implemented and worked perfectly, there would be no point in scanning for IE 6 vulnerabilities; in practice it works less than perfectly and misses PCs that never had IE 7 installed. Besides, if VA finds vulnerabilities in products that we shouldn’t have on our network, you can probably move them to a separate report and handle them as an SCM issue rather than a VA issue.
Neither VA nor SCM is perfect; they go hand in hand. You just have to find the right balance (like everything else in life).
Okay, I must be missing something here. This just sounds like a fancy version of scan and patch to me. I’m not complaining; lowering workload and having standard configurations all makes sense. I’m just not seeing the innovation beyond yet another XML standard to track and follow.
Excellent, excellent post!
You could have left it at:
Amrit, Amen to much of what you said. I have two questions around configuration management being the answer though. Not sure how to track back to your blog, so you can read my full questions here:
http://www.stillsecureafteralltheseyears.com/ashimmy/2007/04/questions_to_am.html
@Osama
You’re right, balance is key.
Unfortunately I realized after my post that I had not included the areas where VA was needed…
– Identify unmanaged assets or rogue devices
– As a requirement of PCI or some other mandated compliance initiative (FISMA/FIPS-199)
– By the security team as an additional control to validate that the operations team was in compliance with corporate standards.
Ideally an organization would have a mechanism for correlating information between vulnerable conditions and the configuration state of an asset; however, most do this manually, and it is extremely cumbersome and error-prone.
@Arthur
The main difference is that the report is oriented toward the operational team that must implement the resolution. Operational folks, whether they be in networking, desktop and server support, or the application space, do not look at the world as unique, distinct vulnerable conditions. In the case of a non-compliant IE version, they would be overwhelmed reviewing a large list of vulns to determine the resolution; the same goes for a Cisco router with web administration enabled, or a Solaris box running telnet. These are all areas where the remediation step is more important than the vulnerable conditions that are found. Routers should not have web administration enabled, telnet should be disabled on all servers, all desktops must be running the latest version of IE: this is far easier for an operations team to respond to.
So VA tools can be used in this way as well: instead of scanning an entire environment to enumerate tens of thousands of vulnerabilities, scan it against a set of gold-standard images as part of the ongoing audit that should be performed. That is a far more effective way for the security and operations teams to work, and the noise generated by VA would be greatly reduced.
Additionally, scan and patch is still a fairly reactive process, although far less so than sitting around staring at your IDS waiting for an attack. An organization that moves to implement defined security configuration policies is taking pre-incident measures that are far more proactive than VA.
Instead of just scanning for vulns and patching, do not even image a machine unless it is configured correctly.