50th “Beyond the Perimeter” Podcast Highlights


Not too long ago I embarked on creating a podcast series that would provide more regularity than the blog. Beyond the Perimeter has been a tremendous amount of fun, and as we just posted our 50th episode I wanted to reflect on some of the highlights and the wonderful guests we have been honored to have join us.

Beyond the Perimeter iTunes subscription

Beyond the Perimeter Direct XML Feed


Moving Security from Visibility to Operational Controls


Quick thought for the day. Most technologies in the security world move through a predictable cycle of adoption. First, an organization implements a solution to gain visibility into the scope of the problem (VA, IDS, DLP/CMF, SIEM). Then, once it becomes apparent that the problem is vast and overwhelming, it moves to operationally implement technical controls to protect the environment and to enforce organizational policies. When this switch occurs, adoption of the pure visibility tools becomes eclipsed by the control tools. This doesn’t mean the visibility tools are ineffective; it generally means the scope of the problem is understood well enough that the organization can effectively implement controls, and that the problem has successfully moved from the security team to the operations team. You can apply this same logic to any segment of security and to any new technology, including cloud computing, virtualization, and all the little shiny objects in between.

Examples of this movement from visibility to control include intrusion detection, vulnerability assessment, and content monitoring and filtering. Let’s look at VA. Its initial use was to determine the scope of the ‘exposure’ problem, that is, to scan the environment against a database of known vulnerabilities to determine the extent of exposure. Unfortunately, the volume of output was very high and was presented in a format that was not easily consumable or actionable by the IT operations team. What exactly does one expect the server admin to do with 300 pages of vulnerability data? There were also inherent issues of fidelity. The use of VA tools shifted to targeted scans that determined what needed to be patched, which drove the operational implementation of patch management technologies, and these soon overtook vulnerability assessment tools in market adoption. There was also pressure from auditors looking for technical controls, and although vulnerability assessments were viewed as an important first step, without the workflow and controls to address the volume of vulnerability data they proved less effective at improving operational security than originally thought.
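
To make “actionable” concrete, here is a minimal sketch of the kind of translation that was missing: collapsing a raw findings dump into a short, deduplicated patch worklist per host. This is purely illustrative; the field names, severity threshold, and patch identifiers are all invented rather than taken from any particular scanner.

```python
from collections import defaultdict

# Hypothetical scan output; the fields and values are invented stand-ins,
# not any particular scanner's schema.
findings = [
    {"host": "web01", "patch": "patch-017", "severity": 10.0},
    {"host": "web01", "patch": "patch-017", "severity": 10.0},  # duplicate finding
    {"host": "web01", "patch": "patch-042", "severity": 3.1},
    {"host": "db01",  "patch": "patch-099", "severity": 9.4},
]

# Collapse the 300-page report into one deduplicated worklist per host,
# filtered to the findings policy says must be fixed.
worklist = defaultdict(set)
for f in findings:
    if f["severity"] >= 7.0:
        worklist[f["host"]].add(f["patch"])

for host, patches in sorted(worklist.items()):
    print(f"{host}: apply {', '.join(sorted(patches))}")
```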

It became clear that vulnerability management needed to cross the chasm to become an operationally actionable tool; without remediation capabilities the organization would always be under a mountain of vulnerabilities, and use of the technology would linger in the trough of disillusionment. Security configuration management met that need. It allowed an organization to define the desired configuration state of an environment against industry best practices (NIST, DISA, CIS, etc.) and then to operationally implement technical controls to identify non-compliant devices and enforce policy. Security configuration management also had the benefit of providing a common language between the security, audit, and operations teams. I wrote about this in a series of posts (here), (here), and (here).
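
As a rough illustration of the idea, the sketch below checks each device’s reported settings against a desired baseline state, the sort of thing you might derive from a CIS or DISA benchmark. The baseline keys and device records are hypothetical; a real product would pull this data from the devices themselves and feed non-compliant ones into a remediation workflow.

```python
# Desired configuration state; keys and values are illustrative stand-ins
# for settings a CIS/DISA-style benchmark would define.
baseline = {
    "password_min_length": 12,
    "firewall_enabled": True,
    "telnet_enabled": False,
}

# Hypothetical reported configurations, one record per device.
devices = {
    "hr-laptop-07": {"password_min_length": 8,  "firewall_enabled": True, "telnet_enabled": False},
    "web01":        {"password_min_length": 12, "firewall_enabled": True, "telnet_enabled": True},
}

for name, config in devices.items():
    # Map each drifted setting to its (actual, expected) pair.
    drift = {k: (config.get(k), v) for k, v in baseline.items() if config.get(k) != v}
    if drift:
        print(f"{name} NON-COMPLIANT: {drift}")  # hand off to remediation
    else:
        print(f"{name} compliant")
```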

5 Security Metrics That Matter

Security metrics, which I have posted on in the past (here) and (here), are almost as elusive as security ROI. But unlike the mystical pink unicorn that is security ROI, security metrics are real, tangible, and meaningful. Why is it, then, that we have so much difficulty defining metrics that are both simple in their implementation and significant in their impact on the organization? I believe much of this stems from two flaws in how most organizations approach information security.

The first problem is that, for the most part, security is a reactive, ad hoc discipline, primarily focused on responding to incidents. This drives post-incident metrics such as how many virus outbreaks we experienced, how many attacks our IDS detected, or how much spam our anti-spam thingie blocked. These might be useful in determining, well, those things, but they are hardly telling of the effectiveness or efficiency of one’s IT security program.

The second problem is how an organization communicates between groups. Operations, audit & compliance, and security are examples of domains within an organization that use very different languages to describe problems and their resolutions.

Vulnerability assessment is a great example of this cross-organizational communication problem. Security will look at vulnerability assessment data from the perspective of unique, distinct conditions; operations will look at the data with an eye toward what remediation must be done; and audit & compliance might be concerned with how the data relates to regulatory initiatives. Operationally, these are all very different ways of describing environmental variables, and it is very difficult to satisfy each of these groups with a simple metric like “How vulnerable are we?” (to what?) or “How many vulnerabilities exist in our environment?” (why does it matter?). Operations doesn’t care how many unique, distinct vulnerabilities some VA scanner found; their charter is availability.
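
To illustrate, here is a small sketch of one set of findings projected into the three views described above. Every field name and value is hypothetical; the point is only that the same records roll up very differently for each audience.

```python
# One set of scan findings, three "languages". All data here is made up.
findings = [
    {"cve": "CVE-0000-0001", "host": "web01", "fix": "patch-017", "regs": ["PCI 6.1"]},
    {"cve": "CVE-0000-0001", "host": "web02", "fix": "patch-017", "regs": ["PCI 6.1"]},
    {"cve": "CVE-0000-0002", "host": "web01", "fix": "patch-042", "regs": ["SOX 404"]},
]

security_view = {f["cve"] for f in findings}                # unique, distinct conditions
ops_view = sorted((f["host"], f["fix"]) for f in findings)  # what to fix, and where
audit_view = {r for f in findings for r in f["regs"]}       # which mandates are implicated

print("security:  ", security_view)  # 2 distinct vulnerabilities
print("operations:", ops_view)       # 3 host/patch work items
print("audit:     ", audit_view)     # 2 regulatory touchpoints
```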

A common language that is driven by policy and expressed in terms of the business is critical to ensuring cross-organizational communication. Ideally we would be able to draft metrics that address effectiveness and efficiency: how effective is our IT security and operations program, and how efficient are we in detecting and remediating change? Most of this would require a move toward a policy-driven approach and SLAs to monitor adherence to plan, which we will look at in a future post. I did want to take a minute and list some metrics that every organization must be able to address today, because if you cannot answer these basic questions about your environment with any degree of accuracy, then all the metrics we come up with will fall short.

1. How many computing devices are actively connected to my network right now, and how many of these do we actually own?

2. Of these, how many do we actively manage (that is, have full visibility into and command and control of)?

3. What percentage of these are compliant with basic security policies, including the following? (A sketch of computing these percentages appears after the list.)
a. Endpoint security is up to date and configured in compliance with corporate policy (anti-virus, anti-spyware, personal firewall, HIPS, encryption, et al.)
b. Systems are configured against a security baseline as defined by NIST, NSA, DISA, CIS, etc.
c. Systems are patched to corporate standards

4. How effective is our change management process, and how quickly can we effect change in the environment? For example, once a decision has been made to change some environmental variable (modify PFW settings, change the configuration of the device itself, update DAT files, reconfigure HIPS/PFW settings, etc.), what percentage of the environment can we verify conforms to these changes within a 24-hour period?

5. What audit mechanisms are in place to detect changes to a corporate COE (common operating environment), how often do we monitor for non-compliance, what is the process for remediating non-compliant devices, and how long does it take from detection to remediation?
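
To make these questions concrete, here is a minimal sketch of the rollups behind metrics 1 through 4, assuming a hypothetical inventory feed with invented field names; in practice the numbers would come from your asset-management, endpoint, and change-management tooling.

```python
# Hypothetical inventory snapshot; every record and field is invented.
inventory = [
    {"host": "web01",        "owned": True,  "managed": True,  "compliant": True},
    {"host": "hr-laptop-07", "owned": True,  "managed": True,  "compliant": False},
    {"host": "unknown-dhcp", "owned": False, "managed": False, "compliant": False},
]

connected = len(inventory)
owned = sum(d["owned"] for d in inventory)
managed = sum(d["managed"] for d in inventory)
compliant = sum(d["compliant"] for d in inventory if d["managed"])

print(f"1. Connected now: {connected}; owned: {owned}")
print(f"2. Actively managed: {managed / connected:.0%} of connected devices")
print(f"3. Policy-compliant: {compliant / managed:.0%} of managed devices")

# Metric 4 is the same kind of rollup over change-verification records:
# what share of devices confirmed a pushed change within the SLA window?
changes = [{"host": "web01", "confirmed_hours": 6},
           {"host": "db01",  "confirmed_hours": 30}]
within_sla = sum(c["confirmed_hours"] <= 24 for c in changes)
print(f"4. Change verified within 24h: {within_sla / len(changes):.0%}")
```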

If your organization can repeatably and verifiably answer these 5 questions, you are well on your way to metrics nirvana.