Automatic Patch-Based Exploit Generation

Update 4/25/2008: Added a link to an updated article based on a review of the APEG paper (here)

Security Focus has an article (here) describing a system for reverse engineering Microsoft patches to determine the deltas between binaries and automagically developing exploit code within seconds.

The technique, which the researchers refer to as automatic patch-based exploit generation (APEG), can create attack code for most major types of vulnerabilities in minutes by automating the analysis of a patch designed to fix the flaws, the researchers stated in a paper released last week. If Microsoft does not change the way its patches are distributed to customers, attackers could create a system to attack the flaws in unpatched systems minutes after an update is released by the software giant, said David Brumley, a PhD candidate in computer science at Carnegie Mellon University.

Honestly, I am surprised someone hadn’t already developed such a system; you would think the folks at Bluelane would have one running in their lab. Anyway, there is little doubt that the time available to protect against dynamic threats is shrinking, and minutes matter. The reality, however, is that most organizations can barely patch within 3-6 weeks using their crappy version of SMS/SCCM, so really, what’s the difference between seconds, minutes, hours, days, or weeks? And how should an organization deal with what we already knew were dramatically shorter times to protect?

Well, the first thing to note is that the old scan and patch model is broken (here). That’s not to say that patch management isn’t important – it is critical – but the immediate response to exploit code in the wild may not always be to distribute a patch; it may be to shield against the threat by mitigating the vulnerable condition. Essentially the response should be shield first, then remove the root cause, which in most cases means shield the environment and then patch, upgrade, or remove the vulnerability or exposure.

Scan and patch = ineffective

Define policy, audit against policy, enforce policy + shield against emerging threats, then eliminate root cause = effective

So how does an organization shield against attack? By incorporating and coordinating all network- and host-based technologies as part of its vulnerability and threat management program. Of course, this level of organizational command and control requires technologies, like BigFix (here), and processes that support rapid modification of environmental variables. How is that different from delivering a patch quickly, you ask? Modifying a firewall, host- or network-based, to block ingress or egress traffic on a particular port is far easier and more timely than trying to deploy a patch, and rolling back the change requires far less effort and environmental disruption than other mitigating measures.
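As a purely illustrative sketch of the shield-then-rollback idea, the snippet below generates a temporary firewall rule for a vulnerable port and the matching rollback command. The port (135, the Windows RPC endpoint mapper), the iptables rule format, and the function names are all assumptions for the example; a real deployment would drive the firewall’s own management interface rather than emitting command strings.

```python
# Illustrative sketch only: build a temporary "shield" rule that drops
# ingress traffic to a vulnerable port, plus the matching rollback rule
# to apply once the patch is deployed and verified. Port 135 and the
# iptables syntax are assumptions for the example, not a prescription.

def shield_rule(port: int, proto: str = "tcp") -> str:
    """iptables command that drops ingress traffic to the vulnerable port."""
    return f"iptables -I INPUT -p {proto} --dport {port} -j DROP"

def rollback_rule(port: int, proto: str = "tcp") -> str:
    """Matching command that removes the shield after patching."""
    return f"iptables -D INPUT -p {proto} --dport {port} -j DROP"

if __name__ == "__main__":
    # Shield the RPC endpoint mapper while machines are patched,
    # then roll the change back with minimal disruption.
    print(shield_rule(135))
    print(rollback_rule(135))
```

The point of the sketch is the symmetry: because the rollback is a one-line inverse of the shield, undoing the mitigation is trivial compared with rolling back a patch.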

4 thoughts on “Automatic Patch-Based Exploit Generation”

  1. Pingback: Automatic Patch-Based Exploit Generation | Patch Management

  2. Could you elaborate more on how you can shield against a threat? I guess you still need to scan before you know that you are vulnerable to the exploit. I also assume that to be able to shield against the threat you have to have some information on what an exploit looks like.

  3. Patching is a difficult thing to do right, especially with regard to getting it done in a timely fashion (as your post reflects).

    You mention the idea of shielding against attacks, which is a novel idea and a good observation; the problem, though, is that such attacks shouldn’t be able to be mitigated by blocking a port, service, etc.
    The reason is that any company considering a proactive initiative such as this should already have any vulnerable, non-required points disabled or blocked, and the only ones left are those that are required and therefore can’t be blocked (the obvious exception being where there is a perceived imminent or likely threat and systems may be locked down in preparation).

    What I’m saying is that being proactive about locking down systems is better than fighting these fires, and the time that would otherwise be put into blocking could be put into patching.

  4. @Daniel G

    You are absolutely right, and I completely agree that locking down systems is more effective than fighting fires; it is actually the main part of define policy -> audit against policy -> enforce policy, which I wrote about here. Most folks are still very reactive in how they approach security, though.

    There are times, however, when what seemed acceptable yesterday becomes vulnerable today, or some service/app is part of doing business but you opt to lock it down until you can patch.

    Couple of examples:

    1. A company allows inbound RPC since some internal applications use it -> an RPC exploit is found in the wild -> the company blocks ingress RPC traffic until machines can be patched. Although some folks are unable to access the internal application, the business impact may be minimal, especially if the application can also be served through a web front end (think Outlook vs. Outlook Web Access)

    2. ActiveX or Flash is enabled for browsing -> a new exploit appears targeting one of them -> the company decides to disable/unregister the affected components until it can patch

    In both examples the company may lock down systems but still allow these services/components, and in both cases the immediate response may be to temporarily disable the service – shield the device – until the root cause is removed, which may include patching, upgrading, reconfiguring, or some combination.
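To make the second example concrete: on Windows, an ActiveX control can be shielded without uninstalling it by setting its “kill bit,” a registry value (Compatibility Flags = 0x400) that tells Internet Explorer never to instantiate that CLSID. The sketch below just generates the .reg file text for such a change; the CLSID shown is a placeholder, not a real control.

```python
# Illustrative sketch: generate a .reg file that sets the ActiveX "kill
# bit" for a vulnerable control, shielding browsers until a patch ships.
# The CLSID passed in below is a placeholder, not a real control's ID.

KILLBIT_KEY = (r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer"
               r"\ActiveX Compatibility")

def killbit_reg(clsid: str) -> str:
    """Return .reg file contents setting Compatibility Flags = 0x400,
    which prevents Internet Explorer from loading this CLSID."""
    return "\n".join([
        "Windows Registry Editor Version 5.00",
        "",
        f"[{KILLBIT_KEY}\\{clsid}]",
        '"Compatibility Flags"=dword:00000400',
        "",
    ])

if __name__ == "__main__":
    print(killbit_reg("{00000000-0000-0000-0000-000000000000}"))
```

As with the firewall shield, the rollback is cheap: deleting the registry value re-enables the control once a patched version is deployed.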
