Responsible Vulnerability Disclosure

Yes I am beating this dead horse, why do you ask?

Dino at Matasano posted some great thoughts on vulnerability disclosure (here), the point of which calls into question the motivations of some researchers and argues for a mechanism that enforces accountability and transparency in the vuln process. He suggests that an independent or objective 3rd party (there goes that word again – I ain’t mad at cha’ Thomas 🙂 ) acting as a vuln broker could resolve the issue.

Anyway, I was reading through the comments, having posted a question about how such a 3rd party would work and what the motivations would be for the various constituencies to participate, and “ivan” replied that the situation will self-correct if a representative set of researchers and vendors, pressured by end-users, drive the needed transparency and accountability.

I don’t think a viable 3rd party vuln broker or market self-correction will happen anytime soon. The reality is that researchers will continue to antagonize large vendors. Vendors will impede the process and fail to provide a way to deal with researchers, furthering the antagonism, and end users will blindly go on with the business of IT without exerting the necessary market pressure.

One of the last research notes I wrote before leaving Gartner concerned responsible disclosure. Although I cannot post the full text, here are some excerpts from “Responsible Vulnerability Disclosure: Guidance for Researchers, Vendors, and End Users”, which echoes much of what ivan suggested the market should do. Bear in mind the audience includes non-technical, non-security folks, so refrain from some of the “Captain Obvious” comments.

Key Findings


  • Attackers looking to exploit vulnerabilities in IT will focus their efforts on areas where a critical vulnerability may exist, increasing the potential that the vulnerability is identified and an exploit made available. Suggesting that a vulnerability exists in a vendor’s product, without naming the product or providing specifics, can increase the risk of exploitation.
  • When a vulnerability is publicly announced, attackers and researchers generally find more vulnerabilities in the product(s) or similar vulnerabilities in other products soon after.
  • It is common practice for hackers to reverse-engineer patches to better understand how to exploit vulnerabilities, and this practice can be extended to security product signatures. Security vendors that provide “zero-day” protection for their clients, prior to release of a patch, can put the broader IT community at risk if they don’t take precautions.
  • IT vendors that do not provide adequate information on the nature of a vulnerability will impact the ability of their user base to make decisions on how to proceed with a work-around, patch or deployment of a new version. This delays patching and increases risk.
  • End-user IT organizations absorb most of the risk if disclosure is done irresponsibly, and they have limited influence over the process. Enterprises should not buy security products from companies that do not practice responsible disclosure. There are also situations that arise in which disclosure can and should impact how an organization reacts.

Guidance for Security Researchers
Provide all necessary information to the ISV and obtain a positive indication that the appropriate function within the vendor has received the information. An appropriate amount of time (typically at least 30 days) should be allowed for the ISV to respond before releasing any information, even information without details. If the vendor requests a reasonable amount of additional time (another four to six weeks), no information should be released. Work closely with the vendor to ensure a timely response, and be prepared to publicly announce the vulnerability details when the vendor has provided a patch, work-around or software update, but not before. Exploit code should never be released.

Guidance for ISVs
Provide a well-publicized means of accepting vulnerability information from researchers, as well as a published policy for how your organization will respond and work with security researchers. Researchers that follow responsible reporting protocols should be credited when the patch is publicly released. ISVs have a responsibility to their user base to provide adequate information that allows their clients to make the appropriate decisions about implementation of a work-around, distribution of a patch, or upgrade to a new version.

Guidance for End-User IT Organizations
There’s little that an end-user organization can do to affect who finds or discloses vulnerabilities. However, these events are recognizable in the press and through vulnerability information sources. Remember that no patch will be available. Organizations must respond to these occurrences by absorbing the available information as soon as possible and adjusting their controls — including reconfiguring firewall, intrusion detection system, intrusion prevention system, security information and event management, and network behavior analysis technologies — to detect suspicious behavior or block affected protocols if possible. Limit the use of affected applications where they are not mission-critical.

Organizations should not conduct business with vendors or security research companies that do not follow responsible disclosure. These entities must not be allowed to manipulate, intentionally or not, enterprise security postures.

Third-Party Patches
Some third parties produce patches for popular software. These patches are typically available free or sold through services with the intent of filling the gap between disclosure and the availability of the vendor patch. Most of the organizations that produce these patches are also vulnerability research organizations, so there is an inherent conflict of interest. Gartner does not recommend using third-party patching for security issues. The cure may be worse than the disease: Third-party patches can create havoc in a large organization because of inconsistent quality, which may result in service or application disruption, limited ability to manage them remotely and the need to uninstall them when the vendor-approved patch is available. In the worst-case scenario, they may contain backdoors or other malicious software.

There is a lot more to it than that, but you get the idea…


10 thoughts on “Responsible Vulnerability Disclosure”

  1. Amrit,
    I read the Gartner report and a couple of questions came up. The first, and I guess, biggest, is where the evidence for the conclusions is. For example, in your key findings, you state
    “When a vulnerability is publicly announced, attackers and researchers generally find more vulnerabilities in the product(s) or similar vulnerabilities in other products soon after.”
    Really? We haven’t found this to be true, but I’d like to see any evidence that this happens on a regular basis, or often enough to warrant taking the pressure off of vendors that comes from announcing pending vulnerabilities with no details. What we have generally found is that the criminal researcher has plenty of their own pocket zero-days that they are working on exploits for, and they aren’t waiting around for us or any other firm to point a broad floodlight at a vague target, like stating “A remote code execution vulnerability was found in Vista’s Kernel” – it’s a low-return activity for them.

    Also, get a bigger comment box, this is annoyingly small.
    RB

  2. Amrit,

    Is it a dead horse when folks still aren’t listening? I think Apple/Secureworks was like Not Dead Yet Fred from Spamalot. “I Feel Happppyyyyyy!”

    I, too, have been thinking about how a third party “arbiter” might benefit all parties involved if they just can’t get along. Of course, the issue is, who?

    I suppose an ISC^2 or SANS or someone could retain folks for that purpose, but would they be willing to take on the potential liability?

  3. I would like to turn it around and ask: what value does eEye provide by highlighting 0-days?

    The biggest issue is that it provides no value to an IT organization, none, zip, zero, to say something like “remote code execution vuln in Symantec AV.” There is nothing actionable they can do, except quietly choke on the thick cloud of FUD. I know eEye does this to highlight the security research team and the power of Blink, so in essence it is just for marketing as opposed to doing anything to support or better security.

    Of course shining a spotlight on a potential vulnerability will direct the full efforts of the research community to find it – do you really need statistics? You guys read the various disclosure mailing lists, you see the discussions, you know how these folks work. This becomes especially damaging if the vendor named only has 1 or 2 products; sure, saying there is a vuln in MS is a lot different from saying there is a vuln in salesforce.com.

    And there are plenty of examples of multiple vulnerabilities coming out from different researchers once the spotlight has been shone – think OSX or Firefox.

    The bottom line is that it provides no value except to further your own marketing efforts, it creates a potential risk to the larger IT community, and it fosters an antagonistic relationship with the vendors.

    btw – I have some choice quotes, regarding your efforts, from large enterprises that I am happy to share privately with you if you like.

  4. Alex,

    If there were a 3rd party that could perform this function in an objective way and there was enough trust, that would be great, but I doubt it would solve the problems.

  5. Ross,

    I was reviewing some of the work you all had done with the zero-day tracker, and the mitigation information is useful and exactly the type of information that people are looking for. I still think providing 0-day information is a bad idea; however, when you also provide actionable advice, that changes the equation.

  6. I will second Ross Brown’s comments (and while at it, double up the bet). I’ve also read Gartner’s report and recommendations and I can’t really understand what the basis for many of the recommendations is. In the guidance to security researchers section the original document says “sufficient time for vendor *acknowledgment* is typically 30 days”. That is completely insane for any real-world scenario (in 11 years of security research and vulnerability disclosure I haven’t seen any vendor, not even the worst ones, take more than a few days or at most a bit more than a week to acknowledge the reception of a vulnerability report; in my particular experience, if they did not do it within at most a week or so, they will never do it, and I can only point at one big vendor as an example of the latter).
    The report goes on to say that if the vendor requests an additional 4 to 6 weeks of time, no information should be disclosed (it does not say additional time for what purpose, though).
    Later it is recommended for “initial public vulnerability release” that “Security researchers should allow six months for the vendor to provide an ETA for a work-around, patch or upgrade. If the vendor has not responded with resolution or an acceptable ETA for resolution (potentially six to nine months) within that time, then information on the vulnerability should be provided to the greater security research community to ensure that defensive mechanisms can be properly implemented.”

    SIX MONTHS for an ETA, plus SIX to NINE MONTHS for resolution, plus 4-6 weeks of random extra time, plus 30 days for acknowledgment sums to a total of no less than 8 months (since the original report to the vendor) still considered ‘responsible disclosure’, and a maximum of 17.5 months. Any IT organization that believes that’s a workable time frame for vulnerability disclosure and risk mitigation is in serious trouble.
    Besides the surprisingly ridiculous time frames (hey, not even the OIS ‘consortium’ recommended that), it is said that exploit code “should never be released”. Why? Is the reader to assume that exploit code will ONLY help the attackers? IT organizations have no legitimate use for exploit code? A somewhat less strict stance is recommended about the ‘vulnerability details’: these can be disclosed “when the vendor has provided a patch, work-around or software update but not before”. What’s the rationale for that? Is the vendor to hold a monopoly on actionable information and advice? I would posit to you that vulnerability details and exploit code constitute valuable assets for actionable advice to IT organizations seeking to protect themselves and minimize risks of attacks.

    The guidance to security researchers recommends that, after the acceptable lock period (of up to 17.5 months), the researcher inform “the greater security research community to ensure that defensive mechanisms can be properly implemented” (this seems to imply that IT organizations may not be able to implement defenses by themselves; it’s up to the ‘greater security research community’ to do it first), but further down the report the recommendation is to not use those possible defense mechanisms (i.e. third-party patches). Also, it is implied that security researchers should stick to finding bugs and not attempt to provide plausible solutions because of an alleged “conflict of interest”. How does that make any sense?

    Frankly, I’ve found the report quite contradictory, and in lieu of a rigorous explanation of methodology, data sources and the rationale followed to reach its conclusions, I cannot take it seriously.

    I, too, have some choice quotes from big (and small and under-funded) IT organizations that find exploit code, vulnerability details and transparent and timely disclosure of security bugs quite useful and valuable to protect themselves from attacks, but perhaps they are just blinded by the clever marketing strategists of security vendor companies that decided to dedicate some of their immense technical and financial resources to the effort of finding bugs for high yield marketing campaigns, really…

  7. Ivan,

    Thanks for your detailed and lengthy response. The bottom line is that for the majority of organizations vulnerability disclosure provides no benefit, they simply do not have the staff or resources, nor have they implemented technologies and processes to deal with the flood of irresponsible vulnerability announcements in any way that would allow them to do anything. I spent many years talking to C-level folks, security and IT operations personnel and aside from some elite few, most organizations simply do nothing, because they cannot do anything.

    I am not a fan of disclosure. I do not feel it provides any measurable benefit to the majority of organizations, worse I think it recklessly endangers them. It has served one purpose and that is to force some vendors to implement secure coding practices as part of their SDLC and to provide more security capabilities and functions within their products. Actually it has served another purpose and that is to further the marketing aims of security vendors in an attempt to sell more product through FUD. Selling fear is a crappy business to be in.

    The argument that disclosure improves security is flawed. It is arguing a perceived risk against an actual risk. The argument goes that some elite Eastern European hacking group already knows about the flaw and MAY be actively exploiting it, a perceived and generally unlikely risk vs. once it is made public an organization will need to take the same actions as if an Eastern-European hacking group actually was exploiting it (and in many cases they do, once it is public) and incur all the costs, resource consumption, downtime associated with it – a real risk. For those who do not know or have never worked in IT, it is extremely disruptive to deal with vulnerability disclosure. Managing 80,000 desktops and servers can be quite a challenge, toss in FUD and it quickly becomes a nightmare.

    Why should a Fortune 100 company, or my Grandmother’s cookie ecommerce site, be put at risk because Ross Brown wants to sell more Blink? And that is exactly what happens when eEye announces a critical flaw in Symantec’s AV product and then provides no details, and no actionable advice.

    You took the time to post some comments and some questions so I would like to respond.
    a. You take issue with the position that researchers should not release information prior to an acknowledged receipt from the vendor. Your argument is that they always respond within a week – great! Then what is the problem? If they respond faster than 30 days, then all the better, but if they do not, why should their customers be put at risk, why must the disclosure be made? How is it responsible to release information, any information, on vulnerabilities prior to the vendor acknowledging receipt of the information?
    b. You take issue with the position that if the vendor requests additional time it should be given. Absolutely! If the ISV responds that they need more time, and they are communicating, what purpose does it serve not to give it to them? Seriously, what purpose?
    c. You take issue with the length of time a researcher should allow an ISV to provide an ETA and the actual patch or work-around. Again, releasing information, without the ISV being allowed time to prepare a patch or workaround puts the majority of organizations at risk – PERIOD!
    d. You take issue with the stance that exploit code should never be released. Yes it should never be released – PERIOD! I am not sure why this is even a question. Most IT organizations do not have any legitimate use for exploit code; again, very few have any capacity, knowledge, resources, processes, or technologies that would allow them to safely run exploit code in their environment, and even fewer have a secure test environment that reflects production. If only there were a tool that could help them, something like, oh yeah, Core – no conflict of interest there, huh?
    e. You state “What’s the rationale for that? Is the vendor to hold a monopoly on actionable information and advice?” Well, it is their property. The problem is that rarely is disclosure done with actionable advice; simply stating that something is broken, can be remotely owned, and then providing details on how to do it is a far cry from providing enterprise-level advice that is easy to execute and minimally disruptive.
    f. You mention 3rd party patches; again, for those not familiar with life in a large organization’s IT shop, it is extremely costly to deploy a potentially disruptive, unverified, and perhaps malware-laden patch to 80,000-plus desktops and servers, ensure that it doesn’t impact any services or internal applications, and then remove the 3rd party patch when a real patch from the vendor comes along – come on, isn’t this one obvious?
    g. You state you find the report contradictory and ask that a rigorous explanation of methodology, data sources, and rationale be provided – happy to, want to schedule a call? Personally, I would like to see the same from the research community, especially the fear-sellers who post 0-days. Show real proof that disclosure protects the majority of organizations – not just edge cases or a case here or there, but real, widespread improvement in security postures. Now that would be good to see.

    Vuln disclosure is an emotionally charged issue, one that will not be solved in a blog posting, comments and a response. Like I mentioned, I am happy to discuss my thoughts further with you; just let me know. But to be clear, I do feel that the majority of people involved in disclosure (and I am not saying everyone) are very self-serving and do not have the best interests of IT organizations in mind, as they claim.

  8. Hello again Amrit

    Today is a slow and rainy Sunday in Buenos Aires, and that gives me a perfect excuse to respond to your response. You posted a lengthy response with 7 specific points; unfortunately that demands an equally lengthy reply from me. I apologize in advance for its length. A blog posting is not likely to be the best forum for this discussion but we’re already into it so… grin and bear it.

    > Thanks for your detailed and lengthy response. The bottom line is
    > that for the majority of organizations vulnerability disclosure
    > provides no benefit, they simply do not have the staff or resources,
    > nor have they implemented technologies and processes to deal with
    > the flood of irresponsible vulnerability announcements in any way
    > that would allow them to do anything. I spent many years talking to
    > C-level folks, security and IT operations personnel and aside from
    > some elite few, most organizations simply do nothing, because they
    > cannot do anything.

    The implied rationale here is that since organizations can’t act on disclosed vulnerabilities, they would rather not know about them. This is the “ignorance is bliss” argument, and I do not share your view of it as a desirable strategy for a discipline that aspires to improve security on the solid foundations of scientific practices and rational thinking (however, if you view information security through a different lens, I’ll respect that as well).

    Even _if_ (and I am prepared to dispute this “can’t do” statement) an organization cannot act immediately on a disclosed vulnerability, it will be much better prepared against attacks just by having the possibility of knowing that a vulnerability exists, and by having the ability to assess the risk it represents to the organization’s environment, than by not being able to know of its existence.

    > I am not a fan of disclosure. I do not feel it provides any

    I am not a fan either, but I do not base my daily professional decisions on my personal -emotional- preferences (although I will readily admit that they do influence my viewpoint of information security and any other worldly affairs in the long term).

    > measurable benefit to the majority of organizations, worse I think it
    > recklessly endangers them. It has served one purpose and that is to
    > force some vendors to implement secure coding practices as part of
    > their SDLC and to provide more security capabilities and functions
    > within their products. Actually it has served another purpose and
    > that is to further the marketing aims of security vendors in an
    > attempt to sell more product through FUD. Selling fear is a crappy
    > business to be in.

    Maybe so, but it is naive and somewhat offensive to propose that all or most security vendors sustain their business by fear-mongering rather than by solving real problems and providing value to their customers.

    The vast majority of customers of security vendors are not thoughtless sheep and the vast majority of security professionals that make up the information security industry are not greedy, unscrupulous fortune hunters.

    Framing the vulnerability disclosure debate in that oversimplified and bipolar view (security vendors thrive on FUD, buyers are driven by fear) does not help us. So, I will make two assumptions before I continue:
    – IT organizations that buy security products and services are not stupid, their security decisions are not driven by fear.
    – Successful security vendors provide value to their customers, their business is not driven by fear-mongering

    My third assumption will be that:
    – Bug finders are not necessarily professional security researchers employed by a security vendor

    I think those are fair assumptions that would naturally lead to discussing vulnerability disclosure as part of a substantially rational decision-making process based on risk management arguments. Which you then proceed to introduce:

    >
    > The argument that disclosure improves security is flawed. It is
    > arguing a perceived risk against an actual risk. The argument goes
    > that some elite Eastern European hacking group already knows about
    > the flaw and MAY be actively exploiting it, a perceived and generally
    > unlikely risk vs. once it is made public an organization will need
    > to take the same actions as if an Eastern-European hacking group
    > actually was exploiting it (and in many cases they do, once it is
    > public) and incur all the costs, resource consumption, downtime
    > associated with it – a real risk. For those who do not know or have
    > never worked in IT, it is extremely disruptive to deal with
    > vulnerability disclosure. Managing 80,000 desktops and servers can be
    > quite a challenge, toss in FUD and it quickly becomes a nightmare.
    >

    My thoughts about the above paragraph:
    1. What’s this fixation with Eastern Europeans? Why is it that Eastern European “hackers” and “the Russian Mafia” are used as the stereotype of attackers? How about using “US spam rings”, “Canadian pornsite operators”, “French skript kiddies”, “Chinese anti-censorship hackers” or “the Congolese mafia” for a change? Ohh, but I digress…
    2. Ok, the real point. Risk management is based on accurate risk assessment. Vulnerability knowledge and information (yes, technical details and even exploit code in some cases) enhances the accuracy of your risk assessment; your risk equation is based on what you do know plus an arbitrary provision for what you do not know. If you do not know how many vulnerabilities exist, their nature, and which ones are or may be known to others, then your “unknown risk” component will outweigh the “known risk” one, and you’ll manage unknown risk based purely on your perception. The optimist will say that the unknown risk is minimal and therefore irrelevant; the paranoid will say that it is substantial and will therefore live in constant fear (which will be promptly ameliorated by the unscrupulous money hunters). One way or the other, unknown, unmeasurable risk tends to be VERY costly.
    I posit that vulnerability disclosure helps to improve the accuracy of risk assessment, and in doing so it helps organizations devise *efficient* (not just effective) countermeasures, i.e. your praised actionable advice. Efficient risk mitigation does not directly lead to installing the official patch on your 80k desktops and servers. Surely that will solve the problem (*if the patch is sound*… and btw how would you tell it is?) but at what cost? Could it be done differently? With less effort? You can’t tell if you don’t know what the problem is that you’re facing.
    3. Your paragraph touches on the topic of vulnerability re-discovery or simultaneous discovery (by the “bad guys” from Eastern Europe). My personal assumption is that if I found a vulnerability, there is a high probability that somebody else around the world found it as well, either at the same time or before I did. To think otherwise would be an unhealthy exercise of arrogance. To think that as *the norm for every bug I discover* is on a scale of arrogance that goes beyond the wildest megalomaniac’s dreams. The cautious and, more importantly, humble security researcher should assume that some of his/her findings are not the result of individual enlightenment but the natural outcome of a current paradigm adopted by the entire research community (regardless of the researcher’s intentions), and that they are either already found or bound to be found at any moment.
    4. I’ll assume that whoever reads this blog does know and/or has worked in IT, so the subtle appeal for inside baseballers is superfluous 🙂
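    The known/unknown risk framing in point 2 can be sketched as a toy model. All the numbers and names below are invented purely for illustration (nothing here comes from the report or the comment); the only point is that disclosure moves risk from an arbitrary "unknown" provision into a component you can actually assess:

    ```python
    # Toy sketch of the risk equation described above: total perceived risk is
    # the sum of assessed known risks plus an arbitrary provision for the unknown.

    def risk_estimate(known_vulns, unknown_provision):
        """Known component (likelihood x impact per vuln) plus an arbitrary
        provision covering everything you do not know about."""
        known = sum(v["likelihood"] * v["impact"] for v in known_vulns)
        return known + unknown_provision

    # Before disclosure: one known issue, plus a large, purely perceptual
    # provision for unknown risk (the paranoid picks 40, the optimist picks 0).
    before = risk_estimate([{"likelihood": 0.2, "impact": 50}], unknown_provision=40)

    # After disclosure: the bug becomes a known, assessable risk, so the unknown
    # provision shrinks. The estimate is not necessarily smaller, but it is more
    # accurate, which is what efficient countermeasures depend on.
    after = risk_estimate(
        [{"likelihood": 0.2, "impact": 50}, {"likelihood": 0.6, "impact": 30}],
        unknown_provision=10,
    )
    ```

    The shape of the argument is visible in the numbers: before disclosure most of the estimate is an unmeasurable guess; after disclosure most of it is grounded in assessable facts.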

    So those are my general thoughts; let us now move to the specific points I made about the report and to your response. I may add that you responded to *your interpretation* of my comments, not necessarily to my comments, so I’ll try to clarify them better to avoid misunderstandings.

    > Why should a Fortune 100 company, or my Grandmother’s cookie ecommerce
    > site be put at risk because Ross Brown wants to sell more Blink? And
    > that is exactly what happens when eEye announces a critical flaw in
    > Symantec’s AV product and then provides no details, and no actionable
    > advice.
    That’s a cheap shot that does not do justice to eEye or to the quality of your analyst capabilities. First, because every single eEye advisory that I know of is profusely detailed, technically sound and does provide actionable advice. Second, because it does not force your grandmother to buy anything (and in fact Blink is FREE for personal use, and so are many other plausible solutions, including one from Core). Third, because it implies that the risk did not exist before the disclosure, something that is at least debatable. Finally, it attributes motive to eEye’s actions (they want to sell more Blink -> they disclose bugs -> they put people at risk), and in lieu of definite proof that is, prima facie, a fallacy. I will give eEye the benefit of the doubt in this case; I will assume that they do not need to disclose vulnerabilities to increase their revenues, nor do they need it to make a name for themselves (they already have one).

    > a. You take issue with the position that researchers should not
    > release information prior to an acknowledged receipt from the vendor.
    > Your argument is that they always respond within a week –

    No, I took issue with the recommendation stating that allowing 30 days for vendor *acknowledgment* is responsible. Not only do I not consider it responsible, but I also don’t consider it realistic (I explained why based on my experience).

    I’m entirely in favor of notifying the vendor(s) of vulnerable software before releasing information; I think that is responsible and necessary to allow the vendor(s) to act upon a bug in their software. I also think that responsible vendors should acknowledge (which does not necessarily imply providing an ETA for a fix) a vulnerability notification within a few business days. At Core our policy is to request acknowledgment within 2-3 business days and to attempt notification at least twice. We request acknowledgment to come from a human being (email, phone call, etc.) rather than an automated response. After the second attempt fails, we evaluate the situation on a case-by-case basis. Silver-bullet time frames do not work for every case in the real world.

    If the various attempts to contact the vendor(s) fail, then we think that the responsible thing to do is to notify the security community at large so one or many suitable protection mechanisms can be devised. We put our best effort into describing protection mechanisms of our own, but we do not consider ourselves (nor the vendor) the one and only source entitled to provide actionable advice to affected users.

    > b. You take issue with the position that if the vendor requests
    > additional time it should be given. Absolutely! If the ISV responds
    > that they need more time, and they are communicating, what purpose
    > does it serve not to give it to them? Seriously, what purpose?

    No, I take issue with the report’s recommendation of allowing 4 to 6 weeks for “something” that is not clearly defined. It is certainly not for acknowledgment of the vulnerability notification (that’s covered by the proposed 30 days) and it is certainly not for the purpose of providing an ETA for a fix (that is covered by the proposed SIX MONTH period), so what are those 4 to 6 weeks for?

    I am all in favor of not releasing vulnerability information to the public if the vendor(s) requests more time to work on the report, but I only consider that responsible behavior if the request and the response are clearly substantiated on the goal of both parties working towards a resolution of the bug, and *not* (as is often the case) on the merits of the disclosure process itself, on vague assumptions or promises, unstated purposes, or on vendor(s) or reporter(s) goals that are not clearly aligned with providing risk mitigation mechanisms for the specific bug that was reported.

    Vendors, or anyone else for that matter, are entitled to more time to work on a bug, but only if they can provide a plausible justification
    for it; otherwise the process may enter a vicious circle of diminishing transparency and eroded trust. To let things go that way would be irresponsible.

    > c. You take issue with the length of time a researcher should allow
    > an ISV to provide an ETA and the actual patch or work-around. Again,
    > releasing information, without the ISV being allowed time to prepare
    > a patch or workaround puts the majority of organizations at risk –
    > PERIOD!

    No, I take issue with the recommendation of SIX MONTHS as responsible conduct in providing an ETA for a fix.

    I agree that reporters should allow the vendor(s) to provide an ETA for a fix. In fact, I think reporters should not only “allow” the vendor(s) to do so but MUST request the ETA; not doing so would be irresponsible. However, I do not consider six months a reasonable period in which to estimate how much time it would take to develop and deliver a
    fix. Six months may or may not be sufficient to develop the fix, but it is certainly not a reasonable amount of time for providing an ETA for it.

    > d. You take issue with the stance that exploit code should
    > never be released. Yes it should never be released – PERIOD! I am not
    >

    Yes, I do. This is an accurate interpretation of my post.

    “PERIOD” does not constitute proof; it is a fallacious appeal to authority. You are veering to the emotional side here, please move past
    it. There is tangible proof (as you surely know) that possession of exploit code is a legitimate business need for improving an organization’s
    security posture. Perhaps what is bothersome is the thought that *proof of concept* exploit code could be given out FOR FREE to organizations that may use it to improve their security, rather than sold as part of a commercial-grade offering. I could elaborate further on all the possible legitimate uses for exploit code, but I am sure you’ve heard those arguments before.

    So, according to your stance, exploit code should never be released, but is it OK to sell it? Or to buy it? Is an organization’s financial capacity a good measure for judging its intent? Or is it that exploit code should *not exist*, PERIOD?

    Btw, I hate to break the news, but vendors regularly request proof-of-concept code at some point during the vulnerability reporting process. Evidently they find it useful or otherwise valuable. So far no one has been able to convince me that, as a general rule, PoC code is valuable to them but not to their customers.

    > sure why this is even a question. Most IT organizations do not have
    > any legitimate use for exploit code, again very few have any
    > capacity, knowledge, resources, processes, or technologies that would
    > allow them to safely run exploit code in their environment and even
    > less have a secure test environment that reflects production. If only
    > there was a tool that could help them, something like, oh yeah Core –
    > no conflict of interest there huh?

    Huh? Which conflict of interest? Taken at face value and following your rationale (you do not explicitly state where the conflict of interest
    is, though), the best thing for Core’s interests would be NOT to provide PoC code for free and instead to ship it as part of our product’s updates to paying customers. *That* would be a conflict of interest: concealing information and code that
    have legitimate use for the community at large in favor of Core’s own financial interests. I can elaborate on this further as well, but the
    gist of my thoughts is that yes, Core is a for-profit organization, but that does not necessarily mean that EVERY SINGLE THING Core (and its employees) does is aimed at obtaining immediate financial gain or profiting at the expense of the community at large. The same principle applies, even more loosely, to the hundreds of security researchers who
    find, report, and disclose bugs on their own and are not financed by any security vendor.

    > e. You state “Whats the rationale for that? Is the vendor to hold a
    > monopoly on actionable information and advice?” Well it is there
    > property. The problem is that rarely is disclosure done with
    > actioanable advice, simply stating that something is broken, can be
    > remotely owned and then providing details on how to do it is a far
    > cry from providing enterprise level advice that is easy to execute
    > and minimally disruptive.

    Ok, now you are entering the grey area of how much disclosure is enough to be considered responsible. That is good, because it diverges from a bipolar view of the problem. The question here is how much detailed information is necessary to provide actionable advice. What are the details that must be disclosed, not only to provide advice but also to let the affected users assess their specific risk in an accurate manner?

    I posit that to provide no details whatsoever is irresponsible and leads to inaccurate risk assessment and the subsequent deployment of
    high-cost, and often ineffective, mitigation mechanisms. To provide all available details (and PoC code) in every single case is unnecessary and most likely wrong as well. So what is the responsible behavior? To analyze each particular case and decide, to the best of your knowledge, what amount of information will maximize the chances of accurate risk
    assessment and effective and efficient mitigation by affected organizations. The caveat here is that this stance is focused on maximizing value for the affected users, not on minimizing it for
    potential attackers, and the premise is that potential attackers need a lot less information and external help to figure things out than the
    general security community (btw, I consider the people in charge of security at IT organizations part of the security community).

    If you fundamentally disagree with the statement that ‘attackers’ need less information than ‘defenders’ to achieve their goals, then we are coming to this debate from entirely opposite tacks, and our disagreement is not an emotional one (something that can be easily solved among intelligent human beings) but a dogmatic one (something that may prove harder to solve; I’d choose the methodology of modern science to try to resolve it).

    My second point is about the “monopoly on actionable advice”. While the vendor(s) and the researcher(s) are certainly entitled (and expected) to provide actionable advice they SHOULD NOT (and some would say must not) be the ONLY ones entitled to do it. By censoring relevant vulnerability details from a public disclosure statement they are arbitrarily limiting the ability for OTHERS to provide actionable advice. *That* is self-serving and clearly for the benefit of both the reporter(s) and the vendor(s).

    > f. You mention 3rd party patches; again for those not familiar with
    > life in a large organizations IT shop it is extremely costly to
    > deploy a potentially disruptive, unverified, and perhaps malware
    > laden patch to 80,000 plus desktops and servers, ensuring that it
    > doesn’t impact any services or internal applications and then remove
    > the 3rd party patch when a real patch from the vendor comes along –
    > come on, isn’t this one obvious?

    For those not familiar with life in a large organization, the above
    description can apply to third-party patches as well as to official
    vendor patches, or to any other piece of software (patch or not)
    under consideration.

    Hopefully no one reading or posting on this blog is entirely
    unfamiliar with life in a large organization’s IT shop, but Amrit, perhaps
    you should require a “certificate of familiarity” to qualify your blog’s
    posts and responses, using a reputation system (it works for eBay,
    apparently). Sorry, I could not resist the sarcastic retort, but you
    can’t blame me for making it either.

    > g. You state you find the report contradictory and that rigorous
    > explanation of methodology, data sources, and rationale be provided –
    > happy to, want to schedule a call?

    Yes, why not?

    But I think it would be better to discuss these matters face-to-face, with plenty of time and in an amicable environment (such as a bar, eh). I find that far more effective for really understanding each other and getting to the bottom of things.

    > Personally I would like to see the same from the research community,
    > especially the fear-sellers who post 0-days. Show real proof that
    > disclosure protects the majority of organizations, not just
    > edge-cases or a case here or there, but real wide-spread improvement
    > in security postures, now that would be good to see.

    I could respond with the same paragraph, replacing “protects” with “does not protect”, but I suspect neither you nor I can show *real proof*.

    At least not proof that satisfies the principles of a scientific discipline. That would require, at least, a clear and transparent methodology, carefully scrutinized experimental data derived from repeatable experiments, and rational analysis with conclusions open to review by peers who are demonstrably not tainted by spurious interests; and even then it would not constitute “proof”. There is no apparent interest in producing such a thing from any of the involved parties (IT vendors, security researchers, security vendors, analysts, media representatives, etc.), with the possible exception of end-user organizations.

    I would really like to see that happen, because I do not consider my personal view of these things particularly enlightened or morally righteous. But in the absence of more convincing arguments, my (emotional, I’ll concede that) reaction is “let the doers do and the talkers talk”. For better or worse, the current state of security affairs is the result of a pragmatic security community, one that privileges acting on things rather than talking about them, and faced with the bipolar choices of “do X” or “do not do X”, I will always fall on the “do X” side. In my view that is what evolution is all about, and that is what brings both innovation and maturity to our narrow professional field.

    > But to be clear I do feel that the majority of people involved in
    > disclosure (and I am not saying everyone) is very self-serving and
    > does not have the best interests of IT organizations on their minds
    > as they claim.

    I may agree with you on that, but I don’t think it should be relevant to the *specific* recommendations of a Responsible Vulnerability Disclosure guidance document, and if it is, then it should be clearly stated as a contributing factor to the analysis, shouldn’t it?

    ok, it stopped raining here, I’m off.

  9. Hi Ivan,

    I was supposed to spend Christmas in Chile with the in-laws, who live in Concon near Rinaca; the last time I was in Argentina we captured some incredible photographs at Foz de Iguazu. Neither here nor there, just giving a little color to the gray world of digital communication.

    We do not disagree as much as it may appear, and if we ever have an opportunity to discuss these and other topics in depth, in a relaxed setting, I would imagine that we would even come to a consensus.

    From my perspective there are really three main points we disagree on….

    First, I am far less inclined than you to believe that organizations are able to use vulnerability disclosure and exploit code to assess or improve their security. I believe the majority of organizations are grossly ill-prepared to do anything but react to security events as they happen. They do not have mature processes to support workflow between security and operations; they are severely understaffed and resource-constrained; they do not have tools that can automate much of the work; they lack the knowledge to properly prioritize remediation or mitigation activities; and they are so overwhelmed with the day-to-day pressures of IT, security, and compliance that there is little time for them to do anything but run around putting out fires. That is the current reality, and in that reality vulnerability disclosure causes them a lot of fear, uncertainty, and doubt.

    Second, malicious hackers are not spending as much time looking for new vulnerabilities as everyone states. This is purely conjecture on my part, but it is based on several observations. First and foremost, 0-days in which the vulnerability was found by the exploit code’s author are rare. Second, it is not economically viable: there are far too many vulnerable, misconfigured, and poorly administered systems already in play to require attackers to find new methods of attack. SANS and the FBI at one time published a finding that 99% of all external attacks take advantage of known vulnerabilities and misconfigured, poorly administered systems. The DoD recently provided forensic data suggesting that 70% of those were the result of misconfigured and poorly administered systems. The reality is that there are plenty of weaknesses out there for the bad guys to take advantage of, and it really isn’t economically viable for them to look for new ones, especially since the research community is doing it for them.

    Third is the issue of releasing exploit code. I feel it is dangerous and should not be done, and most organizations are simply ill-prepared to use it without disrupting services or, worse, breaking the law. Yes, I know vendors want PoC code, but I do not agree that IT should have it without some boundaries. Running it as part of a pen-testing framework like Core or Canvas is one way, but the majority of folks out there have no business playing with exploit code. It is sort of like handing a 5-year-old a knife and asking them to cut down an oak tree: it isn’t gonna do jack to the tree, and the kid is going to hurt himself.

    I have a series of executive briefings over the next several days, so I will need to respond to the specific comments as time allows. I did want to comment on why so many seem to reference Eastern European hackers as the symbol of internet badness: it is a remnant of the Cold War, and the Dark Avenger was supposedly from that region of the world. Before anyone flames me, I am just providing observations; I have no strong feelings about Eastern Europe or its people, and I am pretty sure that the Nigerian Federation of Dollar Choppers is far more prolific at turning digital assets into dollar bills. Also, it was a cheap shot to imply that Ross Brown and eEye are putting my grandmother at risk, but they have done some things I feel are irresponsible. For example, earlier in the year they posted information on a Symantec vuln at the same time they notified Symantec. BTW, this caused a lot of angst among enterprise clients of Symantec (I took a lot of calls during that period). There really is no reason for a company like eEye to release information, any information, without at least waiting for the vendor to provide some mitigation advice to their install base.

    Aside from some nuances of dates, in which I have no real investment, I think the rest of our debate is more aligned or cut-and-dried, but blog debating is a really poor forum for communication. Perhaps we will have the opportunity to discuss this in person, with the participation of others who sit on both sides of the debate, and move the thinking forward.

  10. Pingback: The 11 Worst Ideas in Security « Amrit Williams Blog
