
Why publishing exploit code is *generally* a bad idea if you're paid to protect

Update 2: Further proof that people are abusing this on a wide scale, and likely wouldn't be had the exploit code not been released.

Update: I've clarified a few points and added a few others.

Recently Tavis Ormandy (a Google employee) discovered a security issue in Windows and, days after notifying Microsoft, published a working exploit to the Full Disclosure mailing list after failing to negotiate a fix from the vendor within 60 days. I've been chatting about this issue with several people in the field and found that nobody has been discussing what many of us feel, so I decided I'd bite the bullet and put myself out there for criticism (and as a proxy for others). I suspect some people won't like my answers/opinions. My intention with this post isn't to insult or flame Tavis; it is to debate the act of releasing PoC exploit code when one is employed to protect people.

Note: Tavis found this on his own time, and I am not saying Google had anything to do with it; however, where he works is of importance to some of the points below.

Facts

  • Tavis is paid by his company to identify security issues and to work responsibly to ensure they are addressed.
  • Tavis believed he was doing the right thing. His intention was to help people not harm them.
  • Tavis states this was done on his personal time.
  • Tavis provided mitigation advice for the issue.
  • The exploit published by Tavis is now being used in public attacks.
  • Releasing the code put some of the customers that interact with his company at risk, even more so because no fix existed.
  • Some of the customers that interact with his employer will likely become infected with bots or other malware as a direct result of the exploit being published. These in turn will be used to assist in brute forcing/attacking his employer's platform as well as other companies.
  • By publishing the code, he gave more attackers access to it, rather than keeping it within private circles.
  • Enterprise vendors have release cycles and need to perform regression testing prior to pushing out fixes. In this case he discovered a vulnerability in almost a dozen versions of Windows. This will require coordination with each OS team to perform regression testing, as well as with any products that interact with it.
  • Microsoft is one of the few companies that responds to those who report issues in their products, and has a generally good attitude/response (nowadays) to issues such as this.
  • I don't have all the information on the communication between Tavis and Microsoft; I am going by what has been publicly discussed.
  • After speaking with 4 different companies about this, every one said they would have fired him if he had been in a similar role at their company. I am not saying that I would fire him or that he should be fired, just that many people feel strongly about this.
  • Microsoft is a competitor of Google (Ormandy's employer).


Likely True

  • The vuln is probably known by others and has been for some time.
  • Had he not published some sort of information on it publicly, it may have taken longer to fix.
  • If he had simply published an advisory without working exploit code, he'd have gotten his message out (the media would have picked up on this because it's Microsoft) and put pressure on the vendor to fix it sooner. This is a fairly effective technique that has worked before.
  • Even if Tavis had only published an advisory, eventually someone would have figured out the issue and used it to attack people.


My Opinions (based on what I know)

  • While this issue was likely known already, had Tavis waited 90 days (let's say it would take 90 days for MS to fix this) for the fix to be released, maybe 1k-10k people would have been owned by the small circle that already knew how to exploit it. Now that the exploit code is public, 100k+ people are likely going to get owned within the 30-60 days it will take MS to perform a 'quicker fix' due to their hand being forced. The end result: more people owned because of the disclosure of PoC code.
  • The shift has been that when you have a nice 0day (as an attacker) in a major OS, you typically focus it towards specific targets and don't just blast everyone with it. As people have been debating for years, 0day vulnerabilities are worth a lot of money, and people will pay a lot for them depending on their motivations. See the WebDAV overflow (someone was attacking the Army, which is how MS discovered it) for an example of targeted attacks with 0days that were saved for high-profile targets (whatever that means).
  • Had Tavis been a 'hacker' we wouldn't be having this conversation. This is because when people who are paid to protect are caught doing the opposite, it is seen as a conflict. Much like a cop committing criminal acts on the side, the cop should know better since it goes against the ethics of the position. I see it the same way here. *If you are paid to protect people then it is an ethical conflict for you to publish working PoC code*.
  • While 60 days may seem unreasonable, I can understand why this can take so long due to my previous points. Anyone who has worked in software development at a company that ships product (as opposed to updating a website) understands that things can take time. Note: I am not defending 60 days; I am merely stating that for major products like these a lot of things depend on the components that will be patched, and regression testing those other products takes time. You can't ship something without doing this if you are a software development shop of any worth. If MS were to ship it and something were to break, people would be bashing them for their lack of QA.
  • If the vendor had refused to acknowledge the issue or blown off the researcher, this would be a different story. I myself have been forced to publish information when vendors were being uncooperative, in the name of ensuring the issue was addressed. I would not recommend this approach for everyone, and anyone in this position is open to attack, often unfairly.

There's an interesting thread on Daily Dave that I would suggest reading if you feel strongly about this subject one way or the other.

Comments






i knew about this exploit 3 years ago


I agree with the sentiment, but the excessive use of notes and disclaimers makes my soul hurt.


you mean like nicholas j. percoco and christian papathanasiou of trustwave spiderlabs (nick is the sr. vp and christian is a consultant) are doing at defcon this year with the android rootkit?


# Tavis is paid to protect customers by protecting the website/products offered by his company.

# Tavis states this was done on his personal time.

You can't have it both ways, buddy.


LOL. To crucify Tavis, you must also crucify those Trustwave guys.

Something tells me the Trustwave work was done on company time, so the "on personal time" out they had is gone.


I don't know enough about the Trustwave situation to comment (I was planning on hitting that talk this year, actually). I don't know if they are publishing code or merely demonstrating concepts (concepts are not the same thing as script-kiddie-friendly exploit code).

If they are publishing PoC rootkit code and this is *sponsored by a company* then yes, I would consider this to be irresponsible.


Robert - It was spelled out pretty clearly in the abstract that they have developed one. It is called a POC, but a reverse shell is included.

The dealbreaker here is whether or not these guys release it, or if it was just a publicity ruse.

Pasted from http://www.defcon.org/html/defcon-18/dc-18-speakers.html

"We have developed a kernel-level Android rootkit in the form of a loadable kernel module. As a proof of concept, it is able to send an attacker a reverse TCP over 3G/WIFI shell upon receiving an incoming call from a 'trigger number'. This ultimately results in full root access on the Android device. This will be demonstrated (live)."


Thanks for pasting a link to their talk. As I said, I hadn't really been following that situation closely, but was planning on attending the talk.

Developing a PoC and keeping it private, demonstrating a PoC publicly, and publishing PoC code are not the same thing. If they make available the source/binary to a working rootkit, then I consider the company sponsoring this to be irresponsible.

I don't have an issue with discussing concepts/techniques/approaches since they can/will ultimately bring debate and discussion as to how the makers of these devices can improve their security posture.


This is a bit old, but any chance you could tell us which companies would have fired Tavis?
