Saturday, January 24, 2009

Fastflux makes for some cool graphs


So after the last post on automation by CG I figured I'd change tack a little and take a look at something that is of interest to me: data visualization. I've found it to be incredibly useful not only in tracking down anomalous behavior on a network but also for displaying metrics and data to non-technical folks. Visualizing data from a security perspective can reveal some interesting and valuable information.

Take NetFlow logs for example. While there is no payload information, if you're able to gather the logs from an entire network you can determine everything from infected hosts scanning your network to attacks against a web application. I've not actually done it, but I'm sure there are ways to apply visualization to offsec as well. Maybe not in the actual pentest, but possibly in the reporting/debriefing stage. It would be interesting to graph the path of compromise into the network with the attack vectors used and overlay that with the defensive measures [patching policy, IDS/IPS, FW, etc...] in place in the target environment. It would be a really powerful way of showing how the network was compromised. If I get some time I'll try to gather some data and generate some visuals depicting that. I strongly recommend 'Applied Security Visualization' by Raffael Marty and 'Security Metrics' by Andrew Jaquith for information on data visualization and metrics for information security.

Lately I've been looking at the Waledac (aka 'the new Storm Worm') bot and its fast flux capabilities. It's spreading via email spam messages. These e-mails contain different domains that are part of a fast flux network. Each of the domains has the TTL on its DNS record set to 0, so virtually every time a domain is resolved a new IP address is returned. I took 82 known malicious domains associated with Waledac and, using a simple Perl script, ran forward lookups against each domain, gathering the IP address and the country associated with it. I then stripped out any repeat addresses. I gathered approximately 2500 unique IPs in the short time I ran the script. Using Raffael Marty's Afterglow scripts and Graphviz's neato I generated some simple link graphs of the source <> target pairs. In this case the source would be the domain and the target would be the fluxing IP address.
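The lookup loop itself was just a quick Perl script; a rough Python sketch of the same idea might look something like the following (the domain list file, output file, and timing are all made up for illustration, and the country lookup is omitted since it needs a separate GeoIP database):

#!/usr/bin/env python
# Rough sketch of the fast-flux lookup loop described above (the original was Perl).
# Repeatedly resolves each known Waledac domain and records every new domain->IP pair.
import csv
import socket
import time

DOMAINS_FILE = "waledac_domains.txt"   # hypothetical list of known malicious domains
OUTPUT_CSV = "flux_pairs.csv"          # source,target pairs for the link graph
ITERATIONS = 100                       # re-resolve repeatedly since the TTL is 0
SLEEP_SECS = 30

with open(DOMAINS_FILE) as f:
    domains = [line.strip() for line in f if line.strip()]

seen = set()  # (domain, ip) pairs already recorded
with open(OUTPUT_CSV, "w", newline="") as out:
    writer = csv.writer(out)
    for _ in range(ITERATIONS):
        for domain in domains:
            try:
                _, _, addresses = socket.gethostbyname_ex(domain)
            except socket.gaierror:
                continue  # domain no longer resolving
            for ip in addresses:
                if (domain, ip) not in seen:
                    seen.add((domain, ip))
                    writer.writerow([domain, ip])  # domain -> fluxing IP
        time.sleep(SLEEP_SECS)

A two-column source,target CSV like this is the input the Afterglow scripts expect; their DOT output is then rendered with neato to get the link graph.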


What's interesting about this graph is that it's immediately apparent that some domains are not fluxing and appear to be static. Perhaps this is intentional, perhaps they are no longer actively being used in the spam emails and are no longer part of the flux network, or perhaps it's a misconfiguration on the authoritative DNS server for these domains. Unless you have access to Layer 7 traffic in an environment it's somewhat difficult to detect traffic to these domains using only NetFlow data, but Emerging Threats and the folks over at the Shadowserver Foundation have Snort sigs for detecting traffic from infected hosts.

As a curiosity I took the country location information for the IP addresses of the infected hosts and looked at the Top 10 countries from that list. It's interesting to see that China and Korea have the largest percentage of infected hosts. This may be skewed by the time my script was running, or it may be that the bot has been spreading longer in those countries. The above data, while interesting, is probably not that practical from a security admin's perspective though.

Recently I needed to determine the potential number of hosts infected by the IE7 XML parsing exploit. To do so I gathered NetFlow logs from the environment and built up a list of known domains hosting the exploit. This list was gathered from multiple sources. Obviously the list is not comprehensive, and there are likely numerous other domains hosting this and other exploits, but it was a good start. Next I determined the IP addresses for these domains and, using some Perl and the NetFlow traffic, generated a list of unique source IP ranges with established communication to those malicious domains.
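As a rough idea of the matching step (the actual work was done in Perl), here's a minimal Python sketch that assumes the NetFlow data has already been exported to CSV; the filenames, column names, and domains are all placeholders:

#!/usr/bin/env python
# Minimal sketch of matching exported NetFlow records against known-bad IPs.
# Assumes the flows were dumped to CSV with 'srcaddr' and 'dstaddr' columns;
# all names here are illustrative, not a real export format.
import csv
import socket

MALICIOUS_DOMAINS = ["bad-example-1.com", "bad-example-2.net"]  # placeholder list

# Resolve the known-bad domains to their current IP addresses.
bad_ips = set()
for domain in MALICIOUS_DOMAINS:
    try:
        _, _, addresses = socket.gethostbyname_ex(domain)
        bad_ips.update(addresses)
    except socket.gaierror:
        pass  # domain no longer resolves

# Walk the flow records and keep unique internal sources talking to bad IPs.
suspect_pairs = set()
with open("netflow_export.csv") as f:
    for row in csv.DictReader(f):
        if row["dstaddr"] in bad_ips:
            suspect_pairs.add((row["srcaddr"], row["dstaddr"]))

# Write source,target pairs for the same Afterglow/neato graphing as before.
with open("suspect_hosts.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for src, dst in sorted(suspect_pairs):
        writer.writerow([src, dst])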

After generating the link graphs I had a quick visual representation of which hosts had communicated with those domains. This gave me a quick list of potentially infected hosts. I've since updated the graphs to show different colors for the subnets in the network, allowing me to determine areas that show greater infection rates.
The next step was to determine the patching levels in the subnets showing infections. I did this by looking at the SUS server logs: if a host was patched it could be assumed that it was not infected. Of course, this assumes that the patch actually installed correctly too. :)
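A rough sketch of that cross-reference, assuming the suspect-host list from the flow matching and a one-IP-per-line dump of patched hosts pulled from the SUS logs (both file formats are invented here for illustration):

#!/usr/bin/env python
# Rough sketch of cross-referencing suspect hosts against patch status.
# 'suspect_hosts.csv' holds source,target pairs from the flow matching;
# 'patched_hosts.txt' is an invented one-IP-per-line dump from the SUS logs.
import csv

with open("patched_hosts.txt") as f:
    patched = {line.strip() for line in f if line.strip()}

unpatched_suspects = []
with open("suspect_hosts.csv") as f:
    for src, dst in csv.reader(f):
        if src not in patched:
            unpatched_suspects.append((src, dst))

# Hosts that talked to a malicious domain AND have no record of the patch
# are the ones to pull for a closer look first.
for src, dst in unpatched_suspects:
    print(f"{src} -> {dst} (no patch record)")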

These malicious domains hosted more than one browser exploit though. Some of them had upwards of 20 unique exploits listed. We all know from using exploits against various 3rd party apps that these are seldom at the latest patch levels, so likely many of these stations were infected in some manner. The next step would be to determine which sites hosted which exploits and add an additional column to the CSV file containing that exploit information. Re-graphing the traffic would then show which hosts communicated with those sites. So if you knew that all managed/imaged workstations in that location had Adobe Acrobat 8.0 installed and some had visited a site hosting a PDF exploit, you could determine that they were possibly infected. This does assume that you know what software is on those systems though. Waaay too many assumptions for my taste. The advantage of graphing this data was that it had a very powerful visual impact that backed up my recommendation to establish and maintain block lists on the firewalls and to improve patching policies for third party apps. A very tangible dollar value could be applied to this as well. Each machine has a cost associated with being infected and needing to be rebuilt and patched before being allowed back into the network. Add to this the loss of employee productivity and you have a strong argument for blocking and improved patching controls.
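As a sketch of what that extra column could look like, here's a small Python example that tags each source/target pair with the exploits seen on the destination. The IPs and exploit names are placeholders; the three-column output (source, event, target) is the sort of CSV Afterglow can graph directly:

#!/usr/bin/env python
# Small sketch of adding an exploit column to the source,target pairs.
# The mapping of malicious IP -> hosted exploits would be built by hand from
# whatever sources identified the exploit content; all names are placeholders.
import csv

exploits_by_ip = {
    "203.0.113.10": ["ie7_xml", "acrobat_pdf"],
    "203.0.113.20": ["acrobat_pdf"],
}

with open("suspect_hosts.csv") as f, open("suspect_exploits.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for src, dst in csv.reader(f):
        for exploit in exploits_by_ip.get(dst, ["unknown"]):
            writer.writerow([src, exploit, dst])  # source, event, target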

There are other methods of doing the same thing but hey, I like looking at visual representations of the data and when dealing with huge data sets it can assist in picking out patterns that may hint at infections or other anomalous traffic.

Cheers,
dean
dean de beer

Saturday, January 17, 2009

More on too much automation


I'm not the most articulate person, especially in writing, and while I thought, judging by the people who bothered to comment on the blog, that I got my point across, other people make me think I didn't.

So, I'll try again, if this doesn't work I'll resign myself to just not being as l33t and skilled as other people in the community.

I'm not against ALL automation; by its very nature everything involved with "hacking" or penetration testing is automated, but I'll try to restate. From the original post:

Automation is a good thing but when it comes to pentesting I really think there can be too much automation. Too much automation leads to "fire and forget" activities and lack of TLC.

Believe me, I'm all for bash running my nmap scan and rolling IPs for me all nite rather than staying there and doing the stuff myself. I'm all for automating a tool that needs to run a set of commands on a subnet or group of hosts given the appropriate scenario. Hell, I'm even all for running scripts that will log into every box and "do something" (go tebo!), also given the appropriate scenario.

But what I'm not for (though I do concede there are plenty of times when the following is perfectly acceptable and maybe even regarded as a "good" pentest):

Rolling in and running my full-port nmap scan, Nessus scan, and Core Impact rapid pen test, connecting to a couple of agents, taking screenshots, high-fiving my teammates on how l33t I am, then heading out and writing the report.

I'm of the opinion that for the above scenario, a couple of weeks of training and a couple of licenses later, any in-house shop can do that for themselves.

I will propose that there is an alternate type of pentest where my goal is not to root as many boxes as I can but to see if I can gain access to "what makes the company money". I am unannounced, trying not to get caught is part of the test, and the customer actually has a great patching program, an IDS, possibly an IPS, an outbound proxy, and somebody actively monitoring those technologies, and asking them to turn all that shit off is not going to happen....

How would our above scenario fare for that pentest?

How fast would your pentest be over if you did, say....

"For example, I enumerate open shares on the entire subnet, then pull down all .doc documents, then search them for interesting information from the recon phase (i.e. the name of the CFO)."

I'd give it about 2 minutes before any SOC that doesn't totally suck blocks your ass and you have to go begging for them to unblock your IP...that's never any fun.

I won't address everything from the PaulDotCom show notes, because frankly I think they completely missed the point of the post (I am not against all automation), but I will address this.

"[PaulDotCom] - While I agree, automation can have a negative effect on risk identification,
its a vital part of every penetration test. Much of the post talked about how automated tools "Don't have the love" and "you need TLC". That's all well and good, but how do you show risk when you've got a meterpreter shell on 30 hosts? What are those hosts? Do you spend 3 weeks of your time and the customer's money to demonstrate risk? No, you automate the mundane tasks and pieces and parts that can be automated. "

I guess my response would be: why do I need 30 shells to demonstrate a point? How would I have gotten those shells without being really sure about what/where the host was before I launched anything that would have gotten me a shell? But if ./autohack ran on a subnet and gave me 30 shells, then I guess I might find myself wondering what all those hosts are and how I got in. As for risk identification, if I got 30 shells the risk identification is an enterprise patching problem in this scenario.

"This does not make you any less skilled or a script kiddie, in fact, it makes you more of a master."

Umm, how? Did I have to run Nessus with credentials to find those vulnerable hosts after checking hundreds of KBs and throwing tons of traffic at the hosts, or did Core Impact throw 20 exploits at each box before I got lucky and popped a few? Or did I limit my exposure, enumerate what services were running and what service pack the host was at, and try the latest exploit against one host to see what would happen?

I'll spare you the rest; like I said, I think the point of the post was completely missed.
CG

Wednesday, January 14, 2009

When automation is too much automation or where's the TLC??!!


Like the"hiring geeks" post by ax0n said, we automate. Automation is a good thing but when it comes to pentesting I really think there can be too much automation. Too much automation leads to "fire and forget" activities and lack of TLC.

For example, there are a couple of scripts out there that try to automate your whole scan, enumerate, and exploit process, and all the pentest frameworks have some sort of autohack feature, and they all suck (as much as it pains me to say that, especially because I am such an MSF fanboy).

There is a certain amount of diligence that I think should be applied come actual "exploit" time. Scripts that automate or allow a "tester"(?) to script too much of the pentest, while handy, can cause real damage on a network, not to mention MISS things, possibly IMPORTANT things.

I've heard that some people will spend a ton of time writing a tool that will run everything from nmap to a bunch of different exes, some that do automated exploitation like adding rogue user accounts if it finds null passwords, or whatever random exes they can find on the net. Run that script and go to lunch.

The problems are that:

1. That much output really saves you no time if you go back and actually go through and validate the results.

2. Seems like no one knows how to enumerate and certainly no one teaches it. Automating all the steps between scan and exploit doesn't help the lack of enumeration either.

3. There is no "test"; you just ran a bunch of tools and the script did all the "work."

4. There is no personal experience or tester analysis if you just run a script. There is no thinking outside the box or expertise involved if you hit the autohack button or ./autohack.rb.

5. What about stealth? What about tactics? What about proper footprinting? What about emulating anything besides a script kiddie attacker?

6. Where is the fun and challenge in having the script do all the work for you?

7. Every pentest is (or at least should be) a battle of minds between the tester and the people admining and securing the network. If you've got any kind of decent admin the easy low-hanging fruit should be patched up, but that doesn't mean there still aren't vulnerabilities to be found and exploited by an experienced tester. It's all about finding the one thing that guy missed and then digging in from there.

8. There's no TLC with autohack. For the amount of cash you paid for a "real" pentest, there should be some love and work from your tester; that Nessus report just ain't cutting it.

Moral of the story: show some TLC, get good (better) at your craft, and don't rely on the latest autohack script to do things for you.
CG

Serving Up Malware via Ad Networks


So it's nothing new to serve up exploits via ad networks, but I thought it was cool that someone was serving up a PDF exploit via an ad network.

From http://www.curse.com/forums/t/69161.aspx

"I was looking at GridManaBars when Avast popped up a virus, 3 times. Twice on the addon's page, and once on the download page. I just viewed the page again, but nothing there.

Here's Avast's log.

12/2/2008 7:11:31 PM SYSTEM 1132 Sign of "JS:Packed-T [trj]" has been found in "hxxp://76.74.154.110/zv00108/pdf.php?id=9702&vis=1" file.
12/2/2008 7:11:31 PM SYSTEM 1132 Sign of "JS:Packed-T [trj]" has been found in "hxxp://76.74.154.110/zv00108/pdf.php?id=9702" file.
12/2/2008 7:11:50 PM SYSTEM 1132 Sign of "JS:Packed-T [trj]" has been found in "hxxp://76.74.154.110/zv00108/pdf.php?id=9702&vis=1" file. "

The URL looks similar from what I recall; it traces back to valuepromo.net. Ad banners, I assume?

A robtex lookup of that IP gives you two others in the valuepromo network:

76.74.154.110 server2.valuepromo.net
76.74.239.45 server3.valuepromo.net
76.74.239.143 server1.valuepromo.net

http://www.robtex.com/dns/qiweroqw.com.html

Google for those IPs and you'll see all kinds of people complaining about AV alerts and browser crashes.

The best stuff is here though

http://forums.techpowerup.com/showthread.php?t=81570

"
http://76.74.154.110/zyyqoeiwrueq/pdf.php?id=14273&vis=1

i'm sitting at techpowerup.com homepage and it takes me to this ^^ and brings me to a blank pdf document.... about 6 hours ago today, at techpowerup's homepage, it opened up acrobat reader (outside of firefox) with a blank document...."

Opens up a blank pdf, yeah that's not good...

On a more fun note, think of the damage you could do to a competitor's ad network by getting them to serve up malware and getting their whole netblock blocked? Good stuff.
CG

Tuesday, January 13, 2009

Winzip FileView ActiveX Exploit


It's not new at all [CVE-2006-5198], but I noticed that MSF did not have any coverage for WinZip's vulnerable ActiveX methods and the PoCs that I found did not work for me, so I put this together last night. The great thing about WinZip is that, like Adobe Acrobat, no one updates it. :-)

[1] WinZip FileView (WZFILEVIEW.FileViewCtrl.61) ActiveX

So run 'svn update' and have fun.

Cheers,
/dean
dean de beer

Monday, January 12, 2009

Interview with an adware author


Really good interview in which "Matt Knox, a talented Ruby instructor and coder, talks about his early days designing and writing adware for Direct Revenue. (Direct Revenue was sued by Eliot Spitzer in 2006 for allegedly surreptitiously installing adware on millions of computers.)"

http://philosecurity.org/2009/01/12/interview-with-an-adware-author
CG

Sunday, January 11, 2009

Open Letter from Geeks to Recruiters


Sharing this link as I've done a few job interviews lately and this post really hits home.

"For the love of all things good in the world, learn how to hire and employ a geek. You're doing it wrong."

http://www.h-i-r.net/2008/12/open-letter-from-geeks-to-it-recruiters.html
CG

Saturday, January 10, 2009

Oracle Sid Enumeration Metasploit Auxiliary Module


I recently pushed out (again with MC's help) an Oracle SID enumeration MSF auxiliary module for Oracle versions earlier than Oracle 10g Release 2. Starting with 10g Release 2 the TNS listener is protected and won't just cough up the SID for free; you'll have to guess it or brute force it (hopefully a SID-guessing module will come soon).

Here it is in action

msf > use auxiliary/admin/oracle/oracle_sid
msf auxiliary(oracle_sid) > info

Name: Oracle SID Enumeration.
Version: $Revision$

Provided by:
CG

Basic options:
Name Current Setting Required Description
---- --------------- -------- -----------
RHOST yes The target address
RPORT 1521 yes The target port

Description:
This module simply queries the TNS listner for the Oracle SID. With
10g Release 2 and above the listener will be protected and the SID
will have to be bruteforced or guessed.

msf auxiliary(oracle_sid) > set RHOST 192.168.0.43
RHOST => 192.168.0.43
msf auxiliary(oracle_sid) > run

[*] Identified SID for 192.168.0.43: admin1
[*] Identified SID for 192.168.0.43: admin2
[*] Identified SID for 192.168.0.43: database
[*] Identified SID for 192.168.0.43: dba3
[*] Identified SID for 192.168.0.43: dba5
[*] Identified SID for 192.168.0.43: dba7
[*] Identified SERVICE_NAME for 192.168.0.43: admin1
[*] Identified SERVICE_NAME for 192.168.0.43: admin2
[*] Identified SERVICE_NAME for 192.168.0.43: database
[*] Identified SERVICE_NAME for 192.168.0.43: dba3
[*] Identified SERVICE_NAME for 192.168.0.43: dba5
[*] Identified SERVICE_NAME for 192.168.0.43: dba7
[*] Auxiliary module execution completed
msf auxiliary(oracle_sid) >

If it's protected you'll see this:

msf auxiliary(oracle_sid) > set RHOST 192.168.0.137
RHOST => 192.168.0.137
msf auxiliary(oracle_sid) > run

[-] TNS listener protected for 192.168.0.137...
[*] Auxiliary module execution completed


If you are on the MSF 3.3 trunk, an svn update should be all you need to do.
CG

Thursday, January 8, 2009

More exploits pls.


MC committed 5 exploit modules I'd put together (2 browser & 3 fileformat) to the Metasploit trunk the other day. Just run 'svn update' to update your install with them. The five are:

[1] CA BrightStor ARCserve Backup AddColumn() ActiveX Buffer Overflow
[2] VeryPDF PDFView OpenPDF ActiveX OpenPDF Heap Overflow
[3] DjVu DjVu_ActiveX_MSOffice.dll ActiveX Component Buffer Overflow
[4] Microsoft Works 7 WkImgSrv.dll WKsPictureInterface() ActiveX Exploit
[5] SasCam Webcam Server v.2.6.5 Get() method Buffer Overflow

Cheers,
Dean
dean de beer

Wednesday, January 7, 2009

More Oracle Pwnage...I Lost Count...New Version Module


Thanks to help from MC, I pushed out an oracle_version scanner module today for MSF that uses MC's TNS mixin.

Here it is in action:

msf > use auxiliary/scanner/oracle/oracle_version
msf auxiliary(oracle_version) > info

Name: Oracle Version Enumeration.
Version: $Revision$

Provided by:
CG

Basic options:
Name Current Setting Required Description
---- --------------- -------- -----------
RHOSTS yes The target address range or CIDR identifier
RPORT 1521 yes The target port
THREADS 1 yes The number of concurrent threads

Description:
This module simply queries the TNS listner for the Oracle build..

msf auxiliary(oracle_version) > set RHOSTS 192.168.0.0/24
RHOSTS => 192.168.0.0/24
msf auxiliary(oracle_version) > run

[-] The connection timed out (192.168.0.0:1521).
[-] The connection timed out (192.168.0.1:1521).
[-] The connection timed out (192.168.0.2:1521).
[-] The connection timed out (192.168.0.3:1521).
[-] The connection timed out (192.168.0.4:1521).
[-] The connection timed out (192.168.0.5:1521).
[-] The connection timed out (192.168.0.6:1521).
[-] The connection timed out (192.168.0.7:1521).
[-] The connection was refused by the remote host (192.168.0.8:1521).
[-] The connection timed out (192.168.0.9:1521).
[-] The connection timed out (192.168.0.10:1521).
[-] The connection was refused by the remote host (192.168.0.11:1521).
[*] Host 192.168.0.12 is running: 32-bit Windows: Version 10.2.0.1.0 - Production
[-] The connection timed out (192.168.0.13:1521).
[*] Host 192.168.0.14 is running: Linux: Version 10.2.0.1.0 - Production
[-] The connection timed out (192.168.0.15:1521).
[-] The connection timed out (192.168.0.16:1521).

---SNIP---You get the idea---

If you are running the framework trunk, you can svn up and get the aux module as well as MC's 8i TNS overflow exploit.
CG

Weak Password Brings 'Happiness' to Twitter Hacker


From Wired Threat Level

"An 18-year-old hacker with a history of celebrity pranks has admitted to Monday's hijacking of multiple high-profile Twitter accounts, including President-Elect Barack Obama's, and the official feed for Fox News.

The hacker, who goes by the handle GMZ, told Threat Level on Tuesday he gained entry to Twitter's administrative control panel by pointing an automated password-guesser at a popular user's account. The user turned out to be a member of Twitter's support staff, who'd chosen the weak password "happiness."

http://blog.wired.com/27bstroke6/2009/01/professed-twitt.html

Great stuff; Twitter got for free what would have cost them $20k+ from any other pen test shop.
CG

Sunday, January 4, 2009

UK to allow warrantless "remote searching"


"TheHome Office has quietly adopted a new plan to allow police across Britain routinely to hack into people’s personal computers without a warrant.

The move, which follows a decision by the European Union’s council of ministers in Brussels, has angered civil liberties groups and opposition MPs. They described it as a sinister extension of the surveillance state which drives “a coach and horses” through privacy laws.

The hacking is known as “remote searching”. It allows police or MI5 officers who may be hundreds of miles away to examine covertly the hard drive of someone’s PC at his home, office or hotel room."

http://www.timesonline.co.uk/tol/news/politics/article5439604.ece
CG

MSF VBA payload Demo


Pretty good demo by Mark Baggett using the MSF payload with VBA output to create a malicious Word document.

http://markremark.blogspot.com/2009/01/metasploit-visual-basic-payloads-in.html

It's a shame everyone can do this now; it's been ol' reliable for quite a while :-(
CG

Saturday, January 3, 2009

Dissecting a Multistage Web Attack that uses IE7 0day


Couple of great posts over on AttackResearch on Dissecting a Multistage Web Attack that uses IE7 0day Parts 1 & 2.

http://blog.attackresearch.com/?q=node/4
http://blog.attackresearch.com/?q=node/5
CG

Friday, January 2, 2009

Googling Security: How Much Does Google Know About You? Book Review


Googling Security: How Much Does Google Know About You?

Greg Conti

5 stars

Witty (hopefully) Title for Amazon: Google may not be evil, but it's still worth keeping an eye on them

Disclaimer: I know the author personally and was given a review copy of the book.

I haven't read many (non-religious) books that totally changed my outlook on the world we live in. Robert O'Harrow's "No Place to Hide" (which I read in 2008) is one such book, and Greg Conti's Googling Security is the second.

The book begins with a simple question: "Have you ever searched for something you wouldn't want your grandmother to know about?" A simple but powerful question. Of course all of us have searched for topics we would rather our grandmother, friends, or spouse not know about. Would you ever consider posting the sum of your Google queries on your blog or website? Probably not, but just about all of us have given this information to Google in our dealings with them over the years. The book helps you take a look at how the sum of the information gathered through the multitude of Google's "free" tools adds up to take a huge chunk of our privacy, and very well could be giving Google a solid look into our personalities, including things most of us would prefer to keep private.

Breakdown of the chapters:

Chapter 1: Googling 1

Chapter 2: Information Flows and Leakage 31

Chapter 3: Footprints, Fingerprints, and Connections 59

Chapter 4: Search 97

Chapter 5: Communications 139

Chapter 6: Mapping, Directions, and Imagery 177

Chapter 7: Advertising and Embedded Content 205

Chapter 8: Googlebot 239

Chapter 9: Countermeasures 259

Chapter 10: Conclusions and a Look to the Future 299


A common theme the author found while conducting research for the book was "Google will collect personal information from you to provide you with a better experience."
Right now we expect Google to "do no evil", and their current policies say they don't personally identify their users, but as the author points out through the chapters of the book, Google gathers A LOT of data that they DO tell us about, and the ability to gather even more data is already built into its "free" services.

Some other reviewers have said that it's "preaching to the choir." While I agree that the normal person who would buy this book is in the IT field, I wouldn't be so quick to say that the average system admin or even security guy understands the magnitude of the information gathering that could be going on, or the value and power of that information. While not specifically mentioned in the book, I would encourage anyone interested in the topic to check out Conti's DEFCON 16 presentation, "Could Googling Take Down a President, a Prime Minister, or an Average Citizen?" When you think about the importance or value of that first page of results returned by Google, and how events, commerce, or public opinion could be shaped by crafting the results that are returned, you have a powerful tool (weapon?). What if the top results for a certain political candidate consistently returned only negative commentary? Or if events were "buried" by Google never returning those results? Even if Google doesn't currently appear to be altering results or misusing the personal information it collects, it's important to understand the power every user gives to Google, both in personal information and in controlling what is presented to searchers.

One of the best things the book has, which most books covering similar (privacy-type) topics lack, is a countermeasures chapter. While saying "don't use Google" really isn't an option for most people, the best advice from the chapter is teaching people to know and understand what they are disclosing and to adjust their behavior accordingly.

My only dislike in the book was the coverage of "physical" information leakage (TEMPEST). The material is good, but I don't think it was pertinent to the Google and privacy discussion.

Conti's DEFCON 16 INFO

Abstract: http://www.defcon.org/html/defcon-16/dc-16-speakers.html#Conti
Materials: http://www.defcon.org/images/defcon-16/dc16-presentations/defcon-16-conti.pdf




Book Review Criteria: http://carnal0wnage.blogspot.com/2008/03/book-review-criteria.html
CG

Thursday, January 1, 2009

Happy New Year!


Whether you're subscribed to the blog or just found yourself here by accident:

Happy New Year!

Here's to good things in '09.
CG