There is a great deal of valuable data to be gained from the penetration test element of an assessment. Knowing, for example, whether your perimeter is secure, and validating that knowledge, is important. Looking at the scope of the assessments I have been working on recently, though, I think most people are looking to gain more than just an initial validation of their existing security controls.
Any good penetration test or vulnerability assessment will deliver a set of results that includes a list of vulnerabilities, a risk value assigned to each, and some remediation measures. Most of these assessments use the following calculations, or variants thereof: Risk = Threat x Vulnerability, or Risk = (Threat x Vulnerability) x Impact, and Annual Loss Expectancy = Single Loss Expectancy x Annualized Rate of Occurrence. These calculations, while a great means of assessing risk, are usually applied to a risk management lifecycle that is circular and repeating. This, in my opinion, creates an environment in which risk is not being managed but is really just being identified, fixed, and re-identified: a reactive rather than proactive environment. These calculations also rely on values that are hard, if not impossible, to define in all but the simplest of situations or scenarios.
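To make the ALE calculation concrete, here is a minimal sketch in Python. The asset value, exposure factor, and occurrence rate are illustrative assumptions, not real figures:

```python
# Hypothetical figures illustrating Annual Loss Expectancy (ALE).
# ALE = Single Loss Expectancy (SLE) x Annualized Rate of Occurrence (ARO).

asset_value = 50_000.0   # assumed value of the affected asset, in dollars
exposure_factor = 0.4    # assumed fraction of that value lost per incident

# Single Loss Expectancy: expected loss from a single occurrence
sle = asset_value * exposure_factor

# Annualized Rate of Occurrence: expected incidents per year
aro = 0.5                # assumed: one incident every two years

ale = sle * aro
print(f"SLE = ${sle:,.2f}, ALE = ${ale:,.2f}")
```

The difficulty in practice is not the arithmetic but supplying defensible values for the exposure factor and occurrence rate, which is exactly the problem described above.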
Let's assume that after installing an IDS/IPS at your perimeter, you detected hotbar/spybar spyware traffic, indicating that users on managed stations had local admin rights that allowed them to install the malware. Using this information you were able to assess your current controls, determine why those users had admin rights, and take steps to fix it. While the product proved its worth, this was a reactive response to an issue. Ideally, defining a set of metrics by which to measure the security of each host's configuration would allow you to better define, assess, and improve the security controls you have in place. By tracking this data over time you can move towards a more proactive environment.
Using the example of Host Configuration Management I would look to achieve the following:
A Benchmark score for Workstations/Laptops/Servers. - This allows you to standardize configurations and characterize the degree of lockdown applied to the OS.
The percentage of Workstations/Laptops/Servers using the standard build image. - This allows you to measure the conformance of systems in your environment to the standard build.
The percentage of systems in compliance with the standard configuration. - This shows how many systems conform to the standard build requirements regardless of how the system was built (manually, from an image, etc.).
The network services ratio. - This identifies potential ingress points on the hosts. Tracking unnecessary/vulnerable services that should be disabled allows you to determine the number or percentage of systems deviating from your standard build as well as potential ingress points or vulnerable systems. This data can also be applied to your patch management processes in order to prioritize which systems require immediate patching, etc...
The percentage of systems that are remotely managed. - This would identify the systems that can be administered remotely and are subject to patch management and anti-malware controls.
The percentage of critical systems actively being monitored. - This helps identify the extent of uptime and monitoring controls in place.
The number/percentage of systems logging events remotely. - This determines how many systems are forwarding security event data to a central log server.
The number/percentage of systems using an NTP server for time synchronization. - This and the previous two metrics are important for Incident Handling response. When an incident occurs and the event information needs to be accessed, having this data in a central location ensures access to, and the integrity of, those events. Time synchronization is important when reconstructing a sequence of events in the correct order.
The response time to (re)configure a system in an emergency. - This tracks the response time to reconfigure a set of systems in the event of a zero-day attack or incident. This should ideally be organized by OS, department, and location.
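Most of the metrics above reduce to counting hosts with a given property against the total. A minimal sketch of that calculation, with an invented inventory and field names purely for illustration:

```python
# Illustrative asset inventory; the records and field names are assumptions,
# standing in for whatever your inventory or configuration tool exports.
hosts = [
    {"name": "ws01",  "standard_image": True,  "compliant": True,  "remote_logging": True},
    {"name": "ws02",  "standard_image": True,  "compliant": False, "remote_logging": True},
    {"name": "srv01", "standard_image": False, "compliant": True,  "remote_logging": False},
    {"name": "srv02", "standard_image": True,  "compliant": True,  "remote_logging": True},
]

def percentage(hosts, field):
    """Percentage of hosts for which the given boolean field is true."""
    return 100.0 * sum(h[field] for h in hosts) / len(hosts)

# Each metric is a single repeatable number, easy to track over time.
for field in ("standard_image", "compliant", "remote_logging"):
    print(f"{field}: {percentage(hosts, field):.1f}%")
```

The value comes from running the same calculation on the same data source at regular intervals, so that the trend, not just the snapshot, informs your decisions.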
Having a set of metrics that are easy to gather, repeatable, expressible as a number or percentage, and relevant to your environment will help with analysis and allow you to become far more proactive.
Quantitative metrics like these can be applied to multiple areas of control, including the results of a penetration test or vulnerability assessment. Some metrics that would have immediate value would be:
Perimeter Security (Anti-virus/spam/malware, Firewalls, IDS/IPS) and Threats/Attacks (Events and Incidents).
Coverage and Control (Vuln/patch management, AV management, Host management). These determine the effectiveness and success of your existing security program.
Availability and Reliability (Uptime, recovery, change control).
Application/Web application security.
Penetration Testing/Vulnerability assessments. These can provide valuable data but need to be defined by your environment: identifying and grouping issues by department, looking at the difficulty of the exploit (remote, or requiring local access, etc...), and assessing the impact of each vulnerability in terms of your existing security controls (defense in depth).
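One way to turn pen-test findings into a repeatable number is to weight each finding by exploit difficulty and aggregate by department, as suggested above. The weights, findings, and department names below are illustrative assumptions, not a standard scoring scheme:

```python
# Assumed weights: remotely exploitable issues count more than those
# requiring local access. Tune these to your own environment.
DIFFICULTY_WEIGHT = {"remote": 3, "adjacent": 2, "local": 1}

# Hypothetical findings from an assessment report.
findings = [
    {"id": "V-1", "department": "Finance", "difficulty": "remote"},
    {"id": "V-2", "department": "Finance", "difficulty": "local"},
    {"id": "V-3", "department": "IT",      "difficulty": "adjacent"},
]

# Aggregate a weighted exposure score per department.
scores = {}
for f in findings:
    scores[f["department"]] = scores.get(f["department"], 0) + DIFFICULTY_WEIGHT[f["difficulty"]]

print(scores)  # e.g. {'Finance': 4, 'IT': 2}
```

Tracked assessment over assessment, scores like these show whether a department's exposure is trending up or down, rather than just listing this quarter's vulnerabilities.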
These are all predominantly technical in nature, but the same methodology could be applied to assessing user awareness and compliance. I think that regardless of what you decide to have assessed, gaining valuable and repeatable metrics from the results should be the outcome.
A great read on Security Metrics, and where most of the above content is from, is Andrew Jaquith's book Security Metrics. It's an excellent read and is extremely relevant in today's maturing security environments.