So I’m listening to the “Larry, Larry, Larry” episode of the Risk Hose podcast, and Alex is talking about data-driven pen tests. I want to posit that pen tests are already empirical. Pen testers know what techniques work for them, and start with those techniques.
What we could use are data-driven pen test reports. “We tried X, which works in 78% of attempts, and it failed.”
We could also use more shared data about what tests tend to work.
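To make the idea concrete, here is a minimal sketch of what a data-driven report entry might look like. This is purely illustrative, not anything from the podcast: the `TechniqueRecord` and `report_line` names, and the 39-of-50 numbers behind the 78% figure, are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TechniqueRecord:
    """Aggregate outcomes for one technique across past engagements."""
    name: str
    attempts: int
    successes: int

    @property
    def success_rate(self) -> float:
        # Fraction of past attempts where the technique worked.
        return self.successes / self.attempts if self.attempts else 0.0

def report_line(record: TechniqueRecord, worked_here: bool) -> str:
    """Render a finding in the 'we tried X, which works in N% of attempts' style."""
    outcome = "succeeded" if worked_here else "failed"
    return (f"We tried {record.name}, which works in "
            f"{record.success_rate:.0%} of attempts, and it {outcome}.")

# Example: a technique that historically works 78% of the time but failed here.
sqli = TechniqueRecord("SQL injection on the login form", attempts=50, successes=39)
print(report_line(sqli, worked_here=False))
# -> We tried SQL injection on the login form, which works in 78% of attempts, and it failed.
```

The interesting part isn't the code, of course; it's the shared baseline data that would have to exist before any tester could fill in those numbers.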
I like the idea, and I think it could be useful.
However, they need to drop the “pen test” part. You are solidly into vulnerability assessment territory when what you are doing is “OK, I tried 1, 2, 3, 4, and 5; 1 and 3 worked; on to the next set of tests.” That’s vulnerability assessment (with exploitation, if you want to get technical), not pen testing.
Pen testing is about a human looking at the problem and figuring out how to break it, not some scanner. That is going to be very hard to standardize and put hard numbers on, and I don’t think it’s going to be possible without tying up your tester’s time with bullshit.
This post, and really any methodology document you will ever read or write, will have gaps, because no document on this subject can ever be fully inclusive of every vulnerability and the myriad variations that exist for many of them.