Part 1. Current State of Pentesting
Problem
Thousands of organisations around the world execute tens of thousands of penetration testing (pentesting) exercises every year. According to MarketsandMarkets, the pentesting market is expected to reach USD 4.5 billion by 2025.
Many jurisdictions have established regulatory regimes that make various types of pentesting mandatory. Some frameworks go as far as making remediation mandatory as well!
And for many years the infosec community has been saying and writing that pentesting is broken. A Google search turns up hundreds of articles on that and similar topics.
I think I know what the source of that breakage is. This article is about that. The next articles will be about how to fix it when running sizeable pentesting programs.
But before I go any further, I must be clear about what I am not going to do – I am not going to get technical. I am not going to mention tools, AI, Machine Learning, Zero Trust, Quantum Computing or Next Generation <anything>.
Now let us see what is broken in pentesting.
What are the most common problems of a simple pentesting engagement?
Scope
For pentesting to bring value to the business, the scope of the engagement must reflect the business risks associated with the asset.
In reality, however, scope is driven by budget limitations and a limited understanding of the actual threats. And quite often, scope is left to the vendors to determine.
Execution
The quality of execution depends entirely on the individual pentesters. The application of methodologies cannot be verified. There is no way to know what has and has not been tested!
Outcome
The outcome of pentesting is a static report. It is hard to produce a single document that is both easily understood by non-technical stakeholders AND contains the information necessary to remediate and re-test.
And when we move to a sizeable pentesting program, the problems compound. I am going to outline six areas that relate to the challenges I've seen in many pentesting programs.
Some of the problems are project-level challenges amplified at scale. Others result from how pentesting is done at the organisational level.
Challenges
Challenge #1. It is too late.
Pentesting is one of the last activities before going live. So multiple project teams have to burn a lot of budget waiting for their reports, and for whatever immediate remediation activities follow.
An even more significant loss is incurred when the business is delayed – making pentesting a major business blocker. The cost of pentesting itself is insignificant compared to the cost of opportunities lost when products are late to market.
Challenge #2. Scope does not match the business threat environment.
With the scope of each pentest fragmented on a project-by-project basis, the overall business threat landscape is hard to take into account. There is no complete picture to ensure that the scope of the entire pentesting program reflects the business risk.
Challenge #3. Lack of assurance on methodology and overall coverage.
Every pentesting proposal usually references testing methodologies such as OWASP, PTES, or OSSTMM.
In reality, providers, internal teams and even individual pentesters tend to do it "their own way". Even when methodologies appear to be used, there is no referencing or traceability of test cases – meaning no visibility into what has been tested, what has not, and the reasons WHY it was not tested.
Challenge #4. Lack of Common Vocabulary.
Everyone uses different terminology. The same vulnerability is named differently by different pentesters, providers and even by internal teams within the same organisation. This makes it impossible to compare results or assess progress.
Challenge #5. Reporting and remediation tracking.
The outcome of each pentest is usually a static document – a PDF, DOC, or XLS report.
The outcome of a pentesting program is a set of dozens or hundreds of documents scattered across a variety of media – shared folders, SharePoint sites, emails. These documents contain the most hazardous information about the organisation – effectively a "how to hack me" guide – and they are very hard to secure when fragmented.
And that is only the first problem. The bigger problem is that these formats are nearly impossible to use for remediation tracking. As a result, vulnerabilities stay open.
Challenge #6. Analytics, or rather the lack of it.
Non-security testing programs (performance, integration, etc.) within large organisations generate a wealth of data that can be analysed and used to improve processes. This does not work well for pentesting.
Without benchmark test cases, standard terminology, and an analysis-friendly data format, it is impossible to determine root causes, measure improvements, or draw any meaningful conclusions from pentest data. This denies an organisation that spends a lot of money on pentesting the opportunity to make educated decisions about how to improve its security posture, improve its processes, and embed security into its wider IT.
The next articles will present my views on how to fix this. And it will not be another box with blinking lights, or a SaaS subscription of any kind. None of those alone is going to help.