I was recently challenged to come up with the best methods you can use in 2023 to make the systems you're developing more secure. I realized it boils down to one thing, and it’s what all the highest-performing companies are already doing: automating offense as part of your defensive security program.
There are three steps to this strategy:
1. Focus on continuous security rather than one-step scans.
Top-performing security teams don't set impossible standards. They look at cybersecurity differently, and it allows them to get disproportionately large returns on their efforts.
The average performer thinks about security as a light switch: it’s either on or off, and you’re either secure or insecure. The tell-tale sign of this mindset is designing security around a sequence of scans that must be passed. You scan your software build for known OSS vulnerabilities. You scan your network for known vulnerable services. You hire pentesters who try known exploits against your systems. Sound familiar?
The scan game is highly reactive: each step looks for something already known to attackers. It sets the expectation that security will produce a master list of all problems, and that every one of them must be fixed before you’re secure. That’s demoralizing, because it puts security in the position of gatekeeper rather than enabler, and it sets an impossible security standard.
High performers realize this game is unwinnable. In fact, they are in a completely different game. Instead of looking at security as a light switch, they look at it as a loop. You improve by executing that loop faster and faster, and you win when you end up with such fast execution that attackers—even if they find a bug first—don’t have enough time to gain an overall advantage.
This means you should think about security as something that always runs rather than as a gatekeeper. High performers like Google and Microsoft (through its SDL) do this by continuously fuzzing their software with their own customized systems. Newly identified vulnerabilities are dropped into the ticketing system, then verified as fixed on the next release.
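To make the "always running" loop concrete, here is a minimal, hand-rolled fuzzing sketch in Python. Production setups use dedicated engines (libFuzzer, AFL++, Mayhem); `parse_record` and its planted bug are purely hypothetical stand-ins for the code under test:

```python
import random
import traceback

def parse_record(data: bytes) -> dict:
    # Hypothetical target: a parser with a planted bug for demonstration.
    if len(data) > 3 and data[0] == 0xFF:
        raise ValueError("malformed header")  # the "vulnerability"
    return {"length": len(data)}

def fuzz(target, iterations=10_000, seed=0):
    """Throw random inputs at `target`; collect crashing inputs for triage."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            # In a real pipeline, each crash would be deduplicated and
            # filed automatically in the ticketing system.
            crashes.append((data, traceback.format_exc()))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

Each crashing input doubles as a regression test: replay it on the next release to confirm the fix actually landed.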
2. Focus on fixing vulnerabilities over finding vulnerabilities.
Think of a vulnerability as just a bug and an attacker’s exploit as just a test case for that bug. Suppose we had a magic scan that identified all known and even unknown vulnerabilities, all in one pass. Then, by definition, that scan could find all bugs, which is something any developer will tell you is impossible. It’s not the right benchmark.
Fundamentally, both offense and defense want to find vulnerabilities, but that’s not the end goal. High performers focus on finding vulnerabilities, figuring out which are real, proposing fixes, and testing those fixes as quickly as possible. They know if they can get the fix out before the attacker finds the same vulnerability, they win.
You can implement this strategy by measuring the time from discovery to actually fielding a fix, not just the number of vulnerabilities found. This isn’t hard: just include regression-testing time as part of your security benchmark. It keeps security focused on testable problems rather than hypothetical ones.
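A sketch of that measurement, assuming your ticketing system can export discovery and ship dates (the field names and sample data here are illustrative):

```python
from datetime import datetime
from statistics import mean

# Hypothetical ticket export: when each vulnerability was found and when
# the fix (including its regression tests) actually shipped.
tickets = [
    {"found": "2023-03-01", "fix_shipped": "2023-03-04"},
    {"found": "2023-03-02", "fix_shipped": "2023-03-10"},
    {"found": "2023-03-05", "fix_shipped": "2023-03-06"},
]

def mean_time_to_remediate(tickets) -> float:
    """Average days from vulnerability discovery to a fielded, tested fix."""
    days = [
        (datetime.fromisoformat(t["fix_shipped"])
         - datetime.fromisoformat(t["found"])).days
        for t in tickets
    ]
    return mean(days)

print(f"MTTR: {mean_time_to_remediate(tickets):.1f} days")  # prints "MTTR: 4.0 days"
```

Tracking this one number over time tells you whether the discover-fix-test loop is actually speeding up.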
One reason Google and Microsoft have adopted fuzzing is that 90% of the bugs it finds get fixed, far exceeding other approaches, and those fixes land 2.23 times faster.
3. Create artifacts as part of your normal build and deploy process.
Usually this is as simple as building a Docker container: anyone can stand up the container and start attacking it without harm to production. We’re even seeing a trend in OT toward using a Docker container or a digital twin as part of the software-in-the-loop testing push. Containerization lets you scale testing efforts and adopt an offensive-minded testing strategy without slowing down deployment or injecting failures into environments where stability is needed for acceptance testing or final QA cycles.
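As a minimal sketch, a Dockerfile built in CI is often all the artifact you need. This one assumes a Python web service listening on port 8080; the file names and entry point are illustrative:

```dockerfile
# Sketch: package the service as an image during every build so security
# can pull and attack an identical copy without touching production.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

The same image that passes QA is the one the offensive tooling hammers on, so findings reproduce exactly.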
Use Mayhem as Part of Your Offensive DevSecOps Strategy
These techniques aren’t new. Offensive-minded hackers have used them for years, and there are dozens of open-source projects that help individuals adopt more proactive testing approaches.
We built Mayhem to help teams scale up these approaches and move from scan-based, reactive testing to a focus on fixing vulnerabilities and shipping faster. Mayhem “attacks” your applications in hundreds of different ways every second, executing your code under different conditions to uncover vulnerabilities at machine speed. It’s like having a team of top-notch hackers on call, 24/7/365, helping you stay ahead of risk.
Development Speed or Code Security. Why Not Both?
Mayhem is an award-winning AI that autonomously finds new exploitable bugs and improves your test suites.