First and foremost, if you’re implementing DevSecOps in the DoD, kudos to you for taking initiative.
DevSecOps is enabling the Department to develop quickly and securely, so organizations can continuously meet critical and urgent needs of the warfighter. It’s a dramatic change from waterfall development, where the actual problem has often changed by the time the requirements are finally delivered.
To help organizations get started, the DoD has provided general guidelines, principles, and requirements for implementing the right DevSecOps framework for you and your organization. The August 2019 Enterprise DevSecOps Reference Design was architected by the Director of Architecture and Engineering, Thomas Lam, and Special Advisor, Nicolas Chaillan.
To ensure your success, it is crucial to implement in line with DoD best practices. Best practices, industry experts, and even independent DoD tests have stressed that the key to adding security into DevOps pipelines is focusing on the “test” phase and making sure your security tools are actionable.
Putting the “Sec” in DevSecOps
DevOps is all about removing blockers in the pipeline so developers can iterate as fast as possible. One of the largest hurdles to incorporating security in DevOps is that the two disciplines have opposing priorities: DevOps is about time-to-outcome, while security testing is about quality-of-outcome.
You may be surprised to find that DevOps’ need for speed and iteration will eliminate several security tools as viable options for your organization.
Too often, we’ve seen the actionability requirement overlooked, leading to poor outcomes. A bad approach to security in DevSecOps looks something like this:
- Too many false positives: A security tool is run and it points out some real bugs, but also has numerous false positives.
- Manual results validation: Security experts, who are expensive and hard to find, spend their time manually reviewing potential bugs -- a waste of scarce expertise.
- Disconnected teams: Security staff manually file bug reports on true positives so developers can start looking into them. DevSecOps is about integrating teams, not deepening the divide between development and security.
- Unvalidated fixes: Developers propose fixes, but they can’t test whether a fix really addresses the security issue or satisfies the security tool -- meaning the reports aren’t actionable. Unable to verify their work, they do the best they can and call the task done. This approach breeds apathy.
- Lack of measurability: New versions with the supposed fixes are released. While the report is cleaner, it remains unknown whether security has truly improved.
The common thread in this bad approach is a lack of actionable information at every stage of the pipeline. Agile and DevSecOps are about identifying issues and iterating fast. That means if your tool can’t accurately identify issues and prove them, you will iterate slowly -- and, worst of all, risk falling back into the “waterfall” mentality. No one wants the above approach. The best thing you can do is pick tools that provide accurate intelligence: tools that make every identified issue immediately actionable and that couple security and development teams for true DevSecOps.
Static Analysis? Maybe Not…
A common security testing solution that is considered for a DevSecOps pipeline is static analysis. There are two types of static analysis:
- Linters: Linters point out where code doesn’t meet security specifications you’ve defined. For example, the Joint Strike Fighter development guidelines say not to use insecure functions such as strcpy, so your organization may configure your linter to flag such noncompliance. Linters are useful in DevSecOps because their findings are actionable.
- Static Application Security Testing (SAST) tools: SAST tools are like the grammar checker in Microsoft Word. You may not accept every suggestion the grammar checker makes, because some recommendations don’t work in context. Similarly, SAST tools point out every potential problem, but many of the analysis “findings” aren’t real. SAST tools can perpetuate many of the issues in the “bad” pipeline example we shared in the previous section.
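To make the linter idea concrete, here is a minimal sketch in Python; the banned-function list and the sample C snippet are illustrative, in the spirit of the strcpy prohibition above, not any particular tool’s rule set:

```python
import re

# Hypothetical banned-function list, echoing guidelines that
# prohibit insecure C functions such as strcpy.
BANNED = {"strcpy", "strcat", "sprintf", "gets"}

def lint_source(source: str) -> list:
    """Return (line_number, function) pairs for each banned call.

    Every finding is actionable: it names an exact line and the
    exact insecure function to replace (e.g. strcpy -> strncpy).
    """
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func in BANNED:
            # Match a call site: the function name followed by '('.
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func))
    return findings

c_code = """\
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* flagged: unbounded copy */
}
"""
print(lint_source(c_code))  # -> [(3, 'strcpy')]
```

Because the output points at a specific line and a specific fix, a developer can act on it immediately -- exactly the property that makes linters a good DevSecOps fit.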
SAST tools aren’t bad per se; they just were never intended for the DevSecOps use case. Forty years ago, computer scientists tried to prove programs were correct before releasing them. Because SAST tools come from that line of research, they were built with a waterfall method in mind. At that time, it was considered completely normal to spend weeks, months, or even years of human effort to prove programs correct.
A SAST tool’s modus operandi is to try to prove a program is bug free. However, research in computer science has shown that automatically proving an arbitrary program bug free is impossible. Most notably, Alan Turing -- better remembered today as the World War II codebreaker who helped crack the Enigma -- proved in 1936 that no algorithm can decide whether an arbitrary program will even halt, and proving the absence of bugs runs into the same undecidability. Thus, if you strive for perfection, you’ll never get your code out the door. In DevSecOps, you can’t take the mindset of “I want a bug-free program”, because you’ll never move fast.
DevSecOps: What Modern Organizations Do
DevSecOps is a different mindset. DevSecOps doesn’t try to get everything right all at once. Instead, it looks at the power of iteration: build, check, redeploy, repeat. The idea is if you can get through those four actions quickly and continuously, you’ll converge on the right solution faster.
That’s why many software companies, such as Google and Microsoft, rely heavily on fuzzing for security. For example, Google Chrome is a high-value target: find a zero-day in Chrome and you can own millions of devices. Reconnaissance is easy, too -- Chrome is built on the open-source Chromium codebase, meaning every attacker out there can study the code for vulnerabilities. So how does Google keep Chrome so safe? It almost exclusively uses fuzzing to test the security of its code -- and the code of its supply chain.
Google’s efforts have been so effective that over the last three years it has reported over 20,000 new bugs and vulnerabilities -- and it has done so by integrating fully automatic fuzzing into its development pipelines. Google has shared that 80% of the bugs it uncovers are detected by fuzzing, while the remaining 20% are found through other techniques or organically in production.
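Google’s actual tooling is coverage-guided (fuzzers such as libFuzzer, run at scale through OSS-Fuzz), but the core loop can be shown with a toy random fuzzer. The sketch below is purely illustrative: `parse_record` is a hypothetical parser with a planted length-check bug, not real Chrome code:

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical parser with a planted bug: it trusts the leading
    length byte and never checks it against the payload size."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    return data[1 + length]  # IndexError when the length byte lies

def fuzz(target, runs=10_000, seed=0):
    """Throw random inputs at `target` and return the first one that
    triggers an unexpected crash (anything other than ValueError)."""
    rng = random.Random(seed)
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            continue            # expected, clean rejection -- not a bug
        except Exception:
            return data         # a crash: an immediately actionable test case
    return None

crash = fuzz(parse_record)
print(crash)  # a concrete input the developer can replay and debug
```

The key property is that the fuzzer’s output is not a vague warning but a concrete input: the developer can replay it, watch the crash, and know with certainty that the bug is real.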
When a fuzzer identifies a bug, the developer receives a test case that demonstrates exactly how to trigger it. The same test case can validate a developer’s patch: re-run it and confirm the input no longer triggers the vulnerability. The collection of test cases for previously found bugs pays for itself as a regression suite; Google has also cited that 40% of its bugs are regressions.
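The replay-and-validate workflow can be sketched as follows. Everything here is hypothetical for illustration -- the parser, its fix, and the saved crash corpus -- but the pattern (re-run every previously crashing input on every build) is the standard way fuzzer findings become regression tests:

```python
def parse_length_prefixed(data: bytes) -> bytes:
    """Patched parser: the original version indexed past the end of
    the buffer whenever the length byte overstated the payload."""
    if len(data) < 1:
        raise ValueError("empty input")
    length = data[0]
    if 1 + length > len(data):          # the fix: validate the length byte
        raise ValueError("length exceeds payload")
    return data[1:1 + length]

# Regression corpus: inputs that crashed earlier builds, saved verbatim.
CRASH_CORPUS = [b"\xff", b"\x05ab", b"\x02a"]

def replay_corpus(target, corpus) -> bool:
    """Re-run every saved crasher. Clean rejection (ValueError) is fine;
    any other exception propagates and fails the build loudly."""
    for data in corpus:
        try:
            target(data)
        except ValueError:
            pass                        # cleanly rejected -- patch holds
    return True

print(replay_corpus(parse_length_prefixed, CRASH_CORPUS))  # True
```

Because the corpus is just data, it travels with the repository and runs in seconds on every pipeline pass -- which is how a fixed bug stays fixed, and why a 40% regression rate makes the corpus so valuable.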