In August 2021, Dr. James Ransome, a veteran CISO, CSO, CPSO, and author, hosted a fireside chat at FuzzCon 2021. In the Fuzzing Real Talks session, Ransome was joined by industry experts Anmol Misra of Autodesk, Larry Maccherone of Contrast Security, Damilare D. Fagbemi of Resilient Software Security, and Jeff Costlow of ExtraHop Networks to discuss the ins and outs of a successful security testing program. From tooling selection to value justification to organizational buy-in to strategy building, these experts drew on their 50+ years of collective industry experience to share their personal tips, tricks, and cautionary tales. Listed below are the top three takeaways from Ransome’s panel:
The bottleneck of software security is getting developers to respond to findings.
Contrary to popular belief, the bottleneck is not finding new issues. “You can easily build piles of findings with various tools. The bottleneck is getting developers to actually do something about the findings,” shares Maccherone.
The lesson Maccherone eventually learned during his tenure at Comcast was to use the pull request as the one place to give developers feedback: “Developers are in love with the code they wrote that morning. It’s their heartthrob. They want nothing more than their friends on their team to say it’s wonderful. If there's feedback that prevents this code from getting merged in, they are going to pay attention to it. They are going to commit to micro learning.”
But he cautions that you can’t flood developers with 30,000 issues; you have to be selective about the issues you choose to highlight. Until recently, Maccherone admits, he didn’t feel DAST was sufficient at providing feedback in the pull request. However, recent evolutions in fuzz testing have shifted his mindset. Direct and immediate feedback within the SDLC was the key capability of fuzzing that got him over his resistance to inserting DAST into the SDLC. He also valued the accuracy of the results: fuzzing findings carry far more weight than those of SAST or IAST tools, which produce false positives. It’s about getting developers to resolve the findings, not finding new things.
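Maccherone’s point about selectivity can be sketched as a small triage step that runs before any feedback is posted to a pull request. The sketch below is purely illustrative — the `Finding` type, the severity labels, and the `select_pr_feedback` helper are assumptions for this example, not any tool’s actual API. The idea is to prefer findings that touch the code under review, rank the rest by severity, and cap the total so the developer is never flooded:

```python
from dataclasses import dataclass

# Hypothetical severity ordering; lower rank = more urgent.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    title: str
    severity: str
    in_changed_files: bool  # does the finding touch code changed in this PR?

def select_pr_feedback(findings, limit=3):
    """Pick a small, high-signal subset of findings to surface in the PR.

    Findings in the changed files sort first (False < True, so we negate),
    then severity breaks ties; the hard cap keeps the feedback digestible.
    """
    ranked = sorted(
        findings,
        key=lambda f: (not f.in_changed_files, SEVERITY_RANK.get(f.severity, 99)),
    )
    return ranked[:limit]
```

With a pile of findings, only the few most relevant ones would reach the developer; everything else stays in the tool’s full report.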
The key to getting developers invested in security.
The panelists agreed that it’s about feedback providing a contextual learning opportunity for developers. Research around contextual learning reveals three aspects: personal, social, and organizational.
- Personal. Is this important to me today? Is this important to what I'm trying to accomplish today?
- Social. Does it give me credibility with my peers?
- Organizational. Does this learning align with organizational priorities?
This is the hedgehog of learning. When culture and mindset are designed with this architecture in mind, incremental change can be initiated and sustained with momentum. “The first few months of standing up a program are the most critical. You must ensure that there are enough wins in the beginning and proof that the wins are hitting the right targets,” Misra elaborated.
Maccherone offered additional color by referencing a 2020 study he conducted within Comcast on how the company’s security training programs correlated with risk-reduction outcomes. It compared two groups: one that studied for 40 hours and one that studied for 2.5 hours. Surprisingly, the group that had taken 40 hours of training had worse security outcomes. The reason? Selection bias in who filled the 40 hours of study. They also saw that champion programs were underperforming, which ultimately led Comcast to switch over to the hedgehog method of contextual learning.
So, what does the ideal fuzz testing solution look like?
Fagbemi began: Some fuzz testing tools have a cult following, which is fine, but they are often limited in what they support. For example, some fuzzers only work on Linux. Some tools are limited in the input types they accept. Some require days to complete runs, which most organizations don’t have. Some require expertise to interpret results. These limitations need to be addressed, especially as development increases in speed.
Maccherone took in Fagbemi’s response and asked: “What if we didn’t give the findings to the developer immediately? Rather, wait until the developer is working on the code again. And, when they add the code to the pull request to merge it in, we share the fuzzing results. To the developers, it seems like responsiveness, when in reality it’s the result of a run that had been going on for days in the background. But is this technical detail relevant to the developer? No.”
Fagbemi responded: “What happens when the issue is high risk?”
Maccherone thought about that one and admitted he didn’t have the answer, but offered a suggestion: “What about two loops? One that is faster for high-risk issues, and one that is slower, living in a JIRA log that gets addressed at a later time via the hedgehog method?”
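Maccherone’s two-loop suggestion could be sketched as a simple severity-based router. This is a hypothetical sketch of the idea, not anything the panel implemented — the severity labels and the `triage` function are assumptions for illustration:

```python
def triage(findings, high_risk=("critical", "high")):
    """Route each finding into one of two loops.

    The fast loop holds high-risk issues that should surface immediately
    (e.g., as blocking pull-request feedback); the slow loop holds everything
    else, destined for a backlog to be worked through later.
    """
    fast_loop, slow_loop = [], []
    for finding in findings:
        target = fast_loop if finding["severity"] in high_risk else slow_loop
        target.append(finding)
    return fast_loop, slow_loop
```

The design question Fagbemi raised remains: the fast loop only works if severity classification is trustworthy enough to interrupt a developer.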
Got suggestions on the ideal fuzz testing solution? Let us know at firstname.lastname@example.org
To see the full session, you can watch the recording here.