What defenses do we have against software vulnerabilities?
Static Application Security Testing (SAST), also known as static code analysis, is perhaps the most popular tool. SAST uncovers vulnerabilities by analyzing the source code itself. The defects it identifies are known unknown risks, meaning SAST identifies a known class of weakness that, if left alone, might result in a vulnerability. So SAST operates in a world of what-ifs: it takes weakness information and makes an assumption about what could be a vulnerability. This often leads to a high rate of false positives.
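To make this concrete, here is a toy sketch of pattern-based static analysis, not any particular product's implementation. It walks the parsed source tree and flags a known weakness class (here, calls to `eval`, a stand-in example) without ever executing the code:

```python
import ast

# Source under analysis -- never executed, only parsed.
SOURCE = """
user_data = input()
result = eval(user_data)   # known weakness class: eval on input
safe = len(user_data)      # benign call, not flagged
"""

def find_eval_calls(source):
    """Return line numbers of every call to eval() in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

print(find_eval_calls(SOURCE))  # line numbers of suspicious calls
```

Note the what-if nature of the result: the tool reports every match of the pattern, whether or not that particular call is actually exploitable in context.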
Despite this shortcoming, SAST has its place in the software development lifecycle, and it remains a recommended preventative practice. At the opposite end from Static Application Security Testing is Dynamic Application Security Testing, or DAST.
This is an overarching category for a variety of techniques, all of which look at the active runtime code. To provide more specificity, we'll focus for the moment on Software Composition Analysis, or SCA. This is a software testing solution that complements SAST by looking for the known knowns: SCA tools can detect CVEs within the software they're scanning. SCA operates in the world of what is, relying on accepted and publicly known information to uncover known vulnerabilities.
SCA offers prescriptive advice, listing the affected component and providing a link to its workaround or patch. This level of actionability is unique among DAST solutions. However, bear in mind that not all vulnerabilities flagged by an SCA solution are exploitable. Additionally, each patch must be tested to ensure interoperability with the entirety of the application and the ecosystem it lives in.
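A minimal sketch of what an SCA check does: match declared dependencies against an advisory database. The tiny in-memory database below is illustrative (real tools query feeds such as the NVD), though the two advisories listed are real CVEs:

```python
# Toy advisory database: (package, affected version) -> advisory.
# Illustrative subset; real SCA tools consume full vulnerability feeds.
KNOWN_VULNS = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
    ("log4j", "2.14.1"): "CVE-2021-44228 (Log4Shell)",
}

def scan(dependencies):
    """Return (package, version, advisory) for every known match."""
    return [
        (pkg, ver, KNOWN_VULNS[(pkg, ver)])
        for pkg, ver in dependencies
        if (pkg, ver) in KNOWN_VULNS
    ]

deps = [("openssl", "1.0.1f"), ("requests", "2.31.0")]
for pkg, ver, advisory in scan(deps):
    print(f"{pkg} {ver}: {advisory}")
```

This is the world of what is: the scan only reports components whose exact signature already appears in the database, which is why it produces actionable results but can never surface a vulnerability nobody has catalogued yet.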
SAST and SCA Are Not Perfect
While SAST and SCA are effective and in wide use today, they are not perfect. When discussing SAST and SCA, the quality of analysis is often overlooked in favor of their ability to find known weaknesses and known vulnerabilities, so coverage becomes a critical factor. If you're only looking for something that is known, you might be missing the larger picture.
For example, CVE-2014-0160, aka Heartbleed, was a new class of vulnerability. It could not have been discovered using existing SAST and, by definition, could not have been identified by SCA either.
Vulnerabilities like this cannot be found by tools that only look for known vulnerabilities and weaknesses. SAST and SCA also fail to offer continuous ROI. This is due to something known as the pesticide paradox. Boris Beizer first coined the term in his 1990 book Software Testing Techniques: the pesticide paradox states that if the same software tests are repeated, eventually those tests will no longer find new bugs.
It's a common misconception that no new reported bugs indicates that the software under test is secure. More often than not, no new reported bugs actually indicates that the defects have clustered in limited sections of the software that are not currently being tested. This creates hotspots missed by most DAST solutions on the market today. The value of these tests in terms of continuous testing therefore decreases over time, as they circle around only a small part of the code.
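The pesticide paradox in miniature, using an invented function for illustration: a fixed regression suite exercises the same paths on every build and reports no new bugs, while a defect sits in a hotspot the suite never visits.

```python
def parse_age(text):
    # Hidden defect: crashes on non-numeric input, which the
    # fixed suite below never supplies.
    value = int(text)
    return max(0, value)

# The same tests, run unchanged on every build.
FIXED_SUITE = ["0", "42", "-7"]

def run_suite():
    """Green on every run -- no *new* bugs are ever reported."""
    return all(parse_age(case) >= 0 for case in FIXED_SUITE)

print(run_suite())  # True, build after build
# ...yet parse_age("abc") raises ValueError: a defect clustered
# in a section of the code the suite does not test.
```

Repeating these tests forever yields diminishing value; only inputs outside the suite's well-worn paths can expose the remaining defects.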
So what about those unknown vulnerabilities, those zero-days? That's where we need to consider a third type of software testing tool: fuzz testing, a much more agile technique under the dynamic testing category.
The defects that fuzzing tools identify are the unknown unknown risks, and there are far more of those lurking in any code. Fuzzing uncovers defects using unknown or uncommon attack patterns. Fuzzing tools operate between the world of what-if and what-is: they uncover unknown unknown defects, enabling organizations to be preventative and proactive.
What Is Fuzzing?
Consider a very simple computer program. At the left is a simple program behavior, where an unknown vulnerability lies at the bottom of a chain of conditional statements. At the upper right are a set of potential inputs, and at the lower right is what's known as a "minimum set", or the smallest possible set of inputs that covers every behavior. In computer science, programs are often represented as ordered trees.
For the sake of simplicity, traversing the paths of each tree can also be seen as traversing the paths of a maze, where some inputs result in correct behavior, some inputs go nowhere at all, and some inputs result in bad behavior. Inputs can be thought of as directions in the maze, and when the program executes, it begins to follow those directions. Here we see a successful execution. Here's another successful execution, albeit with a little extra work. While the inputs might be different, the net result is the same.
Here's an input that leads to bad behavior. We've simplified this in the diagram, but you can imagine that this could be any undefined or unexpected behavior, e.g. program crashes, misused or corrupted data output, hangs or freezes, etc. And again, although this is a different input, it leads to the same bad behavior.
The minimum set is valuable to us because, as in the program example, it covers every behavior that the program might exhibit. How can we get there? This is what advanced fuzz testing achieves.
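A minimal sketch of the idea, with the "maze" program and its alphabet invented for illustration: an unknown defect sits at the bottom of a chain of conditionals, and a blind fuzzer simply throws generated inputs at the program until one path leads to bad behavior.

```python
import random

def program(data):
    # A chain of conditionals hiding a defect at the bottom,
    # like the maze: only one sequence of "directions" reaches it.
    if len(data) >= 3 and data[0] == "f":
        if data[1] == "u":
            if data[2] == "z":
                raise RuntimeError("bad behavior reached")
    return "ok"

def fuzz(trials=200_000, seed=0):
    """Blind fuzzing: try random inputs until one crashes the program."""
    rng = random.Random(seed)
    alphabet = "abcdefuz"
    for _ in range(trials):
        candidate = "".join(rng.choice(alphabet) for _ in range(3))
        try:
            program(candidate)
        except RuntimeError:
            return candidate   # crashing input found
    return None

print(fuzz())
```

Note that the blind approach wastes most of its trials re-walking paths it has already seen; finding a minimum set that covers every behavior is exactly what the guided techniques described below improve on.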
Download: Fuzzing 101: Application Security
See how you can use fuzz testing to locate unknown vulnerabilities before your adversaries do.
GOOD, BETTER, BEST
So how do these different solutions compare? We can think of one axis being unknown versus known vulnerabilities. We can then add a second axis for static versus dynamic testing. Starting on the right side, the static side, we have static analysis. In the lower left is software composition analysis, and in the upper left we have fuzz testing. Now, if we think of code coverage as a simple maze, how does each of these solutions perform its tests?
With SAST, we're looking for unknown examples of something we do know. So let's say it's this 45-degree angle, all on its own. The static analysis test will go through and find every example that matches it in the code. However, not all of those examples are real defects. Imagine a much more complex maze, with many more false positives to sift through.
With SCA, we can then look toward known knowns, our CVEs. In this case we know exactly the signature that we're looking for in the code, and here we find that signature in our maze. But look at all the areas of the maze that the scan didn't test.
Finally, we can use fuzz testing to dynamically discover all the potential vulnerabilities in our maze. And with guided fuzzing, we can map out and explore all the different pathways within our code. More importantly, we can map out all the good pathways, the end results that we want, as opposed to all the bad pathways, the end results that we don't want. Once we have mapped out all these pathways, we can test continuously as we develop and iterate the software, always performing regression testing to make sure no change introduces any new vulnerabilities, and always proving that we have fixed any exposed and known vulnerabilities.
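The guided-fuzzing loop just described can be sketched in a few lines: mutate inputs drawn from a corpus, keep any mutant that reaches a new branch of the program, and report the input that triggers bad behavior. The three-branch target and mutation alphabet below are invented for illustration; real guided fuzzers such as AFL or libFuzzer instrument compiled code, but the feedback loop is the same idea.

```python
import random

def program(data, covered):
    # Instrumented toy target: records each branch it reaches.
    # Only an input starting with "fuz" descends all the way and crashes.
    if data.startswith("f"):
        covered.add("branch f")
        if data.startswith("fu"):
            covered.add("branch fu")
            if data.startswith("fuz"):
                raise RuntimeError("bad behavior reached")

def guided_fuzz(seed=0, rounds=5000):
    rng = random.Random(seed)
    corpus = ["aaa"]            # seed corpus
    seen = set()                # branches covered so far
    for _ in range(rounds):
        parent = rng.choice(corpus)
        # Mutate a single character of the parent input.
        i = rng.randrange(len(parent))
        child = parent[:i] + rng.choice("abcfuz") + parent[i + 1:]
        covered = set()
        try:
            program(child, covered)
        except RuntimeError:
            return child        # crashing input found
        if covered - seen:      # reached a new branch: keep this input
            seen |= covered
            corpus.append(child)
    return None

print(guided_fuzz())
```

Because inputs that reach new branches are kept and mutated further, the fuzzer incrementally maps the maze instead of wandering it blindly, and the surviving corpus doubles as a regression suite: replaying it on every build proves that previously exposed paths stay fixed.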
Continuous fuzzing has been a proven and accepted software security practice for years. However, it is also an advanced testing technique. Until now, fuzzing has been exclusive to technology behemoths such as Google, Microsoft, Apple, Nvidia, and more, which have the technical savvy and the budget to implement and maintain such advanced technology. The latest advancements in this field, however, have dramatically improved usability and automation, making fuzz testing increasingly accessible to the general public.
Now that you have some idea of the importance of software security today, and you are aware of some of the tools that allow you to do your own testing on whatever piece of software or device you want, with fuzz testing you should be able to find the zero-day vulnerabilities lurking inside your software, and therefore defend it against adversaries. In the next video, we'll talk more in depth about fuzz testing.