Think Like a Hacker: Webinar Transcript and Video

Debra Hopper
August 1, 2023

In this webinar, you will gain valuable insights into thinking like a hacker. These methods will empower you to strengthen your defense mechanisms and safeguard your digital assets. 

Our expert speaker, Josh Thorngren, will guide you through the fascinating techniques used by hackers, arming you with the knowledge you need to protect yourself and your organization.

[Heads Up: This transcription was autogenerated, so there may be errors.]


Lakshmia: Welcome to the Think Like a Hacker webinar. I am Lakshmia. I am the product manager here at Mayhem. And here today is Josh, our VP of Developer Advocacy. And I'll let him tell you more about that when we get to that point. But we are here to help you guys think like a hacker and navigate our cybersecurity world in the current state that we are in today.

So if you have questions, feel free to either ask in the chat or in the Q&A boxes. Either one is cool, and then when we get to the questions section, we'll start answering those as well. So feel free to ask those questions as they come to you. Ready, Josh? 

Josh: Any time. All right. Well, hey, thanks, everyone, for hopping on and taking some time with us today. This is about thinking like a hacker. And it's not just about thinking like a hacker. It's about how do we start to learn from what hackers do and start to put that into practice as cybersecurity practitioners, DevSecOps practitioners, application security practitioners in our organizations. So there's gonna be a mix of kind of a background on what and why and going into some technical recommendations. Now, look, we have a product we make.

Introduction (1:12)

I'm not here to show you our product. I'm not here to demo our product and be like, listen, this is why it's the best. Now, the story I'm telling you: hey, we've got a product that helps make that a reality. But the things we're gonna talk about today are some core principles, and independently of how you implement them in your organization from a tool-selection standpoint, adopting them is going to make you more secure and better prepared for cyberattacks as they come.

So let's talk about this a little more specifically, what we're gonna cover today. We’re going to talk about the problem, thinking like a hacker, we'll talk about some questions. Nothing complicated. Who am I and what gives me some knowledge about this? So here at ForAllSecure, I do a couple of things. I help shepherd a product roadmap. I lead our developer relations and advocacy programs.

And I also work very heavily across everything we do in marketing. I'm an ex-developer, ex-QA engineer, ex-DevOps engineer and manager, and then a marketer at a series of DevSecOps- and DevOps-type companies. I've spent my career trying to figure out how we make developers more productive while still meeting the security and compliance requirements that are just an essential fact of life in today's digital world.

For all the “shift left”, we’re still finding a lot of things in production. (3:25)

So that said, let's get started. Like ten years ago, I was shifting things left, and I was shifting things left because, hey, this was going to help us fix issues before they reached production. We shifted a bunch of cybersecurity left, we shifted a bunch of IT tasks left, we shifted a bunch of operations tasks left. And, you know, last year Invicti surveyed a bunch of AppSec professionals and developers, and 41% of AppSec professionals and 32% of developers say, hey, we find things in production, and sometimes that takes upwards of 5 hours a day.

So what we're really saying is, well, we shifted stuff left, but we're still finding a bunch of things in production. Mmmkay, seems weird, but okay.

False positives are more likely than real results. (4:18)

And so then 80% of teams say that their AppSec tools have a false positive rate of over 50%. And I always struggle with which stats to put on the slide, because any application security survey anywhere is going to have numbers pretty similar to this, where the false positive rate is 45% plus.

And you can see here that a lot of folks say that they have 75% plus false positives. So, okay, we're still finding a bunch of things in production and developers and AppSec teams get a lot of noise from the tools they use today. All right. 

Speed and security are still at odds. (5:00)

Well, it's probably no surprise then that 75% of teams say they sometimes or often skip security checks to ship faster and meet deadlines.

You know, and maybe for the other 25%, maybe it's not true; maybe they lied on the survey. That's my snarky commentary. But let's be honest: if security's challenging and dev teams are KPI'd on shipping features that drive customer retention, that drive revenue, it's going to be harder for them to always follow security processes. And this is still a challenge.

So like I want to take a step back because look, we've been shifting security left to help developers out for, you know, 15 years now. 

What really “shifted left” with AppSec? (5:43)

So what'd we shift left? Well, we shifted left a lot of things around posture. We shifted left static analysis, dynamic testing, source code analysis, SBOMs, things like that. But we really kept the activity piece of application security, called it pentesting, and kept it on the right, post-production.

So now what's happened with traditional software is, well, we shifted all the noise left too. We shifted left scanning for a bunch of known vulnerabilities, but we didn't actually make finding and fixing faster; we just made it happen earlier in the process. And so what we've done is, sure, we can find it and fix it earlier in the process, and that does mean fewer issues reach production.

But we've slowed down how code gets released by sheer virtue of the noise that has shifted left along with application security. Now, at the same time, I hear this pain from our customers every day, and the world we live in has some really hard asymmetries to cybersecurity as well that compound this all. 

Cybersecurity Asymmetries (7:08)

So I want to talk about those, because, okay, we've overrun developers with noise and a lot of false positives, and it's hard to always follow security processes.

We're still just kind of scanning for posture: hey, are there risks here? Instead of understanding: is there actual exploitable behavior?

Attackers need one weakness; defenders must protect all paths. (7:29)

And meanwhile, attackers need one weakness. You know, I've got the Death Star thing here in the background; this is just the truth. As defenders, we have to do our best to keep as many different attack vectors secure and minimize our exposure. But if an attacker can find one weakness, they're in, and that's challenging. It's challenging to constantly play defense when just one weakness can lead to an exploit, can lead to a breach.

And, you know, in cybersecurity, as an aside, we don't get recognized enough for what we're doing to protect all the paths. Cybersecurity is always in the news for not protecting one. Well, what about the other 500 that we successfully protected? This is an asymmetry of how cybersecurity works and how it's perceived today.

There are 570 developers for every cybersecurity professional. (8:22)

Another piece, especially with AppSec, is that there are 570 developers for every cyber professional. So either we have to scale the knowledge and exposure of AppSec professionals to what's going on in my code base, what's going on in my pipeline, how things are deployed, what the attack surface of my apps is, or we have to deputize developers to participate in the security process more effectively. This is kind of where shift left started. It was, hey, this is starting to become a problem.

I don't know what these numbers were ten years ago, but probably, you know, one to 100 or 200. And so we started saying, well, let's help get developers to do some of this. But we just shifted security tasks left onto developers instead of thinking about the right way to get developers engaged.

Defenders can’t stay ahead of attackers. (9:12)

Third piece here, though, is you just can't stay ahead. And this is the really frustrating part about application security. Monday, you do everything right. Your SBOM is like, hey, there are no vulnerable components here. You have someone audit your code: yeah, it's perfect. A third party did some pentesting. You did all the different things (usually you're not doing all this). And you release a new version of your app.

And Tuesday, an attacker does some reverse engineering, fuzz testing, active pen testing, and figures out a way to exploit a previously unknown weakness and vulnerability. Well, here's a zero day. And then Wednesday, you're pushing your vendors: oh, hey, this was introduced in a third-party library, how do we get a patch for this?

You're triaging impact, you're containing breaches.

You're spending that five-plus hours a day reacting to production incidents. But you did everything right. You ran all the tools you had, and they said, yeah, you're safe, go release, go ship. This is the scenario: every single time I talk to cybersecurity leaders, every single time I talk to CTOs, every single time I talk to developers, I hear one of these stories. It's either too many false positives.

It's: we did everything right, and then there was a breach on a zero day. It's: I don't get any recognition for all the ways we're improving our risk posture. It's these stories, and these stories collectively, that are kind of the modern state of AppSec. And look, we made a lot of improvements over the past decade, but this isn't a really great place for anyone involved in the process.

And what I hear day in and day out from developers and AppSec professionals is they're overwhelmed. They're overwhelmed, and they feel like they're constantly playing catch up. So we’ll pivot a little bit. Now let's talk about what an attacker does. Because it's not the same. It's close, but it's not the same. So.

How are zero days found? (11:16)

Okay. We all hear about a zero day in the news. And guess what, there's been a new hack. Oh no, what actually happened there? I want to start this by kind of showing off the landscape of tools. So, you know, forgive me my Gartner-Quadrant-type graph; I promise you it's not one, and I don't remember exactly what those axes are. But if we think about how AppSec works today, there's behavior and pattern matching: are we seeing things that match patterns that we know are vulnerable?

Is there a library here that has known vulnerabilities in it? And then there's: is your application behaving in a way that exposes a weakness that is vulnerable? And then there's known and unknown. Hey, someone already cataloged this CVE; it's a known vulnerability. Maybe there's a patch for it, maybe there's not. And then there's finding new ways to break things. You know, you look at this; when I talk to folks today: all right, do you have SCA?

Yes. The majority of the customers I talk to have SCA. Do you have SAST? Yeah, most of them have SAST. Some of them have DAST or IAST, one of those two. Do you have SBOMs? It's becoming more of a thing. What about your pentesting program? Well, you know, we do one once a year for compliance. Okay. Do you do things like fuzz testing? Do you use ML or behavioral testing?

No, not really. You know, we do it every once in a while. Now, some folks have requirements to do this, but for the most part, folks are really focused on: how do I use technologies that match patterns to tell me where I'm potentially vulnerable? And that's how you get those 50% false positives. And what's more, you also get the: well, yeah, that library is used in my code, but it doesn't actually get leveraged at runtime. So it's in the code, but it's not actually on the attack surface, which, again, slows development down.

Now, what's really interesting about this set of technologies is if you think about a couple of recent hacks, and I'll, you know, talk through these a bit. Talking about Heartbleed, or talking about, you know, I don't know if anyone saw this (we'll send these slides out afterwards), there's a cool video here: a drone flies by a Tesla and unlocks it. How'd it do that? It didn't need to know anything; the attackers figured out a way, with no knowledge, to actually exploit the Tesla's systems. Now, that's been fixed. But there's a reason both of these are in the top right, and almost every successful cyber attack against applications comes from tools in this quadrant.

It comes from techniques in this quadrant that are focused on understanding behavior, and understanding how to trigger previously unknown behaviors that lead to previously unknown vulnerabilities. Now, you might say, well, don't attackers want low-hanging fruit? Yeah, they do. But, typically speaking, those aren't the attacks that cause the most damage. It's the ones where you don't know they got in, and you don't know what they got, until you're running around doing costly patching.

So, if attackers are using these techniques to identify behavior in your apps and focus on the unknowns, it seems a little odd that we’re using primarily the tools on the left side here to defend against them. Yeah, the best defense is a good offense, maybe? I don’t know. 

Anatomy of an Attack (15:17)

So when you think about an attack, it's: how do you access the application? Map the attack surface? And then breach it?

How do you determine the underlying weakness and vulnerability that made that possible? And then how do you weaponize that to create an exploit against it, escalate your control, and get to your objectives? Hey, do you want to hold things ransom? Do you want to, you know, exfiltrate data? This is the process that happens. Now, this is not what we really do right now.

What we do is we say, here's a bunch of stuff that may or may not be on the attack surface. And here's what we know about the known CVEs that map that. Have fun, go.

So, what's the solution? This is where, okay, we've talked a little bit about the challenges, talked a little bit about, hey, there seems to be a disconnect between the tools we use to defend and what hackers use to attack, and broken this down into, okay, three easy steps to how attacks work.

What’s the solution? (16:32)

What can we, as AppSec professionals do? 

Think (And Test/Triage/Fix) Like a Hacker (16:41)

Well, punchline: think like a hacker. That's the title of this. We have to approach our applications by looking at how they behave when they're running and trying to breach the attack surface. We have to then understand the underlying weaknesses and vulnerabilities that allow those breaches to happen.

And then step three isn't exploit, it's remediate. But this is the approach that allows us to play the same game that an attacker plays, by thinking like a hacker. And I'm using attacker and hacker a little interchangeably here, which I shouldn't be. Hacking is a technique. Attackers are bad actors that use, a lot of times, hacking techniques in pursuit of bad outcomes and goals.

But on the white hat side, we should be hacking our own code, we should be hacking our own applications, and learning and using those techniques in our pipelines. So why do we want to do this? Because it's really easy for me to say, oh, look, just do this. Easy. But there's a really big shift that has to happen in how we talk about cybersecurity and how we talk about application security.

What you see today is reports on vulnerabilities. You've got 1000 CVEs? Well, great, what do I do about that? We need to make a shift. 

Find What’s Exploitable, Not What’s Vulnerable (18:09)

And we need to focus on what's exploitable and not what's vulnerable. Because you're trying to reduce risk, you're trying to minimize the likelihood of a successful cyber breach. You don't do that by trying to fix every single vulnerability.

You try to do that by fixing the ones that are most likely to be exploited and weaponized against you. Now, you're like, oh wow, exploitability vs. vulnerability. I'll be honest, a lot of times when I say this to folks, they're like, yes, duh. Like, if we could do that, we'd have been doing that already. But we live in a slightly different world the past three, four years than we did ten years ago, when we started shifting things left.

Why didn’t we do this already? Why can we do this now? (18:57)

Ten years ago, people were like, oh, fuzz testing, wow, behavioral testing. Wow, really? It's throwing 100 random tests at a wall and seeing if something breaks. And guess what, we got better computers; now we're throwing 1,000 random tests at a wall and seeing what breaks. That's gotten better, smarter, faster. At the same time, with machine learning, we can now develop algorithms that take large-scale test results and use them to generate new and interesting test cases and inputs.

You know, I'll call out symbolic execution, like, hey, that's the secret-sauce thing for us, so I like putting it on here. It's a machine learning technique. It allows us to abstract the ways that applications work and execute against the abstractions, versus trying to find every single potential variable that could fit in. And again, AI, ML, generative AI, it's in the news these days. All that's doing is saying, hey, this learns from itself every single time.

Well, we can do the same thing in cybersecurity. If you're running 100 tests, 1,000 tests per minute, and then you're feeding the results of those back in (here's what I did, and here's how the application behaved), you can use generative algorithms to create new test cases, create smarter ones, create ones that expand your code coverage, and start to do these things at scale.
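To make that feedback loop concrete, here is a deliberately tiny, hypothetical sketch of the idea (not Mayhem's actual implementation): mutate inputs, keep the ones that reach new behavior, and report the ones that crash. The `target` callback, the coverage-as-a-set convention, and the mutation operators are all illustrative assumptions.

```python
import random

def mutate(data: bytes) -> bytes:
    """Create a new candidate input by flipping a bit, inserting a
    random byte, or deleting a byte (all illustrative operators)."""
    data = bytearray(data or b"\x00")
    op = random.randrange(3)
    if op == 0:                      # flip one bit in place
        pos = random.randrange(len(data))
        data[pos] ^= 1 << random.randrange(8)
    elif op == 1:                    # insert a random byte anywhere
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif len(data) > 1:              # drop a byte (keep at least one)
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(target, seeds, iterations=10_000):
    """Coverage-guided loop: `target(data)` is assumed to return the set
    of branches it executed and to raise an exception on a crash. Inputs
    that reach new coverage are fed back into the corpus; crashing
    inputs are the findings."""
    corpus = list(seeds)
    seen = set()
    crashes = []
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        try:
            coverage = target(candidate)
        except Exception:
            crashes.append(candidate)   # a crash: potentially exploitable
            continue
        if not coverage <= seen:        # new behavior observed
            seen |= coverage
            corpus.append(candidate)    # keep it for further mutation
    return crashes
```

The feedback is the `seen |= coverage` / `corpus.append` step: each run's observed behavior steers the next round of generated inputs, which is what distinguishes this from blindly throwing random tests at a wall.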

This is another piece: compute has gotten cheaper, and scale has gotten cheaper. Whereas five years ago, trying to do this, great, maybe you get 100 tests every 10 minutes, and you're running with a team of 20 people. Now you can run thousands of tests a minute, tens of thousands of tests a minute, at global scale, for teams of thousands of developers, and get real, meaningful results saying: this is a way your application is exploitable.

So it's all possible. And again, Mayhem, the platform we make, is a tool that does this, or a solution that does this. But you can do these things regardless of technology. What this looks like in practice, though, is taking the focus on exploitability and doing a few key things in your processes for how you address security issues.

Focus on Vulnerabilities With Proven Exploits (21:27)

First and foremost, you focus on vulnerabilities with proven exploits. If there's not a known exploit against a vulnerability, it's not a priority. Exploitability is how you prioritize, not vulnerability. Not what-ifs, but real, actual: how is this exploitable in my environment? This is the first and most important piece. If you're trying to prioritize on a list of vulnerabilities or buckets of categories and CVSS scores, that's helpful.

But you need to understand whether or not something's exploitable in your environment. And use that to prioritize how you fix so that when you fix, you know those fixes make you safer in a quantifiable fashion. If you're not doing that, you can't measure the outcome of your cybersecurity. You're just waiting for the next breach where you get your hand slapped.

Instead, say: we found 100 exploitable paths, and we fixed 99 of them to prevent 99 potential attacks. That's the type of success metric that you want to be sharing. Because that's amazing: a 99% success rate, that's great.
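As a sketch of what exploit-first prioritization and that success metric could look like in code (the data model here is a hypothetical assumption, not any particular tool's schema; `proof_of_exploit`, `cvss`, and `fixed` are made-up field names):

```python
def prioritize(findings):
    """Order findings so that anything with a working proof of exploit
    (e.g. a reproducible crashing input) outranks purely theoretical
    vulnerabilities; severity score only breaks ties within each bucket."""
    def severity(f):
        return -f.get("cvss", 0.0)   # higher CVSS first
    proven = sorted((f for f in findings if f.get("proof_of_exploit")), key=severity)
    theoretical = sorted((f for f in findings if not f.get("proof_of_exploit")), key=severity)
    return proven + theoretical

def fix_rate(findings):
    """The 'we found N exploitable paths and fixed M' metric: counts
    only findings backed by a proof of exploit."""
    proven = [f for f in findings if f.get("proof_of_exploit")]
    fixed = [f for f in proven if f.get("fixed")]
    return len(fixed), len(proven)
```

The design choice to note: a medium-severity issue with a working exploit ranks above a critical one that is only theoretically reachable, which is exactly the exploitability-over-vulnerability ordering being argued for.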

Define Risk Based on Application Behavior + Posture (22:37)

Define risk based on application behavior and posture; this ties in. It's not an issue of: could anyone exploit this anywhere? It's an issue of: how is my application running?

If something's only exploitable via the network and you're running in an air-gapped environment, you don't need to care about it. Bold statement, I know. But how can this be used against me, in my application, in my architecture? That's the conversation that needs to happen. And a lot of times you hear this from developers: oh yeah, it doesn't run that way.

It doesn't run that way. Sadly, that's not a real excuse. Like, what is the architectural decision that minimizes the chance of exploitability? And how do we make sure that AppSec, dev, CTOs, and CISOs are all clear on that? That's the type of information that needs to be surfaced up and across teams.

Automate Testing and Triage to Free Time for Remediation (23:40)

The third piece: automate testing and triage. Automate testing and triage. Automate testing and triage. Hopefully a lot of folks have seen, you know, GitHub Copilot, right? We can write some code, and, you know, different folks are playing around with AI models that, you know, write some lines of code for you.

Look, I'm not saying go use ChatGPT to run cybersecurity for you; that's not really what it's there for. But modern techniques, and again, things like AI and ML, can help you understand what's exploitable, and they can automate the generation of test cases. Running test cases at scale, as well as generating test cases at scale, takes a lot of work off of developer plates and increases your code coverage.

The more you can minimize the work developers have to do to write tests and the work developers have to do to triage results, the better. When a vulnerability is detected, before it gets shipped to a developer, you need to be sure that it's reproducible. And you need cybersecurity platforms that say to you: here's how to reproduce this in your environment, against your application. Because otherwise you're playing this game where developers spend most of their time figuring out what to do about it.

How do I go make this work? How do I reproduce this vulnerability myself? Before they can even get started on prioritization. I was talking to a dev leader the other week; they run a fairly large e-commerce business, and the team spends more time triaging security issues than fixing security issues. And, you know, you kind of think about it like, that makes sense: research takes more time than code execution, a lot of times, or code development.

But what if you could get all that time back to fix more issues? That's the goal. That's what we want to get to: developers don't spend time trying to reproduce security results; they spend time fixing proven security results. So I'm talking about all these things like, hey, developers have got to do this, developers have got to do this. But really, this is ongoing.
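One hypothetical way to wire up that "validated, reproducible results" idea: every confirmed crasher gets saved as a reproducer file, and the same files double as a regression suite that proves a fix landed and keeps the bug from coming back. The file layout and naming scheme here are illustrative assumptions, not a real tool's format.

```python
import hashlib
from pathlib import Path

def save_reproducer(crash_input: bytes, corpus_dir: Path) -> Path:
    """Store a crashing input under a content-derived name, so the
    same crash is never recorded twice and the file itself is the
    proof a developer can replay."""
    corpus_dir.mkdir(parents=True, exist_ok=True)
    name = hashlib.sha256(crash_input).hexdigest()[:16]
    path = corpus_dir / f"crash-{name}"
    path.write_bytes(crash_input)
    return path

def replay_corpus(target, corpus_dir: Path):
    """Regression check: replay every previously found crasher against
    `target` (assumed to raise on a crash) and return the names of the
    inputs that still crash it. Empty list means all fixes held."""
    still_crashing = []
    for path in sorted(corpus_dir.glob("crash-*")):
        try:
            target(path.read_bytes())
        except Exception:
            still_crashing.append(path.name)
    return still_crashing
```

Run `replay_corpus` in CI: a developer gets the exact crashing bytes instead of a report to re-derive, and a fix is "proven" when the replay comes back empty.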

Don’t Stop Once You’ve Shipped (26:07)

You continuously test your application behavior. And again, application behavior: this is not your once-a-year pen test to make sure you're compliant. This is: can my application's behavior, anomalous activity, crashes, be weaponized against me? And if so, how am I feeding that information back for someone to go patch and remediate it? This has to happen continuously.

And, you know, does continuously mean someone runs it once a day? Does continuously mean, hey, you're running a constant service testing this? Your mileage may vary there. But these techniques you've got to have everywhere in the pipeline. Because it's not just enough to try and catch things before they're shipped; it's making sure that you're constantly maximizing the coverage of your application behavior once you've shipped as well. So those are kind of like, hey, here are the things you need to focus on.

Here are the things you need to change about what you're doing to start thinking about, okay, how do hackers approach finding vulnerabilities and weaponizing them so that I can find vulnerabilities and remediate them? 

What Does This Actually Look Like in Practice? (27:25)

Summing it up: use the techniques and tools of attackers in your CI/CD. You know, whether you want to go leverage some open-source, old-school fuzz testing stuff, or whether you want to use more modern AppSec platforms that use ML to do behavioral testing, you want to push those into your CI/CD. And again, things that use ML can take a long time, so don't run them in-band, don't block your builds on them, but find the results.
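For instance, a non-blocking pipeline stage could look like the following sketch. The command and the time budget are placeholders, and this is one possible policy rather than a prescribed one; the point is the exit-code behavior, surface findings in the build log without failing the build.

```python
import subprocess

def run_fuzz_stage(cmd, timeout_s=300):
    """Run a fuzzing/behavioral-testing command under a hard time
    budget. Findings are echoed into the build log, but the stage
    always reports success so it can't block a release."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        # Long ML-driven runs are expected to hit the budget sometimes.
        print(f"fuzz stage hit its {timeout_s}s budget; keeping partial results")
        return 0
    if result.returncode != 0:
        print("fuzz stage reported findings:")
        print(result.stdout.decode(errors="replace"))
    return 0   # always succeed: report, don't gate the build
```

Whether to eventually promote this to a blocking gate is a team decision; starting non-blocking matches the advice above about not letting slow behavioral testing hold up releases.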

You want to automatically generate test cases. It's a lot better to automatically generate as many tests as you can, so your developers don't have to write as many. Developers only get validated results. Don't make developers struggle to reproduce security issues, because then, well, can I not reproduce it because it's not real? Or can I not reproduce it because I'm not doing the right thing?

And success isn't "no critical CVEs." I hear this all the time when I talk to security leaders and dev leaders: well, you know, we just eliminate anything critical, and then we ship. It's about fixing exploitable CVEs. It's about knowing, at a given point: hey, this build had ten exploitable vulnerabilities in it, and we fixed nine. Hey, this one had 15, and we fixed ten. Hey, over the last year, we've identified 200 exploitable vulnerabilities in our application, and we fixed all of them.

Those metrics show cybersecurity, you know, AppSec, development, making a contribution to improved security posture. And it allows us to start reframing how we measure application security. It's not just about avoiding a breach; it's about proactively implementing fixes that stop future breaches. And by focusing on exploitability, you're able not only to deliver safer software, and usually faster too, but you're also able to show security's proactive contributions in a meaningful way, rather than just waiting to get a slap on the wrist the next time there's an attack.

So that's my sort of, hey, here's how we need to think about this. Here's how you start trying to implement this. I'm happy to take questions. I'm happy to talk a little more tactically about like, what does this look like in practice if folks want. You know, I'll just kind of close it out. 

How Mayhem Can Help (29:46)

Again, shameless plug: Mayhem integrates into your CI/CD, integrates into your build process, integrates into your IDEs, and it is designed to run your application, understand its behavior, and try its best to break it.

It uses self-learning algorithms to constantly expand coverage, and every single result is validated and delivered to developers with: here's a way to reproduce it, and here's a regression test to prove it got fixed, so that folks aren't spending time on triage. You're not spending time on false positives. You spend time fixing exploitable issues.

You can get more information at Mayhem.Security. Ask questions, and, yeah, thanks for listening. I hope that was informative and gives you the start of thinking about how to make some pivots and some changes in application security measurement, philosophy, and eventually implementation as well.


Questions (30:45)

Lakshmia: Thank you. Thank you, Josh. That was really informative, and I actually have a question before the questions start.

What is your first thought when a new CVE is discovered? (30:57)

So when you see the alerts about, oh, there's a new CVE that's been discovered, and the panic that a lot of people put into it, what is your first thought when you see them?

Josh: So, you know, I'll kind of answer this two ways, because it's been a while since I've been in the hot seat in those scenarios. My first reaction these days is a lot of empathy going out to everyone impacted. You know, if you look at my LinkedIn, the majority of folks I know are in development and cybersecurity. Their lives,

and, let's be honest, their families' lives, get royally impacted and thrown into upheaval every time there's a new front-page-news zero day. So, a lot of empathy first and foremost, because it's not easy. And a lot of times they take a lot of flack despite doing everything right, because, you know, it's a zero day; folks did everything right. So there's that. The second piece, to tell you what I hear from our customers a lot, is: how do I know how I'm impacted? And there, I think the industry is starting to do some things with SBOMs. I'm not going to say they're fixing everything, and to be honest, I think they contribute a little bit to some of the problems I talked about earlier on. But they're starting to give security professionals and leaders a quick way to say, where am I impacted? Versus spending the first day in a state of panic, trying to figure out what the impact is.

And what I hear from folks is that's the number one worry every time there's a new, you know, front-page headline: it's oh no, oh no, how does this impact me? How does this impact me? And so I think at least there's some progress toward solving that, so it's just an issue of, okay, let's go fix, let's go fix, which is significantly better than where we were a year ago.

How can machine learning help us prevent zero days? (33:10)

Lakshmia: I understand and I agree, definitely leading with empathy. So one of the questions that we got is how can machine learning help us prevent zero days? 

Josh: So, let's be honest: you do everything right, and attackers can still find something you didn't. That's just the truth of the world we live in today. And so when we talk about preventing zero days, it's not about stopping every single possible attack; it's about using the techniques that attackers use. And when I think about machine learning, you know, I'm going to put my product hat on a little bit here.

The way that Mayhem works in the background is it uses machine learning to say: okay, based on the behavior of this application, let me try something else that will exercise a new part of the code. Or: based on the response I got, let me try a different input, because I think it's going to break the part of the code I just went over. Those are the types of techniques that allow you to find previously unknown vulnerabilities, and a human can do that.

A human can't do that at scale, though. And so I think about machine learning as automating, at rapid speed and scale, the smarts behind doing manual pentesting, like exploring your code. It really helps you find more things that are exploitable, faster. Now, some of those may link up to CVEs, some of them may not, but it's really about unlocking the scale. And unlike sort of pattern-matching-based security scans, it means you don't actually care about: is this known or unknown?

You care about the behavior of your application and whether it's exploitable. So I think it unlocks that, which helps you get ahead of attackers. But again, you can't prevent zero days. Anyone who says, hey, you're never going to get impacted by a zero day again, that's not true. There are things you can do, you can get your trusted images, you can do all that.

How often should we do security checks? (35:32)

Lakshmia: And so I have one more question. In your opinion, how often should we do security checks in the process of building out apps and such?

Josh: I might get some flack for this one. The best security tool, the best security process, the best security solution is the one that everyone follows. You know, as someone who's been on the vendor side for ten years, pretty much every CEO I've worked for would say, like, no, no, no, ours is the best.

The fact of the matter is, you want the maximum amount of security checks that don't reduce people's adherence to them. The minute your security process starts causing folks to do workarounds, it's too much. And I've talked to some CISOs who are like, No, no, no, no, no. We have to mandate these things. And I don't disagree. But the reality on the ground is it's better to have a program with some weaknesses that everyone follows.

That way you have the visibility and control and you know where your gaps are, rather than trying to do more, having people not follow it, and not knowing where your gaps are. And I realize that's a philosophical answer to what may be a very detailed question. Like, you should do some checks in your CI/CD, you should do some post-launch, but you shouldn't do anything that makes people go around your security.

And that's really the measurement you have to take. And sometimes that's, you know, qualitative feedback from asking the teams, so that you're right-sizing what you deliver as a security leader.

Thank You! (37:35)

Lakshmia: I understand completely. Thank you for your insight. I truly appreciate it, and I think the rest of the attendees do as well. So with that being said, guys, that is the end of this month's webinar.

Thank you again, Josh, for coming and presenting us with this wonderful hour. We truly appreciate you. Right back at you. We'll see you next time. Bye!

