The Hacker Mind Podcast: What Star Wars Can Teach Us About Threat Modeling
January 22, 2023
Having a common framework around vulnerabilities, around threats, helps us understand the infosec landscape better. STRIDE provides an easy mnemonic.
Adam Shostack has a new book, Threats: What Every Engineer Should Learn From Star Wars, that uses both Star Wars and STRIDE to help engineers understand vulnerabilities and threats in software development. Adam has more than 20 years in the infosec world, and he even helped create the CVE system that we all use today.
VAMOSI: So I found this animation online that speaks to the struggles of every design architect. It’s called the Death Star Architect Speaks Out. If you’re not familiar with the original Star Wars film, now retitled Episode Four: A New Hope, young Luke Skywalker, a farm boy from Tatooine, manages to find and exploit a significant flaw in the design of the Empire’s Death Star, a massive structure capable of destroying entire planets, such as the home world of Princess Leia.
So, in this case, there was a single point of failure in the Death Star -- an exhaust port -- which in terms of the plot appears to be a simplistic excuse for the hero to defeat the enemy. In the animation I found online, the architect vigorously defends his design against the one vulnerability that, while juggling every other vulnerability, he simply didn’t account for. And why should he? In his threat modeling, the vulnerability of the exhaust port on the Death Star was very much a one-in-a-million chance.
Death Star Architect: Maybe the exhaust port isn’t to blame because the shot literally was not possible unless you had magic powers. Maybe if someone had told me to account for space wizards when designing the exhaust ports, maybe we’d still have a Death Star. Maybe you should be blaming Darth Vader, who couldn’t shoot down some farm boy. Maybe you should all stop blaming the exhaust port, which actually did its f#$%ing job!
VAMOSI: The animation drives home the point that you can’t always account for everything when you are threat modeling. It also established the value of having a common language. Here, many of us have seen Star Wars: A New Hope. If not, we at least understand the references to the Death Star as a part of our culture. Having a common framework around vulnerabilities, around threats, helps us understand the information security landscape better.
So are there other things that every engineer should know from watching Star Wars? In a moment, we’ll find out.
VAMOSI: Welcome to The Hacker Mind, an original podcast from ForAllSecure. It’s about challenging our expectations about the people who hack for a living.
I’m Robert Vamosi, and in this episode I’m going to talk about … Star Wars. Literally, how the rebellion fighting the Empire has echoes in how we approach and mitigate information security threats.
VAMOSI: In the summer of 1977, a low budget film opened with very modest expectations. By the end of the week, it would become number one at the box office, and would be labeled a blockbuster because of the long lines. A few years later it would become one of the highest-grossing films of all time. Star Wars, the original film, went on to add several more sequels and prequels, and the resulting saga remains a part of our culture, with new episodic shows on Disney+ and new films in the future. So, given this cultural anchor, it seems we might ground a discussion around threat landscapes and threat modeling. Which is exactly what my guest has done in his new book.
SHOSTACK: So I'm Adam Shostack. And I think for the podcast today, the most relevant title is: I'm an author.
VAMOSI: Adam has worked in different capacities in information security for over 20 years, and I have seen him speak at DEF CON, Black Hat and elsewhere. Oddly, we hadn’t met until now. Over the years, Adam has contributed in many important ways to how we think about and categorize threats, such as Common Vulnerabilities and Exposures, which we’ll get into more in a minute. He’s the author of the acclaimed Threat Modeling: Designing for Security. And Adam’s back with a new book called Threats: What Every Engineer Should Learn From Star Wars.
SHOSTACK: I'm super excited about it. Because it solves a problem that I've seen since someone asked me a question, and it was a very simple question. I was doing some training. And he came up to me and he said, I'm not a security person. Where do I go to learn about these threats that we're talking about as part of your training? And I thought about it, and I thought about it, and eventually I just stumbled my way along to: there's no good place to learn about all of these. And so that's really the genesis of it.
VAMOSI: This latest book is structured along the lines of STRIDE, a threat framework derived from a 1999 paper from inside Microsoft called “The Threats to Our Products.” If you haven’t heard of STRIDE before, Adam explains.
SHOSTACK: Sure. So STRIDE is a mnemonic that helps us remember important threats and it stands for spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege. And these are threats that apply to every technology in various ways, they interact with the things we're putting together. And so that's why they're the basis of the book.
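The six categories Adam lists each map onto a security property they violate. As a minimal sketch (my own illustration, not code from the book), here is how a review tool might turn the mnemonic into a per-component checklist; the function name and question wording are hypothetical:

```python
# The STRIDE mnemonic as data: each threat category paired with the
# security property it violates, as described in the interview.
STRIDE = {
    "Spoofing": "authentication",
    "Tampering": "integrity",
    "Repudiation": "non-repudiation",
    "Information disclosure": "confidentiality",
    "Denial of service": "availability",
    "Elevation of privilege": "authorization",
}

def threat_checklist(component: str) -> list[str]:
    """Generate one review question per STRIDE category for a component."""
    return [
        f"{component}: how could {threat.lower()} undermine {prop}?"
        for threat, prop in STRIDE.items()
    ]

for question in threat_checklist("login form"):
    print(question)
```

The point of the sketch is the same one Adam makes: the value of STRIDE is that it is small enough to apply to every element of a system, one category at a time.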
VAMOSI: There’s also a clever conceit to this book: Star Wars. But I assure you this wasn’t some last minute marketing shtick, this was intentional.
SHOSTACK: So, um, you come again, came about, actually in 2005. I was blogging, I was annoyed at something and sort of offhandedly, I said, I'm going to use Star Wars to explain this classic paper in computer security. By Salter and Schroeder. And I didn't really think about it. I made you know, it was blogging in 2005. I said, I'm going to do this and then I started doing it and I got through four or five of the eight principles. And then I hit like this, this dry spell where I couldn't figure out what to do. And I persevered. I worked through it. And people responded really well to this mix.
VAMOSI: I should note that Adam limited himself to the first three released films in the Star Wars saga, known today as Episodes Four, Five, and Six -- but for anyone back in the day, the original Star Wars trilogy. There have been dozens of books about Star Wars, and I’m sure there have been other pairings of Star Wars and security, though they’ve not all been as successful as Adam’s.
SHOSTACK: And the way I think about the secret sauce is you can't just pour Star Wars onto the thing you're doing and have it work. You can't pour Harry Potter or Star Trek or anything else, because if you do that, you end up with this disjointed bit where you're trying to make a Star Trek joke or you're trying to make a Lord of the Rings joke, and it's tortured and people see it. And so what I've learned in doing this little shtick of I'm-going-to-use-Star-Wars-to-explain-this-thing is you can only do it if your metaphor is your lesson. It derives cleanly from the key thing, from the point in the movie to the point in the technology, and if you do that, it comes together nicely.
VAMOSI: To give you an idea how Adam plays this out within the book, in the first chapter he asks how Princess Leia or even R2-D2 knows that it's really Obi-Wan Kenobi and he's not being spoofed. How does he authenticate? Yeah, deep questions, which I was not asking myself in 1977.
SHOSTACK: Yeah. So the question of authentication is a really important one, right? The first thing we do when we use a computer now, or a website, is we log in. How does that work? Why does that work? And what's happening there? In the movie, there's no real explanation, but I've reckoned an explanation here, which is that the same technology that Artoo uses to record the hologram can be used to scan a bone structure and build a 3D model, because Artoo can build 3D models of things. And so we can use that as a tie to the idea of biometrics as a way of authenticating a human being to a computer. And so we can build on that and say, what are the other ways of authenticating a human being to a computer, right? We have things that we know about logins, and we can ask the question of, how does Princess Leia know that Artoo knows Ben Kenobi? And so we can build these things, and we can have fun while doing it.
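Biometrics are the "something you are" factor; the everyday login Adam mentions is the "something you know" factor. As a minimal sketch of how that factor is typically checked (a generic salted-hash scheme, not anything from the book; the function names are hypothetical):

```python
# Sketch of a "something you know" check: the server stores only a
# salted, stretched hash, never the password itself.
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Register a user: derive a salted hash to store server-side."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the claimed password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = enroll("ben-kenobi")
print(authenticate("ben-kenobi", salt, digest))  # the genuine party: True
print(authenticate("vader", salt, digest))       # a spoofing attempt: False
```

The constant-time comparison matters because a naive `==` can leak, via timing, how much of a guess matched.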
VAMOSI: And Adam’s book has another example from the introduction where the Star Wars motif is baked into the organic structure of the book. He says that the Force is a property of all living things, and security is a property of technological systems. And I liked that because like the force can be used for both good and evil. Obviously, you know, security systems can be used in both ways as well.
SHOSTACK: Yeah, yeah. And you know, the promise of the book is what every engineer needs to learn from Star Wars. And we're living in a time where the things that we build technologically have ethical implications. And one of the big challenges that I faced in writing the book is, how deeply do I delve into those ethical concerns that I think engineers ought to be working on? And I made the choice, and it's an engineering choice, to not go very deep into ethics, because it would have made the book twice as long. People give me a hard time occasionally, and it's well intentioned, but they give me a hard time about how long my Threat Modeling book is. And so I made the choice to keep this one short and not touch on the ethics. But you're right, the Force has a light side and a dark side. The technologies we use have a light side and a dark side. And I want every engineer to know that. Maybe this will be the next book: ethical dilemmas in Star Wars. Star Wars doesn't have a lot of ethical dilemmas in it.
VAMOSI: Adam’s book is about threat modeling. And it’s interesting how, you know, with the physical world there can be an explicit list. Consider your home: there's the risk from flooding, fire, and so on. And knowing those risks, you can then secure yourself against them, mitigate them in various ways and so forth. But in information security, that's not always true. It's weaker, because you're up against human adversaries, who are capable of adapting and learning, and really changing.
SHOSTACK: So I think the distinction is not just adversaries, because people break into other people's homes for various reasons. For me, the biggest distinction is how new this all is. We've only been building computers for something like 75 years. And so the ways in which they work are new to us. We don't have the deep, grounded perspective. You know, when I got started, I could touch every computer that I was responsible for. And now I can't, and that makes a difference to its security. There's the bumper sticker of the cloud is just somebody else's computer. And it's half true. But it touches on this deeper truth, which is that our mental models are changing. And we have these different perspectives, right? The person who wrote that bumper sticker is like, oh, it's just someone else's computer, and they're implicitly challenging you by saying, stop thinking the cloud is different. Who's right, who's wrong, matters to me less than: what are the nuances? What do I need to think about when I put my stuff into somebody's cloud? How do I deal with that from a security perspective? Because now I can't yell at Jim in IT operations about what's going on. But Alice, who's running the cloud, is way better at keeping things operational and patched than Jim ever was. Where does that leave us? And how do we get away from the question of Alice or Jim or whoever it is, to what matters from a security perspective? I think that's a challenge that we all face as we're doing our jobs now. When we model, there's this continuum from threats that are obvious once we look at them, or once we look for them, to threats that are really subtle. So the problem with the Death Star is actually pretty obvious. There are no blowout panels in case there's a reactor problem. And they present this as this subtle flaw, but hey, look at it. The rebellion finds it in like two minutes after the Millennium Falcon shows up.
We can get into the whole thing about the Empire not having a blameless culture in which people can raise problems so they can be addressed.
VAMOSI: Again, not something audiences were thinking about in the 1970s when the original Star Wars film came out. Perhaps this is material for Adam’s next book, on what ethical workplaces can learn from Star Wars.
SHOSTACK: That's the ethical book. See, it's just so good. There's so many things I want to say. But let me go to the idea of subtle flaws versus obvious ones, because that's where your question was. As a security expert, I really like finding the unique flaw, the exciting flaw, the one that gets me a talk at Black Hat or something else, right? I feel proud of that work. But the attacker doesn't care. The attacker will send you a document, will send you an executable in a zip file, titled layoff_notices.xls.exe, and the .exe gets hidden, and you click on it and you're popped. That's the attack. We need to balance our desire to do exciting work with our desire to do engineering work, and really find all of those sorts of threats and focus on the things that are easy to do, at the same time we're getting joy out of our work, we're taking pride in our work, we're feeling like we're making a contribution.
VAMOSI: I want to give credit to Microsoft. Over the years, particularly when I started at ZDNet, when the whole idea of Patch Tuesday came out, I would read those early patch bulletins, and I would call out the blame I remember them assigning, where they would dismiss this buffer overflow or potential denial of service because it was highly unlikely that your system would be configured in that way. Microsoft stopped doing that and has matured. But back then, those bulletins always sounded to me like the architect of the Death Star: well, in these extreme circumstances, maybe this could happen.
SHOSTACK: There are times when it's useful to say extreme circumstances. For example, if you've turned off the default security setting, and you're using this application compatibility setting, and Microsoft has statistics on how many people use those, then this is way more likely to impact you. And I think those technologically grounded caveats, in these circumstances which are under your control, this is what's going on, this is why you should be more or less concerned, I think that's very powerful. When we get into I'm-going-to-figure-out-how-the-bad-person-thinks, it's way harder to do well, it's way harder to predict. You know, trying to think of a good non-political example, I'm just gonna go to Star Wars here. It's way harder to think that Darth Vader is going to trap the rebellion in this way because he's aware that Luke Skywalker is his son. I mean, the whole plot of Return of the Jedi is a little bit convoluted, and they've written these explanations which actually make it all come together. But you don't know that as you're watching it, and you don't need to know it to enjoy the movie. But one of the things I do love about Star Wars is there are actually these layers of richness. And as I was writing the book, it gave me an excuse to geek out and go really deep into a lot of the Star Wars things. And I try not to mention too many of them because, as you said, people have watched the original three movies, and that's where I layered the book. But if you want to go into what a hacker is going to do, what's going to motivate them, why are they going to take these actions, and then say, oh, you'll be fine, the Empire isn't going to come for your small moon, or they're Ewoks, they don't even have gunpowder, what damage can they do? That trips us up. And so I really like focusing on the known capabilities rather than the motivations as we think about it.
VAMOSI: Okay, leaving the Death Star aside for the moment, Adam has been involved with Common Vulnerabilities and Exposures, or CVE. He helped create that and continues to work with them on it. What I like about CVEs is that they are no longer one-size-fits-all. There’s the Common Vulnerability Scoring System, or CVSS, which allows industries to plug in their own risk factors and arrive at their own risk score.
SHOSTACK: One of the things that people always forget about CVSS is that it has an environmental term in the equation. There's a formal name for it, which I'm not remembering right now. But that is how it can impact you. How it will impact your business is really something that requires understanding how your business has the technology deployed. And one of the challenges that we face is that our deployments are so complex, so sprawling, so evolved, that often nobody has all of it. And we think about that, and people say things like, it would be such a big project to map out how all our technology works here and to get every single detail into a single diagram. Yep, that diagram will be an eye chart. But the value of modeling is that we can draw some pretty simple pictures. And with those pictures, get an idea, and we can say, you know, this is in its own VPC, we're isolating it, we've got a firewall in front of it, we're using a reproducible build so we can just redeploy it quickly. That sort of thing that we can sketch on a whiteboard is a really powerful contributor to intelligent conversations. And the way that I like to think about threat modeling starts with that conversation about what are we working on, and continues to what can go wrong. And so CVSS can be completely in there. And people want to bring in all of these important nuances. And sometimes we lose the forest for the trees.
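The idea Adam is pointing at, that the same base score lands differently in different deployments, can be sketched in a few lines. To be clear, these are not the official CVSS environmental equations; the weights, function name, and parameters below are invented purely to illustrate the concept:

```python
# Toy illustration (NOT the real CVSS math): environmental context
# reshapes a base severity score into a deployment-specific priority.
def environmental_priority(base_score: float,
                           internet_facing: bool,
                           handles_sensitive_data: bool) -> float:
    """Adjust a CVSS-style base score (0-10) for one deployment."""
    score = base_score
    if not internet_facing:
        score *= 0.5   # e.g. isolated in its own VPC, behind a firewall
    if handles_sensitive_data:
        score = min(10.0, score * 1.2)
    return round(score, 1)

# The same vulnerability, two very different deployments:
print(environmental_priority(9.8, internet_facing=True,
                             handles_sensitive_data=True))   # 10.0
print(environmental_priority(9.8, internet_facing=False,
                             handles_sensitive_data=False))  # 4.9
```

The real environmental metric group in the CVSS specification works differently in detail, but the principle is the same: the score only becomes actionable once you model where and how the technology is deployed.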
VAMOSI: Exactly. And I think this is what we’re getting at when we talk about threat modeling: you can’t really have a conversation with various stakeholders unless you are using a common language to define the problem and the possible solutions to it. When we talk about the real threat landscape, it becomes overwhelming, but if you focus in, we're going to talk specifically about buffer overflows today or whatever niche thing you want to get into, it now has a bucket for you to discuss in common terms.
SHOSTACK: Yeah, and that's a thing we can point to, so that we develop this common understanding that is valuable to our brains. And it's one of the tricks that I use, my Jedi mind trick for the book, if you will: I point to the Star Wars thing to focus people's attention, and then I tell a story about it. Doing that over and over again forms the core of the book. And I also use STRIDE as the core of the book, right? It's a book that took a long time to write, because, well, it looks very simple on the surface. Getting the structure to work and getting the storytelling to work, really, you know, this was my Jedi training, if you will. This was my time on Dagobah, being yelled at by an editor a lot. Thank you, Jim. Thank you, Kelly. They both yelled at me in wonderful ways and I learned to make it go. But we undervalue simplicity in technology.
VAMOSI: When I first heard this, I had a negative reaction. It seems that a system that is too simple can’t be secure, much like a system that is too complex is probably not going to be secure. Adam had a good response to that.
SHOSTACK: So, I believe it was Sir Tony Hoare in his lecture who said there are two ways to make a system secure. One way is to make it so simple that anyone can understand it, and the other is to make it so complicated that no one can. Okay. If you don't understand the security, and it's so frequent that I see this, someone will show up and they will just throw this barrage of things at you. And in doing so, they're like, oh, you didn't understand X, the reason the Excel calendar on a Mac starts on a different day than the calendar on Windows, this weird niche fact about leap years, right? There's this weird story I don't even remember the details of. And since you don't understand that, you clearly don't understand the problem, so I won. And it's BS. It's also really common in technology for us to value that ability to create this complex structure. But I don't believe that a complex system is ever actually made secure. If the details are only understood by one person, or only one person can claim an understanding, then what the reverse engineer, what the pen tester does, is hone in on places where you and I, as engineers building the system, didn't have a common understanding of what our threat model was or what our system model was. And so you assume that I'm validating that the input is only ASCII characters, and I assume that you're validating that. The attacker is like, hmm. I'm looking back and forth on the podcast here. It doesn't do a lot of good, but I'm looking side to side. Nobody's validated this property that this function over here expects the input to have, and so it gets broken. And the complexity was the enemy of security.
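The ASCII example Adam gives can be made concrete. In this sketch (hypothetical function names, my own illustration), two layers each assume the other one validates, so neither does:

```python
# Sketch of the validation gap: each layer assumes the OTHER one
# restricts input to ASCII, so the property is never enforced.
def parse_username(raw: str) -> str:
    # Assumes the caller already validated the input.
    return raw.strip().lower()

def handle_login(raw: str) -> str:
    # Assumes parse_username validates the input.
    return parse_username(raw)

# Non-ASCII sails straight through the gap:
assert not handle_login("ådmin").isascii()

# The fix: make the shared assumption explicit at one agreed boundary.
def handle_login_fixed(raw: str) -> str:
    if not raw.isascii():
        raise ValueError("username must be ASCII")
    return parse_username(raw)
```

Neither function is buggy on its own; the bug lives in the mismatch between their threat models, which is exactly where Adam says the pen tester looks.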
VAMOSI: Early in the book, in the introduction, Adam mentions that Bruce Schneier had an aside, or a blog post, about how he had gone into the NSA and said, so where's your book of threats? Staying with this idea of the richness of every environment, and everything being different, Adam tackles that question a little bit in this book by imposing a framework on some of the threats, again going back to STRIDE.
SHOSTACK: It is, it is. And one of the engineering trade-offs in doing this is completeness versus accessibility. The German equivalent of the NSA used to have this catalog of threats. And it was something like 6,000 pages long. It only existed in PDF form, and they stopped maintaining it, because it was inaccessible. Right? Who's going to read a 6,000-page document? You look at the index, you search it, and you hope that the words you're using are the search terms in the book. So there were threats that I wanted to put in that I left out, because you have to get that balance. And I'm sure that people will disagree with my balance, and that's okay, right? That's just part of it. There is no perfect book. But I hope to have written a useful book.
VAMOSI: Yeah, as a framework, it does. And going back to, you know, categorizing, at least having that model in our minds on how to look at these things, it's very, very helpful.
SHOSTACK: That's my hope. Yeah.
VAMOSI: So we’ve talked about the threats and the need to categorize them. Again, I want to stress how Adam has considered the Star Wars aspect of his own work. And underlying Star Wars is, of course, Joseph Campbell’s Hero’s Journey.
SHOSTACK: Since a lot of your podcast is about people's journeys, I want to talk for a moment, if I may, about the hero's journey, which is one of those structures. George Lucas has talked about how he used Joseph Campbell's work on the hero's journey. And one of the early phases in the hero's journey is that the hero is forced to leave home and go on this journey. And it's challenging, and they meet all these problems along the way. No one wants to go on a hero's journey. There's this little incident where his aunt and uncle have been murdered, and Luke says there's nothing for me here anymore. And then he's off. Your listeners don't want that, right? They don't want their house to be burned down and their family murdered. They want to go on with their daily lives. And so the fun of this book is an invitation to learn these things without having that deep challenge. You can pick this book up, you can read a chapter at a time. And so, in your journey as an engineer, as a security person, challenges will come at you. And my hope is that this book will give you some tools to deal with those challenges in a way that's accessible, that's fun, that's educational, without being a slog.
VAMOSI: So I’ve addressed this in other podcast episodes, but are engineers and developers being taught security? Perhaps not. I think for job security, they’re being taught how to ship code quickly.
SHOSTACK: I know it's hard, right? Number one, everyone's overworked. Number two, everyone's having layoffs. And when there are layoffs, training budgets go first. Right? It is better to keep people employed and not train them than to train them and then lay them off. So I think people need more training on security. And I'm aware that organizations think they need training on the ethics of AI, on accessibility, on reliability, on performance, on all of these aspects of how we build our systems. And I think that one of the great grand challenges which faces us as we build these technological systems is how we build them in ways that really serve all of us. And I think over the last few years, we've seen a lot of cases where we built systems that serve our businesses well and leave a lot of people hurting. And I think that that's going to be a big challenge for the next 20 years: how we find those balances, how we, if you will, bring balance to the Force. Right?
VAMOSI: So then how do we entice them? Certainly, you know, integrating a Star Wars metaphor into the whole thing is very helpful because that's an audience right there. How do we get the engineers to come to that level if security is not something that they're tasked with?
SHOSTACK: You know, this is an important question. And I think that we, all of us, respond to the incentives placed before us. In the last few days, I've seen headlines about fines in the hundreds of millions of dollars for not taking privacy into account in the systems that we're building. The new omnibus spending act contains new powers for the Food and Drug Administration to mandate cybersecurity for new medical devices. I think we're at an inflection point where we're going to see more and more regulation. That regulation will lead to executives caring about what we're building. And executives' care will give engineers either space to care about things they've thought were important that weren't getting attention, or just a notice that they have to start paying attention. And I think one of the big challenges for the design of these regulations is how we enable innovation, get the results that we want, and don't end up in a mindset where we're just going to see the box being checked but no actual delivery on the goal behind it. And I think that's going to be a real challenge. And my hope is that engineers will say, I want to be proud of the work that I'm doing, and therefore I need to go beyond the checkbox, beyond the letter of the law, to the spirit of the law. And that we'll start to see more space for that.
VAMOSI: What I’m hearing in this is more regulation. But regulation doesn’t always have to come from the outside, from our governments. Regulation can come from within organizations, you know, like a Trustworthy Computing Initiative.
SHOSTACK: Yeah, you know, looking back at the Trustworthy Computing memo that Bill Gates wrote in 2002: some of my management actually wrote the draft, back when I was at Microsoft, as a secure computing memo, and Bill changed it from secure to trustworthy, which I think is a really interesting shift in what we're thinking about. Bill said, if we're going to have computing be the fundamental basis of our society, it needs to be trustworthy, that we need to know that we can count on it. And I think that's a great frame. And I'm proud to have been a part of how Microsoft delivered on that for a good period of time in the 2000s.
VAMOSI: So, Adam wasn’t there at the beginning, but he did work at Microsoft during the Trustworthy Computing era. I didn’t work at Microsoft; I was on the press side, at ZDNet, and I would get hate mail from loyal Microsoft people, even employees, who said I was out of line to criticize how Microsoft handled researchers reporting vulnerabilities, how Microsoft tried to ignore them or, as we discussed, put the blame on the user for not keeping with the defaults. This changed, in part, with Gates’ memo, and with those within Microsoft who agreed and wanted the policy to change.
SHOSTACK: And there were a lot of people who went through very real hero's journeys as they tried to figure out, what the heck do we do to help all of Microsoft do this? And that helped make that change happen.
VAMOSI: So how do we see a similar change at a Google or an Apple or an Amazon?
SHOSTACK: Um, you know, first I want to say that I have friends working in security at each of those companies. And I think that the start of the change is there. The big thing that happened at Microsoft is we learned to do the engineering work. The big challenge for these companies is we need to fund the engineering work. We need to say it's okay for us to take 1% less revenue because we're not going to collect people's Social Security numbers, and so our advertising will be a little less precise, because of this uber goal that we have. Or we need to see people being pushed in that direction. And so we're moving from knowing how to do it to whether, seeing that, we actually do it. But the other thing that we really need is for the culture of these places to be: yes, we will, we're going to secure that. You know, someone said that culture is what happens when there's no written rule. And the question that we really face is what happens if somebody says we're going to take an action. I have an example for you here. I was talking to an organization that you've heard of. And they said, one of our business units went to put something in a new cloud environment that we've never built in before. And we caught it at review. And we started making a list of all the controls we have in cloud number one that we needed to replicate into cloud number two. And it took an extra three quarters, an extra nine months, to ship, to get all of that in. And there were good business reasons they needed to go with this other environment. In your organization, if you raised your hand and said, doing this properly will take an extra nine months, do you think that you'll be let go or promoted?
VAMOSI: So it all gets back to the environment that gets set. And there is some precedent in the Star Wars universe for this as well.
SHOSTACK: It is whether you expect to be promoted for raising your hand and pointing out the thing, the turd in the room, rather than having it swept under the rug. And I think that speaks to your question about what it will take for these other big companies to get there. What does it take to shift their culture? It takes the founder, it takes the leadership, it takes executives saying it over and over again, and then backing it when push comes to shove, in a stack rank, in a promotion discussion, unfortunately in today's economic environment in a layoff discussion. You know, it turns out that Anakin is not displaying good moral sense; his activities don't align with the culture we're looking for here. We're sorry, but you're not going to continue your Jedi training with us.
VAMOSI: I know it’s not in the scope of Adam’s book, but the recent Disney+ TV series Andor is, in many ways, the way I wanted Star Wars to have been. It’s rich in detail, and it weaves social and political influences into it, much the way a security team would go up against business and engineering forces. It got me thinking: perhaps we could say that the security-minded engineers in any organization are kind of like the Rebel Alliance going up against the Empire?
SHOSTACK: In many cases, they are. And this was the sea change at Microsoft. In 1997, '98, '99, there was this set of people who were pushing and fighting and arguing that Microsoft was not responding well to these problems, and they really were like the Rebel Alliance, folks inside the company who were trying to turn the ship. In fact, STRIDE, the mnemonic at the core of the book, was created by Loren Kohnfelder and Praerit Garg in 1999 at Microsoft. And they wrote an article, which you can now find on the internet, titled The Threats to Our Products. And they were fighting the good fight back before it was the norm at Microsoft. They said, we have to do this. And they were part of what turned the tide and created a belief that if Bill made this commitment, if Bill said we were going to make this change, that we could actually achieve it. Because a lot of this security stuff is hard to engineer for. So as a leader, if I'm going to say I want my people to build secure code, I have to know what that means, I have to know what success looks like, I have to enable them.
VAMOSI: I’d really like to thank Adam Shostack for talking about his new book, Threats: What Every Engineer Should Learn From Star Wars. It’s available on Amazon and wherever good books can be found. And we’ve only just scratched the surface on Adam’s many contributions to the field of information security, in particular vulnerabilities, exploits, and threats. I look forward to having more conversations with him on future episodes of The Hacker Mind.