Ready, Test, Go. // Episode 8

Digital Quality Lessons from Star Wars

 
Listen to this episode on: Apple Podcasts, Google Podcasts, Spotify, Castbox, Podcast Addict, Stitcher

About This Episode

Even in a galaxy far, far away, many of the cybersecurity and digital quality concerns that cause product releases to fail still apply. Even the mighty Death Star is left vulnerable by one tiny flaw that should have been discovered and patched, and that is just the start of what we can learn about digital quality from the world's favorite sci-fi universe.

Security consultant, expert and author Adam Shostack shines a light on the Star Wars series to reveal cautionary tales about software quality, cybersecurity, thermal exhaust ports and more.

Special Guest

Adam Shostack

As President of Shostack + Associates, Adam Shostack delivers high-quality training and consulting in security engineering, including threat modeling and DevSecOps. Shostack also serves as a member of the Black Hat Review Board, RISCS Advisory Board and IriusRisk Advisory Board. He is also the author of three books, including Threats: What Every Engineer Should Learn From Star Wars.

Transcript

(This transcript has been edited for brevity.)

DAVID CARTY: For four decades now, the original Star Wars trilogy has delighted sci-fi fans all around the world, with so many great scenes, classic characters, and unforgettable moments. I mean, how about that classic twist that Darth Vader is actually Luke's--[BLEEP]. Oh, actually, we better leave that out, just in case somebody's been busy over the last 42 years but plans to watch it in the next few months. We don't want to spoil it for them. Got it, great.

Our guest today, Adam Shostack, has been a fan of the series ever since he saw the first installment, Episode IV--A New Hope, in theaters. The film resonated with him way back then, and he continues to take lessons out of it today.

ADAM SHOSTACK: So you know, I was thrilled. I spent the summer insisting on getting the little Kenner toys and playing Star Wars with my friends. It was pretty much the prototypical 1977 Star Wars experience.

CARTY: While many fans dig a bit deeper into the mythology as they get older, the magic Adam sees in the films is that they're essentially movies for kids. The story has simple themes, like good versus evil, the corruption of greed, or droids down on their luck. Adam tries to enjoy those simple pleasures of the Star Wars universe.

SHOSTACK: As a kid, it's hard not to love the climactic final battle. As an adult, the thing that I most appreciate about the movies is the world-building, the little touches that draw us in and make us believe we're looking through a window at this galaxy far, far away. You know, if you had told me that 40-plus years later I'd still be talking about it, I don't know that I would even be able to conceptualize what that meant. It's clearly stuck with me.

CARTY: One thing Adam believes strongly--whatever your fandom, whether it's Star Wars, Star Trek, sports, or something else altogether, as long as it's helping you live your best life and it's not pushing anyone to the Dark Side, it's all fine by him.

SHOSTACK: If you want to be a fan of whatever series you're going to be a fan of, I don't see any value in judging you, in describing it as healthy or unhealthy. If you're living a good life, and your fandom isn't getting in the way of that, why should we be any more judgmental about people who like Star Wars than people who like a sports team, or a band, or any of the other things that we see people get really excited about? As fans, as enthusiasts, I think it's great.

CARTY: This is The Ready, Test, Go. podcast, brought to you by Applause. I’m David Carty.

Today's guest is Star Wars fan and security engineering expert Adam Shostack. Adam is the president at Shostack + Associates, where he and his team help deliver high-quality training and consulting in security engineering, threat modeling, and DevSecOps. Adam serves on several advisory boards as well, including the Black Hat Review Board. He is also an author, and his latest book, called Threats: What Every Engineer Should Learn from Star Wars, came out earlier this year. Look, when it comes to security, much like digital quality, it takes preparation and a desire to uncover severe issues, whether you’re protecting a rebel base or just trying to support that new mobile app launch. Let’s learn more from Adam, who is the guest we're looking for.

Everyone knows the importance of security, at least on a high level, but not everyone takes the necessary steps to build security in from the very beginning. And this is common with software testing, too, right? The notion that you can just add a pre-release step and everything will be fine, that's fiction, just like Star Wars. But before we get into Star Wars examples, tell me from a high-level perspective what a programmatic approach to security should look like in our modern day today.

SHOSTACK: When we think about security, like other things that we can think of as quality, you can't bolt it on at the end. You can't sprinkle it on. You've got to design for testing, and you've got to design for security. You've got to think about what can go wrong as you're making choices about how to build things, and you've got to think about that through the whole lifecycle of the software. Whether you're thinking about it in terms of Agile--what are we working on this sprint? Does it have security implications?--or you're thinking about it as something much bigger and longer-term. I work with automotive companies, and they're on five-year build cycles. It doesn't make sense to try and impose an Agile lifecycle when you've got to source 100,000 chips and then program them. You'd better be thinking about, can those chips hold some nonvolatile memory with a certificate in it, so you can do a digital signature check as you're loading software? If not, bolting that on at later stages becomes more difficult and more expensive.

And similarly, if I design V1 of my API, I roll it out this sprint, and I'm like, well, we'll add security onto that later, and then teams within my organization--or even teams outside my organization--start taking a dependency on the current API, I've built myself a migration problem by not thinking through what I was building a little bit before I rolled it out. And I could avoid that quality problem. I could avoid all of the investment in having two infrastructures, multiple layers of service discovery. All of those sorts of things go away with a little bit of forethought.
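To make that digital signature check concrete, here is a minimal sketch of boot-time verification, assuming an Ed25519 public key provisioned in nonvolatile memory. This is an illustration of the idea, not code from the episode; the file names and key source are hypothetical, and it uses the Python cryptography package.

```python
# Sketch: verify a detached signature over a firmware image before loading it.
# The public key would live in write-protected nonvolatile memory; here it is
# read from a hypothetical file for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def load_verified_image(image_path: str, sig_path: str, pubkey_raw: bytes) -> bytes:
    """Return the image bytes only if the detached signature verifies."""
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_raw)  # 32-byte raw key
    # verify() raises InvalidSignature on any mismatch, so a tampered or
    # unsigned image can never fall through to the load path.
    public_key.verify(signature, image)
    return image

if __name__ == "__main__":
    root_key = open("root_pubkey.raw", "rb").read()  # stand-in for the cert in NVM
    try:
        firmware = load_verified_image("firmware.bin", "firmware.sig", root_key)
    except InvalidSignature:
        raise SystemExit("refusing to load: firmware signature check failed")
```

Retrofit this after V1 ships and you face exactly the migration problem Adam describes: fielded devices with no key storage and no verification path.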

CARTY: OK, now let's get down to brass tacks. Let's talk about the thermal exhaust port. This two-meter-wide security vulnerability takes down the entire Death Star. Now, there's obviously a lesson here in both digital quality and security. So when you think about it from that perspective, from that example, what do you take away from it, and what lessons can we learn?

SHOSTACK: OK, so first, I've got to give a nod to a little video titled "The Death Star Engineer Speaks Out," in which he said, "Nobody told me there were space wizards who could make that shot. If somebody had told me that, I would have engineered it differently." So at one level, it's a quality problem--a requirements quality problem, not an implementation quality problem. At another level, we can think about it in terms of how the Empire handles failure. There's a famous line--"You have failed me for the last time"--and then Darth Vader chokes you to death. If you have an environment in which people can't bring you bad news--"Hey, there might be a problem with this thermal exhaust port"--[MAKES CHOKING NOISES] then you're not going to have quality. And so, like I said, there's so much richness in the world-building. We can take lessons about blameless cultures. We can take lessons about good requirements. We can take lessons about engagement--maybe the people building that thermal exhaust port were like, oh, should we add a couple of baffles here? Should we put a grate over this thing, maybe a pop-up--the right fix. As I said, I love fans. I love the enthusiasm that they bring, and we could spend an entire episode redesigning the thermal exhaust port. But the engineering reality, as shown in Star Wars, is they had multiple inhibitors to quality, deployment, and delivery that ended up killing billions of people, wasting years of massive investment, and really messing up the Empire's plan for all those planets to fall into line.

CARTY: Right, and if we're thinking about modern applications as Death Stars--probably legacy apps, in this case--with many different thermal exhaust ports opening over time, this is probably a complicated answer, but how can we best defend ourselves against intrusion or utter destruction?

SHOSTACK: So the first thing is to put the first thing first: let's think about the possibility of intrusion early. And when we think about that, we can think about the thermal exhaust port. We can also think about the interior of our systems. And we can use structures. We don't have to search our feelings and come up with these security issues because we were born knowing about them. And the book is focused around two sets of structures. One is a mnemonic, STRIDE--Spoofing, Tampering, Repudiation, Info disclosure, Denial of service, and Expansion of authority. And we can use STRIDE to help us anticipate problems. And we can assemble those problems, those threats, into what are called kill chains--sequences of actions that people might take. So they'll deliver an exploit. The exploit will work. It will persist. It will talk to command and control. And we can think about each of those stages and what we as defenders can do to protect ourselves against them, detect them if they happen, and respond if they do. And can I go super geeky here?

CARTY: Absolutely, it's a Star Wars episode.

SHOSTACK: So here's the thing. I believe that when R2 plugs into the Death Star computer, R2 is actually connected to what's called a honeypot. It's a system that's designed to isolate and observe what the attackers are doing. Because otherwise, how does R2 discover where the Princess is? That's not information that should be exposed to everyone on the network. And so if we believe that R2 is connected to a honeypot, the Empire can observe that. They know that people are going to the detention bay. They understand that they have only so much time to put the tracking device on the Millennium Falcon, which is going to escape. And if it's not a honeypot, all of that makes even less sense.
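Picking up the STRIDE mnemonic from a moment ago: one lightweight way to put it to work is as a per-component review checklist. The sketch below is an illustration, not code from the book, and the question wording is a paraphrase of each category.

```python
# STRIDE as a reusable threat-review checklist. The prompts are
# illustrative paraphrases of each category, not canonical definitions.
STRIDE = {
    "Spoofing": "Can someone pretend to be another user, service, or device?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Can someone deny an action because we failed to log it?",
    "Info disclosure": "Can data reach someone who shouldn't see it?",
    "Denial of service": "Can the system be made unavailable or unusably slow?",
    "Expansion of authority": "Can someone do more than they're authorized to do?",
}

def review(component: str) -> None:
    """Print a threat-modeling worksheet for one component of a system."""
    print(f"Threat review: {component}")
    for category, question in STRIDE.items():
        print(f"  [{category}] {question}")

review("droid maintenance port on the detention level")
```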

CARTY: Yeah, it's interesting to think about. And you get into so many different examples like this in the book. I do want to get back to the example that you mentioned before of the Empire handling bad news poorly in this command and control kind of culture. To me, it makes me think of the pathological culture of the Westrum model, where there's low cooperation, messengers are shot--or Force-choked, whichever example you want to use--and people are made to be scapegoats. Now, I wouldn't expect Darth Vader to be a collaborative sort of leader. But can you tell us a little bit about why this type of power-oriented approach to leadership is problematic in a software development and delivery kind of context?

SHOSTACK: The first thing I think about when I hear this is diversity and inclusivity. If we have a whole set of people that we're paying to do work, I would like them to bring their whole brains to the problems we put in front of them. And if we're scapegoating, we're blaming, we’re doing all of these negative behaviors, people withdraw. They deliver the minimum they feel they can deliver, and we're just not getting their best work. And so yeah, there's so many things wrong with it, and yeah, I don't know how deep you want to go. I don't have this--

CARTY: It's fascinating to think about. I think a lot of us have probably worked jobs like that in our life, where you feel like you don't have the support of leadership and, like you say, you withdraw. And it's not even necessarily a mental health issue so much as it is a worker productivity issue. I mean, I think sometimes those points get obfuscated a little bit, but in order to get the most out of your people, you need to support your people, right?

SHOSTACK: Yeah, absolutely, absolutely. I like to use the core four movies, but there's this wonderful scene in Episode VIII where the Rebel Alliance is trapped on this planet. They're in this cave, and Princess--General Leia, excuse me--starts walking off, and everyone starts following her. And she turns around, and she's like, why are you all following me? Somebody else needs to pick up and lead here. And that leadership style of, hey, I want everyone to pick up a piece of getting things done, we've got a clear mission, we've got a goal, and we're all working towards it together--that's really the thing that allows a small, scrappy Rebel Alliance to win. But it also harms them regularly, because people are running off in every direction. They're struggling to get everyone moving in the same way towards that goal. Everyone has their own opinion about how to get there. And so finding the right balance is complicated, not only in our world, but even in the world of Star Wars, which we were just talking about being simplistic. This question of how we get people to work effectively is complicated because people are complicated. We all have a different perspective on things, and balancing those and orienting people without being overbearing is a tricky subject.

CARTY: Absolutely. That’s why there are entire sections of bookstores devoted to leadership books, right? But let's go back to your book. You have a chapter in the book on information disclosure and confidentiality, which I really found fascinating. Among other things, you explain how attackers can reconstruct cryptographic keys from the sound a CPU makes and other surprising methods of potential intrusion. What are some common ways you see businesses today err with sensitive data, and what does a safe data security posture even look like today?

SHOSTACK: So the first thing to say here is I throw in some of these fun examples, like the sound a CPU makes, not because they're the first thing that people should be thinking about, but because we should be thinking about all of the side effects of the computation we do. But before we do that, we have to know where our sensitive data is, we have to know who's supposed to get to it, and we have to build our systems in a way that allows us to operate them knowing that some people will need that sensitive data, and knowing that we need to protect it. And so being able to think about how do we protect, how do we detect, how do we respond when there are problems, because we've thought about where this sensitive data is--if we're not doing that, our data ends up all over the place. It ends up in systems that don't have proper access control. For example, over the last 5, 10 years, we've seen just about every application in the world be refactored so that it no longer contains credit card numbers. We used to have credit card numbers scattered all over the place, and then attackers would break in, and they'd steal them, and everyone would have a new credit card, et cetera. And when it's credit cards, maybe that's sort of acceptable. But when it's your Social Security number, when it's your medical records, when it's sensitive information about your politics or your religion--which, in various parts of the world, can get you in trouble--having that information scattered throughout our systems makes it very difficult to protect. You've got to protect everything to the same extent.

If you think about it up front--if you say, wow, there's an information disclosure threat, and because we've got people from all over the world working here, somebody brought this up early and said, wow, this is a problem--or maybe our lawyers brought it up and said, hey, GDPR means this data here is sensitive--we can think about that. We can think about putting it into one well-protected place and having everything else reference it, for example. We can think about, is this data that we want to sell? Do we want to let our advertisers see it? For example, the FTC, the Federal Trade Commission, has been fining companies that do medical-adjacent work because they were allowing Facebook to put tracking pixels onto their web pages and learn about patient diagnoses. If we thought about that and said, huh, we probably shouldn't have advertising on this page because of the sensitivity of it, we could manage that. Just a little bit of work asking what can go wrong up front can make a substantial difference.
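As one concrete reading of "put it into one well-protected place and have everything else reference it," here is a sketch of tokenization, the pattern that moved credit card numbers out of most applications. The in-memory vault below is a stand-in for a hardened, audited service; nothing here comes from the episode.

```python
# Sketch of tokenization: the sensitive value lives in one guarded vault,
# and every other system stores only an opaque token. A real vault would be
# a separate, access-controlled, audited service, not an in-process dict.
import secrets

class TokenVault:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)  # opaque and unguessable
        self._store[token] = sensitive_value
        return token  # safe to store, log, and pass between systems

    def detokenize(self, token: str) -> str:
        # In production this call would be authenticated, authorized,
        # rate-limited, and logged -- it is the one sensitive chokepoint.
        return self._store[token]

vault = TokenVault()
card_token = vault.tokenize("4111 1111 1111 1111")
print(card_token)  # downstream systems only ever see this token
```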

CARTY: Yeah, it's an interesting point. And I don't mean to project anything onto you here, but in reading the book, it sounds like you're troubled by the capabilities of mobile devices--or at least the capabilities of potential threat actors to exploit them. And there are a number of examples you bring up--for example, how a mobile phone has the ability to read text off of a paper that might be way off in the distance. I mean, the kinds of things that the Rebels or the Empire might use against each other, maybe with the help of droids, or something like that. So there are clear privacy and security concerns that manifest from mobile capabilities. Is there a way that mobile developers can program apps in an ethical way that helps protect their users? What would you recommend there?

SHOSTACK: So the very first thing that I would recommend is be aware that the sensors in the device are way more powerful than anyone expects. But the second thing is, think about what you're doing. Think about how the Empire might use this capability. Think about how a newspaper might report on it in a negative way, and ask, do we want to do this? And we can get more specific, but I believe most engineers actually want to be ethical. Very few will raise their hand and say, the heck with all of it, I'm looking for the most profit, whatever the cost. Most people would be like, no, I would like to go home and look in the mirror and feel good about the work that I do. And so I feel like most developers are not doing the work that they do with the intent to be evil. They're doing it with the intent to serve some customer base. And I do think that we need to start thinking more than we have about what that means. Generative AI is a really interesting example of this. I finished the book less than six months ago, and all of the things about ChatGPT and whatnot have happened since I finished it. As we roll these systems out into the world, I know the folks at OpenAI are working to think about the impacts of what they're doing. They have an entire team dedicated to the societal impact of this new thing they're building. If what you're building is really new, you can take a page from their book and say, what is the effect of releasing this? How will this impact the world? Can we build in safeguards? Maybe we get to the point of asking, should we build it? There's a joke: a Silicon Valley firm builds the evil doomsday machine from the book For God's Sakes, Whatever You Do, Don't Build the Evil Doomsday Machine, and then issues a press release announcing how excited they are to have fulfilled the vision of the book.

CARTY: Yeah, that's interesting, you know? And Bluetooth is another kind of example of this. You wrote in the book about how even a person's gestures and typing can be measured from the data that's being transmitted over Bluetooth. Perhaps the Death Star blueprints might have been intercepted via Bluetooth signal. I'm not sure. But Bluetooth has obviously become a standard for connected device manufacturers, and there's something troublesome there, too. So where should IoT device manufacturers go from here? What is a safer way to move forward?

SHOSTACK: Yeah, well, it's a huge question. And there are so many emerging standards for connected device security that, again, thinking about it in terms of what our device is designed to do is a really important step. There was a story that came out where one of these smart home devices had a microphone in it. And there's literally no functionality in V1 that uses a microphone, but it costs like a nickel to add, and they said, yeah, maybe we can add voice control later. And so they put a microphone in there with no way to say, I'd like to turn the microphone off, or, please indicate that the microphone is listening. And so one layer of this is to consider these things as you're building these devices.

But there's another important layer, which is, what are the standards committees doing? So let's say I have an Acme Anvil-Dropping IoT Machine, and it's phone-controllable. We have to have the standards makers asking, what can go wrong with this machine? And the fact that it drops anvils on people, we're going to leave that aside. But how do I authorize? How does it log? What information does it send where? These threats are things that we can think about at protocol design time. And like I was talking about earlier, if I ship my V1 without security in it, it's very hard to build that in later--to get quality in later. And so we've got to be looking to standards bodies to incorporate more threat modeling, more clearly stated threats, so that we can expect the devices that we use to be done better. Saying, oh, it's the app developers' fault--they couldn't have done anything because the standard doesn't support what we need, but we're going to look to the app developer and call them unethical--doesn't strike me as fair, and doesn't strike me as moving responsibility to the folks who are best positioned to really solve these problems.

CARTY: You know, I've got my fingers crossed that for Father's Day I get the Acme Anvil Dropper. So I'm hoping the family watches this episode of the podcast. So, fingers crossed on that.

But you know, Adam, you mentioned it’s hard to parse data structures reliably. And there's a great line in your book that I think helps get this point across about data being tainted. Quote, "Just like you can never quite get the smell of tauntaun out of your clothes, you can never quite get data to be perfectly clean," end quote. Aside from the vivid smell this conjures up--I can smell it just talking about it--what sorts of checks and balances should be in place to make sure that we're helping to sufficiently sanitize data?

SHOSTACK: So thank you, I was really proud of that line.

So there's a few relatively easy things to check for in the simple case--checking for length, checking for expected character set, checking for semantic quality. I've been playing a lot with AI image generators lately, and I wrote a little code to work with one that uses a REST API. I call this REST API, and it gives me back a URL. I didn't want to manually copy and paste the URL, so I did a thing to feed it to the macOS open command. And now I'm taking URLs from this AI tool, and I'm feeding them into the system-level API on my Mac. So what I did was I said it's got to be less than 120 characters. It has to match A to Z, 0 through 9, dash, colon, slash, and it's got to start with http://deepai, blah, blah, blah. And I think the most important of these is the character set. Saying that only these characters are allowable means that if I get anything weird, it'll get rejected, and it'll throw an error message. The nice thing about simple things like URLs--and URLs are not simple, by the way--but the nice thing about the relatively simple URLs that this gives me is I can apply those checks. I'm applying business context rules that describe what I expect to have happen, and I constrain that to a method called checkURL that checks that the URL meets my expectations the way it's used further down in the code. The more we check that the thing is what we expect, the less we're surprised later on.

And this can get really complicated. If the thing I expect is an MKV video container, that's a really complex container format with layers of containers inside of it that eventually contain video frame diffs. I don't know how you validate that except to put the thing into some sort of sandbox that prevents it from doing weird stuff. So you can write some code. You can use macOS's sandboxd. You can use Linux AppArmor to say, hey, this process should never write to disk. It should read from disk, it should write to the monitor, and that's all it should ever do. And if we do that--if we're constraining the behavior of the code--we don't necessarily need to anticipate all of the weird ways in which things can go wrong, because we're focusing on the thing that we can model.

And one of the choices I had to make in writing the book was, how much does every engineer need to know about writing software exploits? Software exploits are complicated. They’re a dark art. There's a lot of learning that needs to go into being able to do that. And I'm pretty pleased with the fact that I found some simpler, more actionable things like checking the business context of what you're accepting, checking the behavior of the app, that I think are more actionable for a normal developer.
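Here is a hedged reconstruction of the checkURL gate Adam describes above: enforce a maximum length, an allowlisted character set, and an expected prefix before the URL ever reaches the system-level open command. The pattern and prefix are placeholders, not his actual code, and a dot is added to the character set so hostnames can match.

```python
# Reconstruction (not Adam's actual code) of a checkURL-style gate that
# validates business-context rules before handing a URL to macOS `open`.
import re
import subprocess

MAX_LEN = 120
# Allowlist: letters, digits, dash, dot, colon, slash. Anything else --
# spaces, quotes, semicolons, unicode lookalikes -- is rejected outright.
ALLOWED = re.compile(r"[A-Za-z0-9.:/\-]+")
PREFIX = "https://api.deepai.org/"  # placeholder for the expected service

def check_url(url: str) -> str:
    """Return the URL if it meets expectations; raise ValueError otherwise."""
    if len(url) > MAX_LEN:
        raise ValueError("URL too long")
    if not ALLOWED.fullmatch(url):
        raise ValueError("URL contains unexpected characters")
    if not url.startswith(PREFIX):
        raise ValueError("URL does not point at the expected service")
    return url

# Pretend this came back from the image generator's REST API:
response_url = "https://api.deepai.org/output/generated-image.png"
subprocess.run(["open", check_url(response_url)], check=True)  # macOS only
```

The character-set check does the heavy lifting: anything outside the allowlist fails closed with an error instead of flowing into the system API.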

CARTY: Adam, if our listeners watch the original Star Wars trilogy this weekend, what would you recommend that they look for or make note of that might be helpful for them to take back to their jobs?

SHOSTACK: So the most important thing I want them to catch is the Post-it notes with passwords that R2-D2 takes advantage of. Every time he's plugging in, you can see there's a Post-it note, there's a password.

No, more seriously, enjoy the movies and recognize how they can teach us so much. We can use them as analogies for our day-to-day job, and they really do work.

CARTY: And if you really pay attention, maybe you can even write a book about it.

SHOSTACK: Indeed, indeed. There's room for more books.

CARTY: All right, Adam. Final sprint questions here. In one sentence, what does digital quality mean to you?

SHOSTACK: A lack of surprises--the thing does what we expect it to do.

CARTY: I like that one. That’s a great way of looking at it. What will digital experiences look like five years from now?

SHOSTACK: I don't think they’ll look like the metaverse. I think that we're going to see an explosion of AI-generated content that's going to make it very hard to know what is real. And I think that that's a huge danger that we're barely beginning to grapple with.

CARTY: Very interesting. What's your favorite app to use in your downtime?

SHOSTACK: Can I be aspirational and say Duolingo, or should I own up and admit it's Plants vs. Zombies?

CARTY: Well, it's good to have a couple apps going at the same time, right?

SHOSTACK: That's right, that's right.

CARTY: And Adam, what is something that you are hopeful for?

SHOSTACK: The next season of Andor. I really think that Andor has been one of the best things done in the Star Wars universe in a long time, and I'm really excited about the slower-paced storytelling, the politics, the personalities that are coming out. Expanding the Star Wars universe beyond some of the core characters to new characters gives the folks a chance to play with new styles of storytelling. And both as a fan and as someone who thinks about how we use storytelling to make the sorts of points I'm making in the book, I've just got so much respect for the way that Andor has been put together, which is simultaneously true to the core stories, and new and different, and a little bit more adult. And so I'm really looking forward to the next season.
