Agile pioneer Lisa Crispin shares how teams can approach software testing as a holistic activity, how they can build trust with one another and why they must own quality in modern software development.
Ready, Test, Go. brought to you by Applause // Episode 35
Holistic Testing and the Culture of Quality
About This Episode
Special Guest

Lisa Crispin
Lisa Crispin is an independent consultant, influential author and co-founder of the Agile Testing Fellowship. She is a pioneer in Agile testing, advocating for a whole team approach to build quality into every product and process.
Transcript
(This transcript has been edited for brevity.)
DAVID CARTY: Donkeys get a reputation for being stubborn, but in reality, they’re cautious and curious animals. Ironically, it was curiosity that led Lisa Crispin to adopt a few of her own, just two to start, but then the family grew.
LISA CRISPIN: I grew up riding horses. I have had horses all my life. Well, more than 20 years ago now, I was leasing a horse from a friend, and her neighbor rescued some donkeys from an auction, some miniature donkeys. And she said, I’m going to take one of these donkeys. And I’m like, rolling my eyes, what are you going to do with a donkey? So as soon as I saw the little donkey, I fell in love with him, and I had to have him instead. We soon learned that donkeys come in pairs. You need at least two donkeys. So we adopted another donkey. I had the two donkeys, and boarded them at my friend’s ranch.
And then my husband and I bought our own horse property. This is back in Colorado, where we used to live. And we had coyotes on our property. And miniature donkeys can’t defend themselves against coyotes or big dogs. So I called my local donkey shelter and said, well, what do we do to protect these little donkeys from coyotes? And they said, oh, well, you just come out here and adopt a standard-sized female donkey, and she’ll protect them. And so we did, and she does. And she had to learn to drive too. Ernest, Ernest was the first one we got. And then Chester is the little donkey we got to keep him company. And then Marcella is the big donkey. And she takes her job very seriously. Before we got her, I mean, we had had coyotes coming up on our front porch. They were so bold in Colorado. And after we got her, it was like when you burglar-proof your home: they could still get in if they wanted to, but it’s easier to go to the next house. So we never saw the coyotes again.
And after we moved here, we had taken our donkeys to drive at a local non-profit farm. And they had one little elderly donkey who was all by herself. And winter was coming on, and they said, boy, we don’t have a place to put this donkey for the winter. And I said, well, we could take her for the winter, thinking in the back of my mind, I bet they won’t want her back. And sure enough, they didn’t. So she has her forever home with us. And she’s now 39 years old. She doesn’t have to drive or do any job. She’s retired. [LAUGHS] And so yeah, that’s how this happened.
And they’re all senior donkeys now. So they used to be pretty low maintenance. I just had fun with them, but now they need a little extra care, and they take hours of my time. But it’s my hobby, so it’s fine. So yeah, they all pull carts and wagons, skid logs, do work around the farm. And the two little ones, we regularly take out other places. They love to go on a road trip. So we’ll put them in the trailer with the carts, and go to other farms, and drive, give them a new perspective. It’s really hard to keep them while giving them all enough exercise. Donkeys evolved as desert animals. In the wild, donkeys walk 35 to 50 kilometers a day, basically, in the desert. So they’re eating like a blade of grass every kilometer. Well, we keep them in an unnatural environment where they’re confined to a smaller space, and they’re eating straw, or hay, and they process it very efficiently. So they’re all too fat. And so it’s a challenge to get out, and get them enough exercise all the time, so. [LAUGHING]
CARTY: Lisa’s donkeys get work done all around her property. Their biggest motivator, trust. Once you earn their trust, you have a loyal companion.
CRISPIN: We tried to train them and found out they don’t work the same as horses. You need a different approach. They have to work on trust. And once you have a donkey’s trust, you have to be careful never to lose it, because they hold a grudge. So with a horse, you can bribe a horse into doing things it may not initially want to do. Some people can bully a horse. I wouldn’t do that, but people can force a horse to do something it doesn’t want to do. And donkeys are looking out for number one. They’re very self-protective. If they don’t think you have their best interests at heart, or they’re not sure that what you’re asking them to do is the best thing for them, they just won’t do it. And that’s how they got their reputation for being stubborn. They’re just being protective.
Now, if they trust you, they’ll do anything. And my donkeys will come in my house. I’ve taken them inside schools, taken them to senior centers, assisted living centers. So that was the most surprising.
But it’s also the most fulfilling, in that because they trust me, I can ask them to do anything. And it’s a lot of fun. They spread a lot of joy. They love kids. They love older people. They love everybody. Chester, in particular, if he sees somebody take their camera out to take a picture, he’ll stop, put his little ears up, and say hey, why don’t you come pet me? And nine times out of 10, they do. Who can resist a donkey? And that makes him very happy. So it’s fun to– other people do art, or crafts, or something that people can admire. And all I can give to the world is a smile, because they’re looking at a donkey, or petting a donkey.
CARTY: Of course, people are motivated by trust as well. So perhaps, it’s no surprise that the lessons Lisa learns in caring for her donkeys translate to how she thinks about leading Agile teams. Oh, just don’t get their snack supplies mixed up.
CRISPIN: I’ve learned a whole lot of things from donkeys that really apply to Agile teams, or any kind of software development team. You got to have that trust. I’ve been lucky to work on teams where we built that trust. It wasn’t easy, we had a lot of learning to do, and it took time to learn good practices and commit to developing a quality product and collaborating on that. And if you have that level of trust, you can disagree, like, oh, we’re not sure how to implement this new feature, or what’s the best way to architect this new capability. And we can have some pretty heated discussions about it. We can be honest and open in those discussions because we know it’s not personal. We know we have each other’s back. We know that nobody’s going to feel criticized because they had a different idea, and we can come to a resolution. So I think that trust is so important, and psychological safety relates to it too. We have to feel safe to point out problems, try experiments. We might fail. We have to feel safe to fail. [MUSIC PLAYING]
CARTY: This is the Ready, Test, Go. podcast brought to you by Applause. I’m David Carty. Today’s guest is donkey-raiser and Agile-testing pioneer, Lisa Crispin.
Lisa is an independent consultant, influential author, and co-founder, alongside Janet Gregory, of the Agile Testing Fellowship. She’s been at the forefront of the movement to build quality into every product and process, with her focus on a whole team approach. She is also the co-author of several books, including Holistic Testing: Weave Quality Into Your Product. Quality is everyone’s responsibility, but how do we really get there? What’s the turning point that helps non-testers, from developers to designers, truly take ownership of quality? Lisa explains why it takes a real focus on culture to influence the daily behaviors of individuals and teams. So bring all your jacks and jennies to listen to our discussion with Lisa. Here she is.
You’ve been at the forefront of Agile testing for a long time. Tell me about what inspired your holistic testing model, along with Janet Gregory. And how did it evolve out of what you were seeing in the industry at the time?
CRISPIN: Oh, that’s a really good question. Janet and I both started in Extreme Programming teams probably 25 years ago or so, different teams, her in Canada, me in the United States. And we were lucky to be on teams that practiced Extreme Programming. And part of that ethos is that the whole team owns quality. So that was ingrained in us, and we saw how well that worked. And also, we were integrated into the team. So we were involved in all aspects of software development, around that entire software development lifecycle. And early on, we borrowed Brian Marick’s Agile testing quadrants, or the Agile testing matrix, as I think he called it, and expanded that model: when we’re planning our testing, we need to think about a lot of different types of testing activities. Testing activities that are technology-facing, which the team owns for doing their development the best way. Business-facing criteria for what quality means to the business, in the form of story tests and acceptance tests. Tests that guide development, using test-driven development, behavior-driven development, that sort of thing. And tests that critique the product: we’ve written some code, we’ve deployed it somewhere, and now we want to see if it really meets the needs of our stakeholders or our customers. So we have the quadrants model, and lots of people found that super useful.
And then around 2010, people started talking about DevOps and continuous delivery, and someone came up with this DevOps loop. I have not been able to find who came up with the loop. Patrick Debois came up with the term DevOps. He was one of the pioneers, but even he couldn’t tell me who came up with that DevOps loop that has these little stages around an infinite loop. But one of the stages in there says test. And that really bothered me and Janet, because I think what they mean is the automated tests that run in your continuous integration. But when you just look at the visual, you’re like, well, test is not a phase. We do testing all the way around. And then in 2016, Dan Ashby did a post with what he called a continuous testing model. He took that DevOps loop, and he wrote, “We test here, we test here, we test here, we test here,” all the way around. Made it so visual of, this is what we really do. Unfortunately, the term continuous testing got adopted to mean the automated tests that run in continuous integration.
And testing is so much more than that. So actually, Janet had the idea– she was playing– we were playing around with that loop, and how to look at it. And we kind of refined that model, but she really was the leader on it. And then she came up with the idea of calling it the Holistic Testing Model, because holistic captures the idea that the whole team owns quality and testing, and it goes through the whole lifecycle of software development. And you know, the word Agile, unfortunately, means 20 different things to 20 different people. [LAUGHS] It’s kind of diluted now, unfortunately. And of course, our brand has always been Agile testing, because that’s what our books were called. But we thought holistic captured it a lot better. And we got really good feedback on the model. Lots of people were already doing the things that we have in the model, and they’re like, yeah, that really captures what we do. I can explain our testing approach and testing strategy to other people by showing them this model and talking through it. And other people were really struggling, like, how do I fit testing into continuous delivery? How do we do all these testing activities? Where do testers or quality engineers need to be involved? Where should they be participating? The model gives them a place to talk about that, craft their strategy, see what relationships they need to build, where they need to go, and what they can contribute. So it seems to have really met a need, and it’s really great to see people enjoying it and using it.
CARTY: Yeah, the Agile Manifesto’s language is surprisingly clear and simple, and yet it’s been twisted every which way in the 20-some years since it came out, right? But to get to one of the points you just mentioned, and I want to talk about all of this a little bit more: you’ve said that the best performing teams weave testing into every stage of development, like we just talked about. What specific advantages does that give them, whether we’re talking about speed, confidence, or customer experience improvements that other teams might miss out on?
CRISPIN: Well, part of it is one of the big secrets of successful software development: working in small batches, which I think people have been saying for 50 years. [LAUGHING] Why don’t more people do it? The idea is, we work in these small increments, iteratively. We want to get those small changes done, and tested, and in production as soon as we can. And to do that, we have to minimize hand-offs. We need to work together. And we need to think about risks and value to customers. So in those early planning stages, we talk about, what’s the value of this change we’re about to work on? What could go wrong? How would we mitigate those risks? We may be able to mitigate those risks through tests, acceptance tests at different levels. It may be that there are things we can’t mitigate with tests. Some things you can’t test. Some things you can’t anticipate that users will do. And today, our applications are so complex. We just can’t think of everything. So we also have to think about how we’ll mitigate risks on the right side of that loop.
For things that are in production, we need to have good monitoring. We need to have good observability. We need to be able to see a problem right away and analyze that problem. And who’s really good at seeing patterns? Who’s really good at noticing things? Well, testers. Often, when I’ve been on teams and we’ve got dashboards showing what’s happening in production, it’s a tester who notices, what’s that weird spike in our latency? So those skill sets are valuable in all areas of the development process. So I don’t know if I got off topic from your question, but–
CARTY: Yeah, no, I think that hits on it. And to piggyback off of that, we talked about the DevOps loop earlier, observing real outcomes, as opposed to making assumptions about user behavior, or just blindly tracking certain metrics, goes a long way toward achieving your goals. Measuring the wrong things drives wrong behaviors. So when we’re talking about observability or feedback loops, what should those actually look like in practice, especially across these different teams where they have quality woven in?
CRISPIN: Well, we want them as short as possible. And I have found, in my experience, that pairing, or working in an ensemble, or software teaming, as Woody Zuill calls it, is the fastest feedback. If I’m working in a group with a couple of developers, and maybe the product owner or somebody else, and we’re building a story together, I’m there to ask those what-if questions, or to point things out: oh, should we have a test for this? Oh yeah. Or, hmm, I don’t think that’s quite what the product people wanted in this story. And now we can ask the product owner, who hopefully is right there in the room with us. So that’s the shortest possible loop, when you’re actually working together.
And then maybe the next shortest loop is test-driven development, which has not been as widely adopted as we wish, even though it prevents so many bugs and builds in so much quality. But that’s a quick loop too. We’re writing a test, we’re writing the code to make the test pass. And now we build up a suite of those unit-level tests that we can run really quickly, right on our desktop. So if we broke anything with a new change, we’ll know right away. And then the next step is, OK, now we commit to our repository, and that’ll kick off a build in continuous integration that will run more test suites. And the further we go, the slower the feedback may be. So we may get to things like performance testing or security testing further down the line, depending on what our needs are. Maybe performance testing is the first thing we need to do. But those later stages are slower. And then into production, observing production: maybe we use a release strategy where we deploy to production but don’t release to customers, and we use it ourselves and watch what happens. That’s slower feedback. It’s getting more expensive, because now we’ve made a bigger investment in that change. But it’s really valuable feedback. We still need it. So the further we go, the slower the feedback is, but we still need that feedback.
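The unit-level loop Lisa describes, write a test, write the code to make it pass, rerun the whole suite on every change, can be sketched in a few lines of Python. The discount function and its tests below are hypothetical, invented purely to illustrate the loop; they are not from the episode.

```python
# A minimal sketch of the test-first loop: the tests below were
# (notionally) written first, then the function was written to make
# them pass. The suite runs in milliseconds on a desktop, giving
# fast feedback before anything is committed to CI.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    discounted = price * (1 - percent / 100)
    return round(max(discounted, 0.0), 2)

def test_ten_percent_off():
    assert apply_discount(100.0, 10) == 90.0

def test_discount_never_goes_negative():
    assert apply_discount(10.0, 200) == 0.0

if __name__ == "__main__":
    test_ten_percent_off()
    test_discount_never_goes_negative()
    print("all tests pass")
```

With a runner such as pytest, tests like these execute on every local change; the same suite then reruns in continuous integration after each commit, which is the slightly slower loop Lisa mentions next.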
And of course, we need to listen to our customers. So we have all our analytics tools. But, you know, even better to be able to directly speak to customers, get our customer support people to tell us what customers are saying, that user experience aspect. And I think that’s one of the places where people in testing and quality engineering contribute a lot.
CARTY: That makes sense. So we’ve got these different voices, these different stakeholders. Let’s talk about that culture component, and how it is all supposed to blend together. Now, I know you’ve written and spoken about this quite a bit in the past. It’s difficult to instill that culture, but it’s also an incredibly important part of the holistic testing process. A lot of teams might claim to have a culture of quality, but that can mean anything. So what’s the difference between saying that and proving it with day-to-day behavior?
CRISPIN: Well, quality is like mom and apple pie. Everybody says they want it. I’ll ask any executive, what level of quality do you want? Oh, we want the best quality ever. But they don’t understand the value of quality, what that gives the business, what that gives the customers. And they don’t really know how to build it. And honestly, people at the executive level of a company, managers, they don’t care about testing. They may care about quality, but however you got there is not their concern. I think it’s really important for each software organization, and each team in that organization, to have that conversation about quality. What does quality mean to us? What level of quality do we want that we can actually commit to? Maybe not today, maybe it’s a goal that we want to achieve. And I’ve found that on teams where we didn’t have that conversation, we were still a pretty good team, but we never got to the unicorn magic, because we weren’t making that conscious commitment. And like I say, if the leadership in your company doesn’t share that commitment, it makes it really hard.
I mean, I’ve had some consulting client companies where, if you ask the CTO about quality: oh, well, just don’t release crap. [LAUGHING] Like, well, that’s not all that helpful. But this is where I think things like Google’s DORA key metrics come in. I think metrics need to be specific to your organization, but I also think there are some general things you can look at. How frequently are you able to deploy? And that can be very domain-specific too. There are some business domains, like aircraft software, where you probably don’t want to deploy every 10 minutes. But frequency of deploys, and then how often do you have a problem and have to roll a deploy back? How long does it take you when that happens? So there are some metrics that DORA has found, through their 12 years now of research, that apply across the board at many companies. It’s very easy to track those metrics, and people can relate to them. If most companies in our industry release daily, and we’re releasing quarterly, something’s wrong there. Obviously, metrics are dangerous, because people can game them.
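Two of the DORA-style measures Lisa mentions, deployment frequency and how often a deploy has to be rolled back (change failure rate), are simple to compute once deploys are logged. Here is a toy sketch in Python; the log format is invented for illustration, and a real team would pull this data from their CI/CD tooling.

```python
from datetime import date

# Hypothetical one-week deploy log; real data would come from a
# CI/CD system or deployment tracker.
deploys = [
    {"day": date(2024, 6, 3), "failed": False},
    {"day": date(2024, 6, 4), "failed": True},   # rolled back
    {"day": date(2024, 6, 4), "failed": False},
    {"day": date(2024, 6, 6), "failed": False},
]

# Deployment frequency: deploys per day over the observed week.
deploy_frequency = len(deploys) / 7

# Change failure rate: share of deploys that caused a problem.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(f"{deploy_frequency:.2f} deploys/day")
print(f"{change_failure_rate:.0%} change failure rate")
```

As Lisa notes, numbers like these only make sense against your own domain’s baseline, and they are worth pairing with a hypothesis about what you want to improve.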
But I do think it’s more important to create a hypothesis first. Here’s what we want to achieve, and what experiment can we do to take a baby step towards that goal? Make a hypothesis. Try something out. Use our retrospective to see how it worked, and go on from there.
CARTY: Yeah, I’m glad you clarified that within a particular industry, because I’m thinking, well, I don’t know if nuclear energy systems need a daily push necessarily. Maybe they do. I don’t know. I’m kind of speaking out of pocket here.
But to get back to what you mentioned before, whole team quality in the holistic testing model, I think people are naturally a little bit resistant to change. They might want to stay in their own lane, that kind of thing. So you might encounter some friction along the way. In your experience, what is the turning point that helps non-testers, whether we’re talking about developers, product owners, designers, whomever, truly take ownership of quality? What’s the difference-maker?
CRISPIN: Gosh, that’s a good question. I mean, I think it usually comes through personal relationships. Whenever I’ve started a new job, the first thing I do is, of course, get to know my own team. But let’s say there’s a platform engineering team. Well, that’s really important, and we’re trying for a good DevOps culture where everybody’s engaged in maintaining the infrastructure and running the product. So I’ll book a one-on-one with the manager of the platform engineering team, and other people on the platform engineering team, and maybe somebody in customer support, so I can understand what kind of customer problems there are. So it’s just building those relationships and asking people, hey, can you sit down for 20 minutes and look at this with me? I think I see something weird here. If people are really used to working solo, it’s really hard to break those barriers down. But if you ask somebody for help, they’re probably going to help you. One of the things that Janet and I both found in our early days, being the first tester on an Extreme Programming team, where they’re like, we’re all doing testing, what do we need with a tester? Which is the opposite problem, isn’t it? But it’s just to say, hey, before you commit the changes for that story you’ve been working on, can you just give me 10 minutes? Show me the code you’ve written, the tests you’ve written. Can we just play around with it? And yeah, 10 minutes. People are willing to give you 10 minutes. Well, nine times out of 10, you find some problem, some bug, or something they didn’t understand quite right. And they’re like, wow, that was great. I learned about that before I even checked it in and it failed in continuous integration. So they start seeing the value of that.
And the other thing I’ve found really helpful, if the developers don’t have a lot of testing skills, or they really think somebody else should be doing it, is to have ensemble test sessions, or you can rebrand them as test parties. Say, hey, you’ve got a new feature that you think is going to be ready to deploy soon. Let’s spend 30 minutes doing some testing on that. It might be exploratory testing where we have charters, or it might be ad hoc testing, but invite the developers. Hey, do you want to get together for 30 minutes? Let’s just try out this feature and see how confident we are about it. And in that session, of course, questions come up. Probably, most of the time, some issues are found. And again, it’s like, phew, I’m glad we found that. We were about to put that in beta. [LAUGHING] We’d better not do that. And so they see the value, and they start to understand what this weird testing thing is all about. And hmm, doing it earlier is better. And maybe we should have some of these conversations before we even write the code, so that we have a shared understanding of what to build. The most frustrating thing is to deliver a story to your product owner, because you think your team did a great job on it, and the product owner says, that’s not what I wanted. [LAUGHING] Somehow, you estimated that story and worked on it without actually understanding what it was, and now you’ve got to do it again. So nobody wants that.
And so anything you can do to save time by having– we don’t want to do big design up front– but having conversations and using structures that really help us, like identifying risks. Something like risk storming, which you can do in an hour: look at what our most important quality attributes are, what risks might affect those quality attributes in a bad way, and how we’re going to mitigate those risks. Like I say, it might be with testing. It might be with making sure that we log the right events and data so we can monitor it, have observability. So having those conversations early is a little investment of time. Or example mapping at the story level: for each story, what’s the goal of the story? What are the business rules? For each business rule, give us some concrete examples of how it should behave. The product owner or a stakeholder can give us those concrete examples. We can turn those examples right into executable tests, so the code is done once the tests pass. Oh, we’ve got a regression suite now. So it all hangs together. And again, I talk like it’s so easy. It takes a lot of time to take a team that doesn’t have those capabilities and start building them in. I just spoke with a team yesterday that I’m going to be doing a coaching session with at Agile Testing Days, and they’re in the insurance space. They’re servicing hundreds of insurance companies. They’ve still got COBOL applications. [LAUGHS] And they’re realistic. They would love to be Agile, and you’re not going to do a big bang Agile transformation in an organization with COBOL applications. So they’re very realistic: they have test automation, but they really can’t automate the legacy products, so they have ways to deal with them. But they’ve got this 10-year plan: step by step, they’re going to do this. In that context, it’s going to take that long. And they’re so smart to do it that way.
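The example-mapping flow Lisa outlines, story goal, business rules, and concrete examples turned into executable tests, might look like this in Python. The surcharge rule, numbers, and names are all hypothetical, invented to illustrate the technique, not taken from any real insurance system.

```python
# Hypothetical story: "Quote the premium for a driver."
# Business rule from the example-mapping session: drivers under 25
# pay a 15% surcharge; everyone else pays the base premium.

def quoted_premium(base: float, driver_age: int) -> float:
    """Apply the under-25 surcharge rule to a base premium."""
    if driver_age < 25:
        return round(base * 1.15, 2)
    return base

# The product owner's concrete examples, captured before any code
# was written, become the executable tests. Once they pass, they
# keep serving as a regression suite.
def test_under_25_pays_surcharge():
    assert quoted_premium(1000.00, 22) == 1150.00

def test_25_and_over_pays_base():
    assert quoted_premium(1000.00, 25) == 1000.00
```

The point is the workflow, not the code: each concrete example from the conversation maps one-to-one onto a test, so "done" has the same meaning for the product owner and the developers.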
I know other companies where they take the latest fad, whether it’s SAFe, or DAD, or FaST, or all these things that come out, which are maybe a valid thing in some context. They try it for a few months. They don’t even give it time. And then they say, well, it’s not changing anything. Well, that’s because your problem is something else. It’s not your development process. Maybe they actually have sales-driven development, where there are so many conflicting priorities coming down from the business leadership, and they’re task switching so often they can’t get anything done, or unrealistic deadlines because something was promised to a customer. But just don’t give them crap. [LAUGHING]
CARTY: Just a good life lesson there, I think, in general. It’s funny. I know it’s not the context that you referred to testing parties as, but I just picture the testing party having a hard time getting started, because they’re like, do we have the right number of balloons? Are the balloons the right color? Is the music the right vibe? Is it too bright in here? Should we have some more chips? Do we have different kinds of dip? You know, which are all valid concerns. Let me just be completely clear about that. So you’ve got your ear to the ground. You mentioned talking with that insurance company’s testing team. So to that end, how much of a strain are testing teams today facing to justify the time and resource investment needed to stand up the right testing strategy? And how has that changed over the last few years?
CRISPIN: I really see a positive trend. In our testing communities, we’ve got communities like Ministry of Testing, and others, some of which revolve around yearly conferences, but people get together in between, that are supporting each other and providing lots and lots of content: courses, talks, webinars, and workshops to spread these ideas around. And one of the positive trends is that even the language matters. We need to speak the business language. Like I say, the executives don’t care about testing, but they care about outcomes and risks. They do care about risks. So if you can speak it: if this could happen with this new capability that you want to release, and if it does, it’s going to have a very negative effect, we’re going to lose customers. So let’s take some time and think about what we want to do about that. So just changing the language, and we’ve got to be part of that.
We should be part of the company. We shouldn’t be this separate entity. And part of that is a terminology shift– I always call myself a tester– but there’s a popular term now that I think captures more of what I’ve always done. Actually, in more recent years, probably the last 12, I’ve called myself a test consultant, even when I’m working full-time, because I’m not doing all the hands-on testing; I’m helping the whole team get the testing skills they need to build quality in and do testing as they go. And people now use the term quality engineer for that. I was reticent about that, because I have a lot of Canadian friends, and they get really antsy when you use the term engineer, because it had been illegal in Canada to call yourself an engineer if you weren’t a professional engineer with a pinky ring. Software engineer was not a thing. They relaxed that in Canada, but I still feel twitchy about it. But when I see what quality engineering involves, you are helping at a bigger level.
You’re not just sitting in a corner executing testing. You are helping everyone in the team, everyone in the organization, think about quality. What are our biggest problems? What’s our biggest blocker for quality right now? How can we do some small experiments to make that problem smaller together? What skills do people need? Let’s make sure that you get the training you need, or the skills you need. Or hey, I can pair with you and give you those skills. There’s a whole variety of things we can do. And making our work visible. I learned early in my career, in my very first programming job, when my manager said, Lisa, part of leadership is making sure that the business people– in this case, the librarians, because I was working for the University of Texas library– know what you’re doing, and what it means for their business. That’s leadership. And I really took that to heart and made sure throughout my career that my manager knows what I’m doing, and the business people know. When I was a director of QA: here’s what our team is contributing. Here’s what we did for this latest release, and here’s what we found out. Here are the risks we found. Here’s how we mitigated those risks, that kind of information. Of course, you have to do it in a little executive summary for the upper managers. But making ourselves visible. Cassandra Leung just did a talk at the Emoticon conference last month about how testers can make their work visible, which was just released on the Ministry of Testing site. And she had some really great pointers for how to do that. There are lots of ways to do that, but I think that people in the testing profession are learning that they need to be a part of the whole team, that they need to get the whole team interested in seeing the value. We’ve got to communicate, we’ve got to speak their language and show them. People do understand, and it’s important that they understand what you’re talking about.
CARTY: Right. And rising tides lift all boats there. If you’re demonstrating how you provide your value, then your role is understood across the organization, and everybody can march forward toward the same outcome. And that’s especially important today, right, because in the world of quality today, fast releases, global users, complex systems, there’s a lot going on. So what unique, new obstacles are you seeing teams face? And how does that tie into the holistic testing model?
CRISPIN: Well, of course, the big thing everybody talks about now is AI, right? And the interesting thing to me about the latest DORA report on AI-assisted development is, their conclusion is that AI is an amplifier. It amplifies the good, and it amplifies the bad. So if you’re a team with a healthy culture, people are happy in their jobs, they feel free to learn and experiment when they need to, try new things. And also, if the company has said, hey, we’re going to use these Gen AI tools, here’s how we’re going to use them, here are the guidelines– don’t share any customer data in ChatGPT, just some general policies– and they’ve got that infrastructure, it does amplify the good. They are more productive, because now they can use GenAI for some of the grunt work, which frees them up for the thinking work– it’s just like test automation. And they do better. They perform better. The dysfunctional teams that have to work in big batches of changes, so they can only release every six months or a year– it’s really a struggle. They don’t have the right tools. They’re really driven to deadlines. They’re not given any kind of professional development time to learn new stuff. And then if they’re told to use AI, maybe they’re going to generate some code or some test cases, and they don’t have the skills to know that code wasn’t very good, or those test cases weren’t really enough coverage. They’re going to do worse, because they’re going to be depending on the tools, and the tools are not really delivering what they need. So I think that that’s a big challenge right now.
And fortunately, there are a lot of practitioners out there who are experimenting with the tools and sharing what they’re doing. And I’m seeing a whole lot of positive stuff about it. People, of course, are scared, because it can take their jobs. And it has. I mean, there are a lot of ill-informed company leaders that say, hey, we don’t need these stinking testers. We can use AI. And of course, there are a lot of tool vendors that want us to think that too. But it’s just like test automation.
We still need the people. We still need the brains, the eyes, the ears. And it’s going to make us better. It’s going to amplify our skills. It’s going to save us time so that we can do more of the hard problem-solving. But we have to go about it the right way. And I think there’s a lot of good guidance out there. Mark Winteringham has a book on using AI in testing. So that’s just one resource. There’s tons of stuff out there from people who’ve really investigated it and found good ways to use it.
CARTY: Yeah, test automation is a great comparison, because I remember having this discussion five, six, seven years ago about the fear around test automation replacing human jobs. And the organizations that had done that very quickly realized their mistake, and had to hire testers back in order to close that gap and get back to a certain threshold of quality. So don’t think about AI as a magic wand. But if you could wave a magic wand– separate question– and instantly fix one blind spot most organizations have about quality, what would it be and why?
CRISPIN: Well, I think just making sure that everyone in the company, including the leadership, understands what we mean by quality, why it’s valuable to the business, why investing in quality– which could be a big investment– is worth it. I worked with a startup 20 years ago that was failing because their whole business depended on automating the product. They didn’t have the software, and they were failing. They were not getting releases out the door. And they heard about this thing called Agile. So they go, who’s the best person around here who could help us with that? Well, they heard about Mike Cohn. I don’t know if you’re familiar with Mike Cohn, but he’s one of the leaders in Scrum, and he wrote many awesome books. And they actually got him to work for them. And he brought me on board. And they actually listened to him. He said, my team needs time to learn. And they’re like, fine, we’re going to fail anyway, so we’re going to follow your advice. You know this stuff, so we’ll do what you say. And so they didn’t expect us to deliver x amount of stories every two weeks. As a team, we decided we wanted to commit to the best quality we could. We wanted to write code that we would take home, and show our moms, and put on our refrigerators. And how do we do that? Well, these extreme programming practices, like test-driven development, refactoring, continuous integration– guess what, it takes time to learn those things. And while you’re learning them, you’re not going to deliver a lot of business value. So it took a couple of years before our team was really performing well.
We did deliver a little business value every couple of weeks, which is more than they had gotten before. [LAUGHING] And their attitude was, we hired these people to do software development, they know how to do it, so we’re going to leave them alone. And we’re going to make them an integral part of the company and consult with them on what’s our next priority, what should our business do next. Does this feature make sense to do? How much is it going to cost to develop this feature? Really involve us in those business decisions. And we achieved so many things and were such a high-performing team because of that, and because the co-founders of that company understood the investment. If I could wave a magic wand, I would make all company founders and leaders understand that about software development: you want to hire the people who know how to do it, and empower them, not get in their way. Let them do their best work, which involves time to learn. It’s an investment. It’s going to cost you money. Unfortunately, our US economy works against that, because it’s all about your quarterly results, and what was your quarterly profit. And that’s a problem I don’t know how to solve. [LAUGHING] You’ve got to take a long-term view.
CARTY: It might not even be a problem a magic wand can solve.
CRISPIN: Yeah.
CARTY: It’s a complicated one.
CRISPIN: Yeah.
CARTY: Lightning round for you, Lisa, before we let you go. First question, what is the most important characteristic of a high quality application?
CRISPIN: Gosh, it’s so domain specific, but I’d say in general, usability.
CARTY: What should software development organizations be doing more of?
CRISPIN: Nurturing a learning culture.
CARTY: And what should software development organizations be doing less of?
CRISPIN: Sales-driven development.
CARTY: OK. And finally, before we let you go, what is something that you are hopeful for?
CRISPIN: I’m really hopeful that more and more organizations are embracing this whole team approach of, we need a lot of different competencies and skills on our team, and every team member is equally valuable. Whether they’re a quality engineer, a software engineer, a designer, whatever they are, we all add value. We need a diverse team to succeed. And let’s have that psychological safety so we can succeed together.
CARTY: That’s a wonderful, hopeful note to end on. And thank you so much for joining us, Lisa.
CRISPIN: Oh, it’s my pleasure. I enjoy your podcast, and I’m really honored to be part of it.