Beyond Checkboxes: Rethinking Software Testing

About This Episode

Join Michael Bolton, software testing expert, consultant, and author, as he discusses the importance of comprehensive testing, open communication between teams, and the effect of AI on software development.
Transcript
(This transcript has been edited for brevity.)
DAVID CARTY: If you’ve got an Irish tune playing in your head this time of year, you’re not alone. We are recording this right near St Patrick’s Day, but today’s guest, Michael Bolton, carries that tune with him all over the world.
MICHAEL BOLTON: I started playing Irish traditional music in about the year 2022, February 2022. I’d been listening to it for quite a while. I had friends who were professional musicians, are professional musicians to this day.
And one of them took me to see a band called Danu, a bunch of young guys. Six of them, if I remember correctly. I said to one of them, “Where can I go to get more like this?” So I went to more and more pubs where they were playing this music. And as a consequence of it, I met my wife. And she got into it as well because she had been going to music lessons with her son. He didn’t stick with it, but she sure did. And so that’s our principal hobby. Whenever I’m traveling for work, I pack the mandolin in the suitcase. And she brings her fiddle along with us. And there’s an instant community of non-work-related stuff that’s kind of cool.
CARTY: Michael typically plays the banjo at home, but he goes more compact on the road, bringing his mandolin with him when he travels. And he doesn’t just play to the hotel room walls. Michael congregates with local musicians, guests, and sometimes an Irish stepdancer or two to play tunes, chat, and sure, drink some Guinness. In fact, he’s enjoyed playing in these sessions on five continents. Only two more to go.
BOLTON: My most exotic location, I guess, was Beijing. But we’re heading for New Zealand. There’ll be sessions in all three cities I’m visiting. Barcelona and, of course, all over the place in Ireland and in Britain, so it’s wonderful. Kenshin is from Japan. And there’s a huge Irish Trad music community in Japan composed of not Irish people, but Japanese people. There’s a really strong community in Gothenburg, in Sweden. Also in Copenhagen in Denmark.
Here’s the thing– wherever Irish people go, there’s Irish bars. And wherever there’s Irish bars and Irish people, there tends to be Irish music sessions. The nice thing about an Irish pub is that it’s cozy a lot of the time, and you can chat.
CARTY: What started as a curiosity blossomed into a fulfilling social, personal, and cultural endeavor for Michael. Not only does it allow him to meet new people and see new places, but there’s a mathematical orderliness to playing that he finds calming.
BOLTON: It makes sense right away. I played guitar before for a long time. But I never really picked up a mandolin. But mandolin is very logical. It’s tuned just like a fiddle, and so it’s really friendly to Irish tunes. I’ve met a lot of people in the software business who are fascinated with patterns of all kinds, in all sorts of different things.
There’s a certain element of this kind of music that’s very geeky. It’s very highly patterned. And playing it requires a measure of concentration and focus. For me, anyway, it just allows me to disappear for a while in these wonderful tunes, some of which date back to– I know at least one that dates back to 1790. Now, music doesn’t get played for 230 years unless it speaks to something fairly deep in the psyche.
CARTY: This is the Ready, Test, Go. podcast brought to you by Applause. I’m David. Today’s guest is reel roamer and software testing guru Michael Bolton.
With over 30 years of experience, he’s a recognized expert in software testing. Michael is the co-creator of Rapid Software Testing, a methodology focused on efficient testing under time pressure. He’s delivered training in over 35 countries and worked across a diverse range of industries. A prolific writer and speaker, he has contributed to major industry publications and keynoted at over 200 conferences all around the world. His career highlights include winning the Software Testing Luminary Award in 2016 and serving as program chair for several renowned testing organizations.
Michael is also co-authoring the upcoming book Taking Testing Seriously with James Bach, and that’s due later this year. Let’s get back to that efficient testing point. What does that look like in the present day? As organizations aim to release faster and faster, and with more pressure than ever to deliver a high-quality user experience that yields profit for the business, how must testers adapt to the daily pressures they face? To say nothing of AI’s growing influence on work throughout the SDLC. You’d be hard-pressed to find an expert with his finger on the pulse more than Michael Bolton. As long as his fingers aren’t on the mandolin strings, anyway. Let’s think about software testing, or perhaps rethink software testing, with one of the industry’s foremost thought leaders, Michael Bolton.
Michael, in your recent blog series, Four Frames of Testing, you mentioned that the word testing is overloaded and often leads to people talking past each other. That struggle between developers and testers is a dynamic I think our audience would be familiar with. And it’s not just developers and testers, but let’s simplify it to that. What would you recommend teams do to establish a shared understanding of testing to improve collaboration across those roles?
BOLTON: Well, talk about it. That, it seems to me, would be the first thing. Let’s get the conversation going about what testing really represents. Now, if you ask me, testing is evaluating a product by learning about it through experiencing and exploring and experimenting. And there’s lots of other stuff that goes into that– examining it, manipulating it, observing it, making conjectures about it, generating ideas about it, and refining those ideas and expanding them, overproducing them, abandoning the ones that we’ve overproduced and recovering the ideas that we’ve abandoned, navigating, map making, recording, reporting, thinking critically, analyzing risk, all those things.
To the programmer, it’s running the code against some other code to determine whether what I’ve just done is reasonably close to what I intended to do, which is cool. And great, and really important, because to not do that would be like not spell checking a document that you’re writing, a piece of text that you’re writing, or not rereading it for errors and grammatical problems and logical inconsistencies and stuff. The key difference, it seems to me, is that programmers are disincentivized in a lot of cases, not terribly well trained in a lot of cases, and annoyed and frustrated by having to do that in a lot of cases, as indeed most everybody is when they’re challenged to evaluate the quality of their own work, to look for problems in their own work. You’ve got to shift your mind from a creative “I’m going to get this done” mindset to an “oh no, I have to look at it and see if there are any problems associated with it” mindset. And that’s hard. And we should give credit to programmers who are diligent about it and who are trying to do it really well.
There is another catch too, and that is that many testers have the same point of view, due to poor testing culture and poor management culture, I would say: that testing is seen as a kind of routine, almost factory-oriented sort of activity where we are turning human testers into test case running machines. And investigation, discovery, experimentation, learning– that sort of stuff in a lot of places is given fairly short shrift. It’s not well honored. And a few of us, not very many of us, alas, but a few of us have been trying to help the testing community recognize the significance of doing deep, risk-focused, business-focused testing that is designed to find important problems that matter to people before it’s too late. And that’s not a matter of routine, it seems to me. That’s a matter of probing and investigating.
CARTY: Right. So that word in particular you just said, investigation– that keeps coming up in the blog series as one key characteristic for testing. What can organizations do to better enable testers to explore and investigate and sort of reverse this type of culture that you’re explaining where these testing activities fit into a requirements checkbox?
BOLTON: Well, honoring it is the first thing. And it’s a sort of sad thing to say, because testing has suffered from these kinds of misunderstandings and these oversimplifications for a long time. And it’s sort of pervasive in the software development culture. Testing is seen as a chore by lots of people, and a kind of box-checking and ultimately not very helpful activity. And the testing community itself has a fair deal to answer for in that realm.
I’d like to point out that finding problems in something, which is what really good testing helps to do, is the first step towards making things better. Not the only first step, but a really, really good first step, especially when we’ve got something that we believe is pretty good. Well, let’s challenge that belief and see if it’s really true. And if it’s not, we’ve got something to go on that can identify how to make the product better. So testing is seen sometimes as kind of negative by some people. And yet ultimately, its purpose, its role, is an extremely positive one. We’re helping to protect businesses and their clients from harm or loss or damage or bad feelings or diminished quality.
CARTY: Yeah. And in that blog series, you ultimately arrive at four key criteria in digital quality that reflect the primary business risks with a digital product. And those are intention, discipline, testability, and realization. Can you briefly tell us about each of these and how they tie back to those blind spots that manifest throughout the organization?
BOLTON: Well, I want to be careful about one thing. And this is going to sound weird and controversial. But as a tester, making quality, building quality is actually not my job. Now that sounds pretty strange at first. But what I want to try to do is I want to help the people who build quality in– that is the developers and the designers and the managers– I want to help them by identifying threats to quality.
So quality, by the way, being as our mentor, Jerry Weinberg, framed it for us: quality is value to some person or persons who matter. So I don’t make the quality, and I don’t build the quality in. But for those who do, getting to a high-quality product seems to me and to my colleague, James Bach, to involve three or four things: three from the perspective of the people making it, and then there’s one that we kind of sneak in there to help the other three things along.
The first is intention, as you mentioned. When we’re building a product, we want to get clear on what we want from it. We as a development team, I mean, not the testers per se. But we want to get clear on what we want from it and what our clients might want and our customers might want. And that’s a matter of, among other things, developing our ideas about it and refining those ideas and figuring out how we can frame the actual requirements for the product, which is capability for it to do things, doing so reliably, doing so in a way that is usable to the people who are going to be using it. There’s a bunch of other quality criteria too that are important to customers– scalability, performance, installability, compatibility, security, those sorts of things. Those are the requirements for the product. And as we’re building it, we’re trying to get straight on what those are.
As we, the development team, are building a product, we want to do so in a diligent and disciplined kind of way to prevent, or at least to reduce, the probability of easy-to-find and avoidable errors getting past us. So there are lots of things that developers do in that frame– review of code, coding carefully, coding thoughtfully, conferring with the clients and the managers and the designers and so on and so forth. Working with each other, collaborative review and pairing and so on. And, from a perspective of trying to be sure about that, frequently writing low-level unit checks and integration checks to, again, defend ourselves against the possibility that some easy bug might have got past us.
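(A minimal sketch of the kind of low-level unit check described here, in Python. The parse_price function and its expected values are hypothetical, invented purely for illustration; they are not from the episode.)

    # A low-level check: run the code against other code that compares its output
    # to a specified, anticipated result. Passing does not prove the absence of
    # problems; it only guards against easy, avoidable errors slipping past us.
    def parse_price(text: str) -> float:
        """Hypothetical function: convert a string like '$1,234.50' to a float."""
        return float(text.replace("$", "").replace(",", ""))

    def check_parse_price() -> None:
        assert parse_price("$19.99") == 19.99
        assert parse_price("$1,234.50") == 1234.50
        assert parse_price("0") == 0.0

    if __name__ == "__main__":
        check_parse_price()
        print("All checks passed (which is not the same as 'no problems exist').")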
Now, into all this, we also slip a notion called testability. Testability being as many different ways as we can think of for answering this question– in the unhappy event that there is a problem in the product, how would we know about it? Quickly, easily, efficiently, reliably, and so on. And so that involves things like building the product frequently, building the product in an easily reproducible way, and being able to rebuild the product quickly and reliably as well. So if a problem is found, we want a build process that makes addressing it easy, and we want to make it hard for a new problem to get back in. So the discipline frame and the testability frame, which is what we’re calling that one, are fairly closely tied together. But the testability frame also points back at the intention frame too. We’ve got things that we want to add to the design of the program such that if there were a problem in it, it would be easy to know about. So that would include things like building in lots of internal error checking, and making sure that the team and the project are set up to find problems quickly and easily and efficiently. Those are related to things that we would consider to be project-related testability, value-related testability, and maybe even, fancy word, epistemic testability, which is reducing the size of the gap between what we know and what we need to know in order to make an informed shipping decision possible.
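(One way to picture “building in lots of internal error checking” for testability, again as a hypothetical Python sketch rather than anything from the episode; the apply_discount function and its rules are invented for illustration.)

    import logging

    logging.basicConfig(level=logging.WARNING)
    log = logging.getLogger("orders")

    def apply_discount(total: float, percent: float) -> float:
        """Hypothetical function with internal error checking built in.

        If something is wrong, we want to know about it quickly and visibly,
        rather than letting a bad value flow silently downstream.
        """
        if not 0 <= percent <= 100:
            log.warning("Suspicious discount percent: %r", percent)
            raise ValueError(f"discount percent out of range: {percent}")
        if total < 0:
            log.warning("Negative order total: %r", total)
            raise ValueError(f"order total cannot be negative: {total}")
        return round(total * (1 - percent / 100), 2)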
Finally, the fourth frame is the realization frame. We have a real product and we have realized our goals, we’ve realized our accomplishments. But we also have an opportunity to realize that there’s a problem with the product. That problem may have slipped past us at some point. It may have escaped even a highly disciplined development process. So that’s the kind of place where we tend to do, or we want to do deep testing, interaction with the product, experimentation, using tools by all means, but also getting experiential testing going so that we can find out what the real product is like overall. Testing the realization frame is testing when we’ve got a real complete built product. And the reason that that’s important is that all the testing we’ve done so far up until we’ve got that built product in our hand, is based on theory. What we know about the product so far is we know possibly quite a bit about it in its bits and pieces. But the realization frame is where we have the whole product, the built product. And it seems to me that we want to spend some amount of time with that to help avoid the possibility that a problem may have got past us without us knowing about it at the level of the system, not just in the components and the units of the code, but in the whole system, in its interaction with something very much like the world that it’s going to end up in.
Now, of course, after the product has been shipped, we can do what we might call live site testing, or testing in production, to perform experiments that we hope won’t affect the production environment. And there are ways of doing that that reduce the danger and the potential for harm. One would sincerely hope that the product got a good looking-over between the time that we think we’re done and the time that we deploy it or inflict it on an unsuspecting public.
CARTY: Yeah, there’s a lot in that blog series, the Four Frames of Testing. We definitely recommend our audience check that out because there’s a lot more detail in there. But I did want to ask you about your upcoming book with James Bach called Taking Testing Seriously. And in there, you emphasize the importance of escaping the, quote, “Echo chamber of best practices.” So what are some examples of this? What are some examples of widely accepted testing practices that maybe aren’t always beneficial? And how can teams develop a more critical eye toward those practices and toward digital quality?
BOLTON: Well, my favorite bugbear on that is the focus on artifact-based testing, focused on the idea of the test case. No other investigative, cognitive, socially focused intellectual craft uses cases like the testing business does. I say this all the time– journalists don’t use journalism cases. Researchers don’t use research cases. Parents don’t use parenting cases. And for heaven’s sake, managers don’t use management cases, and developers don’t use development cases either.
But there’s a reason for people to think of testing in this way. And that is, it is very convenient and, in a way, very indisputable and very legible to use test cases: to have a nice, tidy procedure that says, do this, do this, do this, do this, and then observe this specified, presumably desirable result. If we don’t get the desirable result, and we’ve done the procedure just right, there’s likely to be a problem in the product. But there’s an asymmetry in there. And that is, just because we’ve done that, that doesn’t mean there’s no problem in the product. And what we worry about is overfocusing testers on the procedure and on that anticipated, as I say, specified, presumably desirable output. Nobody else works that way. Teachers don’t work that way. Even though teachers have multiple-choice tests, when we’re getting serious about teaching a kid something, we evaluate the kid’s learning not by giving them something routine, but by giving them a real challenge. When we are evaluating university students for master’s degrees and PhD theses and so on, we don’t give them a multiple-choice test. And testing with test cases often amounts to that.
Now, there’s some wiggle room there. There are lots and lots of notions of what a test case is. And sometimes, what people refer to as a test case has a kind of looser structure than that. We would call that a test idea, framed around a bunch of test conditions maybe, because we do want to be able to reproduce problems if we encounter them. We want to be able to analyze them. So relaxing the focus on test cases can make debugging more difficult sometimes, because there are factors that we don’t track, that we don’t necessarily understand, that we’re not necessarily aware of even. But that’s true for test cases too a lot of the time when we’re trying to figure out what’s going on.
So the big plus for test cases from a management perspective is that they make testing very legible. We would argue that they make testing almost over-legible. There’s a great book on the subject of legibility called Seeing Like a State by a fellow named James C. Scott. One of the examples he uses is of a nice, orderly, tidy approach to growing trees: monoculture. So rather than having a messy, complex forest and forest floor with lots of critters on it and lots of different plants and so on and so forth, plant your trees in nice, orderly rows so you can inventory them and cut them down and tax them, interestingly, in a bunch of very orderly sorts of ways. Which is a great-sounding idea at first, until you realize that monoculture doesn’t make for good tree growing, doesn’t make for good environments, and it’s not very sustainable in the long run. So that’s one thing. Another example that he gives is of cities that are rigorously planned on a grid system, and that are very tidy. The canonical example in his case was the city of Brasília, which is very, very orderly. To bureaucrats, it must have looked absolutely wonderful. Trouble is, it’s kind of unlivable. Things in the world that are too tidy end up not affording diversity and variation.
And that’s essential for testing, because software doesn’t fit into a very, very orderly, tidy world. It fits into a messy, complex human social world. It’s all based on assumptions and trade-offs that people have about the difference between the world as they envision it, nice and tidy and orderly and simple, versus the way people might actually want things. And the diversity of platforms, purposes, timings, all these different things enter into the world of how people are going to use software. And if you don’t introduce that kind of variation into your testing, your testing will suffer from a certain kind of malnutrition– vitamin deficiencies, or problems of not getting enough protein in the testing diet.
CARTY: Michael, your book also describes insights gained from in-depth interviews, real world examples, case studies, and expert advice. Testing, like any sort of discipline, changes over time. It’s not static, it’s dynamic. So I’m just curious if, in the process of researching and writing the book, there was a particular example that came to mind that you found eye opening and might give our listeners something to think about. Anything that comes to mind there?
BOLTON: Well, only all the time. One of the most significant ways that you can refine your own ideas about something is to try them, to try them on yourself, to try them in collaboration with other people and see how they sound. Another great way for you to learn about the difference between what you think you know and what you know is to hear from other people on this subject.
So there are a number of other people who have contributed to the book. I’ll name only a couple, and I’ll conveniently forget a few. One is a fellow named Keith Klein, a longtime friend of ours who has specialized in managing large testing projects and buying and selling testing services, mostly in the big banking sector.
Another is our friend James Christie who, in his work as an auditor, both internal and external, for IBM, was able to enlighten us on a whole bunch of things, especially about the Post Office scandal in the United Kingdom, which is one of the most grotesque failures of management and, we would believe, of testing in history. Hundreds of people were prosecuted by the Post Office, and at least one committed suicide due to that level of prosecution. People lost their livelihoods and their reputations, and all because of problems in software that were swept under the rug, ignored, mismanaged, and so on and so forth.
But a lot of our ideas were taken up by a woman in China named Tai Tai Xiaomei. It’s absolutely wonderful, because her perspectives are framed by her own experiences in her own culture and her own takes on things. So she has taken our stuff, melded it with hers, and created this thing that is just what we love to see from people, which is them taking our ideas and running with them, and then feeding them back to us. And it’s really interesting to see something translated from English into Chinese and then back into English again. And of course, since James’s Chinese is non-existent and mine will get me something to eat in a restaurant, but not necessarily what I asked for, she is very fluently bilingual.
And that breadth of awareness of how people can express things differently and see things differently has led to some marvelous insights from her. So that’s just three examples from the book. And there’s a bunch of other colleagues who have been working closely with us, and some not so closely with us. And we’re delighted to have them on board.
CARTY: To that point, Michael, you’re a man of the world. It seems like you are constantly traveling. I know you’re on the other side of the world as we record this podcast. In your travels, what have you learned about how the discipline of software testing is viewed all around the world? How does it differ from our Western perception?
BOLTON: My observation is that a lot of the aspects of development culture tend to eat national culture for breakfast. In other words, it’s quite remarkable how someone who works for, let’s just say, Microsoft in Beijing is a lot more Microsoft, from my perspective, while at work and while talking about testing than he or she might be Chinese.
Now, that’s obviously a grotesque oversimplification. But for example, the test case disease is a global pandemic. Avoidance of trouble, the reluctance to talk about trouble– that’s a fairly worldwide kind of thing. What’s saddening to me is that I get the sense that most worldwide cultures learned about testing through and from the West.
And often, I get excited by some testers in India, for example, who have taken aspects of Indian philosophy and applied them to testing. That’s pretty interesting. There’s a fellow named Lalit Bhamare. I always mispronounce his name. It’s so embarrassing. But Lalit has tried to do that for us, tried to introduce us to elements of Indian philosophy that are really fascinating and worth considering. And it’ll be interesting to see how much of that makes it into the book. Lalit’s working on a chapter with us too, on strategy.
But it’s pretty interesting to see that not that much of it has made its way into the global testing world. I’d love to see more of it. And I don’t want to make too big a thing about it. For instance, I’ve noticed that people in Israel, testers from there, are a lot more inclined to challenge and to argue sometimes. And that comes from a certain kind of cultural and philosophical tradition there, too. An interview series on modes of thought proposed that a lot of Western thinking came from a rabbinical tradition of argumentation, and that made its way into science in interesting ways over the years. So I do have a Canadian, North American bias in that respect, and it would be good to know a lot more about that.
CARTY: Always good to acknowledge your biases right off the bat.
BOLTON: Well, I didn’t. But I caught up eventually, yes.
CARTY: That’s right. That’s right. Finally, Michael, I know you have a lot of thoughts on AI and automation, particularly as they influence the work of software development and testing. This is sort of an existential type of question in terms of how that impacts the role of the tester over time, but it’s certainly relevant. So how do you see AI and automation changing that role over time? And how should testers embrace or change their thinking about it?
BOLTON: Well, testers should always be skeptical, highly skeptical, because the difference between the testing role and the development role is the difference between the maker mindset, which the developers have, and the critic mindset, which testers need in order to be effective.
Now, to be a critic doesn’t mean to reject everything. Far from it. Good film critics love film. Good literary critics love literature. And they develop comprehensive understandings of those fields to place the work in its context, find out who it’s going to appeal to and who it might not appeal to, find out how it may have some kind of effect on the reader, the viewer, or in the case of a music critic, the listener. Well, software testing needs to be that way too.
Part of my personal accomplishment, for want of a better word, is to make as clear as I can the distinction between checking the output from a program and evaluating the whole product to see whether it fulfills its requirements or whether it has problems or not. Tooling is really, really good for checking, and we try to take a healthy perspective on that. We like checks. We don’t mind checks. Checks are a good thing in exactly the way spelling checkers are a good thing. People who are writing journalism or novels or nonfiction books of various kinds wouldn’t deny the value of spell checking. But that’s not all there is to it. And that’s patently obvious in the field of writing.
And in the field of software development, it should be just as obvious, just as clear, that just because something is spelled correctly doesn’t mean that it’s going to be valuable or that it’s going to be trouble-free. Meanwhile, there’s often a surprising, startling amount of blindness to how we can use tools to help us keep track of what we do, to help us probe the internals of a product, to generate data in useful ways, to analyze data, to sift and filter and visualize and reframe it. All that sort of stuff is wonderful, yet we don’t hear very much about people using tools for that, to help with analysis. So we’d like to hear and see a lot more of that than we currently see.
When it comes to AI, let us remember this hugely important thing– that a pleasing demo is no evidence whatsoever for a reliable and accurate result. Moreover, there’s another dimension to this, which is worth mentioning. And that is that just as there are dozens and dozens of notions of what a test case might be, there are dozens and dozens of models and implementations and approaches to that which we call AI, to the degree that, really, AI is a marketing term these days. It’s being used to get people excited about the latest build of a product. And for years, we’ve been hearing about these software testing tools, for instance, that say, “Now with AI!” Well, if you look at what they’re actually doing, there’s no reason to believe that there are any machine learning algorithms involved in it. It’s just an appeal to some magic words. You can’t spell magic without AI now, can you?
CARTY: Sure.
BOLTON: So we got to be clear on what kind of AI we’re talking about and how it might help us. I think there’s a few things that are worth mentioning. Number one– when used for classification or prediction, it is very interesting and worthwhile to examine AI not in terms of how good it is, but as a lens on what we’re like, and to reflect on that to a significant degree. The kinds of problems that we see with AI are the kinds of problems that we see generally with all kinds of technology in that it reflects and reproduces and amplifies and extends what we are, what we’re like.
So the question we want to ask of software generally, and AI in particular, is: is this what we want? The reason we want to ask that question is because technology is, in a sense, always agnostic about our motives and our capabilities. Any piece of technology that we apply to some task will enable us to do good versions of that task and bad versions of that task, agnostically. That is, the technology doesn’t care whether we’re good at it or bad at it. It amplifies and extends and intensifies whatever we are.
So let’s look at AI critically, and let’s be careful about the nature of what the AI claim is. Let’s remember that GPTs and LLMs, and even classification algorithms that don’t involve GPT- and LLM-based technology, have no concept of truth or fairness; they don’t have ethics or values or anything like that. They are just code running on von Neumann machines. And like any other code, they behave in ways that are worth examining and aligning with whatever our intentions might be. In that sense, AI is just software. It’s just regular, good old-fashioned software like any other kind.
What it also is, though– and I’m referring here now to the LLMs and the GPTs– is fundamentally unreliable. That notion of no concept of truth is one thing when you’re running a classification algorithm. It’s an entirely different thing when you’re running something that is generating text, words, images, that are subject to human interpretation, that may drive human decisions. But we’ve got to remember that the machinery has no idea about what it’s saying. It just looks like it does. That’s key. That’s really important. Rodney Brooks’s famous analysis back in, what, May of 2023, or maybe even earlier, was that ChatGPT is not designed to produce an answer that is right. It’s designed to produce an answer that looks good. And it behooves testers, I think, at least as much as anyone, well, more than anybody else, because it’s our job to notice problems. It behooves us to notice the difference between something that looks good and something that is right.
CARTY: Lightning round, Michael. First and foremost, what is your definition of digital quality?
BOLTON: First of all, I don’t know what work digital does here. Quality to me is the same across the board. Quality is value to some person or persons. That’s it.
CARTY: I like the skepticism on the question itself. It’s a very tester sort of perspective to the question. I appreciate that.
BOLTON: That’s me.
CARTY: Next, what is one digital quality trend — or what is one quality trend — that you find promising?
BOLTON: I’m having a hard time with that altogether these days. I see a whole bunch of recklessness in the applications of technology, because I see the motivations and incentives for it being pointed towards making money and not so much towards helping people. If there’s anything that technology must do, it’s that it must serve people. We’ve got to remember that we’re here to serve people. And I’m worried that that’s an ember now rather than a roaring fire. To the degree that people are becoming aware of it, and to the degree that people are realizing that maybe there is more to life than making money and maybe more to life than computers altogether, that could be a good thing. Let’s put that stuff in perspective, put it that way.
CARTY: To that point, what’s your favorite app to use in your downtime? Is there something that you think hits those checkboxes for you?
BOLTON: Wow, that’s– somebody asked me that the other day. Is there a piece of software that you find trouble free? And of course, my wife and I were at a sound and light show tonight. And where does my eye go? It goes to the one light that’s the wrong color. It’s clearly some kind of wiring or programming error. And my eye goes there. So it’s a really difficult question for me.
There’s a parking app in the city of Toronto that I have found pretty good, except for one thing. And a friend of mine pointed it out to me the other day. It doesn’t automatically figure out where you are. There’s no capability of it doing that. Other than that, it works really super reliably, and it’s convenient, and it’s pretty well thought out. But then this fella comes along, somebody else with a testing mindset, to ruin your day on that. He pointed out that it should be able to figure out where I am, so that I just confirm it rather than having to look it up on that post over there. But boy, oh boy, it really is a challenge for me to point at a piece of software that doesn’t frustrate or annoy me these days. It’s my lot in life.
CARTY: I certainly understand. And finally, what is something that you are hopeful for?
BOLTON: I’m on a pendulum all the time between existential despair and long-term hope. And these are bad days. We’ve got to say that one of the things that technology has done, in very unhelpful ways, is to accelerate not just our social interactions, but some really nasty ones. And Silicon Valley especially has a lot to answer for in the ways in which we, or they at least, are apparently trying to erase truth and facts.
But what makes me hopeful is that there are people in the world who recognize that and who are sounding the alarm. In the field of AI, significantly, for many years, it’s been women, and women of color, who have been pointing out that, hey, this stuff has effects on people, and you can’t whitewash the damage away. I think it’s really important for us to honor the people who are the skeptics and the cautious and prudent ones, who are trying to bring down a bit of the recklessness that I think we’re seeing, because there’s so much money to be made at the moment, and we’ve got to be careful about that.
CARTY: And that’s a great note to end on. Michael, thank you so much for joining us. This has been great.
BOLTON: Thank you, David. Much appreciated. Pleasure to meet you. And hope to do so face to face in the future.
CARTY: That sounds great.