Ready, Test, Go. // Episode 9

The Danger of Bias in Tech

 
Listen to this episode on: Apple Podcasts, Google Podcasts, Spotify, Castbox, Podcast Addict, Stitcher

About This Episode

Technology solves some of the problems in our society today, but it exacerbates others, such as discrimination based on a person’s race, gender, income level or disabilities. As companies continue to invest heavily in AI/ML technology, they must take caution to ensure new products don’t fall back on old, problematic patterns that cause harm to users or others.

Meredith Broussard, author of the book More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, joins the podcast to discuss the myriad ways these issues — not glitches — present themselves in modern technology and digital products, and what we can do about it.

Special Guest

Meredith Broussard

Meredith Broussard is an associate professor at New York University’s Arthur L. Carter Journalism Institute and research director at the NYU Alliance for Public Interest Technology. She is the author of Artificial Unintelligence: How Computers Misunderstand the World. Her work has been featured in the New Yorker, the New York Times, the Atlantic, the BBC, Wired and the Economist.

Transcript

DAVID CARTY: Our digital world is changing. As AI is becoming more intelligent, and the lines between information and disinformation become more blurry, we all need to be vigilant in not only what we consume but how we consume it. Meredith Broussard believes unbiased, meticulous journalism is one of the best ways to hold the powerful accountable.

MEREDITH BROUSSARD: I specialize in a kind of data journalism called algorithmic accountability reporting. And that comes from the traditional function of the press, which is to say accountability, holding power accountable. But in a world where algorithms are increasingly being used to make decisions on our behalf, that accountability function has to transfer onto algorithms and their makers. My journalistic outlook is that algorithms need to be held accountable just like power needs to be held accountable.

CARTY: The field of data journalism in particular could be especially important moving forward. Data journalists who can take their time and be methodical in their pursuit of the truth will help illuminate real issues in our society.

BROUSSARD: Data journalism, of course, is the practice of finding stories in numbers, using numbers to tell stories. Data journalism is a little more challenging because you're doing math and you're writing at the same time. I've actually never had a problem finding students. My classes fill up every semester because people are really interested in this subject. The students really understand that this is a crucial issue, that storytelling with data is something that happens not just inside journalism, but in every field nowadays. So one of the things I teach my students is we do kind of advanced spreadsheet stuff at the very beginning of class, which is actually the same level of analysis that they do in business school, right? So if you are going to be an investigative reporter who is looking at company financial records, you need to know how to read financial records. So data training is a really essential part of the journalism school curriculum nowadays.

CARTY: While Meredith sees new students interested in data journalism every year, it takes a skillful blend of data analysis and storytelling to really do the job well.

BROUSSARD: So algorithmic accountability journalists-- what we do is sometimes we investigate black boxes. Sometimes what we do as algorithmic accountability journalists is we write our own code in order to investigate social phenomena. I am really enthusiastic about the profession. And I think there are some really interesting stories coming out of the algorithmic accountability world. One of the things that I do in the book is I collect a lot of amazing journalism that's been done over the past couple of years, as well as some amazing scholarship, and put all of these stories together so that people can understand the weight of this problem, the intersection of technology, of race, of gender, of disability. And it's going to take some really hard cultural conversations to work our way through it.

CARTY: This is the Ready, Test, Go. podcast, brought to you by Applause. I'm David Carty. Today's guest is data journalism advocate and author Meredith Broussard. Meredith is an associate professor at New York University's Arthur L. Carter Journalism Institute and research director at the NYU Alliance for Public Interest Technology. She is also an author. Her latest book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, was published in March.

You know, digital quality means a lot of things. It means software testing, it means user experience, and so much more that we talk about here on the podcast. But, as Meredith writes about in her book, some of the issues that still plague us in society today can creep into product development. We're going to tackle some tough issues on today's podcast. So we simply ask that you come into this with an open mind and a desire to create software and digital products that work for all of your users, not just some of them. OK, let's talk with Meredith.

Meredith, congratulations on the book, and thank you for bringing these issues to light. I thought it would be great to get into an excerpt from the introduction of your book. And it kind of reads as a thesis to me for your book. So would you mind reading that excerpt for us?

BROUSSARD: Sure. "Digital technology is wonderful and world-changing. It is also racist, sexist, and ableist. For many years, we have focused on the positives about technology, pretending that the problems are only glitches. Calling something a glitch means it's a temporary blip, something unexpected but inconsequential. A glitch can be fixed. The biases embedded in technology are more than mere glitches. They're baked in from the beginning. They are structural biases, and they can't be addressed with a quick code update. It's time to address this issue head on, unflinchingly, taking advantage of everything we know about culture and how the biases of the real world take shape inside our computational systems. Only then can we begin the slow, painstaking process of accountability and remediation."

CARTY: Yeah, you know, I love that excerpt, Meredith, and I think it's a powerful call to action for tech leaders who might be listening to this podcast, right? So acknowledging that the answer is probably different depending on the business or the industry, what's a good first step to beginning to examine and, as you say, address these issues head on?

BROUSSARD: It's a complicated issue, right? And I wish there were one answer. I mean, I wish I could wave a magic wand and say, "This is how you fix everything." But it took us 30 years to get into our current situation. And so it's not going to be an easy fix.

So I think the first step is adding more nuance to the way that we talk about technology. In that excerpt, I wrote about how several things are true about technology at the same time, right? It is terrific, and it is also racist and sexist and ableist, right? Both of those things are true. Human brains can hold multiple truths at the same time. Computers can't, right? So that's really important to recognize.

We do also need to challenge an idea I call technochauvinism, the idea that technological solutions are superior, or the idea that computers can solve every problem, because there are certain social problems that we can't code our way out of, right?

So, all of that said, where do we start in terms of fixing our code? Well, one place to start is context. It doesn't make any sense to say, all right, I'm going to regulate all AI everywhere throughout time, because we don't really need to regulate all AI. We need to regulate some AI, and so context here is key. I really like the distinction that is made in the new EU AI regulation, the proposed AI regulation that is about to pass. And they divide AI into high-risk and low-risk uses based on context, right?

So if we take facial recognition, for example, I mentioned before that facial recognition is often used in policing, that it is biased against people with darker skin, it's better at recognizing men than recognizing women, generally does not take trans and non-binary folks into account at all. It's a very fragile technology. It doesn't work as well as people imagine. But it is used in policing. And so facial recognition used in policing on real-time video feeds is going to misidentify people, primarily people, say, with darker skin, primarily women, primarily trans and non-binary folks. That is going to happen, and so that might be a high-risk use of AI. And under this EU regulation, that would have to be registered and monitored, or perhaps it would be banned. But a low-risk use of facial recognition might be something like using facial recognition to unlock your phone. Now, mine doesn't work half the time anyway. There is a passcode that allows you to bypass the facial recognition. To the best of my knowledge, those biometrics are not going to a lot of harmful places. So that's probably a low-risk use of AI. So again, context is key. And we can start by attaching a use of AI to a particular context and then making a decision about how it gets used or what gets used in that context.

CARTY: Right, and I want to ask you more about facial recognition in just a little bit. But you mentioned the phrase "technochauvinism," which I believe you coined. Is that correct?

BROUSSARD: Mhm.

CARTY: Great. So you discussed that in your previous book, and you discuss it in this one as well. Can you explain what technochauvinism is and how it ultimately has an adverse effect, not only on digital products and services, but potentially on society as well?

BROUSSARD: Technochauvinism is a kind of bias. It's the sense that computers are superior. When we unpack that, we discover that what it's really saying is that the people who make computers are superior to others. Another subtext there is that math is a superior method of problem solving compared to other methods. And when we unpack that, we realize, oh, yeah, there are lots of ways of understanding the world. Math is one of them. It's really great, but it's not inherently better than any other, because what are computers when it comes right down to it? They're machines that do math. We tend to anthropomorphize them. We tend to get really attached to our computers and our computing devices because we spend so much time with them. We entrust a lot of the logistics of our lives to them. But, ultimately, it's just a machine that's doing math. It's a dumb brick. It's not your friend.

So we need to challenge technochauvinist ideas. And when you do start challenging them, it becomes a little bit easier to spot the problems inside automated systems. I really like a frame that is given to us by Ruha Benjamin in her book Race After Technology. And that's the idea that automated systems discriminate by default, right? And so a technochauvinist might say, oh, automated systems are going to be more neutral, are going to be more objective, more unbiased. They're going to be a better way of making decisions. But if we back off of that, if we say, all right, well, automated systems are going to discriminate by default, it becomes easier to see the problems.

Now, why do they discriminate by default? Well, it's because of the way that they're built. So the way we build AI systems or machine learning systems, which are the most common kinds of systems in use today, is we build them the same way every time. We take a whole bunch of data that is collected from the real world, and we feed the data to the computer. And we say, computer, make a model. The computer says, OK. It makes a model. That model shows the mathematical patterns in the data. And then it's very powerful. You can use this model to make new decisions, to make predictions, to generate new text or new images. These are very flexible, functional, powerful models. But with great power comes great responsibility. And we also have come to see that all of the problems of the real world, all of the historical problems, are embedded in the data that we use to train the machine learning systems, right? So some of the mathematical patterns are discriminatory because there's been discrimination in the world in the past. We all know history. It's not a surprise that there's bias, that there's discrimination in the past. And so we just need to not assume that the automated systems are somehow going to be better because they're mathematical systems. They are sociotechnical systems.
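
To make that training pattern concrete, here is a minimal, hypothetical sketch of what Meredith describes: fit a model to skewed historical data and it reproduces the skew. The lending scenario, feature names, numbers, and model choice are invented for illustration; they are not from the episode or the book.

```python
# A minimal sketch (illustrative only): a model trained on "historical" data
# reproduces the patterns in that data, including discriminatory ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical loan decisions: "group" stands in for a protected
# attribute, and past approvals were skewed against group 1.
group = rng.integers(0, 2, size=n)                       # 0 or 1
income = rng.normal(50, 15, size=n)                      # arbitrary scale
past_approval = (income + rng.normal(0, 5, size=n) - 10 * group) > 45

# Standard supervised learning: fit a model to the historical outcomes.
X = np.column_stack([group, income])
model = LogisticRegression(max_iter=1000).fit(X, past_approval)

# The model faithfully learns the historical skew: same income, different
# predicted approval probability depending only on group membership.
for g in (0, 1):
    prob = model.predict_proba([[g, 50.0]])[0, 1]
    print(f"group {g}, income 50: predicted approval probability {prob:.2f}")
```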

CARTY: Right, and to that point, you write quite a bit about AI and machine learning in the book and how ML models are typically and inaccurately described as a black box. The math behind those systems is complicated, but if you have access to all of the information, it's within our capability to understand why these models arrive at the conclusions that they do. So between this black-box concept and maybe the scapegoating of insufficient or problematic training data, is there willful ignorance happening on the part of some tech leaders? And how can we do a better job of maybe sourcing or validating better training data to feed these models?

BROUSSARD: Well, I prefer not to assume malevolence. I prefer to assume that developers are going about their day and trying to write good code and do their jobs honorably. I chalk a lot of these problems up to unconscious bias. We all have unconscious bias. We're all working on it. We're trying to become better people every day. But we can't see our unconscious biases, and it's an inescapable fact that people embed their own biases in the code that they create. So when you have code that's created by a small and homogeneous group of people, like we have in Silicon Valley, then the collective unconscious biases of those folks get embedded in the code, right? So there is definitely some willful ignorance happening, but there is also some unconscious bias.

And this is why we need more regulation. There was an editorial in The New York Times recently by Lina Khan, the chair of the Federal Trade Commission, where she wrote about how it's time to regulate AI. And she lays out a framework for regulating AI. It starts with making sure that AI systems obey the existing laws of the world, right? Existing laws and regulations. And I really like this because we've been arguing for 30 years about what kind of new regulation we need for technology. Nobody's gotten anywhere, really, so I think that a different approach might be more effective. And that approach is just enforcing existing laws inside our technical systems.

CARTY: Right, and to go back to the facial recognition topic, I think it's safe to say you're critical or maybe skeptical of those high-risk applications of facial recognition technology. You write in the book about research, and you mentioned it before, showing that facial recognition works more effectively in identifying light-skinned people and men as opposed to women, and that it commonly misgenders trans or nonbinary people. Can you tell us more about the way that this technology tends to fail and why it's particularly problematic in a law enforcement context?

BROUSSARD: This goes back to historical problems and the way that they get embedded in technological systems. So when we talk about today's technology, we tend to talk about it as if it sprang from the head of Zeus fully formed. It did not. It comes from many, many years of iterative development, and it builds on previous work. However, when you have a system like that, the sins of the past get embedded in it unless you proactively root them out.

So facial recognition depends on computer vision technology. Computer vision technology depends on earlier representational technology, like color film. And color film came after black-and-white film. So we can see the history of sensing technology as a continuum, right?

Well, with color film, there is a very long history of representational racism because Kodak, the company that pioneered color film, did not represent an entire range of skin tones when they started selling color film. The way it worked was that you would take your film into a local lab to be developed. And the local lab had equipment that was either licensed by or provided by Kodak. The equipment had to be tuned. And Kodak would send out these cards called Shirley cards in order for local labs to tune their equipment to get the colors exactly right because there's a range of possible colors. And they're called Shirley cards because the first model on the card was a woman named Shirley. And Shirley had very light skin. She was pictured with some other, I think, primary-colored pillows. And so thousands of these cards were printed. They were sent to labs all over the country or all over the world, and that was how the machines got tuned. Well, because the Shirley cards didn't have a range of browns, the film developing and color photo printing machines did not have good representations of brown colors. And so if your skin was darker than Shirley's skin, you looked really muddy in color photos.

Now, Kodak did start to provide a wider range of skin tones in their Shirley cards so that labs could update their technology in the 1970s. And it's great that they did that. But the reason they did that is a little problematic. They didn't do it because they realized, oh, yeah, we are not serving the majority of the world. They were doing this in response to furniture manufacturers. So they were trying to get furniture manufacturers to switch over from black-and-white printing to color printing. And the furniture manufacturers said, well, our mahogany and walnut furniture looks really muddy in color film, and so we are not going to switch over unless you make this technology better. So it was not about inclusivity. It was about capitalism.

Now, we have this problem in camera technology. Well, guess what. In computer vision technology, there are sensing issues in, for example, video game technology. Video game sensors did not pick up on people with darker skin, especially in lower light conditions. They were better at picking up on people with lighter skin, right? So then what comes after video game technology? Well, it's facial recognition technology. Look, facial recognition technology has trouble. It's mostly tuned on people with lighter skin, right? So this is a constant issue.

The kind of blockbuster moment for facial recognition technology came with the publication of a paper called "Gender Shades" by Joy Buolamwini, Timnit Gebru, Deb Raji, and others. And what "Gender Shades" revealed was bias in all of the major facial recognition technologies. People tend to imagine that tech is full of all of these startups and all of this wonderful diversity. But actually, there's enormous consolidation in the tech industry, and there are only a couple of firms that are making all of the big core technologies of our time.

CARTY: Right, and to get to another example of algorithmic bias-- your book is full of different examples, like the furniture example you just mentioned-- academia is another area where we see this happen, with new examples emerging during the remote learning era of the pandemic. Bias has existed in education long before algorithms came into play, and it's a microcosm of many of the issues we see in our society today, right? Amid the protests around algorithmic harms in academia, what is or should be done to improve these systems in the future?

BROUSSARD: See, now we're back to the magic wand. I really wish I had a magic wand for education. I would fix a lot of things.

One of the situations that I wrote about in the book is where algorithms were used to generate imaginary grades for real students. So I think a really good place to start is, don't do that. [LAUGHS] That was a particularly egregious example. That was something that happened during the pandemic, where the International Baccalaureate organization, which is an organization that awards a very prestigious secondary school diploma globally-- the IB decided that they were not going to be able to hold in-person exams for their seniors, which makes a lot of sense because the pandemic was raging, and it wasn't safe at that point to have a lot of people in a small room for a very long period of time. This was pre-vaccine. And so it was a good decision to cancel the in-person exams. But the IB decided that they were going to use an algorithm to predict the grades that the students would have gotten if they had taken the tests that didn't happen, which sounds so absurd in retrospect. But during the pandemic, we all made some strange decisions. You know, I made a decision to write a book, right? And so I write about Isabel Castañeda who, at the time, was a high school senior in Colorado who was caught up in this mess, because what the IB algorithm did is it predicted that kids from poorer schools would do badly on the tests, and it predicted that kids from wealthier schools would do well on the tests.

And how do things break down in education along racial lines in terms of rich and poor schools? Well, the richer schools tend to be the whiter schools. The poorer schools tend to be the schools with more black and brown students. And if you have studied education statistics at all-- because, again, when we're building this kind of algorithmic system, where we're doing machine learning and prediction, we're doing statistics, right? If you know anything about education statistics, you know that wealth is a predictor of educational success, right? Wealthier kids do better in school than poorer kids. So if we wanted to increase educational attainment, in the United States at least, we would eliminate poverty, right? And how do you eliminate poverty? You don't make an app. You give people more money. It's pretty straightforward, right? So Isabel is a heritage Spanish speaker, a straight-A student, multilingual, just top of her class, and this algorithm predicted that she would fail her Spanish exam, which is absurd, right?

So algorithms in education, or edtech in general-- it does not work as well as people imagine. There are enormous amounts of waste going on. We need to change our thinking around this. We need to do things like audit education algorithms. We need to change the purchasing methods at schools because schools are often locked into these vendor contracts, where they lean into using a particular technology, not because it works really well, but because they're stuck in a long-term software contract, which, of course, generates a lot of waste of public funds. We need change at every level, at the individual level, at the institutional level, and at the policy level.
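
To illustrate the failure mode in that story, here is a simplified, hypothetical sketch of how a grade prediction that leans on school-level history can override an individual student's own record. This is not the IB's actual algorithm; the blending rule, weights, and numbers are invented purely for illustration.

```python
# A hypothetical sketch: blending an individual estimate with a school-level
# prior lets the school's history dominate the prediction.

def predict_grade(teacher_estimate: float, school_historical_avg: float,
                  school_weight: float = 0.7) -> float:
    """Blend an individual estimate with the school's historical average."""
    return (1 - school_weight) * teacher_estimate + school_weight * school_historical_avg

# A straight-A student (teacher estimate 7/7) at a school whose past cohorts
# averaged 3/7 gets dragged down by the school-level prior.
print(predict_grade(teacher_estimate=7, school_historical_avg=3))  # 4.2

# The same student at a school with a high historical average is predicted
# to do well, even though her individual record is identical.
print(predict_grade(teacher_estimate=7, school_historical_avg=6))  # 6.3
```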

CARTY: Let's jump to accessibility and ableism. From your conversations with people with disabilities, you write that, "Today's tech is marvelously empowering until it isn't. Once you reach the outer limits of the tech's capacity, it becomes marginalizing." And it sounds like you're encouraged by some of the progress being made in designing accessible systems, but there's still plenty of work to be done, right?

You explained a situation from a few years ago where you had a blind student sign up for a data visualization class you were teaching. And even the consultants at your school's disability services center were stumped by how to provide that learning experience, right? So where do we still struggle to enable everyone to use a product? And how do some people with disabilities become marginalized in this regard?

BROUSSARD: I think you've characterized the argument that I make in that chapter well. Technology is terrific for increasing access and has been really good for increasing accessibility. And there is still more work to be done. So I am really grateful to the scholars and activists who shared their stories with me so that I could learn more about disability.

One of the really important things that I learned from my research is that there isn't a one-size-fits-all approach to disability. And so we need to listen to disabled people about what they need. And that's where we should start when we're designing. We should start with participatory design. So one of the things that I learned about was the concept of a disability dongle. And so this is something that a designer comes up with and thinks that it's going to be amazingly useful for people with disabilities, but it's really not, right? So a good example of this is a wheelchair that climbs stairs. There have been a lot of these invented over the years. If you google "stair-climbing wheelchair," you get dozens and dozens of different images. And to a designer, sometimes it sounds like a great idea. But then, often, when the designers go out and present this to somebody who uses a wheelchair, the wheelchair user will say, yeah, I don't want that. That's not really what I need. That looks kind of scary, and it might attack me. Who knows? But really, what I want is I want more ramps, more ramps and more elevators. And it's really clarifying, right? Like, we don't need to overengineer solutions. We need to make sure there are ramps and elevators.

CARTY: Now, for a lot of these problems we've discussed today, you've proposed a few ways to mitigate them. So I want to ask you about two of them. First, you write about the need for more public interest technology groups and initiatives. And you also are passionate about algorithmic auditing, which has yielded some positive progress in software development as well. Can you tell us about how these two areas can help combat some of these algorithmic harms that we see in our day-to-day lives?

BROUSSARD: Yeah, I'm glad you asked about that, because I do end the book on a note of optimism. The book is not entirely a bummer.

And so public interest technology and algorithmic auditing are the places that make me most hopeful right now. Public interest technology is exactly what it sounds like. It's about making technology in the public interest. So sometimes public interest technologists are algorithmic accountability reporters. They're the folks who are opening up black boxes and discovering problems. And other times, public interest technologists are working on government technology to make it better. So they're doing things like updating state employment websites so that when there is the next pandemic and millions of people are applying for unemployment benefits simultaneously, the site won't go down. So these are infrastructural improvements. The same way that we need to do infrastructure work on our roads and bridges and tunnels, we also need to update and maintain and continuously improve our digital systems, because digital systems are infrastructure.

And so algorithmic auditing can be considered a kind of public interest technology. You can do auditing from the inside or from the outside. You can do it internally or externally. So external audits-- the Lighthouse Reports investigation and the COMPAS investigation, which I've mentioned a couple of times already, are both examples of external audits, where folks went in. They did not have access to the inner workings of the system at the time of its creation, but they figured out later what was going on inside these systems. And an internal audit is something that you can do if you are inside a company, where you can evaluate your systems for bias. And then you can make any necessary changes. I mean, sometimes, you're going to have to throw the system out because it's impossible to update it. But if you discover that you are using an automated system that has bias in it-- which, PS, if you are using an automated system, it has bias in it-- when you discover this bias, there are some mathematical methods that you can use to address it and remediate it.

So it's a hard conversation, though. People are sometimes reluctant to admit that these systems that they've invested millions of dollars in are flawed, right? So we just have to accept that these are going to be difficult conversations. We have to go into it with humility and with the understanding that, yeah, we made a thing, and it does not work the way that we expected, and we're going to have to do some remediation.
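
For a sense of what one of those mathematical methods in an internal audit can look like, here is a minimal, hypothetical sketch that compares a system's positive-decision rates across groups. The data, the metric, and the "four-fifths" threshold are illustrative assumptions, not a description of any particular audit mentioned in the episode.

```python
# A minimal sketch of one step in an internal algorithmic audit: compare a
# system's positive-decision rates across groups and compute a simple
# disparity metric. Real audits use richer metrics and domain context.
import numpy as np

def selection_rates(decisions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-decision rate per group."""
    return {int(g): float(decisions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: binary decisions logged from some deployed system.
rng = np.random.default_rng(1)
groups = rng.integers(0, 2, size=10_000)
decisions = (rng.random(10_000) < np.where(groups == 0, 0.60, 0.45)).astype(int)

print(selection_rates(decisions, groups))
print(f"disparate impact ratio: {disparate_impact_ratio(decisions, groups):.2f}")
# A ratio well below ~0.8 (the common "four-fifths rule" heuristic) flags the
# system for remediation, or for retirement if it can't be fixed.
```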

CARTY: And you should want it to work for everybody, right? I mean, that only benefits the business, benefits society in the long term.

BROUSSARD: Yeah.

CARTY: OK, Meredith, in one sentence, what does digital quality mean to you?

BROUSSARD: Digital quality means creating systems that are inclusive, that are audited for bias, and are actually helping to make the world better.

CARTY: I like it. What will digital experiences look like five years from now?

BROUSSARD: Five years from now, I think digital experiences are going to look largely the same as they do now. We're going to have different-looking gadgets, but they're going to be basically the same.

CARTY: Meredith, what's your favorite app to use in your downtime?

BROUSSARD: That's a good question. I am pretty proud of the way that I have my calendar set up. I have my whole family on the same calendaring platform, and we have a family calendar, and then everybody has their individual calendars. And then I have a calendar tool that I use for making appointments with people. And it's not super sophisticated, but it does make my life easier. So I'm really delighted that that exists.

CARTY: My calendar game could use an upgrade. I might have to follow up and get some advice from you.

BROUSSARD: All right, we're going to talk about this later. [LAUGHTER]

CARTY: All right, I like it. And lastly, Meredith, what's something that you are hopeful for?

BROUSSARD: I am really hopeful for this administration's commitment to law and order in the digital realm. I'm really enthusiastic about the regulatory environment that says, let's enforce existing laws inside algorithmic systems. And that's going to mean things like people paying taxes and white-collar crime being prosecuted inside social media companies. And I think that's going to make things a lot better.

CARTY: Well, Meredith, this has been a really enlightening conversation. I just want to thank you not only for the work you're doing as an educator and as an author but for also taking the time to join us today. So thank you very much.

BROUSSARD: Thanks so much for having me. It's really been a pleasure.
