Human Rights in the Data Age
About This Episode
We are the subjects of data — but, too often, fail to understand or influence how it is used. Humans must make trade-offs to enjoy the luxury of digital products while also maintaining a sense of decency and privacy. So, what does that mean for the companies who collect this data and the AI that interprets it?
Wendy Wong, author of the book We, The Data, joins the podcast to discuss why consumers must take an active role in their data sharing choices, and how organizations should adapt to the needs of these users over the long term.
Wendy Wong is the Professor of Political Science and Principal's Research Chair at the University of British Columbia. She is the author of the book, We, The Data: Human Rights in the Digital Age.
(This transcript has been edited for brevity.)
DAVID CARTY: Shopping experiences are very different today than they were decades ago. The brick and mortar experience has largely given way to digitally optimized and enhanced shopping options. And one product sector in particular appeals to Wendy Wong. That is eyewear.
WENDY WONG: During the pandemic, with all the Zoom stuff, people were only seeing my face. And so I had been building up my collection. Those are now, outdated because I need reading glasses power, as well. So I'm building my new collection. But it's, also, funny because I spent most of my childhood actually avoiding glasses. I have-- I used to have really bad vision. I had my vision corrected. And now, I'm aging so I need glasses, again, unfortunately. So I thought, I'd have some fun.
CARTY: What was once a frustrating shopping experience, often in a stuffy optometrist's office, is now much less stressful. And with so many interesting and colorful eyewear options hitting the market all the time, Wendy figured, why stick with just one?
WONG: Right now, I'm just trying to build out different shapes and textures. So like the glasses I have on, they're white, but they've got this fun marbling, which I thought was really great. And I've got the other pair that matches these, which are brown and more traditional. But I like the shape of these. And then I have another pair. So this is my newest set, which I just love because they're fun, and they're big. And I think they work really well. One thing I've found when doing public talks is that most people can't see your face, but if you've got glasses on that are pretty distinctive, it's nice because then people have something to focus on when they're watching you talk, but they're not exactly right in front of you. So yeah. Anyway, it's an obsession and probably going to end up costing me a lot of money, but it's fun. It's fun to experiment.
CARTY: Eyewear is no longer an accessory that most people dread wearing. It can even help you accessorize or augment your personality, especially where your co-workers might only see you from the shoulders up.
WONG: Especially since the pandemic, a lot of us have really gone for some fun eyewear. My parents and I have this conversation about my eyewear all the time. They like my more subtle looks. They think that glasses should not be seen, but I think sort of the opposite. It's part of your outfit, right? Especially for people who have multiple options now, you just pick your shirt with the glasses that you think fit best. Or maybe it's just the day. Some days I don't feel like wearing these, and so I've got other options to go with. So I think it's become a really fun experience. When I was a kid, I remember I had these really loud, teal, marbled glasses, a different shape, but not unlike these. Sort of a fun pattern. And I remember most people didn't have that. They had just one color, or little wire glasses. And now, one of my kids actually had to start wearing glasses because he's very far-sighted, and he's starting to read and do math and stuff. His glasses are pink and purple. That would not have been a thing when I was growing up. I don't remember glasses being all different colors and being able to be crushed by a car and still work. That's just amazing.
CARTY: This is the Ready, Test, Go podcast brought to you by Applause. I'm David Carty. Today's guest is eyewear aficionado and Professor Wendy Wong. Wendy is the Professor of Political Science and Principal's Research Chair at the University of British Columbia, Okanagan. She has written three books, including We, The Data: Human Rights in the Digital Age, which was published earlier this month. Businesses are awash in data, more than they know what to do with. This data drives business decisions, and it can help improve a consumer's experience with your brand. But massive data collection and management can also have negative effects on the consumers from whom all of that data derives, especially when data collection is such a widespread undertaking with little legislation to protect consumers. So how significant a change is the datafication of our society? Wendy has argued that datafication is as influential a development in human history as the invention and proliferation of the printing press. Take that, Gutenberg. With those high stakes in mind, here's Wendy to chat about data rights, digital products, and aligning expectations between businesses and consumers.
Wendy, let's start out with a distinction. What's the difference between human rights and data rights? And what motivated you to write about these subjects?
WONG: So I started writing a book about data. And I think the first thing that people think about with regard to data and human rights would either be something about data rights specifically, or something, let's say, around privacy. And so one of the things I really wanted to do with this book was to open up that aperture a little bit to make sure that people think about human rights not just in terms of these very specific types of rights -- either one that we think is being threatened, like privacy, or one that people think should arise because of AI and datafication, this idea of data rights. And so one of the things I'm hoping we'll talk about a little bit more is why it's important to think about all the human rights that we have, and also the values that underlie the whole framework we have around human rights today. But I also think that when people think about data rights, they tend to think about it in terms of market relationships. A lot of times -- and there's a lot of writing about data rights -- people think about, quote, "my data," or some individual person's data, as belonging to them. But actually, as I explore in my book, it's really not accurate to think about data about you, or data that have come from you, as, quote, "yours," because data are co-created. It's not just a data source; a data collector has to actually do the data-making and storage and analysis. So I make that really big distinction there. And I think this is why I don't think data rights is the right framing: it makes it a market relationship, and it also creates property rights around data in a unique way that I don't think actually helps in thinking about what's problematic about datafication. It's not just that we can't claim monetary value from data.
I think that's only one part of what's changed about human experiences with data. We're also talking about changes in social and political relationships. We're talking about cultural shifts in terms of how we relate to one another as fellow human beings and across societies. And so when we talk about data rights, we're often talking about only economic relationships. And so in this book, I really wanted to back it up to think about human rights as part of thinking about the human experience, and how that has shifted with emerging tech.
CARTY: Now, it might be hard to paint with a broad brush here, but what's your assessment of where individuals stand today in terms of their data rights, data awareness, and the usage of all of this data in the present day?
WONG: Some people do survey work around how much knowledge people have and what they feel about either AI or data collected around their activities and their thoughts. But my sense of that work is that there are what we might call have-nots and a have group. And the have group is much smaller than the have-nots. So there are lots of really smart and well-trained people now who work in data science or computer science or computer engineering or other fields where they're used to working with massive quantities of data -- what we used to call big data. They are familiar with algorithms, and they're familiar with all the advanced techniques that are being developed. And those folks are really comfortable in knowing what the technology can do or not do, and I think some of them are concerned about the implications. I think most people, though, have probably now heard enough about AI and how it's changing our lives, but don't necessarily have the tools to then think, so, now what? How are data being made? What are some basic understandings we can have of AI that would make it more tractable for us to get our heads around what's going on? So I think that most people have been exposed to the understanding that the apps we use and many underlying social and economic functions are run by AI, but then they don't know what to do about it. And this is why part of my motivation for writing the book is to promote this idea of data literacy as a human right, so that we can close the gap between the haves and the have-nots in terms of knowledge and comfort.
CARTY: Right. And you just touched on this concept of data literacy. Why is it important to gain data literacy skills? And what resources might be available to help with that task?
WONG: Typically, when we think about literacy in our day and age, we're thinking about reading and writing, and sometimes people talk about arithmetic, right? The three R's, so to speak. And I think that's something that has come because of widespread basic education in industrialized countries, but increasingly around the world. More people are literate than not in these ways. And when we think about what literacy actually is, it's not just the skills of reading and writing, right? It's the ability to be competent in society. The ability to communicate with others and get things done that you need to do on a day-to-day basis with confidence, with a basic understanding of what's going to happen, and with the basic understanding that you are on similar footing with the person you're interacting with. That's literacy. So it became clear in the process of writing the book that if we don't think of data also as one of these pockets where we need to have competence, where we need to have literacy, we're going to be creating a gulf between people who know a lot about data and emerging tech and people who don't. And that's going to create increasingly disparate outcomes. We're already seeing that in terms of how people understand the systems they're using and the consequences of that. So data literacy is a way to think about the specific need to have data understanding and data skills. That doesn't mean everyone's going to be a data scientist. That would make you a data expert. We just need competence. We need literacy, which means understanding the basics of what data are -- thinking beyond digital data, actually -- understanding how data are made, what kinds of choices go into the creation of data, and how those choices really can affect the outcomes that you get.
And so it means understanding the relationship between the data that the algorithms are analyzing, and understanding that the data shape the output as much as the algorithmic assumptions that are built into those models.
CARTY: Right. And you mentioned how data collection and analysis can have disproportionate effects on different members of the population. That kind of leads into the next question a little bit. Can you explain what it means to have sticky data? And how might the stickiness of data sometimes disproportionately affect individuals based on characteristics like race, income level, or any number of other criteria?
WONG: Absolutely. One of the things that is challenging for human rights as a framework of legal and social and political norms is the stickiness of data. Everyone knows that digital data, in particular, are really easy to transfer and copy, right? That's why they've become so predominant in the way that we store information, and why they sparked the information revolution. But we don't tend to think about the stickiness of data, which means they're actually stuck on each and every one of us as individuals, kind of like gum gets stuck on our shoes. It's really easy to step on gum, you may not be aware of it, and it's actually pretty tough to get rid of. That's the analogy I wanted to make when we talk about data about people. Data are sticky for four reasons that I talk about in the book. The first is that a lot of the data being collected about people are very mundane. They may not even be conscious behaviors or things that we think of as a subject of data collection, right? A lot of data are about daily things we can't help but do. If you wear a smartwatch or you have a smartphone, things like waking up. How much you sleep. How many steps per day you're taking. Hard to avoid those things. Or think about how we text or how often we make typos -- these are all data being collected about us, and we can't really change a lot of these things. So they're mundane, they're everyday, and they're hard to avoid. That's the first reason that data are sticky. Another reason data are sticky is that data don't just stay nicely in a single data set, right? We know that once data are collected, they get bought and sold. They're used in ways that are perhaps very different from the original intent of the data collection. So data are linked together.
And that leads to, and is also part of, the next reason why data are sticky: they're effectively forever. When data are created about people, we don't know what happens to them. And because we don't know what happens to them, we should assume they're effectively immortal. Even if they get deleted, we don't know that. We can't verify deletion. And the final reason why data are sticky is what we talked about a little earlier: this idea of co-creation. It's not just me making decisions about whether data are created. I'm a data source, everyone's a data source, and there are only some who are data collectors. They're the ones making decisions about what behaviors, what activities, what thoughts to turn into digital data. Another reason why we can think of data as co-created, though, is that a lot of times, data are not just about us. In fact, we know that data are valuable not because they come from me as an individual, but because they can be pooled -- we can make different pools of data of people like us. So data are valuable in the aggregate. They're collective, in other words. So the other aspect of co-creation is to think about how data have collective implications and also multiple sources. Think about when people post pictures of you on social media. Have you always agreed to that? Not usually. But now there are data about you out there that you didn't actually directly make.
CARTY: Right. And there are positive and negative ways of looking at that, right? It might be seen as a positive that you can pitch a product to a certain type of person, but then there are also plenty of negative ramifications there, too. To your point about immortal data, you write in the book, quote, "The boundary between living and dying in a datafied world is increasingly fuzzy." And you spend a whole chapter in the book discussing these so-called digital remains, and how our data outlives us. So what protections should be in place for those who have died? And is anyone beating that drum for data rights protection for the deceased?
WONG: Yeah. So just back to your comment, the whole book is about trade-offs, and that's essentially what it is. It's to think about the trade-offs of datafication through a human rights lens, through the values of autonomy, dignity, equality, and community. So thinking about what happens to data when we die is really about what happens to data once they're created. They're out there. They're out of our individual control, because again, it's not our data, right? It's data about us. Data taken from us in our activity. So I think this chapter about data and what happens when you die really drives that home, because it shows that because of datafication, we have less agency over what people know about us. We have less autonomy. So in that sense, it really hits against that. And I think it also pushes against our questions of dignity. What does it mean to be a human being? What does it mean to be treated as someone with inherent worth? And what does it mean when people can take data from your activities when you're alive to generate a copy of you somehow -- a digital copy -- or to approximate you while you're still living and create a digital double? It really raises questions about whether we treat data as though they come from human beings, which means they have inherent worth, or as commodities to be traded on the market. So that's the stuff that I really touch on, in addition to equality concerns. We know that these sorts of bots and programs are not for everybody. They do take a certain amount of money. And so not everyone wants to, or even can, access these kinds of tools right now. And they also change the way we think about the communities we live in. If we're interacting now on a regular basis with a bot that approximates a person who's died, what does that mean about the line between living and dying?
It used to be that when you died, you stopped interacting with people in a very active way. And I think now that's changing. To get to your question about safeguards, I would say there's not a whole lot in place. Estate lawyers are now thinking about digital assets, for example, so that you can choose to bequeath your digital assets to whomever, as you would a car or a house or money. But the catch there is, having looked at these sorts of documents recently myself, they're very vague. Digital assets could mean your login to an account -- your Amazon account or your email account. It could also mean the data that are generated from your activities. But because we don't have claim or ownership over the mass of data that can be used to generate this digital double, that's actually a real issue, right? So how do we protect people from this? I think part of what we need to do is exactly to re-evaluate what this means for autonomy while we're alive, for dignity while we're alive. And how can we take people's wishes into account, given that we have so much data about everyone and given that this is a reality we have to deal with? Does this mean going forward, we don't collect certain types of data? We practice data minimization? Does this mean there are certain term limits for holding on to data or using data? I don't know. There are lots of different potential ways. But right now, there's not much one can do. I mean, even if you didn't sign over your digital assets to somebody, that's not going to stop companies from still holding the data they've already collected, right?
CARTY: Absolutely. Not a lot of protections out there for data subjects. And speaking to that, we've seen some legislation around data privacy rights, I mean, particularly coming from the EU, and that impacts businesses all over the world. What can we expect in the future for legislation? And how do you see that affecting how businesses design and develop digital products?
WONG: So you point out the EU has probably the most developed regime for thinking about not just AI, but also what to do with data about people. And Canada, where I live, is following in the EU's footsteps with upcoming legislation that is going through the parliamentary process. The way that people think about data has tended to be about individuals and individual choice and individual consent. So like, do you personally consent to the collection of these data? And if you do, then that's considered legitimate. But there's also this question of de-identified data, right? De-identified data are not covered by these data regulations. And I would say that some of the most important data can be de-identified, but that doesn't take away from the fact that they come from individuals. They come from human beings. So I think that's something we really need to think about. In terms of companies complying, we've seen it. I've experienced it as a consumer. I've experienced it in my research. Companies are responding, as much as they can, to the idea that they can be subject to auditing. That people have the right to port their data, and all these different things. And we can choose whether to have cookies record analytics on websites. But maybe companies can also be a bit more proactive about how they treat not just data from individuals, but the data that are pooled together, that are de-identified, that are pseudonymous. What data do companies actually need? I think that's really the key to recentering a human rights framework: as I said before, it's trade-offs. There are improvements that can be made from gathering data about individuals and about groups. And I don't think that companies or governments should cease data collection altogether.
I mean, we're at the point where having data about populations helps us improve people's lives, in some ways. What I think needs to happen, though, is that we need to be more reflective about the extent to which we are overcollecting data. A lot of times, the impetus, especially in more data-intensive fields, is just to take the data you can and think about it later. And so you end up with all these data that have never been touched, that have no stated purpose in advance, and yet they exist. So part of what I think is that we need to resist that urge and really be more reflexive, to think more about the idea of data minimization, and how human rights can give us the moral framework around which to build out a norm about data minimization, right? It's one thing to say we need to minimize what we collect. So on what basis do we do that? Thinking about dignity and autonomy and equality, and how data can degrade communities -- that's really where we should start thinking about data practices.
CARTY: Right. And from a business standpoint, being more efficient with your data is also going to reduce costs over time. It costs money to store all of this data. And it opens you up to more risk from intrusion and things like that, right? So thinking about a steadier cadence and the usefulness of the data you're collecting serves multiple purposes: humanitarian purposes and business purposes.
WONG: Yeah, that's a good point.
CARTY: And to think about this from another perspective, there's a user's perspective on all of this, too, right? If we're talking about being more mindful about human rights and digital rights, what should the user's expectations be for a digital product as it pertains to how it manages data? If we can rethink how we interact with digital products and create an expectation for data privacy and collection, is that going to be enough to move the needle for tech companies who stand to profit off of these data collection efforts?
WONG: I think one thing we need to do is stop thinking about it as privacy. It's not that privacy is not important -- I don't want to come off as saying that, because I don't believe it. Privacy is a fundamental human right. But the way we've thought about privacy in the North American context, in particular, is that we're keeping people out. We're somehow keeping a barrier, right? Privacy is about maintaining a barrier between me and everybody else, including prying eyes and physical intrusion. And given the nature of the technologies developed in the last 20 or 30 years, we have to rethink what that actually means. So when we talk about privacy of data, what is private? What is personal? What types of biometric data are actually, quote, "private"? And one of the things I've really thought about a lot is how we can use a technology like facial recognition to serve purposes that are beneficial for society without resulting in discrimination against certain groups, without resulting in the feeling that our privacy has been violated. And if we think about faces and facial recognition technology, to say that faces are private is actually kind of a funny distinction to make, because we use our faces in social interactions, right? They're very much our identity. Our identities are often very much rooted in our faces -- we identify with our faces. But thinking about it as privacy is really, I think, not the right way. I think facial recognition technologies violate our autonomy. I think they violate our conceptions of human dignity when we start using the technology to randomly or indiscriminately target certain populations, or to use it as evidence in a very wrong and unhelpful way.
So I think part of it is that companies need to be more open in talking about how these technologies might affect future choices, for example. That would be an idea around autonomy. The language of privacy is important, but it's also really important to say there are certain things that aren't necessarily privacy issues but are still problematic, that still hurt our autonomy and our dignity or our chances of being treated equally in society.
CARTY: And there's the need to think through the worst-case scenarios, too, of this data running amok. You could see the flip side of the coin, where somebody might argue, well, facial recognition in a law enforcement context is an ethical use. It is serving a good purpose. Well, there's a downside to that -- a clear downside -- including discrimination and other potential negative side effects, right? So it's kind of opening up that discussion and being open to the positives and negatives of the technology, right?
WONG: Yeah. And understanding that these are technologies that human beings ultimately use. There are problems with the technology that are inherent -- the inherent problems being that they are very data-driven, and as a result, data collectors feel that they need to collect a lot of data. We have to be mindful of that. But we also have to be mindful of the fact that, yes, there are certain ways these technologies can be used that can be helpful, but we need to balance that against the harm that could potentially arise. And not just think about it in terms of financial harm, or in terms of these really extreme cases where, for example, people get arrested based on faulty facial recognition technology -- and also just plain racist thinking by certain law enforcement officials. The fact is, we need more data literacy. We need time to catch up to the fact that these technologies have capabilities we may not actually be aware of, because many of us are only now being exposed to certain products that demonstrate the power of AI -- that show maybe a snippet of what some people might think AGI will look like, right? ChatGPT being the one where everyone all of a sudden was very much made aware that the computer can sound like a human being.
CARTY: OK, Wendy, Lightning Round questions for you. Let's start with our first one. How do you think AI will evolve in the next five years?
WONG: I think it depends what we do now and going forward. I think there are a lot of different trajectories that are possible.
CARTY: What frustrates you the most in a streaming media experience?
WONG: I actually really like my streaming experiences. If anything, sometimes the algorithm does not get what I like, I would say. It overemphasizes certain aspects of what I've watched before.
CARTY: What is your favorite app to use in your downtime?
WONG: I look at Zillow a lot.
CARTY: It's fun to just kind of pick out a different place in the world and see what's available, right? Is that what you do?
WONG: I like looking at people's decor choices and learning about the eras of how interior design has changed over time. I love Zillow.
CARTY: If you found my house, it would just be a mess. Kids' toys everywhere. So don't bother looking it up. I promise you're not missing anything there. And finally, Wendy, what is something that you are hopeful for?
WONG: I'm really hopeful for conversation and engagement around tech issues. I think it's really important for people to realize they're not just data subjects -- they're actually data stakeholders, because we co-create data. And therefore, we should have a louder voice than we currently do about how things are going.
CARTY: Well, Wendy, I appreciate you having this discussion with me. And thank you for joining us.
WONG: Thanks, David, this was great.
CARTY: That was Wendy Wong. You can find her book We, The Data: Human Rights in the Digital Age in our podcast notes. Thank you to our producers, Joe Stella and Samsu Sallah, and our creative team, including Megan Gawlik and Karley Searles. Subscribe, drop a comment, leave a review, or let us know what you think about the podcast by emailing us at firstname.lastname@example.org. We'll catch you next time.