Ready, Test, Go. brought to you by Applause // Episode 27

The Ethics of Data-Driven Influence


About This Episode

Sandra Matz discusses how AI is becoming increasingly adept at influencing and even manipulating human thinking, and how brands must grapple with the ethics of modern data management.

Special Guest

Sandra Matz
Sandra Matz is an associate professor at Columbia Business School. Her research around computational methods and large datasets explores how psychological characteristics influence real-life outcomes.

Transcript

(This transcript has been edited for brevity.)

DAVID CARTY: The intersection of technology and culture is a fascinating discussion. One that plays out all over the world, as different communities grapple with new technologies in their own ways. What better way to grasp that evolution than by experiencing it firsthand? That’s one reason why Sandra Matz travels the world, and why she recently found herself in Bhutan.

SANDRA MATZ: Bhutan was really, I think, one of my favorite places in the world. So we went to Thailand and Bhutan. Bhutan was the destination that we really wanted to go to because there was a conference hosted by the royal family. It was super interesting: they were trying to see how you bridge technology and all of the progress that we’ve made in the West with the traditions, the culture, and this really rich system of values that they have in Bhutan. So they brought people from all around the world to think, how do you integrate those two? And how do you use technology in a way that maybe amplifies the culture that they have, rather than flattens it?

CARTY: Sandra grew up in a small town in Germany, feeling a little trapped, yearning to see different walks of life, to draw on new cultures and experiences. She regularly travels around the globe now, and those different perspectives have changed the way she sees the world.

MATZ: People are incredibly friendly. So, I think traveling has always been part of my life. I remember when I finished high school, the first thing I did was embark on an around-the-world trip because I wanted to leave this tiny village that I grew up in and see something more. And I think ever since, I’ve just been hooked. I think if I actually were to do it again, I might pick fewer places and get to know them a bit better.

But I went to Thailand. I spent quite a bit of time there. There’s a spot that I love still, I think, one of my favorite destinations in the South of Thailand. It’s called Tonsai, and it’s amazing for rock climbing. So if there are any rock climbing people out there, listeners, it’s one of the most beautiful spots in the world because the rocks are right on the waterfront. So you get these multi-pitch climbs where you’re really high up, and you have a view over the ocean. So that’s where I started and where I actually spent more time than I had initially intended.

And I went to Australia, and New Zealand, to Fiji, the US, Guatemala, yeah, like really many, many places. But on that trip, I think Thailand was the place that felt magical. Spirituality, what comes to mind for me is how you connect with other people. And traveling, for me, first of all, kind of pushed me outside of my comfort zone because back in the day, I didn’t speak really great English. So every time I approached someone, it was almost like a challenge for myself. So I think in that sense, it felt like I was growing as a person. But it also just meant that I saw the world from all of these different perspectives. I felt like I got to appreciate what I had a lot more, right?

So when you grow up in a tiny place for 18 years, it’s like, oh my God, I’m so sick of this place, because what am I going to do here? It’s totally boring. And then you travel the world, and you see how much you actually have at home. And I think I truly got to appreciate just the way that I grew up. So it’s a tiny village of 500 people, very, very small. And it was great as a kid, horrendously boring as a teenager and young adult. And now it’s great to come back to. Now that I live in New York, it’s so nice to come back, see the stars again, walk in the forest and just have peace and quiet.

CARTY: Traveling with a one-year-old is no picnic, but Sandra felt that early exposure to different pockets of the globe would be worth the occasional fussiness. She does recommend bringing some help along the way, though.

MATZ: Yeah, it’s really fun. So I went on parental leave for six months recently. So we took the baby, we scooped him up, and we took him to Europe, to Berlin, Paris, Barcelona. And then we decided to embark on this journey around the world. I mean, I think my number one recommendation, if you do that: take the parents. Take the grandparents. So we traveled a lot with him when he was still younger and immobile. And I think that was one of the reasons why we did it. At some point, he’s going to start crawling. At some point, he’s going to start walking, and then it’s going to be really hard to have him on a plane for 13 hours. But when he was still six months old, you could just pop him into this little crib on the plane. And he’d sometimes wake up and scream, but most of the time it was actually pretty easy.

And we also traveled with my parents. I took my parents because I was, like, if I have the opportunity to go to Bhutan, I would love to take my parents, and we could just do it together. And it was funny because when we decided to go on this trip, I was a little bit torn, because it felt like maybe we were just torturing the baby. It’s not that he’s going to enjoy the Thai food, or Bondi Beach in Australia. But then I also felt, well, first of all, he gets exposed to so many different people. And I think you see this right now. We just put him in daycare, and he’s so open. He’s not clingy at all. He loves other people. The moment that he feels like someone is friendly and open to him, I think he just feels safe. I attribute it, in part, to the fact that we just took him around and had him interact with so many different people. And that’s really nice, plus the memories, obviously, that you make, right? So even if he doesn’t remember himself, we’ll have the photo albums to look back on and these shared memories.

CARTY: This is the Ready, Test, Go. podcast brought to you by Applause. I’m David Carty. Today’s guest is globetrotter and computational social scientist Sandra Matz. Sandra is an associate professor at Columbia Business School. Her research around computational methods and large datasets explores how psychological characteristics influence real-life outcomes. She shares her expertise as a consultant, speaker, and guest on numerous publications, talks and podcasts. Her latest book, Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior, came out in January.

We all have that stubborn family member. You know, the one who would insist the sky was green, just because you told them it was blue? Well, what if I told you the algorithms that power so many of our digital experiences today might be able to persuade them to a different line of thinking better than you ever could? Sandra argues that algorithms are becoming increasingly adept at understanding, influencing, and yes, even manipulating human thinking. Now, you might think that’s creepy and a threat to humanity itself. And you might be right. But there’s also potential to help us live better, healthier lives. It’s a nuanced topic, and one over which we can stand to wrest more control as consumers. So if you are that cantankerous person, keep an open mind as we chat with Sandra.

First, congratulations on the book. Perhaps the central theme of the book is the notion that our decision making process as humans is quite malleable or suggestible, and that algorithms are getting better at directly influencing those decisions. So, maybe, let’s start there. What is the state of today’s AI landscape with regard to how persuasive these algorithms can be?

MATZ: Yeah, it’s a great question. And it’s funny, because when you think about it, the fact that our decisions and choices are malleable was always true, right? We were always influenced by the people around us and the mood that we were in. If you just think about how you do your grocery shopping when you’re hungry, you know that it’s influenced by the context and situation. And I think we’ve moved into this world where it’s not just other people and situational cues influencing us, but also these algorithms. And that’s obviously influence that is oftentimes a lot more intentional. And what it’s driven by is really all of the data that we generate as we interact with technology. That could be anything from what you post on social media, to your credit card spending, to the smartphone sensors that capture where you go from your GPS, to who you talk to, your network, from the messages that you send. And all of these traces actually create this pretty rich picture of who you are, right? Because I can get a sense of: what are your daily routines? What are your preferences? What are your habits?

And what we’ve learned over the last, well, now 15 years almost, is that we can take these traces, which seem kind of not even that intimate individually, but we can use AI and machine learning, and translate them into something that makes sense on a psychological level. So I can predict your personality, for example, your emotional state, your values. And you can imagine that once I know this stuff about you, once I know your motivations, your dreams, fears, hopes, aspirations, that gives me quite a lot of power over influencing your behavior. So tapping into your psychology to, for example, get you to buy a certain product. But then on the flip side, also to maybe help you save more. So I think this is what the book is talking about when it comes to how our choices are just, at some point, no longer just our own.
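(Editor’s note: here is a minimal sketch of the prediction step Sandra describes, in Python. The footprint features, data, and target trait are entirely synthetic and hypothetical; real studies use traces such as social media likes and far richer models, but the shape of the pipeline is the same.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical footprint features per user: posts per day, share of
# night-time activity, distinct places visited per week, mean message length.
X = rng.normal(size=(500, 4))

# Toy ground truth: pretend extraversion tracks posting volume and mobility.
y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```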

CARTY: Yeah, that’s a great life pro tip. Never go grocery shopping when you’re hungry. I’ve made that mistake way too many times. So you’re getting into the idea of digital footprints, which you talk about quite a bit in the book. You’re, kind of, talking now about how companies can use these digital footprints and data points to influence our decisions. But what’s interesting is most of us willingly contribute our life’s data points to the cloud, which eventually informs these algorithms in one form or another. So we kind of just talked about the state of AI and how they can help manipulate or suggest certain things. Maybe now we can talk about the state of how we are so willingly giving up a lot of this data for this purpose, right? So where are we at as a society and as a community in that regard?

MATZ: I mean, I think it’s just an unfair battle, right? So it’s not so surprising that we’re constantly signing away data that we probably shouldn’t. Because first of all, most of us don’t have the time to constantly catch up with the latest technology, right? You would have to understand: what can your data be used for? In which cases can it be abused? In which cases is it actually helpful? And then oftentimes, we don’t even have a choice. Oftentimes, it’s either you use the product and give us all of your data, or you don’t use the product at all. And most of us just want to use these products and services that are out there. So if it’s an either/or choice, that’s a choice that the brain is not made for. The brain can get immediate reward in the here and now, versus, oh, maybe if you’re not using it, you’re protecting your data and your privacy sometime in the future, so that there’s no risk of data breaches and so on. Our brain is never going to choose that very opaque potential future. It’s going to go with: yeah, it’s nice to use the social media platform in the here and now and be able to talk to my friends and share some experiences. And so I think the notion that we’re signing away potentially more data than we should is just a fact of the system. It’s an outcome of the fact that nobody has time 24/7 to look after their data, read all of the terms and conditions, let alone fully understand them. And if we want to change that, I think what we need is much more systemic change, rather than saying, hey, you should do a better job managing your data.

CARTY: And the fact is, by the time you have reached the terms and conditions, you’ve made your decision. You’re going to accept the terms and conditions regardless, right? Because at that point you’re in that turbocharged onboarding phase, and everybody wants to make onboarding easier anyway. But obviously there is a dark side, some unethical manipulation that can occur with these data points, whether we’re talking about psychographic profiling, micro-targeting, or other forms of manipulation, which you mention in the book. And maybe our minds go there first, but there is the potential for good as well, right? Such as improving your health or your financial decisions. As you write, AI can predict our mood, income, and mental state, perhaps even better than our own spouse can. So tell us how AI and psychological targeting might be used for good as well.

MATZ: Yeah, it’s actually one of the topics that I care a lot about, and you also see this come out in the book, because I think what we’re exposed to typically is the dark side, right? If you look to the media, it’s usually, well, here’s how we’re being exploited, here’s how data is being abused. Not surprising because scandals like Cambridge Analytica, they’re just very top of mind, right? And they should be. So it’s very hard to think about, oh, how could we use this technology for good when the house is on fire? Democracy is at stake. And we’re just feeling like we’re losing grip, and we’re losing control over some of the data that we generate and the decisions that we make.

However, what I’ve tried to do over the last 10 years is really think about the flip side. So for every use case that is potentially nefarious or abusive, what if we could use it in the opposite way? And you already mentioned a few examples. The one that I started with was actually just trying to find a counter to the idea that if I know who you are, I can just sell you more stuff, right? So I can just get you to reach a little bit deeper into your pocket and maybe spend your money on things that, at the end of the day, you don’t even need. Could we also use the same technology to help people save? Because that’s something that a lot of people struggle with, right? Especially in the US, the picture is extremely grim. I think 50% of people live paycheck to paycheck, and 10% of people couldn’t even go a week without being paid. And that’s a really dire situation to be in, because all it takes is for something to happen, like your car breaking down. You can’t take it to the shop, then you can’t drive to work, and you lose the job. So those people really need some support in saving.

Again, it’s one of these things that is hard for the brain, because you need to put something to the side right now, you have to make a sacrifice, for maybe a benefit in the future. So you can make sure that you get to work when the car breaks down. And so what we did is we teamed up with this fintech company called SaverLife, which I really like. They are trying to help low-income families and low-income individuals save more. And the way that they do it is they have all these different challenges. One challenge is encouraging people to save $100 over the course of four weeks. And that might not sound like a lot of money to some of the listeners, but they’re essentially working with people who have less than $100 in savings. So for those people, that means doubling their savings over the course of four weeks, right? It’s a huge lift for those people. And what we tried to do is, with the consent of all of the users, we measured their personality. We got a sense of, again, what’s motivating them, what’s driving them. And then we crafted messages speaking to the personality traits. So, let’s say you’re someone who is very agreeable. Those are the people who care about other people. They’re very trusting, caring and empathetic. It doesn’t really help them if I tell them, you should just put some money to the side so you have more money in the bank, because they don’t care that much about money. What they care about is other people, right? So what we can tell them is, well, if you put some money to the side right now, you can make sure that your loved ones are protected in the here and now. Or you can make sure that you can do something for your loved ones in the future. So it’s just bringing in, again, what is motivating them? It’s their positive relationships with other people. And then using these insights to help them drive behavior. In this case, we actually saw a 60% uptick in the number of people who managed to save the $100 over the course of four weeks. Which, again, is still nowhere close to perfect. There are still a lot of people who don’t, but it’s a pretty significant improvement, just from trying to tap into people’s psychology.
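(Editor’s note: once a dominant trait has been predicted, the targeting step can be surprisingly simple: the same savings goal, reframed per trait. A hedged sketch follows; the trait labels follow the Big Five, and the copy is illustrative, not SaverLife’s actual messaging.)

```python
# Illustrative trait-to-framing map; not SaverLife's actual copy.
SAVINGS_MESSAGES = {
    "agreeableness": "Put a little aside now so you can be there for the people you love.",
    "conscientiousness": "Stick to the plan: $25 a week gets you to $100 in four weeks.",
    "extraversion": "Join thousands of savers taking on the $100 challenge this month.",
}

DEFAULT_MESSAGE = "Save $100 over the next four weeks."

def pick_message(dominant_trait: str) -> str:
    """Frame one savings goal around the user's strongest predicted trait."""
    return SAVINGS_MESSAGES.get(dominant_trait, DEFAULT_MESSAGE)

print(pick_message("agreeableness"))
```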

CARTY: Yeah, it’s all about progress. And hopefully we see AI sort of fuel more of these altruistic or philanthropic missions in the future.

To bring it back to the example that you just mentioned, say you’re a consumer brand or a technology company. You’d obviously like to lead people to a conclusion, which is that your product or your company is the best thing since sliced bread. Now, if we’re trying to map out an ethical gray area, how would we sort out what’s a competitive, personalized product versus one that maybe violates consumers’ sensibilities? Where do you think the ethical line is there?

MATZ: It’s a great question, and it’s really tricky, even if you ask different people. So I teach a class on the ethics of data, and I have maybe 50 people in the class. And when I ask them, where is the boundary? Where is the threshold? What’s the line that you don’t want to cross? People have vastly different opinions. So for me, there’s essentially a very simple guiding principle, but I like it. I always ask myself, how would I feel if what I’m doing to my customers or to my users was being done to someone I really love? So let’s say I use my sister. Now, maybe I’m going to use the baby at some point. But if she was using that product, and her data was being used or abused in a certain way, do I think this is actually ethical?

That’s one of the advantages that we actually have in the context of data. If you’re managing data and you’re creating products, it’s very obvious what the competitive advantage is, right? The better you understand people, the easier it is for you to persuade them and convince them to do something that you want them to do. But we’re also all on the other side, right? We’re all consumers of technology. And not just ourselves, but also people that we care about. And I feel like this test of how would I feel if someone that I love was being impacted– or Warren Buffett always had this test of, how would you feel if it was on the front page of a newspaper tomorrow? Would you feel embarrassed? And I like the newspaper test, but I think I would probably feel more embarrassed if it came out in front of people that I care about deeply. So I’m just doing my family test: how would my family feel if I was doing that? It’s not restrictive in the sense that it gives you a very clear here’s-what-to-do, here’s-what-not-to-do. But it gets you to think about, well, is what I’m doing still in the interest of the user? Or is it actually going against their best interests? And then maybe there’s a better way of doing this.

CARTY: Right. It can be hard to navigate this, because it’s a very subjective sort of question in a lot of ways. Two different people will have two different answers. So with that in mind, who should be responsible for the challenge of navigating between leveraging data-driven insights and respecting users’ autonomy? Because those two ideas are not diametrically opposed, but they can certainly lead to problematic outcomes, right? So should it be left to corporate technology leaders, or is there a system of checks and balances that can be put in place, whether it’s internal teams, or panels, or oversight boards? Something like that.

MATZ: Yeah. I think it’s a great question. And, ideally, they go hand in hand. So if you make it a two-way street, and you make it part of the value proposition and say, hey, we try to understand you as well as we can so that we can offer you this better service, but you don’t have to take it. This is just something that, with an opt-in, we offer you to make your experience better. That suddenly increases autonomy, because it’s your decision to do something, and it also increases personalization. But it’s a great question: who is driving that change? And leaving it up to corporates alone is difficult, just because the incentive structure is so tricky, right?

Most of the companies, or at least the ones that we have in mind when it comes to social media and so on, are driven by ads, and it’s just maximizing attention. And if that’s the only thing that you’re maximizing for, it’s really hard to take a step back and say, well, is something in the best interest of the consumer? Because it might not be in the best interest of profits. And if growth is the only thing that counts in a marketplace, it’s really hard to leave it up to them. Which I think is actually interesting, because I do have a lot of friends in these companies. I do have friends at Meta, Google, Apple and so on, and a lot of them are hoping for stricter regulation. Because what they say is, look, we don’t think all of the stuff that we do is ethical, but it’s really hard to not do it if all of the competitors are doing it. So it’s like this race to the bottom. And it’s hard to stop without an external force saying, hey, this is maybe something that we don’t want as a society. So it’s interesting that even from within those companies, I think they want a little bit more external control. And that external control can come in different ways.

Some of it could be regulation. And I always think about privacy by design, which is really making it easy for people to do the right thing. Right now, if you don’t want a company to track your data 24/7, you have to opt out, if that’s even an option. It isn’t in all states, or in all parts of the world. But even in the places where it is an option, you have to opt out, which means you have to actively go in and invest the time to read the terms and conditions and manage your permissions. Nobody’s going to do it, because we have better stuff to do, right? I’d much rather spend an hour with my kid than trying to fully understand the permissions of a service that I’m using. So privacy by design essentially changes the default: instead of making you opt out, nothing is tracked unless you say so. And that also changes the incentives for companies, because now you really have to convince your consumer that you’re actually generating value by using data. It’s not just this lip service where we say, well, we need your data to create a better service, but you’ve never seen the counterfactual. You’ve never seen what it looks like without the data.

So that’s privacy by design on the regulation front, and then potentially other forms of data governance. What I’ve been thinking a lot about, and it’s funny, now we’re coming back full circle to our initial conversation about community and connection, is how do we create a support system for people to maximize the utility of their data? Because oftentimes when we think about regulation, we’re trying to minimize risk for the average person. And that makes sense. But it doesn’t necessarily help you make the most of your data.
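(Editor’s note: in code, privacy by design is mostly a question of defaults. A minimal sketch, with hypothetical consent scopes: everything starts off, and data flows only where the user explicitly opts in.)

```python
from dataclasses import dataclass

@dataclass
class TrackingConsent:
    # Privacy by design flips the default: every scope starts disabled.
    location: bool = False
    ad_personalization: bool = False
    analytics: bool = False

    def grant(self, *scopes: str) -> None:
        """Enable only the scopes the user explicitly opted in to."""
        for scope in scopes:
            setattr(self, scope, True)

consent = TrackingConsent()   # nothing is tracked unless the user says so
consent.grant("analytics")    # one deliberate opt-in
print(consent)
```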

So there are these new forms of data governance called data co-ops or data trusts, which are communities of people with shared interests. My favorite example is the medical space. There are a lot of diseases that are really tricky to understand. MS is one of these examples: determined by genetics, the environment, lifestyle, and so on. So it’s really hard to understand. And it’s also really hard to treat, because it depends so much on your idiosyncratic genetic makeup and your lifestyle. And there’s this data co-op in Switzerland called MIDATA. They essentially created this community of MS patients who can share all of their data in a safe spot. The trust, similar to banks, has a fiduciary responsibility, so it’s legally obligated to act in the best interest of the people who are part of that co-op. And now, because there are many people pooling their data, they can understand the disease much better. They can make personalized recommendations that go directly to the doctors of those people and say, here’s something that we’ve seen. Why don’t you try this kind of treatment? And then there’s a feedback loop where the doctor comes back and says, this worked, this didn’t work. So this is a totally different system, right? Because we’re no longer just by ourselves. It’s no longer just a burden on us to manage our data; we can do it with experts, and we can also do it with a collective, which makes our data so much more valuable.

CARTY: Yeah, it’s really interesting. You’re getting into the concept of a data ecosystem, which you talk about quite a bit in the book. Can you tell us what the state of data entrepreneurship or brokering is in this sort of way? Because I think it’s a really fascinating concept.

MATZ: Yeah, I think it’s still in its early stages, this idea of data co-ops. And it’s so different from everything that we’ve seen so far. And it also plays with new technology that we have. So it used to be the case, for example, that if I wanted to train a speech recognition algorithm or, say, Netflix’s movie recommendation algorithm, we all had to send our data to Netflix so they could train their model. They can figure out that if you like Titanic, you also like Love Actually, and if you like RoboCop, you also like something else. So they had to do all of this on their server. But what we can do right now is something called federated or distributed learning. And that means that instead of me sending my data to Netflix, in this case, Netflix can just send the intelligence to me, to my phone, because my phone is a supercomputer, right? My phone is so much more powerful than the computers that we launched rockets into space with just a few decades ago. So what they can do is send the model to my phone. It figures out which movies I’ve watched, which movies I liked. It updates the model, so it creates new intelligence, and sends that intelligence back to Netflix. So we all benefit from better recommendations. It’s actually how Apple trains Siri. Instead of collecting all of your voice data on a central server, Apple just sends the model to your phone, where it learns how you speak. That makes the model better, and then the intelligence is sent back to Apple. So for me, this is a total shift in how we think about and govern data. And I think data co-ops are just the entities that will drive this change, because they have an interest, again, in acting in the best interest of their customers and their users. And so they might be driving this technology shift when it comes to data ecosystems.
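(Editor’s note: the pattern Sandra describes is federated averaging: the model travels to the data, and only weight updates travel back. Below is a toy sketch with a linear model and five simulated “phones”; production systems add protections such as secure aggregation on top.)

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each device fine-tunes the shared model on data that never leaves it."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)  # gradient step, squared error
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Five "phones", each holding a small private dataset.
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

global_w = np.zeros(3)
for _ in range(20):
    # The server ships the model out; only updated weights come back.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # federated averaging (FedAvg)

print("Learned:", np.round(global_w, 2), "True:", true_w)
```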

CARTY: Yeah, it’s a really interesting topic. Let’s flip the lens to the corporate perspective, the company’s perspective. Companies currently work with data brokers who collect and sell your data, often at no benefit to the consumer, like you’ve talked about. If consumers can redesign the data game, as you put it, and upend the data broker system, what does that mean for companies in terms of how they buy and use that data? Because there are financial implications there, too. And I know we’ve got a long way to go to get there, but it would be a pretty stark change from what we witness today.

MATZ: And I think it’s such a great question, because that’s usually the pushback you get when you say, well, you shouldn’t be collecting all of this data. The answer is always, well, we need it to offer a great service and so on; it’s a competitive advantage. Technically speaking, that only holds if you’re in the business of selling data. So if you’re Facebook and you’re selling your intelligence and user data to other people, or if you’re a data broker and you’re selling it to other companies, this might not apply to you. But for every other company that creates value by using data, but isn’t selling it, it’s actually much, much better to say, I can offer the same service, right? So instead of you sending all of your data to me, we have an entity that we can just send our questions to, that we can send our intelligence to, and we can still learn something about who you are, but we don’t need to safeguard your data. So let’s say I’m a medical company and I want to develop a new drug or a new treatment. Collecting all of this medical history data and genetic data is a huge responsibility. Because once you have all of the data, you’re sitting on a pile of gold. And now you’d better make sure that pile of gold is protected, because otherwise it’s a huge financial risk. We see how expensive all of these data breaches are. We see the reputational risk that comes with them. So technically speaking, from a company perspective, you’d be much better off saying, I don’t need to collect this data myself. I can just use it where it’s sitting somewhere else. So now I mitigate some of the risk of collecting it, but I can still offer the same service. And there are already companies trying to do that. It just means we’re shifting away from data collection to data access.
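(Editor’s note: “data access instead of data collection” can be as simple as inverting who runs the query. A hedged sketch of a data trust interface follows; the class, fields, and cohort threshold are hypothetical, and real co-ops layer on auditing, consent checks, and legal fiduciary duties.)

```python
from statistics import mean

class DataTrust:
    """Holds members' records and answers aggregate questions;
    raw rows never leave the trust."""

    def __init__(self, records, min_cohort=5):
        self._records = records          # raw data stays here
        self._min_cohort = min_cohort    # refuse queries on tiny cohorts

    def answer(self, query):
        if len(self._records) < self._min_cohort:
            raise ValueError("Cohort too small to answer safely.")
        return query(self._records)

trust = DataTrust([
    {"age": 34, "responded": True},
    {"age": 51, "responded": False},
    {"age": 42, "responded": True},
    {"age": 29, "responded": True},
    {"age": 60, "responded": False},
])

# The company sends its question in; only the aggregate answer comes out.
rate = trust.answer(lambda rows: mean(r["responded"] for r in rows))
print(f"Observed treatment response rate: {rate:.0%}")
```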

CARTY: Right. Almost like we’ve hit a little bit of an inflection point there. Are there any digital quality or user experience initiatives that can help keep organizations on the right side of that ethical line? Whether that means testing with real customers, conducting user experience assessments, or any other methods like that. Is there anything that you advocate for there?

MATZ: Yeah, that’s a great– I mean, I think there’s this one. It’s almost like a thought experiment that I really loved, that I heard someone at Apple talk about once, and it plays into this idea of how comfortable you feel collecting data. Because you can imagine, typically you’re designing a product and you have a team working together. Everybody’s excited about the product, and most of the time people really do have users’ best interests at heart, right? So I think most of the time when you speak to people, they say, here’s how we’re going to make the product better, and for that, we just need the data. And what they do is they call it the evil Steve test, from back when Steve Jobs was still the CEO. The idea is that they go through this thought experiment: what would happen if tomorrow we got a new CEO with completely different values from the value system that we have right now, who is not trying to help users, but is just trying to exploit them to the furthest degree possible? Would we still feel comfortable collecting the data that we’re collecting today and setting up the system the way we’re setting it up right now? And if the answer is no, you should go back to the drawing board and see if you can do better. So I think it’s a nice way of putting yourself into the shoes of your future self, without the pure enthusiasm that project teams typically have for their product. So it’s like playing devil’s advocate in a slightly structured and kind of fun way.

CARTY: And I imagine you have to empower people up and down the line to challenge that line of thinking and to do those assessments, but also to take their criticism or their assessments seriously.

MATZ: Yeah, and it’s a really good point. So I also teach leadership, and one of the hardest things is to get people to speak up. It’s not fun to be the contrarian in the room. It’s not fun to always be the one saying, hey, but what about… So one thing that you can do is essentially rotate through the role and make it explicit, like it’s the red team. So we know that it’s not you who’s constantly trying to take down all of the good ideas that we have; it’s the role that you’ve been assigned, and you don’t always play it. Sometimes you’re on the good-cop side, but sometimes you also have to play the but-what-if card. So I think there are ways in which you can implement some of these things in organizations in a structured way.

CARTY: That gets into some of the psychological safety, workplace safety sorts of topics there. To finish with a broad question– in the book, you seem somewhat stuck between optimism and pessimism in terms of greater data awareness and regulation. You spoke earlier about how you try to see both sides of that coin, right? New regulation is always on the horizon. Consumers become more aware of their data choices every day. New products make use of that data in interesting ways, many of them useful. So what’s your outlook on the future in terms of data management and privacy, and how will stakeholders need to adapt?

MATZ: Yeah. I mean, it’s a really hard one. And I keep going back and forth, because it’s also not static. I’ve been thinking about this for almost 15 years now, and then you get something like generative AI, and it just blows up the entire picture, because it suddenly makes it so much easier to scale. I’m still, kind of, torn on both sides. What makes me optimistic is that I do think there are technologies now that help us solve some of the problems that we had before. It was always a trade-off: I can either have personalization, convenience, and better service, or I can have privacy, self-determination and so on. I do think that with federated learning, which we just discussed, there are ways in which you can actually have both. I can somehow protect my data and my privacy, but still get all of the things that my brain is hungry for, like the service and the convenience of getting from A to B easily because I have Google Maps, and so on and so forth. So I think that’s what we need. As long as there’s a trade-off and we as consumers have to pick, I think we’re lost. And as long as we exclusively rely on regulation to solve everything, that’s not going to happen either. Because, again, it’s so slow that we’re constantly playing catch-up, and it typically solves the basic problem, but it doesn’t necessarily help us with individual use cases. So my hope really comes from this, and it always sounds a little bit naive to think that technology can solve technology’s issues. But I do think that in this case, there is a way in which we can break down this either/or choice into something like: no, I do want to have service and convenience, but at the same time I also want to protect my privacy. And that’s what makes me optimistic.

CARTY: At the very least, technology can keep up with technology better than regulation can.

Sandra, lightning round questions for you here. First, what is your definition of digital quality?

MATZ: Digital quality. Oh, it’s a tough one. I think it’s like aligning your strategy and the content with whatever the person on the other side wants to see.

CARTY: What is one digital quality trend that you find promising?

MATZ: It would probably be federated learning.

CARTY: Yeah, and you talked a little bit about that in the podcast. But where might that lead us, federated learning?

MATZ: I think it’s essentially a way for users to have it all. To have privacy and self-determination, but also get the convenience and service.

CARTY: Great. What’s your favorite app to use in your downtime?

MATZ: My favorite, Spotify.

CARTY: And finally, what is something that you’re hopeful for?

MATZ: What I’m hopeful for is actually the next generation of kids and students. I see this because I’ve been teaching in a business school now for almost eight years. And even over the course of those eight years, I’ve seen so many young leaders come through, and they really care about values. So I think it’s no longer just, I want to optimize profits for all of my shareholders. They’re really thinking about, what’s the impact that I have with the companies that I work for, with the companies that I build? And that’s made me a lot more optimistic, I think, over the last few years.

CARTY: It must be refreshing to have that kind of perspective constantly infused into your research and what you’re doing. Right?

MATZ: Yeah, very much so. I think it’s the most immediate feedback that you get, right? Research takes forever. It takes years to get your paper published, and then 10 people read it if you’re lucky. But in the classroom, you constantly see what resonates with people and what is actually helpful in their day-to-day. So, yeah, extremely rewarding.

CARTY: Well, Sandra, this has been great. I really appreciate it. Congratulations on the book, and thank you for joining us.

MATZ: Thank you.