Ready, Test, Go. brought to you by Applause // Episode 26

Optimism and Opportunity in the Global South

 

About This Episode

Payal Arora, author and professor of inclusive AI cultures at Utrecht University, discusses the importance of tech optimism in the Global South and how it can inspire the development of inclusive products.

Special Guest

Payal Arora
Payal Arora is a professor of inclusive AI cultures at Utrecht University and the co-founder of FemLab and the Inclusive AI Lab. She is also a prolific author with more than 100 journal articles and award-winning books, including From Pessimism To Promise: Lessons From the Global South on Designing Inclusive Tech.

Transcript

(This transcript has been edited for brevity.)

DAVID CARTY: Growing up, Payal Arora aspired to be an artist, both for the sake of creating art and finding a new lifestyle on the other side of the world. When becoming an artist didn’t quite pan out, she became an art dealer, where she saw quite a few expensive pieces, including some impressive examples of pop art.

PAYAL ARORA: Well, actually, I’m from India, from Bangalore, and I kind of so-called escaped Bangalore when I was a teenager and thought, OK, I need to go to San Francisco, thinking I would be the next, I don’t know, some major artist. And it didn’t quite pan out, and then I realized poverty is not that sexy. So, you know how the next best thing is basically, if you can’t do art, you critique art. Well, in my case, you don’t critique art. You sell art.

CARTY: As an art dealer, she learned to spot the buyers with the real power. And just like in the tech world, it’s not always the person with the fancy three-piece suit.

ARORA: No, that was really interesting because, for many years, I was an art dealer dealing with very different kinds of buyers, actually, because that was the dotcom boom, where you had 23-year-olds walking in with a stash of cash saying, hey, I want Keith Haring, and I want Warhol. And we used to have the largest collection of Andy Warhol, Keith Haring, Chagall, and Picasso.

We were trained to ignore the ones who were the most powerful. So I basically did not show them any kind of attention, because they find that really annoying, and then they feel that they’re not important enough. You always have to speak about millions as if it’s nothing, like it’s nothing to you. So one of the things is you don’t even look up when they enter, especially if they are important. And that really gets to them.

So it’s interesting, because this is very much part of human nature: what gets us to be ambitious? What is it that gets us ticking? And so this was a genre of client who would dress really casual, in jeans. Often they would come with their entourage of suit people, and then you knew that those were the advisors. The ones who were overdressed and eager to show their status were the ones who didn’t have the power. And then there were the interior people, who had power but not quite, and they also wanted their commission. So there’s this larger politics of sales making. You have to assess who’s in the room, what the different stakeholders’ interests and motivations are, and you push it forward. And it’s something which has stayed with me for a long time.

CARTY: How has her past work inspired her research today? Well, she learned to read people and translate that into what they want. That was essentially her introduction to a global academic career, seeking to identify people’s unique motivations, cultures, and experiences.

ARORA: I think one of the things I have always done is speak from your strength. If you’re trying to sell something which is hard to sell, look at it not in terms of, what are the downsides to it, but what are its core assets that you can amplify, and why? And what can people relate to in that aspect, that artwork? You understood different kinds of buyers, different groups of people and their sense of aesthetic, their dynamic. You learned to tailor how you communicate to these different groups, and also what makes people click with something and shift from passion to purchase.

It’s interesting because now I’m an academic, and you would think these worlds are really far apart. Actually, they’re not. We are selling ideas. Especially in today’s time, we’re trying to translate our ideas, our insights and research, into action, in this case in the form of designs that work for different groups, that are agile and communicative, and yet can create some kind of consistency.

CARTY: This is the Ready, Test, Go podcast brought to you by Applause. I’m David Carty. Today’s guest is former art dealer and digital anthropologist Payal Arora.

Payal is a professor of inclusive AI cultures at Utrecht University in the Netherlands, as well as the co-founder of FemLab and the Inclusive AI Lab. She is also a prolific author with more than 100 journal articles and award-winning books to her name, including her latest, From Pessimism To Promise: Lessons From the Global South on Designing Inclusive Tech, which came out earlier this year. That book will be the focus of this episode. Her work on the Global South has earned her recognition from Forbes as, quote, a “next billion champion,” referring to the next billion internet users, who will largely come from the Global South. But what we know, or rather what we think we know, about this large base of the global population is often distorted by the strategic lens of the companies that attempt to serve them. The truth is we can learn a lot from the Global South, and perhaps even rediscover our optimism for technological innovation, which is all too often a source of stress and even desperation in Western culture. Let’s see the glass as half full with Payal Arora.

Payal, first of all, congratulations on your book, really, really great read, fascinating. I would recommend it for anybody, whether you have a strategic interest in the Global South or you’re just a citizen of the world. I really found it to be a great read.

But your book deals with the themes of optimism and pessimism quite a bit. Western culture is dominated by doom and gloom, particularly in terms of how it views technology, even as it stands to, you could argue, disproportionately gain from it. And the Global South is optimistic, even as they stand to be more negatively affected by climate change, economic inequality, sexual harassment, government retaliation, et cetera. So why do we see such extremes in perspectives? And maybe what can one side learn from the other to achieve some sort of middle ground between desperation and aspiration?

ARORA: Yeah, I think you summarized this really well, and it’s a nice place to depart from: there is this rising divide, which is deeply concerning and one of the key drivers for me writing this book, because I believe the problems we face today are so formidable, and are impacting all of humanity, that we cannot afford to have these divides. It’s too much of a luxury to banter over whether you are on this side or that side, because we have no option but to have hope. Hope gets us out of bed and gets us to think about ideas for the future.

And so this is basically the main point: pessimism is a privilege for those who can afford to despair. The rest of the world is moving forward because they’re rational optimists, and they understand that AI, just like any other innovative tool, can potentially be instrumentalized in ways that mitigate some of these harms. So yes, it can create harms, but it is also an instrument to control those harms, like fighting fire with fire. Take creative provenance: yes, generative media is creating all this synthetic imagery and audiovisual content, but AI can also be what detects that kind of content. So we’ve got to be able to use the power of these tools and ensure they work for us. We have to channel our energies there, rather than trying to contain AI, rather than resisting AI. And I’m not saying we shouldn’t have guardrails, by the way. But much of our energy goes into positioning AI as human against the machine, this sort of binary: how do we protect ourselves from the technology? How do we ensure that AI does not destroy our mental health, destroy our democracy, take away the very essence of who we are, destroy creativity? In fact, if you look at the headlines across the board, it really is about AI being against us. And I think this positioning is naive, because we are the ones who create these technologies. We should be proud of the fact that, if we can create them, we can also change them.

It’s much like evolution. Evolution has not finished; it’s in process. And so is technology. Technology is always in the making, which means that you, obviously, can do something about it and shift it for the better, toward what works for you. So this is really the premise of the book and the driving force: let’s come together. And some common-sense things here: as I mentioned, 90% of young people live in the Global South. It’s intrinsic to a teenage brain that they, of course, are going to be optimistic, because their lives are well ahead of them. So they’re not going to wait for the next generation; they’re not going to sit in a corner and say, oh well, the climate crisis is going to ruin my planet, and I’m going to have nothing. No, they want solutions now. They’re ready to get their hands dirty. They’re hyper-motivated to experiment and fail as much as it takes until they get something right that will work and protect themselves and the planet.

So I think it’s: if they can do it, so can we. Because when people in the Global South are so optimistic, what they’re saying is that we need to come up with these solutions together. We do see the promise, and you need to see it, too, because we are in it together in that sense. And that should be a very humbling point, because we are in our bubbles and in our cocoons, and I think we’re going deeper. There’s this dark trend of becoming more and more inward looking. But that doesn’t stop these forces from taking place across the globe. All we’re going to do is delay the surprise, shall we say. And why would we do that?

CARTY: Right. I mean, there’s so much to unpack there. You do make a specific point of mentioning in the book that negativity and pessimism do not inspire change, and you call that pessimism paralysis. On the flip side of the coin, you write, quote, “It is not naive to be optimistic about our digital future. It is our moral imperative to design with hope,” end quote. I found those two sections particularly inspiring, really. And that gets into this concept of the next billion users, which you’ve explored at length and we’ve just touched on here; it basically describes the next stage of growth of the internet. The Global South makes up a large portion of this user base. So if we’re able to provide useful, unbiased platforms for the next billion users, how can we expect that base of people to fundamentally change our digital experience?

ARORA: Yeah. So one thing is that we have to look past the binaries. We spoke about AI against the human, or machine against the human. We also need to get past Global North versus Global South. When I underline that divide initially, it’s because these trends are happening. But imagine when we combine forces. For example, the Global North: what does it have as a major strength? It has the privilege of enjoying liberal democracy, which most of the world would love to have. We have the kinds of freedoms that most countries would like and that people dream about. I mean, they are dreaming about it. They’re fighting for it. And we’re talking about less than 7% of the world’s countries having a liberal democracy.

Now, the reason I underline this is because we feel we don’t get it, while the others are so optimistic. But that’s because they look at liberal democracy not as a Western construct but basically as an expansion of the freedoms that you can have. Look at an extreme case in Afghanistan. The women can’t sing in public. Now I think they can’t even talk to each other in public. This is something so out of this world, so alien to our everyday life. But we have to understand this Handmaid’s Tale happening right now and build solidarities around what the digital means for a woman in Afghanistan who does not have access to public space or jobs, who can’t be visible, who cannot be heard. What happens when she is stuck at home is that her humanizing happens online, because, in relation to her everyday reality, which is so suffocating, the online allows her to express her opinions, to be more free, to resonate, and also to mobilize the rest of the world, which doesn’t seem mobilized enough to do something for them. And this is just one of many examples. The majority of countries in the world criminalize homosexuality in some form or other, from Uganda to many parts of the Middle East to South Asia, to the point, even, of death. And we do take our own freedoms for granted. We really have reached a stage where, in fact, it’s pretty OK. Even the most conservative in the US are like, yeah, it’s not going to be a talking point in our elections.

So how does this translate into tools and usage? Imagine you’re a homosexual person in Uganda. You want to self-actualize. You want to feel, I am normal, I’m not a freak, and I’m not going to suppress it, because this is very much part of who I am. And you find a community online. You engage with it. You’re able to play with that kind of content. That allows you to normalize yourself in the spaces that you can. And I can go on about all these kinds of instances, which come from the fact that our freedoms are so deeply restricted in the majority of the world. In relation to that, the internet and these digital tools enable far wider sets of freedoms in a safer way. So yeah.

CARTY: Yeah. So in other words, that digital outlet becomes the source of inspiration, of inclusion, of the ideal that might not exist in these different cultures. But to tie it back to the strategic lens that a lot of big tech companies might bring to this discussion, you mention that many of those companies are falling behind in appealing to these underserved global markets. You argue that can be because those markets are seen as risky investments, or because there’s bigotry and bias baked into the algorithms, or because companies see connecting with these demographics as an altruistic mission rather than a strategic one. And you use this to argue that startups might have an advantage here. So why are startups poised to potentially claim a market advantage over their larger brethren in the global marketplace?

ARORA: Yeah, it’s a really good question. There are a lot of startups springing up across Africa, South Asia, and Latin America, across these regions, basically. What do they have as an advantage? Because typically, when I say, oh, look at this amazing number of startups, especially ones using AI and large language models, people say, well, we have that, too, out here. So what’s special about that?

And the difference is that their vantage point is hyper-localized. So take, for example, a dialect in India, instead of a typical large language model, even in your regional languages, which is itself a feat, by the way. We see that OpenAI is primarily built to serve the English language, even though it offers multiple other languages; it’s not as effective compared to, say, the Chinese model, Qwen, I believe. And now what’s happening with dialects, and not just dialects but the way we speak: I may have this sort of insider-community, tribal way of speaking, from acronyms to slang to these kinds of expressions, which allows me to even further hyper-tailor my tool. That could still have a sustainable business model, because, say, a dialect in parts of Africa or in India is like an entire country in Europe or an entire state in the United States.

So I think that’s well worth considering: just the socio-linguistic dimension allows for a lot of hyper-tailored products, like educational books, which are being used, by the way, in many parts of Africa, where it was extremely formidable in terms of cost to print multiple kinds of books in multiple languages. Forget dialects; even in India, we’re talking about 22 regional languages and almost 20,000 dialects. Just a few years ago, this was not an option. Today it is an option, not just in terms of books for kids but in terms of all kinds of services that actually speak to them. So that enables a point of entry and a consolidation of the loyalty of their user base, which is something that a typical company trying to transplant its data and strategies cannot actually do, because you’re not thinking about the context.

Also, think about the way in which you want to portray yourself, in terms of your profile, for example. We always take for granted that everyone wants to be visible and heard. But actually, being visible as a woman in many parts of these countries could be very deadly in some cases. So you have to be much more agile in context, because a lot of these Global South contexts need the ability to enable or disable the visual, or to come up with avatars of different forms. And that’s where generative media could be very playful. Rather than reminding people how constrained they are, we can go into the realm of play. And you see that playfulness happening a lot, even in dating, by the way, where you can do it through audio, because it’s very high risk to date. The majority of them have arranged marriages, and they’re not supposed to be dating before marriage. And that’s, again, something very alien to us.

So what’s your starting point as, say, a typical company that wants to expand in these markets? You just assume that they are going to be like you, because it’s so beyond our worldview. We can’t possibly imagine otherwise; in fact, we’re always saying, oh, these teenagers, they’re too visible, too heard, always trying to get into the attention economy. These are our starting points. So I think it would be enormously helpful to co-design and co-partner in a legitimate way. These terms are so banal, but this is what I mean: because you can’t possibly understand that context yourself, find partners who do understand it, and imagine what kind of collaborations you can have going forward.

CARTY: Right. And to that point, in our conversation before the podcast, you mentioned a few interesting examples of solving big problems with simple design solutions. I think you mentioned one around certain map apps not working in the Philippines, and another about maintaining the safety of female drivers in a different market. Apologies, I don’t remember which one that was. But can you give us some examples of what that means, and how much of the problem gets rolled up into our cultural or regional biases?

ARORA: Yeah, absolutely. On one end, we could go on and say, oh, these women have all these problems. On the other hand, let’s take the ride-hailing sector. There’s a huge demand for women drivers, but there’s a scarcity of women drivers. So there’s a big issue, because half the world’s population are women, and in a lot of these countries, families don’t feel comfortable letting their daughters, mothers, wives go alone with some stranger in a taxi. So what does the company do?

Basically, you need women drivers, and they don’t want to drive. So get to the heart of it: what’s the barrier? And what we found, through almost a couple of years of research, is that in a certain sense it was far simpler than expected. One of the biggest obstacles was the lack of toilets visible on their screen, because women won’t go to just any toilet. They wanted toilets as part of the mapping, but not just that; they also wanted a rating. Is it safe? Is it secure? Is it clean? And I’m not simplifying it as the main reason, but it was a big, big factor in enabling them, because they feel they need to stop for a variety of reasons. Women need to go to the restroom for a far larger range of reasons than males. So that’s an easy fix, in a sense: even if you invest in toilets with the government, for example, or ensure the cleanliness and get a rating system going, at least you’re being constructive, because then you can get rid of that one big barrier to women becoming drivers for your app. And then you can actually monetize and expand your market.

That’s just one example in India, and there are many others in many other contexts. The other is about mapping, as you mentioned. For example, Grab has pioneered a different kind of map, because the usual iOS and Android maps don’t work there; they’re sort of standardized, and the typical question you’ll ask is, what’s the fastest route? But in their context, they also allow for, what’s the driest route? Because many of these contexts have monsoons, and they do a lot of crowdsourcing to get their riders and users to report which places are drier than others. There are also political rallies, which are very predictable at certain times; certain streets get blocked every year. Whether it’s in Mexico for educational protests, there are certain cycles that happen every year as part of the rituals. India has a gazillion celebrations where people take to the streets. And that’s predictable. So there’s a lot of prediction that is culturally coded and can be fit into your maps as you design them.

So you can see that this is not saying, oh, we need an entirely new map. It’s more: can you ensure that the cultural patterns, which are relatively stable and recurrent, are instilled into your map, so people get a more adaptive and more effective response to their query?

CARTY: Right, and AI plays a role here, too. When discussing AI in the book, you write, quote, “As AI unpacks human culture, UX researchers can, in turn, unpack the culture of AI,” end quote. So can you explain the role that UX experts should play in validating and tuning AI systems, especially in this global framework?

ARORA: Yeah. So there’s much that we learn about ourselves with the use of, say, generative media. For example, when you’re prompting one of these generative media tools, you understand how you’re thinking about things based on the answers you receive. And so there’s a learning process that is literally visible, whether it’s ChatGPT or if you’re generating an image with DALL-E or Firefly. And then you’re basically thinking, well, that’s not how I want to be represented. That’s not exactly what I said. What is it that the machine can’t seem to get about what I think?

And the classic case is also Google. This is a very old-school thing: autocomplete suggests what people typically think when it completes your sentences, because, obviously, that’s been entered a lot of times. And it’s also responsive; I mean, it’s kind of playful and fun. For example, with the series Monsters, right after it came out, if you look at the search and start typing, it’s like they all want to know, when are they going to be free? And you get a sense of, oh, that’s how people are receiving the show.

So there’s a lot of feedback that comes from these automated systems, which tells us we’re not alone in the way we think about things. It could be problematic, in which case we need to work on it because the content is more racist or stereotypical in many ways. Or it could be very playful and fun, because you’re like, oh, this is so crazy; this is what they want out of cat content, for example, or the shows that I consume. So I’m not completely alone, because oftentimes you’re consuming content alone, and you feel isolated, because all consumption needs to be shared in some sense. Whether it’s a meal or a TV show, we want to share that memory, that experience, that feeling. And this is a really important way of doing that.

And then there’s representation, where we can potentially intervene, because we can start to contribute new kinds of data to shift the data sets that shape these generative AI tools. As we know: garbage in, garbage out. So what happens if I come up with a data set that I populate on the Creative Commons about how I want my community to be seen and heard? What would happen to the algorithm? You actually can influence it. And that, to me, is very optimistic: us getting our hands dirty, getting our feet on the ground, and moving forward to change these systems so they can better reflect who we want to be and how we get to be seen and heard by the world.

CARTY: Right, and you write about this in the book: global data sets aim to solve the problem of regional bias in AI, but they can actually exacerbate it to some degree, creating what you call a demand for data presence. You wrote about how Western perceptions of the rest of the world create caricatured tropes that work against the self-actualization and social well-being that people from the Global South and around the world crave from digital experiences. And some high-population informal housing areas, for example, aren’t even mapped anywhere. So how can tech companies do a better job of sourcing data from these communities, to your point, influencing that algorithm, and then implementing it in a way that mitigates bias and data deficits?

ARORA: Yeah. First, recognize that it’s a problem. That’s so fundamental; if we don’t see it as a problem, then we will not try to figure out a solution. And usually, like I said, these markets are seen as too high risk, or as not big enough, still looked upon in market terms. I think we have to do it for the sake of doing it, because it will pay back in the way you understand and educate yourself. Even if I’m not going to that market, say I’m an industry person, a designer, and I’m like, yeah, I’m not designing for the Brazilians. Yes, but it’s really important to understand different contexts, because it creates an elasticity of your mind. Today they may not matter to you, but tomorrow they may be your competitors. And don’t think that Brazilians will stay in Brazil and Indians will stay in India. In fact, they are spanning across the globe.

And I think we come with this model that we still innovate in the West and disseminate to the rest. So if we are not going there, why bother? And when we do go there, we scramble to do what we have to do. That doesn’t quite work, because we don’t have a strategy there. We’re not moving from a point of strategic advantage. And what is advantageous is getting to know not just your competition but the larger global dynamic, because understanding your local user is intrinsically global: they consume global content.

In fact, I always find it very surprising that, say, when Squid Game came out, people were so shocked: oh, how come it became a hit? This so-called international content was actually pretty permeating. It’s not niche content. And we still have terms like “world music.” What does that mean, actually? It’s basically anything outside the United States. You lump it all together. In fact, the industry term “rest of world” is almost like a joke, because anything outside of the US is rest of world, which includes Europe. But actually, Europe kind of falls between the cracks, I think, because it’s not even that.

So I think we need to get better at not just identifying the problem but also creating some distance from what we do. I may be doing X product, X service; I may never scale. But it’s good to think in holistic terms, in global terms, and to educate yourself on diverse models, so you can develop that elasticity of mind and agility of your product or service, because tomorrow is another day. You don’t want to be caught off guard. And it will just enrich your strategies going forward. Simple as that.

CARTY: World music is nice. I’m more of a fan of Neptunian music, but it takes a while to get here. I will admit that.

So to get back to the original point, the pessimism versus optimism divide, I’ve got a term that is sure to elicit some dread, and that’s digital surveillance. But there is a way to reframe and reengineer that. You talk about this in the book. Can you explain the concept of a surveillance system of care and how we, tech companies and users alike, can do a better job of supporting that?

ARORA: Yeah. So if you say surveillance, or tracking, people have this gut reaction of, oh my god, this is oppressive. It’s controlling. I don’t like to be controlled. What about my consent? What about my freedom? So it’s really positioned against freedom. And I argue that’s not necessarily the case. Again, I’m not undermining the genuine concerns about surveillance, particularly in authoritarian regimes, where they are tracking citizens, or about surveillance capitalism, where companies indiscriminately collect your data and potentially violate your consent by sharing it with third-party vendors, et cetera. All that stuff is [problematic]. But there’s another side to the story.

And if we have this one-sidedness, we will get this wrong and alienate a large number of populations in the world. For instance, there’s a difference between watching each other and watching over each other. Women are a classic case, because women around the world do not have the freedom and equal access to public space that males enjoy, particularly in patriarchal cultures. I already brought up the extreme case of Afghanistan, but we don’t need to go that extreme. Take Mexico, where the death rate for women is so extraordinarily high, and it’s often so deadly to walk the streets at different times of night that women always have to think about it. And in South Asia, sub-Saharan Africa, you name it, across the board, women have a lot of concerns about access to public space and mobility through public space. But increasingly, they need that access, because they have jobs. They are becoming more integrated into society. These are all good things. But that also demands that they are mobile.

So their families are concerned about them. So there are tracking technologies, integrated into, say, the ride-hailing apps, which create a sort of guardrail so that drivers don’t just take them off somewhere and maybe rape them or become abusive. And there is something about the panopticon, which is normally seen in a negative sense, working here in a positive sense. That means, if you feel you’re being watched in a patriarchal context, you may actually think twice before being inappropriate with your customer, for example. Or take the cameras in dimly lit places or at train stations, even in France, by the way, where, if people are inappropriate towards women, they can get fined. This provides proof. There’s a certain kind of justice. And people really want justice in these places, particularly because these are microjustices that matter when the typical legal system is decades behind. Court cases pile up, so even if you have a very good law in place, the enforcement may never happen.

So what really matters to the average person living in the Global South is that they feel a sense of fairness. And with these tools, you can actually achieve that fairness. You can use AI to your advantage; it can track people in ways that create accountability, responsibility, and safety. That’s really what I mean by surveillance of care. And we cannot undermine how important that is, because security enables freedom. Surveillance is not necessarily the opposite of freedom.

CARTY: We’ve been talking about these competing and diverse perspectives on our physical and digital worlds today, but I want to hear from you, both as an expert and as a citizen of the world. You’ve compiled all of this evidence, these very persuasive arguments, and you’ve also lived and traveled all over the world. What’s your outlook on our digital future? Where do you think we’ll go from here?

ARORA: I think we will go upward, in the sense that it has to be hopeful, because you commit to that hope. And I don’t mean in a sort of linear way. I mean that we’re going to be surprised by the ways in which we can come up with solutions to formidable problems. And AI is going to be our muse. It’s going to be an extended sense of our eyes and ears and our thoughts, and it’s going to play devil’s advocate. We can really use it in ways that help us brainstorm legitimately towards a more inclusive future, if we build the right guardrails, if we bring a really good set of diverse stakeholders on board across the globe, and if we make sure we are committed to change for the better. And that’s it: not sitting in a corner saying, OK, the world is coming to an end, so let’s move to Mars. So that’s pretty much the bottom line.

CARTY: How much does it cost to move to Mars, just in case?

ARORA: I don’t know. Will Elon Musk give me a subsidized rate? I hope.

CARTY: Payal, lightning round questions for you here. First question, what is your definition of digital quality?

ARORA: Quality is authenticity and being true to yourself and the community around you.

CARTY: What is one digital quality trend that you find promising?

ARORA: I think about how we can de-bias data by creating new forms of data to train AI to see the world differently.

CARTY: What is your favorite app to use in your downtime?

ARORA: Ooh, funda. Do you know it? It’s a Dutch app which basically shows you different homes, and the Dutch have just amazing taste. So we call it funda porn out here. You’re like, ooh, that sounds exciting.

CARTY: So, it’s like a real estate app?

ARORA: Yeah, yeah, yeah.

CARTY: OK, we have our versions of that over here, too, so very, very well accustomed to that experience.

ARORA: The user-friendliness, I mean. Because I’m used to the American apps, and no, no, this is on a whole different level. Yeah.

CARTY: OK, I’ll have to check this out.

And finally, what is something that you are hopeful for?

ARORA: I’m hopeful for, well, better Mexican food in Amsterdam, for example. I used to live in California, and I really, really miss it. So I know. You would think I’d have something more profound. But I’m thinking about food, so.

CARTY: Well, you’re talking about a multicultural lived experience. That’s very appropriate to, I think, our episode today.

ARORA: There you go.

CARTY: Well, Payal, this has been a lot of fun, and I appreciate your perspective quite a bit and appreciate you joining us today. Thank you so much.

ARORA: Thank you so much. It was an absolute pleasure.