Ready, Test, Go. brought to you by Applause // Episode 30

AI Lessons from the Nonprofit Sector

 

About This Episode

Join Nathan Chappell, a leading voice in AI for social good, as he discusses the ethical implementation of artificial intelligence and offers valuable insights for both nonprofit and for-profit organizations navigating the AI landscape.

Special Guest

Nathan Chappell
Nathan Chappell leads AI strategies at DonorSearch AI, founded Fundraising.AI and has 20-plus years of nonprofit experience. He's an author, speaker and pioneer in ethical AI for social good.

Transcript

(This transcript has been edited for brevity.)

DAVID CARTY: Have you ever looked at a pine cone or a piece of driftwood and wondered, what can I turn that into? Well, while you stepped over that forgotten scrap of wood, Nathan Chappell picked it up and turned it into something memorable.

NATHAN CHAPPELL: I got involved in woodturning during COVID, which was the time when I think a lot of us found our passions in life and our hobbies, and I had always loved woodworking. I was the kid in junior high that stayed late after school to make a schoolhouse clock and things like that. But it wasn’t until COVID that I took a class and decided, you know, I’ve got a little bit of time. I’m not traveling as much. Fell in love with it, ended up converting my entire garage into a woodturning labyrinth of three different lathes, and just got really interested in different types of wood and things like that. So it’s been a lot of fun. I say it’s cheaper than therapy, but it’s actually not. It costs a lot more than therapy would, but it has the same calming effect on me.

CARTY: Woodturning is basically when you spin a piece of wood really fast on a machine and use tools to carve it into cool shapes like bowls or pens. Think of it like pottery.

CHAPPELL: It’s almost the opposite of pottery.

CARTY: Okay. Well, maybe it’s nothing like pottery. The point is watch out for splinters.

CHAPPELL: You’ve gotta be laser focused on it. Otherwise, with a piece of wood spinning at several thousand RPM, bad things happen. Instead of adding clay to build something up, you’re essentially removing wood to turn it down. When you see a wooden bowl, it comes from a giant piece of wood that is turned down, which means that you’re essentially carving it into that shape of a bowl. There are treatments that you have to do to it if the wood is rotten or it’s light. You have to stabilize it, and there are certain ways that you bake it and add resin to it and things like that. But you can turn driftwood — I’ve picked up driftwood from a beach up in Santa Barbara and made pens out of it.

I’m not into woodturning to make money or to do something at scale. I love turning small things like bottle stoppers for wine bottles or a handle for a pizza cutter or pens. I’ve made about a thousand pens. And none of these are really to sell; they’re basically made out of love to give to people that mean something to me.

CARTY: Nathan collects a piece of wood from each location when he travels, including Baird’s Tavern, which has a small role in American history. For him, it’s a departure from his less tangible work in the AI space, getting hands on in a multisensory medium.

CHAPPELL: I’ll be anywhere. I was in Geneva last year. I’ve probably smuggled wood home from every country I’ve traveled to over the last couple years. I always have this bated breath. I’m like, am I gonna get through security this time, or are they gonna X-ray and see this weird shape? I took a bag of pine cones home from Geneva last year. And I was like, ‘Oh, I’m sure I’m going to get stopped.’ I’m sure it’s probably not legal just because of whatever [reason]. I’ll find out one of these days, but so far so good. I’ve been in the clear.

I was with a CEO of a multibillion dollar company. It was several years ago, but it really stuck with me. I was asking her, ‘What do you do to relax?’ And she’s like, ‘I mow the grass.’ She’s like, ‘One, it’s quick, and it doesn’t take all day, but I can spend an hour. But also at the end, I can see the immediate impact of what I did. It was long, and now it’s short. And it looked messy, and now it looks nice.’

Very similarly, that’s why I like to do small wood projects with pens and bottle stoppers and handles, because it takes an hour. It uses all of your senses. Everyone needs a hobby, and if they’re intimidated because it seems complicated, it’s not. It’s super easy. Even with some minor, very inexpensive tools, you could start turning something. And so if anyone’s in the Nashville area and wants to learn how to make a pen, I’ve brought people into my garage so many times. You walk in without a clue, and you walk out [feeling] like, ‘I could totally do this.’ People can look me up, and we can connect on all things woodturning. It’s just always fun to see the delight when I send something to somebody, and I’m like, ‘Hey, this is from our time at Baird’s Tavern,’ and they’re just like, ‘Oh my gosh. That’s so cool.’ It’s one of those gifts where I get more joy out of it than the person receiving it, I’m sure.

CARTY: This is the Ready, Test, Go. podcast brought to you by Applause. I’m David Carty. Today’s guest is woodturner and nonprofit AI enthusiast, Nathan Chappell. Nathan is a pioneer at the crossroads of artificial intelligence and generosity. As senior vice president of DonorSearch AI, he’s leading the charge in how nonprofits leverage AI to drive social good. He’s also the founder of Fundraising.AI, a community focused on ethical AI use in fundraising. He’s coauthored books like The Generosity Crisis and, more recently, Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good. His insights have been featured everywhere from Forbes to NPR and his own podcast, Fundraising AI, which strives to be a resource for leaders in the nonprofit tech world.

We often talk about AI in the context of for-profit businesses, but what about the nonprofit sector? How are these mission-driven organizations using AI with both purpose and efficiency? What can we learn from their approach, especially when it comes to building systems that aim for high ethical and accountability standards? Nathan’s here to share some practical advice and get to the root of how organizations can build better AI systems. Get it? Root? Because of trees and wood? Yeah. Here’s Nathan.

Nathan, on this podcast, we largely speak with experts in the for-profit sector. But can you give us a high-level glimpse of how AI is being used by nonprofit organizations? And what are some of the key takeaways from a software quality and implementation standpoint?

CHAPPELL: Yeah. So, you know, the nonprofit and the for-profit sectors are not all that different in some senses and very different in others, in terms of how much they operate in what is essentially the currency of trust. They’re not different in the fact that they have lots of data, or frankly too much data, and they need to draw clear insights out of that data.

And so back in 2017, I started building machine learning models that were predicting which patients in a hospital were likely to make a charitable gift. It’s very much the same type of use case a for-profit would have: you have all the customers in the world, and you want to know which are most likely to, you know, upgrade their cell phone or buy a new phone or whatever it might be. In a hospital setting, only two percent of patients ever make a gift to a hospital, but the average hospital has two million patient visits a year. So it’s this needle in a haystack. There are about 1.7 million nonprofits in the country. Many of them have far more constituents or potential donors than they could ever engage, but not every person is a potential donor. Less than half of Americans give to charity at all. Of those, the ones that are gonna give to your nonprofit, the number gets very narrow.
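[For illustration: spotting the two percent of patients who might give is a heavily imbalanced classification problem. Here is a minimal sketch of that kind of model, using synthetic data and scikit-learn, not DonorSearch’s actual features or methods:]

```python
# Synthetic stand-in for a donor-likelihood model; not DonorSearch's approach.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Simulate a ~2% positive rate, mirroring "2% of patients ever give."
X, y = make_classification(n_samples=100_000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# class_weight="balanced" keeps the rare "donor" class from being ignored.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

# With 2% positives, accuracy is misleading; rank constituents by probability
# and judge the model with precision-recall instead.
scores = model.predict_proba(X_test)[:, 1]
print("Average precision:", round(average_precision_score(y_test, scores), 3))
```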

So that’s a very clear use case: building custom AI/ML applications to predict who’s likely to do this thing, and from there, understanding who, when, how much and for what purpose. I think the main difference between the for-profit and the nonprofit world is that the issue of trust is at the foreground. Most for-profit entities don’t talk about themselves being untrustworthy. I mean, they all have responsible AI frameworks, but the stakes are very low if you’re messing up predictions on who’s gonna buy the next pair of Nike shoes. Versus the nonprofit sector: if a very large household name, say the Red Cross, just to use an example, builds an algorithm that is racist and ageist and all the other ‘-ists,’ it won’t only affect the Red Cross. It will affect all the other nonprofits that are similar to the Red Cross. And so the stakes are very high, and the prioritization of responsible AI is probably the main distinction. But the use cases are not that different.

CARTY: You just mentioned a lot of the organizations in the nonprofit sector. There are obviously a lot of options out there, but I want to be respectful of your passion in this space. What are some particular examples or causes in the nonprofit sector that you’re encouraged by?

CHAPPELL: Yeah. When we’re talking about predictive AI, it started out, of course, with larger types of organizations, because predictive AI was not nearly as affordable or accessible as it is today, and not as affordable and accessible as generative AI.

And so, you know, when I think about the early adopters in the sector, they tended to be fairly large: hospitals, universities, large membership organizations trying to predict who’s likely to take this action. That’s where a lot of the early AI/ML was. It was at the enterprise level because it was very expensive. We also realized that the data that nonprofits had on their own wasn’t sufficient, so we had to go outside to source enhanced data. We started purchasing lots and lots of data to essentially round out their models and help them perform at a higher level. The big transition, though, has been since November 30th, 2022, when ChatGPT came out. That was an important date because it gave every nonprofit, especially small nonprofits, wings. It allowed them to level up in ways that were never before imaginable. And so fast forward from November 2022 to today, I think we see some of the most interesting, innovative use cases of AI around small nonprofits that don’t have a lot of deep experience, but they’re curious.

After doing this for the last seven or eight years and seeing successes and failures in AI, I’m absolutely more convinced than ever that curiosity holds the key to whether or not you’ll be successful in AI. That has a lot less to do with models and data and a lot more to do with asking the question ‘What if?’

CARTY: Yeah. That experimental sort of mindset. Absolutely. Nonprofits tend to be very mission-driven. In what ways does that clarity of purpose lead to better AI product decisions? And maybe how can for-profit teams translate that into stronger alignment between business goals and technical quality?

CHAPPELL: I think nonprofits suffer from mission drift as well. Some do, some don’t. Right? Ideally, they should all be mission-focused.

At its core, the intention of every nonprofit is essentially to put itself out of business. The idea for a nonprofit is that you exist to solve a problem. And if you’re so good at solving that problem, essentially, that problem wouldn’t exist, and therefore you put yourself out of business. I happened to start my AI/ML career at a cancer hospital. Now, what’s interesting about it is that the cancer hospital was originally a tuberculosis hospital over a hundred years ago. It was on the West Coast, in this dry desert, and it became very well known all over the country as, like, a movement. So people in New York, in Boston, in very cold and damp places where tuberculosis was rampant would put their beloved family members and friends on a train and send them out west to dry out in this desert. Their stories were actually written down in a book. It started as two tents in a desert, and it was literally a nurse who would just care for these people, and they’d get better. City of Hope is the name of it, and City of Hope is now one of the top cancer hospitals in the country. It started out as a tuberculosis hospital, and when tuberculosis essentially was cured, it had put itself out of business, and it found a new mission. And it found a new mission because one of the nurses at the tuberculosis hospital got cancer. And they were like, you know what? Tuberculosis is cured. Let’s reinvent ourselves.

So there is an ethos, hopefully, ideally, with most nonprofits: we don’t want to be focused on these issues forever. We want to make a significant difference. So that mission-driven aspect does provide some laser focus. But it’s also easy to do too many things and try to solve all the problems, and we see some nonprofits struggling with that as well. The ones that I think are successful do have kind of a maniacal focus on what they’re trying to solve. Instead of boiling the ocean, they say: here’s the problem. Let’s put everything at this problem and continue to support it.

There are lots of applications to AI there too, because one of the biggest challenges with AI, especially generative AI, is that it’s a tool that knows something about everything and can do almost anything. It’s very easy to lose focus and have this opportunity overwhelm, which I have personally as an entrepreneur. But I have to remind myself: ‘Okay. What is the problem I’m trying to solve? And what are the best tools to help me solve that problem?’ And then stay focused on that and peel back the onion. Right. Stay dialed in.

CARTY: There will always be more problems to solve. Right? You can solve the other ones later on.

You provide a lot of educational material and resources in your book. There seems to be an overall lack of AI fluency in the marketplace even as businesses plunge themselves into the technology. How important is it to continue to educate and upskill in AI, and how would you recommend that teams and organizations do that?

CHAPPELL: This truly tectonic shift that we’re going through as a society, as a humanity, I think is grossly underestimated by most people. I think they’re like, well, humans are adaptable. There’s always been new technology, and we figured out crypto, didn’t we? And the Internet. But AI is like the Internet, except that it actually gets better and self-improves; or like electricity, in that it’s in everything, except it self-improves.

There is, in fact, no other technology that humans have ever encountered that self-improves. So this improves not in an incremental way, but in an exponential way. The limitations of AI are not based on the number of 26-year-old kids in Silicon Valley wearing hoodies. AI is training AI. And that exponential nature is something I think most people really fail to understand. We talk about exponentials in these loose terms and don’t realize that if you fold a piece of paper in half forty-two times, it’ll reach from Earth to the moon. So exponentials are real; they are not incremental by any stretch. And what that means is that there are winners and losers. For people asking, ‘What is my future in my job, and what do I need to do to prepare?’: AI will replace some positions, but it’s going to change almost every position.
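[For illustration, the paper-folding figure checks out if you assume a sheet about 0.1 mm thick:]

```python
# 42 doublings of a 0.1 mm sheet, converted to kilometers.
thickness_mm = 0.1
height_km = thickness_mm * 2**42 / 1_000_000  # 1 km = 1,000,000 mm
print(f"{height_km:,.0f} km")  # ~439,805 km; the moon averages ~384,400 km away
```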

I think those that lean in and really increase their AI fluency, by reading blogs every day, reading books, talking to friends and neighbors about how they’re using this, and incentivizing curiosity within the workplace, are really going to stand the test of time. Those that put their head in the sand and pretend that this is a fad, that it’s going to go away or it’s not exponential, are going to have a really hard time keeping up. Procter and Gamble and Harvard Business School just did a study a few weeks ago with 770 workers at Procter and Gamble. The net effect was that individuals using AI performed almost as well as teams of people that were not using AI. Essentially, in every use case, they’re doing their work better. They’re doing it faster. And at the end of the day, they’re having stronger emotional benefits, so they end up being happier. It’s going to be really hard to compete in the future if you don’t increase your AI fluency, because you’ll be competing against people that are doing their work better and faster and, in the end, being happier because they did their work better and faster.

CARTY: Yeah. It’s a seismic shift. Nonprofits often work with limited resources, but they have to build AI systems that are accurate and deeply aligned with human outcomes. You mentioned the potentially problematic ethical outcomes if a system isn’t aligned on that level as well. What can for-profit organizations learn from this kind of deliberate, high-quality AI implementation?

CHAPPELL: On the for-profit side, I worry much more about the organizations that are using AI without thinking about what they’re trying to achieve and what the long-term or unintended consequences might be. So many are rushing into AI because it truly is an arms race right now in terms of who has better models faster, but not necessarily safer models.

Now, in the absence of any regulation, in the US at least (in the UK, there’s some AI regulation that is still light, in my opinion), it’s incumbent on any organization that is using AI to ensure that its use of AI is aligned with its corporate values. So go back and look at your corporate values and actually understand that they make a difference. They should point at what type of AI you should use, what AI you should not use and how you should use it.

Be prepared to say no. We’re in a new world where just because you can doesn’t mean that you should. Technology will continue to present itself faster and faster, with things that seem impossible, or were impossible, that can do these amazing things but may not be beneficial in the long term. So for every for-profit and every nonprofit, this is where we’re in it together. AI doesn’t help us if we all destroy humanity in the future by doing things irresponsibly.

So I think going first and prioritizing, looking at your corporate values and aligning your AI governance with those corporate values, has to be your top priority. Then, when you’re presented with a new type of technology that can deliver really strong short-term results but in the long term could actually have unintended consequences on society, or even on your revenue stream, be prepared to say no. That AI governance policy is the only way I know of to give you that checkpoint to say, okay, this appears to be good short term. Is it good long term? If the answer is yes to both, then we move forward. If it’s not, we don’t. The last part of that is that AI is a dynamic, exponential technology, and it changes. Therefore, your AI policy, your AI governance, also needs to change. It needs to be reviewed not once every ten years (which is what normally happens in data governance) but literally every six months. If you developed a policy last year, it probably didn’t talk about agentic AI. But now, 2025 is the year that that’s all people are talking about. Right? Ensuring that your AI governance and your organization stay up with advances in technology means you have to have people around the table who wrestle with what used to be philosophical questions that are now very practical questions.

CARTY: And empowering everybody up and down the chain to raise a red flag if one needs to be raised. How have you seen small, resourceful teams deliver better AI outcomes than larger, more well-funded ones? Are there any habits or practices that stand out that for-profit organizations can take to heart?

CHAPPELL: It’s fascinating, the net effect of AI. Of course, it was the big companies that invested first. They have the money to invest, and they were spending stupid money early on. Even the JPMorgans, Morgan Stanleys and RBCs were spending crazy money to build their own LLMs to win that arms race. But necessity is the mother of innovation. Smaller organizations, from small mom-and-pop operations to even midsize companies, to be honest, have a distinct advantage today: nimbleness and clarity about what you can do to actually enact change, which is a real advantage when you have a technology that’s completely democratized. Big corporations are spending so much money and time just trying to turn the Titanic, right, or, not the Titanic, probably not a good example, an aircraft carrier or whatever. Super hard to do and super slow. Right? Amid those hesitancies and all those kinds of things, small and midsize companies have this great advantage because they have a thirst to do something different. They don’t have all the bureaucracy of, ‘Well, we tried that before and it doesn’t work, so we just don’t try.’

Going back, and this is me thinking back on the last seven years of this AI journey and seeing a lot of organizations succeed and a lot of them fail: the key difference is that the organizations that reward questions more than answers are the ones that thrive in a world where AI is completely accessible, available and affordable. Those that incentivize their people to ask ‘What if?’ continuously. Train your team to color outside the lines and think out of the box, and all the things we used to talk about that weren’t really practical are now entirely practical, because I can go to generative AI and say, ‘Give me 101 ideas for this stupid thing I’ve been stuck on,’ and it will probably give you a dozen decent ones. So it’s just a new game with new rules.

CARTY: It’s like you’re reading my ChatGPT history with that suggestion. And, also, I hope that turning the Titanic thing wasn’t a Freudian slip. I think you’re more optimistic.

CHAPPELL: No. No, it’s not. Let’s hope not. It might be for some, but probably not for most.

CARTY: Sure. You sound more optimistic about AI outcomes than that. So transparency and accountability, these are often talked about in the context of responsible AI, ethical AI, but these are notoriously difficult to operationalize. Right? Are there any practical steps that organizations can take to build AI systems that people can trust and understand?

CHAPPELL: Yeah. There certainly are. First is, I think, recognizing that ethical AI is essentially a minimum expectation. People put that out there, especially large corporations: ‘Well, our AI is ethical.’ The reality is: ethics according to whom? To your point, it’s very difficult and subjective.

Second, responsible AI is not the same thing as beneficial AI. Just saying something is ethical doesn’t mean that it’s actually avoiding long-term consequences. Social media, for example, is not unethical as a technology. In fact, by all intentions, it was created to bring people together. It was designed to connect people who wouldn’t know each other otherwise and, ideally, bring us closer together. But, of course, the net effect, and how it’s been abused, has been quite the opposite: increases in anxiety and depression and teen suicide, and all those net effects of too much. So while social media is not unethical, it’s also not beneficial to the long term of humanity.

That’s something every organization that is selling a product or a service needs to really wrestle with: again, ensuring that your AI is not unintentionally creating bias. And we don’t even talk about bias anymore. That was such a big topic several years ago. But we got really good in predictive AI at understanding how models are making decisions. So what used to not be transparent is now totally transparent.

I’m a huge advocate here. I have a few patents on AI/ML that, at the time in 2018, felt like the right thing to do. It’s like, ‘Oh, you’ve got to protect your IP and get these patents on AI.’ And then you realize that, for all intents and purposes, that means my AI is in a black box. And I wouldn’t show anyone that black box because essentially that would infringe upon the IP. I’ve become a huge believer in transparent AI, which means that the AI can be interrogated. Predictive AI allows you to interrogate it, which means that if I’m going to predict that you’re going to buy a cell phone, you would get a score for the likelihood of you buying that cell phone, and I should be able to tell you what math was used for you specifically that led to your score of 80. Predictive AI, compared to even seven years ago, allows you to do that completely. There’s no reason why you can’t actually interrogate predictive AI. Generative AI, on the other hand, is very, very different. Generative is kind of a crazy black labyrinth. In fact, even inside OpenAI, Anthropic and Perplexity, there’s not a lot of clarity about why those models work as well as they do.
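[For illustration: with a simple linear model, ‘interrogating’ a prediction means reading off each feature’s contribution to one person’s score. The features and data below are hypothetical, and production systems often use attribution tools such as SHAP for the same purpose:]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features behind a "will they buy a cell phone?" score.
features = ["past_purchases", "months_on_plan", "support_calls"]
X = np.array([[3, 30, 1], [0, 6, 4], [5, 48, 0], [1, 12, 2]], dtype=float)
y = np.array([1, 0, 1, 0])  # bought a new phone or not

model = LogisticRegression().fit(X, y)

person = X[0]
for name, contribution in zip(features, model.coef_[0] * person):
    print(f"{name}: {contribution:+.2f}")  # per-feature log-odds contribution
print(f"baseline (intercept): {model.intercept_[0]:+.2f}")
# These terms sum to the log-odds behind this person's likelihood score,
# so the "math used for you specifically" can be shown on request.
```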

So bias is being mitigated on one hand; but on the other, because an organization can intentionally bias a model in a certain direction, we’re putting a lot of trust in those organizations, and we’ve got to be very mindful of that.

CARTY: Absolutely. Agentic AI, you mentioned it before. It has the potential to amplify positive impacts and unintended harms. So, what strategies can organizations use to help ensure that these systems act in alignment with their mission, and how can they detect and correct behavior that strays from those intended goals?

CHAPPELL: This is one of the things that keeps me up at night. I mean, I love philosophical questions, but these are, again, philosophical questions that are no longer philosophical. We’ve thought about it, we’ve watched all the sci-fi movies, and we know how they end, and, you know, it’s all the things.

When we think about agentic AI, I think there are two parts that are independently important but also collectively imperative. On one side are agentic models, which are essentially models that take action, autonomous action. You give them a target, and that target is to achieve something. Convince this person to buy this product, or innocuous things like, just plan a vacation for me. Those are fine. You can choose to do that or not. But then there are the ones where you’re basically saying, convince this person to take action on this thing. Now, a model that has read every book on human psychology and consumer behavior does not know the difference between manipulating and persuading. Right? It literally will take the most expedient path to getting a person to do that thing in whatever way, conniving or authentic. Organizations really need to pay attention to that, because a model won’t know the difference between manipulating and sharing.

So always take a human-in-the-loop approach to agentic AI to make sure that that model is representing, again, the corporate values, the values of the organization, because I don’t think anyone wants to end up in the paper saying [for example], ‘Hey, AT&T just manipulated 28 million people to buy a new cell phone.’ But a model won’t know the difference. It needs a human in the loop.

The second part of agentic AI, which I think becomes more complicated, is the bot that takes the form of a human. An agent behind a screen and a one-dimensional cursor, where you’re just using it to predict something or plan your vacation, seems fairly straightforward, to an extent. But an agent that takes the form of a human is an entirely different question, and it’s one that I wrestle with a lot. Now, if I want a career coach or someone to help me rehearse for an interview, that could be very helpful. Or I might want a bot that looks like a human to help me quit smoking. Bots are 87% more likely than a human to convince you to take their point of view. So they’re really, really good at this kind of stuff. But portraying itself as a human, well, what kind of human? In sales, not surprisingly, a lot of the agentic bots we see are very attractive women. In fundraising too; there’s a model doing this now, and not surprisingly. The bias in there, obviously, is that people respond more. It’s impossible to remove that form of bias entirely, so instead of perpetuating bias, we have to be really mindful about all of those things.

I don’t have all the answers for you. The main ones: one, make sure that it aligns with your values, both short term and long term. Second, always, always have a human in the loop. It’s way too early to trust that an agentic model is going to continue to do its intended purpose without defining your values and making sure that there are proper disclosures and things like that, at a minimum. But I don’t know. It’d be great to have this conversation again a year from now. Two years from now, it’d be like a whole different world.
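[For illustration, a minimal sketch of a human-in-the-loop gate for an agentic workflow; the actions and approval flow here are hypothetical, not any specific agent framework:]

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    reversible: bool  # can the action be cheaply undone?

def approved(action: ProposedAction) -> bool:
    """Low-stakes actions proceed; irreversible ones wait for a person."""
    if action.reversible:
        return True
    answer = input(f"Agent wants to: {action.description}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

queue = [
    ProposedAction("draft a thank-you email for staff review", reversible=True),
    ProposedAction("send an appeal to 28 million contacts", reversible=False),
]
for action in queue:
    print(action.description, "->", "run" if approved(action) else "blocked")
```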

CARTY: Well, hopefully, we influence some caution in this space, right? That said, if you could wave a magic wand and have all nonprofit and for-profit organizations follow a singular ethics framework, what would be essential to include in that framework? What kinds of problems would that help us avoid?

CHAPPELL: We created an open source framework a few years ago, and it was crazy. It almost killed me. I pulled together a collection of people from around the world to do this. We had 1,700 emails back and forth to create this framework, and we were looking at what existed from government agencies and regulatory bodies and things like that. All to say, they’re all about ninety percent the same: privacy, security, explainability, ethics, all of those things that go into it. And there are a ton of them. You can just grab one. In fact, ours at Fundraising.AI is so generic it could be used by any for-profit or nonprofit. But it goes two steps further, being that it’s focused on nonprofits. It addresses environmental sustainability. That may not be important for some organizations, but in the nonprofit sector it felt important, because if we’re doing good for humanity, we also have to be conscious that AI has an environmental impact. It also means that, as consumers, we have agency, and we should make it known that we should look at sourcing clean energy for AI systems. That may not be true for a lot of organizations. They may not care about that. But if they do, we’ve created 10 tenets, basically 10 pillars for responsible AI, which include all of those things. And, again, they’re free. They’re totally open source at Fundraising.AI. We’ve seen so many organizations do this: they just go on there and copy the tenets off the website, go into GPT and say, ‘Okay, here are 10 tenets that we agree to, or maybe only eight of them. We like these eight.’ Next to that, they put their corporate values and say, ‘These are our corporate values. Can you build me an AI governance policy that mirrors our corporate values within these eight principles that we like?’ And instead of this taking weeks and months, literally, two hours later: okay, here we go. We’re good.

Last, as I already shared, is building in a mechanism where that policy is reviewed. We update ours every six months, because this technology just changes and advances so quickly. You have to have that kind of routine behavior of continually asking, ‘What’s the new thing that we need to add to this or address?’

CARTY: Lightning round questions for you here, Nathan. First one, what is your definition of digital quality?

CHAPPELL: Digital quality, data quality, for me represents data that is tested, that is aligned with the type of data your organization would expect to use and see, that has relatively strong parameters around data hygiene, and that has a continuous process for checking the quality of the data.

CARTY: What is one digital quality trend that you find promising?

CHAPPELL: I’m actually a little bullish right now on the opportunity for synthetic data. This is probably kind of a curveball for you. But I think there are some technical breakthroughs in synthetic data that are letting us round out some big gaps and holes in data right now, and allowing us to do that in a pretty transparent way.
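[For illustration, the simplest version of the idea: fit a distribution to real rows and sample new, fake rows that resemble them. The donor table here is hypothetical, and real synthetic-data tools model far richer structure than this multivariate Gaussian:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical donor table: [age, last gift in dollars].
real = np.array([[34, 50], [61, 250], [47, 100], [52, 75], [29, 25]], float)

# Fit a multivariate Gaussian to the real rows and sample new ones.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print(synthetic[:3].round(1))  # rows that statistically resemble the real ones
```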

CARTY: Fascinating. What’s your favorite app to use in your downtime?

CHAPPELL: Oh my gosh. I probably use ChatGPT almost continuously, whether I’m in a museum and I’m curious about a piece of history or I’m in a garden and I need to identify a plant. That has to be my number one most-used app.

But my new favorite pet toy app right now is free. It’s called Typpo, and it takes an annoying voice memo and turns it into a video with subtitles. I think it’s so much fun to just send people these custom little videos. They once annoyed people, but now people actually look forward to watching them.

CARTY: Oh, that’s interesting. And, finally, Nathan, what is something that you’re hopeful for?

CHAPPELL: I am hopeful. I remain a hopeful pessimist; I think that is my persona. I worry about a lot of things in the future, but I wake up every morning knowing that good people wake up alongside me and want to make a difference in the world. So I’m truly hopeful that AI can essentially help bring people together. I think once we really set parameters around how to use it well, it can be used to bring people into closer community and connection. And, in fact, I think it’s the only scalable way to reverse declines in generosity. So I remain entirely optimistic that AI holds the key to a more generous future.