Ready, Test, Go. brought to you by Applause // Episode 24

Generative AI: Hysteria vs. Reality

 

About This Episode

Ben Van Roo, CEO of Yurts, discusses the hype and reality of generative AI, focusing on its enterprise applications and the need for grounded strategies.

Special Guest

Ben Van Roo
Ben Van Roo is the CEO of Yurts, a generative AI integration platform. With a background in AI, national security, policy and data science, he brings a unique perspective to the practical applications of AI in both the public and private sectors.

Transcript

(This transcript has been edited for brevity.)

DAVID CARTY: Ben Van Roo didn’t necessarily set out to become the CEO of a company. In fact, he was an Air Force brat from a military family. And as you might imagine, growing up in the ’80s with a pilot for a dad, he saw some pretty explosive things.

BEN VAN ROO: I mean, I think all kids, when they grow up, only know the environment and the situation in life that they're in. I was the youngest of three boys in a military family. My father was a pilot for the Air Force and flew a couple of different aircraft, but the A-10 was one that I knew he flew. All I knew growing up was a community that was military-based, and it was the '80s. And in the '80s, Air Force guys could do fun things. I remember often going to ranges and watching them strafe things and blow up stuff, and show us videos of them targeting semis on the interstate as practice. It was just kind of an interesting time that shaped me years later, but it was all I knew as a kid. A funny story that was particularly interesting: occasionally, when you have a military pilot as a family member, they will organize flyovers. So we would have a seventh-grade marching parade, and all of a sudden these aircraft would come zooming over our heads. I thought this was normal at the time, but we went deer hunting, because I grew up in Wisconsin, and on one particular occasion, two A-10s flew a hundred feet, or a couple hundred feet, directly over a cornfield, potentially to kick out some deer so we could shoot them. And that seemed like a perfectly normal thing.

CARTY: While he did not serve in the Armed Forces, Ben did work with the Department of Defense all over the world: in Iraq, Afghanistan, Japan, and elsewhere. Those experiences opened him up to new cultures, which inform his perspective on collaboration, innovation, and product development today.

VAN ROO: I grew up in a world where you understood that. There are some really nice cultural aspects that I think are trained into the military, and into the families around it, in terms of discipline and respect and humility, and I always really enjoyed that. What's been interesting in my career is that, as my life progressed, I was the only male Van Roo who did not join the military and did not serve. And yet my whole life has kind of been shaped by it. Working with the Department of Defense, understanding that culture a little bit, I spent a good majority of my career traveling around different installations, domestic and international. I know you talk to people a lot in the UX community and about customer journeys. One thing that's been beneficial to me is that, while I haven't served, and I would never claim to really understand those things, you get the culture a little bit more when you've grown up around it. So when you're engaging with customers and potential customers, there are subtleties in how they do things that you get. I've always enjoyed being around it and found it to be an interesting asset. At the end of the day, when we build software and technology, we're all trying to really understand the pain points of who we're serving, respect those pain points, and shape and craft whatever we're building around how we want to meet those needs. For the most part, we work with really large organizations, whether they're parts of the Department of Defense, in the commercial sector, or in legacy, older, antiquated industries. So you have to understand: what environment are they in, and what mission are they serving? Is it procurement for a Fortune 500 company, or someone being deployed into an austere environment? No different than a consumer application or anything else, you really want to understand what's going to shape this person's experience when they're using my product or service.

CARTY: Ben still applies lessons learned from his childhood, and he hopes his children will apply some of those same tenets in our rapidly evolving world.

VAN ROO: Yeah, well, we talk about artificial intelligence now and where it's going; it's part of my job. And I question a lot of what you and I probably wonder as parents of younger children: what their experience is ultimately going to be, relative to the probably smaller difference between us and our fathers. What I'm pretty grateful for is that for a lot of my upbringing, there was a good, strong sense of community. The idea of serving a greater purpose than yourself, serving your family, your community, your country, was something to be proud of, to want to achieve, and to be part of who you are, your DNA and your personality. So I hope I can pass some of that on to my kids: wanting to be engaged in some type of service activity, whether they want to be career professionals or doctors, or want to join the military or a civic service. I think there's a lot of work that we can continue to do as a society and as a country, and I think we need some of our best and brightest spending tours doing that.

CARTY: This is the Ready, Test, Go. podcast brought to you by Applause. I’m David Carty.

Today's guest is Air Force brat and CEO of Yurts, Ben Van Roo. Contrary to the name of Ben's company, Yurts does not provide portable round dwellings for consumers, but rather a generative AI integration platform for enterprises. His career spans AI, national security and policy, and data science, so he's very clued in on the technology concerns of professionals in both the private and public sectors. There's a lot of hype around AI and LLMs, and there's a lot of worry, too. But where's the reality? And how can organizations solidify their strategies into something useful and profitable?

OK, Ben, let’s start out strong. Can you share one or two real world examples of AI hysteria or hype that kind of made you shake your head? And by the way, I’ll give you the out here. Feel free to change names to protect the innocent on this one.

VAN ROO: Well, I'd say you can't scroll through a LinkedIn feed for 10 minutes on any given day without seeing it. It's a good question.

I don't want to sound totally cynical, because I'm also really optimistic about where this stuff is going. But I think it's fair to say we are at such a weird, funny, unique moment in time with artificial intelligence. On the one hand, a lot of what we see manifesting today in terms of large language models and computer vision models and progress is, I would almost argue, totally predictable if you actually look at the trajectory of what was happening over the last 10 or 15 years. I've been properly affiliated with natural language processing and what are now known as large language models for about 10 years, and you could see this stuff coming. And yet no one understood it. Then there was this explosion period when ChatGPT came out, and now the entire world's eyes are open.

And so it's very, very nascent. In some ways it's actually been a long build, but on the other hand it's very nascent in that this is something people feel is fundamentally different. Part of the reason is that up to this point, when you were building something like what we see now, you had to put a lot of energy into training a model to do some sequence of minute tasks, and then it got better and better. Now we have this really big, explosive thing that can do, maybe not anything, but a lot of things, just not very well. So we're in this moment where the entire world is saying, oh my goodness, this is coming. People are scared, people are excited, and people are trying to monetize it in very different ways. What I struggle with is seeing the two extremes coexist, and keeping myself grounded. It is true, and it's great, that people are trying to experiment with artificial intelligence and understand how it can shape their lives or their businesses, their bottom line, et cetera. But when you're pretty deep in the field, you know what it can and cannot do. And then you see feed after feed after feed of, oh, we are going and doing this. And it's not true, because it's two people and kind of a demo wrapper around GPT-4. And that doesn't make any sense at all.

A really big area where I see the hype train hitting a giant cliff is with large enterprises and expectations around return on investment. When you think about grounding this podcast in quality and capabilities, in how we're building products, what it means is: you have this very large language model that can do some stuff a little bit well, but not a lot of things particularly well. And it's not actually tied to anything in the workflows, the day-in and day-out of what people need to do to get their job done.

So there's this huge disconnect. I see it all the time with really large enterprises we talk to. They say, hey, we've got a couple of open-source tools and a model; we're going to go deliver $5 billion to our company's bottom line. It's like, no, you're not. Not at all. It's not touching any of the systems that matter. It's not touching your old ERP. It's not touching your HRIS. It's not touching your transactional business, how you do procurement or whatever. It's just a couple of demos of, hey, I can ask questions against a PDF. So the hype that I see is the gap between expected return on investment and how these systems are actually architected at big companies. That's the biggest thing. And in six to 12 months, when people are hearing, oh, AI's going into a winter, the hype train is crashing, it's because it's not touching the things that actually matter, that help us run our day-to-day business. As soon as companies can really make these tools a part of how they actually get work done, at much deeper levels, that's going to change. That's going to change the game for lots of companies. But we're not there yet. And so we focus a lot on that.
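(To make the "demo wrapper" point above concrete, here is a minimal sketch of the kind of "ask questions against a PDF" demo Ben describes. It rests on assumptions: `call_llm` stands in for any hosted model API and `extract_text` for any PDF parser; neither is a specific vendor's SDK.)

```python
# A minimal sketch of the "chat with a PDF" demo pattern: extract text,
# stuff it into one prompt, call a model. Note what it never touches:
# no ERP, no HRIS, no transactional systems -- just a prompt wrapper.

def call_llm(prompt: str) -> str:
    """Placeholder for a hosted LLM call; wire up a model provider here."""
    raise NotImplementedError

def extract_text(pdf_path: str) -> str:
    """Placeholder for PDF text extraction; wire up a PDF parser here."""
    raise NotImplementedError

def ask_pdf(pdf_path: str, question: str) -> str:
    """The whole "product": one prompt wrapped around extracted text."""
    document = extract_text(pdf_path)
    prompt = (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{document}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```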

CARTY: Right, right. And on expectation setting, one thing that we hear a lot is that a general-purpose LLM is typically around the sophistication of, say, a college graduate, which is great in that it can cover all of these tasks. But there's a reason why it's called general purpose: it can do a lot of things sort of well, but maybe you can't get to that next level of depth.

VAN ROO: Yeah.

CARTY: But just to follow up on what you're saying: you sort of cautioned against taking the bait on flashy AI-related announcements or high-profile contracts, and instead prefer a more grounded approach to an AI roadmap. So it really sounds like there's a fool's gold element to some of what we see in the space. How can organizations avoid these sorts of missteps?

VAN ROO: Whether it's fool's gold or not, maybe there's a good use case that someone has; a lot of it is proper alignment of expectations versus achievable value. If you want to automate your sales team's emails, you could get some value out of that, sure. That's great. If you want to help produce SEO blogs, you can get value out of that. If you want to improve enterprise search, there's value in that. But, OK, we're going to install this to chat with my HR documents? That's not going to change the trajectory of your company five-fold. And that's where I think the biggest misalignment in expectations is.

But then from a technology standpoint, again, think about where quality comes in: it means you have people engaging with an experience and a technology around the things they care about, to achieve the tasks they want done. Joseph Juran would probably go back and say there's planning and designing the product, there's being able to improve the product over time, and there's basically being able to ensure fitness for use: this product is well aligned to what I'm trying to do. We're not really at that stage right now, where there are enough examples of, you sit in your day-to-day workflow using 10 to 15 applications and this understands what you're doing across them.

It's going to have the right data, the right access, the right systems. That stuff is all coming. But it's still an alignment issue: we can only do surface-level things right now to take a general model, apply it to your tasks at hand, and provide value. There's a bunch of really fun stuff happening when you do have the right access to your data and you want to iterate with models and ask questions. For example, I wrote a long document recently for a contract, and then I used our system to interrogate it and tell me where there were gaps. That stuff is kind of fun. It's not perfect, but boy, we're all moving in that direction, starting to have new ways to interact with this artificial intelligence. As we explore the infrastructure, that's what we focus on: the ability to connect to it and make it really useful wherever you are, and to start to really explore, with even your audience, the experience of how you engage these types of systems. It's not all going to be just chat. There are new ways to plug it into existing applications, and new ways to have it go out, ask questions, and serve more of an interactive engagement. That stuff is not going to be fixed in 2024, but it's coming really, really fast, and it's very exciting.
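(A hedged sketch of the document-interrogation workflow Ben just described, under the same assumption that `call_llm` is a placeholder for any hosted model API; the requirement-by-requirement loop is an illustrative design, not Yurts' actual system.)

```python
# A sketch of "interrogate my draft and tell me where the gaps are".
# Checking one requirement per call keeps each prompt small and makes
# the output reviewable requirement by requirement.

def call_llm(prompt: str) -> str:
    """Placeholder for any hosted LLM API."""
    raise NotImplementedError

def find_gaps(draft: str, requirements: list[str]) -> dict[str, str]:
    """Ask the model, requirement by requirement, where the draft falls short."""
    gaps = {}
    for req in requirements:
        prompt = (
            "You are reviewing a contract draft.\n"
            f"Requirement: {req}\n\n"
            f"Draft:\n{draft}\n\n"
            "Does the draft address this requirement? If not, "
            "describe what is missing in one sentence."
        )
        gaps[req] = call_llm(prompt)
    return gaps
```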

CARTY: So if we're placing an emphasis on realistic expectations and measurable success when it comes to AI implementation, and given all of these different types of interactions and how they're going to evolve, from a digital quality and delivery standpoint, how can organizations establish the mechanisms they need to measure progress and success with AI?

VAN ROO: Yeah, that's a really interesting challenge, given what we've talked about earlier. There's so much noise right now about who can do what. And it's really strange when you hear people in your ecosystem talking about traditional metrics and scores around algorithmic performance, and you're like, whoa, you've never engaged with that before. It's exciting, but it can also be kind of noisy in and of itself.

I think there are a few ways companies can think about this. Open-source tools and smaller point solutions can be really good vehicles to experiment with, and you set the expectation: we don't know what we don't know as a company. Or personally, just trying other types of tools to see what's interesting to you and how it impacts your life. I see people and companies doing that. They're saying, hey, a couple of people are going to try this, and we think this is a good use case. And I think that's great. But that's not enterprise-grade software; those aren't enterprise-grade workflows. It's plugged in on someone's laptop and it kind of works, and now it's, OK, we've proved it out. Now what?

The other side of the spectrum is really large organizations that want to say, hey, we're going to spend 12 months documenting every single process and then do this really large, methodical rollout. And that's not without its disadvantages either. So again, what I'd focus on is: if you're going to choose one or the other, set your timelines and expectations accordingly. For larger rollouts, you need professional-grade tools and systems, and companies to engage that can actually do that. And I do see a lot of people selling a lot of consulting services and a lot of GPUs around this. Plan for those costs, and really ask: OK, is this going to drive the return on investment? On the experimentation side, it goes the other way. A lot of people can get really interesting proof of value. And when your audience thinks about building experiences and building product, and how you do testing and iteration, this is very much like that. You can have people try these use cases and say, hey, I think we can ask some questions here, maybe plug it in there, maybe automate some of these workflows. That's a very good way to experiment, but it's not production-grade software.

So a lot of what companies are doing right now amounts to running a UX testing and experimentation platform, and I think there's a lot of interesting value in that, because in the long run, all of this is going to change. But where the disconnect exists right now in industry is that there's a huge crunch to deploy these systems at scale and make everything automatically better, and yet very few companies are architected in a way to take advantage of that.
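(One way to ground "measure progress" from the exchange above: a minimal task-level evaluation harness run over in-house test cases. The cases and the keyword check are illustrative assumptions, not a standard benchmark, and `call_llm` is again a placeholder for any model API.)

```python
# A minimal sketch of task-level measurement for an LLM experiment:
# run the model over a handful of in-house cases and score whether each
# answer mentions the facts a correct answer must contain.

def call_llm(prompt: str) -> str:
    """Placeholder for any hosted LLM API."""
    raise NotImplementedError

# (question, facts a correct answer must mention) -- illustrative only
TEST_CASES = [
    ("What is our PTO carryover policy?", ["5 days", "december 31"]),
    ("Who approves purchases over $10k?", ["vp of finance"]),
]

def score(cases) -> float:
    """Fraction of test cases whose answer mentions every required fact."""
    hits = 0
    for question, facts in cases:
        answer = call_llm(question).lower()
        if all(fact in answer for fact in facts):
            hits += 1
    return hits / len(cases)
```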

CARTY: So we're talking about a lot of experimentation and a lot of tasks, and these are very different from fulfilling the responsibilities of somebody in the workforce. There's obviously a lot of concern about how LLMs and gen AI in particular can affect the white-collar workforce, and I know you've written about that. Given that we are applying a very grounded, very level-headed lens to this discussion, how do you see these technologies affecting the workforce of testers, developers, marketers, whoever it may be, over the long run? And what impact might that have on digital products from a customer perspective?

VAN ROO: I think in the super short term, this year, next year, it’s not going to feel catastrophic. I think people are still going to do work in general, the way we do. But slowly, I think the tools will be refined, models will be better plugged into data or a little bit more fine-tuned to customers’ individual workflows and how people are getting their work done.

In the midterm, there are a bunch of really interesting opportunities for people to take advantage of tools, differentiate themselves, and remove some of the work they have to do today, like people doing analysis on subjects. Even if someone wanted to learn about your podcast: with some of our tools, or with Perplexity, they can just ask. Say, write me a whole backlog of what's happening and what people have talked about, and in seconds get a pretty good answer. That's pretty game-changing for certain groups. So what we have to assume, to me, and again, I'm in this space, so I'm obviously super conflicted and biased, is that the cat's out of the bag. This is very much just going to be part of how we get work done, whether it's in education and how we learn, or how we actually execute against work.

There are some real questions around how we still train ourselves to spot mistakes, and how we train ourselves to engage with an AI and add a bunch of interesting value on top of it. In the longer term, I think there will be some industries and types of roles that will be dramatically changed, and some areas certainly displaced, because some of the activities that required people to do them just will not be there, period. And by long-term, I mean five to 10 years, not 50 years. Some of this stuff will come really soon. I don't know how to explain it other than, having been around natural language processing for, let's say, 10 years: a month of progress now is what a quarter to half a year was maybe three or four years ago. It just feels like it's screaming forward. And yet there's still a long way to go before we achieve this very strange AGI. I'm not a big believer in the one model overlord or whatever, but I think there will be real shifts in how the labor market behaves and what we're going to be asked to do. I also think a lot of it will not be a negative thing for us.

CARTY: Right. And there’s so much financial and intellectual power being thrown this way these days, so that makes sense. How important is human oversight and validation in AI-infused systems and outcomes?

VAN ROO: I mean, it's still really important now. There's oversight at the regulatory level, and then there's oversight as low as: I'm writing something and I'm getting something back. In general, with everything I get back from an AI system, and we have a very highly performant RAG system that we've written about, and so on, you still need to read it and understand what to trust and not trust, how to understand references, and how to engage and further interrogate whatever's being produced. I don't think we should trust these things completely today. But there's a lot of the community, and a lot of the regulatory side, that has taken that idea of not trusting to a bit of an extreme, where people say, oh, this is all bogus, it's all hallucinations. And I think actually, a lot of people who use these tools will say, no, there's some real value here.

And again, you guys think about how people engage with tools and have experiences, and people's intuition is not wrong. If they're finding, yeah, this isn't perfect, but it can help me get 70% to 80% of the way there, then we take that for what it is. And that's still valuable. We work with large enterprises, major scientific enterprises, the Department of Defense, and people can rail about, hey, we can't use any of these models here, we can't trust anything. But they can't find anything in their enterprise data stores; they don't even know where it is. So maybe we can use some of these tools to help with that. We're not saying it's going to be the overlord, and we're not going to plug it into everything that's hypersensitive.

But maybe we should also be realistic that artificial intelligence, in some way, shape, or form, has been building for a very long period of time, and some of these tools are actually super, super helpful. So when I think about the lower-level trust and the regulatory aspects of it all, I think we do need to make sure that people don't take these things verbatim, as the end-all, be-all. And there is training and understanding and process that still requires decision support, logic, and the ability to frame problems.
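(A minimal sketch of the verify-the-references pattern Ben describes: answer only from retrieved passages, and hand the passage ids back so a human can check them. The word-overlap retriever is a toy stand-in; a production RAG system would use embeddings and a vector index, and `call_llm` remains a placeholder.)

```python
# A sketch of retrieval-augmented answering that keeps a human in the loop
# by returning the source passages alongside the model's answer.

def call_llm(prompt: str) -> str:
    """Placeholder for any hosted LLM API."""
    raise NotImplementedError

def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[str]:
    """Toy retriever: rank documents by shared words with the query."""
    q = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc_id: -len(q & set(corpus[doc_id].lower().split())),
    )[:k]

def answer_with_sources(query: str, corpus: dict[str, str]) -> tuple[str, list[str]]:
    """Answer from retrieved passages and return their ids for review."""
    doc_ids = retrieve(query, corpus)
    context = "\n\n".join(f"[{d}] {corpus[d]}" for d in doc_ids)
    prompt = (
        "Answer using only the passages below, citing passage ids in "
        f"brackets.\n\n{context}\n\nQuestion: {query}"
    )
    # Returning doc_ids is what lets a reader check references instead
    # of trusting the output verbatim.
    return call_llm(prompt), doc_ids
```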

But with the regulatory bodies in the government, I guess I worry a bit more; I'm a little more skeptical. I do think that some of the regulations being floated around 100% support only a handful of companies, and I buy into the idea that that stifles innovation a bit. It stifles the development of where these things can go, when we don't totally know yet. There are real challenges that underpin what we may do as a country or as a state. The State of California has some real hooks around regulation that may influence development, and the companies that develop and build models from scratch. It doesn't necessarily impact me quite as much, but it still can a bit. My skepticism is that the entire industry is moving so fast, and our federal government doesn't necessarily execute well, period. There's not enough information to say that the boundaries they're going to try to float make any sense, or that they don't potentially cede a lot of the advantages we have in this country to be very, very good in this space. So at the personal level, I think people should trust, be skeptical, engage, learn where it works, and learn where it doesn't. At the government level, I spend a decent amount of time tracking where this stuff is going, and I'm hopeful we can make progress there, but I'm a little nervous that we might go a little overboard, because we've all seen The Terminator one too many times.

CARTY: You always wonder when Skynet is going to get brought up in these conversations.

VAN ROO: Always does.

CARTY: We can’t predict where regulation is going to go, but you’ve worked in the public sector and private sector. You’ve worked with public sector and private sector clients in the past. As it applies to Gen AI, we can probably expect the public sector to emphasize security, privacy, compliance, things like that. And the private sector is going to focus a little more on flexibility, scalability, efficiency, profitability, things like that. How do these ideas compare and contrast? And what can one side learn from the other in this Gen AI journey?

VAN ROO: Yeah, I mean, I get asked this a lot: we're working in Gen AI, a very hot field, moving very fast. Why do that and work with the public sector? People say, that doesn't mesh. What are you doing? What I'd say is, I think there's actually been a lot of thought and experimentation and work in the public sector in trying to understand quantitative models and simulations, then basic automation, then early traditional artificial intelligence, and now this branch. They've been using models since the '40s or '50s to help make decisions, period. It's math, it's compute, and it's progressed. So I think some of the more thoughtful conversations happening in artificial intelligence are actually happening in places like the Department of Defense and the Department of Energy, where there's a very healthy skepticism of, this technology isn't ready for x, but also really good dialogue of, but it's certainly useful for y and z.

But then to your point, that's wrapped in: sure, OK, it's useful there, but then it's got to go through all this rigmarole of privacy and security and being deployed in their environment, and you have to have a lot of documentation and auditability and traceability. The benefit to us as a company is that it's forced us to build the tooling and the capabilities to deploy this type of new technology but have it under wraps in lots of dimensions: from a cost standpoint, from an operational standpoint, from an access-control standpoint, to an auditability standpoint.

So the burden that's been on us is to make sure that, all right, we're going to use this in a prime-time setting. In the commercial world, what we've seen is all this random innovation, whether it's, hey, build your own avatar, or do this or that. There are certainly some groups profiting from that, but they also sometimes have to deal with the underbelly of the commercial world using these types of technologies for harm or whatever. So where do these groups meet in the middle? I would like to see the enterprise and the commercial world take a page out of the public sector's book sometimes. But I also think the pace of innovation is still really exciting when you have the gloves off a little bit. People can do bad things whether they're using Gen AI or a computer and a basic script. The long and the short of it is: the public sector is actually moving pretty quickly and doing some really thoughtful stuff, but they're doing it in a more systematic way. And I still don't know if we totally understand how we can change our economy using this type of technology. So I kind of like that the gloves are a little bit off in the commercial sector. I don't know that I would change it a ton.
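(A minimal sketch of the auditability and traceability wrapper that public-sector deployments push you toward, as Ben describes: every model call is logged with who asked, what was asked, and what came back. The field names and JSONL log are illustrative assumptions, not a compliance standard.)

```python
import json
import time

def call_llm(prompt: str) -> str:
    """Placeholder for any hosted LLM API."""
    raise NotImplementedError

def audited_call(user_id: str, prompt: str,
                 log_path: str = "llm_audit.jsonl") -> str:
    """Run a model call and append a traceable record to a JSONL log."""
    response = call_llm(prompt)
    record = {
        "ts": time.time(),     # when the call happened
        "user": user_id,       # who asked
        "prompt": prompt,      # what was asked
        "response": response,  # what the model returned
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```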

CARTY: You mentioned the DOD. I know I’ve seen a presentation where they discussed making use of a task force to help establish some of those concrete business outcomes, find the right ethical and reliable uses for AI, complete risk assessments, and remove some of those internal barriers to adoption. So if we’re thinking about the way all of this works, how important is it to establish a center of excellence or a task force or a tiger team or whatever you want to call it, to define and redefine those AI practices over time?

VAN ROO: Yeah, I think it's tied a little bit to the size of the organization you're talking about. Obviously, the United States government is quite big, and the Department of Defense is very big, so you have many organizations and subgroups that live beneath that. A task force can be a really good way to create common language and ground, and to collect use cases. With some of the task forces you've seen, when I talk to them, it's as much that they ask me: what are you doing? What are you doing, and for whom? They're trying to organize and understand what's out there and get a really good pulse on the activities that are happening, because it's very much like collecting user feedback as a product or UX person. You're getting that feedback and trying to understand, OK, this person or this group is finding value here. You rinse and repeat, and pretty soon you've got an ICP. I don't think it's that much different for a task force. Where I think it gets different is when you organize the task force not for information purposes and collection, but to say, no, this is the authority that makes the decisions around how AI is used. That can be kind of a mixed bag, because two people over here may have developed something really cool, and maybe we should let that bake a little bit before we select it, say yay or nay, and have to bring in some big consulting company to scale it up or not.

So I think they're useful for very large organizations that are trying to coalesce what's happening. I'm not a huge believer that they should be the deciders of who gets what. There's a question of responsibility and where they fit that, again, I'd be thoughtful about, depending on how much adoption you want to push from the ground up versus through a top-down decision-making process.

CARTY: Lightning round here for you, Ben. First question. What is your definition of digital quality?

VAN ROO: Well, I'm an operations research guy, so digital quality to me goes back to what quality is, and that's really fitness for use. Are you designing, building, maintaining, and improving a product that is task-specific for someone? That can mean a very pure experience, or it can be very tactical: it is accomplishing my need. In the artificial intelligence space, we're in inning zero of really getting this right and nailing these digital quality experiences. It's a very exciting time.

CARTY: The pitchers are still warming up, but we’ll know more soon, I guess, right?

VAN ROO: Yes.

CARTY: What is one digital quality trend that you find promising?

VAN ROO: The whole agent-based world of artificial intelligence is very intriguing. It is far from baked. I would encourage people to not think that these agents are going to go off and do everything. But I very much like the idea of having slightly more complex systems and tooling help us accomplish more tasks and chains of tasks. I think it is a very promising space. But again, it’s a very early space.
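(A minimal sketch of the "chains of tasks" idea behind agent-based systems: the model proposes a step, a small whitelist of tools executes it, and the result feeds back until the model says it is done. The tool names, the "DONE" convention, and the step limit are illustrative assumptions; `call_llm` is still a placeholder for any model API.)

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any hosted LLM API."""
    raise NotImplementedError

# Whitelisted tools the agent may invoke; stubs for illustration only.
TOOLS = {
    "search": lambda arg: f"(search results for {arg!r})",
    "summarize": lambda arg: f"(summary of {arg!r})",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Let the model chain tool calls toward a goal, one step at a time."""
    history = f"Goal: {goal}"
    for _ in range(max_steps):
        step = call_llm(
            f"{history}\n\nReply 'tool_name: argument' for the next step, "
            "or 'DONE: final answer' if finished."
        )
        if step.startswith("DONE:"):
            return step[len("DONE:"):].strip()
        name, _, arg = step.partition(":")
        tool = TOOLS.get(name.strip())
        result = tool(arg.strip()) if tool else "unknown tool"
        history += f"\n{step}\nResult: {result}"
    return "stopped: step limit reached"  # guardrail against runaway loops
```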

CARTY: What is your favorite app to use in your downtime?

VAN ROO: Well, I have a new company, a four-year-old, and a two-year-old, so there's not a lot of time for apps. I'm probably a bit of a news junkie, so I like that. I play a lot with the different competitive products in the AI space. I love playing with ChatGPT on my phone, and Perplexity, and seeing the weird edge cases of wherever I am in life, trying tools like mine or others, and seeing what these tools come up with when I ask them questions or try certain scenarios.

CARTY: You know what it's like, Ben? It's like the Wikipedia rabbit hole, when you keep clicking and keep clicking and keep learning, and then all of a sudden you end up on a page and you don't know how you got there, right? It's exactly like that.

VAN ROO: Yeah. I mean, that's a little more wholesome than doomscrolling on Twitter in the middle of the night, so yeah. Exploring my surroundings and experience by way of some of these new tools helps me think about customer experiences that may or may not hit me in the head in my day-to-day, and I think that's a lot of fun.

CARTY: Yeah, absolutely. And finally, what is something that you are hopeful for?

VAN ROO: I'm hopeful that in the next couple of years, we're going to be able to look a little bit past the hype of what artificial intelligence may be and ground ourselves in what it might be in the midterm. So much of this, and I'm grateful to be here today, is actually going to be based on really thoughtful, experienced leaders and product leaders figuring out how we interact with this new thing: when to trust it, how to engage with it, and the types of things we can solve together. It goes so far beyond the machine learning engineers; it's so much more on designers and product leaders and experience leaders to push. So I'm very hopeful that this listening community will get that right. And in the long term, I'm hopeful that we're going to find our place with this; humanity is going to find that it won't all be perfect, but it will be better than it was before, by having these types of tools and capabilities shape the next several hundred years of our future.