Listen in as Andy Sack, co-founder and co-CEO of Forum3, dives into what it means to build an AI-first organization, including insights from his conversations with AI luminaries like Sam Altman and Bill Gates.
Building an AI-First Business
About This Episode
Special Guest
Transcript
(This transcript has been edited for brevity.)
DAVID CARTY: You’ve gotta know when to hold them and know when to fold them. Andy Sack sure does. He’s a poker aficionado and enjoys using his ability to read the competition to pull in big pots.
ANDY SACK: I started playing poker in, I think, two thousand three. It’s probably a good thing I didn’t discover the game in the nineties. I was playing with other entrepreneurs, and it was just a great social game. And I think what attracted me to it was the combination of math, psychology, and a faster feedback loop. At the time, I was about to become a venture capitalist, and, you know, venture capital is a form of poker. Its time horizons are much, much longer, so the feedback loops are much slower.
I thought I could be a winning poker player. Sometimes I am, and sometimes I’m not. I have a good read of people, the game, and strategy, and sometimes that’s good and consistent, but it’s hard to be in that zone. It takes a lot of focus and energy to actually be in the zone and do that consistently. And then, you know, luck gets in the way. Sometimes it’s better to be lucky than good.
CARTY: Andy doesn’t participate in poker tournaments anymore. Rather, he enjoys the routine of his regular Thursday night games, where he and his fellow players choose the formats and the competitors that work best for them.
SACK: I no longer play no-limit Texas Hold ’Em. I have a regular home game we typically play on Thursday nights, and our rotation is PLO, Big O, which is five-card high-low, and short deck. And I don’t play any tournaments. Zero tournaments. I learned very early in playing poker that tournaments were not my thing. I like cash games. I like to be able to buy in, win or lose. If I lose, I reach into my pocket and can take out more money, and I like to be able to leave the table whenever I want.
Yeah. I mean, it’s a great recreational game on Thursdays. I love going to Vegas. Like, when my wife lets me, I’ll go for the weekend, and I’ll be in the poker room from nine in the morning until, you know, ten or eleven at night.
CARTY: In a unique way, poker can unify people from different walks of life, different backgrounds, cultures, priorities. I guess you could say it brings people to the table. Well, literally.
SACK: I really do enjoy the table talk, particularly in Las Vegas. Like, there’s a lot of funny stories. It’s a unique way to meet guys from all walks of life, and I really enjoy that. I wouldn’t mind playing one of the larger games, you know, going to the Triton series or something like that. I just love the game.
CARTY: This is the Ready Test, Go. podcast brought to you by Applause. I’m David Carty.
Today’s guest is rounder and AI transformation enthusiast Andy Sack. Andy is an entrepreneur and investor with over 25 years of experience across technology, finance, and venture capital. He is the co-founder and co-CEO of Forum3, an agency helping brands embrace AI and emerging technologies. He is also the co-author of the book AI First: The Playbook for a Future-Proof Business and Brand, which came out in June.
For the book, Sack and his co-author Adam Brotman spoke with Sam Altman, Bill Gates, and Reid Hoffman, among others, about the transformation to a new way of working. But what does wide-scale, aggressive AI adoption look like at the enterprise level, and what risks does it present? I suppose I could ask AI first, but let’s not keep Andy waiting.
First, Andy, congratulations on the book. Your book AI First aims to define what it really means to build an AI-first organization. From your perspective, what are the most significant changes that come from adopting that type of strategy?
SACK: The most significant change is basically higher productivity. It often initially results in lower cost. We’ve seen it with the companies that really adopt an AI-first approach. It’s a process. It takes six months, sometimes a year, depending on the size and complexity of the organization, to make a real AI-first transition. And those companies that go AI first see the financial implications of that transition, i.e., higher profit and faster growth, within twelve months. The book itself is framed in two parts.
The first part is a series of interviews with a lot of the thought leaders, the people who are making the models: Bill Gates, Sam Altman, Reid Hoffman, Mustafa Suleyman. And while we did it, it was the first serial book that Harvard Business Press has ever done. We’d do an interview, write the chapter, release it, and we invited a community and had these monthly community calls. As we went along the journey of writing the book, we got maybe halfway through, we had done six chapters or something, and people were asking us how to apply this.
And so the second part of the book is basically case studies of businesses, Moderna, Khan Academy, and Suzy, in which we found executives who had really worked to transition their businesses to be AI first, and we ended up writing a playbook that other people could follow, highlighting those case studies. Following the playbook is essential in terms of achieving those financial results I just mentioned.
CARTY: Right. You mentioned some of the tech visionaries that are part of the book. It’s really a who’s who list of AI leaders, including Sam Altman and Bill Gates, as you mentioned. Those are some pretty disruptive and influential figures in tech.
SACK: Some of the most disruptive.
CARTY: Yes, absolutely. I mean, name-brand kind of guys. Right? So obviously, they are conveying different thoughts, different opinions. But on a high level, can you give us a sense of what they had to say and how that relates to building and delivering products and influencing change with AI?
SACK: I’ll tell two vignettes, which are covered in the book, but I think they illustrate the answer to your question. The first is the meeting with Sam, which was our first interview; it’s the first chapter. The name of the chapter is The Holy Shit Moment. And a side note, I argued hard to call the book The Holy Shit Moment. Harvard Business Press won out, and we called the book AI First. In that interview, and this is November of 2023, one year after ChatGPT 3.5 had been released, we were asking him questions about marketing and creative and design. And Sam said, I don’t know much, if anything, about marketing and design, but 95% of marketing as we know it will be done by AGI within 3 to 5 years. We talked to him for an hour, and at the end of it, we left the OpenAI offices, which are in this industrial area of San Francisco, and there was a park across the street. Adam and I were like, oh my god, did he just say ninety-five percent of all marketing jobs as we know it are gonna be done by AI? And we’re like, holy shit. I gotta process that. What does that mean?

And that was eighteen months ago. As I lift my head up today, I don’t know if he’s exactly on target with the timeline, but it certainly feels like he’s more right than wrong. The pace of the technology advancement, whether it be video, with Google just announcing Veo 3, which is an incredible advancement, or deep research, which was released first by OpenAI but now all the models have it, or the reasoning models. There’s so much that’s happened since that interview, in terms of the holy shit moments within AI and the pace of them.
The second I’ll share came from the interview with Bill Gates, in which Bill was comparing this moment to an earlier one. He told the story of when, back in, I wanna say, the late ’70s or early ’80s, he went to Xerox PARC and saw basically the GUI interface on the Alto computer. And he talked about how, when he saw that, it laid out the vision for the next 10 to 15 years of what he wanted to do at Microsoft with Windows, with Office, and all the products that they released. He said it was an incredible moment when he saw that. And he said his experience of seeing ChatGPT-4, not 3.5, was as significant, if not more significant, than that moment in the late ’70s. To hear him, who’s really the grandfather of all software in the world, say that ChatGPT-4 was as significant a technological moment, if not more so, than that really puts into perspective, I think, for me and hopefully for your audience, how significant AI is as a human invention.
CARTY: Right. It’s hard to find somebody who can lend more context to that discussion than Bill Gates. That’s a very salient point. Your book offers a lot of strategies to help businesses achieve early AI wins, and the ability is there to ramp up faster than ever. Can you give us some of the benefits and drawbacks of ramping up quickly with that type of approach?
SACK: Well, when we talk to companies, we typically talk to executives and CEOs first; they approach us. And if any of your listeners are executives and want help, you can reach out to us at Forum3.com. When they reach out to us, they typically have some more significant application of AI in mind: gee, I wanna build something more significant.
In our playbook, we tell people to start with really micro applications, basically getting employees across the board comfortable with what we call micro applications or microservices. It’s about getting your employees to just increase their usage of whatever large language model. We’re a ChatGPT shop predominantly, but we use Gemini a lot too. Whether it’s Copilot, Gemini, ChatGPT, or something else, it doesn’t matter. It’s really about getting that tool distributed across the employee base, getting usage up, and getting sharing happening within the organization about what’s safe, what works, and what doesn’t, because everybody’s got different jobs and people are gonna use it in different ways. And what ends up happening is, if you just do that, let the tool free and don’t do anything else, there’s a 30% productivity lift just from that, which basically frees people up to do more work or better work or different work. So we really encourage that as a first step.
Along with that, we recommend having an AI council and making sure that there’s an AI policy. I mean, I can’t believe I’m the guy who’s telling companies to put in policies. As an entrepreneur, I’m the first person to not read those policies and to ignore them. However, when it comes to AI, it’s actually a really important document. And I’ll tell you why: with AI, there are so many tools that can be tried and used, and as an organization, you kinda need to both permit and encourage that, but put guardrails on it. That’s what the policy doc does. So those are the first steps that we recommend a company go through.
CARTY: And you can usually still get a lot of experimentation done within those guardrails, right? When you’re applying it in context across the business, you can build a lot of really interesting tools and use it for a lot of interesting purposes.
SACK: Yeah, absolutely.
CARTY: So striking that balance between innovation, performance, and quality, you know, this is something we talk about a lot on the podcast, and it’s been around in tech circles for a long time, but it seems more pressing than ever with how quickly you can put all of this into practice. How should organizations grapple with the challenge of striking that balance?
SACK: I think with AI, productivity goes up and it’s actually very easy to produce. The challenge then becomes actually reviewing and verifying, and really ensuring both that the output is what you wanted and that it’s of the quality you need. In our playbook, the first piece is basically that productivity lift for employees.
The second is, you know, when you have the AI council and you’ve identified which app or proof of concept, what area, what department, you want to tackle with a more significant AI implementation. I typically recommend choosing anywhere from one to three areas so that there isn’t so much pressure on the success or failure of any one of them. I recommend not measuring ROI too early, going with experimentation, and letting two or three proof-of-concept applications fly. It’s very easy with AI to get to, I’ll call it, the 30-yard line. It’s quite hard to get to the end zone, where it’s a high-quality application that’s launched, hosted, and supported. And it’s really that last 30 yards that, no matter what you do, I think is gonna distinguish the quality of applications, whatever they are. It’s interesting.
I probably would say that everybody’s moving too slowly. And right now, there is no risk of moving too fast. I don’t think you can over-invest in AI right now. And I say that because AI is moving so much faster than all of us. And, you know, maybe there’s some risk that by moving fast, you might get locked into a model. But the fact is you should be building knowing that the models are gonna be better tomorrow than they are today. There’s not gonna be a worse model than there is today.
And what I see in the market is basically that people know AI is a big deal, but across the board, 70 percent, you know, still think of it like a Google search replacement and don’t really understand that this is not software. This is some form of alien intelligence that has been created. It’s an amazing set of capabilities. So I think the risk is actually not moving fast enough. That’s where I come down. Now, there are lots of implications from that around quality, around risks like opening yourself up to fraud, et cetera. So I don’t wanna minimize those risks, but I think the risk is much more on the other side.
CARTY: That makes sense. How important is it to have a robust data quality and governance strategy as part of an AI first initiative? And what sort of common challenges do you see companies encounter in that area?
SACK: So I’ll say two things. I think data quality and governance is critical. It’s a critical ingredient to being AI first. And because I deal with enterprises all the time, there are a number of key areas that I see actually impede the adoption, the movement, the transformation of becoming AI first. Number one is IT gets in the way, which is kind of crazy, but IT gets in the way.
Number two, data quality and governance becomes, like, oh, we’ll do the AI transition after we do this; we’ve got to do that first. And I think it’s critical, and yet having it be a blocker on the transformation is, I think, silly, because even with totally messy data, no data governance, and tremendous amounts of unstructured data, there are tremendous amounts of productivity gains to be had, even while it makes the data governance and the data even messier.
CARTY: Now you’ve worked with some well known brands in helping them embrace AI for customer engagement in particular. How can AI play a big role in helping brands establish a high level of customer trust? And do you have a favorite success story?
SACK: We worked with one large retail chain of hair salons, and they had about 10,000 customer reviews that had been entered on their website, and they’d never done anything with that. So we’re like, hey, can you send us that data? They sent it over, and we ran it through ChatGPT, doing simple sentiment and data analysis on it. Then we used that to create a report and a presentation and sent it back to them. It was eye-opening for them: some clear things they had to fix immediately, and some clear things they had done right. They had never done it, and we did it in a day. And when I say a day, it wasn’t even a day’s worth of work; it took maybe two hours to get to the quality of outputs that we wanted.
But that, like, is the kind of thing that is sitting around at every enterprise. Everyone has these customer touchpoints where, for one reason or another, the BI team never got to it; the data is just sitting there, and nobody ever picked it up and did something with it. So I think that’s a really good illustrative example. It’s not the biggest; we have other ones where people use it for training salespeople. I mean, the uses of AI touch basically every aspect, every division of business, from the finance team to the legal team to the IT team to marketing to sales. It’s everything in business. And that’s why I think you want to really transition all of your employees and invest in their training and education.
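A minimal sketch of the kind of review-sentiment pass Andy describes, written here against the OpenAI Python API rather than the ChatGPT interface his team actually used. The model name, file name, and column name are illustrative assumptions, not details from the engagement.

```python
# Minimal sketch: batch sentiment analysis over exported customer reviews.
# Assumes OPENAI_API_KEY is set and a reviews.csv file with a "review" column;
# the file layout and model choice are hypothetical.
import csv
from collections import Counter

from openai import OpenAI

client = OpenAI()


def classify_sentiment(review_text: str) -> str:
    """Ask the model for a one-word sentiment label for a single review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "Classify the customer review as exactly one word: "
                           "positive, negative, or neutral.",
            },
            {"role": "user", "content": review_text},
        ],
    )
    return response.choices[0].message.content.strip().lower()


def summarize_reviews(path: str = "reviews.csv") -> Counter:
    """Tally sentiment labels across every review in the CSV export."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[classify_sentiment(row["review"])] += 1
    return counts


if __name__ == "__main__":
    print(summarize_reviews())
```

The tallied labels, plus a handful of representative quotes pulled the same way, are the raw material for the kind of report and presentation Andy mentions sending back to the client.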
CARTY: Alright, Andy. Let’s hit our lightning round questions here. First, what’s the most important characteristic of a high quality application?
SACK: Reliability, closely followed by ease of use.
CARTY: What should software development organizations be doing more of?
SACK: Using AI. I mean, everyone’s talking about what Claude has done for software development. Between Claude and Cursor, the development stack has completely changed.
CARTY: On the flip side, what should software development organizations be doing less of?
SACK: Traditional software development. It’s more than just relying on those tools. There’s a new way of developing software, not just vibe coding. There’s literally a new architecture, a new, faster way of developing software, and the old way actually leads to the wrong results.
CARTY: And, Andy, finally, what is something that you are hopeful for?
SACK: I’m hopeful that AI will be a force for good.
CARTY: Certainly beats the opposite, doesn’t it?
SACK: It certainly does. And it’s unclear.