
Ready, Test, Go. brought to you by Applause // Episode 38

The Craft of Engineering in the Age of Vibe Coding


About This Episode

Join Dan Vega, Developer Advocate at Broadcom and co-author of Fundamentals of Software Engineering: From Coder to Engineer, as he discusses the rise of vibe coding and why software engineering fundamentals, accountability, and testing are more critical than ever in the age of AI.

Special Guest

Dan Vega

Dan Vega is a Developer Advocate at Broadcom and co-author of Fundamentals of Software Engineering: From Coder to Engineer. He is a career software engineer and educator focused on bridging the gap from writing code to practicing true engineering.

Transcript

(This transcript has been edited for brevity.)

DAVID CARTY: You might have heard of the mistake by the lake, also known as Cleveland, Ohio, but Dan Vega is here to set the record straight. It’s a great place to live and raise a family. And don’t discount affordable housing.

DAN VEGA: I’m a big fan of Cleveland. The people, the food, the culture, and of course, our sports teams. Raising a family here is very affordable. It’s somewhat safe. Cleveland gets a bad rap on lists of dangerous places to live in America, but I’m living in the suburbs; it’s not really that dangerous. I find it just a great place to live. Great people, great culture. People make all the jokes about the mistake by the lake, and that’s why it gets a bad rap. But I think people who do come here tend to enjoy their time here.

I remember when I was young, I moved out to San Francisco. I was making almost nothing here. I got invited to join a startup in San Francisco when I was, like, 20, and I got offered $65,000. I said, wow, I’m going to be rich. This was 25 years ago. So I moved out to San Francisco and couldn’t find an apartment I could afford. I ended up moving into a house with three other people, paying maybe $800 to $1,000 a month for basically a room. And so I quickly found out that I was not going to be well-off. It’s very expensive to live out there. I loved San Francisco. I loved my time there. It was so much fun, but it was just different.

CARTY: Dan runs meetup groups for development enthusiasts in the Cleveland area. Post COVID, those meetings have largely gone virtual. He’s accepted that unfortunate reality, but there’s some concern about what we’ve lost in the process.

VEGA: There’s a great tech community here. I mean, I grew up in this community, and it was through some meetup groups and local conferences that I really got to know a lot of people. Obviously, COVID changed that for everyone, so there have been some challenges. I run a local meetup group here, and that’s been challenging. One of the big things that contributed to this is that everybody started working from home, right? Everybody got used to working from home. And the Java user group we had was in downtown Cleveland.

I used to work for a coding boot camp called Tech Elevator, and we had our meetings out of Tech Elevator downtown. And so I think people would get off of work and it’s like, oh, yeah, let me shoot over there for an hour and a half, grab some food and hang out with some nerds. Well, when you’re not working downtown anymore and you’re in the suburbs, do you want to make the trip downtown to go to this meetup? So then we toyed with the idea of whether we should move it out to the suburbs, but then you’re alienating the other side of the burbs. Do they want to come all the way out to the other side? I mean, Cleveland’s not that big, but still, it’s a trip for somebody to make.

So, yeah, it’s been really tough. We’ve done a lot of online meetups and just get-togethers, let’s-hang-out-and-go-get-some-coffee type of things. But yeah, it’s been really hard. And at a macro level, that’s just a meetup thing; I’ve noticed it at a conference level too. Here in the US, conferences are just not as well attended as they used to be. Somehow, in other parts of the world, they’re doing better, though. There’s also so much information online now, and you can get any content you want.

And also, I’m older; the younger generation is very much the TikTok generation. They want instant dopamine. Like, give me a one-minute video instead of an hour presentation where I’ve got to sit there. So I’m sure there are a bunch of contributing factors. I wish I had some answers for that, though. People have lost that sense of community, I feel like. When we were younger, we would stay out until the lights came on and hang out with friends, and I don’t see that as much anymore. So I wonder if there’s some macro-level loss of community there. You could still stay ahead; I don’t know that you’re going to fall behind. But you do lose something there, especially in our industry. Part of what we do is tech, right? We need to know how to code. Well, less these days. But part of what we do is work well with others. When we’re in an organization, we’re working with teams. That sense of being able to work as a group, those personal skills, being able to collaborate: you do lose those when you’re on a Zoom call.

CARTY: One struggle about living in Cleveland is absolutely accurate: the sports heartbreaks. Dan’s top three come to mind pretty quickly.

VEGA: Oh my gosh, there are so many. 2016 is going to be number one. The Guardians versus the Cubs, Game 7. If that thing doesn’t get rain-delayed, we’re probably champions. So that one stings the most. The Drive, The Fumble. Those stick out to me as well. So they offer a lot of heartbreaking moments. I stick with them.

CARTY: This is the Ready, Test, Go. podcast brought to you by Applause. I’m David Carty.

Today’s guest is Believeland faithful and Developer Advocate at Broadcom, Dan Vega. Dan has spent his career as a software engineer, author, and educator, helping developers bridge the gap from simply writing code to practicing true engineering. He is the co-author of the book Fundamentals of Software Engineering: From Coder to Engineer, which emphasizes judgment and clarity as core skills in a world increasingly dominated by automation and AI.

Dan joins us today to discuss the rise of vibe coding, a shift towards speed and AI generation. At best, it’s an enabler of innovation. At worst, it’s a practice that threatens to sideline the craft of development. When anyone can generate a block of code with a simple prompt, where does the real work of engineering begin, and what will organizations value about the human perspective? We’ll discuss why accountability, explainability, and humans in the loop are more critical than ever to reinforce quality in the products that we build. Let’s have some fun times in Cleveland today with Dan.

Dan, when you hear the phrase vibe coding, what immediately comes to mind for you? And what does that framing maybe oversimplify, perhaps dangerously so, about the work of software engineering?

VEGA: Yeah, there’s a really great quote from Satya Nadella, the CEO of Microsoft, who said: I think what AI does, quite frankly, is lower the floor and raise the ceiling. So we’re lowering the barrier to entry, which is good. I would love for more people to get into this industry, scratch that itch of coding, and see if it’s something that sparks a passion. So we make it easier for people to get into coding. Hey, I have this idea, but I don’t have $10,000 to go hire a team of developers to build out an MVP. I can get something up and running really quickly. Now that’s awesome.

But it’s also not like, hey, we’re going to take that same mentality into a Fortune 500 company and build enterprise software. For those who have experience, though, a lot of these tools (and I don’t know, vibe coding is kind of a catchall; I almost think of it as AI engineering), if you learn how to wield them right, you can be so much more productive. For me as a back-end developer, I love writing Spring and Java. I’m not great at front-end stuff, but I know how to get around. Trying to create a front end for a website used to take me forever, and that would really halt my progress of trying to get something out there. Now I can take those thoughts, put a front end together, and still build the full stack. So: lowering the floor for barrier to entry, raising the ceiling for those of us who have experience.

But also, if you are someone who is trying to get into this industry, I like to put a little warning on that. You don’t want to become AI-dependent, because you don’t really learn that way. The way we learned was, hey, I’m trying to get this thing to work, and I would fail and fail and fail. Scour the internet. We didn’t have YouTube back then. Scour the internet for answers, and finally get that aha moment: oh yes, now I’ve fixed it. That was a real learning moment for me, and the next time I ran across that, I didn’t make the same mistake again. So try not to use AI as a crutch, but really as a tool to get interested, to build something, to learn something new.

CARTY: Right, and that’s really interesting, because there’s this notion of how far a single person, or a very small number of people, can take AI. Can you scale that to a $1 billion company? Can you take it quite that far? We’ve talked a lot on this podcast about how AI can be an amplifier of good and an amplifier of bad, right?

VEGA: Yeah.

CARTY: And that falls in line with that same idea. So to that point, based on your conversations across the industry, where are we today with AI-assisted engineering in particular? And how fast do you expect this style of development to spread?

VEGA: Wow. So I’ve been going to a conference every year now, ConFoo, up in Montreal, and I distinctly remember being at ConFoo last year. I had some time in between sessions, a day or two, and I started playing around with Claude Code, and I actually bought a subscription to it. I had messed around with chatbots and stuff before. And I remember using Claude Code and having this “aha” moment. I’m like, “Wow, this is incredible.” That was a year ago.

Something has happened over the last couple of months. Maybe the models are getting better. Whether it’s Opus 4.5 or GPT 5.2, Claude Code has become this incredible tool. It has gone from an “aha” moment to “Wow, this is really good. It’s getting scary good.” Now, not to the point where I think people are getting displaced, but it is so good that I don’t know where we’re going. I’m getting ready to go to ConFoo in a few weeks here, and I’m trying to imagine a year from now, thinking back to where I feel right now, and I can’t even picture where we’re going to be then. Because I love the craft of writing software, but I don’t need the art of typing on a keyboard. That’s not what I enjoy doing. I enjoy building things. And if I can steer something in a direction and build something that is exciting for me, that’s what matters. I’ve taken on so many more projects that I would not have been able to do without these tools. So for me, I’m giddy with excitement about where we’re going with this.

CARTY: Yeah, it’s transformational. I mean, just how much the average person can use agents today to automate some of their workflows, it’s pretty amazing. And to that point, producing code now is easier than it’s ever been. Whether or not that code is high quality is maybe a different discussion. But with that in mind, and you’ve mentioned this in your own work, how is the real work of engineering shifting? What sorts of tasks or activities should engineers be expected to take on?

VEGA: Yeah, I think those of us who have been in software for a while, we’ve been around this idea of the software development lifecycle, right? And really, this is just amplifying that.

I hear a lot of people talking these days about spec-driven development, like, oh, you need to come up with a plan before you let your AI write code. Like, OK, this is what we’ve been doing for all time, right? We gather requirements, we figure out what we’re going to build before we actually build it.

So we understand that already; we’re in that. But I think the SDLC really becomes important, understanding requirements before we do anything. I don’t care if you’re writing code or a machine is writing code; having a plan upfront is really important. And then the other things become really important too, especially inside of enterprises. Something like a code review. A code review inside of an organization maybe used to be a quick formality: OK, let’s just go through this, let me check some boxes and make sure everything’s OK. I think it becomes more important than ever, right? If you come to a code review of mine and we’re on a team, I don’t care if you wrote the code. I don’t care if some AI agent wrote 10,000 lines of code. But you need to be able to explain to me why some of those decisions were made. What were the trade-offs? Why didn’t you do this instead of that? And as long as you can explain that and understand the code we’re going to push to production, I think we’re still in a good place.

The problem is: is everyone still going to stick to those rigorous code reviews now? And I don’t think so. Like you said, if something works, it works. If it’s a side project for me, maybe I don’t care as much, because it’s not affecting other people. But if I’m inside of an organization and we’re working on an app that brings in billions of dollars, we need more scrutiny on those code reviews and on what we’re actually putting into production.

CARTY: Yeah, and to that point, I was going to ask you about code reviews anyway. Quality checks. You’ve argued that they matter more than ever in our AI-assisted world.

VEGA: Yeah.

CARTY: So what’s the process for implementing those practices without slowing things down terribly, right? Because the whole boom toward AI-assisted coding and development is speed. Speed, ease of use. So how do we implement those checks without grinding everything to a halt?

VEGA: Coming back to some of our fundamentals of software engineering, right? When I work on a feature, whether I’m working on it or some AI is working on it, I’m working in small atomic commits. I’m working on a feature that is small and standalone; I’m not trying to build out an entire sprint of work. When we’re in an organization, we usually have sprints. If we’re working in agile, we break these things down into small features. And if I’m working on a small feature, then I can have small atomic commits that make sense with it. I remember being a team lead, getting code reviews that were 10,000 lines long and hundreds of files, and I would immediately send them back and say, no, no sane person is going to review this code. You have to break this down into small enough features so that when we do get into a code review, we can talk about that code.

So I think it starts there: having a plan, putting it into right-sized features, small atomic commits, feature branches so that we can test and isolate them. Again, the things we’ve been doing in the software world for a long time now.

CARTY: Yeah, that makes sense. To get back to the accountability piece, in systems that are heavily shaped by AI-assisted or generated code, how should engineers think about ownership and accountability, and where does that responsibility ultimately sit?

VEGA: Yeah, I mean, it’s your code, whether you physically typed it out or directed someone or something to do it, right? If you were in an IDE before AI, there were a lot of times you could use features of the IDE to refactor code, right? And I think we’re just in that era at a much higher scale.

So whether it was then or now, you still own, and are still responsible for, what goes in. And again, if you are just defaulting to accepting everything, then you’re not really learning and you’re not really understanding what’s going into production. So if you take ownership of it and you understand the code, you may not understand every single line, but at a high level you can say: I get what we’re doing here.

When things break, and they ultimately will, you’ll understand how to fix them. If you just throw stuff at the wall and push it to production, well, I think we’re seeing this now: there’s probably a lot of code out in the enterprise that is not working all that great, and then it becomes much harder to debug and fix. So yeah, you take ownership, whether you wrote it, directed it, orchestrated it, however you want to call it these days.

CARTY: Absolutely, and I’m a believer that we’re just starting to grapple with the risks of some of this technology. I mean, it’s very persuasive. And it is eye-opening, to your point earlier. But there is a lot of risk there when you consider you’re building all of these smaller software components, essentially, and when you think about how much documentation went into that in the past.

VEGA: Yeah.

CARTY: Now anybody can just build those pieces with no documentation, no review. And that’s a tough concept. So along those lines, what real-world quality issues, from a user’s perspective, can we expect to see when explainability and accountability are lacking?

VEGA: It depends on who is orchestrating these agents. If it’s somebody who doesn’t know what they’re doing, then yeah, maybe they don’t understand that documentation and tests are important. But somebody who’s been doing this long enough understands that, hey, we’re just getting more productive here. We’re raising the ceiling. Yes, I’m building out this feature, but it’s not enough to just build something, see it work, and be done with it.

Tests have always been really great. For me, they provide this safety net, and this confidence as a developer that what I’m building is not going to break, and more importantly, that what I’m building is not going to break the rest of the system. So I think those things are important, and they get a little bit easier to do now, whereas they might have been a hurdle in the past. Maybe people don’t know how to write tests effectively. I’ve also been in organizations where it’s, we don’t have time to write tests; we have one week to get this feature out; I don’t care about your tests. So I think those things become more important.

And I think it’s on engineers in the organization to push back on this, right? Yes, I can knock out 22 PRs in a day. But just because I can do that doesn’t mean we should start skipping some of these things: writing tests, writing documentation, doing user acceptance, all the things that are important to make sure we put out quality code. So yeah, there are some risks there, because, as we talked about earlier, this generation is all about the dopamine hit. I think with AI you get that quick dopamine hit: this thing, I see it’s working, I’m done, let’s move on to the next thing. It’s like TikTok; you’re just moving on to the next video.

And so yes, there are some risks there, but if they’re personal projects or throwaway things, it’s not a big deal. Inside the enterprise, though, the development lifecycle that we have, the constraints, the checks and balances, are more important than ever.

CARTY: Yeah, I mean, you got to what my next point was going to be. It reminds me of when we had Michael Bolton on the podcast. He was explaining the concept, more broadly, that generative AI is not necessarily designed to produce accurate results; it’s designed to produce results that are accepted by the end user. So with that frame of mind, testing is more important than ever, right? That is the one thing that you can’t just automate away and generate test cases for. I mean, you can create them more quickly, but it really places that extra emphasis on testing.

VEGA: Yeah, absolutely. And I’ve been using a pretty good framework for doing this, and I’ve seen some organizations do this as well. The one thing I like about it is that you can basically repeat some structured patterns. Like, in my applications, these are the types of tests I like to write and how I format them. If I can hand an AI an example of how I tested Widget A, and now we’re building Widget B, we can at least get 80% of the work done up front. But the last 20% is really scrutinizing those tests. Do we cover all the edge cases? And sometimes AI is really good at uncovering edge cases that I maybe would not have thought about.

So that’s another tip I always follow: OK, now that we’ve written our tests, are there any edge cases that I’m forgetting about? Because those are usually the ones that let bugs sneak up on us. So I think there’s a really good chance to automate some of that. But then again, as we get into code reviews and ownership of code, those tests need to be rock solid.
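[Editor’s note: the repeatable test structure Dan describes, where an existing suite serves as a template for the next feature and the human review focuses on the edge cases, can be sketched in a few lines of Python. Everything here is hypothetical and illustrative; the `parse_price` feature and its cases are not from the episode.]

```python
# A minimal, hypothetical sketch of the workflow Dan describes:
# keep tests in a repeatable structure (here, a case table) so the suite
# for one feature can serve as a template for the next, then add the
# edge cases surfaced by asking "what am I forgetting?" at the end.

def parse_price(text: str) -> float:
    """Toy feature under test: parse a price string like '$1,299.99'."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

# Structured "happy path" cases: the 80% a template gets you up front.
HAPPY_CASES = [
    ("$10", 10.0),
    ("$1,299.99", 1299.99),
    ("  $0.50 ", 0.5),
]

# Edge cases added during the last 20% of scrutiny.
EDGE_CASES = [
    ("0", 0.0),                    # no currency symbol at all
    ("$1,000,000", 1_000_000.0),   # multiple thousands separators
]

def run_suite() -> int:
    """Run every case; return the number that passed."""
    passed = 0
    for raw, expected in HAPPY_CASES + EDGE_CASES:
        assert parse_price(raw) == expected, (raw, expected)
        passed += 1
    return passed

if __name__ == "__main__":
    print(f"{run_suite()} cases passed")
```

The point is not the toy parser; it is that a consistent case-table shape is easy to hand to an AI as a template for the next feature, while the edge-case list is exactly where the human scrutiny Dan mentions earns its keep.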

CARTY: Device sprawl is a huge challenge as systems scale across platforms, devices, and different usage contexts. Speaking to your point about edge cases, which aspects of the craft of engineering are going to be most critical to maintaining consistent quality?

VEGA: One of my good friends, Nate Schutta, is my co-author on the book. He has a really good quote, and I’m just paraphrasing now: you should stick around an organization long enough to see the outcome of your decisions.

And so part of our job is always weighing trade-offs, right? It depends. You can use this type of database, you can use this type of message broker, whatever; there are many options out there. But there are many cases where you have to weigh the trade-offs, and I don’t know that AI is particularly good at doing that right now. In general, it can offer suggestions, but you probably have institutional knowledge of your organization’s software. You’ve probably been through some of those trade-offs, and you know what did work and what didn’t.

And again, typing speed has never been the bottleneck. It’s been being able to understand those trade-offs and when to use one approach versus another, and also, when things aren’t right, making the decision to move to something else. So really understanding architecture and systems as a whole, how they work, how they fit together: those things I don’t think are going away.

CARTY: How should technology leaders balance this dichotomy here? Empowering developers with new tools while still focusing on maintaining quality and coherence across the product portfolio, and what visibility do they actually need to manage all of that?

VEGA: Yeah, that’s a tough question. I think one of the things we see right now is junior developers being affected, because we’re not hiring as many junior developers; we think senior developers can do way more. And I think it starts at the top, from engineering managers to senior developers, to say: yes, I am more productive. I am able to offload some of these mundane tasks that I don’t want to be doing and focus on more critical issues. But I don’t know that that means, hey, we’ve given you these tools, so instead of closing 10 tickets a week, you should be closing 50. It’s, hey, I can now focus on some of the things that we keep pushing into the backlog and saying we don’t care about. Our test coverage is 40%. Our documentation is not up to date. We haven’t taken the chance to really look at these critical issues that are causing problems for us. Understand that, yes, we are going to be more productive, but let’s use that time to create better software, to fix some of those things in the backlog, to take on new projects if possible, instead of just looking at it as a number: OK, we were doing 10 story points a week, now we should be doing 20. I think too many people get caught up in the productivity and just look at it as a money grab, hey, let’s just be more productive in terms of closing things. I think there’s more we can do, and it starts at the top.

I saw a quote on this; I can’t remember exactly what it was, but yes, it is an unforgiving job market right now. It’s harder to get in, but you also have so many more tools available to you. Never mind the tools; just the amount of resources at your disposal, books and the internet and YouTube videos and GitHub repos, things you can start to consume. So maybe getting that first job is harder to do. Maybe getting that internship is harder to do. But man, you could go off and start creating something yourself. If you’re an entrepreneur, you could start thinking of businesses to start on your own. So there’s that. But I would also caution, you mentioned the term vibe coding earlier, and I think there were two camps out there. The camp of people who said, yeah, I have this idea, and I really want to turn it into something. That was great. Then there was this other camp who was like, you guys learned how to code; I don’t have to. And I think some people out there felt like they cheated the system. I don’t think you did. You may have built something, but there’s no shortcut to learning how to code.

I still think you need to learn the fundamentals, especially if you’re in an organization and you’re working with others. I still think those are going to be very important. Things are going to break, and when you’ve asked your AI chatbot or your AI agent 10 times how to fix something and it can’t figure it out, it’s time to go back to basics and understand some of those things. Some people have been screaming on the internet that the fundamentals don’t matter anymore, and I think it’s the exact opposite. I think we will double down on this. I think fundamentals are more important than ever.

CARTY: You reap what you sow eventually, don’t you?

VEGA: Yes you do. Yeah.

CARTY: Let’s look ahead a few years. I know that earlier, we talked about how all of this has just come along so quickly. We couldn’t have anticipated the rapid growth of AI. But if we’re looking ahead a few more years, what aspects of the engineering craft do you think will be most valuable?

VEGA: I think it’s always been problem solving: taking a look at a problem. Right now, AI is very good at generating text and generating code, but it has no perception of the real world. It can look at a bunch of examples and see, hey, these are apps that exist out there; I can recreate that. But it doesn’t have a fresh perspective on trying to solve a problem. So that’s at a high level. But then even just within your organization, I think problem solving has always been the foundation for me. I enjoy it. I enjoy taking a problem, breaking it down into smaller problems, and solving those. I think that is a skill you can really work on by failing and figuring things out, and I think that is always going to be there.

And if you have that tinkering instinct, that passion for learning new things, that’s going to be very important. So the ability to solve problems, the ability to learn new things, and the ability to weigh trade-offs in this industry: those are all foundational things that will continue to be important.

CARTY: And to get to the flip side, perhaps an existential question of sorts: what’s truly at stake if teams become overly dependent on AI? You mentioned this earlier. Gen AI is ultimately predicting the next word in a string, the next character in a string, and we’re not really getting any new information outside of its training data or augmented data. So what is truly at stake if we become too dependent on AI?

VEGA: I think a couple of things. One, as you said, I think the quality of software will go down. And I think this is often lost, but we as software developers, whether you care or not, should have some empathy for what we’re building. I worked for an insurance company. Maybe writing insurance software is not the most exciting thing in the world, but put yourself in the user’s standpoint: hey, somebody is using this software to solve a real-world problem. If you have some empathy about the things you’re building and understand that they’re being used by real people in the world to solve a problem, I think that makes software human. We have that connection to the other people we’re building it for. And if we can just start clicking generate and a new application pops up, I think we lose a lot of that.

And I don’t know that everyone will enjoy using that software. I could be wrong, but if it’s that easy to just click a button and generate the next Salesforce in an hour, then why does it exist? So it is very existential, and I don’t know where it’s going to go. But AI doesn’t have this connection to the world, whereas humans do, and that’s always been important to me. I think it becomes more important.

CARTY: Dan, lightning round questions before we let you go. First, what is the most important characteristic of a high-quality application?

VEGA: Usability. Our applications are being used by end users, and they need to be usable. It could be the greatest thing in the world, but if I get on and I can’t get onboarded to your application, and I can’t figure things out easily, then I’m not going to use it.

CARTY: Yeah, getting back to your last answer about keeping the humanity in what we’re building. I like that a lot. What should software development organizations be doing more of?

VEGA: I think organizations should spend more time focusing on building quality software instead of, hey, this is the most important two-week sprint of the entire year, when we know it’s not, right? Take time to document features, take time to write tests, take time to go through user acceptance and make sure that we’re building quality software, not just getting out our 10 story points for the day, right?

CARTY: Yeah, absolutely. And on the flip side of the coin, what should software development organizations be doing less of?

VEGA: This could be a cop-out, but I think software organizations should do a lot fewer layoffs. I’m tired of going on LinkedIn and seeing that Amazon laid off 16,000 people. It’s just heartbreaking. I know we went through this over-hiring period, and now everybody’s like, oh, we can let people go. I saw an article that Salesforce let go of 4,000 people, and then two months later, they instantly regretted it. So maybe pump the brakes a little. Don’t let go of everyone yet. Let’s all figure this out together.

CARTY: Yep, absolutely. Echo that sentiment for sure. And finally, what is something that you are hopeful for?

VEGA: I’m hopeful that these tools that we have all been given really allow us to be creative and get more things out into the world. Again, lowering the floor for some of those people who have ideas, raising the ceiling for us. I hope that we see a whole new generation of ideas that maybe wouldn’t have been there without these tools.

CARTY: Enhancing creativity and innovation, not abstracting it away.

VEGA: Love it. Yep.

CARTY: Let’s aim for that. I think that’s a great point to end on. And Dan, thank you so much for joining us.

VEGA: Ah, thank you so much for having me. I appreciate it.