
Vibe Coding Is The Future

Andrej Karpathy recently coined the term “vibe coding” to describe how LLMs are getting so good that devs can simply “give in to the vibes, embrace exponentials, and forget that the code even exists.” We surveyed YC founders to get their take on how this new way of programming is changing how products are built.

Transcript

Speaker 0:

It's like somebody dropped some, like, giant beanstalk seeds at night. We woke up in the morning, and there it is. Woah. What's going on?

Speaker 1:

I mean, I think our sense right now is this isn't a fad. This isn't going away. This is actually the dominant way to code. And if you're not doing it, like, you might just be left behind. Welcome back to another episode of the Light Cone. I'm Gary. This is Jared, Harj, and Diana, and we're partners at Y Combinator.

Collectively, we funded companies worth hundreds of billions of dollars right when it was just an idea and a few people. So today we're talking about vibe coding, which is from an Andrej Karpathy post that went viral recently: there's a new kind of coding I call vibe coding, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.

Speaker 2:

Yeah. So we surveyed the founders in the current YC batch to get their take on vibe coding. We essentially asked them a bunch of questions: what tools are you using, how have your workflows changed, and, generally, where do you think the future of software engineering is going?

And how will the role of software engineer change as we get into a world of vibe coding? And we got some pretty interesting responses. Anyone have any favorite quotes that jumped out from the founders? I think one of them that I can read verbatim.

Speaker 3:

is: I think the role of software engineer will transition to product engineer. Human taste is now more important than ever, as codegen tools make everyone a 10x engineer. That's from the founder of Outlet.

Speaker 0:

I got one. Avi from Astra said, I don't write code much. I just think and review.

Speaker 2:

This is, like, a super technical founder whose last company was also a dev tools company. He's, like, extremely able to code, so it's fascinating to see people like that saying things like this. There's another quote from a different founder, RB from Copycat, who said: I am far less attached to my code now, so my decisions on whether we decide to scrap or refactor code are less biased.

Speaker 1:

Since I can code three times as fast, it's easy for me to scrap and rewrite if I need to. And then I guess the really cool thing about this stuff is it actually parallelizes really well. So Yoav from Kissixty says: I write everything with Cursor. Sometimes I even have two windows of Cursor open in parallel, and I prompt them on two different features.

Speaker 3:

which makes sense. Why not three, you know? Exactly. I do a lot, actually. And I think another one that's great is from the founder of Trainloop. He mentions how coding has changed: six months ago to one month ago was a 10x speedup, and one month ago to now is a 100x speedup. Exponential acceleration.

Speaker 1:

And he says, I'm no longer an engineer. I'm a product person. Yeah. That's super interesting. I think that might be something that's happening broadly, you know. It really ends up being two different roles. I mean, it actually maps to how engineers sort of self-assign today, in that either you're, you know, front end or back end.

And then back end ends up being about actual infrastructure. And front end is so much more about actually being a PM. You're sort of almost being like an ethnographer, going into the obscure, underserved parts of the pie of GDP,

Speaker 2:

and you're trying to extract out, like, this is what those people in that GDP pie actually want. And then I'm gonna, you know, turn that into code, and actually evals are the most important part of that. When I was running Triplebyte, this was actually one of the things we noticed.

It was almost as important as the technical assessment of engineers when trying to figure out who's a good match for a specific company. There's a certain threshold of technical ability you need, but beyond that, it was: do you actually want to talk to users or not?

Like, some engineers are just a lot more motivated by working on things where they know who the users are, and they get to, like, communicate with them, and they get live feedback and they can iterate, essentially being a product engineer. And other engineers really don't want to do that at all.

They find it annoying having to deal with users; they want to just work on hard technical problems and refactor. That's a back end engineer. Yeah, that's what we call a back end engineer. Yeah, sure. And that's a theme that came up in the survey responses, this idea that the LLMs are maybe gonna push people to choose, because the actual writing of the code may become less important.

And it's about: do you really have taste and wanna solve product problems? Or are you an architect who wants to solve systems problems? Oh, and interestingly, one thing the survey did indicate is that this stuff is terrible at debugging.

Speaker 1:

Yeah. And so the humans still have to do the debugging. They have to figure out, well, what is the code actually doing? Here's a bug, spot the bug. Where's the code path where, you know, we have some logic error it just didn't figure out? Right? There doesn't seem to be a way to just tell it: debug.

You were saying that you have to be very explicit, like, as if giving

Speaker 0:

instructions to a first-time software engineer. I have to really spoon-feed it the instructions to get it to debug stuff. Or you can kind of embrace the vibes. I'd say the Andrej Karpathy style is to just, like, ignore the bug and just reroll. Just, like, tell it to try again from scratch.

Like, it's wild how your coding style changes when actually writing the code becomes a thousand times cheaper. As a human, you would never just, like, blow away something that you'd worked on for a long time and rewrite it from scratch because you had a bug. You'd always fix the bug.

But, like, for the LLM, if you can just rewrite a thousand lines of code in, like, six seconds, it doesn't cost anything, so why not? You know? That's kind of like taking the approach of, you know, how people use

Speaker 1:

Midjourney or Playground when you're trying to generate images. Like, if there are artifacts or things that I don't like, sometimes I don't even change the prompt. I just click reroll, and I do that five times, and, you know, sometimes it just works. I'm just like, oh, I can use that now.

Speaker 3:

Which is a very different frame of building systems, because you're not building foundationally, step by step. You're really doing it from scratch, because, fundamentally, what's going on is, like, all these tools today are sampling from a latent space of generated code that's hidden somewhere, and you have to start from scratch to find, like, a different gradient and not get stuck.

And then you wanna, like, add a bit of randomness and get it to regenerate. But I do think, I don't know, with whatever next generation, o5 maybe, we'll get to the point where it's actually able to build upon existing code. As of right now, I think most of it is you need to reroll and rewrite; it doesn't build upon it yet.

But we haven't seen any of the coding tools right now work well with reasoning models. Well, o3 is infinitely better at debugging than Sonnet 3.5. So it definitely feels like we're headed in a direction where this may not be true in, you know, six months, the next time we do this episode.

Diana, do you wanna talk about, like, the models that people are using, the IDEs that people are using? There are some really interesting trends there. Yeah. I think, as we mentioned a couple episodes ago, we already saw this shift start happening. The vibe started to shift back in summer '24, when Cursor was being used by a big portion of the batch, and now it is, by far, the leader.

But the other thing that's happening, and this is a very fast-moving environment, is that Windsurf is a fast follower. It's starting to be a very good product compared to Cursor. And I think, Jared, you have some firsthand experience with

Speaker 0:

why Windsurf is, like, better than Cursor. Yeah. I think the number one reason that people are switching is that Cursor today largely needs to be told what files to look at in your code base. If you have a large code base, you can tell it what to do, but you have to tell it, like, where to look in the code base.

Windsurf indexes your whole code base and is pretty good at figuring out what files to look at on its own. There are other differences too, but I think that's the most important one at the moment. Notably, Devin does get mentioned, but the drawback keeping Devin from being used for serious features is that it doesn't really understand the code base.

Speaker 3:

It's being used mostly for small features, and it's, like, barely mentioned. The other one: people still use ChatGPT, and the reason they use it is because they want to use the reasoning models. People post some of their debugging questions to it to get the more powerful models to reason about them.

Because right now, Cursor and Windsurf are still in the old world. I mean, old world as in less than six months ago: pre-reasoning models, not in the test-time compute paradigm. So founders are using that. And some founders are self-hosting models as well, maybe because they have more critical, sensitive IP. They do that. And now, talking about the shifts in terms of models.

The big game in town for codegen that we saw six months ago was Claude Sonnet 3.5. It's still actually a big contender; most are still using it. But o1, o1-pro, and o3-mini, all these recent models, are starting to get almost neck and neck with Sonnet 3.5. The other one is 4o: virtually no use for codegen.

And the other interesting thing is DeepSeek R1 is getting mentioned. It's being used as, like, a viable contender as well. And Gemini,

Speaker 0:

not really mentioned. The one thing I've heard about Gemini is that it has, like, the longest context window. I've heard from a couple of founders that they do use it, and the way they use it is they put their entire code base into the Gemini context window, and they just, like, tell it to fix a bug.

And it doesn't always work, but sometimes it can just one-shot fix stuff, because the whole thing is in the context window. It will be interesting to see as people adopt the newly released reasoning models with Flash 2.0.
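As an aside, the whole-codebase-in-the-context-window trick described here is easy to picture. The sketch below is a hypothetical illustration, not any founder's actual tooling: it flattens a repo into one prompt string that could be handed to a long-context model along with a bug report. The file extensions, size budget, and prompt wording are all assumptions.

```python
import os

# Hypothetical sketch of the "entire code base in the context window"
# workflow described above -- not any founder's actual tooling. It walks a
# repo and concatenates every source file, under a path header, into one
# prompt string for a long-context model.

SOURCE_EXTS = {".py", ".js", ".ts", ".go", ".rb"}  # assumption
MAX_CHARS = 4_000_000  # rough budget for a ~1M-token context window

def pack_codebase(root: str, bug_report: str) -> str:
    """Concatenate every source file under `root` into a single prompt."""
    parts = [f"Fix this bug:\n{bug_report}\n\nCodebase:"]
    total = len(parts[0])
    for dirpath, _dirnames, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            if os.path.splitext(name)[1] not in SOURCE_EXTS:
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                body = fh.read()
            chunk = f"\n\n--- {os.path.relpath(path, root)} ---\n{body}"
            if total + len(chunk) > MAX_CHARS:
                return "".join(parts)  # stop before blowing the budget
            parts.append(chunk)
            total += len(chunk)
    return "".join(parts)
```

The returned string would then be pasted into the model's context; nothing here calls a real API.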

Speaker 3:

I don't think people have tried it yet. But the long context window plus

Speaker 0:

reasoning could be a good contender. What is the estimated share of code that's being written by LLMs in the current batch? This is pretty crazy. So we explicitly asked this question: what percent of your code base do you estimate is AI generated?

The way I interpret the question is, like, of the actual characters in your code base, not including any libraries that you imported, what percentage of the characters were typed by human hands versus emitted by an LLM. And the crazy thing is, one quarter of the founders said that more than 95% of their code base was AI generated. Wow. Which is, like, an insane statistic.

And it's not like we funded a bunch of nontechnical founders. Every one of these people is highly technical, completely capable of building their own product from scratch. A year ago, they would have built their own product from scratch, but now 95% of it is built by an AI. Except for, you know, we have one or two examples of people. Yeah.

Speaker 1:

who are so young that they learned to code in the last two years, so they actually don't know a world where Cursor didn't exist. Yeah. This is

Speaker 0:

one of my best companies, Dispatch, actually, is exactly this. The founders are extremely technical minds, but they're not classically trained in computer science and programming. And they are incredibly productive, able to produce just a ton of really amazing product, and AI is writing almost the entire thing. It makes me think a lot of the

Speaker 3:

discourse around Gen Z being the first digital natives who grew up with the Internet. This is, like, the generation that grew up native to AI coding tools: they skipped the classical training of a software engineer, and they just do it with the vibes. But they are actually very technically minded. I mean, they have degrees in math and physics. Physics. Yeah.

So they have that raw,

Speaker 0:

let's call it, systems-thinking type of mind that you still need. Maybe we should talk a bit about that: what's still the same, and what has changed? I think vibe coding will enable people who have these kinds of technical minds, who come from other technical disciplines like math and physics, to become highly productive as programmers much faster than was possible in the past.

Like, I remember there were coding boot camps. Back in the day, they would try to retrain physics people into programmers, and it didn't work that well, because it just takes too long to learn all of the syntax and all of the libraries and all the stuff you have to know to be really productive. But, like, now

Speaker 2:

now it's a new world. The coding boot camps were also very specifically focused on getting you hired at companies. And I think it was during this, around the 2015 era, that companies themselves were rethinking how to evaluate software engineers in their hiring processes.

There was a real shift away from "we wanna hire classically trained computer scientists, whiteboard algorithmic problems" towards "we actually want people who are just really productive and write code quickly." And some of these arguments are, like, evergreen, eternal. Right?

Like, I remember when Rails first came out, ActiveRecord as a way to, like, interact with your database was seen as a great abstraction, but there was still the same flavor of argument. Right? Like, if you don't really understand the internals, you're just gonna write, like, crappy, low-performing

Speaker 0:

web software. How do you feel those arguments have aged.

Speaker 2:

if you look back on it now?

My feeling is that many of the most successful companies, and Stripe and Gusto are two that really spring to my mind, leaned heavily into "actually, we just want people who are really productive with the tools," and changed their whole hiring process to select for that. Like, the interview shifted from "teach us how you think" to "you've got three hours on the laptop, and you need to build a to-do list app as quickly as you can."

And those companies have had a tremendous amount of success. It does seem like at some point, as they grew and they scaled, the bottleneck did actually become having people who are classically trained systems thinkers to scale

Speaker 1:

up and architect things. It does seem like how people hire engineers is changing, but maybe not changing fast enough yet. The results of the survey are relatively surprising to the four of us here. Probably pretty shocking out there, even, because it's just this thing that popped up in our backyard only in the last six to nine months.

My guess would be that engineering hiring, period, has not actually caught up to this. People are still standing at whiteboards and doing that kind of thing, as opposed to asking what you can get done. And so it sounds like the Stripes of the world were ahead of the game, and everyone has to hire engineers this way now. I mean, I wonder if even that's going to be sort of the old meta.

I mean, something that stood out from the survey responses was this idea of

Speaker 2:

the two themes we talked about. Right? One is how, okay, we're all just product people now; actually, the thing you need is really great taste and understanding of what to build. And the second was that what's really valuable now is to be, like, a systems thinker and an architect and to really understand the bigger picture.

In which case, actually, maybe being a really productive coder is outdated. Because that's definitely something that always fit my definition of a great engineer: you know, one of the dimensions was they can just write code really fast. Yes. Maybe that's outdated.

Maybe that's not valuable if the LLMs are actually really good at writing code quickly. Right. And to your point, now it's actually just cheaper to reroll and write everything from scratch than to try and debug.

Speaker 1:

Like, the skills might just be completely different. The problem is there are, like, two different stages. There's zero to one, in which case speed is the only thing that matters. And then, to your point about ActiveRecord and Rails, that battle was actually fought to a standstill. Because of course using ActiveRecord or Rails allowed you to go from zero to one very quickly.

But then what happened to Twitter? It became the fail whale. Right? Yeah. Basically, once you get to one, that architecture will not get you to a billion or ten billion or a hundred billion dollars of valuation, or users, or whatever. It's just not going to work.

So I think you're gonna see the same thing. And there's nuance in what you just said. Right? Getting from zero to one quickly and then being able to scale to a billion users are two totally different skill sets. And I think that second part might be where people are irreplaceable, for now.

And one of the things I discovered as we were scaling one of the biggest Rails sites (I mean, you scaled one of the biggest Rails sites too) is that there aren't that many people who have had to do it. Getting to one is so rare. Yeah. That's true. You reach this point where the way you got from zero to one very quickly was that you used lots and lots of open source.

And then at some point, like maybe two years in, maybe even a year and a half into the startup, we could not use random gems anymore, because they were just never designed for companies at our scale. And so we had to rip them out and roll our own stuff. Yeah. They would just fall over. Right? That's a very good example of what you're saying, Gary.

I think maybe to summarize a bit more,

Speaker 3:

zero to one will be great for vibe coding, where founders can ship features very quickly. But once they hit product-market fit, they're still going to have a lot of really hardcore systems engineering, where you need to get from one to n, and you need to hire very different kinds of people. And for that, I think there's a very good historical example: Facebook. I mean, they used PHP.

Yeah. They got away with PHP, which personally I think is a terrible language. Yeah. Maybe I'll get flamed, but I think it's a bad language. I'm with you. I never liked it. Sorry, guys. It was very bad, but you could ship things very quickly.

But at some point, it became such a big bottleneck for them to ship features that they had to hire hardcore systems people to build a custom compiler, HipHop, so that it would run fast on bare metal, because it was just too expensive to replace all their code. Yeah.

And the kind of people that did that are not the vibe coding people; they're these hardcore systems people. And based on our survey, current tools are not good at that low-level systems engineering.

Speaker 0:

Harj, I'm not sure everybody who's listening knows what Triplebyte is, but it's actually very relevant. Do you want to just describe it for everybody? Yeah. Triplebyte was a company started in 2015.

Speaker 2:

And we were essentially building a technical assessment for engineers. Like, the goal was: how can you use software to automate evaluating software engineers? And the way we did it, pre all these codegen models, was we built all of our own custom software to interview engineers, had humans interview engineers, and then essentially just labeled the data in

Speaker 0:

every interview. And you interviewed them by asking them to write code?

Speaker 2:

It was a technical interview. Yes, it was asking them to write code. We did actually include algorithmic problems. And is it true that you and your cofounders have done more technical interviews than any other people on the planet? I think so, like, in terms of just pure hours. Yeah.

Because that was just the early days of it, where it was like all day, every day. Like thousands of people. Yes. Thousands and thousands of them, right? And then we scaled up, and we had, like, a team of about 100

Speaker 0:

engineers contracted that we would pay per interview completed. And so you're exactly the right person to ask these questions. You've literally spent more time thinking about this than anyone else on the planet. If you were starting Triplebyte again today and you had to design, like, new technical assessments for engineers,

Speaker 2:

what would you have them do? The big takeaway I have from Triplebyte, and the screening in particular, is just that people want different things. And so you kinda need to know upfront what exactly it is that you're evaluating for, and then design your technical screen around that.

That's kind of what I'm getting at with how Stripe and Gusto and these companies just knew that they didn't care if someone had fundamental CS knowledge. So it didn't make sense to screen them on that. They wanted to screen for the thing they're actually gonna do in their job.

And then our product was more trying to get a taste of everything companies might want, figure out what someone's max skill was, and then send the people with that max skill to the companies that would value it.

And in today's world, I think I would actually have a screen that at least accounted for just how well people know how to use these tools. Like, I'm sorry, again, this contradicts what I was saying earlier, but it might be the case that how productive you are, how quickly you can code and build product, is actually something to explicitly screen on, and you just set the bar much higher.

You'd probably have to ask different questions, because I'll bet if you go back to the original Triplebyte assessment, a lot of those questions you could literally just copy and paste into ChatGPT, and it would spit out a perfect answer. Yeah. In which case you're not really proving that much. Yeah.

Well, you're not proving much competence if you're just copying. Like, the questions probably have to be, like, a hundred times harder. I mean, doesn't this get to the deeper stuff? Not necessarily. Because it depends on, like, what conditions you're gonna put on the screen, which I think is interesting. So, a classic question was, like, build tic-tac-toe. Yes, of course.

If you do that unsupervised and you just let someone come back with their tic-tac-toe solution, that's gonna take, like, two seconds. Right? Versus if you want to watch them code it and force them to not use an LLM.
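For readers who haven't seen it, the "build tic-tac-toe" screen mentioned here boils down to a win check plus a move loop, which is exactly why an LLM one-shots it. A minimal sketch of the win check (a generic illustration, not Triplebyte's actual question):

```python
# Minimal sketch of the core of the classic "build tic-tac-toe" screen
# discussed above -- a generic illustration, not Triplebyte's actual
# question. The board is a 3x3 list of lists holding "X", "O", or None.

def winner(board):
    """Return "X" or "O" if either player has three in a row, else None."""
    lines = [[(r, 0), (r, 1), (r, 2)] for r in range(3)]           # rows
    lines += [[(0, c), (1, c), (2, c)] for c in range(3)]          # columns
    lines += [[(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)]]  # diagonals
    for line in lines:
        marks = {board[r][c] for r, c in line}
        # All three cells hold the same non-empty mark: that player wins.
        if len(marks) == 1 and None not in marks:
            return marks.pop()
    return None
```

Watching a candidate write this live tells you something; a pasted LLM answer tells you nothing, which is the point being debated.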

Speaker 0:

Well, I guess that's the question. Do you force them to code it without an LLM, with, like, the old questions? Or do you let them use an LLM, and now you need new questions, because the old ones became trivial?

Speaker 2:

That, I think, is what everyone hiring software engineers right now should be thinking about and trying to figure out. Yeah. I'm not sure I know what the correct answer is. Yeah. Interesting. I think you're probably gonna test for different things, because I also did a lot of engineering hiring.

Speaker 3:

I think one key skill is gonna remain constant: I do think the skills of reading code and debugging are paramount. You have to have the taste and enough training to know whether the LLM is spitting out bad stuff or good stuff, bad code or good code. And I think you can see it clearly sometimes when a candidate is using the tools.

And if there's actually a reasonable solution that the LLM outputs, and then the candidate is like, oh, this is actually bad, that is a sign. So I think it's knowing, kind of, the high-level thinking to judge what is good versus bad.

In order to do good vibe coding, you still need to have the taste, and you still need that kind of, maybe not necessarily classical, training, but enough knowledge to judge what's good versus bad. And you only become good with enough practice. I think that will be constant. That would be my opinion.

Speaker 2:

Yeah. That's interesting. It's sort of more like code review as the interview, versus, like, actually producing code.

Speaker 3:

Yeah. I mean, you could have some form of system design. You wanna know how well they can put a product out there. So again, it's testing for taste. So we're gonna test for debugging and then taste. But then, I guess, here's a question about these kids, the ones we call the AI coding natives. Yeah.

How do you develop taste when you don't come from a classically trained world,

Speaker 1:

which will be interesting for the next generation. Because if you don't, the startup dies. Right? So let's say this founder goes off, and they have 95% written by AI. The proof is a year out, two years out, when they, you know, have a hundred million users on that thing. Does it fall over or not?

And one of the things that's pretty clear is that these systems, the first versions of reasoning models, are not that good at debugging. So you actually would need to descend down into the depths of what's actually happening. Yeah. And if you can't, then, I mean, let's hope they can go find an architect.

They're gonna have to hire someone who can. I think there's gonna be a generation of.

Speaker 3:

software engineers that are, like, good enough, because it's so easy to retool with all these codegen tools. Like, the barrier to entry is so low. There are gonna be tons of good-enough engineers. But to be exceptional, like, top 1%, I think you're gonna need deliberate practice.

I mean, the analogy we're talking about is: Malcolm Gladwell popularized this concept of ten thousand hours of practice to become an expert, which came from research by, what was his name? Anders Ericsson. Anders Ericsson. Right? But the research was actually very specific.

It was about how you find world-class violinists, and it wasn't just about putting in the time, but about deliberate practice: hours that are actually planned and thought out, and it's hard work. You could become an expert with fewer hours. So I think what's happening with codegen tools now is that it's very cheap to put in the hours, because the output comes so quickly. You can get to good enough.

But to become the best in the world, the best founder, you're gonna need that deliberate practice: to go into the details, peel the onion, understand the systems, and become, to some extent, classically trained. I mean, a good example, maybe, going back to history: Picasso, one of the greatest painters, was amazing at drawing lifelike pictures.

Which is not what he's famous for. Of course, when you imagine a Picasso, you imagine the opposite of that. Yeah. There's this famous sequence of drawings of how he got to the abstract bull. It starts from being lifelike, and through iterations he gets to the essence, the kind of abstract art he's very well known for.

But he could only get to be the best in the world because he was actually a very good painter, classically trained. He could draw super well, but that's not what he's known for. So I do think we'll see these two classes of engineers. You'll still have, like, a very fat class of good-enough engineers, and you need engineers for those roles.

But the best in the world, the founders that become outliers are going to need to put in the deliberate practice.

Speaker 1:

Yes and no. I mean, I think there are lots of really amazing examples of great systems-level, world-class engineers who ended up being CEOs of the biggest public companies in the world. I think of Max Levchin. I think of Tobi Lütke from Shopify. I mean, these are people who are just, like, actually that great. And the thing is, there are lots of other people who are not that great,

Speaker 2:

but are also still CEOs or cofounders of companies. And, to link up what we were saying earlier, it kind of goes back to hiring. I mean, I keep thinking about the Twitter analogy that you brought up. I think it's a really interesting one.

Like, if you compare Facebook and Twitter, in both cases they went very quickly from zero to one in a scrappy, move-fast, break-things way. Facebook was able to solve the

Speaker 0:

scaling technical challenges in a pretty impressive way. I think most people would agree. I mean, Mark Zuckerberg was by far way more technical. Well... And way more in the weeds, probably. Maybe. But I don't know. I think Twitter's scalability challenges were also harder, based on the usage patterns.

Like, the thing about the usage of Facebook is that it's pretty smooth throughout the day. People just use it all the time. The problem with Twitter is that the usage is incredibly spiky. You get, like, a Super Bowl or, you know, a world event, and all of a sudden you have ten times as much usage.

The way the fan-out of the feed works is, I think, fundamentally a very difficult computer science problem. Okay. That's fair. Though I also think they were really hamstrung by their tools. Do you remember using this terrible

Speaker 1:

queue system called Starling? Absolutely. I used it because I thought, oh, Twitter's so much bigger than us. They're so smart. They wouldn't use something that's crap. No. They totally used crap, and then I used crap, and I couldn't make it work. It was dropping jobs on the floor.

Like, all these crazy bugs happened. And then finally I was like, I'm not using that anymore. I have to switch to RabbitMQ or whatever the heck the actually correct thing to use was. Yeah. And Ruby is an incredibly slow language, like 10x slower even than PHP, which was already too slow. So, I don't know.

I mean, basically, you should be so lucky to get to one.

Speaker 2:

Yeah. Is there an advantage for a technical founder to be classically trained and be a really deep thinker?

Speaker 1:

Well, I mean, a Tobi or a Max Levchin is not going to get bullshitted by people. Same with the founders of Stripe. Yeah. I mean, I'll tell you a crazy story. When I was at Palantir, you know, I sort of burnt out there after a couple years, after I designed the logo. And then, between that and going to start my YC startup, I spent six months as an interaction designer.

I was at this, like, terrible venture-backed company that ended up going into the ground, and it was, like, credit card software. It was the worst. I spent six months building basically just interaction designs, which was really fun. That's what allowed me to work on my startup in my spare time, because I had a lot of spare time.

But I remember designing, you know, this faceted search thing for, like, rental cars or something like that. And I go into my meeting with my dev manager and the engineers who are gonna implement it. And they were like, oh yeah, it can't be done. We can't do it that way. And I was like, what are you talking about? Just make the indexes like this. And they were like, woah. What do you mean?

And then they looked up my background. They were like, you're not gonna get that from your interaction designer. Yeah. Basically, they were like, how did you know that? And I was like, you fucking lied to me. And that's the thing, like, when you're hiring people, that was the wildest thing to me.

But it was sort of the ultimate lesson: when you're in the workplace, you sort of assume that the people you bring on are not gonna lie, they're not lazy, we're all for the goal and the mission. Right? And I was like, no. People you hire will totally lie to you if you cannot tell that they're lying.

And then the worst part is, you kinda have to call them on it. You know, sometimes you have workplace cultures that are so polite that people are like, oh, I'm gonna let that pass, and then I'm gonna talk shit about them behind their back. It's like, no. You should fire them. The AI agents, incidentally, will do exactly the same thing. Yeah.

The AI agents will absolutely bullshit you, just like a human employee will, if you're not skeptical enough to call them out on their bullshit and be like,

Speaker 0:

no, you didn't make the change that I asked for. It goes back to your point about why being classically trained is still helpful. You have to be able to call out all the people working for you, whether they're human or not. Being technical enough to be able to do that is a superpower.

Speaker 3:

So just to wrap up: basically, what's going on is all these tools are giving superpowers to the best engineers (and making the bad engineers worse). This is a quote from the founder of Trainloop: how coding has changed, six months ago to one month ago was a 10x speedup, and one month ago to now is a 100x speedup. It's exponential acceleration.

Speaker 0:

It sort of crept up on us, actually. Yeah. It was like somebody dropped some, like, giant beanstalk seeds at night. We woke up in the morning, and we're like, woah. What's going on? So,

Speaker 1:

I mean, I think our sense right now is this isn't a fad, this isn't going away, this is actually the dominant way to code. And if you're not doing it, you might just be left behind. This is here to stay, and, you know, vibe coding is not a fad. It's time to accelerate. So with that, we'll see you guys on the next Light Cone.
