
10 People + AI = Billion Dollar Company?

Could a staff of ten or less — with the help of AI — create a unicorn?

Transcript

Speaker 0:

What is the state of these AI programmers? Like, is it reliable yet, and where are we at? Will we just see software companies have way fewer employees

Speaker 1:

and converge on a point where you could have unicorns,

Speaker 2:

billion-dollar companies that have, like, 10 people in them. If we imagine a world where there could be companies with fewer than 10 employees, maybe you could still be a family. But is that still a good idea? I have a controversial

Speaker 3:

argument against what Jensen said. This one will probably piss some people off. Nice.

Speaker 0:

Welcome to another episode of the Light Cone. I'm Gary. This is Jared, Harj, and Diana. And collectively, we funded companies worth hundreds of billions of dollars. And today, we're talking about this one very controversial clip that lit up the Internet from Jensen Huang. I'm gonna say something, and it's gonna sound completely opposite

Speaker 4:

of what people feel. You probably recall, over the course of the last ten years, fifteen years, almost everybody who sits on a stage like this would tell you it is vital that your children learn computer science. Everybody should learn how to program. And in fact, it's almost exactly the opposite. It is our job to create computing technology such that nobody has to program.

And the programming language is human. Everybody in the world is now a programmer.

Speaker 0:

So what do you guys think? Is this true? We're at the dawning of LLMs. We infused the rocks with electricity, and recently, they learned how to talk, and now they can code.

Speaker 2:

What does it mean? I guess the question is: for the next generation of founders, or anyone who's young and looking to figure out what they wanna do with their career, should they still study computer science? Is that still a good bet in the long run? Yeah. A lot of us spent a long time telling people

Speaker 0:

over all of these generations,

Speaker 3:

yeah, you should learn to code. If you're a nontechnical founder, you should learn to code. It's like the most important thing to do during college. Like, definitely, no matter what else you do, learn how to code. Right. So the question is, like, whether LLMs.

Speaker 2:

and AI are just gonna automate all of these jobs. And I think we have different views on it. Right? We funded a number of companies that are actually building coding assistants that take on tasks from developers. And what does the future look like for that? I mean, I guess the analogy

Speaker 0:

that you could say, I don't really agree with this, but you could say that given photography, you didn't have to learn how to use a paintbrush in order to create representations of real life. And today, using a diffusion model, you can actually just write out what you want, and an image will be generated for you. Will this transition to code?

And one of the questions that Diana has done a little bit of research on, and I think, Jared, you too, is: what is the state of these AI programmers?

Speaker 3:

Like, is it reliable yet, and where are we at? Related to Jensen's clip is the launch of Devin, which also took the Internet by storm and has inspired many founders to go into this area, including a lot of the companies that we've funded in the past two batches. It could be interesting to talk about that history and what the state of the art is with AI programmers.

Speaker 2:

Yeah. So right now, we funded companies like Sweep. We also work with Fume. A lot of them are solving tasks for more junior developers that have to do with, like, fixing an HTML tag here or a bug here and there.

That's fairly small, but it's a bit more difficult when you want it to actually build more complex systems, like: build me a distributed system for the back end that scales.

Speaker 3:

That, we cannot do today. I think it's important to put context around Jensen's claim: like, three months ago, basically, AI could not program usefully at all. It was hitting, like, almost zero. And what really changed, I actually think, goes back to before Devin.

I actually think the real unlock for the current surge of interest in AI programmers goes back eight months, to when the Princeton NLP group released this benchmarking dataset called SWE-bench. And SWE-bench is a dataset of GitHub issues taken from real programming problems.

And so it's a good representative dataset of real world programming tasks, the kind of things that programmers actually do. And this dataset finally made it possible for people to really tackle this problem of building an AI programmer, and to try an algorithm and benchmark it and see how good it is, and to compete with other people on the Internet.
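[Editor's note: a minimal sketch of how a SWE-bench-style evaluation is structured. The data classes and the toy instance below are hypothetical stand-ins, not the benchmark's actual API; the real harness applies a patch to a repository checkout and runs the repo's test suite.]

```python
# Illustrative sketch: each SWE-bench-style instance pairs a natural-language
# GitHub issue with the pre-fix code and held-out tests a correct fix must pass.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskInstance:
    repo: str                       # e.g. "example/toy-repo" (hypothetical)
    issue_text: str                 # the natural-language bug report
    buggy_source: str               # code before the fix
    tests: Callable[[dict], bool]   # fail on buggy code, pass on fixed code

def evaluate(instance: TaskInstance, agent_patch: Callable[[str], str]) -> bool:
    """Apply the agent's proposed fix, then run the held-out tests."""
    patched_source = agent_patch(instance.buggy_source)
    namespace: dict = {}
    exec(patched_source, namespace)  # load the patched module
    return instance.tests(namespace)

# Toy instance: the "issue" reports an off-by-one-bound bug in a clamp function.
instance = TaskInstance(
    repo="example/toy-repo",
    issue_text="clamp(11, 0, 10) returns 11 but should return 10",
    buggy_source="def clamp(x, lo, hi):\n    return max(lo, x)",  # ignores hi
    tests=lambda ns: ns["clamp"](11, 0, 10) == 10 and ns["clamp"](-1, 0, 10) == 0,
)

# Stand-in for the AI programmer: here it just emits the corrected source.
def toy_agent(buggy_source: str) -> str:
    return "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"

resolved = evaluate(instance, toy_agent)
print(f"resolved: {resolved}")
# The benchmark score ("resolve rate") is the fraction of instances resolved.
```

The point the speakers make follows directly: once "did the agent fix it?" reduces to a pass/fail test run over a fixed dataset, competing approaches become directly comparable.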

Diana and I were actually just talking about how, if you look back at the history of machine learning, a lot of the big unlocks came from somebody publishing a benchmarking dataset, going back to the very beginning of deep learning. Do you want to talk about how deep learning actually

Speaker 2:

got started, really? Yeah. So this benchmark, SWE-bench, is very reminiscent of ImageNet, which was a groundbreaking dataset from Fei-Fei Li's lab at Stanford. It was a very challenging dataset and one of the biggest, with a lot of images and lots of classes, where the task for the algorithm was to classify

Speaker 3:

and see what the image was. Because at the time, the biggest unsolved problem in machine learning, this is hard to believe, was to get a computer to look at a picture of a cat and be able to tell you, this is a picture of a cat. That was totally intractable in 2006.

Speaker 2:

Because a cat can have lots of variation, it's actually a very hard problem: you have cats that are yellow or black, they could be in different positions, they could be sleeping, they could be lying down, and they all look very different. But how do you encode that when you have a limited feature set?

So before 2006, the traditional methods of machine learning were more statistical. You would use discriminant methods. You would have things like support vector machines.

You would use hand-coded signal-processing feature extractors, putting things in the frequency domain, or wavelets, all these sorts of things that people tried. And that dataset was really hard. The error rate was really, really high, like, 30%, 40%.

And for a bit of context, human performance on this dataset is about 5%, more or less. Five percent error rate. Error rate. Right. Correct. Yes, 5% error rate. And all these standard methods were at 30% or above, which is really bad.

It's, like, way, way bad.

Speaker 3:

So then came AlexNet. Right, Jared? Yep. A group from the University of Toronto had trained a deep neural network using GPUs, one of the first cases of people training deep learning networks on GPUs. And AlexNet blew the performance of everybody else out of the water. It was way better than all the other techniques.

And I remember the day that news dropped; it took the programming Internet by storm. I would argue that the AI race we're in right now, we're literally still riding the wave that AlexNet kicked off in 2012.

Speaker 2:

Like, it just kicked off this incredible race. Yeah. It was the first time that it was getting close to that human-level perception. Then people found this phenomenon of stacking neural nets with lots and lots of layers. People didn't exactly know what was happening in the middle, and treated it like a black box, but it was actually starting to work.

So the interesting lesson here is that SWE-bench is that moment in time where we can measure something and then get better at it, because before, as with ImageNet, there wasn't a big enough dataset to do that. So we will make progress in terms of programming.

But now the question is, are we gonna get to the point where AI algorithms are just as good at programming as humans? Is coding like an image recognition task?

Speaker 1:

What are the reasons this wouldn't happen? Because so far, if you zoom out, programming is one of the most promising early use cases for LLMs since they launched, essentially. Right? You have the Copilot term, which really was GitHub Copilot, specifically a copilot for programmers. Data, compute, everything is scaling. The models keep getting better.

We now have, like you said, a benchmark and human attention focused on trying to make this better. Like, what are the reasons this isn't just a straight scaling law?

Speaker 3:

Oh, I think we will. We're now at, like, 14% on SWE-bench. That's the state-of-the-art performance, and it's still well below human performance. I'm not sure what human performance would be, but certainly a skilled programmer could probably solve most of SWE-bench given enough time.

So I think the SWE-bench number is gonna go up; I think we're gonna see rapid improvements for the reasons that Diana mentioned. But SWE-bench is a collection of small bugs in existing repositories, which is quite different from building a new thing from scratch. Yeah.

And so even when we get to a thing that can solve, you know, half of SWE-bench, that's still pretty far from something where you could just give it instructions for an app, and it could go build the whole app. Yep. I think the way I think about it

Speaker 2:

that was kind of my question: are the kinds of tasks in SWE-bench really analogous to image recognition? I think programming falls into a different category of problems. It's a bigger set, because SWE-bench is a subset. It's still in this idealized world.

And maybe to put a bit of context around that: I think in terms of engineering, there are two categories of problems in how we model the world. There's the design world, which is all perfect: you have all the perfect engineering tolerances, all the simulation data, and all the laws of physics work perfectly in that simulated world. And then you have reality, which is messy.

I think the world of AI and LLMs does a good job with this design world. But when you encounter the real world, a lot of stuff breaks. When I was working on building all these engineering systems, you'd end up with hot fixes that come in with, like, random magic numbers to make the system work. Or you could imagine all these self-driving cars.

I'm pretty sure there are a lot of magic numbers there, because of just the placement of sensors. It's kinda like physics: you have all these coefficients of friction, and they're not pretty like Newton's laws, which are beautiful equations in an ideal world.

But in the real world, when you need to get systems to work, like engineering systems and startups that solve real problems, you encounter friction. And there are all sorts of coefficients of friction depending on the materials, and that world is infinite. So my argument is that I don't think LLMs are going to be able to really encompass and manage the whole real world.

The real world is, like, infinite. Going back to the original Jensen video,

Speaker 1:

you were basically saying, hey, the dream situation is you type in: I want an app that helps me share blah blah blah photos.

Speaker 3:

Yeah. And the software just magically figures out how to build it. Yeah. And I guess one way to build on that analogy: I think the world Jensen was envisioning was a world in which programmers are like product managers today. If you think about a product manager, a product manager basically builds an application by writing English. Right?

They write a spec, and then programmers go and translate that into working code. And so maybe in the future,

Speaker 1:

that's how apps will be built: you'll just write English, and the AI will take care of the translation. Which I think gets into the heart of this debate that has always happened between engineers and non-engineers in Silicon Valley, which is: how much of programming is an implementation thing? It's just, hey,

like, the idea and the implementation are separate, versus actually you only get the ideas in the process of implementing. I know Paul Graham is a huge proponent of the latter, right, in multiple ways.

Like, in programming, the whole reason he's such a proponent of Lisp from the early days is you want a very flexible language, because you only get the good ideas once you start building. And his philosophy actually translates over to writing, where writing is literally thinking. Yeah. The process of actually writing is thinking. And I remember

Speaker 0:

when I was learning how to do YC interviews, watching him and being in the room with him and asking him, like, well, you know, what exactly are you looking for?

And one thing that he disabused me of was: sometimes people would come in, and I'd look at what they did in the past, and I generally felt like, well, this looks like someone who's smart and with it, and they did some impressive things in the past. Surely they thought this through and just didn't say it in the meeting.

And, one of the things Paul would always say is like, oh, no. No. No. If they don't say it, then they themselves do not know. Like, the writing is actually thinking.

And I guess this tortures the analogy a bit, but I kinda like it: we're sort of in this moment where, if we take the analogy of the camera, which made it so that you don't have to paint anymore, the subtlety there is that aesthetics in the world still exist.

And I think the artistry of creating software or technology products is actually in that interface between the human and the technology itself. So my argument would be: if you're doing back-end software and you're writing APIs and models, that might get a lot of help from these types of AI programmers. Right?

Like, you can actually strongly type this stuff, and then you can actually use language to translate that into saying what the product should actually do.

Speaker 2:

But there is still an artistry in that interface: what should it actually even do, and how? I think that's a very good point, Gary. I think maybe the other way to think about this advent of LLMs and programming: if you think about the history of computer science and programming languages, as we progressed, we moved to higher and higher language abstractions.

So we started, in the early days, with coding in assembly. Yes. And it took, like, so many lines of code just to do addition. Right? Then you went up a bit, with Fortran and then C++, where you still had to really know about the metal and manage your own memory.

Then you went to more dynamically typed languages, where you didn't have to think about the types, like JavaScript and Python. Right? Or duck typing. Right? And now there's a new thing: programming with English. Yeah. But you still need the

Speaker 3:

artistry and craftsmanship to come up with the design and the architecture. And interestingly, the best programmers today, even if they are programming in Python, have learned C. And they actually know a lot about how the computer works in the steps below in the stack. Yeah. Even if they're using the higher abstraction. Yeah. I was curious to ask everyone here.

Like, another potential counterexample.

Speaker 1:

is the natural-language-to-SQL idea that has been around for years and years and has never really taken off. And I always wondered how much of that is because it's hard to build and implement, and how much of it is because it's actually not as simple as just: I need someone to translate my thoughts into a SQL query.

It's knowing the right questions to ask about the data and having some representation of how the pieces fit together. You have to have some sense of the relational database in your head, at least the concepts, to ask the right questions.

If it's true that there is some step of thinking involved before that, then you can't just extrapolate from, hey, we started with binary code and abstracted it all the way up, eventually to natural language. There's gonna be some gap between the highest level of abstraction you can get and actual natural language.
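[Editor's note: a toy illustration of the point above, using a hypothetical schema and Python's standard-library sqlite3. The translation from English to SQL is the easy part; knowing that customers relate to orders through a foreign key, and what "biggest" should mean, is the thinking step.]

```python
# Why "English to SQL" is more than translation: the same English question
# only has a correct SQL answer once you know how the tables fit together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 50.0), (11, 1, 25.0), (12, 2, 40.0);
""")

# English: "Who is our biggest customer?" Writing this query requires knowing
# that customers link to orders via customer_id, and deciding that "biggest"
# means summed order totals rather than, say, order count.
query = """
    SELECT c.name, SUM(o.total) AS spent
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.id
    ORDER BY spent DESC
    LIMIT 1
"""
print(conn.execute(query).fetchone())  # ('Ada', 75.0)
```

Every modeling choice in that query (the join key, the aggregate, the tie-breaking) lives in the human's head, not in the English sentence, which is the speakers' point about needing the relational model before you can ask the right question.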

I think so. I mean, we kinda looked into a lot of these kinds of ideas and funded some companies doing this kind of idea.

Speaker 2:

I think AI will get to the point that you could actually do the translation from English to SQL, but I think the hardest part is not that. The problem with all these data models, and why data engineering orgs are so big, is that, as I saw when I had to manage these teams, they're very messy.

The reason is that the hardest part is the data modeling, because that's trying to encapsulate the real world, and the real world is messy. We have all these, yeah, annoying coefficients and frictions that we have to model. It's like, okay, this person talks to whom, and this workflow connects to what, and it's all very, very messy in a way that a perfect model and AI can't really encapsulate.

And you kinda need the human to think through it. Yeah. And that layer is, like: how do you put an LLM in to parse through that and translate it into the business requirements of the data model? Because if the data model is wrong, then it just causes all sorts of issues, and that's where things get hard. What do you think, Jared? I have a controversial

Speaker 3:

argument against what Jensen said. Alright. This one will probably piss some people off. Nice. My argument is that even if everything Jensen predicts comes true, and in the future you will be able to build a great app just by writing English, you should still learn how to code, because learning how to code will literally make you smarter.

We have an interesting piece of evidence for this, which is that there are a lot of studies now showing that the way LLMs learn to think logically is by reading all the code on GitHub and basically learning how to code. And I think programmers have long suspected this, that learning how to code made them smarter, but it was kind of hard to prove with humans.

And now we have some actual evidence that this is really true. There's definitely some evidence that for a certain class of

Speaker 0:

problems with LLMs, you're way better off having the LLM write code to solve the problem than having it try to solve the problem itself. Exactly. Yeah. So tool use is actually a very weird emergent behavior and property of these systems.
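[Editor's note: a minimal sketch of the "have the model write code" pattern just described. The generated_code string below is a hypothetical stand-in for a model response; a real system would sandbox execution rather than call exec directly.]

```python
# Instead of asking an LLM to produce a numeric answer directly (where it may
# approximate token by token), you ask it to emit a small program and run that:
# the interpreter, not the model, does the arithmetic.

# Hypothetical model output for the prompt
# "What is the sum of the squares of the integers 1 through 100?"
generated_code = """
def answer():
    return sum(i * i for i in range(1, 101))
"""

namespace = {}
exec(generated_code, namespace)   # execute the model's program (unsandboxed here)
result = namespace["answer"]()
print(result)  # 338350, exact: the code path cannot "hallucinate" the arithmetic
```

The emergent-tool-use observation is exactly this: the model delegates the part of the problem it is unreliable at (exact computation) to a tool it can drive (a code interpreter).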

Speaker 1:

Summing up: okay, let's say that one thing that's probably uncontroversial is there is absolutely gonna be some subset of programming work that will just be subsumed by LLMs. Maybe it's gonna be junior engineering work, like glue code, a whole bunch of certain types of programming work that we can all admit does not involve high creativity

Speaker 2:

or high human reasoning. I'd worry more about all the dev shops, the type of stuff that gets outsourced to dev shops. Or even, frankly, FAANG companies that have, like, armies of junior employees.

Speaker 1:

And so one potential consequence of that is, if we're not that far away from the junior AI software engineer, will we just see software companies have way fewer employees and converge on a point where you could have unicorns,

Speaker 3:

billion-dollar companies that have, like, 10 people in them. Sam Altman had a recent comment about this that also went kind of viral on the Internet, the idea that in the future, unicorns could have 10 employees or fewer, which has, well, never quite happened. I think WhatsApp and Instagram are probably the closest to that ever happening. Yeah.

This has been a thought for the last decade-plus in Silicon Valley, and we've always had flashes

Speaker 1:

of, oh, Instagram gets bought for a billion dollars with, like, 20 employees. WhatsApp gets bought for $13 billion with 15 employees, or whatever the numbers are. But we've never seen a sustained trend that we can point to. It's always these flashes.

Speaker 3:

But maybe now we're at the point where we will just see a sustained trend. It's interesting. I feel like people who are new to Silicon Valley and new to being founders want to have more employees, because employees are, like, correlated with status. Yeah. Essentially. Yeah.

And the more experienced founders, who've been doing this for a while, are obsessed with this idea of having fewer employees, having as few as possible. Because once you manage a large company with lots of employees, you realize how much it sucks. That's why this meme has been around in Silicon Valley for a long time. Yeah.

It feels like there are often two types of people who

Speaker 1:

really push for and are motivated by this smaller-teams idea. It's that profile. And then it's also just engineers who are naturally more inclined toward computers versus people, and are not excited about the idea of managing lots of people. Which is totally the Paul Graham thing.

Like, he was into this in 2005, long before it was a trend in Silicon Valley. Yep. It had to be a combination of foresight and personal preference. Right? Like, just not wanting to be in an office with hundreds of people. I met up with Mark Pincus from Zynga here at YC recently, and the most interesting thing he told me was,

Speaker 0:

I think at some point, a company gets to about a thousand people, and even the most forceful, the most with-it CEO sort of loses the capability to really impose their will on the company right around 1,000 people.

And if I reflect on some of the founders we interact with regularly who have thousands of employees, that's actually sort of their daily lived experience. There are these things that you know are extremely true: the company must go in this direction. And even then, you're a little bit boxed in and unable to enforce that.

I have to say, I feel like, of the founders I work with, especially the younger hardcore technical engineers,

Speaker 1:

I think they actually grow into leading bigger teams and just viewing people as a resource that should be used well. The example I have is Patrick Collison of Stripe. I worked with him on our first startup together when he was, like, 19.

And he was definitely the archetype of the incredibly intense engineer who wanted to be working on hard engineering problems all the time and viewed too many people around as a distraction from the core work; he didn't wanna be hiring people or doing any of this stuff.

At some point after he started Stripe, I think something changed, where he realized that the way to achieve his ambitions was to take an engineering mindset: view the company as another product that needs to be engineered and built, and people are a core component of that.

And I think he just embraced: I need to be a very effective leader and manager of people. And I'm not saying that in this new AI world Stripe wouldn't have fewer employees if it were started today, but I don't think he would have this internal motivation of: I need to just not hire anyone anymore.

It would just be more of an expected-value calculation: is it better for me to automate, or is it better for me to rally

Speaker 0:

people and use them as a resource? What do you all think? I mean, these are hard things for a young founder to approach, and actually, these are some of the reasons why my startup didn't go as far as I wanted it to. Maybe the most toxic or, you know, difficult thing that I struggled with was this idea that somehow your startup is your family.

And, you know, there's actually a clip online of, I think, Brian Chesky of Airbnb in a prior era actually, like, you know, saying that relatively emphatically. And then today, if you ask him, he would say, oh, no. No. No. This is definitely not a family. A family has all these old weird traumas.

Like, imagine, you know, bringing home, you know, a boyfriend or girlfriend, and they're, like, sitting with your family. And, you know, they go back, and they're like, well, what happened there? Like, why you know, why is that like that? And it's like, oh, you don't wanna you know, like, let's let's not ask about that. Right?

Having a family be your model of a company is actually kind of a bad thing. The much more functional version of it is actually a sports team. Like, here's actually what we're trying to do, and basically, we need to win.

I think wanting to win is sort of the ideal analogy, whereas, you know, for family, there are these weird things like, oh, we just want love. And I was like, oh, no, no, that's not what a company is for. That's not what a startup is for. We're here to solve problems and win. I guess I really wish someone had told me that when I was, you know, sort of 27,

Speaker 2:

going through my first stint at YC. I think that's a hard transition. I personally went through that, because we went from a very small engineering team to a very large one.

At Niantic, with Pokemon GO and all of that hyper-success, it was very jarring to go from that small, intimate team into an engineering org of, like, 50 people. That concept of going from "this is your tribe and family, where you really know each other" to getting the best performance out of everyone is very different.

And that's hard. What could be interesting in this era: if we imagine a world where there could be companies with fewer than 10 employees, maybe you could still be a family. But is that still a good idea? I don't

Speaker 1:

actually believe this. The truth worth talking about is, Jared, to your point of programming just sort of making you smarter: there's certainly some kind of learning founders go through when they hire people, build teams, deal with conflict, fire people, and learn how to get the most out of them.

Speaker 3:

That probably just makes them more effective overall. Like, maybe smart's not the word, but it certainly makes you more effective at figuring out how to work well with people and get the best out of them. Yes. You learn a lot about people in the process of having to build a company and a team. Yeah.

And I was thinking about what you said, Harj, about Patrick Collison and how he went from being a programmer to learning how to run a company. And I was realizing that's not just Patrick Collison. Actually, all of our best founders are exactly like that.

And sometimes people wonder how we can fund, you know, 18-year-olds with no prior management experience and expect them to build a big company someday. And it's exactly that. It's because they treat it like an engineering problem. Yeah. Actually, that's where you get back to the "programmers are smarter" point, basically.

It's like, you actually just treat everything as a programming problem. Yeah.

Speaker 1:

It all just starts with video games and then learning to code. So that's sort of the path. This is something I took away from reading the Larry Ellison Oracle biography; there are a bunch of nuggets in there.

One really interesting one is there was a period in time when he completely ignored the finance function at the company because he thought it was the most boring thing in the world. And then Oracle went through a near-death experience where they weren't on top of their budgets and expenses and almost ran out of money.

And he forced himself to get on top of it so they would not die from running out of money again. And the only way he could do it was to be like, okay, I'm gonna treat this like a programming problem. It's just numbers. It's process. I'm just gonna optimize this the way I would code.

And then he got really into it and just actually started really enjoying the whole process of process optimization, which then fed back into Oracle in a weird way because Oracle's business was a lot of, like, going to companies, figuring out which of their processes were messy, and trying to sell them software to, like, solve it.

He experienced the problem himself, then he built the solution that he wanted, and then he was able to sell that solution to everybody else, because everyone else had the same problem. Yep. Basically. But again, it all came from an engineer who wanted to avoid a messy people-process problem, taking it on, treating it like a programming problem, and actually becoming

Speaker 3:

more effective at it than the team that was built to work on it. I see this a lot with our technical founders who are doing B2B companies: they treat their sales org this way. They definitely treat sales like a programming optimization problem. Yep. It's, like, stereotypical,

Speaker 1:

actually. So what do we think the net effect of this is going to be overall? If AI makes us all more productive, if AI can start taking away some of the junior programming work, do we see a lot more unicorns? Does it make it possible for one company to become worth, like, a trillion dollars, or do we see a long tail of lots of unicorns started by much smaller teams?

And do we think the teams will even shrink? Because.

Speaker 3:

if we go back to predictions in the early two thousands, there were a lot of people predicting that as programming got more efficient, companies would be smaller. Because in the nineties, to build an Internet startup, you had to build everything yourself. You had to have people who knew how to rack servers. You had to hire people who knew how to optimize databases.

You had to hire people to run payroll. And then all of that stuff got turned into SaaS services, infrastructure, or open source, so you could focus on just your core competency. And there were a lot of people predicting that this meant companies would have fewer

Speaker 0:

employees, because they wouldn't need all those people that you needed in the past. I remember racking servers, but I bet a lot of people watching this have never even stepped foot in a data center. Don't even know what that phrase means. Yeah. Like, you know, what's a rack? How does that even work? You just go and click a button on a website, and, like, boom, I have a server. Right?

Like, that's how it works. Right? Yeah. And the funny thing is, we were looking at some data earlier, and what we discovered is it didn't happen, actually. Like, companies didn't get smaller.

Speaker 1:

And Harj discovered the reason why. There's this concept in economics called the Jevons paradox, which is essentially: once you make any service more efficient, like you make it cheaper to deliver, you increase demand for it. And so you actually just get more consumption. An example would be Excel spreadsheets making it easier to do financial analysis.

That did not decrease the number of financial analysts. It actually just, like, increased it. I think typewriters being replaced by word processors are another example, where, yes, the strict role of being a typist at a typewriter went away, but the demand for people with word processing skills went way up. So

Speaker 2:

software became cheaper to make, and programmers became more efficient, but it did not reduce the demand for programmers. It actually increased the demand for programmers. Which I think we actually see in the number of companies that apply to YC.

There was this essay from PG just ten years ago where he couldn't imagine a world where we'd have more than 10,000 applications per year. And at this point, we're getting over 50,000 applications per year. It's easier than ever to start companies because there's so much infrastructure built.

But at the same time, the requirements to be good at it and be a good founder are higher. I think it requires having even better taste and more craftsmanship to become the best founder now. Right? Yeah. Sometimes we joke: if we went through YC now as our younger selves, would we have gotten in? Mhmm. It's actually very competitive now because the baseline is just so much higher. Yep.

So in the end, you still need a computer science or engineering degree to really build that taste and craftsmanship, to really know what to build and build it well. You need to whisper to the AI, to the LLM. But how do you even whisper to it if you don't know how all this stuff works? There's this amazing Rick and Morty

Speaker 0:

meme where there's a little robot on the table passing butter, and he goes up to Rick and asks, master, what is my purpose? And Rick says, you pass butter. And then the robot goes, oh my god. And the funniest thing about that is, like, you know, there's so many people in the world who basically have that job, and they're not, like, robots, they're human beings, you know?

Like, their nine to five is something that is incredibly rote and not that invigorating or exciting to them, and yet that's, like, sort of their entire lives. And how could we not celebrate the fact that now we have more software, more tooling, potentially robotics coming around the way?

Like, that might free that person from having to pass butter, and they can go off and do something else, something more creative. Like, ideally, maybe they learn to code. Maybe they learn to actually create things way off on the side, in areas that OpenAI or, you know, Microsoft or, like, whoever the tech giants are won't get to. Like, companies can't do everything. They probably shouldn't do everything.

Not only that, it's not clear to me that Lina Khan will allow that. So, you know, given that, actually, maybe that's the opportunity. Like, rather than just a few companies worth a trillion dollars, my genuine hope, and I think we're trying to manifest this world, is actually thousands of companies worth a billion dollars or more.

And, you know, some of those might have a thousand employees, some of them might only have 10. Some of them might even be just one founder sitting there doing that thing. But at the end of the day, ultimately making it better for a real customer, a real problem, a real thing in society.

Speaker 2:

that frees someone from being a butter-passing robot that's a human. I think that's such a good point, Gary, and I 100% agree with that. I think part of it is we're in this world of post-abundance of sorts, where it's easier to build things. It's easier to get the infrastructure up and running if you get the right opportunity. And there's a lot of capital too, if you know where to tap.

But the bottleneck is: can you enable this equation of human capital to flourish and match that opportunity, and get the smart people who can do it and have a lot of ambition in front of this capital? And this is why, right now, our job is one of the coolest.

We get to do that and enable this flourishing of a lot of people who maybe would have been passed over in different situations, and give them a chance to build these companies that will go up against the trillion dollar ones. Right? Just a thousand billion dollar companies?

Speaker 1:

We have all definitely lived through and hugely benefited from this trend of the more powerful technology becomes, the easier it is to get a company off the ground. Clearly, like, just open source software. I mean, I just think back to when Jared and I first moved here, like, Rails was first taking off.

Speaker 0:

That was a huge innovation. Yeah. Right. Oh, that made me feel so powerful because before I had to use Java, and it was so disempowering.

Speaker 1:

Right. You had Rails, then you had Heroku kind of, like, come in and just make it easy to, like, deploy and, like, you know, be your own sysadmin, essentially. And so I just think that clearly made it easier for anybody to get their company off the ground. It didn't necessarily mean these companies got much smaller.

We didn't get, like, lots of 10 person unicorns, but we certainly cast a wider net of people who could prove out that they had an idea that people wanted, with early signs of traction, which then is what you need to attract, like, the human capital and the actual capital to go out and scale these things.

So I think even if we end up in a world where, like, AI is not gonna be able to, like, build, like, your perfect complex distributed system and scale to, like, a hundred million active users.

Speaker 3:

Even if it means slightly more people can take their idea and turn it into something, and get it off the ground and get their first thousand users or their first bit of revenue, the human capital will come. The actual financial capital will come, and we'll just get more of these things, which is great for everyone. I love that, Harj.

And I think that's one prediction I think we can definitely agree is going to come true. And how cool that is, because there must be so many great ideas that just never get off the ground because the person who has the idea just kinda can't go zero to one to getting that flywheel going. Or getting in front of the right people.

I felt very lucky. I grew up in Chile in the middle of this desert. There's, like,

Speaker 2:

nobody really worked in computers, and there was just the Internet. And going through YC was one of those moments that changed my life and the trajectory of it, and really uplifted it. And I hope that happens for a lot more people that we can work with. Well, so it sounds like the verdict is in.

Speaker 0:

Learn to code. Yes. You should learn to code. Sorry. Jensen is brilliant, but he is not right every single time. I think one thing that is uncontroversial.

Speaker 1:

is that over the last ten years, there have been more unicorns started each year. Right? Like, and that's been because technology has made it more possible for people to get their ideas off the ground. I think AI only accelerates that trend. Right?

I think we should just expect to see more unicorns started per year than ever, because it is easier to go from your idea to, like, a prototype to your first users than it ever has been. And at the same time, it's still table stakes to be able to program and code, because so much of the foundational knowledge,

Speaker 2:

you have to have good taste to build something great. And you only get the good taste by going and studying engineering or computer science. The most important thing to me that.

Speaker 0:

I really want to manifest in the world that I think we get to do all the time at YC is that there are people here who are craftspeople or who could be craftspeople, and those are the people who are gonna go on to build the future. So with that, we'll see you next time.
