2024’s Biggest Startup Trends
2024 has been quite a year for AI and startups. As we head into the holidays and the new year, the Lightcone hosts reflect on this year’s biggest startup trends, moments, and breakthroughs.
Transcript
The wildest thing right now is you can start a company that can make tens of millions of dollars literally in twenty four months, and you can do it for potentially, you know, $2 million, $5 million.
A year ago, I remember many of the startups in the batch would get sort of enterprise proof of concepts or pilots in particular, and there was a lot of cynicism around whether any of those pilots would translate into real revenue. Fast forward a year, I think we have all firsthand experience that these pilots have turned into, like, real revenue. It's still early days, honestly.
Like, you know, we sort of breathe a sigh of relief right now in 2024, but it's anyone's game, honestly. Like, these things are moving so quickly. Welcome back to another episode of the Lightcone. I'm Gary. This is Jared, Harj, and Diana. And collectively, we funded companies worth hundreds of billions of dollars right at the beginning. So 2024, what a year.
How are you feeling about this, Harj? Pretty great. I think this is the year that everything broke in favor of startups. What I've been thinking about a lot recently is when ChatGPT launched, two years ago now, the immediate consensus view was that all of the value would go to OpenAI. And very specifically, do you all remember when they announced the GPT Store, the ChatGPT store? Yeah.
Like, I remember the consensus was everything that was built on top of ChatGPT was a GPT wrapper, and the GPT Store was just gonna be released and crush every single person trying to build an AI application. And OpenAI would be a ginormous company, but there'd be no opportunity for startups. Sounds kind of ridiculous to say that now. Who even remembers the ChatGPT store?
Exactly. The ChatGPT store itself was a nothing burger. But, more importantly, what are the big AI applications today? Like, I'd say outside of ChatGPT itself, the breakout consumer application is Perplexity. The breakout enterprise application is probably Glean, maybe. In legal tech, you have Casetext. You have Harvey. Prosumer, you have PhotoRoom.
Like, the point being, there are many, many applications that have been built not by OpenAI. It's been a great time to build startups. Yeah. The wildest thing right now is you can start a company that can make tens of millions of dollars literally in twenty four months from zero. And you can do it for potentially, you know, $2 million, $5 million.
That's sort of the story of one of these companies, OpusClip, which never had to raise a real Series A, and that's something that we sort of see across the YC community as well. Yeah. I think that's a particularly important point, that you can do it as a startup without raising tons of capital. Because post the GPT Store launch, I then remember Anthropic and Claude emerged.
And the consensus view for a while was all of the value's gonna go to one of these foundation model companies. And that the only way you can compete in AI is to raise huge amounts of money, either because you've got venture capital or you're Amazon or Facebook or Google with tons of cash already. But that if you weren't one of the big foundation models, there would be no value.
And the applications built on top of these things would either be built by the foundation model companies themselves or just not be that valuable. Again, something that turned out to be completely not true. Right? And in particular, what drove that is open source. Like, the weird series of events with Meta, like, the weights being leaked as a torrent. Yeah. Right.
That forced Meta's hand to launch Llama, which is funny. And people thought, oh, it was just this cool open source model, but it was eighteen months behind OpenAI. And people started doing a lot of derivative work out of it, like Vicuna and all these other animals related to llamas that came out.
And there was Ollama, one of the companies at YC as well, that enabled people to do local, kinda Docker-style development, like, models running on device. It was pretty cool, but people didn't think that they were gonna be able to catch up. And the thing that changed from 2023 to 2024 is that the summer was a turning point.
It was the first time that the top foundation model in all the rankings and benchmarks was Llama, and that was a shock to the community. Yeah. So it turns out choice matters, and choice means that it's not as much about the model. I think the model still matters quite a lot. But once you have choice in models, it means you can't have this sort of idea of monopoly pricing. You have that model.
Your competitor also has that model. But all the other things seem to end up mattering a lot more, which is product, your ability to sell, your ability to actually adjust to user feedback, your ability to get to zero churn. All of those suddenly become far more important than capturing a light cone of all future value through the model. Right.
A very specific way I felt this is I remember a year ago working with startups in the batch that were essentially building model routers, just like an API to call a specific model. And I remember a lot of the motivation for that at the time was reducing cost. It was like, oh, you don't wanna just burn up all of your ChatGPT calls.
You wanna spread them out across, like, various different models. And the argument against that was just, oh, like, the cost of all this stuff is going down to zero anyway. Like, there's no value to being, like, a model router, and no one wants to build their applications with a model router. They're all just gonna call whatever is the best model.
I think fast forward a year, like, that's totally not true. Like, from what I can tell, the model router was actually a really great entry point into building sort of a new stack for LLM powered apps. And most of the applications we're seeing, I think they just don't wanna be beholden to a specific model. Does that map with what you've seen? Yeah.
Actually, one of the things we've seen now in the fall batch that just presented at demo day, which is one of the trends that shifted between summer twenty four and winter twenty four, was precisely what you're saying. Companies started to use multiple models for their applications, like the best one for speed at some point, because sometimes you need to parse a lot of the input very quickly.
It's fine if it's a bit more lossy, and then you need the bigger model to handle the more complex task. So a lot of companies in fall twenty four have this multiple model architecture to use the best one for each task, which is similar to the concept of the model router, but the idea evolved. Instead of being more of a routing, it was more of an orchestration.
I think a concrete example we gave a couple episodes ago was Canfor, a company you work with. They use the fastest model for parsing, like, PDFs, and for the more complex tasks, they use o1, and that's how it's done.
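A minimal sketch of that split, routing cheap high-volume parsing to a fast model and the hard step to a reasoning model. The model names, the `Step` type, and the `call_model` stub are illustrative assumptions, not any company's actual stack:

```python
# Orchestration pattern: send each pipeline step to the cheapest model
# that can handle it, reserving the slow reasoning model for complex work.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    complex: bool  # does this step need deep reasoning?

FAST_MODEL = "gpt-4o-mini"  # assumed fast/cheap tier: lossy but quick
REASONING_MODEL = "o1"      # assumed reasoning tier: slow, used sparingly

def pick_model(step: Step) -> str:
    """Route a pipeline step to the appropriate model tier."""
    return REASONING_MODEL if step.complex else FAST_MODEL

def call_model(model: str, prompt: str) -> str:
    # Placeholder: a real app would call its LLM provider's API here.
    return f"[{model}] {prompt}"

def run_pipeline(document: str) -> str:
    steps = [
        Step("extract_fields", complex=False),  # parse every page quickly
        Step("assess_risk", complex=True),      # one careful reasoning pass
    ]
    result = document
    for step in steps:
        result = call_model(pick_model(step), f"{step.name}: {result}")
    return result
```

The design point is that routing lives in one small function, so swapping which model backs each tier never touches the pipeline itself.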
And another company that's doing fraud detection has this concept of, like, a junior risk analyst, where they just use a fast and easy GPT-4o mini, and then they use the bigger one, like o1. Or the other example is, I think, Cursor talks about it in their episode with Lex Fridman.
They also have this complex architecture with multiple models, and this is why it works well: they have one very specific model for predicting what you're gonna type next, and one for understanding the whole code base. So very different tasks. So that's definitely happening now. Yeah. The other thing that popped up in the fall batch: there's a company I'm working with called Variant.
And what they're trying to do is take basically state of the art open source LLMs that can do codegen and then teach them aesthetics, starting with icon generation.
And so they built this huge sort of post-training workflow that should keep working as the open source models get smarter and better at codegen broadly; they can just, you know, take the next version of a model, take their post-training architecture and dataset, and basically teach that model aesthetics.
So what a certain thing is supposed to look like and not in a diffusion sort of way, but actually at the SVG level. And we think SVG will actually translate into all kinds of aesthetics.
So it's an interesting approach, and, like, one of the newer ones, in that post-training is a whole coherent way to sort of sidestep the idea that all of the value is accruing to the model, especially because of open source, to your point.
The other thing I've been having flashbacks to is, a year ago, I remember many of the startups in the batch would get sort of enterprise proof of concepts or pilots in particular. And there was a lot of cynicism around whether any of those pilots would translate into real revenue.
Lots of parallels to crypto, and how anytime there's some new interesting technology, well, blockchain more specifically than crypto. But anytime there's a new technology, enterprises always wanna run pilots and POCs, because it's someone's job to, like, check off: yeah, we did the hot new technology thing. The chief innovation officer must have his due.
We've spoken about this in one of our episodes, I think. And, again, fast forward a year, I think we have all firsthand experience that these pilots have turned into, like, real revenue.
And if anything, the startups in the YC batch now are going to sell into real enterprises faster than they have before, and are ramping up revenue and reaching milestones like a million dollars ARR faster than I've certainly ever seen. Yep. The fall batch just did this again; actually, the first time I think we noticed it was the summer batch of this year.
And one of the funnier things that we realized was, do you remember when Paul Graham would tell us how fast you needed to grow during the YC batch? 10% a week. 10% a week. And the wild thing is, in aggregate, across both, you know, summer and fall batches, that's what those batches did. Wow. Which I don't think has ever happened over the course of YC. Yeah.
3x over the course of YC, which I don't think has ever actually happened on average. On average. It was only the best companies that did that, the top quartile or something. Right? So the companies are better. The general thing that is true is just that the time it's taking to reach a hundred million dollars in annual revenue is trending down. Yeah.
And not only that, we had dinner with Ben Horowitz recently. And I remember he was saying when they started Andreessen Horowitz, the common understanding was that in any given year, there'd only be 15 companies that would even make it to a hundred million dollars a year in revenue.
And they said they ran the numbers over the last twenty years, and every decade, the number of companies that could actually make it to a hundred million dollars went up by 10x. So what was 15 per year maybe twenty years ago, I mean, we're talking about 1,500 companies a year that have a real shot at actually making that number.
And when you combine that with what we're seeing in the summer and fall batches, it's not that surprising. And Jared had a really good argument on our last episode about how vertical AI is gonna enable 500-plus companies to bloom. Yeah. That's why it's growing so fast.
It's because the value prop to companies of these products is so incredibly strong that they're just flying off the shelves, because companies are smart, and they can do an ROI calculation.
And when the ROI is fantastic, all these truisms that people believe about enterprise sales cycles and, like, what it takes to get big enterprise deals go out the window, because companies are smart, and they'll make rational decisions. You know, Harj, there's another way that this broke in favor of startups that I was thinking about. It's hard to even remember now.
But a year ago, one of the things that people said a lot was that these LLMs are not reliable enough to deploy in the enterprise. They hallucinate. Yeah. That was why a lot of people said these pilots and POCs won't translate into real contracts: because, yeah, it's too risky a technology for people to actually deploy. Yeah.
And not only is it translating into real revenue, but it's translating into real deployments that are, like, being used at large scale, you know, doing thousands of tickets a day.
And I think it's because we've learned how to make the agents reliable via the kinds of techniques that Jake talked about when he was here, and all this infrastructure has grown up around the models that's enabled people to make them reliable. Yep. That's actually a big trend this year, this concept of thinking of AI as more agentic. Mhmm.
That is a term that kinda bubbled up a lot this year. It was not really part of the conversation last year. Last year was more about a lot of things that were kind of chat-like. "ChatGPT for X," I mean, that was kind of the riff on it, but now it's remixed into a bunch of agents for XYZ.
I mean, Gary just put out a great explainer video about computer use from Claude. But the capability of the models just keeps pushing. Yeah. In the direction of being able to do, like, complex multi step things and actually take over your computer and call other applications and perform complex tasks that just didn't seem possible a year ago.
What about regulations? Seems like we sort of dodged a bullet there with SB 1047, and it looks like some of the Biden EO is not that likely to survive the Trump White House. TBD what that means in the longer term.
But certainly, one of the things that we were very worried about was that a certain amount of math beyond a certain level would suddenly become illegal or require registration at your local office. It's certainly been a weird time to be in tech, because I've never experienced software and technology intersecting with politics so much.
And in particular, I'm not used to genuinely caring about national politics affecting startups in a YC batch, or just, you know, companies that are less than a year old. But it really was worrying for a moment. It wasn't clear whether the startups would actually be able to build innovative AI applications versus suffering from regulatory capture from OpenAI and a few big players.
We're obviously very glad it broke in favor of startups. Seems like we're still in the early game. Right? I mean, it's very easy to see that the platforms themselves really will, or could possibly, resemble, you know, the Win32 monopoly. Right? Windows has access to the APIs. They, in fact, know all the stats about what's working on their platforms. And guess what?
They can build it into their platform. You know, we sort of breathe a sigh of relief right now in 2024, but, you know, it's anyone's game, honestly. Like, these things are moving so quickly. I wouldn't totally breathe your last sigh of relief yet. You know? We gotta keep working on this. Okay. So it's clearly been a great year for startups.
What else has been happening? Who else has it been a great year for, do we think? There's certainly been some big funding rounds. Right? Like, OpenAI unsurprisingly has raised huge amounts of capital. Scale? Yeah. Even within YC, we've seen, like, Scale AI has really broken out this year.
$6 billion for OpenAI, $1 billion for Scale, a billion dollars for SSI, the new Ilya Sutskever startup. Scale, I think, is just worth talking about because it's such a classic startup story. I mean, you were there in the early days. Right? You interviewed them for YC. Yeah.
Tell us what the idea was that they interviewed with and how they ended up landing on what, you know, is probably one of the best startup ideas of the last ten years. The fun thing about the Scale AI story is that it is the sort of epitome of the classic YC startup story. There's other kinds of startups that get started, you know, like SSI, for example.
That's not a typical YC startup story, where, like, some very well established people raise a billion dollars with, like, a PowerPoint pitch. But Scale AI is, like, the classic story of how young programmers can just gradually build a $10 billion company over time by being smarter and harder working than anybody else.
And so, yeah, when Alex interviewed at YC, he wasn't working on anything related to AI. It was a completely different idea. And the idea for Scale AI got pulled out of him by the market. And it actually still took several pivots, because the original idea at YC didn't have anything to do with AI.
And then for a long time, he was basically doing data labeling for the self driving car companies. They applied, as I remember, with, like, a health care related idea. Yeah. It was a website for booking doctor's appointments. Okay. Yeah. That's cool. And then they pivoted during the batch.
Yeah. Do you remember how they came up with the data labeling idea? Because this must have been, what was this, February? Yeah. The way they came up with the data labeling idea was that Alex had worked at Quora, and Quora had to do some data labeling for, like, moderation and stuff.
And so at the time, the big data labeling service was Amazon Mechanical Turk, and they were deemed unbeatable because they were, like, run by Amazon, and Amazon could throw infinite money at it. And it was already at, like, quite large scale.
But Alex had a unique insight, which is he'd actually used Mechanical Turk at Quora, and he knew that it kind of sucked to actually use. And so he had this sort of, like, unique insight on the world, and so he just tried to build a better Mechanical Turk, basically the version he would have wanted when he was at Quora.
And as I remember it, their early traction came almost entirely from one customer, Cruise. Cruise. Right? Yes. Which needed to do tons of data labeling on all the images that the cars were taking as they were driving around San Francisco. You gotta, like, draw a circle around the traffic light and things like that.
And the cool thing about Scale is that they've actually caught two waves. So they, you know, accidentally caught the first wave of all these self driving car companies. Because ML took off at that time in computer vision, there was just an unprecedented demand for labeled data for training sets that just hadn't existed before. And so they were able to ride that wave.
And then as that wave was cresting, LLMs got big, and all of these companies needed to do RLHF at very large scale. And Scale was just, like, perfectly positioned to move into that business as well. Yeah. I think the Scale story is just so interesting because it was pre-LLM. It was clearly a multibillion dollar business anyway. Yeah.
And then it caught the LLM wave, which has now propelled it into probably, it's gonna be, like, a hundred billion dollar plus company. And I'm seeing that at the ground level too, where many companies I had that maybe finished the batch, or even pre-batch didn't have an idea, pivoted into an AI idea that's taking off.
Like, I'm just seeing much more success in founders who waited it out and can find an idea that they just couldn't before. I have a company from a year ago. They pivoted the whole batch. They couldn't find a great idea. It actually took them six months after the batch until they realized one of their parents ran a dentist office.
So he just decided to go hang out at the office to see, like, if there was anything he could automate. And they just ended up building an AI back office for dentist offices. Love it. And now their week over week growth is fantastic. It's doing really, really well. And I'm seeing lots of cases like that spring up. Definitely seeing that as well.
I think there's something about the advantage of having all these very hardcore young technical founders that are willing to kinda just bet the farm and go all in on just a little bit of a glimmer of, oh, this is where the future is gonna be. Let me just try it, and then it actually ends up working. Like the story with the dentists.
I have a lot of teams that pivoted as well into different spaces where they kinda found those glimmers. Like, oh, computer use came out, and I have a couple companies that are working and betting and going in that direction, and it's, like, working well. I mean, it's still early. I mean, this is just the fall batch, but that's cool too. Okay.
So what are some of the specific trends and waves that startups have been riding coming out of the batches? Voice AI is something we've talked about. It's clearly maybe the most promising vertical for AI right now in terms of just raw traction.
Do you think voice is winner take all, or will it be something that has sort of a hundred different verticals that are very tailored to those specific verticals? That's literally one of the questions I get from some of our voice AI startups themselves. They're like, should I be going horizontal, or should I just continue to grow within my vertical?
It feels to me like voice touches everything, and there's so many different applications for it. There's probably infinite applications to build where voice is the interesting element of it. I mean, things just spring off the top of your head, like language learning applications.
I'm sure there's not gonna be just one really cool voice AI powered language learning application. It's probably gonna be multiple of them. Remote work, like teleconferencing, is probably a whole other area where there's interesting things to do with voice AI. And even within customer support, we highlighted a number of companies we talked about last time, like Parahelp and Kapa.ai.
Yeah. It turns out that customer support is not really one vertical. There's, like, many different flavors of customer support, and they're very different on the inside once you get into the details. Because I think there's very specific types of workflows you need to do per industry, and that's to the point of why vertical AI agents are gonna really flourish.
I mean, same thing for voice. It's just very different workflows. If you're building the, I don't know, the voice agent to do customer support for an airline, very different than doing it for a bank, very different than doing it for a B2B SaaS company, etcetera. Yeah. I guess that question of, is there going to be pure horizontal integration, is sort of like saying, will there only be one website?
Yeah. There's just gonna be both. There'll be horizontal infrastructure companies that do really well and vertical applications. Because to say otherwise would be like saying, oh, like, Stripe powers payments on the Internet, and it's also just gonna have all the most valuable applications that accept payments on the Internet. It's just not how it works.
Like, there's enough value in just being the horizontal infrastructure layer. So I'm sure there'll be great voice AI companies that just make it really easy for you to build your own voice AI application, while there'll also be hundreds of really valuable vertical apps. What are the other trends that we've seen besides voice? We were talking about robotics earlier.
We are certainly working with more founders building robots this year than, I think, any year ever. In history. Yes. What's driving that? I have an ex-Apple team called Weave Robotics that's going to try to ship a real robot in 2025. It costs about $65,000 to $70,000.
But that's actually what it costs to have the actuators and the safety needed to actually have it work in your home. I think it's actually driven by this idea that the LLM itself can be sort of the consciousness of the robot. Like, am I doing this thing that, you know, my owner needs me to do? You know, how do I actually interact with them and the other people in the household?
But it's funny, because then the vision language action model that might actually do a certain thing, like fold laundry, that's almost tool use inside of the broader LLM consciousness. So I feel like that's one of the things that I'm excited to see, you know, will it really work? And I think we're gonna find out this year.
I guess the way I think about it, robotics is basically half AI and half hardware. The AI half of the equation is starting to work, while the hardware is still hard. Hardware is still very expensive. Yeah. There's some evidence that being able to actually do laundry, for instance, might be one of the first things that gets shipped.
I think the dream case for startups is going to be that you can build just the AI or the software piece of it and run it on commodity hardware and do really great things.
The opposite case would be if you need to be good at the hardware and the software and they're, like, coupled together and you need to produce both. Then you would expect Tesla to be the obvious winner in the space, and it remains to be seen. I'm pretty optimistic.
We have multiple companies, I feel, that are trying to be creative on how to run the models on commodity hardware for specific use cases. It still feels early. It feels like robotics hasn't quite hit its, like, ChatGPT moment yet. Maybe the moment is self driving cars have been working in San Francisco. I don't think it's talked about enough.
People who don't live in San Francisco often don't realize the extent to which these are fully deployed in San Francisco, and regular people are riding them every single day. Yep. I saw Tony from DoorDash recently, and he said he exclusively uses Waymo, like, everywhere. I live in Palo Alto, so I don't have the option. I would love to. It'd be amazing.
I mean, the wild thing is there are only a few thousand of these deployed right now in the entire world. And they're all in San Francisco. Yeah. What about big flops for 2024? You know, I seem to remember that we started one of our Lightcone episodes all wearing Apple Vision Pros and Quests, and we have not talked about AR since. Diana, what happened? It hasn't happened.
There's this need for a lot of the hardware to be a lot more lightweight. Like, we need to get to this form factor, but there's actually constraints with physics to fit all the hardware in such a small form factor. And in order to have enough compute and the optics to fit, it's just super challenging.
And I think there's still more actual engineering and physics that needs to be discovered. And that's it. I think the algorithms are there, but it's just lots of really hard hardware and optics problems.
It's a tough chicken and egg problem, because there's not enough hardware in people's hands for it to be worth it for app developers to build apps, and so there's not enough apps for people to wanna buy the hardware. And for the people who did buy one, the killer application so far seems to be using it as a really large monitor. And it does work very well for that.
And for watching movies. You've actually retained as a user, Gary, right? Yeah. It's great for watching movies. Yeah. Maybe the one device that I've actually been playing with and that actually feels good is the Meta Ray-Bans.
Oh, yeah. They don't have any actual displays, but I really like them for the audio and voice. And one workflow I've been trying out is actually using the Meta Ray-Bans and connecting them to any of the voice modes for either ChatGPT or Claude, and kinda having a conversation with it. Mhmm. About a topic? Oh, I haven't tried that. Yeah. Yeah.
That's a great idea. That is, like, a fun thing that I've been doing, just chatting with myself. Maybe I look a little bit like a crazy person while I'm walking, but it's been fun to kinda learn about different topics. Should we talk about AI coding? Oh, yeah. Twenty twenty four was the year that AI coding really broke out. Yep.
I mean, the majority of YC founders now use Cursor or other AI IDEs. They just, like, exploded over the summer. Devin proved that you could, like, fully automate large programming tasks. Yeah. All that was this year. That's pretty wild. Replit agents continue to improve. Like, I'm hearing more anecdotal stories of people building Replit apps on, like, their way home from work.
Yeah. I'm really impressed. Like, Replit took this technology and popularized it among, like, nontechnical people for the first time. That's really crazy. And an even lower technical version is Anthropic's Artifacts, where you can actually prototype very simple apps and chat with Claude to build really simple front pages.
And then you could prototype stuff as a PM and show it to your engineering team, and it's like a full fledged working version. Yeah. It's wild, because it just means that one person can do so much more. And do you think it's gonna change the nature of how startups are actually hiring? Are you seeing this yet?
Like, some of the founders I've met who recently raised their seed rounds coming out of YC, they're not really approaching it how maybe the classic advice would teach them. In the past, you might say, let me try to hire more people.
Like, you know, there are certain tasks where normally I have to find, you know, the person who did it at my competitor, who did all of customer success. And I need to find that person who's under the person who runs that function, and I've got to hire that person and promote them. And they're gonna come with all this knowledge and people networks.
Some people are saying sort of the opposite, which is, I'm going to get my software engineers to write more processes that use LLMs upfront. And, you know, I probably will end up needing to hire that person, but maybe after the Series B or C and not right now. Yeah.
I think I've seen that as well with companies after the batch, where they're looking for engineers that have more upside and are really fully native with the AI coding stack. One of the clever interview tricks I've seen is people do pair programming and watch them use the tools, and you can really tell if someone has really tinkered with them.
You want an engineer that is not only good at coding, but also at prompting and telling when the AI output is not correct. I think the part of reading and evaluating the output of all these AI coding agents is actually a lot more critical. Critical. Yeah.
There's been an interesting controversy this past year about AI coding agents and programming interviews, because AI coding agents basically broke the standard programming interviews that companies have been doing for years. Actually, Harj, I'm curious what you think about this; you ran a programming interview company.
I mean, I guess the interesting debate is whether you should penalize or prevent people who are interviewing at your company from using Cursor or one of these tools to ace your programming interview, or whether you should just lean into it and adapt, and test to see how productive they are.
I generally think the way these things tend to go is in that direction: you'll just be measured on your absolute output, and the bar will go up.
I think Stripe, for example, was early on this about a decade or so ago, when they recognized that so much of what they needed their programmers to do was build web applications and web software, not solve hard CS problems.
And so the industry shifted away from the Google-style interview of computer science problems and whiteboarding to just giving someone a laptop and having them build, like, a to-do app in four hours.
So I think we'll see the same thing happen: the industry will just adjust, and you'll be interviewed using these tools and just be expected to do a lot more in, like, a two-hour interview than you are today.
To your point, Gary, around the startups, like, how many people they need to hire or how they scale, it seems too early to see dramatic effects on that yet. But one thing that I'm interested in is, I watched an interview with Jeff Bezos recently, and he said that, well, one, he's back at Amazon working on AI.
And two, that apparently Amazon itself has, like, a hundred, or maybe it was a thousand, a surprisingly large number of internal LLM-powered applications, presumably just to run Amazon. The last time Amazon took something it ran for internal infrastructure and released it to the world was AWS, which completely changed how startups are built.
So I'm curious to see if they have interesting applications for running Amazon internally that they'll just release, and suddenly there'll be new stacks to build and scale your companies on. And we'll see the whole thing we talked about in recent episodes: the 10-person, even the one-person unicorn.
One of the applications they did talk about was a giant migration from an old version of a programming language. Whenever you need to upgrade versions of a database or a language runtime, it's a lot of work. They used LLMs for it. It meant changing hundreds of thousands of lines of code, an engineering project that would have taken six months or more, and it was done in weeks.
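[Editor's note: the migration pattern described here can be sketched as a simple loop over source files, with the model call injected as a plain function. This is a minimal illustration, not Amazon's actual tooling; the prompt wording, the `rewrite` callable, and the `.java` default are assumptions.]

```python
from pathlib import Path
from typing import Callable

def build_migration_prompt(code: str, source_version: str, target_version: str) -> str:
    """Construct the upgrade instruction that would be sent to the model."""
    return (
        f"Rewrite the following code from {source_version} to {target_version}. "
        f"Preserve behavior exactly; change only what the upgrade requires.\n\n{code}"
    )

def migrate_tree(root: Path, rewrite: Callable[[str], str], suffix: str = ".java") -> int:
    """Apply an LLM-backed rewrite to every matching file; return the count changed."""
    changed = 0
    for path in root.rglob(f"*{suffix}"):
        original = path.read_text()
        updated = rewrite(original)
        if updated != original:
            path.write_text(updated)
            changed += 1
    return changed
```

In practice, `rewrite` would wrap a model API call plus a compile-and-test check before anything is written back, which is where most of the engineering effort in a migration like this actually goes.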
I mean, Amazon's just such a perfect use case for LLM-powered agents doing back office processes. They must have just, like Yeah. Just an absolute gold mine of opportunities. And they just launched their big foundation model, actually, which is starting to place at the top of some of the benchmarks as well. So I think they're trying to be another contender in this race. Yep.
That's interesting because, from the bottom up, certainly some of the people who still work at Amazon, maybe right out of college, many of them do not have access to LLMs, or are actually barred from using them in their day to day. So, you know, maybe that's one of the downsides of organizations.
When they get big enough, you know, the future is already here, but it is not evenly distributed, even within the same organization. Interesting. But that bodes well for both open source and self-hosting LLMs. Like, it's on my to-do list to build my own stack of Mac minis and run Llama on my own little cluster on my desk.
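[Editor's note: self-hosting a Llama model on a desktop cluster like this is now fairly routine. As an illustration, a locally served model can be queried over HTTP; this sketch assumes an Ollama server on its default port, and any OpenAI-compatible local server would look similar.]

```python
import json
import urllib.request

# Ollama's default local endpoint (assumed; adjust host/port for your setup).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint, non-streaming."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the locally hosted model and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The appeal of this setup is exactly the point made above: the model runs entirely on hardware you own, with no per-token cost and no data leaving your desk.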
I bought all the hardware to build my own machine, but then we had a baby and it hasn't happened. Well, at some point. I've been pretty excited that, you know, YC has been operating back in person in San Francisco for some time, but we got a real live demo day Yeah. All the way back. So no more Zoom demo days. No more Zoom alumni demo day.
You know, we did alumni demo day right here in this office, right downstairs. That was awesome. And then we took over the Masonic Center, and 1,200 investors all in one room. It was actually really great for the founders, I thought, because there were about a third as many founders as the summer batch.
And it was more than two, maybe three times the number of investors who came to our investor reception party. So it was like a ratio of 10 investors per company, roughly. So I think all of them had a really good time. I'd almost forgotten how great the energy of an in-person demo day is. It's just not something you can replicate over Zoom.
The YC demo days also always acted as the de facto investor reunion in Silicon Valley, because it's the one event that all the investors would reliably show up at. And so they were really excited that we had brought it back, because when we weren't doing it, there was no equivalent event. Sort of the homecoming for Silicon Valley. Yeah.
So now it's, you know, four times a year, and it's the one time that all the top early stage investors in the world are gonna come back to San Francisco for that week's festivities, hopefully culminating in our demo day. So it's a real celebration. It feels like in person in general is back. That's certainly another theme of 2024.
Certainly, with late stage startups that we've been meeting with and speaking to this year, one of the highest priority items has been figuring out how to get everyone back in person, back in the office. I think the era of it's-gonna-be-remote-forever is definitely gone. Good riddance. Yeah. Exactly. Right?
And then, yeah, in person is back, and San Francisco is back. A lot of thanks to you, Gary. The elections recently seem to have gone well. There's a lot of optimism, I feel, around San Francisco. Yeah. We have a new mayor. We're hoping that he does the right things.
And, you know, we have a very thin moderate majority on the board of supervisors, but we did get rid of some of the worst people who created a doom loop in San Francisco. So I'm optimistic. You know, we didn't get everything we wanted, but it's tracking in the right direction.
And I think as in startups, as in politics, you always way overestimate what you will get done in one year, but you always way underestimate what's going to happen in ten years. I think it's gonna take ten years. It's gonna take twenty years.
But just as startups went from 15 companies a year that could possibly make it to a hundred million dollars a year, to 1,500 in any given year, knock on wood. You know, I think San Francisco needs to be the beacon for all the smartest people in the world, and that's actually probably the thing I'm most hopeful for: that we can actually keep building.
So from all of us to all of you watching, happy holidays, and we'll see you in the new year.