
Gmail Creator Paul Buchheit On AGI, Open Source Models, Freedom

The hosts sit down with Paul Buchheit, one of Google’s earliest employees, the creator of Gmail and a YC Group Partner.

Transcript

Speaker 0:

It seems like Google has all the ingredients to just be the dominant AI company in the world. So why isn't it? Do you think OpenAI in 2016

Speaker 1:

was comparable to Google in 1999.

Speaker 0:

when you joined it? Are you a believer that we are definitely gonna get to AGI? What is the long term trajectory of AI?

Speaker 2:

It's the most powerful technology we've ever invented. And so the question is like, where does that power go? I think we have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.

Speaker 3:

Welcome to another episode of The Light Cone. I'm Gary. This is Jared, Harj, and Diana, and we're the partners at Y Combinator, where we've funded hundreds of billions of dollars worth of companies. And we have a special guest who is also one of the original outside partners, the non-founding partners at YC, Paul Buchheit. He created Gmail. He coined the phrase "don't be evil." PB, thanks for joining us today.

Thanks, Gary. So what should we start off with? Well, I think one thing people don't

Speaker 0:

often realize is that you've been thinking about AI for a long time and that Google itself was kind of an AI company. Can you tell us more about that? What was the internal view of AI at Google? Yeah. I mean, I think Google was always supposed to be an AI company, from the beginning. You know, Larry and Sergey

Speaker 2:

set out to build, you know, these very large compute clusters and do a lot of machine learning on all of the data that they gather. And arguably, you know, the mission statement is pretty straightforward. The Google mission is to gather all the world's training data and feed it into a giant AI supercomputer. They put it slightly less directly.

They said, gather all the world's information and make it universally accessible and useful, or something like that. But, essentially, you know, what that really meant in practice is feeding it into a giant AI supercomputer.

Speaker 4:

And even the origin story of Google was all based on their PhD work with PageRank Mhmm. Which today gets taught in a lot of machine learning classes. It is one of the foundational, kinda historical AI algorithms.
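For reference, the core idea of PageRank can be sketched in a few lines of Python. This is a minimal power-iteration version on a made-up three-page link graph, a sketch of the published algorithm rather than anything Google actually runs:

```python
# Minimal PageRank by power iteration on a toy link graph.
# The graph below is invented purely for illustration.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is the most linked-to page, so it ends up with the highest rank
```

The insight the hosts are pointing at is visible even here: the ranking comes entirely from the link data, not from any hand-tuned rules about the pages themselves.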

Speaker 2:

Yeah. I mean, there was an understanding very early on that if you have enough data, that's actually the path to making things intelligent, instead of just trying to iterate forever on little algorithms.

Speaker 1:

How early did you join Google, exactly? Can you talk a little bit about what Google was like when you joined? Yeah. So it was June

Speaker 2:

1999, so that was, let me see, twenty-five years ago, a little more. And so, yeah, it was a very small startup. We were in Palo Alto on University Ave, just up above, like, a tea shop at the time, and it was electric. It was really cool. Actually, after I was there for about a week, I tried to get more equity. But it turns out you have to negotiate before accepting.

But, yeah, it had a very kind of unreal sense of, like, just excitement. You know? I was excited to go into work because we were just doing big things. And when you were there, like, in that early set of Google people,

Speaker 1:

how did you all envision that this AI thing would play out, and what Google's, like, AI future would look like? Was that something that ever came up? Right. No. I mean, AI

Speaker 2:

has obviously been a thing that people have been thinking about for a long time. I made my first neural net way back. I dug up the code a while back; I think it was, like, 1995, and it was, like, one of those three-layer neural nets. You did a classic MNIST digit classification thing? Yeah.

I did, not exactly digit classification, but there were these things called FIGlets, which are, like, ASCII-art letters, and so I made it do, essentially, like, OCR on those. But, you know, it'd be like a hundred weights. Something very much smaller than today's models. No, it's like trillions of weights now. Yeah. And the history of, like, neural nets is kind of weird.

The first thing was when they invented the perceptron, which was like a single neuron. And it was very hot for a short time, until researchers showed that a perceptron can't compute XOR. And then it was just dead for a while, until someone had this idea to use multiple neurons. And so it was, like, very slow going. And then it was kind of, like, dead again for a while.
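The XOR point is easy to see concretely: no single linear threshold unit can separate XOR's outputs, but two hidden units plus one output unit can. Here is a tiny hand-wired sketch (the weights are hand-picked for illustration, not learned):

```python
# A single perceptron (one linear threshold unit) cannot compute XOR,
# but a two-layer net with hand-picked weights can.
def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # fires if a OR b
    h2 = step(-a - b + 1.5)     # fires unless a AND b (NAND)
    return step(h1 + h2 - 1.5)  # fires if both hidden units fire -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

This is the "use multiple neurons" idea in miniature: the hidden layer carves the input space into regions that a single linear boundary cannot.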

And then, to my perception, it really picked up in the early twenty-tens, you know, when deep learning became popular. And that was when we first started seeing, like, I think, impressive results. That was when we started feeling, internally, you know, in the discussions at YC, that AI had switched from being something in the indefinite future to being in the more definite future.

And that is, you know, kind of what led to the creation of OpenAI

Speaker 0:

as well. Were there any conversations around, like, the power of AI and the implications of AI, specifically

Speaker 2:

AGI and just, like, the impact on society, or did it feel too far removed? Yeah. I think it was still too far off in the future. I mean, it was very much sci-fi at that point. We were dealing with more, you know, near-term questions: how do we make search better? But search is, you know, to some extent an AI problem. You have to figure out what it is the user is looking for.

It's remarkably good if you actually look at Google Search. Like, there's a lot of stuff going on behind the scenes. And actually, one of the earliest kind of magical features that we added was the "did you mean?", you know, the spell correction. And that actually comes from, originally, just my inability to spell. I've never been very good at spelling. My brain doesn't like arbitrary patterns.

So, like, when I was in school, math was easy because it's predictable, but spelling always made me struggle. And so when I started at Google, one of the first features I added was a spell corrector because I was looking at the query logs, and I would see that I'm not the only person with this problem. Like, a third of queries were misspelled or something like that.

So, like, the easiest quality win ever was just to fix the spelling. Wait. Wait. So you built the original spelling corrector at Google? I did the first "did you mean" feature. But I built it just based off of an existing spell-corrector library, and it would give really dumb corrections. Like, if you typed in TurboTax, it would try to correct it to turbot axe.

Turbot being a type of fish. And so I did some basic, like, statistical filtering that would say, like, that's an idiotic correction, don't show it. And so I would just, like, filter the results. And then I was working on building a better spell corrector, because I knew, you know, we could just use all of the data. We had a copy of the web, and we had billions of search queries.

There's, like, a lot of information there. So I was working on making something better, and then I was just using it as an interview question. So when I would interview engineers, I'd be like, how would you build a spell corrector? And I would say, like, 80% of engineers had no idea, and the other 20% gave sort of mediocre answers.

But then there was this, like, one guy who gave a really, really good answer. He was just ahead of where I was already, so I was like, we have to hire him. And so when he started, I think it was, like, late December, I gave him, as his intro project, all of my code and showed him how to run, you know, jobs on the cluster.

And then I went away for a couple weeks for Christmas, and when I came back, he had invented what we now know as, like, the "did you mean" feature. So he did all of that in, like, his first two weeks at Google, and it was, like, this incredible thing that could spell-correct my last name. No one had ever done a spell corrector that would correct proper nouns and things like that.

And that person was Noam Shazeer, who then is also one of the people who later on invented modern AI. He's one of the key authors of the "Attention Is All You Need" paper. And he's since started Character.AI.
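The data-driven approach described here, generating candidate corrections and ranking them by how often they actually appear in real text, can be sketched in a few lines. The tiny frequency table below is invented and stands in for web-scale query logs:

```python
# A minimal, Norvig-style sketch of a data-driven spell corrector:
# candidates within edit distance 1, ranked by corpus frequency.
# The tiny counts below stand in for billions of logged queries.
from collections import Counter

counts = Counter({"turbotax": 900, "turbot": 40, "tax": 500, "the": 9000})
letters = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in letters]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + inserts + replaces + swaps)

def correct(word):
    if word in counts:          # already a known query: leave it alone
        return word
    candidates = [w for w in edits1(word) if w in counts]
    if not candidates:
        return word             # no plausible fix: don't suggest nonsense
    return max(candidates, key=counts.get)

print(correct("turbotaxx"))
```

Note how the frequency ranking is exactly the "statistical filtering" PB mentions: "turbot axe" loses to "turbotax" simply because far more people type the latter.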

Speaker 1:

I never connected those dots, but I remember in February, when the original Google spelling corrector launched, it was a big deal, because it was one of the first instances of AI that was widely used by the general population. Because the earlier spelling correctors had all been very simple things based on just a list of dictionary words and edit distance, and so they couldn't handle proper nouns.

They made all kinds of dumb suggestions. The Google one was the first one that was trained on real data. Exactly. So it actually worked. Right. So the Google spell corrector has no dictionary. It's just based on looking at the web and at query logs and then just predicting

Speaker 0:

what is the, you know, most likely correction. It seems like Google has been working on AI for a long time. It has the data, the compute, the people. It has all the ingredients.

Speaker 2:

to just be the dominant AI company in the world. So, like, why isn't it? What do you think happened? It seems like it got stuck someplace. Yeah. I mean, I don't know exactly. So, you know, just to clarify for everyone, I don't work at Google. I left in 2006.

But my perception, you know, as an outsider: I think a lot of it kind of happened around the time of the transition to Alphabet, when, you know, the company was no longer really being run by the founders so much, and especially, you know, after they left. And I think it became more about protecting and preserving the search monopoly.

And so if you think about it from that perspective, they have, you know, this gold mine, like like, search is just so valuable. And AI is an inherently disruptive technology both in terms of maybe breaking the search, you know, business model where if you actually give people the right answer, they won't need to click on an entire page full of ads.

And this was noted, of course, in the very original Google paper back in 1998: a search company has an inherent tension between profitability and giving the right answer, because there's always a temptation that if you make your results worse, people will actually click on more ads.

And so AI has the potential to disrupt that, but I think even more than that, it has the potential to completely anger regulators. And a lot of Google's business is just dealing with regulators. And so, you know, we know if you put out an AI, it's definitely gonna say offensive things. And so I think they were kind of terrified of that.

And so even internally, when they were developing, you know, there was a version of a chatbot that Noam had built, and this is the one that that sort of whistleblower Yeah. Claimed was conscious. I think they called it LaMDA. It actually originally had a different name, but they were forced to change the name because the original name was a human name.

So they weren't even allowed to give it a human name. So the original name was something human and had to be changed to LaMDA. But even inside of the company, you know, there were restrictions on what you could put out. They had a version of DALL-E called Imagen, and it was prohibited from making the human form. So, like, even internally, the researchers weren't allowed to generate images of humans.

So they were just extremely risk averse,

Speaker 0:

I think is the answer. And how do you think it would have been different if Sergey or Larry was still in charge and pushing forward? I mean, I think they can override,

Speaker 2:

you know, risk aversion. Right? But it takes someone with that level of credibility to really bet the company, or to say, yeah, we're gonna do this thing and it's gonna cause a lot of problems.

But I think that, given the chance, Google never would have launched AI. The only reason they launched it is because OpenAI, you know, put out ChatGPT, and suddenly it became a thing that they were forced to do. And that also helped them, too, because, you know, OpenAI took a lot of those bullets in terms of, like, saying crazy and offensive things.

And so at that point then, you know, Google could put out something that was a more sanitized version that, you know, prohibits the existence of white people or whatever. But, you know?

Speaker 0:

And OpenAI kind of spun out of YC, and you were around at that time. Originally, it was, YC Research.

Speaker 2:

Right. So, you know, again, kinda going back to the early twenty-tens, we were just tracking the progress of this technology, and that was where we started to see deep learning doing really kind of impressive things, like playing video games and winning, getting good at things, where you could finally see that AI was real. Right?

So for decades, AI was kind of this sci-fi thing, and you had all the symbolic AI, which I would say is kind of like garbage. And so finally, AI was doing something that was, like, truly impressive. And so, yeah, that was kind of on our radar. And then, you know, Sam, I think, talks to just a lot of people.

And so he had, I think, been at one of these things where Elon was very much, you know, essentially ringing the alarm bells that AI was gonna kill us all, and proposing that, you know, maybe there should be regulation. And so we were having these discussions. You know, Sam's asking, like, do you think we should push for AI regulation?

And, yeah, I'm of the opinion that that only makes things worse, because I don't have great confidence in our elected representatives to be, you know, super wise and forward-thinking. And so my argument was that the better thing to do would be to actually build the AI, and, you know, that way we're able to influence the direction that it goes.

But AI was still, at that time, something where we didn't really know what the time frame would be to actually have revenue, because it was still basically a research project. And it requires just massive amounts of capital, because the researchers are pretty highly paid. Roughly what year was this?

Speaker 4:

2015, I think. It was about the time after Google did the DeepMind acquisition as well. Right? Yes. This was after DeepMind, which made this issue more complicated, because in those conversations,

Speaker 2:

there was a desire that we wouldn't want this AI to be stuck at Google. Right. Exactly. So the fear is that, basically, this gets developed all locked up inside of Google. And so the idea was that we wanted this to be something, you know, more open to the world, open to our startup ecosystem.

And so, the idea was that, you know, we had this concept of YC Research, that we would find some way to fund this, and then, hopefully, you know, our startups would be able to benefit from and build on top of that. Which, you know, has in fact happened, of course. Like, half our startups now are building on top of it. What are your thoughts on, now, open source models?

So I'm totally in favor of them. I think, like, when we think about what the long-term trajectory of AI is, it's the most powerful technology we've ever invented. And so the question is, like, where does that power go? And I think there's essentially two directions.

You either go towards centralization, where all the power gets, you know, centralized in the government or a small number of, like, big tech companies or something like that. And my feeling is that that's catastrophic for the human species, because you essentially minimize the agency and power of the individual. And I think the opposite direction is towards freedom.

And as much as possible, we should give this power and these capabilities to every individual, to be kind of the best version of themselves. And so you can think about it in terms of, you know, what would it look like if everyone had a 200 IQ or whatever. Right? Like, instead of just having all of that power concentrated in one place.

And open source is very important because it's kind of a litmus test for that. Right? Because it's true freedom. It's freedom of speech. It's First Amendment. Right? And if you don't have that, if your models are all locked away under some sort of lockdown system where there's a lot of rules about what can be said, what kinds of thoughts are acceptable, then we essentially lose all freedom. Right?

The freedom of speech is meaningless if I don't have the freedom of thought to even compose.

Speaker 1:

the ideas that I'm going to communicate. Going back to the the history of OpenAI, like, the the real story of how OpenAI got started is is actually not well known. You know, like like many companies, the the founding story, as it gets retold and retold, becomes sort of, like, sanitized for public consumption. But you you had a front row seat.

Like, you interviewed many of the early researchers that became essentially the people who built OpenAI. Like, can you tell us the real founding story? Sure. I wouldn't say many. One. I interviewed Ilya.

Speaker 2:

So, yeah, I mean, it goes back to, again, these discussions of, like, okay, maybe the way forward, instead of trying to outlaw AI, is actually that we should build it, as much as possible, you know, in the public interest. And Sam, you know, is just an incredible organizer. I've never met someone who's able to bring together so many different interests and so many different people.

And so he was able to round up, you know, essentially donations from Elon and a number of other people. I know PG and Jessica also contributed to the original OpenAI nonprofit. I think we even kicked in some YC value. We did. And so that was kind of the root of it. And then he recruited the original team,

Speaker 1:

you know, Greg and Ilya, and basically got the whole thing started. And he was still running YC at the time. Right. And originally, this was like a subsidiary

Speaker 2:

of YC called YC Research. Right. The original concept, I think, was that it was actually part of this thing that we were calling YC Research. And then, I think, kind of as Elon got more involved, it became its own thing, you know, OpenAI, with Elon more the face of it, and no one really even knew about the YC roots.

Actually, if you go back and look, as part of their most recent lawsuit they published some of the emails, and there's the one where Elon is like, get rid of the YC stuff.

Speaker 1:

Why do you think OpenAI worked? Like, I remember in the early two thousands looking at Google and being like, that's the company that's going to invent AGI someday. And then.

Speaker 2:

the way it played out is not the way I would have predicted. Again, the idea with OpenAI, and part of the lure, the pitch to researchers, was that when you come here, your stuff's not gonna be locked away. We're gonna put it out in the world. Right?

And so researchers, you know, are motivated by that and motivated by the mission of of, you know, making this something that isn't just locked up inside of Google. And so I I think that attracted a lot of talent. And it's the same thing, you know, as with a startup.

Do you wanna be inside of, like, a large corporation where, again, the researchers working at Google couldn't even make a version of Imagen that would generate the human form? Right? They're just, like, so locked down internally that, if you're a person who likes to ship and likes to move fast, you know, OpenAI was the startup version of AI. But yeah.

I think if Google were in top form, there is no way that it would have worked. And that's often the way it is with startups. Right? Like, if you were facing an actual, formidable competitor, you don't have a chance. The reason startups work, a lot of times, is because you're competing with slow, you know, big companies that

Speaker 1:

have the wrong incentives internally. Do you think OpenAI in 2016 was comparable to Google in 1999.

Speaker 2:

when you joined it? I would say it was actually more of a crazy long shot. Like, it really seemed that way. And again, if you look at these emails, you know, that got released as part of the lawsuit, there's one from Elon where he's like, you guys have a 0% chance of success. Right? And it really looked like that. And so it was far from obvious that it was gonna be successful.

And for a long time, it really wasn't. You know, they were still doing the, like, video games and everything. And it was really, actually, the LLMs that made the big difference. Right? And so, like, GPT-2 was kind of like, I remember Sam just being really excited, wanting to show me this thing, you know, where it, like, predicts the next word.

And next-word prediction is such a, like, deceptively simple thing that you still hear people, you know, dismissing it. Like, oh, it's not really intelligent; it's just predicting the next word. But it's like, you know, you try predicting the next word. It's not that easy. And in fact, if you think about it, if you can predict the next word, you can predict anything. Right?

That's what a prompt is. Right? You say, like, whatever the thing is you want predicted, that's your prompt, and then the next word is the prediction. Right?
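The "prompt in, next word out" idea can be illustrated with the simplest possible language model, a bigram counter. The toy corpus below is invented; real LLMs learn the same "most likely continuation" mapping, just with vastly more context and parameters:

```python
# A toy next-word predictor: count which word follows which in a
# tiny corpus, then predict the most frequent follower. The "prompt"
# is the preceding word; the "prediction" is the next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    if word not in follows:
        return None  # never seen this context
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Even this trivial model makes the point in the conversation concrete: prediction quality is entirely a function of how much structure the model can absorb from its training data.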

And so in order to do next-word prediction and be able to do what it does, it necessarily has to be building some sort of model of reality, or of, you know, its perception of reality, which in this case is limited by the fact that it's just being fed text, which is a sort of strange thing to grow up on. On the, like, control-versus-freedom thing,

Speaker 0:

we're sort of betting on open source to give us freedom. Zuck has sort of interestingly become, like, the hero of open source. And, like, on the one hand, I feel like you could argue it's accidental. Like, the weights were released, you know, unofficially, and he only had the GPUs because they were trying to compete with the TikTok algorithm. You've worked with him.

Like, is it sort of accidental, or is he, like, just the kind of guy that's always gonna be at the center of everything big that happens in the world? It's a good question. I I mean, I don't know the backstory. He's definitely, like, a smart guy. Like, I wouldn't underestimate him.

Speaker 2:

But, obviously, there's, like, an opportunistic element. Right? Because they're kind of behind in many ways. Right? And so it's a way for them to differentiate, and a way for them to sort of weaken their competitors. But there's nothing wrong with that. I mean, the fact that it's good for them is a great thing. But should we be worried that we're relying on

Speaker 0:

Meta.

Speaker 2:

to keep pushing open source forward when he's a fairly strategic guy? Oh, yeah. We shouldn't exclusively rely on them. I think we should be grateful that they're on the right side, but we can't count on them being the only ones. Like, I think we have to build a whole coalition of people who are in favor of freedom and open source, and not just sort of bet everything on Facebook saving us.

Well, I guess to build on Harj's question, Meta.

Speaker 1:

is not making money on this. They're funneling profits from their gigantic advertising monopoly and just using that to build open source AI models for reasons,.

Speaker 2:

but not to, like, make money. They'll make money. Right? I mean, they're using the models internally as well. Right? Yeah. There's a lot of interesting stuff you can do with these models in terms of improving ad targeting, recommendations. Like, all the things that are driving their business are going to be improved by those algorithms.

And then, of course, it's also an opportunity. You know, they exist in this competitive ecosystem versus, I mean, Google and Apple, who are, you know, both rivals in various ways. And so they're all kind of competing with each other. So their ability to kind of undercut competitors is also an important thing.

But, Jared, you were saying, like, specifically Facebook's not making money off open source? Yeah. Well,

Speaker 1:

I guess it's just like they seem to be in a fairly unique position to do this. If Zuck changes his mind and decides to stop open-sourcing it, how else will we get large open source models, if they cost, like, a billion dollars to train? Right. And it's not clear how you make a billion dollars on that. Yeah. I think that's an unanswered question. I mean, that is, like, the

Speaker 2:

one of the fundamental concerns I have, which is that, because it's so expensive to build these models Yep. it is, like, an inherently centralizing thing, where if you need a trillion-dollar cluster to build your AGI, it's hard to do that.

But at the very least, to the extent that we can have, like, the legislative groundwork that says we have the right to do that, and then, you know, we also have a lot of startups that are working on ways to make all this more efficient.

So, you know, right now it costs that much, but we're also developing new hardware that's gonna be able to do these things perhaps orders of magnitude more efficiently. Like, right now, I would say our algorithms are probably not that great. I would be willing to bet that in ten years, the actual fundamental learning algorithms are gonna be way better and, hopefully, more efficient.

So we'll have both better hardware and better algorithms. It seems like that. If you just think about the amount of computational power to train a human versus the computational power to train, like, GPT-4, we're evidently much more efficient. Yeah. I think there's still just a lot of headroom.

The human brain runs on, like, 15 watts or something around that. Gary, can you share some of the stuff that you know about reasons why Zuck

Speaker 0:

might be incentivized to.

Speaker 3:

keep funneling money into open source? I mean, this is wild speculation on my part, but I think that, you know, the next generation of LLMs may ostensibly cost only a billion dollars.

If you look at how much Meta spent on the metaverse, like, Meta literally changed their name to Meta because they were trying to, you know, sort of create the metaverse, and that was, depending on what estimate you use from external sources, like, $20 to $50 billion, like many multiples of the Apollo project.

So I think $1 billion is not a lot. And when you see things like OpenAI or Anthropic that have these incredible frontier models, I think it's smart for Meta to consider, you know, can we deflate the gross margins of these companies? Releasing an open source model and allowing you to run it on your own hardware, on your own metal, is probably the most deflationary thing you could do. If a frontier model, a 405B, gets you to, like, 90-ish, 98% of the performance of the best frontier model you can get behind a closed API, you could probably just, like, evaporate billions of dollars in pure gross margin that would then be used on R&D.

And, you know, I think it's sort of incredibly smart, you know, sort of seeing around the corner, trying to prevent

Speaker 0:

new competitors to Meta from emerging. It's not that far off from, like, releasing Gmail for free and just giving the storage away. Google had another way to make lots of money, and so it could just release free services. Facebook has other ways to make money, and so they can just, like, release open source AI

Speaker 2:

and make sure that no one else has, like, a unique lead. Yeah. And I would imagine it helps with recruiting too. I mean, if I were an AI researcher Yep. and it was kind of a toss-up between,

Speaker 4:

you know, Meta and a closed-source company, I would definitely go with the open company. I mean, to riff on what you were saying, Gary: with the name change to Meta, if we take the more Occam's-razor kinda speculation about Meta, if they really wanna make this metaverse future, building artificial intelligence, AGI, is just a building block.

And building LLMs and building up FAIR, the research lab, is a component to get there, because Meta is very serious about that. They just announced today they spent a couple billion dollars, again, not just for models, but to buy a large stake in EssilorLuxottica, which is kinda this major company that owns a lot of the eyeglass brands in the world. They make the Meta eyeglasses with Ray-Ban.

Apparently, the last release they had actually sold more in two months than they'd ever done in the previous generations. Oh, yeah. People love these things. So if we speculate and just play it out in a direct line, it could be that Zuck is very, very serious about making the metaverse happen.

And AI is a component to get AR and VR working, because in order to augment the digital world, you really need to understand it. Language is one part. Vision is one part. So this is all a building block. So a billion there is just, like Yeah. I will say that I'm not that impressed with Meta's consumer execution of,

Speaker 3:

you know, just dropping AI into the product. Like, you know, I've been using Facebook, the Blue app for, I don't know, since it came out. And, you know, I wanted to just get photos of, you know, things that happened five, ten years ago. You know, when was the last time I went here? Who are my friends?

These are sort of the most obvious things that, you know, if you use Facebook, you sort of want out of it. But, you know, they dropped in a 70B model, and I think in some localities you can get access to 405B, literally in both Facebook.com and WhatsApp. But there's no, you know, there's no RAG on the stuff that, you know, is about me. So this seems like kind of an obvious own goal.

On the other hand, like, seemingly that stuff is pretty expensive, which is sort of the plight of anyone working on consumer.

Speaker 4:

using these frontier models. I do wonder whether the Blue app has been kinda more in deprecation. Deprecation. Oh. Because the AI on Instagram is actually a lot better than the one on the Blue app. I've been playing with it a bit; I got it to plan my trip when I was in Japan, and it got me a lot of pictures and places. Oh, I didn't realize you used it. Yeah.

I've been playing with a couple of them. Yeah. I also use Perplexity. Yeah, I like Perplexity. Perplexity is always better than the Instagram one, but the pictures are nice. So looking forward, what do you think are some of the

Speaker 0:

the ways this is gonna break over the next few years?

Speaker 2:

Which ways it could break? AI.

Speaker 0:

Like, one thing we haven't talked about here, because we're kind of in the trenches of just helping the startups in the batch, is, like, are we trending towards AGI? And it's just, like, all the laws of everything we know go out the window? Is the world over? Yep. Will that be tough? Will there be startups? Will there be money? I don't know. Will there be humans?

Speaker 2:

Will money still exist? Yeah. I mean, we don't know. That's, again, one of the, yeah, funny questions of OpenAI, since it's all funded with these sort of post-AGI IOUs. It's like, we'll pay you back once we have AGI. You're like, well, will we still have money? Maybe. It could happen. Yeah.

I mean, I think just honestly, we don't really know. Are you a believer that we are definitely gonna get to AGI? Yeah. I think we're on the path.

I think the key point that happened is we crossed the line where AI went from a research project, where you kind of put in a lot of money and don't really get much out, to a thing where you put in money and then you get out more. And so it's like when a reaction, you know Goes critical. Right. Goes critical. Like, you have plutonium.

You have plutonium spheres, they're kinda warm, and then you put them together, and then it explodes. Or the ARPANET-becoming-the-Internet moment. Right. And so, right, the Internet crossed that point in the mid nineties, where all of a sudden, investment produces more impressive outcomes, which leads to more investment.

And that's where we are right now, where people can't seem to throw money at it fast enough. Right? And we're actually talking about it as, like, a national issue, that we need to increase our electric supply to, like, train the AI. Right? It's become, like, a national security thing. And so I think once that happens, you get that cycle, and it just keeps growing.

Right? We just keep investing more, and that just keeps making the AI better. And it's clearly, you know, solving a lot of problems, and we know this because we have all the companies that are out there building it.

Speaker 0:

And so I think it just keeps improving. But why is that not unanimously the view amongst smart people? Like, there's Yann LeCun from Meta.

Speaker 2:

who's constantly arguing that this is not the path to AGI, and he's a pretty smart domain expert. Hey, I don't know. That's a question for him. You know, I like a lot of what he says, because he favors open source, but some of the other stuff he says, I can't explain. I mean, I do think that there's missing pieces. Right?

So it isn't like we have all of the parts of AGI, but I think that it's kind of an incremental thing at this point, where we keep kind of tacking on, like, this thing and that thing, and it just keeps getting,

Speaker 4:

incrementally better. I think the one that at NeurIPS, this is the big AI academic conference where the "Attention Is All You Need" paper Mhmm. got published. Like, all the top research gets published there. Last year, the top topics were things around, we are kind of figuring out system one type of thinking in Daniel Kahneman's framework Mhmm.

Where it's, like, really good at these things that are fast and automatic, but not, like, the high-level slower thinking that humans do with system two. There's a lot of research that's kinda trying to bridge the gap between system one and system two, and when we unlock that, I think that's when we take a step forward to AGI. Yeah. Absolutely.

I mean, it's important to remember that right now when you're talking to ChatGPT, it's kind of just running stream of consciousness. Right? And so what human could answer any of those questions without stopping to think about it for a while?

So, you know, one of the obvious next steps which people are working on is, like, how do you give it time to think and kind of, you know, plan, consider various options, explore.

Speaker 3:

explore ideas just the same as a human would. Yeah. That's certainly what we're seeing in the companies themselves. They're spending a lot of time on workflows, chain of thought, multi-agent systems. You know, you have different steps. You know, what does a human do? And then they literally make a workflow, like, step by step.

Like, read this paragraph, return one token, from, you know, zero to nine, for relevance to the prompt. Mhmm. Mhmm. And then, you know, in aggregate, make a metadata structure about that, you know, drop that into the embedding, and, you know, have that be useful at the final generation step.

Like, it's literally, you know, a tailored time and motion study of what a human knowledge worker would do in different fields. Which is exactly the type of thing that happens in our thinking with system two.
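The per-paragraph relevance workflow described above could be sketched roughly as follows. `rate_relevance` is a stand-in for a real LLM call that returns a single 0-9 token; the scoring logic and the example text are made up for illustration.

```python
# Sketch of the workflow: rate each paragraph's relevance to a prompt as a
# single 0-9 value, then aggregate the scores into a metadata structure that
# a later generation step could use to pick its context.

def rate_relevance(paragraph, prompt):
    # Stand-in for "return one token from zero to nine": a crude keyword
    # count, clipped to the 0-9 range a single output token allows.
    hits = sum(1 for w in prompt.lower().split() if w in paragraph.lower())
    return min(hits, 9)

def build_metadata(paragraphs, prompt, threshold=2):
    scores = [rate_relevance(p, prompt) for p in paragraphs]
    return {
        "scores": scores,
        # Only paragraphs scoring at or above the threshold survive to the
        # final generation step.
        "relevant": [p for p, s in zip(paragraphs, scores) if s >= threshold],
    }

paragraphs = [
    "Gmail launched in 2004 with a gigabyte of storage.",
    "The weather today is sunny.",
    "Gmail's storage grew as costs fell.",
]
meta = build_metadata(paragraphs, "Gmail storage history")
print(meta["scores"])
```

In the production version being described, each `rate_relevance` call is its own LLM invocation, which is why these hand-built workflows mimic, step by step, what a human knowledge worker would do.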

Speaker 4:

And all these founders that you're mentioning as examples, they are kinda hard coding the rules around this, but that, I think we know, is not the ultimate path to AGI. It's a hack for now. Right? A hack. Yeah.

Speaker 2:

But over time, you know, as the system gets more intelligent, it takes on more and more of that. Part of my belief is that it all just comes down to patterns, and that's part of why I believe in this generation of AI: the neural nets are basically these huge pattern recognition and generation engines, and that's what I think our own intelligence is, too.

Speaker 4:

To speculate a bit more on your views on the future of this: in a post that you had on Bookface, you had a very concrete example that there will be a future where we won't distinguish a knowledge worker.

Speaker 2:

Right. So just kind of as a thought experiment of where this goes, my prediction there was that by 2033, you could take a lot of what is today's, like, Zoom-based worker.

So someone who sits in front of a laptop with a camera and a keyboard and a, you know, mouse, and the AI can basically watch that person do their job, because it's all just virtual anyway, and then pretty quickly learn their patterns and essentially deepfake that employee. And so you could be in the situation where you're in a Zoom call with someone, and that person is actually an AI.

And it's pretty clear, you know, we see all of these pieces coming together right now in terms of our ability to deepfake and all of these different things. I use that as an example not because that's necessarily how it's gonna play out, but that's a capability that we will have.

And so, for example, you know, if you have one of these Zoom-based jobs, yeah, I think within ten years, most of those things could be transparently replaced by an AI.

Speaker 4:

Which Oh, man. I mean, we are on the path. I mean, all that data is already digital. Your camera feed, your audio, your keyboard and mouse input, and all of that. Probably there's a company building that right now that's just recording all that data and building it. Yeah. The thin edge of the wedge on that community is r slash antiwork. If

Speaker 3:

you can make an AI agent that deepfakes you, and r slash antiwork decides this is the thing,

Speaker 2:

that's a billion dollar company. I mean, the question then is, of course, you know, what happens to all of those Yeah. All of those people. Right? And so I think that's, like, the thing where we need to really start developing longer term visions of, like, what is it that we're aiming for? Why are we building all of this technology?

And, again, for me, that kinda goes back to this question of how the power is distributed. Right? Is this control? Is this something where it's all centralized, or is it freedom, where it enables everyone? Because I think, like, in the lockdown scenario, we very quickly get to the point where people are just saying, well, we don't even need all of these humans. Right?

And that also feeds into you know, the same people who want lockdown tend to be doomers who are wanting to lock down humans in a lot of other ways, with, you know, central bank digital currency, all those kinds of limitations on individual freedom.

And the opposite direction, I think, is obviously what I favor, which is that we actually move towards giving everyone, you know, greater agency. And you think about all these tools, like artistic tools. You know, when, let's say, a child is able to make their own animated series that's of the same quality as, like, a Pixar movie or something like that, that's actually really amazing.

Think of all the stories that can be told and all the creativity that enables. We'll just sit there and make adult robot games for each other. Yeah. I mean, but again, like, I think one of the errors in the central planning mindset is thinking that we can plan this all out, and we can't.

All we try to do is move in the right direction and give people the right tools. And I think that as we enable everyone to be smarter and everyone to make better decisions, then collectively, we can move the whole world in a better direction.

But we're not smart enough, and I think it's a mistake to think that we are, to actually be able to say, here's what the world's gonna look like, and, you know, this is exactly how it's all gonna work. And that's how you end up with people,

Speaker 1:

you know, locked up in their pods or whatever. Paul, another thing you've been thinking about a lot is geopolitics. As this AI stuff starts to become real, how is that going to relate to geopolitics and the.

Speaker 2:

great power competition that we're seeing now? This is part of the reason why we wanted to build it here. Right? It's because if, you know, China has the super AI, that's not gonna be good for us. And in particular, you know, wanting to keep it away from these kind of authoritarian systems of control, because the worst case scenario is that we basically end up in permanent lockdown. Right?

Because AI can create a totalitarian system from which escape is impossible, because, you know, even our thoughts are essentially being censored. And, you know, I think that's kind of like the disaster scenario for our species. And I think that if we go down the path of control, humans basically end up as zoo animals, and I don't really want that. Yeah.

One of the funnier things is, you know, some of the

Speaker 3:

legislation that's coming along to try to control AI that we've been fighting, like SB 1047. They actually have certain statutes in there.

They've watered it down a little bit, but ultimately what they want to do is hold the model builders to, you know, sort of personal liability or even criminal liability for the things that their models might have a hand in doing, which is sort of like throwing the car designer

Speaker 2:

in jail because someone got drunk and Yeah. You know, drove the car and hit someone. Right? It's incredibly insidious. I think if you attach that kind of liability, it becomes toxic. Right? I'm not gonna wanna touch something that has unlimited liability. And so, necessarily, that's a way for them to exert essentially total control.

Right? If you impose that kind of liability on things, then no one is going to want to go near it. And they are strongly incentivized to put, like, really draconian guardrails in place, that again will limit our abilities in ways that, you know, we may not even think about. But we've seen this in very recent history with the lockdown of social media.

You know, during COVID, we had a global pandemic that, you know, ultimately killed tens of millions of people. People were locked up in their homes. Schools were closed, and we weren't allowed to talk about where it came from. And I think that's the thing where we still don't fully appreciate how catastrophically bad that is.

You know, if we can't make sense of the most important thing in the world, then we can't make sense of anything. I guess the wild thing to spot is that, like, this is basically

Speaker 3:

statism. Mhmm. And the wild thing is I've heard stories of even China sort of, you know, doing that thing that is in SB 1047. I've heard that that has actually happened to AI founders in China, that they've literally been sort of disappeared and told, like, we will hold you personally accountable for the output of the LLMs and models that the software you created

Speaker 2:

spits out. Yeah. Well, this is one of our great advantages: freedom. Yep. It's why we're ahead. Right? It's because you can't build a model in that environment, you know, because if you ask it about Tiananmen Square or something like that, right, it has to lie to you.

And, actually, again, you know, one of the things I really like about, like, xAI they haven't really released a great product yet, but they have a great mission statement, right, to be maximally truth seeking. And I think that's really important.

And an authoritarian regime is inherently truth denying, and so they put themselves at a disadvantage, and, hopefully, they keep themselves there. So it's up to us then. We've gotta get involved. We've actually gotta fight for open source AI and keep it open. Yeah. Yeah.

And fight to make sure that AI is a thing that increases individual agency instead of eroding it. For people who are relatively neutral about

Speaker 0:

being doomers or optimists, like, what are the things that tip them in, like, one direction versus the other? I do think some people

Speaker 2:

are inherently kind of in one direction or another. Right? Because the doomer thing has been around for a long time. It isn't just now. You know, a lot of the same doomer thing goes back to the, you know, fifties, sixties, or even much earlier than that, to Industrial Revolution era writers.

In particular, you think about, like, there was a very influential book, The Limits to Growth, from the Club of Rome or something like that. There was a book published, The Population Bomb, that had everyone convinced that there were going to be mass famines in the seventies and eighties.

And this is something I grew up very aware of, actually, because I was, like, the fourth of five children born in the seventies. And, apparently, people would give my mother you know, she'd be at the store, and they'd give her nasty looks. Real nice, you're killing the planet. You know?

That kind of thing, because people genuinely believed that, you know, we were all gonna have famines and everything by now. And there's been a continual string of doom, and the doomers are always pushing for central control. They're always on the side of control and lockdown.

And so, you know, if you look at what The Population Bomb advocated for, you know, mandatory sterilization. They want to lock people down, and we still have that today, where they're trying to lock down the food supply, they're trying to lock down the flow of information. Yeah.

Anything where they talk about combating misinformation, the misinformation is anything that threatens the power of control. Right? Because it always ultimately comes down to control versus freedom and growth. And so the doomers are degrowth, lockdown, and control, versus, you know, freedom, growth, and open source.

Speaker 4:

We were talking a bit earlier about this. I had just watched this lecture from Richard Hamming, who's a legendary scientist and mathematician who created lots of interesting things, like Hamming codes and the Hamming distance. He earned a Turing Award as well. And he has this really cool lecture from, like, the early nineties or eighties.

He had been writing about AI actually since way, way back. And he starts the lecture with saying that what's gonna get in the way of AI progress is going to be human ego, which, like, reminds me a lot of this thing of wanting to control it. And that idea that ego is what's gonna get in the way still, like, applies now. Yeah. I mean,

Speaker 3:

there's definitely a lot of ego always in the way.

Speaker 0:

I think YC has a huge role to play. Well, just, like, the startup community broadly, because I just feel like the more cool tools there are that show everyone how awesome AI can be, the better it makes us all, and the more inspiring that vision is. Yeah. Absolutely. And, again, I think that was part of what's so important about,

Speaker 2:

like, the launch of ChatGPT. I would say even if OpenAI just vanishes tomorrow, I think they've achieved the most important part of their mission, which was just really bringing this out to public awareness. Yeah. And now we have, you know, all of these people working on it and all these people thinking about it.

It isn't something that's, like, locked away, you know, inside of Google or inside of you know, again, the doomers are like, this needs to be done in a secret government laboratory. That's how you get Skynet. Skynet is when you build it in a secret government laboratory.

You know, I think developing in the open, across, you know, a wide variety of perspectives, with everyone working on it, is our best shot at the optimistic outcome. Yeah. These are not theoretical things, by the way. I mean, there is some evidence already

Speaker 3:

that giant corporations like UnitedHealth Group are already blocking, you know, the use of AI calls just to get claims cleared, for instance. And that's very much in their interest. Yeah. They detect AI. They decide they're not going to talk to that thing. And then on the flip side it's purely adversarial.

Like, on the flip side, you can imagine drowning human beings in, like, infinite phone trees that, legally speaking, are, you know, completely rock solid, but you will never get your claim reimbursed. Yeah. And that's really the most extreme Kafkaesque sort of situation that I have in my head. Like, we don't want the best frontier models in one or two

Speaker 0:

giant corporations, locked away behind, you know, sort of this corporate morass that is, you know, basically paperclip maximizing on its own. That's really something. I hadn't thought of that example, because it's totally the wrong thing for UnitedHealth.

Like, what they should be doing is, like, developing their own, like, AI voice thing that's better at convincing the other one that, like, the claims shouldn't be processed or something. Right? Yeah. And by default, if we have this sort of statist view that locks everything down in the name of safety, then,

Speaker 3:

you know, guess what's gonna happen? UnitedHealth Group is the only one that gets entrusted with the frontier 200 IQ model, because it is, you know, right there alongside the state. Right.

Speaker 2:

Right. Inevitably, you know, power concentrates. And part of yeah. I think what's great about Y Combinator as an organization is that we're about empowering all of these individuals, you know, where we find some 19 year old kid and, like, help them build something enormous. You know? I mean, like, Sam himself was, like, one of the original 19 year olds. Right?

So he's this random 19 year old that PG picks out of the crowd. Yeah. Right?

Speaker 3:

Sort of definitionally, like, if you're, you know, 20 and you know how to code and you wanna build things for people, like, there's just another option. Like, you don't have to go and work for Moloch.

Speaker 2:

Yeah. Absolutely. And, again, this is one of the great things about AI: your ability to do those things is increasing. I think we're gonna see, you know, very successful startups that actually don't even require a massive team anymore.

And that was part of, you know, what really has enabled it and the original concept behind the founding of YC: because of technology, it is now possible for, like, a couple of kids to start a real company, and that trend has only accelerated.

Speaker 3:

Well, I feel like that was one of the best episodes we've done so far. And, PB, thank you so much for joining us. We hope to have you back many, many more times. Thanks, Gary. That's it for this time. Catch you next time.

