Founder FAQ: 40+ AI startup founders on the current state of AI tech
At a recent meetup for Y Combinator founders working in AI, we asked them about the current state of AI technology—the good, the challenges, and best practices.
Transcript
What is maybe the most unexpected way that you use AI in your regular life? Yeah, I don't know if I should say this, but writing speeches for weddings. Yeah. We both set our answering machines to our voice bot: "Hey, how's it going? The heart is strong, Padawan."
Nice. I think this was programmed to talk about before. Now we have it open every day to help us code. It's great at coding; it's just making me code infinitely faster. You can describe to the AI what change you want to make to your UI, like "build dark mode," and it'll edit all the code to implement that feature. These tools will make humans more and more like narrators, like people who are describing what they want, and then the models will actually create something that's even better than what humans would do if they did it themselves. And I think the fundamental problem-solving skills are always gonna be really needed. And understanding the technology and its constraints and how to leverage it is gonna be so important.
What are current AI tools good at, in a general sense? Yeah, really good question. I think one of the things that's really counterintuitive about generative AI tools is that they're really good at what we thought they would be really bad at, which is creative storytelling work. AI for creativity.
The goal is to make software so that anyone can make South Park from their bedroom. For example, you can make a photo of me, Eric, and Rihanna playing volleyball on the beach. I like this one. That one's my favorite. We are building truly human-like AI voices that are very conversational, like humans. Without AI, the voices sounded horrible. Now, it's just indistinguishable from a human voice. For us personally, what it's amazing at is semantic search. That's something that didn't really work before: just taking a random piece of text and finding relevant things. LLMs have a great ability to read. They're pretty good at taking arbitrary data and answering questions about it. Because there's so much new data, it's really hard to adapt, so we are currently fine-tuning all of our models on these new types of data to make them as accurate as possible. For fashion, very specifically, there are new terms popping up all the time. You have to keep updating these models to know, oh, for this month the trend is mermaidcore, but for next month maybe it's balletcore. Yeah, AI tools are pretty good at giving you around an 85 to 90% solution, but there's a lot more fine-tuning, or a lot more hacks, that you need to put in place on top of them to ensure that you can deliver genuine value. You can use a bunch of simple operations to actually do something really complicated.
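The semantic search the founders describe boils down to embedding text as vectors and ranking documents by similarity to the query. A minimal sketch of that idea, with a toy bag-of-words counter standing in for a real embedding model (all names and data here are illustrative):

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words count vector. A real system would use a
    # learned embedding model; this only illustrates the pipeline's shape.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by similarity to the query and keep the best top_k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "release notes for the new dark mode feature",
    "how to fine tune a model on fashion data",
    "beach volleyball photo generation demo",
]
print(semantic_search("fine tune fashion model", docs, top_k=1))
# → ['how to fine tune a model on fashion data']
```

A production system would swap the toy `embed` for a real embedding model and a vector index, but the embed-then-rank structure stays the same.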
You need to really give them structure about how it should look, and give them one particular task to do, and then they do it very well. If you're able to think through the process that you go through, then you can actually engineer a prompt, or engineer a sequence of steps, so that you can have that entire process be even more reliable than you would be.
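The "one particular task" idea amounts to chaining small, single-purpose prompts, with each step's output feeding the next. A sketch of that pattern, where `call_llm` is a hypothetical stand-in you would replace with your actual model client:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; echoes the prompt so the
    # pipeline is runnable without any API access.
    return f"[model output for: {prompt}]"

def run_pipeline(steps: list[str], user_input: str) -> str:
    # Each step is a narrow, single-task prompt template. Chaining
    # focused steps tends to be more reliable than one sprawling prompt.
    result = user_input
    for template in steps:
        result = call_llm(template.format(input=result))
    return result

steps = [
    "Extract the key claims from the following text:\n{input}",
    "For each claim, list the evidence needed to verify it:\n{input}",
    "Summarize the verification plan in three bullet points:\n{input}",
]
print(run_pipeline(steps, "Our new model cuts latency by 40%."))
```

The step wording and the `{input}` templating convention are my own; the point is only the shape of the chain.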
It's important to be very iterative in your process and just debug, tune, and iterate on your prompts as you go. If you think you have a solution, it may not be the same solution over time. Your data can change. The actual underlying model quality can change with that. And so the biggest difference is just that there's this sort of iteration required. I think the hardest part is you're trying to marry deterministic software with probabilistic models, and we sit right in the middle of that. It is quite an exciting thing to work with, because in the past with programming, the computer really just followed your instructions to a T, and you could expect the same results given the same inputs. Now you put in the same inputs, you might get some variance. If we can actually introduce some randomness into our outputs, then we can explore our space a bit better, and our models will get better from learning from all of these other choices that we can make. It's not reliable in the way that you expect it to be reliable, which is great for us, because we're doing entertainment. As long as it's funny, it doesn't matter. But I guess if you're operating a car, that seems more complicated. So it is a double-edged sword. Sometimes it can hallucinate and make up something that wasn't intended. I would define a hallucination as AI generating something that doesn't exist, but looks like it might or should exist. They're still really bad at distinguishing fact from fiction. So they can be great storytellers, but they're surprisingly bad at knowing the difference between what's true and false. If you're a doctor, finding out whether what GPT decided was the diagnosis for this patient probably takes a ton of time to verify. And if there's any mistake, then you're in a lot of trouble. At what point do you trust the AI over the doctor? There's been a lot of effort in the industry recently to prevent these hallucinations, but that's created the opposite problem, which is that now it will often think things aren't real, or pretend that it doesn't know things that it really should. It will tell you it's never heard of that article even though it's definitely in the training set. Like, it's gotta be there.
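One cheap, partial guard against the hallucinations described here is checking that anything the model presents as a verbatim quote actually appears in the source document. A naive sketch (the double-quote convention and example strings are my assumptions, and this catches only fabricated quotes, not paraphrased errors):

```python
import re

def unsupported_quotes(model_output: str, source: str) -> list[str]:
    # Pull out double-quoted spans the model presents as verbatim
    # quotes, and flag any that do not literally occur in the source.
    quotes = re.findall(r'"([^"]+)"', model_output)
    return [q for q in quotes if q not in source]

source = "The trial enrolled 120 patients and ran for six months."
output = 'The paper says "enrolled 120 patients" and "showed a 90% cure rate".'
print(unsupported_quotes(output, source))
# → ['showed a 90% cure rate']
```

Real grounding checks compare at the level of claims rather than literal strings, but even this kind of string-level check surfaces the most blatant fabrications.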
A bit like a human, you know: when you read things, you take something away from that and internalize it, but you can't necessarily remember exactly where you read it. And so when you're using these models with real-world data, it's actually even harder to disambiguate what's a hallucination versus something that was a nuanced piece of data. You can't ask it for citations consistently. That's still a challenge. And so the trustworthiness there has some way to go. It's not enough just to say, hey, the accuracy metrics are better. You have to understand that there's more at play, especially for human trust, and that is a key component if you're gonna develop a technology that people are gonna use at the end of the day. There's still a lot of nuance where we have to steer them, and that's why you'll hear a lot about this "human in the loop." It's really important to still have a human in the loop: having humans in the loop to initially assess whether the corrections that needed to be made were accurate or not. There needs to be someone supervising that and making sure that there are no hallucinations.
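In practice, "human in the loop" often reduces to routing: auto-accept outputs the model is confident about and queue the rest for a human reviewer. A minimal sketch of that split (the 0.9 threshold, field names, and examples are my own, not from the founders):

```python
def route(outputs: list[dict], threshold: float = 0.9) -> tuple[list, list]:
    # Split model outputs into auto-accepted results and a human review
    # queue, based on a model-reported confidence score.
    accepted = [o for o in outputs if o["confidence"] >= threshold]
    review_queue = [o for o in outputs if o["confidence"] < threshold]
    return accepted, review_queue

outputs = [
    {"text": "Invoice total: $420.00", "confidence": 0.97},
    {"text": "Diagnosis: condition X", "confidence": 0.62},
]
accepted, queue = route(outputs)
print(len(accepted), len(queue))  # → 1 1
```

The reviewer's corrections on queued items can then feed back as training data, which is the loop part of "human in the loop."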
So there are a lot of pros and cons. It's really about figuring out the right ways to steer it. That's the challenge that, I think, all the YC companies working on AI are facing. We're given this new tool to work with, and we're all really just trying to figure it out. I never wanna lose sight of the fact that ultimately, this is technology in service of humans, and we keep humans in to say the "so what." Ideally it's actually deepening human connection, where it's a lot more about interacting with people and figuring out what's actually valuable to them.