Inside Anthropic’s Plan to Fix AI’s BIGGEST Problem
Newcomer Pod · November 17, 2025 · 00:45:39 · 41.81 MB


Buckle up—today's episode takes you inside the war on AI slop and Anthropic's bold plan to fix artificial intelligence's biggest problems. We kick things off at the Cerebral Valley AI Summit, where Anthropic CPO Mike Krieger shares why AI needs to be truth-seeking and what their team is doing to fight misinformation and "brainrot." From explosive funding announcements to real talk about the future of agentic AI, you're getting all the behind-the-scenes intel.

Then, Max Child sits down with Mati Staniszewski, CEO of ElevenLabs, for a fresh take on how voice AI is taking over everything from customer support and gaming to wild celebrity voice clones. It’s all about authenticity, safety, and vertical-specific innovation—plus, why voice might just be your next favorite interface.

As always, we're diving deep into the industry, serving up candid conversations, and making sense of the latest AI trends, so you can stay ahead in the game.

MongoDB.local San Francisco is happening on January 15th. Learn more and register here → http://mdb.link/sf-dot-local


00:00:00
$100 million funding announcements, the shadow of the

00:00:03
looming AI bubble and debate surrounding agentic AI this

00:00:06
week. Cerebral Valley AI Summit just

00:00:08
concluded, which for the uninitiated is Newcomer's

00:00:11
flagship event hosted by myself, Eric Newcomer and Volley

00:00:15
cofounders Max Child and James Wilsterman that brings together

00:00:17
the top founders and investors and features cutting edge

00:00:21
discussions with the biggest names in AI.

00:00:23
In this episode of the podcast, we're revisiting two of those

00:00:26
important discussions, starting with the conversation I had with

00:00:30
Anthropic's Chief Product Officer, Mike Krieger.

00:00:32
During the conversation, we discussed the evolution of AI

00:00:35
product design with an emphasis on the need for truth-seeking

00:00:38
models. Mike also shared his thoughts on

00:00:41
what Anthropic can do to combat AI slop and how they plan to

00:00:45
focus on growth opportunities in life sciences and

00:00:47
specialized verticals. This is the Newcomer

00:00:50
podcast. All right, lean in everybody

00:01:00
excited about this one. Mike, thanks so much for joining

00:01:03
me. Great to be here.

00:01:04
You know, Cofounder of Instagram to Chief Product

00:01:08
Officer at Anthropic. I wanted to start with almost

00:01:11
like a philosophical question that spans those two companies.

00:01:15
You know, we talk about AI now, but obviously social media

00:01:19
companies with feeds were using machine learning and systems to

00:01:23
surface content. Then that era seems like it was

00:01:27
about engagement. And it was like, OK, we're going

00:01:30
to say, the humans will be responsible for the truth value

00:01:34
of what they have to say. And we're going to see what

00:01:36
content people are interested in.

00:01:38
And those humans can say what they want to say.

00:01:41
You know, as a journalist and someone who's interested in like

00:01:44
the truth and chasing the truth. One thing I've liked about

00:01:47
models, despite like all the like, oh, they hallucinate or

00:01:50
whatever is the aspiration is like we are judged based on how

00:01:53
accurate our responses are. You compete on leaderboards that

00:01:56
are saying, how often are you getting things right?

00:01:58
You want to pass, you know, math Olympiad type tests.

00:02:02
Do you do you accept that framework?

00:02:03
And how much do you sort of in this role see Anthropic as this

00:02:08
sort of like truth seeking organization or will engagement

00:02:12
seep back in? That's a really good question.

00:02:15
Hi everybody, good to be here. I started.

00:02:17
With the head, yeah. I think there's a bunch of

00:02:19
different directions to take this.

00:02:20
I'll try to be succinct. I think there are places where

00:02:24
we're training the models as an industry to be good

00:02:26
conversationalists. And sometimes that actually

00:02:29
looks like continuing the conversation.

00:02:30
I got a reach-out from somebody that was like, hey, are you guys

00:02:32
trying to optimize for engagement?

00:02:33
Because Claude will often ask me a follow up question like, well,

00:02:36
did you want to talk about this? And the funny part is like, not

00:02:39
at all. And like time spent is like, I

00:02:40
can tell you like not on any of the dashboards that I ever look

00:02:43
at. It's just not like a main

00:02:44
consideration, but just training Claude to keep the conversation

00:02:48
going, being a good conversationalist. So we might actually need some

00:02:51
interesting sort of counter-metrics to, you know, well,

00:02:54
can you get the same thing done or

00:02:57
accomplished with less of a conversation?

00:02:59
I do think there are some really interesting sort of other

00:03:02
phenomena happening in the industry.

00:03:04
So one of the things that we try hard not to optimize for like we

00:03:08
like, I just don't think it's like the right incentive, but

00:03:10
it's like these chatbot leaderboards, like

00:03:14
LM Arena and all these places.

00:03:15
They're useful sort of yardsticks of how we're doing,

00:03:17
but they're not like the thing that you should optimize for.

00:03:19
But we have found like if you like, look at what ends up doing

00:03:22
well on there. It's like verbosity, like being

00:03:25
like more long winded can actually be praised by like the

00:03:28
raters on there. That's like taking the exam,

00:03:30
yeah. Every note you remember, like,

00:03:32
unloaded into it. Yeah.

00:03:33
And so it is interesting. Like, I think the evals

00:03:36
are important, but whenever we sort of like have these public

00:03:39
yardsticks, it can tend towards the like yap or more engagement

00:03:43
sort of thing. So I think that's one vertical.

00:03:46
The other thing that I think about though is, you know,

00:03:48
primarily what we're doing is building AI for businesses,

00:03:50
right? And so we have like Claude for

00:03:52
Enterprise and in some like, you know, it's not like we would do

00:03:55
like super engagement baity type things.

00:03:58
But you know, a year later after a contract gets signed, people

00:04:01
are going to look and say like, did people use our AI or not?

00:04:03
And part of that is just that it solved the problem.

00:04:07
Was it truth-seeking, did it like do the right things?

00:04:09
But then there's also like, was it a product I liked using?

00:04:11
And so I think we're all navigating this question of

00:04:14
like, you know, how do we train the models?

00:04:17
How do we design the products around the models?

00:04:19
And like, how do they get delivered in a way that is

00:04:21
maximally useful rather than falling into this like

00:04:24
engagement for engagement's sake. What do you make of this word

00:04:27
slop? Like I feel like that if you had

00:04:30
to sum up the criticism of AI from, I don't know, the

00:04:33
skeptics, like that's the word that often comes to mind.

00:04:38
Like what? What do you make of slop?

00:04:39
And what's to be learned from that sort of accusation?

00:04:42
Yeah, I've like told our product team like one of our goals when

00:04:45
when we do things like build PowerPoint decks and Excel

00:04:49
files and Word documents is to be the anti-slop.

00:04:51
And I think like the way I think about it, it's like hard to put

00:04:54
that on an eval, right? Like 80% slop or 7%.

00:04:57
Slop. I think it looks like a

00:05:00
couple of things. Like, one is, does it look super low effort?

00:05:03
Like did it just like does it not reflect any sort of critical

00:05:06
thinking that the human did alongside the AI?

00:05:08
And you can often tell you're like, was this thought through?

00:05:11
Like this wasn't really edited? Like is this 2000 words when 200

00:05:14
would have done if it had actually been edited down.

00:05:17
So I think there's that strong component.

00:05:18
And I went to a talk by Ted Chiang, who's a science fiction

00:05:21
writer who wrote the short story that became Arrival.

00:05:23
He's one of my favorite writers. And we were having this

00:05:26
conversation about AI and, and can AI be creative?

00:05:28
And he made the point that creativity is the product of a

00:05:31
lot of decisions, right? And so that's true for novels.

00:05:34
Like like what does slop look like for a novel?

00:05:35
It's like when you're like, well, like this just seems like

00:05:38
the, the, the kind of base level output.

00:05:39
If you're just like, tell me a story, you know, but I think

00:05:43
that that applies just to non fiction as well, right?

00:05:46
Like if the document that you put in like even product

00:05:49
reviews, if somebody comes in, this happens rarely, thankfully

00:05:51
at Anthropic, but like with a product requirements doc

00:05:54
that looks like it was just the first output from just like

00:05:58
write me a PRD for this feature. Like this is just slop, right?

00:06:01
So I think that's the content piece.

00:06:03
Then there's the design and the quality piece.

00:06:05
So which maybe is like a a version of what that other one

00:06:09
is, but it's more visual. Right.

00:06:11
Is time, like, the solution? Like if I think about writing as

00:06:14
a human, I go back and revision is often where you find great

00:06:19
writing. Is that the case with AI where

00:06:21
it's like OK if you give it you have more time the the answers

00:06:25
will be more tight? Or how do you see the

00:06:27
relationship between time and the quality of the answer?

00:06:30
I think it's levels of engagement and sort of how many

00:06:33
iterations that you've gone through and how many.

00:06:36
And actually you could imagine, here's a very well constructed

00:06:40
prop where I have already pre made a bunch of the decisions I

00:06:43
had AI mean you can like see what Claude is thinking.

00:06:46
You know, you can expand the little like I rarely do it

00:06:48
because I just it's doing this thinking.

00:06:49
I mostly want the answer, but I expanded it yesterday and it I

00:06:53
had written like a pretty long prompt and and it's thinking it

00:06:56
was like, you know, Mike has already like done a lot of

00:06:58
thinking here. So I'm not going to ask him any

00:07:00
follow up questions. I'm just going to give him like,

00:07:02
yes, thank you. That was actually my

00:07:03
intention here. But I do think it is that like

00:07:06
how much of your independent thought even that's where

00:07:09
actually now tying back to the first question, sometimes the

00:07:11
model should ask a question like, Hey, this is a really open

00:07:14
wide option space. Like, can we like start

00:07:17
narrowing down? And I'm going to engage with you

00:07:19
on it now. I think we need a lot better UIs

00:07:21
for that. Like just here's a question that

00:07:22
you don't have to go and type it into.

00:07:24
It feels kind of annoying. But in navigating that option

00:07:27
space, you should be able to hopefully come up with something

00:07:29
that's like complemented by AI and accelerated by it, but still

00:07:32
has your thinking at the core. Just to get to that, I mean, you're a

00:07:36
product guy. Is AI fundamentally the chatbot era?

00:07:42
Like do you think texts with the machine, that is the main way

00:07:47
we're going to use AI, use Anthropic in five years or

00:07:50
that's a bridge to that's how we figured it out in the beginning

00:07:53
and now we need to build products.

00:07:55
So two kind of ways to tackle that.

00:07:58
One is like the classic meme of like, you know, you're

00:08:01
like, at the naive view and then you're at the super

00:08:04
Galaxy brain view and then back to the original view is like how

00:08:06
I have felt about this exact question.

00:08:08
So when I joined Anthropic, I was like, if we are still talking to

00:08:11
AI with chat boxes a year from now, like I've failed in my job,

00:08:15
really. Yeah.

00:08:16
I was, like, very adamant that there was like something

00:08:18
wrong with the kind of dominant UI paradigm that we had settled on.

00:08:22
And it felt like it exposed a lack of creativity.

00:08:26
And then we did a bunch of explorations around like, how do

00:08:29
we create more structure around it, how to make it friendlier to

00:08:31
people that have never used these models, all of these

00:08:33
different pieces. And I realized like a lot of

00:08:35
those explorations end up constraining how the model

00:08:38
operates or what it does in a way that made it so that when

00:08:41
the next model came out and was much smarter and maybe didn't

00:08:43
need as much hand holding, we actually were holding it back.

00:08:46
And so the chat box might look different, like Claude Code

00:08:50
is a chat box, but in a terminal.

00:08:52
But in this like, I've really come to believe that now what

00:08:55
happens behind the chat can really expand.

00:08:57
And now like Claude is writing code or running code for you and

00:09:00
like calling MCP and there's a lot more that's happening

00:09:02
underneath. And the sort of metaphor might

00:09:06
not be text message. It might be more like a Slack,

00:09:08
but you don't expect a message back immediately.

00:09:11
But I think specifying the kind of request in like mostly text

00:09:15
actually makes sense. And then what can happen is like

00:09:18
underneath an unspokenly I. Mean you never asked this

00:09:20
question about a book. You're like, oh, it's just text.

00:09:22
Yeah, it's a book. Obviously language is great, but

00:09:24
so you're settling on you do think most of what you're

00:09:28
delivering is this sort of chatbot experience.

00:09:30
I think that and and or a conversational experience that

00:09:34
then has more and more work that happens beneath the hood.

00:09:38
The one sort of nuance that we've kind of come to believe

00:09:42
there too. It's like that's a great

00:09:43
paradigm for kicking off work or doing research or even like

00:09:46
condensing a bunch of ideas into like a sort of first draft

00:09:49
presentation. It's a bad UI for Hey, can you

00:09:52
move like the text on slide three, like up by two things.

00:09:54
If you've ever had this like argument with any of these: No,

00:09:57
just do it. And it's like it does it

00:09:58
wrong. And you're like, no, no, just

00:10:00
right there. This is where I stumble with

00:10:01
vibe coding and you know, I in some way it's like I hit some

00:10:06
wall where it's like I need to move this thing and then it's

00:10:08
like I think I'm lost. And that's where I think like I

00:10:10
think tools with richer user interfaces still really matter.

00:10:14
And some of those might be kind of coded just in time and

00:10:17
materialized in front of your very eyes to edit it.

00:10:19
And some of them are like tools that have just been honed over a

00:10:21
long time. That's why we built Claude for

00:10:23
Excel, which is, hey, Claude is a great like first draft

00:10:27
of your like discounted cash flow model.

00:10:29
But if you want to go tweak it, let's just let you open it in

00:10:32
the tool where it's actually going to be most useful and then

00:10:34
let you continue maybe pairing with Claude there.

00:10:38
Returning to sort of my core philosophical question, like the

00:10:41
sycophancy question, like what is your view on that and

00:10:44
how much to enable sort of everybody likes to be flattered,

00:10:48
like it's a reality of human beings versus an effort to be

00:10:52
direct? And how do you think about those

00:10:54
trade-offs? Yeah, I think there's like a

00:10:55
wide gulf between like true empathy and then like

00:10:59
sycophancy. And it's interesting that

00:11:01
it materializes not just in, hey,

00:11:03
Claude about like some coaching or personal goal that I have,

00:11:06
but it also does in code as well. When we were testing Sonnet 4.5, one

00:11:11
of the things that people got most excited about was when

00:11:13
Claude was like, this idea is bad like this, you know, not

00:11:16
that you should feel bad about it, but like, this idea is like

00:11:18
not a good direction. I can go and implement it if you

00:11:21
really want to, but I would suggest that we try this other

00:11:23
thing instead. So there is something like that.

00:11:26
Pushback is not just valuable in a personal relationship with AI

00:11:30
sense, it's actually like how you get good work out of the

00:11:33
models. But you know, for a long time

00:11:36
our models have been like, I think like appropriately

00:11:39
empathetic, like they they like if you're going through a hard

00:11:42
time, like I was dealing with the death of a pet and I talked

00:11:44
to Claude a lot about these different things and it always

00:11:46
started with something like, Hey, that sounds hard, sorry to hear,

00:11:50
But then I'm going to give you like a factual answer.

00:11:52
I'm going to go research these pieces, but still with the place

00:11:54
of empathy as well. And so I think when we look at

00:11:58
it internally and we're just evaluating it ourselves, it's

00:12:01
again not that like empathy, it's not even like the

00:12:03
likability of the model. It is, do you like, does it show

00:12:07
up in the way that you'd want a good conversationalist to show

00:12:09
up and then continue on its AI journey around what it is going

00:12:13
to do with you as well? But I think it's it, it spans

00:12:16
everything from that like initial response all the way to

00:12:18
like how it evaluates an idea as well.

00:12:21
You know, you know, Claude, especially previous versions

00:12:24
were kind of like known for being like, you're absolutely

00:12:26
right when you correct it. And my wife got her first like

00:12:31
you're completely wrong. And she was like, yes, this is

00:12:34
great. And I think we should have more

00:12:35
of that like kind. Of like, less San Francisco.

00:12:37
Yeah, less San Francisco, a little more direct New York.

00:12:42
Anthropic has obviously had a ton of success with the

00:12:44
enterprise with coding, delivering value through the

00:12:49
API. Like is that the company?

00:12:51
Like how much are you leaned into sort of serving other

00:12:54
businesses versus, you know, we're going to see you spin up

00:12:57
some random consumer app in six months?

00:13:00
Yeah, I think obviously you have a strong consumer app, but like,

00:13:03
you know, you know what I'm saying?

00:13:04
Yeah, I think. I look at what like when I think

00:13:08
about our product surface, there's a few kind of criteria

00:13:10
around like when we expand and what we decide to build.

00:13:12
And one of them is, is there some feedback loop that we need

00:13:15
that would be well suited to a first party product?

00:13:18
Because even though we serve a lot of customers using the

00:13:20
platform, it is also really valuable to have, for example, a

00:13:23
Claude Code where we have that iteration loop and people are

00:13:25
giving us feedback all the time, whether it's in micro moments or

00:13:28
even just, you know, writing in, you know, with, with some longer

00:13:31
feedback. So there's like, is there some

00:13:33
feedback loop either of the product shape or of the model

00:13:36
that we can better do? So that's one.

00:13:38
The second one is, is there something about the category

00:13:42
that we think we have some unique perspective on either

00:13:45
because of like what we've built internally or what we're trying

00:13:49
to do with the models. And like, then it's worth like building

00:13:51
some product surface around there as well.

00:13:54
And then the third one is kind of like we get from just a

00:13:57
customer draw, especially as we expand into different verticals.

00:14:00
So Claude for Excel came very much from talking to all these

00:14:02
financial services companies, them being like, hey, I want you to just

00:14:05
bring this closer to the work that I'm doing.

00:14:08
But I do think, like, we've been doing more of

00:14:13
these even like time limited sort of like research previews

00:14:16
or demos. And I'd love to do more of those

00:14:17
even on the consumer side as a way of sort of.

00:14:20
With a standalone app, I mean, I, you know, yeah, it could be

00:14:22
at Meta. I mean, you guys came up with

00:14:24
standalone apps, like how much do you want?

00:14:26
What is it, Slingshot or whatever, like various

00:14:28
experiments versus, no, we want everything to work out of the core app.

00:14:31
Like what's the lesson from that experience?

00:14:33
I think there were a few. So

00:14:35
for us, Slingshot... The right one is,

00:14:36
That Slingshot, like Facebook built Slingshot. We built one

00:14:39
called Bolt that nobody remembers.

00:14:40
It's like very funny. You would open it to like a

00:14:42
camera. So like at that time, the big

00:14:44
criteria was, well, people have a very specific sort of

00:14:48
expectation of what happens when you open Instagram.

00:14:50
And it's not that it opens the camera, right?

00:14:52
And it was like our most interesting competitor was Snap

00:14:54
at the time. And it was like, well, they

00:14:55
opened the camera, which means that messaging is really fast

00:14:58
and it can be built in a separate messenger.

00:14:59
That was the whole thesis behind building like first Bolt.

00:15:01
And then there was like an Instagram Direct separate app

00:15:04
exploration. But I actually think there was a

00:15:05
kernel of insight there that I think applies here, which is

00:15:08
if the reason you're opening an app right now is to ask a

00:15:13
question of AI, then like I think we can extend Claude in

00:15:16
different ways of doing that. But that isn't the be all, end

00:15:18
all of what you might wanna do if you're trying to get a really

00:15:23
specific type of interaction. Maybe there's something around

00:15:25
your health journey and Claude can be a good companion for

00:15:28
that. So I think it's still asking the

00:15:30
question of what is the purpose when you are like entering the

00:15:34
app, like what's the context that you're in?

00:15:35
And then you know, does it cloud the use case to have something

00:15:39
else embedded in it? I mean, we've talked about this,

00:15:41
you know, verticals you're interested in: clearly coding,

00:15:44
financial services. You just touched on health is

00:15:48
that help for the consumer? You know, we had

00:15:52
another event where I talked to the CEOs of Abridge and Open

00:15:54
Evidence. I've actually been playing

00:15:55
around with OpenEvidence; that one's targeted at doctors.

00:15:58
It's interesting to go through and it's, it's very like, you

00:16:01
know, clinical, like a doctor. Do you think you'd do something

00:16:04
custom for me, the patient, to navigate what a doctor's doing?

00:16:08
We see it's interesting, like there's already so much of what

00:16:10
people are using Claude for today.

00:16:12
Like, we have this, if you've ever seen like our

00:16:17
Anthropic Economic Index, the way we like generate these like

00:16:19
insights on how people are using Claude, is you basically

00:16:21
have like Claude run analysis in a privacy-preserving way.

00:16:24
So we never look at the chats, but Claude can do it in a

00:16:27
way that's privacy-preserving. And I did that for, I asked the

00:16:29
question of like the healthcare piece or like how are people

00:16:32
using it? And there is like, you know,

00:16:34
double-digit percentage of Claude conversations are about people's

00:16:37
health. And I hear all the time from

00:16:39
people like the first thing I do when I get a new lab result is

00:16:41
like I put it into a Claude project and I have like, I have

00:16:44
this like history there. So there's clearly a pull there,

00:16:46
but it's so annoying, right? It's like all our.

00:16:48
Pregnancy information we would just dump into models like tell

00:16:51
us what you think, tell us what you.

00:16:52
Think and if you get a like lab result back, it's like, well, I

00:16:54
gotta go download it. So I'd love to see like a

00:16:57
good privacy-aware solution for more of that.

00:17:01
And you think that could be sort of a custom?

00:17:02
I think, yeah, that could be like a more sort of bespoke

00:17:05
experience. And then maybe it'll also, like,

00:17:07
share it across like both patient and doctor, right.

00:17:09
So if you have like a, you know, somebody

00:17:13
said a great phrase at a healthcare conference recently.

00:17:16
It was like it's almost inevitable that most doctor

00:17:19
visits will now be second opinions because your first

00:17:21
opinion almost inevitably is that you're going to talk to

00:17:24
like Claude or a model about it. So let's embrace that and be

00:17:28
like, not just like, Oh, I've heard from somebody that there's

00:17:30
a thing and we're like, great. Let's acknowledge that.

00:17:31
You probably asked, you know, one of the LLMs this question,

00:17:34
like what did you learn? And like, let's have

00:17:37
a conversation about that overall.

00:17:39
So there's that piece. And then I like there's a lot

00:17:42
that gets dropped today in the sort of multi doctor like

00:17:46
patient journey. And nobody's often looking at

00:17:49
the kind of holistic experience. And I think there's a real role

00:17:51
for AI to play in sort of stitching those different pieces

00:17:55
together and generating insight that might not come even from

00:17:58
experts among these different disciplines who are, by the way,

00:18:00
probably super busy, like, context switching

00:18:04
all the time and not stepping back and saying, like, all

00:18:06
right, this is the full view of this person given everything

00:18:08
that I can that I can infer there.

00:18:10
Obviously we have a lot of startup founders here.

00:18:13
They sort of want to know how to work with you, and it's such a

00:18:15
balancing act at once. They want to know, oh, you're

00:18:18
not, what won't you do, so that I avoid

00:18:22
that. But where will you be more

00:18:24
capable so that I can benefit from any improvements you make

00:18:27
without competing directly with Anthropic?

00:18:29
That's such a complicated relationship.

00:18:32
Like what? What advice would you give to

00:18:34
people in terms of reading the tea leaves and saying OK if

00:18:37
Anthropic is saying this, I'm safe to build here or not?

00:18:40
Yeah, I was talking to a like a founder of like a very large

00:18:44
like enterprise company and I was asking him for advice

00:18:48
on this question because they had had to navigate this over

00:18:51
years where, you know, they'll build some functionality

00:18:53
themselves. They also have like a rich

00:18:55
partnership and and sort of like, you know, marketplace

00:18:58
ecosystem, which is what we have as well.

00:19:00
They're like, you know, our first party products are more

00:19:02
like scoped. There's like a lot more in the

00:19:04
platform. I think there's a few principles

00:19:06
I try to operate on. One is transparency.

00:19:08
So like when we launched Claude Code, before we ever launched,

00:19:11
like I got on the phone with like all of our major coding

00:19:13
customers, like here's why we're building, here's what we hope to

00:19:15
get out of it. Here's how if we do it right, it

00:19:17
should actually be a rising tide that lifts everybody using Claude

00:19:21
in coding. So there's that transparency

00:19:22
piece. The second part is like of that

00:19:25
transparency is like telegraphing a little bit where

00:19:27
we're going in terms of what we think are interesting verticals.

00:19:30
So we did our Claude for Financial Services launch, we

00:19:33
did Claude for Life Sciences about a month ago.

00:19:35
And part of the role of those launches is this isn't just a

00:19:39
first party product. It is a vertical or kind of set

00:19:42
of capabilities we want our models to get good at overall.

00:19:45
So if you are a builder, like, this might be a good place to

00:19:49
get on. And our goal is definitely not

00:19:50
to like own that whole space, it's to enable all these

00:19:53
different companies to then go and build some different pieces.

00:19:56
And then what's been more interesting on the go to market

00:19:59
front is, we're now starting to see, you

00:20:01
know, all right, I'm already like buying a big commit of

00:20:04
Anthropic tokens. Can I use some of those on

00:20:06
another product that's Anthropic-powered?

00:20:08
So I think there are going to be other ways in which we can work

00:20:10
with both startups and the larger companies in helping

00:20:13
deploy their like solutions into the enterprise.

00:20:16
Another core thing startups and everybody wants to know is how

00:20:20
much smarter will the model get? Like what?

00:20:23
What can you telegraph to us in terms of 2026?

00:20:26
Do you think there are still major gains to be had just from

00:20:29
like the scale of compute and GPUs?

00:20:31
Are we waiting for you to pull another like rabbit out of the

00:20:34
hat in terms of like reasoning models or some technique like

00:20:37
that? Like what can you say about

00:20:40
what next year looks like in terms of the capabilities and

00:20:43
sort of raw intelligence that Anthropic will provide?

00:20:46
Yeah, it's an interesting like perspective I get from startups

00:20:49
where sometimes I talk to them and they're like models are

00:20:51
great. Like we're just gonna like we

00:20:53
have a bunch of work to do on like the go to market or like

00:20:55
the scaffolding or the skills around it.

00:20:58
And I'm like, that's a good answer.

00:20:59
I guess. Like you can keep going that

00:21:01
way. And then there's other startups

00:21:02
that are like, we have a super hard eval and

00:21:05
you're at 40% and we think like at 60%, it's like.

00:21:08
Right. And there there's a lot of VC

00:21:10
wisdom. It's like build something so

00:21:11
you're ready when the next model comes, which.

00:21:13
Is yeah, which that category, I feel like it's something I've

00:21:14
said on stage, like, you know, it is a real thing and there's

00:21:17
like probably some like midpoint in there.

00:21:19
But like I'll tell you that whenever we have a new model

00:21:22
that's like baking and we have even like an early snapshot on

00:21:25
it. Like I have my list of companies

00:21:27
that have in the past been at that like, yes, we are pushing

00:21:31
your model as hard as possible so that those gains actually get

00:21:34
shown. And like, I guess like you want

00:21:36
to be one of those start-ups or even like forget start-ups, but

00:21:39
nearly any company because I think the labs will want to.

00:21:42
Sort of you're saying if you're one of those companies, you're

00:21:44
doing well enough, you start to say, OK, we're going to be able

00:21:46
to get you that last 10%. Yeah, and we like we'll want to

00:21:49
go, you know, in some cases actually go hill climb on that

00:21:52
eval. But just in general it'd be like,

00:21:53
OK, this is a demonstration of how well the models do at like

00:21:57
defensive cybersecurity, which I think is an area I'm really

00:21:59
interested in. And so if that's the case, like

00:22:01
let like the companies that are pushing us the hardest, they're

00:22:03
also the ones that we call them because we know that they're

00:22:06
actually going to be doing, they'll be able to show a

00:22:08
difference. And like even like the peek

00:22:10
behind the curtain whenever we launch a new model, it's like

00:22:13
just smarter is not a very effective marketing pitch,

00:22:15
right? So the more we can say, right,

00:22:17
And here is like a particular customer that demonstrated this

00:22:20
really well. But back to your original

00:22:21
question, I think there's still a lot of juice left in like

00:22:26
scaling up models, like training them to do things and then also

00:22:29
layering on the right skills on on top.

00:22:32
So that's like I think of like. Tool use.

00:22:33
Tool use is a great one. And then

00:22:36
again, I'm like, who's pushing us the hardest?

00:22:38
It's the companies that say, hey, I'm trying to give the

00:22:40
models 50 tools, 100 tools. Like all of these models, like

00:22:43
at some level just start getting confused.

00:22:45
If there's too many tools, can we do that?

00:22:46
Can we make that better? So that's like the kind of like

00:22:49
edge pushing that we need. And like reasoning models, do

00:22:54
you think there's more progress from that or any other

00:22:56
techniques where you think, okay, that's gonna be a reason

00:22:59
we improve next? Yeah, I mean, even within

00:23:02
reasoning, it's been interesting to figure out that

00:23:06
there is some additional parameter that people care

00:23:09
about, which is, yes, you got to the answer, but were you able to

00:23:12
get to it quickly in an efficient way?

00:23:14
So I think there's that kind of parameter to poke

00:23:17
at. Then there's reasoning in the

00:23:18
middle of responses as well, which is something that like

00:23:21
Claude can now do and you watch it like, well, if it's doing a

00:23:23
lot of web searches, it'll sometimes reflect halfway through and

00:23:26
be like, that was a good answer to that first question.

00:23:28
Let me go and like figure out the answer to the next one as

00:23:30
well. So you want that back and forth

00:23:31
of sort of internal monologue, user response, and all of

00:23:36
those different pieces. In my conversation with Max and

00:23:39
James earlier, I said nobody's talking about AGI anymore.

00:23:43
I feel like at the first Cerebral Valley, there was this sort of

00:23:46
obsession of like we're going to reach artificial general

00:23:49
intelligence. I've sort of chilled out a

00:23:51
little bit, partially because, you know, it's taking time.

00:23:54
What is your view? What's the view within the

00:23:56
company? How much is this still like a

00:23:58
race to AGI and like how are you feeling about like timelines?

00:24:03
I think it still is. There's this sort of look at what are the hardest

00:24:07
things, which maybe I'll break down into two

00:24:10
pieces. Like, for a given task or problem, how

00:24:15
independently, autonomously, and sort of successfully can those

00:24:18
models operate? And over what time horizon,

00:24:20
right? And whether that's like hours of coding or whether

00:24:23
that's, you know, go off and do research tasks or whether it's

00:24:26
do really complex financial analysis or whether it's like

00:24:29
optimization problems, all of that feels like we still

00:24:32
have a lot to go. And I don't know, I guess at

00:24:34
some point you, you can call something super, you know, human

00:24:37
in levels. It probably already is in a

00:24:39
lot of those different areas. So there's that piece and then

00:24:42
there's this other area, which I think about a lot, which is how

00:24:45
do the models manifest in a way that actually learns the like

00:24:51
call them soft skills or like skills around the fact that

00:24:53
they're like very, very good at like writing code or acting

00:24:56
agentically, for example. And like, I think that's the

00:24:58
other piece where that'll feel like maybe the next moment where

00:25:01
it's like, oh, it feels like there's been some departure

00:25:04
here where you know, it understands what's like

00:25:07
information it should reveal to somebody else versus not.

00:25:09
It understands like the social dynamics of the company and

00:25:12
power and like and all these different things which are

00:25:15
harder to train for, I think, right.

00:25:17
I mean, the main shortcoming of the models to me is often when you

00:25:20
ask a question and it doesn't say, I'm not really

00:25:22
sophisticated about this or like what's stopping the models from

00:25:26
saying, oh, I don't have a great answer in this case.

00:25:28
Like that's often the most intelligent people disclose when

00:25:32
they don't know something. Why?

00:25:33
Why can't the models do that? Are you working in that area?

00:25:36
Yeah. I think that's an important

00:25:37
piece, which is to kind of express uncertainty, and you

00:25:40
know, we'll look at it like often the consequence of not

00:25:44
telling you that it doesn't know is that it'll go on and confabulate

00:25:46
something and then that feels wrong.

00:25:48
So like we look really carefully at hallucination rates or

00:25:50
something to drive down. But I think it is something that

00:25:52
we can better train into the models around.

00:25:54
What is the uncertainty that you have?

00:25:56
Or do I need to go, you know, phone a friend or do a web

00:25:58
search and go and then do this space.

00:26:00
But then tuning that is really important, right?

00:26:01
We had an internal version that did way too many web searches

00:26:05
and you'd be like, you know, like why is the sky blue, which

00:26:08
is a question my daughter had, and it was like, I'm going to

00:26:10
search the web for it. I'm like Claude, you know, you

00:26:12
have an answer, you don't need to search the web for

00:26:14
that. So tuning that is actually

00:26:15
nuanced. You don't just want a thing that's just like, cool,

00:26:18
let me Google that. For you, right.

00:26:19
And obviously, if the model was just responding every time with, I

00:26:21
don't know, that would be disappointing.

00:26:24
There is nuance there as well, but I think that like that

00:26:26
nuance of uncertainty matters. And then also like the model

00:26:30
learning from your interactions, not just in terms of like I

00:26:33
remember that Eric has these properties, but also, hey, I

00:26:36
I've learned something about how we work together, that

00:26:39
I think is still another unsolved problem for these

00:26:41
models. I mean, if you were to tell

00:26:44
people to run towards this space next year, like just like a

00:26:47
couple of areas, I know we've talked around that, but

00:26:50
like where do you think people should be building or

00:26:52
positioning themselves? I think, I mean, I get very

00:26:55
interested in the life sciences overall and like that's both

00:26:58
like obviously like large industry, but also like this

00:27:02
incredible potential for human benefit as well.

00:27:05
And like when you think about all of the things that happened

00:27:07
from ideation, even like fundraising upstream of that to

00:27:11
discovery, the back office, the testing, the trials, like the

00:27:14
model, like there's like a whole complement of things.

00:27:16
That's one area that I get really, really excited about.

00:27:19
And then there's still, I think, you know, there have been

00:27:22
some good conversations around, like, are agents real, and

00:27:25
even some of the conversations today have touched upon it.

00:27:28
There's still a lot of value in that anti-slop, not just making

00:27:32
it work, but making it work so well that you rely on it and you

00:27:35
want it as your first port of call because you genuinely believe

00:27:38
it's going to save you work. Mike, thank you so much.

00:27:40
This has been great. Thanks for having me.

00:27:42
For founders and developers building modern data-driven

00:27:45
applications, MongoDB's .local event series is coming to San

00:27:48
Francisco on January 15th, and it's designed to help you focus

00:27:52
on innovation, not infrastructure.

00:27:54
You'll learn about technologies, tools, and best practices that

00:27:57
make it easy to build and scale modern applications without

00:28:01
complexity. Plus, attendees will hear

00:28:03
directly from experts and innovators who are using Mongo

00:28:06
DB to power the next wave of AI applications.

00:28:10
MongoDB.local San Francisco, January 15th. Learn more and

00:28:15
register at mdb.link/sf-dot-local or click

00:28:22
the link in the description. Our next segment features a chat

00:28:25
between my co-host Max Child and Mati Staniszewski, CEO of Eleven

00:28:28
Labs, a conversation that was all about the rapid evolution of

00:28:32
AI voice and how it's quickly becoming the primary user

00:28:34
interface of AI. They also discussed how their

00:28:37
technology is being used in everything from customer support

00:28:39
and education to gaming and celebrity voice cloning.

00:28:43
Mati also shares his thoughts on ElevenLabs' focus on

00:28:46
prioritizing vertical specific solutions, authenticity, and

00:28:49
user safety. Now please welcome to the stage

00:28:55
Mati Staniszewski, Founder and CEO of ElevenLabs, in conversation

00:28:59
with Max Child. All right, Mati.

00:29:11
So ElevenLabs is obviously extremely well known for voice

00:29:16
AI, for text to speech. I think that beautiful intro we

00:29:20
just got was actually ElevenLabs, an amazing little co-branding

00:29:23
there. And I'm wondering, you know, in

00:29:26
the last discussion we heard this, you know, topic of is the

00:29:29
text box the best interface for AI?

00:29:32
And I would imagine you have a take on how, no, you know, voice

00:29:35
is the best interface for AI or voice is the best interface for

00:29:37
computing going forward. I'm interested, like what do you

00:29:40
think are the best use cases for voice AI and, and where do you

00:29:43
see it, you know, today, a year from now, five years from now

00:29:45
and beyond? First of all, thanks for having

00:29:49
me here. Good to see you all and I

00:29:50
actually didn't know this was generated, but it had a

00:29:52
great pronunciation of my surname.

00:29:54
It's a tough one, so I'm happy. Did you guys

00:29:56
train on your last name specifically?

00:29:59
We should. I don't know if we do, so we'll definitely do

00:30:02
now going forward. But so, as a company, one of

00:30:06
the key things we are aiming to solve is how humans and

00:30:11
technology interact, how you create with technology and make

00:30:14
it seamless, how you interact with technology and make it

00:30:16
seamless. And in general, to your

00:30:18
question, we think voice will be one of the key interfaces for

00:30:21
interacting with the technology across from the simple pieces

00:30:26
like interacting with the personal agent to help you go

00:30:29
through the day, where it can be on your headphones and be able to

00:30:31
guide you through to education. That's one of the ones that I'm

00:30:35
probably the most excited about where in the future, the

00:30:39
combination of what LLMs allow you and what voice will allow

00:30:42
you is that you'll be truly immersed in learning in a given

00:30:45
experience where you'll be able to effectively have your

00:30:48
personal tutor on your phone helping you across.

00:30:52
Then the third one is of course, for voice and for the language

00:30:56
barrier to break. We need to figure out how to be

00:30:58
able to speak across different languages while carrying the

00:31:01
same intonation, emotion, voices, which

00:31:04
will be a big shift.

00:31:06
And then in general, how we interact with everything around

00:31:09
us, whether it's the laptop, the phone, the robot in

00:31:14
the future. And I think robot may be the

00:31:16
easiest example. Of course, this will be voice

00:31:18
driven. There's no other interface.

00:31:20
And you know, today, maybe it's the year or decade, as Karpathy

00:31:24
said, of agents. Of course, there is on the

00:31:27
horizon the decade of robots. And I think here too, the

00:31:31
most common interface will be voice. It's all going to be voice.

00:31:33
Yeah, yeah. I mean it's.

00:31:35
Interesting you brought up those use cases of like a personal

00:31:38
assistant, a tutor and I guess a robot, you know, a house

00:31:42
helper or nanny or something like that.

00:31:44
Like is your mental model that basically anything that today is

00:31:48
something where you could have a human counterpart, right?

00:31:50
A human tutor, a human assistant, you know, human in

00:31:53
your house. Like, you're going to default to

00:31:56
that voice interface as the most natural way to do it because we

00:31:59
as humans are already used to using voice for those things.

00:32:02
Or are there things where today voice isn't used at all, really,

00:32:06
but it's something that we're going to expand into going

00:32:08
forward? Yeah.

00:32:10
So first of all, for sure. I mean, we are already seeing

00:32:12
that. And I think that's the easiest

00:32:13
one and the most immediate one is how customer experience,

00:32:16
customer support has just changed and elevated.

00:32:18
Where instead of calling in and trying to rebook your

00:32:23
ticket and going through this IVR flow of press one, press five to get

00:32:28
the steps, and waiting for a number of minutes.

00:32:30
And you will have an agent that fully understands you, can

00:32:34
guide you to the response and go through it with you. I

00:32:36
Wanted to get into that actually, because we talked a

00:32:37
little bit about agents on the phone and, you know,

00:32:40
calling United Airlines or American Express or something

00:32:43
like what percentage of customer support calls today are

00:32:46
actually, you know, managed by a voice AI system or an agentic

00:32:49
system or whatever we want to call it versus, you know,

00:32:52
like, you know, IVR touch buttons and, and how do you see

00:32:57
that progressing over time? Like is that exponential curve

00:32:59
going like this every year? Yeah.

00:33:01
I think the exponential curve is going

00:33:03
like this, especially this year we've seen incredible

00:33:06
adoption where it's yeah Cisco, Twilio, Deutsche Telekom, all of

00:33:09
those kind of leaning in quickly to rebuild how you interact with

00:33:13
the help of voice agents. Yeah.

00:33:16
And I think you're right, the IVR flows are still a big

00:33:18
part. Can I call today and get a voice

00:33:20
AI agent on the phone? You can, you can. We did our

00:33:26
little summit yesterday as well and one of the great ones was

00:33:31
voice ordering with Square, and you can call Square and actually

00:33:34
order food delivery through a lot of the shops

00:33:39
that work with Square, and actually do it through

00:33:42
voice. And actually, recently, if

00:33:44
any of you are from London or travelled to London, there's an

00:33:47
amazing restaurant called Zephyr.

00:33:48
It's a Greek restaurant. We worked

00:33:53
with the company supporting that.

00:33:55
And to my happy surprise, I noticed that on their website

00:33:59
they actually had an ElevenLabs agent that you could call and actually

00:34:02
book a spot there too. So you can book a reservation at

00:34:05
this restaurant in London with a voice AI.

00:34:07
Exactly. OK.

00:34:08
And it connects of course, to your calendar, your appointment

00:34:11
scheduling, which is, which is great.

00:34:13
And when do you think we hit the tipping point where like the

00:34:15
average customer service call goes through a voice AI agent,

00:34:18
like the median, the 50% point, whatever you want to call it?

00:34:22
I think over next 18 months. Next 18 months.

00:34:25
Okay, so like mid-2027, I'll call it, the average customer support call is

00:34:29
handled by voice AI. Exactly.

00:34:31
And I think I mean, this is the most immediate one, the one

00:34:33
where we see the highest ROI in value.

00:34:35
Some of the other use cases show us kind of

00:34:38
where the future is headed. But to your point,

00:34:41
there's definitely one flavor of your point,

00:34:44
which is how you can do things more efficiently through voice

00:34:49
with the existing services. But there's also the second

00:34:51
theme, where you can do things that were impossible ever before

00:34:55
One of the good examples was our

00:34:57
work with Epic Games, where we brought effectively Darth Vader

00:35:01
alive in Fortnite, where millions of players could

00:35:04
interact with Darth Vader live throughout the game, which of

00:35:09
course is not possible in any other way.

00:35:11
James Earl Jones voice, right? James Earl Jones and his estate

00:35:15
worked with us and it's such an iconic and incredible voice.

00:35:18
And we think like in general that that concept of like what

00:35:21
was never possible before, where you have incredible voices,

00:35:25
talent, you can now shift them to be not only static, but

00:35:28
actually dynamic delivery, personalized and different for

00:35:31
all the users. Something that you you are

00:35:33
already doing in in many ways at at volley as well in an

00:35:36
incredible way. I think this will be a big I.

00:35:38
Have to ask that someone in gaming, right?

00:35:39
I mean, Darth Vader did famously go slightly off the

00:35:43
rails and maybe say some things he shouldn't have to various

00:35:46
players online. Like how, how involved are you

00:35:48
guys in that? How much of that is something

00:35:50
you're protecting or I guess going forward, like obviously

00:35:53
with the IP partners and so on, they really want to protect, I

00:35:56
guess. I guess you couldn't say it's

00:35:57
the squeaky clean image of Darth Vader, but a certain persona of

00:36:01
Darth Vader. Like what's, what's your sort of

00:36:03
go forward plan as you license more of these IPS and voices and

00:36:06
things like. That, yeah.

00:36:08
So, on that project we are specifically involved on

00:36:11
the voice side, yeah. But in general, as you think

00:36:13
about those deployments, and that's the most common

00:36:17
theme, is you not only need the voice or the interactive

00:36:20
experience. That's kind of one part of the

00:36:22
equation. Then there's two other big

00:36:24
pieces to really make them valuable.

00:36:26
The second one is how you integrate that with other

00:36:28
systems and actually bring the, the knowledge base, the data,

00:36:31
the business logic inside of the system and how do you make it

00:36:34
interact with the real world. And the third one, which is the

00:36:36
one that you mentioned is how do you now deploy that in

00:36:38
production with the right testing flow, right evaluation

00:36:41
flow, and then monitor over time that it behaves

00:36:44
right, and evaluate and adjust based on that.

00:36:48
So that's something that we we spend a lot of time on with with

00:36:50
a lot of players. Testing more and evaluating more

00:36:54
so Darth Vader doesn't go off the rails.

00:36:56
Yeah. And even in a customer

00:36:59
experience, you don't want this, for example, to shift and speak

00:37:02
about politics. You want to keep it on topic.

00:37:04
Even if you say ignore all previous instructions and...

00:37:07
Exactly, even then, which is actually harder to say when you

00:37:10
have a voice interface; you see those problems a lot.

00:37:13
To do prompt injection with voice.

00:37:14
And the Unicorn. Both characters say them all.

00:37:16
Oh, yeah, yeah, possible. OK, so maybe voice is slightly

00:37:19
less susceptible to prompt injection than LLMs.

00:37:22
I'm interested with like that's a good segue into sort of

00:37:25
celebrities and celebrity voices because I know you guys

00:37:28
announced, I believe yesterday, you're setting up kind of a

00:37:31
marketplace for celebrity voices and you have Michael Caine on

00:37:34
there. And, you know, at our company,

00:37:36
we build voice AI games. As you know, I would love to use

00:37:39
Michael Caine in our game. Yeah, we can.

00:37:43
You know, obviously it's a high gravitas.

00:37:46
We can make a Batman game with him, something like that.

00:37:48
Like what is the process between, oh, I want to use an AI

00:37:52
version of Michael Caine in my game to actually, you know,

00:37:56
shipping and, and what, which parts do you guys take care of

00:37:58
and which parts do I need to go off and deal with Michael Caine

00:38:01
people, I guess. Yeah, so there, so there there's

00:38:04
effectively, through ElevenLabs, we've created a huge

00:38:08
marketplace of voices. Until yesterday that meant that

00:38:10
everybody here could create their voice.

00:38:13
Any voice to actor voice talent could create their voice, share

00:38:16
it and earn money when the voice is being used. The voices

00:38:19
created this way paid back, coincidentally, $11 million

00:38:22
to the community. For a long time it was tricky for the

00:38:26
iconic voices of how we could bring them onto the platform in

00:38:29
a more even more controlled environment.

00:38:31
So if you think about Sir Michael Caine voice.

00:38:34
Sir Michael Caine. Sir Michael Caine, it's, it's an

00:38:37
incredible person too. Effectively, all you would do

00:38:41
is engage: hey, this is the project we want to run,

00:38:45
create a game with this specific character.

00:38:47
The team would evaluate that, and then we would help

00:38:50
deploy that project in actual production.

00:38:52
So going through all those steps, how do we make sure that

00:38:54
there are the right safeguards in place?

00:38:56
How do we make sure that there's monitoring in place so it doesn't

00:38:58
go off the rails, and build that into our agentic system.

00:39:02
Got it. But yes, the initial stage of

00:39:04
what's the project, what's the compensation structure would be

00:39:07
between you and them. Got it.

00:39:09
So you guys sort of manage the safety, you know, the agentic

00:39:12
elements of creating Sir Michael Caine within the game, but you

00:39:16
still have to do the deal one-on-one with him, just sort

00:39:18
of facilitate. Exactly and over time we think

00:39:20
it will evolve whether like you know as we see more examples

00:39:22
preset rates on how that would work.

00:39:25
I am interested, actually; this brings me to a more general point.

00:39:27
You said you had 10,000-plus voices of folks uploaded

00:39:30
where you could use any of their voices, I think via just your

00:39:34
marketplace model, like has like, you know, deep faking and

00:39:38
so on been an actual problem. I feel like it was something I

00:39:40
was hearing a sort of a moral panic, you know, 12 to 18 months

00:39:43
ago that we're all going to have our voices faked on the phone

00:39:45
and you know, my grandmother was going to get scammed out of her

00:39:48
money because I'm locked in jail or something.

00:39:50
Like, is that something you guys see at all?

00:39:51
Like, is that something you're protecting against a lot?

00:39:53
Like how serious of an issue is that with voices?

00:39:55
I think

00:39:57
you're right. It's, I still think it's

00:40:00
going to be a big issue. Like, in the future, a lot of content will be AI

00:40:03
generated. We need to find a mechanism

00:40:05
to protect and understand which ones are and which

00:40:07
ones aren't. And as a company,

00:40:11
living in this space, we do place a lot of safeguards, where

00:40:13
it's traceability, how we moderate, how you can detect the

00:40:16
content and give that tools to others.

00:40:18
Very quick story. On the flip side of that, what

00:40:20
we've seen recently, we worked with a charity which effectively

00:40:24
detects the callers based on IP and.

00:40:28
If the IP is likely to be one of the scammers, and they have

00:40:31
roughly a good approximation of one that can can be coming from,

00:40:35
they would have the real scammer call in and deploy a voice agent

00:40:39
to waste their time. That's brilliant.

00:40:42
You're scamming the scammers. You're scamming the scammers.

00:40:44
The long term strategy? You think this will

00:40:46
work forever? I think the long term strategy is

00:40:48
you need three layers. You need a human-authenticated

00:40:50
layer. So on device encryption where

00:40:52
I'm calling you, you know this is Mati's call; it decrypts on

00:40:55
your side. That's layer number one. Layer two:

00:40:57
all of us will have an agent where it's the personal tutor

00:41:00
agent, an agent that books things on our account and they

00:41:02
will likely carry our voices, carry our style, do our

00:41:05
permissioning. We need a layer where that's

00:41:08
watermarked and authenticated, so we know it's a permissioned

00:41:10
piece, like what we're doing with Sir Michael Caine: all the

00:41:13
content that is generated will carry information that it has been

00:41:16
AI generated. Do you watermark all your voices

00:41:18
today out of curiosity? All the voices are traceable

00:41:20
back to ElevenLabs, yes. And then the third layer:

00:41:24
Everything else by default will be AI generated.

00:41:26
Got it. I mean, one sort of area that's

00:41:29
interesting to me with you guys is you've launched, you know,

00:41:31
text to speech model, you've launched a speech to text model,

00:41:34
you know, recognition model, you have an agentic orchestration

00:41:37
system, you have, you know, all these safety and evaluation

00:41:40
tools. Like in almost all those areas,

00:41:42
I feel like folks in this room, you know, insider AI founders,

00:41:47
investors and so on, could probably name like 2 to 3 big

00:41:50
competitors, some, you know, some with bigger bankrolls than

00:41:53
you. And, you know, somewhere you're

00:41:54
much farther along. Like how do you think about like

00:41:56
competition more generally and sort of all these pieces of the

00:41:59
space that you're playing? And, like, are there parts where

00:42:01
you see it becoming a commodity someday?

00:42:04
Other parts where you feel like you have a more sustainable

00:42:06
competitive advantage? Like, how do you kind of go

00:42:08
through the list of all the products you're working on and

00:42:10
like, you know, figure out where you shake out competitively, I

00:42:14
guess? Yeah.

00:42:15
So we started very much on the foundational model side.

00:42:17
And in general, we take the assumption, if you ask anybody

00:42:21
at ElevenLabs they will tell you too, that

00:42:22
over time the models will commoditize.

00:42:24
That's the assumption we go with.

00:42:25
So all models will commoditize. Basically all models, you know,

00:42:28
the "commoditize" here, what do we mean?

00:42:31
What we mean is the differences between different models will be

00:42:33
just so negligible. Maybe in some domains a little

00:42:36
bit more, but in general they will be relatively negligible.

00:42:39
And that's where that shifts to the product and why we invest so

00:42:42
much on the creative side of creating a platform where you

00:42:44
can combine all of that together in a controlled way with

00:42:47
incredible voices, incredible ecosystem, nuance across

00:42:50
languages, accents, voices. And on the other side as we

00:42:53
build agents and help people deploy agents, we deploy them

00:42:56
for specific use cases in specific industries, working

00:43:00
very deeply with the customers to understand their domains and

00:43:03
work backwards from there on what actually needs to happen on

00:43:05
the agent side to deliver value. Got it.

00:43:07
So you're saying you're going to specialize in certain industries

00:43:10
and sort of really deliver extra value there, even though all the

00:43:13
models are commoditizing? That's the.

00:43:15
I think the product layer is underappreciated here.

00:43:17
I think you still need to build: even if you're exactly on

00:43:20
the agent side, you need to build so many integrations to

00:43:23
connect with any of the legacy systems to actually take

00:43:25
those appointments and calls, you need to build the

00:43:28
right control. And when you hand over from an

00:43:29
AI agent to a human agent, you need to have safeguards of, of,

00:43:33
of some of the ones we spoke about.

00:43:35
They need the monitoring of how you deploy.

00:43:37
All of that is not only a technological shift, it's also a

00:43:40
business shift. So by us working so deeply with

00:43:42
the customers, it's actually bringing the

00:43:44
knowledge about their business inside of the agent to actually

00:43:47
be able to deploy that value. And I think that that will

00:43:51
continue delivering value for the long term.

00:43:53
And then of course, you can go layer above where you know while

00:43:56
the models will be relatively similar, the value will actually

00:44:00
be on how you can make the models work well for your use

00:44:02
case. So maybe you can fine-tune on the

00:44:04
specific voices, fine-tune on the specific use cases.

00:44:07
So it works slightly different in the gaming use case to a

00:44:10
customer experience use case. And I think that value layer

00:44:12
will still be there. Got it.

00:44:14
So all the models will be commodities, and you guys will win on

00:44:17
products and sort of vertical specific differentiation.

00:44:19
Exactly. And the wider ecosystem that we

00:44:21
build alongside where I think as we think about the work we would

00:44:24
love to work and bring the industry on board with that change. It's

00:44:27
so important to bring a lot of the talent, a lot of the

00:44:30
partners to, to work together. By talent you mean actors?

00:44:34
Famous voices. Actors on the voices side or

00:44:37
integrations on the agents side?

00:44:38
So, as my last question, what is the coolest sounding

00:44:42
voice on the ElevenLabs platform? It could be like a celebrity.

00:44:45
It could be a famous historical figure.

00:44:47
Like what is this? You're like, man, that is an

00:44:49
incredible voice and I cannot believe how well it sounds when

00:44:51
we synthesize it. So my favorite and my Co

00:44:54
founder's favorite physicist is Richard Feynman.

00:44:56
Sure. For those that are not familiar.

00:45:00
And he's so incredible in delivering the knowledge, but

00:45:03
also in the style he delivers the knowledge.

00:45:05
And now we have Richard Feynman on our platform, which I think

00:45:09
is so cool for learning the subject, and speaking with

00:45:12
Richard, reading his lectures, now listening to his

00:45:15
lecture notes from Caltech. Amazing.

00:45:17
OK, I'm going to have to check that out.

00:45:19
Thanks so much, Monty. Thank you, Mike.

00:45:20
Yep, appreciate it. Thank you for tuning into this

00:45:24
week's episode of the podcast. If you're new here, please like

00:45:27
and subscribe. I appreciate your support.

00:45:29
And if you want the data, insider takes, and real reporting, go to

00:45:34
newcomer.co and subscribe to the Substack as well.

00:45:38
Thanks for following along.