The Casino enters the arena + Guillermo Rauch of Vercel on AI factory builders
Newcomer Pod · August 22, 2025 · 00:50:59 · 46.69 MB


Is the AI bubble popping—or just catching its breath? Eric Newcomer and Tom Dotan spar over Nvidia jitters, Sam Altman’s “bubble” dinner, the MIT “95% fail” headline, app-vs-model margins (Cursor, Claude Code), and Chamath’s SPAC-as-casino shtick. Then Eric sits down with Vercel founder/CEO Guillermo Rauch for a fast, idea-dense jam: assistants → agents → multi-agent teams, why GPT-5’s real story is coding, “vibe coding” and code-last workflows, who gets paid in the era of AI factory-builders, whether to study CS, why taste beats code, and Guillermo’s six-month prediction for a breakout vertical agent.


00:00 Did the AI “bubble” pop? Altman dinner & sell-off vibes
01:16 MIT survey “95%” headline vs reality
09:04 Capitalism, incentives & Chamath’s SPAC “casino”
18:17 Interview starts — Guillermo Rauch (Vercel)
22:07 GPT-5 reality check & the “Einstein-in-a-box” test
37:37 Future of engineering + should you study CS?
48:36 6-month prediction: a breakout vertical agent; underestimating GPT-50


00:00:00
I think we need to break down the logic of why this week

00:00:03
people decided that the AI bubble has burst.

00:00:05
And there were some key things that happened that we, I think

00:00:07
we can deconstruct and decide whether or not they're actually

00:00:10
worth, you know, being a hater over.

00:00:12
The first one was Sam Altman had a dinner that I was not invited

00:00:15
to with reporters where he discussed, among other things,

00:00:19
that it was all on the record, you know, GPT 5 and some of the

00:00:22
disappointments around that, which are very real.

00:00:25
And also at some point said something to the effect of the

00:00:27
quote in front of me basically like, yeah, I think

00:00:29
there's probably some exuberance.

00:00:30
So we're in a bubble right now where some stocks are very

00:00:33
overvalued because of AI. But then in the next sentence

00:00:36
also was like but also AI is transformative.

00:00:39
It's a classic power move. Elon does this too.

00:00:41
It's like you're in the Super hypey thing.

00:00:43
There's almost nothing to lose by being like, yeah, I don't

00:00:45
know, people really think I'm hot shit.

00:00:47
Maybe too much, you know? Yeah, yeah, yeah.

00:00:50
My, my, my lovers love me too much.

00:00:51
My haters don't get it. But if there's something wrong,

00:00:54
it's because of something other people did, right.

00:00:57
I just want to disregard almost everything Sam says these days.

00:01:00
I mean, it's just he's on. He says so much and it's so

00:01:04
contradictory that it just doesn't... Even though it

00:01:07
probably did play some part in the sell-off of, you know, AI-

00:01:11
driven stocks, the idea that, oh, Sam Altman has declared a

00:01:14
bubble, so we must be near some sort of a top.

00:01:16
And then there was this MIT survey that came out and I think

00:01:19
that probably had the most effect on a lot of people.

00:01:22
Which headline number was that? 95% of enterprise generative

00:01:27
AI projects fail, just have zero impact on a company's efficiency

00:01:31
and bottom line. And that is a bad number.

00:01:35
The problem, of course, is that a headline number versus what

00:01:38
was actually in the survey is like pretty different.

00:01:41
And this was not like a survey that was put out by a fully

00:01:45
disinterested organization. Like, it's effectively a pro-AI

00:01:49
organization, right.

00:01:50
It's always funny when somebody like runs away with one line

00:01:54
that you said, but then like you're like, oh, but I'm sort of

00:01:58
trying to make a pro AI point here.

00:02:01
Yeah, yeah. Like it's almost like a rat fuck

00:02:04
in that sense. Like if you're like one of these

00:02:05
people who decides to buy the headline and then you actually

00:02:08
like distribute the survey and then anyone actually reads it

00:02:10
sees that, like, the key point it makes is that AI is not

00:02:14
incapable of transforming a business.

00:02:17
Just the implementation of AI and its inability to learn and

00:02:21
adapt to people's workflow is what's causing it to be

00:02:24
unsuccessful. My takeaway from actually

00:02:27
reading through this thing quickly was that, like,

00:02:29
the 5% of companies that have been successful with it are like

00:02:32
going to win. Like it's really about like the

00:02:34
fact that they implemented it better and were able to use this

00:02:37
technology is why like it is actually a transformative tech.

00:02:40
And like one of the co-authors of it is a product manager at

00:02:43
Microsoft. I was just looking him up.

00:02:45
So like these people that are writing this, they have

00:02:48
like a dog in this fight. And it's not like long term

00:02:51
it's going to be good. The flow of the

00:02:52
argument is basically everybody sees the potential of ChatGPT

00:02:57
and Claude. They try it out, but then they

00:02:59
just don't do enough to actually get their

00:03:02
data integrated into whatever AI tools they're bringing to their

00:03:06
companies. And so then it ends up not being

00:03:08
more useful than just like consumer grade stuff and like

00:03:11
here's what you actually need to do to be better than just like

00:03:15
using ChatGPT. Right.

00:03:16
Like that was the key point. Here's what you need to be doing

00:03:19
to make this successful. And that's going to end up

00:03:21
probably being the talking point from all of the sales people at

00:03:25
the cloud software companies that are selling this stuff is

00:03:28
like, oh, yeah, you read that report.

00:03:29
They made these mistakes. You won't make this mistake

00:03:31
because you're going to buy it and use it in this way.

00:03:33
So it ends up the whole thing was an op, I've decided.

00:03:36
I will say I texted a lot of investors this week just like,

00:03:40
what's your read on the bubble like, you know, people who are

00:03:44
like, you know, investing in these companies often.

00:03:46
I'm interested in what, you know, yeah, people who are

00:03:49
trying to make money like, you know, it's like if it's an early

00:03:53
bubble, it's like, oh, I need to keep going.

00:03:54
But you know, they're they're trying to time the bubble for

00:03:56
financial reasons. And I did get some, you know,

00:03:58
somebody who was like full bubble.

00:04:00
Now trying to decide whether this is the start of 2021 or

00:04:03
late 2021. Obviously, everything unwinds

00:04:06
from the pandemic at the end of 2021.

00:04:08
There's still upside to be seen at the beginning.

00:04:12
Is the person selling? They are.

00:04:15
I am in tread lightly mode right now.

00:04:18
Ha ha. Then somebody else was like, I'm

00:04:21
watching the Fed. You know, a lot of people are

00:04:23
like, I don't know, what do you think?

00:04:26
Yeah. I mean, so nobody's like, what

00:04:27
an absurd question. You know, Andreessen Horowitz,

00:04:30
who sort of dispositionally is like, why would you ever sort of

00:04:34
position yourself as a bear? Our job is to fucking be

00:04:37
professional bulls. Just came out with a piece

00:04:39
that was, you know, knocking down basically, I thought Tom,

00:04:43
not trying to fight with you, your article about Cursor.

00:04:45
I think, I think the sophisticated critical.

00:04:48
MBA so they just view me at the end of

00:04:51
it. The sophisticated critical take, I think, as you know, we're

00:04:54
about to say, Tom, it seems like foundation models revenue

00:04:57
is still strong, but you know, application companies, like all

00:05:01
the coding companies, you know, have pretty negative margins.

00:05:04
They're built on top of money losing businesses and themselves

00:05:07
lose money. How's that all going to work

00:05:09
out? And Andreessen and Co, well, you

00:05:10
know, I think it's, I assume it's Martin Casado and I think

00:05:14
Sarah Wang sort of, you know, pushed back against your take.

00:05:17
What do you think about that as somebody who is worried about

00:05:20
app company margins? I think the margins are less the

00:05:23
issue than, like, I think there's an interesting dynamic which I

00:05:27
tried to get to in that story, which is the relationship

00:05:29
between the model providers and the application companies.

00:05:32
And it's clear with Cursor's growth that you

00:05:35
can build a huge user base on an interesting application, maybe

00:05:40
has some brand name loyalty to it.

00:05:42
But like especially with developers, like they will just

00:05:45
switch to the thing that provides them the most bang for

00:05:47
their buck. That's going to be the best

00:05:49
quality product. And like the growth of Claude

00:05:51
Code over like 6 months, which I know wasn't impressive to Ed,

00:05:55
but but is to me, you know, getting to like a $400 million

00:05:58
ARR shows that like people will gladly switch to a new thing

00:06:01
that seems like it'll work well. And Anthropic has, I mean,

00:06:04
they're money losing too, but they're going to have better

00:06:06
margins than Cursor will because they're paying wholesale access

00:06:10
to the model versus the retail prices that Cursor has

00:06:13
to pay. So I'm not incredibly bullish on

00:06:16
application layer companies in a world where model companies

00:06:19
build the same software. If they decide they don't need

00:06:21
to, that they really think they can make enough money through

00:06:24
their API business or like a consumer, a pure consumer play,

00:06:27
then maybe there's a world for application companies.

00:06:30
But like, I think it's a very in this particular sector, it's a

00:06:34
very narrow moat. And I mean, I was talking to, and

00:06:37
I sort of agree with the

00:06:38
Andreessen people. These companies are not idiots.

00:06:40
They're gonna figure out like some of the stuff, like the very

00:06:43
expensive stuff they're gonna offload to cheaper models.

00:06:46
I agree with you that like the big business question is just

00:06:49
like how sticky are these products?

00:06:50
I think that yeah, it's like how sticky are they?

00:06:53
I don't, I don't have an answer, but I don't

00:06:55
think the bubble is gonna swing either way.

00:06:57
It's like, oh, they're not sticky.

00:06:58
Because the people are moving to another AI product.

00:07:02
You know, at the end of the day, coders want these coding tools.

00:07:06
The margins will be figured out. They're going to consume some AI

00:07:09
product, either foundation models or companies built on top

00:07:12
of it. Either way, it's good for the AI

00:07:14
hype train. Well, and to defend the piece

00:07:17
at least like I wasn't. Yeah, No, no, I don't.

00:07:20
I don't just think too Andreessen because apparently

00:07:21
they wrote it at me. Look, I think you know that

00:07:24
Cursor is not running out of money.

00:07:26
Like the story was very clear that, you know, yeah,

00:07:28
negative gross margins are bad. So they have like a billion

00:07:31
dollars or something? And they're losing, you know,

00:07:33
apparently their burn is like in the single digit millions.

00:07:35
So it's really an issue of like, you have plenty of time to

00:07:38
figure these things out. And I mean, you mentioned Uber.

00:07:40
Like I covered Netflix during its most cash burning days where

00:07:44
I'll never not bring this up on the show.

00:07:45
Like I worked for an editor who like, made a career for a time

00:07:49
writing columns saying Netflix is a House of Cards.

00:07:52
Right. Has he written a mea culpa?

00:07:54
No, we never do. We move on.

00:07:56
We're like the, we're like the Uber haters and the AI

00:07:58
haters: move on. You don't. I played it.

00:08:01
I was the factual. I provided. Where's your mea culpa about

00:08:04
Uber? Like, I was like the gun runner

00:08:06
of the Uber skepticism era, you know what I mean?

00:08:09
I was like giving all the haters the ammunition

00:08:12
they needed to, like, shit on Uber's money losing.

00:08:14
But I was always like, I don't know.

00:08:16
And then I always got annoyed at the people who took my

00:08:19
stories and like, oh, they lose a lot of money did all these

00:08:21
like insane math formulas based on them.

00:08:23
And I'm like, there's no way. My numbers are like, so like

00:08:27
they're so impressionistic, you know, like Uber was giving

00:08:30
very like rough numbers to people when they were trying to

00:08:33
back out their economics out of it.

00:08:34
I was like, this is lunacy. Yeah, well, the problem with the

00:08:37
hater argument there is it relies on the investors who see

00:08:40
more than we ever get to really just being dumb, right?

00:08:44
I mean, it's the idea that, like, they don't understand it,

00:08:46
but we as the skeptic and the hater do.

00:08:47
And believe me, I don't like giving investors credit.

00:08:50
Like, it's not fun. It makes my job worse.

00:08:51
But like, they do kind of do it for a living.

00:08:54
Like that's the one thing they should be able to understand.

00:08:57
Right. It's like the basic economics. We

00:08:59
should get haters on here too. I want to.

00:09:01
Get some more haters, right? No, no, we liked the Ed episode, one

00:09:03
of the things the YouTubers. Somebody got mad at me for

00:09:05
saying do you believe in capitalism?

00:09:07
You know, like it was some sort of shibboleth, but

00:09:10
it was a sincere question.

00:09:12
Like, with some of these people, I do want to understand, like,

00:09:15
what premises do we share? Because like, to me, part of

00:09:18
believing in capitalism is like investors are fairly rational.

00:09:22
They're making different bets. They have their own, you know,

00:09:24
it's just like, it's useful to know how much people

00:09:27
think like the whole capitalist thing doesn't ultimately direct

00:09:30
us towards likely outcomes. Like to me, markets are great

00:09:34
truth mechanisms and like we play a part in that by, you

00:09:37
know, poking at things and interrogating them and then

00:09:41
people change their ideas and then they eventually move their

00:09:43
money. But when people have commentary

00:09:46
that acts like investors won't listen to a good idea and, like,

00:09:49
change their strategy if it's in their incentive, yeah, it feels

00:09:52
like. Well, I mean like market, the

00:09:54
truth mechanism is over time. I mean, like, the periods of

00:09:57
irrationality in the midst of a bubble can give people the wrong

00:10:01
impression of, like, the, you know, solidity of a business.

00:10:04
Like, I think that's what Ed was arguing.

00:10:06
It wasn't like capitalism is failing because these companies

00:10:09
exist and keep raising more money.

00:10:11
It's just like it hasn't reached the point yet where like, you

00:10:14
know, whatever the Warren Buffett line about, like the

00:10:15
shore, the shoreline going out and, you know, finding out who's

00:10:18
naked. Right, right.

00:10:20
And I mean, one thing we really believe in and I think smart

00:10:23
business journalists are like there are people who can be

00:10:26
behaving according to their incentives and driving us off a

00:10:29
Cliff, right? I mean, this is sort of the like

00:10:31
Tiger Globals and the soft banks.

00:10:33
Like often I've made the point in the newsletter that it makes

00:10:36
sense career wise to be the like biggest bull of the biggest

00:10:41
mania, right? You're like, oh, at least they

00:10:43
were the most important person, you know, like in that moment,

00:10:46
like the Tesla longs and stuff, like what's her name?

00:10:49
Kathy Wood? Like, even if you have a

00:10:51
terrible performance, you sort of like win fans just for being

00:10:54
like really extremely bullish. And so even if you like believe

00:10:59
in capitalism like I do, certainly I think they're all

00:11:01
these actors who are like, you know, almost like taking

00:11:04
advantage of, you know, human psychology rather than, you

00:11:07
know, trying to make the most money they can.

00:11:09
They're just, they're like, oh, I'm going to be able to, if I'm

00:11:12
the best bull in the world, I'll be able to raise the most money

00:11:14
again in the next bull run, and my game is just to run as hard

00:11:17
at the bull market as possible every time.

00:11:20
And you know, like that's a, that's a game people play with

00:11:23
with great success. Speaking of taking advantage of

00:11:25
human psychology, Chamath is back out there again.

00:11:30
Welcome back. Scam in the arena?

00:11:32
Scam in the arena is back? No crying at the casino, Chamath.

00:11:35
Right. I know what a what a he's so

00:11:37
good at it. He's so good at it.

00:11:38
He's like, like, he's basically calling himself like the casino.

00:11:42
And it's like, sure. The casino, the random

00:11:45
outcome generator I understand. Don't.

00:11:47
He's literally like, don't invest in me if you're not

00:11:49
willing to lose all your money. And if it's like trivial to you,

00:11:52
like, yeah, who invests like that even if you're rich?

00:11:54
No rich person wants to lose their money either.

00:11:56
Like everybody is investing to get a return.

00:11:59
Like, I know it's just like a bullshit thing you say, but.

00:12:01
This is the unique twist of the Trump era, right?

00:12:03
Is that like he has discovered there is a whole cohort of

00:12:07
Americans who like being scammed, who like.

00:12:10
Well, that's true. I mean, sports betting is the

00:12:12
best thing in the world. Like yeah, exactly Like, yeah.

00:12:14
I lost a lot of money on sports betting over the weekend.

00:12:17
Yeah, Yeah. God, yeah.

00:12:19
That was just fun. It was.

00:12:20
It was. Just for the fun of it.

00:12:22
Was just for the fun of it. Yeah, there.

00:12:23
I just went down to the sportsbook and I put $50 on a

00:12:27
horse called Duck Stuck go. And at

00:12:28
least it was in person. You got some, you got an

00:12:31
experience out of it. My my issue with the sports

00:12:33
betting apps is like, what are you even getting?

00:12:35
You know, like. I think it really raises the stakes of

00:12:37
watching a game. I mean, if you're not watching

00:12:39
the game, you're putting money on it, then that's a real,

00:12:41
that's real sick, right? I did invest also.

00:12:43
I put a bet on Australian rules football because I thought the

00:12:46
spread was so ridiculous, but turns out it was not that team.

00:12:49
That team lost by like 150 points, which I don't know what

00:12:52
that actually means. Anyway, Chamath is yeah, I think

00:12:54
he absolutely represents the peak of Trump era.

00:12:57
We like to be scammed mentality where he is actively calling

00:13:01
what he's doing a casino that there will be crying and you

00:13:04
will not make any money. I will make money from it.

00:13:06
He made money last time. I mean you like wrote I thought

00:13:09
a great piece about like his scam the last time through.

00:13:12
Like what are you expecting this time around?

00:13:14
Two pieces on Chamath. I mean, one thing that makes me

00:13:17
so sad about the world we live in is just how much your

00:13:20
Internet brand and your real person brand can be so far

00:13:24
apart. Like real people in Silicon

00:13:26
Valley think he's like a huckster.

00:13:28
Like people who work with him, it ends poorly for them.

00:13:30
Many of his professional relationships have gone poorly.

00:13:34
He didn't make people money on the SPACs.

00:13:36
He made money through fees. Like he's looking out for

00:13:39
himself. And to some degree, you still

00:13:42
want somebody who's, like, looking out. Caring about

00:13:45
your reputation is good because it means that

00:13:48
with the people you partner with and work for and invest

00:13:52
money for, you care about your long-term reputation.

00:13:55
You want to be good by them. And this is someone who hasn't

00:13:58
minded his long term reputation. It hasn't, you know.

00:14:01
It's not a reputation as an investor, but if you look at the

00:14:03
way he positions himself, he wears, like, I don't know, the

00:14:07
fucking brand-name suits. He loves to be rich, yeah.

00:14:10
Yeah, And so, like, again, and not to make everything just

00:14:13
about Trump, but this is all part of the same thing.

00:14:15
People in the real estate world knew that he was an absolute dud

00:14:18
as an investor. Like all of his projects would

00:14:21
go bankrupt. He was a brand name to, you

00:14:23
know, add a modicum of flash on top of an otherwise shitty

00:14:26
project. Here you're like, you're like,

00:14:29
bring back the WASPs. Where were the rich people who

00:14:32
cared about? I mean, what?

00:14:34
Old money. Just the people who

00:14:35
cared in their wealth. Noblesse oblige.

00:14:37
You know, we need, we need like people who are not.

00:14:40
We just need the Carnegies. I am watching Gilded Age right

00:14:42
now. So I have a very strong

00:14:44
attachment to that era of wealth.

00:14:47
But yeah, those guys look tremendous compared to the

00:14:49
Chamaths. I mean, they've done nothing, or

00:14:51
he's done nothing, but he's built an image that I'm sure

00:14:55
he will have no problem getting people signing up for his SPACs.

00:14:58
Well, it's that sort of fatalism that makes it possible, right?

00:15:00
It's just sort of like people are like, oh, he'll raise the

00:15:02
money. We should do it and.

00:15:03
Yeah. And I guess maybe the last point

00:15:05
to talk about. I'm not cheering for

00:15:06
him. I don't know what to say.

00:15:07
I would be very sad if good people lose money on what are

00:15:11
certainly going to be failed IPOs or whatever you call them.

00:15:14
SPACs. I do think, to be fair, you know,

00:15:17
we're business journalists. Like I do think he changed his

00:15:20
structure this time. I do think he's slightly more

00:15:22
aligned now with the actual outcome of the SPACs, though.

00:15:26
Knowing Chamath, he's found some other way to make money as well.

00:15:29
But. Burdens.

00:15:31
I don't see why, but he's not scamming people, sorry.

00:15:34
Well, you believe in

00:15:36
capitalism, don't you? There's infinite numbers of

00:15:38
shots on goal in a free market. Sure, yeah.

00:15:40
But I mean, this is always, this is one of these, I don't know,

00:15:43
to me, psychological weaknesses of the elites.

00:15:46
Once you're in the club, you're like, oh, you're in the club,

00:15:48
you should get to do it again. It's like there are tons

00:15:50
of human beings like try it with somebody who hasn't lost people

00:15:53
a lot of money. Like to me this sort of desire

00:15:56
to like, keep giving money back to the type of people, to people

00:15:58
who just like, gamble the money rather than saying maybe we

00:16:01
should try it with somebody new. That, to me, is like a human

00:16:04
psychological weakness of just like loving fame rather than

00:16:08
success. It's never gone away.

00:16:10
I mean like Michael Milken in the 80s, the junk bond king

00:16:13
still like runs a large financial conference in Los

00:16:16
Angeles, the Milken conference. Nothing, nothing tends to really

00:16:19
happen to these people for the long term.

00:16:20
I mean, some of them do get like banned from trading, but that's

00:16:23
not even close to what's going to

00:16:24
happen. Not anymore. Yeah, exactly.

00:16:26
That's it. Like I'm not accusing Chamath of

00:16:28
anything of that level. He just was an opportunist and

00:16:30
fucked over a lot of people who didn't.

00:16:34
We're looking for fights. So Chamath, come on the show.

00:16:36
We are. I don't think he is. That would be a big get for lots

00:16:40
of reasons. Is probably the person who's

00:16:41
written the most negatively about him.

00:16:43
I'm not betting on that one. But if you're a hater,

00:16:46
positively or negatively, we're ready to argue with you.

00:16:48
We like this arguing format. So we want to have arguments.

00:16:52
Not quite an argument. I like him too much.

00:16:54
But after this we're going to have Guillermo, the Guillermo

00:16:57
Rauch of Vercel. He's coming on the show.

00:17:00
I think he had, I know a really good prediction about where

00:17:04
things are going. We talk about agents.

00:17:07
He obviously runs a no code company and I continue not to.

00:17:11
I have not built my no code startup empire.

00:17:14
If it's so easy to no code like where, where are the apps that

00:17:17
I'm using? Maybe you should start

00:17:17
getting into fights with Lovable.

00:17:19
I know dude, I don't know if people know what you're talking

00:17:21
about. We don't. Yeah, that can be

00:17:23
deep lore for Newcomer fans. You can look up Eric's tweets

00:17:26
trying to get his money back. Don't waste your time. I

00:17:28
lost it on Sunday just because I got charged twice from Lovable

00:17:33
after trying to cancel. They, they have extreme dark

00:17:36
patterns that they say they're going to try to reform.

00:17:38
Literally you click a big red button that says cancel and then

00:17:42
you still are not cancelled from Lovable.

00:17:44
They're like, oh, that was cancelling within Lovable.

00:17:46
Then we had to send you to Stripe and then you have to

00:17:48
obviously run through the Stripe.

00:17:49
But anyway, they, they swear they're changing.

00:17:51
They're going to fix it. Maybe cancel in Swedish means

00:17:54
something else? Maybe it means like I'm still

00:17:56
down. I like the

00:17:58
company. I would have come back. It was just like, I'm not using it

00:18:00
this second. I would have come back now.

00:18:02
I don't know. I'm pretty angry with them, but

00:18:04
hopefully they're reformed. We're big supporters of

00:18:06
European tech over here at Newcomer, so we're long-term,

00:18:09
long-term bulls on Lovable and every European tech company. All

00:18:13
right, we should probably cut on over to your interview with

00:18:15
Guillermo. All right, I'm here with the founder and CEO of

00:18:20
Vercel, Guillermo Rauch. Welcome to the Newcomer Podcast.

00:18:24
Thanks for having me. Last time we were hanging out,

00:18:27
I feel like you convinced me that Vercel would be every

00:18:30
company in the world. So we'll see.

00:18:33
We'll see what you've brainwashed me into by the end

00:18:36
of this conversation. Where?

00:18:38
Where were we? I think it was HumanX, if I'm

00:18:42
remembering. I think we had, like, lunch after that and just

00:18:44
like the great founder visionary, and I was like, oh,

00:18:48
what, what won't this company do?

00:18:51
You're obviously touching a lot in AI.

00:18:54
You know, like, you've got v0 now with sort of your own vibe

00:18:57
coding product, but you're also sort of powering a lot of like

00:19:00
the vibe coding that everyone's talking about in sort of some of

00:19:03
the infrastructure to make it possible for people to sort of

00:19:09
spin up sort of websites and applications quickly using

00:19:12
artificial intelligence. But also, you're just like a fun

00:19:15
founder who's willing to sort of like, say what's going on and,

00:19:18
you know, not, not just let the venture capitalists dominate the

00:19:23
chattering among the Silicon Valley classes.

00:19:25
And terminally online so you can ask me about any of this

00:19:28
happened on X five minutes ago and I'll know.

00:19:31
So I, I just want to start off with, yeah, like the existential

00:19:34
question about like the mood. I mean, I've been honestly been

00:19:37
texting everybody today about like sort of animal spirits or

00:19:41
like how's the, how's the market?

00:19:43
Like, you know, what's your read on sort of the AI mania?

00:19:46
Like, do you feel like there's still a lot of room to run here

00:19:50
or are you getting signs that there's like some some paranoia

00:19:54
about the euphoria? You mentioned it. Vercel is in a

00:19:58
very specific and special position that we see every,

00:20:03
almost everything that's being built on

00:20:04
AI. It has to have some kind of user interface to the world.

00:20:08
And so traditionally, we've been supporting founders in building

00:20:12
these AI portals, right? AI applications, now increasingly

00:20:16
agents. And, and one of the things that

00:20:18
I've been noticing is that there's basically 3 chapters of

00:20:21
AI so far. Number one was assistants. Number two, and

00:20:26
it's the arc that I think we're currently in, is agents.

00:20:29
I believe that there's a third phase.

00:20:31
I think what's really interesting being so close to

00:20:32
developers is that I've always kind of called out us

00:20:36
developers. We're very self-serving.

00:20:39
So if we get AI, we're going to apply it to our job first.

00:20:42
We're going to make our own life easy.

00:20:45
And So what I'm seeing with developers is that the next

00:20:47
generation is going to be teams of agents or multi agent

00:20:50
architecture. We've gone from assistant to

00:20:52
agent to multiple agents. The alpha developers today, what

00:20:55
they're doing is, instead of just limiting

00:20:57
themselves to, like, one agentic session, they're

00:21:00
spinning up 20 in parallel or 10 or 5 or whatever.

00:21:05
And that gives you a glimpse of what I think is going to be the

00:21:07
next phase of AI. You can think of the current

00:21:09
phase of AI as you have a virtual co-worker.

00:21:13
You can think of the next phase of AI as you have a virtual

00:21:16
team of co-workers and you're the manager.

00:21:19
Is the challenge with just having the virtual co-worker that

00:21:21
in some ways the human gets the lame part of the job?

00:21:24
Or it's like I have to sit there and be like, sure, yeah,

00:21:27
keep doing the hard work. Like, yeah, I keep just

00:21:29
hallucinating. Yeah.

00:21:30
Like, and it's tedious and, and perhaps you don't get that 10X

00:21:33
effect. You know, I do see a lot of

00:21:35
debate, healthy debate sometimes, of like, is AI a 2X or is it a

00:21:40
10X for my job? I saw an article that a

00:21:43
programmer put out the other day saying like, look, in certain

00:21:45
areas of my job, it definitely feels like a 10X because the

00:21:48
example that this gentleman was giving was like, I don't write

00:21:51
compilers every day. And so like the fact that I was

00:21:54
just able to tell the AI, can you write me this, like, compiler

00:21:57
transformation is like, whoa, like I feel like a superhuman.

00:22:00
But then he was saying, like in other areas of my job, like, I

00:22:03
don't know, sometimes it like kind of gets in the way and

00:22:05
whatever. To to sort of really, I guess go

00:22:08
directly at the current moment. I mean, yeah, what's your

00:22:11
reaction been to like ChatGPT 5, I guess like to GPT 5 because in

00:22:17
some ways, I guess I think if that were amazing, the lead here

00:22:20
would be like, oh, the models are going to keep coming up with

00:22:23
new ideas and that's where we're going to make progress.

00:22:25
And just by the fact that you're saying, oh, we're going to find

00:22:28
ways to use the existing models and agents to like get more out

00:22:31
of them by having them collaborate to me as sort of a

00:22:33
sign that we're not having the, like, oh my God, ChatGPT or

00:22:38
GPT-5 has, like, blown all our minds.

00:22:40
Do you agree with that or what's your read on it?

00:22:42
I agree with you that analyzing the GPT-5 phenomenon is really

00:22:46
important to the industry because you're right.

00:22:48
Like it did, it came out and it didn't come with a short proof

00:22:52
of the Riemann Hypothesis. Exactly.

00:22:55
Where's ours, exactly, in the expectations or whatever?

00:22:59
Yeah, right. But something that was really

00:23:02
interesting was that OpenAI was taking the coding problem

00:23:06
really seriously prior to its release.

00:23:09
They came to us and they're like, can you please run this on

00:23:11
v0? Can you please run your evals

00:23:12
with it? They were asking us like what

00:23:14
are the vibes from your team? What are the vibes from your

00:23:18
tests? Is it good at design?

00:23:20
So there was a lot of attention being paid to the topic of

00:23:24
coding. And the simplistic view of that

00:23:28
is that the market is becoming hyper competitive on the

00:23:31
enterprise side and that Anthropic is making a ton of

00:23:33
progress with coding. There's amazing Chinese models

00:23:36
coming out, like Qwen, etcetera. They're really good at coding.

00:23:40
And so the simplistic view is like, oh, coding is a hot

00:23:42
market. So that's why they're going

00:23:43
into it. My view is actually that because

00:23:46
agents are the future, not assistants and not one off one

00:23:51
shot things, the path to superior and higher

00:23:55
intelligence and more productivity will be agents

00:23:58
writing code to solve problems. So the thing to pay attention to

00:24:01
is not a consumer observation. In fact it almost backfired for

00:24:05
consumers right? Because we were like, can you

00:24:08
please bring back my boyfriend or girlfriend or?

00:24:10
Oh, right. We don't want it to be smart.

00:24:11
We just want it to be. Yeah, like that was the main

00:24:14
thing, right, you know? Altman had almost

00:24:17
anticipated that this was going to happen because they did the

00:24:19
following thought experiment: if we could revive Albert

00:24:23
Einstein and we all get a digital copy, how many people

00:24:27
would be jazzed? I know a lot of people here in

00:24:29
Silicon Valley. You and I would be like, holy

00:24:31
crap, right? Einstein in a box.

00:24:33
Sort of like a crazy guy you don't want to talk to about,

00:24:36
like, reality television or whatever.

00:24:37
Like, I wouldn't want to talk to him all the time.

00:24:39
I see the point, right? Yeah.

00:24:41
The average person would be getting too much horsepower.

00:24:44
They're like, hey Albert, like, what should we watch on

00:24:45
Netflix tonight? And he's like, I don't know,

00:24:48
like I don't watch Netflix. Let's talk about general

00:24:50
relativity. And so I think GPT-5 had a

00:24:53
little bit of that of like, it's becoming intelligent in a way

00:24:56
that's going to enable this like emergence of intelligence that

00:25:02
you can't easily probe for. The future of intelligence will

00:25:05
be, and we're seeing this with, like, test-time models, like

00:25:08
going in a loop, spending a lot of energy writing code, testing

00:25:13
it, exploiting multiple branches of the search algorithm.

00:25:17
And these are things that a consumer cannot possibly

00:25:19
ascertain. Like the most that a consumer

00:25:21
can do is, you know, I have friends that keep questions in

00:25:25
their head that they know, like, AIs are not very good at.

00:25:29
And so like they do their own little benchmark of like, oh,

00:25:31
let's see if GPT-5 gets this right. Now, testing the

00:25:35
intelligence or the IQ of AI in itself is becoming really,

00:25:39
really difficult. That's why OpenAI was so

00:25:40
interested in, like, what's the v0 point of view?

00:25:44
You're creating a coding agent that, you know, is specializing

00:25:47
in vibe coding, etcetera. And so that means you have a lot

00:25:51
of tests. That means you have a lot of

00:25:53
benchmarks like, you know, what does that mean for you?

00:25:57
I believe every company should be concerned with creating these

00:26:00
benchmarks of their own institutional knowledge and

00:26:03
intelligence. Well, some of it, you know, it's

00:26:05
like for me, it's like I put it in like, oh, is it good?

00:26:06
How good is it now at writing a story like me?

00:26:09
How good is it at keeping track of, yes,

00:26:12
the proofreading problem. You know, you end up having

00:26:14
your specific challenges. So when you're in an industry,

00:26:17
it's not that hard to figure out like, oh, is it doing a better

00:26:20
job for me or not? You could also make the case

00:26:22
that you should, you could be working on a writing agent that

00:26:26
is more than just one forward pass.
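The habit described above, keeping a private set of questions that models tend to get wrong and re-running them on each release, amounts to a tiny eval harness. Everything below is a hypothetical sketch: `ask_model` stands in for a real API call, and the sample questions are illustrative, not from the episode:

```python
# Hypothetical private benchmark: prompts your team knows models
# often miss, paired with the answers you expect.
PRIVATE_EVALS = [
    {"prompt": "How many r's are in 'strawberry'?", "expect": "3"},
    {"prompt": "What is 17 * 24?", "expect": "408"},
]

def ask_model(prompt: str) -> str:
    # Stand-in for a real model call (an OpenAI or Anthropic client, etc.).
    canned = {
        "How many r's are in 'strawberry'?": "3",
        "What is 17 * 24?": "408",
    }
    return canned.get(prompt, "")

def pass_rate(evals) -> float:
    # Score each new model release against the same fixed question set.
    hits = sum(ask_model(e["prompt"]) == e["expect"] for e in evals)
    return hits / len(evals)

print(f"pass rate: {pass_rate(PRIVATE_EVALS):.0%}")
```

The point is institutional: the question set encodes what your team or company actually cares about, which public leaderboards can't capture.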

00:26:29
And this is another thing that kind of backfired, but it's also

00:26:32
very interesting, is that they tried to ship a model router,

00:26:36
right? Like the default thing was auto.

00:26:39
And then it's going to choose if it's going to think hard or if

00:26:43
it's going to give you an immediate response.

00:26:46
Another thing we've learned about AI that's fascinating to

00:26:48
me is that there is this metaphor of like thinking fast

00:26:52
and slow, for which there is a book and like people have been

00:26:56
talking about for years, like prior to like AI existing.

00:26:59
And it's proven true. There are 2 modes of thinking.

00:27:02
There is fast, fast-twitch muscle fibers, or like you just

00:27:06
want like, hey, like, can you provide this really quick and

00:27:09
find me typos? And then there is a like, no,

00:27:10
no, no, like, let's analyze. Like, are you repeating words

00:27:14
too much? And that requires like a slow

00:27:17
thinking. And so OpenAI tried to, because

00:27:20
they're in the pursuit of creating a good consumer

00:27:22
experience. They were like, OK, we're going

00:27:23
to try to route automatically. That appears to me to be an

00:27:28
extremely hard problem, if not impossible.

00:27:30
It's hard to know from the query if it's a fast problem or a slow

00:27:34
problem. We have a related problem in

00:27:37
computer science called the halting problem.

00:27:39
Like by just reading some code, we don't know for how long it's

00:27:42
going to run. It could run until the death of

00:27:46
the universe, or it could run really quickly, but we have to

00:27:48
run it to actually know. And so there's almost like an

00:27:52
uncertainty of like, should I spend a lot of energy thinking

00:27:55
here or should I respond really fast?
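The routing difficulty being described can be made concrete with a toy router: all it can see is the surface of the query, which, as with the halting-problem analogy, rarely reveals how much computation the answer needs. The marker words below are illustrative assumptions, not how any production router works:

```python
def route(query: str) -> str:
    # Toy heuristic: long queries or "deliberate" verbs get the slow,
    # reasoning-heavy model; everything else gets the fast one.
    slow_markers = ("prove", "analyze", "plan", "refactor", "debug")
    words = query.lower().split()
    if len(words) > 30 or any(m in words for m in slow_markers):
        return "slow-thinking-model"
    return "fast-model"

print(route("Find me typos in this sentence"))                # fast-model
print(route("Analyze whether this argument repeats itself"))  # slow-thinking-model
```

The failure mode is easy to see: "Is this code correct?" looks short and simple but may demand deep reasoning, which is exactly the uncertainty described above.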

00:27:58
In some ways, this is the argument for the role of

00:28:00
startups in AI and in sort of narrowly focused companies.

00:28:05
It's like, really, you're saying the model companies should

00:28:07
almost be an API layer? They're not going to be great at

00:28:10
deciding in every sector whether it should be a fast or slow

00:28:13
thinker, or what content, or how many agents you

00:28:17
use, etcetera. And so where I was going to go

00:28:18
with your case is that it'd be really cool to be crafting the

00:28:23
ideal writing agent. I, I know most people listening

00:28:26
to this will have an intuitive sense of the answer to this

00:28:29
question, but like, I'm curious to hear you define it.

00:28:33
What, what is an agent to you? Like, does it need to take

00:28:37
action? Like, I mean, deep research

00:28:38
itself is an interesting case where it's like, oh, I mean, a

00:28:41
consultant is like an, an, you know, taking actions in a

00:28:45
certain way and deep research behaves like a consultant.

00:28:48
On the other hand, and people have probably heard me say this

00:28:51
before on this podcast, Like to me, the gauntlet for an agent is

00:28:54
like, can you go spend my money? Like, will I let you really make

00:28:57
sort of permanent sort of decisions that that have real

00:29:01
consequences in the world? I don't know.

00:29:03
How do you think about what sort of the minimum requirement

00:29:07
for? What makes something the

00:29:08
minimum requirement? The minimum requirement is that it produces

00:29:12
an artifact, an output that is generally longer than an answer.

00:29:18
So, right, like, if I make that decision between, like,

00:29:23
an assistant and an agent. So if I'm asking like, you know,

00:29:26
what's the name of the largest province in Argentina

00:29:32
versus build me an application. Funny enough, for us, that was

00:29:36
like one of the biggest jumps in how people have perceived the

00:29:40
intelligence of v0. v0 started being what we call one-shot.

00:29:44
You would tell it, give me a user interface for a document

00:29:49
collaboration system. And the feedback that we got, it

00:29:52
was like, some people literally, because this was

00:29:54
so early, like it was basically maybe a few months after

00:29:59
ChatGPT launched, some people would tell us, all right, there's not

00:30:04
going to be any jobs anymore. I just saw God, like, holy crap,

00:30:08
what do you guys do? And some people tell us this is

00:30:11
the worst piece of crap ever, because in the one-shot world

00:30:16
quality is so divergent. It's almost like playing the

00:30:19
lottery. In fact, some naysayers of AI

00:30:22
for coding are saying that like what drives the revenue is not

00:30:26
the quality, it's the lottery effect, which I completely

00:30:30
dismiss. But I wanted to tell you this is

00:30:31
funny. It's a dangerous idea because

00:30:35
yeah, it's a grabby idea. Interesting.

00:30:37
The next big jump for us was when we made v0 really work

00:30:42
like an agent. And So what it does is that it

00:30:45
makes that one shot, but then it looks at it. v0 asks itself:

00:30:51
does it work? Does it compile?

00:30:54
The most recent version of our agent even physically looks at

00:30:57
it like using computer vision. So as an example, say I want to

00:31:01
make a website for my daughter's birthday.

00:31:04
Please theme it like, I don't know, Disney Princess.

00:31:10
It'll then look at what it cooked.

00:31:11
It's like, does it look like what I was asked?

00:31:14
And so an agent is taking a multi-step approach, much like a

00:31:19
human would in order to produce a higher quality output.

00:31:25
One of the key ingredients there is this idea of reflection, like

00:31:28
you're the agent is asking itself questions, is looking at

00:31:31
artifacts. It is it's using tools.

00:31:34
One tool that we have is it can look at a screenshot, it can

00:31:38
look at designs for inspiration. So sometimes the model doesn't

00:31:42
know what you mean by like build me a podcasting app like

00:31:45
Riverside, and so it needs to go and research.

00:31:48
Do you like the term vibe code? By the way?

00:31:50
Do you embrace that term? I personally do. It's funny

00:31:55
enough, because, like, our product's becoming so successful with the

00:31:57
Fortune 500, some people are like, do you really want to say

00:32:01
vibe coding? I think vibe coding

00:32:06
evokes this idea of you're not actually coding, which is really

00:32:11
important. The term came out because Andrej

00:32:15
Karpathy and a bunch of other people were realizing AI was

00:32:19
progressing at a clip where your own micromanagement of the agent

00:32:24
was producing diminishing returns.

00:32:26
Once upon a time, like, in AI times, 6 months ago,

00:32:30
it was so bad that you literally had to look

00:32:34
at every token it output. As the models get better and the

00:32:40
world becomes more agentic and self-healing, you realize that

00:32:44
looking at it is just wasting your own

00:32:46
energy and your time. You can start trusting and even

00:32:51
seeing because we can render that output.

00:32:54
We can run the application. You can see it come to life.

00:32:58
And so there has to be a term for that.

00:33:01
The winning term right now is vibe coding, but it's basically,

00:33:05
sometimes I've called it, instead of being code-first like a

00:33:08
product like Cursor, it's code-last.

00:33:12
You might occasionally maybe at the end of the process you look

00:33:14
at the code, but the code is not the important thing here.

00:33:17
And I believe that most agents are going to, going back to like

00:33:21
why GPT-5 is obsessed with coding, even though people are

00:33:24
obsessed with 4o being nice to them.

00:33:27
I believe that agents will have to write code to get to that

00:33:31
next frontier of value for society and for intelligence.

00:33:35
And I believe that a lot of people will not even realize

00:33:37
that these agents are writing code.

00:33:40
What's happening to your employee salaries?

00:33:43
I mean, you know, the AI engineers, obviously we've seen

00:33:46
all these stories of meta poaching everyone from the

00:33:50
research labs, like how's this playing out for sort of the

00:33:53
typical software engineer in Silicon Valley or what?

00:33:57
Yeah. What are you seeing in terms of

00:33:59
trends and comp? Are the the $100 million

00:34:03
salaries as widespread as chattered about?

00:34:06
We are in a little bit of a microcosm here, a metaphor that

00:34:10
someone gave me the other day: look, it seems like if you

00:34:13
extrapolate what you're saying. And because I know that v0

00:34:17
hires developers to build v0, we're transitioning into a world

00:34:22
where the job of an engineer is to build the factories of code

00:34:28
rather than being the line worker.

00:34:32
And so the engineers that are capable of building the

00:34:34
factories of code, the vibe coding platforms, the agents,

00:34:39
the AI interfaces, the assistants, those people are

00:34:43
highly empowered today. And so it becomes that highly

00:34:47
competitive space. It's like, you know, how many

00:34:50
soccer players like Lionel Messi are there that have that

00:34:54
specific set of skills? And then like, you know, there's

00:34:57
lots of soccer players, but a handful that make a ton of

00:35:01
money. And So what I would recommend to

00:35:05
people is that they position themselves in that world, right?

00:35:08
Like become the person that understands how to bring those

00:35:12
two worlds that I talked about earlier.

00:35:14
The theoretical AI potential and its applied AI presentation

00:35:20
layer. That's kind of our thesis

00:35:22
of the Vercel AI Cloud: that as this model layer keeps

00:35:27
getting smarter, better, more accessible, you should be

00:35:32
focused on applying that AI to a specific domain, a specific

00:35:36
business, a specific vertical, a specific kind of agent.

00:35:40
But you're going to need a cloud that has all of the like right

00:35:43
patterns, tools, defaults, etcetera, for building those.

00:35:49
And that's, that's our thesis with, you know, we're not going

00:35:51
to build a cloud that is 1 to 1 exactly like AWS.

00:35:54
There's no advantage there, but we want to build a cloud that is

00:35:58
perfect for that next generation of companies and developers.

00:36:03
I'm trying to understand. I mean, this is we're in an

00:36:05
insane situation where you're right at the high end.

00:36:08
If you're building the AI factories, you could command

00:36:11
insane salaries. On the other hand, if you're

00:36:15
sort of the sort of, I don't know, worker, more like line

00:36:19
coder, software engineer, I guess you're worried like, oh,

00:36:22
you see V-0 and you're like, oh, that's coming for my job.

00:36:25
So my question I think is coming at you from 2 ends, which is

00:36:28
like at once, like, oh man, we see these insane salaries.

00:36:31
On the other hand, I feel like they're the sort of, I don't

00:36:34
know, Wall Street Journal story saying don't, don't study

00:36:37
computer science because there aren't new jobs.

00:36:39
So I guess take one at a time because I think they're getting

00:36:42
mixed. They're getting mixed together.

00:36:44
Talk about each end of the equation here.

00:36:47
For the other end, yeah, I do have a completely different take

00:36:51
on universities and colleges, and that is a can of worms.

00:36:55
I think we should address it. OK.

00:36:57
On the line coder, I also think that the demand for code from

00:37:01
those people has actually never relented.

00:37:05
Meaning let's say that you were able to produce like 1000 lines

00:37:09
of code per hour, per day, whatever. Your boss probably

00:37:15
wanted you to produce even more. Like, there's more features or

00:37:18
bugs, there's more innovations, more products, etcetera.

00:37:21
And so if you use AI... that's why I was saying that

00:37:25
it's so exciting to be seeing people go into these multi-agent

00:37:29
ways of working. And so the person that uses AI

00:37:34
is going to run laps around the person that doesn't.

00:37:37
But do you think we'll have more, like,

00:37:39
frontline engineers in a decade than we do today?

00:37:43
It's very hard to estimate in, in like in total numbers, right?

00:37:48
But I do think we'll definitely have more software, right?

00:37:51
Like, right. Yeah.

00:37:52
The demands, we all agree on that.

00:37:55
The, the definition of an engineer will likely expand in

00:37:58
such a way that I can tell you definitively, yes,

00:38:01
right. Like, because we're putting

00:38:04
software generation in the hands of

00:38:07
everybody. As a whole, there will be, you know, hundreds of

00:38:12
millions, billions of people creating software. Will they be a software

00:38:16
engineer as we understand them today?

00:38:19
I can't even, you know, I, I had a tweet a couple days ago that

00:38:23
Elon replied to say I, I was saying, like, I can't even

00:38:28
explain just how different software engineering will look

00:38:30
in five years because the perception that people have

00:38:35
today is of a code editor. Like that's it.

00:38:39
Like, if you watch a hacker movie, what is the idea of a

00:38:42
programmer? They're looking at a code editor.

00:38:45
But now I'm looking at my fellow factory builders and I don't see

00:38:53
them looking at a code editor anymore.

00:38:55
I'm increasingly seeing them talk to all these agents.

00:38:59
And so that to me already like if I was able to transport

00:39:02
myself to five years from now, I don't think software engineering

00:39:06
looks anything like what we do today.

00:39:08
And so it's very hard to say, like, are we calling those

00:39:11
people that are managing the fleet of agents engineers?

00:39:15
You know, most likely I would say it's, it's more on the

00:39:18
consumerization or democratization of software

00:39:20
creation. To a direct point you were

00:39:23
touching on earlier, do you think people should study

00:39:26
computer science right now? I think people really need to

00:39:31
understand how AI systems work, period.

00:39:36
They need a deep understanding of the technology stack, the

00:39:39
limitations of the technology, what makes them safe, what makes

00:39:43
them not safe, what makes them make mistakes, whether they're

00:39:47
good, whether they're not good.

00:39:50
I do have a little bit of apprehension about do the

00:39:54
universities today themselves know?

00:39:57
And I'll say most likely not, right?

00:40:00
Like, think about how fast the industry is moving.

00:40:03
Right. You're like, come work at

00:40:05
Vercel and figure it out here and skip university. Our

00:40:08
road map is moving week by week, right?

00:40:12
It's very hard to say, in Q3 2026, Vercel is going to look this way.

00:40:18
We're going to need to find a way to get the universities up

00:40:22
to speed on AI, on vibe coding or start to blur the lines

00:40:28
between university and industry. We've been running very

00:40:33
successful internship programs at Vercel, where I personally

00:40:36
spend a lot of time with the interns.

00:40:37
Like I'm picking their brain, they're picking mine.

00:40:40
Like this intern is like, you know, hanging out with CEOs and

00:40:44
CTOs, because, like, I do think that they are getting up to speed

00:40:48
really fast if they're in the right room.

00:40:50
I do think there's a little bit of a get rid of your ego.

00:40:53
Forget about everything you are good at.

00:40:55
Start from first principles. If you thought you were

00:40:59
really good at TypeScript, if you thought you were really good

00:41:02
at C, well, yes, but also someone that never knew how that

00:41:09
technology even worked will probably be generating lots of

00:41:13
it just like you very soon. They might even be learning

00:41:18
faster than you because they're able to try more things.

00:41:21
I'm very concerned in the sense of like people say, the junior

00:41:24
developer job is going away. Instead, I would argue maybe

00:41:27
there's nothing that's ever been more important than seeding your

00:41:31
company with junior developers that are more AI-native

00:41:35
and can show you new approaches, truly, to learning, to

00:41:41
iterating, to figuring out how to solve a problem when you

00:41:44
don't know all of the science behind it.

00:41:46
When you ask about computer science, computer science is a

00:41:48
very specific version of like how they teach you data

00:41:51
structures and algorithms, etcetera.

00:41:54
If they're leaving out this like pragmatic AI side of

00:41:58
engineering, I really liked what Elon said about like we're

00:42:01
nuking the term researchers at Grok, at xAI, we're all

00:42:05
engineers. I do think it's a very healthy

00:42:07
mindset to have. We're all engineers and we're

00:42:10
all figuring it out and we're all using AI to figure out as we

00:42:13
go. Taste has become, you know, a

00:42:16
keyword in how people think about the role of humans going

00:42:21
forward in AI. And I, I want to use that as a

00:42:25
lens to sort of untangle some of the coding stuff we're talking

00:42:29
about. Like, you know, obviously I

00:42:31
spend a lot of time thinking about writing with writing like

00:42:33
you can, everybody can have taste about stories because we

00:42:36
can all like read and assess them.

00:42:39
And obviously there are sort of the practitioners who sort of

00:42:42
know what goes into making it. But then there have always been

00:42:45
sort of critics of, you know, novels and writing that sort of

00:42:49
you can imagine with AI, the sort of line between the writer

00:42:54
and sort of this sophisticated critic of writing sort of is

00:42:58
collapsing because you can sort of use AI to write over time.

00:43:03
I guess with code. My my question is like, there's

00:43:07
sort of the, there's the product, there's like what what

00:43:10
is the thing outputted that you could have taste on?

00:43:13
And that's something that seems like it's being democratized,

00:43:17
like potentially writing. But won't this sort of code base

00:43:22
itself still matter? Or like, will we need engineers

00:43:26
that have like the taste for like, what is well written code

00:43:30
and like how systems are set up? Or are we just going to like,

00:43:33
yield that piece of it to agents?

00:43:36
Like, is the taste only about whether the product sort of

00:43:39
itself looks good, works well, or is it about like how the

00:43:44
actual code base is set up? When I first came to Silicon

00:43:47
Valley, one of the things that was in vogue was there were a

00:43:52
handful of people in the Valley that could create deep tech

00:43:57
software systems and also design beautiful products and I could

00:44:01
count them with like one hand. There were a handful of people

00:44:06
that I knew, and I aspired to become like them.

00:44:09
I've cared always a lot about the design and the engineering.

00:44:13
To me, that's what was so special about the Steve Jobs and

00:44:16
Apple mythology is that they literally had a slide in one of

00:44:20
their presentations of like, Apple at its best is the

00:44:23
intersection between computer science and liberal arts.

00:44:27
And they had a sign of like the intersection of two streets, one

00:44:29
called computer science, one called liberal arts.

00:44:33
I think the future will be more of that.

00:44:36
I would argue that the liberal arts is going to be overweight

00:44:40
because the computer science part, well, the agents will

00:44:43
figure out all the data structures, algorithms,

00:44:45
optimizations, blah blah. And so your ability to infuse

00:44:49
creativity, taste, culture, design will probably matter even

00:44:55
more in the future. You know, I'm happy to hear the

00:44:58
argument for like, the liberal arts major, but I still think,

00:45:02
you know, tech companies still need to build things today.

00:45:05
And it's not just about like clever billboards and clever

00:45:08
marketing. Yes, but there is a discovery

00:45:11
element to technology like there's just so much technology

00:45:16
that is possible and yet we don't know it's available to us

00:45:20
and so when people say this stuff, I'm like, oh, I

00:45:22
have a distribution platform. This is what everybody seems to

00:45:24
want. I just need to do what you guys

00:45:26
are saying is the easy part. Oh, just cobble on some tech

00:45:29
thing to sell people. It's like, oh, I have people's

00:45:31
attention. It's like clearly I'm deficient

00:45:34
in figuring out the, you know, tech thing to just like cobble

00:45:37
on. But to me, from where I sit,

00:45:39
it seems harder than some of the vibe coding takes.

00:45:43
It's like, oh, you can't just spin up.

00:45:44
You can't be like, oh, I have attention, let me just spin up.

00:45:48
There is a subtlety there that you're

00:45:50
pointing out, which is that even if you figure out a huge

00:45:52
platform, you need to create something that's great for that

00:45:55
platform. And so if software is getting

00:45:58
democratized, like maybe that's what I mean when I say that

00:46:02
everybody can cook. A lot of it is like solving your

00:46:05
own problems and creating tools for you and your

00:46:08
colleagues and things like that. Creating software

00:46:12
at scale, like if you get the biggest like Times Square sign

00:46:18
and you're going to get every eyeball on the planet and you

00:46:20
want to give them something of high value.

00:46:22
It's really hard. Yeah, you can solve your own

00:46:25
problems. You know what you need.

00:46:26
But to solve some sort of general problem for a bunch of

00:46:29
humans is a hard product. Building that is very

00:46:32
hard. My argument was there's

00:46:35
something really interesting about the Steve Jobs story

00:46:37
because he wasn't an engineer, or at the very least he wasn't

00:46:39
an engineer in the sense that Wozniak was an

00:46:43
engineer. And so there is this huge skill

00:46:46
set to figuring out one, what are the boundaries of the

00:46:51
technology? What are, what are the limits of

00:46:52
the technology? Two, how do you best present that technology? Three,

00:46:58
how do you capture people's attention and distribute it?

00:47:01
And so I think Steve Jobs was really good at these things.

00:47:04
And then he, quote unquote, used the team of agents, the

00:47:08
people that he recruited. Well, as human beings, right?

00:47:11
Yeah. But you get to kind of see where

00:47:14
this is going, right? There will likely be a bunch of

00:47:18
Steve Jobs that figured out that trifecta and are offloading the

00:47:23
hard engineering work to agents. But also there might be

00:47:28
teams constituted almost entirely of Steve Jobs people

00:47:32
like, we're all tastemakers in this room.

00:47:35
We all bounce ideas around and like what kind of products we

00:47:38
want to see in the world. We do a bunch of research.

00:47:40
We talk to customers, but we're basically like, I would say it's

00:47:45
almost like the return of the product manager.

00:47:48
I think you're referring to that iconic moment in culture of the

00:47:53
PMs by the pool. The intersection of engineers

00:47:57
resenting product managers and general Twitter loathing of

00:48:00
women, Yes. I'll tell you, like, I don't

00:48:02
think people will have seen this coming, right?

00:48:06
But there is a return to, you know, well, do you need more

00:48:10
than the PM, the person sort of like understanding markets,

00:48:15
customers, requirements, etcetera.

00:48:18
And so it's almost like a pendulum swing back and some on

00:48:21
some levels at least. But yeah, I think the, the, the

00:48:24
reality is likely to be a lot more nuanced than any of this

00:48:27
prediction. But my personal

00:48:30
prediction is, in five years, this whole industry does look

00:48:34
extremely different. Extremely, extremely different. A six-month

00:48:37
prediction: six months from now, what will be different?

00:48:41
Six months. I do think there's going to be

00:48:44
maybe even sooner than this. I think people are

00:48:46
underestimating GPT-5. I think they're judging it by,

00:48:50
again, judging a model is becoming an important industry,

00:48:55
right? There's companies like LMArena.

00:48:58
My take is, I'm like, I really need to sit

00:48:59
with a model and sort of really understand it.

00:49:02
And that's an emergent thing. I don't remember technologies

00:49:04
where, like, you really had to sit with them for, like, ages

00:49:08
to know them like a person. Right.

00:49:11
It's like, yeah, you want to assess their soul, like on the

00:49:13
first meeting, you know, exactly.

00:49:15
We have to sit with it and sort of see what it can do.

00:49:18
It's like your 30-60-90 day plan for a new hire

00:49:24
or a new executive. Like, you have to treat a model the same way.

00:49:26
Underestimating GPT-5? Yeah, I think.

00:49:28
I think people are. I really think people are.

00:49:31
I also think people are not realizing that these vertical

00:49:34
agents, there's probably going to be some agents, maybe some

00:49:38
like agents for video editing or creating video or creating ads

00:49:43
or you know, I can't tell you exactly what.

00:49:45
There's going to be one that we're all going to be talking

00:49:47
about. Interesting.

00:49:48
Yeah, yeah, we need a breakout. ChatGPT

00:49:50
has so much defined the use case.

00:49:52
We need sort of a vertical niche where it's like it's going to

00:49:55
solve this problem, do this thing. Calendar.

00:49:57
Something along this line. Like, it seems so odd, but how did we

00:50:01
not do it as soon as ChatGPT came out?

00:50:03
We didn't do it because we didn't have the infrastructure,

00:50:06
we didn't have the best practices, we didn't have the AI

00:50:09
gateways, we didn't have the agent frameworks.

00:50:12
We built a lot of this stuff. And so there's going to be an

00:50:16
obvious in retrospect application with a huge user

00:50:21
base. That's my

00:50:22
perspective right now. That Gmail is not solving, look, that the

00:50:25
model is not in my email yet, to me is the... I mean, Superhuman

00:50:28
wants to do it, but to me like Gmail, it's like sitting there.

00:50:31
There's a lot of risk. Obviously I would be a bad

00:50:34
reporter if I didn't ask. You can dodge or not.

00:50:37
The Information says you're getting approaches at $9 billion.

00:50:41
Do you have anything to say about that?

00:50:44
No comment. No comment, speculation.

00:50:46
All right, Guillermo, thank you so much for coming on the show.

00:50:49
We have to do this again. This was a lot of fun.

00:50:51
Hey Eric, always a pleasure.