The Moments That Matter from Cerebral Valley London
Newcomer Pod · July 18, 2025 · 00:50:13 · 45.98 MB

Fresh back from London! In this episode, Eric Newcomer reunites with co-hosts James Wilsterman and Max Child of Volley to dive into the best moments from the Cerebral Valley AI Summit.

From buzzy startup founders to incumbent innovators, the London event showcased a rapidly evolving AI landscape. But what stood out most? Eric, James, and Max break down their favorite clips and debate the key questions driving the industry right now:

  • Models vs. Applications: Are we back in a “models win” moment?
  • The Uber Dilemma: Why did Uber spin off its self-driving technology instead of competing with Waymo? Dara Khosrowshahi’s reasoning sparks a heated debate about platform dynamics, marketplace power, and whether Uber made the right call.
  • Figma’s IPO Moment: With their S-1 filing just days after the summit, Dylan Field defended Figma’s AI strategy and new product launches.
  • The Science of Discovery: Can AI models trained on “boring rule followers” actually make Nobel Prize-worthy breakthroughs?
  • The End of Reading?: Synthesia’s CEO made the boldest prediction yet — that kids won’t read anymore, and video will replace text entirely. The hosts wrestle with what this means for human intelligence and whether writing really is thinking.


The next Cerebral Valley AI Summit returns to San Francisco on November 12th!

Timestamps
08:21 Uber's Self-Driving Strategy and Market Positioning
16:56 Figma's IPO Bear and Bull Case
26:17 Harry Stebbings' Interview Insights with Granola's CEO
32:11 Exploring AI's Role in Scientific Discovery
38:26 The Impact of AI on Reading and Writing



00:00:00
This episode is brought to you by Forethought.

00:00:03
Most companies build their customer experience in pieces.

00:00:06
Sales in one system, support in another, and onboarding

00:00:09
somewhere else. Forethought brings it all

00:00:11
together. Forethought is an AI system

00:00:14
made up of advanced agents that handle sales, onboarding,

00:00:17
support, and retention. Each team manages its own

00:00:21
agents. The customer sees one unified

00:00:23
experience. Forethought powers over a

00:00:26
billion interactions every month for brands like Scale AI,

00:00:29
Cohere, Airtable, and Upwork. Learn more at Forethought dot

00:00:34
AI. Welcome back to America.

00:00:41
I'm here with Max Child and James Wilsterman.

00:00:45
Hey, guys. Hello, Eric.

00:00:47
Hey, Eric. We, we had a fun European

00:00:50
adventure, what, a couple weeks ago with the Cerebral Valley AI

00:00:55
summit in London. Yeah, we, we had a blast.

00:00:58
I don't know what. What did you guys think about

00:01:00
it? I thought it was a tremendous

00:01:01
experience. I mean, it really drove home how

00:01:04
exciting the AI scene is in Europe, which I think you know,

00:01:08
in the sort of mobile and web eras, maybe the scene was a little

00:01:11
bit beleaguered compared to America, but I felt like AI

00:01:14
startups and big companies are totally popping off there.

00:01:17
I mean, we had Granola, we had Lovable, we had Synthesia, Photoroom,

00:01:21
and then, you know, on the big company side, obviously we

00:01:23
had Wayve and Uber, which was super exciting and so.

00:01:26
Well, in classic Cerebral Valley, I feel like the

00:01:29
buzzy startup founders are the real celebrities.

00:01:31
You know, it's like, I mean, people are excited about Dara

00:01:34
and we had Dylan Field at Figma, which, you know, was like a

00:01:37
week away from filing its S-1. But yeah, I feel like the

00:01:41
Lovable guy and the Granola guy, you know, were definitely sort

00:01:45
of people of interest at the event.

00:01:47
My Spidey sense for startup heat was very

00:01:52
tingly around the Lovable guy. He was Anton.

00:01:56
I interviewed him. But like, even just walking in,

00:01:57
he was being surrounded by like investors and partners and other

00:02:01
startups, and you could just get the vibe he's a

00:02:04
little mini celebrity in the AI scene.

00:02:07
He's a, he's a unicorn now. He's a.

00:02:09
Unicorn now as of like today or yesterday?

00:02:12
Yeah, how many? Like how many coding-AI-adjacent

00:02:16
companies are unicorns now? We've got, like, Lovable, Replit,

00:02:20
Cursor, Windsurf. It just, like, goes on and on.

00:02:24
And the narrative moves so fast. Is it good to be a

00:02:28
coding startup now, or are we already like, oh man, cost of

00:02:31
goods sold and everything are out of control?

00:02:33
And Windsurf was falling apart and luckily they were able to

00:02:36
save it. I don't know, right?

00:02:38
Since Cerebral Valley, though, we've had people like bouncing

00:02:41
between Anthropic and back, and to Cursor and back, and we've

00:02:45
had this whole saga of Windsurf getting chopped up.

00:02:49
I don't know. If I had to stick my finger in the air right now,

00:02:53
I feel like the mood is it's good to be a model again.

00:02:56
I mean, you've got a big sentiment that Anthropic and

00:03:02
Claude are key to coding, and you'd sort of rather be Anthropic than

00:03:06
Cursor. And obviously we're recording

00:03:09
this on Thursday, it's coming out on Friday.

00:03:11
I mean, OpenAI just did a live stream about agents, and it seems

00:03:15
like, oh, we're figuring out this agent thing.

00:03:17
I don't know, you know, at once, I think they're

00:03:19
trying to be supportive of MCP, you know, the sort of generic

00:03:23
open source interface across agents.

00:03:25
But then they're making the announcement that like, oh, that

00:03:28
thing you're trying to build a company around.

00:03:29
Yeah, we do. We do that too.

00:03:31
So I guess my take at the moment is models are back in ascendancy,

00:03:35
which you'd hope so given their valuations.

00:03:39
Yeah. I mean, I think obviously we've

00:03:41
had this debate a million times, but I think ultimately it is

00:03:43
good to be the company that has put, you know, 5 to 10 billion

00:03:48
dollars to work on something rather than the company.

00:03:52
So, yeah, yeah, than the company that is

00:03:56
slightly repackaging the thing the other guy did for five to

00:03:59
$10 billion. Like, I mean, there are

00:04:01
obviously I think application use cases here, but you know, in

00:04:04
the sort of. Bullish on applications, yeah.

00:04:06
Oh, no, I, I am bullish on applications, but I do think,

00:04:09
you know, anything that's sort of close proximity to the model

00:04:12
itself, which I think a lot of these coding startups are very

00:04:15
close proximity to the model itself in many ways is a

00:04:18
dangerous place to be because, you know, Dario and Sam

00:04:21
have got their eyes on that space.

00:04:23
It's like you don't want to be the most popular application

00:04:27
that makes the most money because then you're square in

00:04:29
the targets of the foundation model labs.

00:04:33
It's like you kind of want to be like making good money, but not

00:04:36
like the number one use case of LLMs.

00:04:40
We're going to get into clips, so this episode is really... we're

00:04:42
going to engage with the key moments at this Cerebral Valley AI

00:04:45
summit, which I think will be fun.

00:04:46
Well, you can go watch them all on YouTube, and I think they're

00:04:49
all still pretty current even though AI moves fast.

00:04:52
But we're bringing you our favorite clips so that if you

00:04:55
don't have the time, you can get the best of what happened there.

00:04:58
I wanted to say two different things.

00:05:01
One on London. I think one thing I really liked

00:05:03
about the event is that we took a sort of like, AI is global.

00:05:06
We're not getting dragged into like, which city is the best?

00:05:09
I mean, obviously London has deep mind and there are like,

00:05:11
good companies in Europe, but it's like, yeah, it's a global

00:05:14
world, all connected on the Internet.

00:05:15
There are great people everywhere. Like, I think like half the

00:05:18
audience was London and half came from elsewhere probably.

00:05:21
Like, I don't know, 1/4 US and a quarter rest of Europe, I don't

00:05:25
know. Yeah, decent amount of like

00:05:26
France and Sweden and things like that for.

00:05:29
Sure, though we had a train... yeah. Public transportation

00:05:32
infrastructure, despite being like much, you know, better on

00:05:36
its surface, uh, consistently we had transportation problems

00:05:40
across Europe. Eurostar was in trouble from

00:05:44
France, and then I had my flight delayed out of France from an

00:05:47
air traffic controller strike. So I don't know.

00:05:50
They brag about public transit. There's all this transit and then they

00:05:53
can't keep it working. It's like their healthcare

00:05:56
system or something, you know, works great for most

00:06:00
people most of the time and then falls apart at the

00:06:05
limits. The other thing is we have

00:06:07
not officially committed to going back to London.

00:06:09
It's like we need a sponsor bidding war

00:06:14
between New York and London over which one we head to

00:06:19
first. So if you have a strong

00:06:21
horse in the New York versus London race, reach out about

00:06:24
sponsoring. I would just say in terms of

00:06:27
vibes at the event, the London audience was dramatically

00:06:31
more engaged than the New York audience.

00:06:32
I would say they were excited to be there.

00:06:36
They stayed for the whole event. They engaged with the

00:06:38
speakers. They wanted to chat and meet

00:06:40
afterwards. Whereas, well, those New Yorkers, they're

00:06:42
grinding. They're like, I've got a fucking

00:06:44
job, like. New Yorkers are like, I've got a

00:06:46
happy hour at 3:45 that I've got to be at, so the 4:30 speakers

00:06:51
will not be. Fully attended.

00:06:53
New York invited. Before I go back to work.

00:06:55
Until... yeah, you live in New York, you're a bit of a

00:06:57
booster. You did have a little bit of

00:06:59
like, you know, we had Goldman, we had Snowflake, it had a

00:07:02
little bit of like big company tries to like embrace AI,

00:07:06
whereas I do think it turned out that Europe had some of the

00:07:10
cooler up and coming startups. I mean, I think Rogo, you know,

00:07:14
which we had at the fintech event is buzzy New York

00:07:17
application company. I mean, there's certainly, I'm

00:07:20
sure if we dug into like finance AI startups, we'd get

00:07:24
more of the like application excitement in New York.

00:07:27
I flew back from San Francisco a couple weeks ago to be on a New

00:07:34
York Tech Week panel. And of course, we, you know,

00:07:37
from one of the New York boosters, Julie Samuels.

00:07:40
I don't know what it is. I forget what her group is these

00:07:42
days, but, you know, she does advocacy for New York's tech

00:07:45
scene, was like, oh, what do you think about the New York tech

00:07:47
scene? And I gave the most like, oh,

00:07:49
pro San Francisco, just like, you know, the beauty of New York

00:07:53
to me is that people come through.

00:07:54
It's everybody's like second favorite city no matter where

00:07:56
you are. But I'm not sitting here

00:07:58
claiming that New York is the center of the tech universe.

00:08:02
And they're like, you are not invited to next year.

00:08:05
I know, I just believe I've said this a million times, but I

00:08:10
agree with Ryan Hoover, the product hunt guy who said to me

00:08:15
during the pandemic, Silicon Valley is online.

00:08:17
Like come on, it's like it's on the Internet.

00:08:19
Like who cares where people are? All right, back to the present

00:08:23
moment. A lot's happened. We're now in

00:08:25
this like AI spending war for employees, obviously with Meta,

00:08:29
which is I think the most present happening.

00:08:32
Obviously the Scale deal had just happened when we started the

00:08:35
London event. Yeah, let's get into clips and let's react to

00:08:39
some specific lines from the most recent Cerebral Valley AI

00:08:42
summit. I mean in the Uber case, I mean

00:08:44
you came in, you did spin out Aurora. Like, how did...

00:08:48
You... We merged ATG into Aurora. Yeah, and then it became its own

00:08:52
company. How did you decide, I guess that

00:08:55
self-driving wasn't there in 2017, or that this shouldn't be

00:08:59
something to invest in? Or how did you go to sort of all your

00:09:02
technologists saying, oh, it's just tomorrow, just tomorrow, and

00:09:06
say, I'm not convinced? Listen, I think, Eric, it was a

00:09:09
really tough decision at the time.

00:09:11
And one is we were deeply marked by the fact that we had an

00:09:15
accident and lost a life there. And so the responsibility that

00:09:21
you hold in your hands when you're operating in the real

00:09:24
world hit us deeply. I think the other circumstance

00:09:30
that affected the ultimate judgement was that we, we ran

00:09:34
ATG very separately from mainline Uber because we wanted

00:09:39
the opportunity to partner with the rest of the ecosystem.

00:09:41
Again, like we, we don't just work with, you know, we don't

00:09:45
just have Priuses on our platform.

00:09:48
And so forth. We want to work with the entire ecosystem.

00:09:52
And when I went and talked with some of the other AV developers,

00:09:55
I'd say, you know, ATG, it's in the family, but it's separate.

00:09:59
We're going to treat you fairly, we're not going to share data,

00:10:00
etcetera. You know, when you're having a

00:10:02
conversation, actually, like I usually do with you and I'm

00:10:05
saying stuff and you don't believe a fucking thing I say,

00:10:07
pardon me. That was the response.

00:10:11
Like, we were having conversations, but I think

00:10:13
people didn't trust us. Yeah, because they also viewed

00:10:16
us, rightly so, as a competitor, right.

00:10:18
So we had a decision to make. Do we go with a proprietary kind

00:10:24
of vertical strategy or do we truly partner with the industry?

00:10:29
We couldn't have the best of both worlds, and ultimately we decided

00:10:33
that a partner model was the right model going forward

00:10:36
and based on, you know, basically all of the leading

00:10:39
players, sans Tesla, everyone else is open to working with us,

00:10:44
etcetera, I think it was the right decision to make.

00:10:46
I guess I don't really buy this explanation honestly, because

00:10:52
like, if you think of, like, Amazon today, right, they sell a

00:10:56
lot of their own, you know, Amazon branded products on their

00:11:00
marketplace. It pisses off their suppliers

00:11:03
and they do it anyway because it's a good business

00:11:05
idea. Like, I just don't really buy

00:11:07
that this is a good enough reason not to invest in your own

00:11:09
self driving technology just because other providers on the

00:11:15
marketplace won't be happy about it.

00:11:17
Right. I mean, the premise of my

00:11:18
question was that I thought it was skepticism on the imminence

00:11:23
of the technology working and Uber's team particularly working

00:11:28
and Aurora, which is what they spun off has not really

00:11:31
delivered on a major success. That might be the real answer, I

00:11:35
But that was... So to me, I did think like a

00:11:38
call on the quality of the technology was the reason.

00:11:41
But I mean, I don't know, I take him at face value.

00:11:44
I mean, it's interesting, too, that his actual first answer was that

00:11:49
they had such a negative PR situation that could they really

00:11:53
do it? And I, I think it's easy to

00:11:56
forget how much Dara was brought in to like clean up the PR

00:11:59
situation. It's like we had the hack.

00:12:01
And they had literally, you know, launched self driving cars

00:12:05
in the firestorm of their PR nightmare without the approval

00:12:10
of like San Francisco. It was like just brazen.

00:12:13
I mean, I do ultimately think that if Uber can sort of

00:12:16
maintain its, like, marketplace, you know, middleman position

00:12:20
between riders and drivers, even if there are no drivers and

00:12:24
then, you know, it's just Waymos or, you know, in the case

00:12:27
of Cerebral Valley London, it was Wayve they were partnering with.

00:12:29
And it's plausible that for, you know, 120 different countries

00:12:33
around the world, there's going to be, you know, 60 different

00:12:35
self driving companies you need to ultimately partner with, you

00:12:38
know, to, to serve self driving cars to people.

00:12:41
And all these different locations that Uber may

00:12:43
ultimately be the sort of aggregator of this need for a

00:12:47
car, whether it has a person behind the wheel or not.

00:12:50
Right. And I think if that strategy

00:12:51
works, I think they're in an awesome position because kind of

00:12:54
being the middleman in this massive market of personal

00:12:57
transportation is really great.

00:13:00
I think what the challenge is, is in San Francisco,

00:13:03
there's this sort of Waymo brand loyalty building among the tech

00:13:07
community. I think like people like James,

00:13:10
you know, are like, I basically only take Waymos, even if

00:13:12
they take longer and they're more expensive.

00:13:14
And I use the Waymo app, right? And so it's an interesting

00:13:17
question. Like for someone like James, I

00:13:19
would pose, like: if Waymos were in Uber in any city you wanted

00:13:23
to visit, and you could just instead of clicking Uber X, you

00:13:26
click Waymo. Like would you just go back to

00:13:28
using the Uber app because like sometimes you want to use the

00:13:32
Uber X because it comes faster and sometimes you want to use

00:13:34
Waymo because you like it more like, or would you be like, no,

00:13:37
I'm always using the Waymo app, you know?

00:13:39
I do think a lot of Uber success here depends on their ability to

00:13:43
specifically convince Waymo that they should not deploy in every

00:13:47
city and need to participate in the platform.

00:13:51
I mean, you know, and they're running those experiments now.

00:13:54
So it'll be interesting to see where Waymo thinks the power

00:13:57
resides. Yeah, I think, I think I would

00:14:00
say that, to that point, like, if I know there are Waymos in

00:14:05
certain cities that I can only access through Uber, it'll probably,

00:14:10
you know, switch the habit back to just using Uber everywhere,

00:14:12
including in San Francisco. If I can access Waymo's through

00:14:16
through the Uber app. All of this though is just, you

00:14:20
know, pointing back to my original point, which I just

00:14:22
think Uber would still be better off if they owned something

00:14:27
approximating Waymo right now. Like I just don't see a world

00:14:30
where they are worse off if it's.

00:14:33
Waymo quality. If it's Waymo, sure.

00:14:36
Yeah, yeah, yeah. And I who knows if they could

00:14:38
have gotten there. Right, Amazon knows that they

00:14:40
can deliver. You know, batteries just as well

00:14:43
as Duracell or whatever, yeah. And Costco and places, they

00:14:46
pick lanes where they see where they can offer something.

00:14:50
I'm not saying it was like a bad decision to spin this off

00:14:53
originally, I just think it's disingenuous to say that he's

00:14:57
only doing this for platform neutrality.

00:15:00
Well. Literally like a day or two

00:15:02
after the event, there was reporting in the New York Times

00:15:04
that Uber might be working with Travis Kalanick to like acquire

00:15:08
a Chinese self driving car startup.

00:15:10
So it's very possible this is on Dara's mind: that

00:15:14
they're all these Chinese startups that have actually

00:15:17
caught up really well in self driving.

00:15:19
And we spun off the one that wasn't succeeding and cost us a

00:15:23
ton of money. Now everybody's ripping each

00:15:25
other off and they're all going to catch up to each other just

00:15:28
like the language models. And so then we have our white

00:15:31
label version, so nobody can push us around too much. Like,

00:15:35
yeah, I think that's very plausible.

00:15:37
And they skip a lot of the needless money losing that's

00:15:41
hard to do on the public markets.

00:15:43
It's like Google Pixel or something, right?

00:15:45
Yeah. You want the flagship that just

00:15:47
kind of shows how to do this and you want to learn even if

00:15:50
you're, you know, also distributing Android to other

00:15:54
phone makers. I do think, yeah, to Eric's

00:15:56
point, Waymo is so head and shoulders above the field right

00:15:59
now that you would just be terrified that you're going to

00:16:01
lose Waymo. And so you got to balance these

00:16:02
considerations of like, is it going to piss off Waymo if we

00:16:06
work with this Chinese startup? Like, but what if Waymo just

00:16:09
walks away tomorrow? Like we better have a backup

00:16:11
plan, you know, so it's, it's a challenging place to be and.

00:16:13
I think the Waymo brand is so good right now.

00:16:16
It got started in like Jaguars. You have none of the baggage of

00:16:20
like bad Uber rides. It's just like it's such a good.

00:16:25
The NPS on Waymo has got to be like great, relatively, for actual

00:16:29
like riders. Yeah.

00:16:31
Anyway, it's uh, it's, I mean, it's like a quintessential

00:16:34
technology debate question, which is just like, what are the

00:16:37
platform dynamics? Who actually has leverage?

00:16:39
And it'll be interesting to see it play out.

00:16:41
But it sounds like we all agree that it would serve Uber to have

00:16:45
some credible self driving car technology of its own, at least

00:16:49
to understand the fundamentals and have a little bit of a

00:16:53
competitive edge against potentially bullying partners.

00:16:57
All right, well, we did Uber. Let's do the other big headliner

00:17:00
from the event, Dylan Field, the CEO of Figma, who literally like

00:17:05
he seemed very busy when we saw him and I can see why.

00:17:10
I think they literally filed their S-1 the next week.

00:17:14
So very appreciative that he made it to London at sort of the

00:17:18
busiest possible moment for a pre IPO company.

00:17:22
I wanted to talk about sort of the company building piece of

00:17:24
this for a second. I mean, you're, you're clearly

00:17:28
sort of moving towards going public.

00:17:31
I'm curious like what your decision making process has

00:17:34
been. We're in this moment where

00:17:36
Stripe, SpaceX, like it feels like they can stay perpetually

00:17:40
private. What's sort of the impetus to

00:17:43
potentially move Figma towards the public markets?

00:17:46
Yeah. I mean, I can't talk, like I told

00:17:48
you about anything regarding like that process, but I think

00:17:53
that just the important thing that I've been like making sure

00:17:58
that I'm grounded in every day. Whatever happens with going

00:18:02
public, not going public, is: this is like the moment to make sure

00:18:07
that we are all in on product, whether that's me, our team,

00:18:13
like there is so much to build, so many workflows changing, so

00:18:16
much happening and we have such opportunity to help our

00:18:19
customers. So that's what we're pushing for

00:18:22
every day. It's funny to hear that played

00:18:24
back, like as an interviewer, him being like, I told you, I

00:18:27
couldn't talk about this, is like seared in my mind in a way that

00:18:31
watching it was like, OK, he said that and he moved on.

00:18:34
I do think he could have been slightly less defensive.

00:18:37
To me, it's always better just to be like, they ask

00:18:40
questions they feel like they have to ask and I can say

00:18:43
whatever the fuck I want. You could say something totally

00:18:45
different. He could have pretended like I

00:18:47
didn't even ask the question, you know?

00:18:48
Anyway, besides that little like bristle at my just doing my

00:18:53
job and asking a question, it's clear that they, they feel the

00:18:56
need to have like a great sort of pipeline of products heading

00:19:00
into an IPO and they, they literally are presenting

00:19:03
themselves as having launched, what, four fresh products

00:19:07
just this year.

00:19:09
My take is that he like actually believes that they're in a

00:19:15
disruptive moment. We talked about it

00:19:18
on the podcast before, where technology is changing and will

00:19:22
affect them in many ways through AI.

00:19:25
And then also, you know, it's a message to his team to say

00:19:29
like, you know, even though we are going public and this is an

00:19:32
amazing milestone for the company, like don't get

00:19:35
distracted because we actually have to fight those battles and

00:19:39
build product. Yeah.

00:19:42
I think it's interesting given our experience with these vibe

00:19:46
coding tools over the last couple months, James, that like

00:19:49
to me it increasingly seems like the bottleneck for vibe coding

00:19:53
or AI assisted coding or whatever you want to call it is

00:19:56
the quality of the design. I would say that like the

00:19:59
designs you get out of these tools, whether it's, like,

00:20:03
Cursor or Lovable or Claude Code or whatever, are just garbage, like

00:20:07
really bad. And it's a sort of interesting

00:20:10
question if like Figma is extremely well positioned in

00:20:13
that case, because they have all the designers, like all the good

00:20:17
designers in, you know, in software, essentially use Figma.

00:20:21
And they can actually convert this sort of group of people who

00:20:24
actually know how to make things that look and feel really great

00:20:27
and polished into becoming creators and developers.

00:20:30
Or if you go the other way, which is say like, Oh no,

00:20:33
everyone else is going to like actually grab these designers

00:20:37
and they're going to start understanding how to use Cursor

00:20:39
and Claude Code. And then once Cursor and Claude

00:20:41
Code level up on their quality of design by either training on

00:20:45
new data or just something else, then all these designers are

00:20:48
just going to be using Claude Code or Cursor, which is kind of

00:20:51
what we experienced in the company in the last few weeks

00:20:53
when we trained everyone on Claude Code.

00:20:54
It's like all of a sudden every person in the company,

00:20:56
whether in design or product or data or marketing can now

00:20:59
build apps. They all just look like garbage

00:21:02
because the models aren't there yet.

00:21:03
I think Figma is in a funny situation launching Figma Make,

00:21:08
which is their sort of Lovable, you know, what-have-you vibe code

00:21:13
platform. Vibe code platform, is that what?

00:21:16
We're calling the category. I mean, I don't know what

00:21:17
they're calling it, but that's what I'm calling it.

00:21:19
Yeah, I mean. It puts them in a weird position

00:21:21
where they have to sort of defend the category, which I, I

00:21:25
think is still like messy and broken.

00:21:27
I, I think they, they created like Figma.

00:21:30
I mean, a lot of what Figma is doing is like sort of you're a

00:21:33
marketer or you're a website builder.

00:21:36
We're going to give you sort of a lightweight version of Figma.

00:21:38
So it's easier for you to get in and do what you want.

00:21:41
And so they created Figma sites for web creation and that has

00:21:45
like a Figma Make integration. And in some ways I think that

00:21:50
makes more sense with Figma's like long term plan where it's

00:21:54
like deeply integrated with their design elements.

00:21:57
And Figma Make feels a little bit like, all right, there's

00:22:01
this sexy thing that everybody has.

00:22:02
It's just sort of like a wrapper over models.

00:22:05
If there's going to be so much zeitgeist around it, let's have

00:22:08
it too. But I agree that the real

00:22:10
opportunity for Figma is that like they have all your design

00:22:13
assets. They have all your design

00:22:15
assets. It's a moment when context is

00:22:18
seen as like extremely important and Figma has all your context

00:22:22
on how you design things. And so clearly the more they can

00:22:26
make this Make thing much more like related to the actual

00:22:29
design tools and assets that Figma has, it'll be much more

00:22:34
valuable. And then it'll be truly

00:22:35
differentiated from Lovable and everybody.

00:22:38
Else, I mean, for the record, like I don't know if they're

00:22:40
gonna do this, if they have the talent to do this, or if

00:22:42
it's a focus of theirs, but they have this tremendous opportunity

00:22:46
to essentially, like, train the, you know, first good

00:22:50
model in terms of design for, you know, for vibe coding or for

00:22:55
creating software, right? I mean they have again basically

00:22:59
every like good design of software you know is on Figma

00:23:04
for the last like I. Don't know they can actually use

00:23:06
that data to train, but yeah. I mean, they should, I don't

00:23:09
know. They should.

00:23:10
Their lawyers should try to find some loopholes on this.

00:23:12
Yeah, I guess. What?

00:23:13
Would you be outputting? You'd just be outputting the

00:23:15
layers themselves, not the code, you're saying? Like.

00:23:18
I don't know how it would work exactly.

00:23:20
Maybe you create your own version of Claude Code.

00:23:22
Maybe you create your own version of, you know, a super

00:23:24
enhanced version of Lovable that just makes really

00:23:26
nice looking websites instead of pretty crappy looking websites

00:23:29
like. But I'm just saying, you know,

00:23:31
to Eric's point about like context is king, you know, i.e.

00:23:34
like the data on how people, you know, use these tools or create

00:23:37
things with these tools. Like Figma has the greatest

00:23:40
goldmine of design context like in the world.

00:23:43
And I don't know how they capture that or if they're

00:23:45
thinking about it. But to me, it feels like this,

00:23:47
this massive upside opportunity to be the creation tool that

00:23:51
actually makes things that look nice and work.

00:23:53
Well, the other thing Dylan bristled at me about was that I

00:23:57
said, you know, oh, when I use these like tools, I just like

00:24:01
fucking copy and paste, you know, Substack or whatever and

00:24:04
say like, build me Substack. And he's like, well, you're, I

00:24:08
don't know, it's sort of like you're lazy.

00:24:09
Not everybody else does that. I don't know.

00:24:11
But I do think like a lot of what works about these is you

00:24:15
start with some visual design that you like and sort of

00:24:19
replicate it. I don't

00:24:20
know. No, I'm fully on your team here.

00:24:23
I feel like most design is remixing, you

00:24:27
know. Sort of, remixing is the kind of like generous

00:24:30
version and I think the ungenerous version would be a

00:24:32
lot of copied design.

00:24:34
I think Figma is a tool where you're trying to build it up on

00:24:36
the pixel level, but a lot of these are like text-based, almost.

00:24:40
It's either you're asking the thing to just arbitrarily come

00:24:45
up with something you don't know how, or sort of like copy and

00:24:48
pasting, putting things in. I, I realize you can put in your

00:24:50
own stuff, but I, I do think there's a lot of like copying

00:24:54
that's going on. I think a lot of this take

00:24:57
right now is going to age like milk and in a year we're going

00:25:00
to be looking back on this in a world where the foundation

00:25:03
models have just chosen to train more on on good design.

00:25:07
You know, it won't be like Eric having to cut, copy paste

00:25:11
pictures of Substack to like copy it.

00:25:13
It'll just come out with better looking designs when you vibe

00:25:15
code, and I think that that's the future.

00:25:18
You can feel the AGI today, James.

00:25:21
Yeah, Yeah. I just, I just think the models

00:25:23
haven't like, you know, trained like the coding models haven't

00:25:28
focused on this yet and they will like it's just a matter of,

00:25:31
you know. Actually like a super short

00:25:34
Figma case then. I mean, I'm just. Saying like.

00:25:36
I guess that's why. I'm saying they should be in

00:25:37
that race like I'm saying like they've got this goldmine.

00:25:40
They've got to get in here with these other models like, you

00:25:43
know, they could. They could be the leader.

00:25:45
I don't agree with the goldmine because they don't actually have

00:25:49
the connectivity to the code like at some point.

00:25:53
Like the way that Claude Code works is it like generates

00:25:56
actual front end, you know, CSS? Yeah.

00:26:00
So I just think, you know, they, they have like a goldmine of

00:26:05
great looking imagery. All right, well, breaking news.

00:26:08
James is short Figma. James, James is shorting Figma. I.

00:26:12
Predicted at the start of this episode that we would be high on

00:26:15
models today and that's that's bearing out.

00:26:18
All right. Next up we've got Harry Stebbings, the only VC we

00:26:23
trusted to interview somebody. It helps that he has a podcast

00:26:27
of his own, so he knows what he's doing.

00:26:29
He interviewed the Granola CEO, Christopher Pedregal.

00:26:33
Here's the clip. My favorite quote is like will

00:26:35
the incumbent gain innovation before you know the innovator or

00:26:39
the startup gains distribution? How do you think about that

00:26:42
today, if you think about kind of OpenAI's ability

00:26:45
to do what you do in some respects and have

00:26:47
distribution versus your ability to move fast.

00:26:50
Be very nimble, innovate. Yeah, I think my my view here is

00:26:57
that we are in you know, when the iPhone came out and you had

00:27:00
like the beer drinking apps and the lighter apps like I still

00:27:04
think we're in that stage of AI, which might sound ridiculous

00:27:07
because whatever ChatGPT has 500 million users and is clearly

00:27:10
useful, but I really think that is the case. I don't think AI is

00:27:14
that useful today, and I think

00:27:17
it's mostly masquerading as useful, but you end up spending

00:27:20
a lot of time like rewriting or redoing it.

00:27:23
Another way to put that is it's going to be so much more useful

00:27:26
in one to two years. And and I think we have yet to

00:27:31
see truly AI native like interfaces and and products come

00:27:36
out. And I think that's the whole

00:27:38
question. Like I think the the it's not

00:27:41
about the products that exist today.

00:27:43
Those are. It's still like.

00:27:45
It's like the caveman days now. It's like who can get to the

00:27:49
truly AI native experiences and whoever builds those first and

00:27:54
gets distribution on that. That's that's my view.

00:27:57
And I think we're all everybody in the industry is pre that

00:28:00
right now. I basically agree with that, but

00:28:03
I'm not sure that means like bet on granola.

00:28:05
I don't know, it's like a little bit like I sort of

00:28:07
philosophically agree with him that there's a lot, you know, of

00:28:11
kind of latching AI onto existing products and workflows.

00:28:15
And even granola arguably, you know, is essentially just

00:28:17
latching voice AI onto a Zoom call essentially.

00:28:22
And the interface is, you know, not innovative.

00:28:24
It's like a notes app essentially with some folders.

00:28:27
It's a great product. For the record, I freaking love

00:28:29
granola, but I don't know if the the idea that like, oh, we're in

00:28:34
the beer drinking iPhone phase, so like, ergo, just wait one or

00:28:39
two years and then we'll see where the chips fall necessarily

00:28:41
makes me believe in someone other than OpenAI, I guess.

00:28:45
I mean, it's a great. Fundraising posture because it's

00:28:47
like, it's cool now. Like, think about what?

00:28:50
Wait till next year. Really.

00:28:52
Yeah. I mean, I feel like yeah, the

00:28:54
the missing, yeah, the missing piece for me was the the

00:28:56
explanation of what the evolved version of his product looks

00:29:00
like, which maybe he doesn't want to say, would have made

00:29:02
that argument a lot more convincing to me, right.

00:29:04
Yeah. I mean, it reminds me of like

00:29:06
the early social media days where, you know, first you have

00:29:10
all of these media companies just put their products on the

00:29:12
Internet, and then it turns out that the form factor that

00:29:17
succeeds dramatically is is Facebook or Twitter or

00:29:20
something, right? That's just like a totally

00:29:23
different product. But if we had interviewed the

00:29:25
Myspace CEO in 2003 and he had said, you know, we're not really

00:29:29
at the final form of social media, like, you know, it's

00:29:32
coming. And and then we're going to

00:29:34
really like unlock the value you get out of social media and

00:29:37
you'd be like, this dude's totally right.

00:29:38
But like, does that mean Myspace is going to win?

00:29:40
I mean, I don't know, it just it's sort of in, I feel like it

00:29:43
was a little bit of that kind of comment where it was like, so

00:29:45
you're saying like nobody's figured it out.

00:29:47
So ergo, like we're all just kind of killing time until the

00:29:50
models get better for. A year but but wouldn't the

00:29:52
wasn't Myspace like we're hot shit like I don't know.

00:29:55
I'm, I'm, I'm for the record, I'm not saying that Myspace CEO

00:29:58
would have said that. I'm just like.

00:29:59
Right. I'm just saying you'd prefer the

00:30:00
CEO who at least is like a realist about where we are if

00:30:04
he's correct. I guess if the Myspace CEO did

00:30:07
say that, they would be directionally correct in that

00:30:10
like, you know, most newspapers didn't win like incumbents,

00:30:15
right? But I guess you can argue like

00:30:17
this is where it gets to be like a tricky argument. Like OpenAI,

00:30:20
Are they really the incumbent or are they also just the, you

00:30:23
know, another startup that is, you know, in the game as much as

00:30:29
granola? I just don't get the I want.

00:30:32
I mean, I have a I want transcripts and I know they give

00:30:35
it to you, but like I don't want to query it for general

00:30:38
takeaways. Like I want it to give me

00:30:41
specific lines out of the conversation.

00:30:43
And so the whole thing sort of annoys me because it does the

00:30:47
opposite of what I want, which is you could give me a gloss

00:30:50
you. It shows you the whole

00:30:52
transcript, but I want it to deliver like here are the five

00:30:54
best quotes or like give me the specific verbatim thing that was

00:30:58
said and it's always trying to approximate sort of what was

00:31:01
said. I think the way that Harry asked

00:31:04
the question was like, how do you think of incumbents getting

00:31:07
to, you know, the product versus the innovator?

00:31:11
But like in this case, we're calling OpenAI an

00:31:14
incumbent, not an innovator. I don't know.

00:31:15
They're innovating. That's funny.

00:31:17
I mean, that's sort of the role Figma played where it's like,

00:31:19
oh, you're the incumbent executor now, like right.

00:31:21
And Notion is a similar one. There's this whole set of

00:31:25
companies that like, oh, in any normal sense, you'd be like,

00:31:28
this is my innovator moment to be on the public markets.

00:31:32
And now it's like, no, you're, you're already the incumbent.

00:31:34
There's a new wave. I think that was like a big meta

00:31:37
theme of the event. Right.

00:31:39
And I'm just saying we need to be a little bit more precise

00:31:41
with the definitions here, like who are the incumbents?

00:31:44
I don't know. Incumbents: anyone who

00:31:47
thought they knew what their company was pre-ChatGPT,

00:31:51
and everybody you know the the upstarts are people who said oh

00:31:56
man, I have to invent my company with ChatGPT in mind.

00:31:59
I just say I, I basically what I'm seeing is that companies

00:32:02
like Notion and Figma are very much like a, like live players

00:32:06
in this game. And they're they're really

00:32:07
trying to avoid these disruptions from from below.

00:32:11
Our next clip is from the first panel of the day with

00:32:15
Microsoft's Bonnie Croft and Hugging Face's

00:32:18
Thomas Wolf. I, I introduced it saying this

00:32:21
was my smart people panel. It was the question of whether

00:32:24
we'll find novelty in AI and where where true discoveries lie

00:32:30
with the AI systems of the day. Let's hear what they had to say.

00:32:33
So Thomas, you, you, you know, you wrote this great essay,

00:32:35
basically, you know, asking the question of will AI produce a

00:32:41
Nobel Prize? Will it create an Einstein?

00:32:43
Like what? What was your conclusion there?

00:32:46
How optimistic or not are you about the ability of the current

00:32:49
paradigm to produce a true revolutionary idea?

00:32:54
Yeah, I'm not super optimistic on this right now.

00:32:57
Not optimistic. You almost whispered it.

00:33:01
You're like, is this the crowd to admit it in?

00:33:05
For the most part, I'm usually the optimist because I think there's

00:33:07
a lot of great stuff you can do with AI.

00:33:10
And as I mean, probably you got right, you got the idea right

00:33:13
here, which is you have two, two things with AI.

00:33:15
So you have the assistant that's really getting amazing.

00:33:18
And I think, I think ChatGPT models, AlphaFold, these

00:33:22
are amazing assistants and they will definitely lead to some

00:33:25
interesting. Discovery, like we were

00:33:27
discussing yesterday where we started to talk about this panel

00:33:30
discovery. This discovery will still come from this part

00:33:33
of creativity of a human saying I'm going to use AlphaFold to

00:33:36
explore these proteins and that will be the thing.

00:33:39
And then AlphaFold helps you find how it works.

00:33:42
And then you have the other thing which is taking this

00:33:44
trillion parameters model and ask them.

00:33:47
Invent something now invent something new.

00:33:49
We want to travel faster than light.

00:33:51
How should we do it? Right?

00:33:52
If you ask that to ChatGPT nowadays, the answers we get are

00:33:55
pretty bad. Well, I don't really.

00:33:56
I don't really understand his takeaway exactly.

00:33:59
Would it count if a scientist used ChatGPT or ChatGPT 5 to

00:34:05
like win a Nobel Prize? I mean, I think he's saying that

00:34:08
that is clearly going to happen. I think both panelists were

00:34:11
bullish on AI models expanding like the search space for

00:34:15
discovery or being like, we're going to be able to like I'm an

00:34:19
expert in material X and it will consider material Y, but he

00:34:24
doesn't think this sort of like contrarian, like think outside

00:34:28
the box, reinvent core premises that AI is capable of doing

00:34:33
that. I mean, I think he's saying like

00:34:35
if you just ask the big model like right off the shelf, go

00:34:38
invent something for me, it's not very good at that.

00:34:41
I agree with that. That could change, but that is

00:34:43
the current case. I guess that doesn't really

00:34:46
matter. I mean, like, we're gonna have

00:34:48
if, if, even if it's just, you know, humans have to like, tell

00:34:52
it to do something that's adjacent to their own research.

00:34:54
Like that would still be revolutionary.

00:34:57
Yeah, I'm definitely with James here.

00:34:59
I mean, you can't just walk to a random human scientist and be

00:35:02
like, oh, go figure out, you know, the nature of the universe

00:35:04
or whatever. I mean, like they're it's like,

00:35:06
OK, well, give me 50 years and I'm gonna go do research and dig

00:35:09
into things, right? He's saying we're training these

00:35:11
models on the boring rule followers, which are him.

00:35:15
He includes himself in this and the vast majority of us.

00:35:18
But he's saying that the true novel discoveries are created by

00:35:21
independent thinkers, and that we are not training AI to be

00:35:26
that sort of revolutionary. I mean, I just don't believe

00:35:28
this. So obviously none of us are

00:35:30
scientists, but I do think don't.

00:35:32
Believe that there's this persona of.

00:35:36
Of course, I believe that being an independent thinker is

00:35:39
incredibly important to making novel discoveries.

00:35:41
But I think so much of the way novel discoveries come about is,

00:35:44
is not just this personality, but it's, it's searching the

00:35:47
sort of frontiers of, of what we know and, and being aware of,

00:35:52
you know, to go Rumsfeld here, like, what are the known

00:35:54
unknowns? What are the unknown unknowns?

00:35:56
You know, and, and even in that protein folding example, you

00:35:59
know, you could say, OK, well, here's like, you know, 10,000

00:36:03
proteins that, you know, I think might be interesting in terms of

00:36:06
like being an agonist for this receptor.

00:36:09
Like what are the top 100 you think I should look at or

00:36:11
whatever? And, and like, even just

00:36:13
narrowing down that list from 10,000 to 100, you know, using a

00:36:16
genius AI is like a 100x improvement in the efficiency of

00:36:20
this search space where you're trying to discover, you know,

00:36:23
new science, right? So like, I just don't, I don't

00:36:26
really believe that Like, you know, dude, sitting alone, like,

00:36:29
you know, coming up with crazy ideas is, is how most scientific

00:36:33
discovery happens. I think a lot of it's searching

00:36:36
these efficient frontiers that we've discovered so far and, and

00:36:39
trying to continue to push them forward in interesting ways.

00:36:41
And being 100X efficient is filtering ideas or remixing

00:36:45
ideas or sort of creating slight tweaks on what we know so far

00:36:48
is, is so much of I think discovery, which I think AI

00:36:51
would be tremendous at. Yeah.

00:36:53
Connecting ideas across disciplines, right?

00:36:56
Like you know, you, you know a lot about both biology and

00:36:59
chemistry and physics or something, right?

00:37:01
Which I think the models are clearly best in the world at in

00:37:05
terms of depth of knowledge in every scientific discipline.

00:37:08
So I think if we believe that connecting the dots across these

00:37:13
fields is important to discovery like I think we're, you know,

00:37:16
tracking in a great way there for for the models.

00:37:19
I think the defense is, one, like the models are pretty capable

00:37:23
now and like even in the sort of like, OK, we're not

00:37:26
revolutionary thinkers, like find the cool stuff.

00:37:28
I feel like it's a little underwhelming.

00:37:30
I mean, I I still don't think it's like delivering me scoops.

00:37:33
So, but, but putting putting that aside, I, I think we're all

00:37:36
somewhat optimistic that like sort of rule follower, consensus

00:37:41
associative thinker, we'll be able to find a bunch of stuff

00:37:44
that we just haven't put together and there'll be this

00:37:46
period of a lot of value. To sum up my point here, models

00:37:50
are sycophants. Sycophants don't win Nobel

00:37:54
Prizes. You know, I think every great

00:37:57
man story in history is about sort of like rejecting the

00:37:59
consensus, I think. But my point is, yeah, they're

00:38:02
they're sycophants. The models are.

00:38:04
And like academics, you know, need to be free thinkers.

00:38:08
And until we can figure out how to train a model to really

00:38:11
resist core premises, we're reinforcing the status quo.

00:38:15
And so it might discover things that we would have already

00:38:18
concluded, but to really reject human thinking it, it feels like

00:38:22
that's that's not how it's trained.

00:38:23
That's that's how I see the argument.

00:38:25
I don't know if I totally agree with that.

00:38:27
Moving on to what I think will be our last clip of the day,

00:38:31
perhaps the most provocative, we have a great line from the CEO

00:38:35
of Synthesia, Victor Riparbelli, definitely designed

00:38:38
for peak spice. Give it a listen.

00:38:41
I guess, you know, last question for you, Victor, like within a

00:38:43
company similarly, like what percentage, you know, how many

00:38:47
times a day am I going to be watching an AI generated video

00:38:50
that's explaining something to me like 5 years from now?

00:38:53
Is it daily, hourly, you know, every 5 minutes, everything I

00:38:56
do? Like what percentage of

00:38:57
corporate, the corporate experience is going to be

00:39:00
consuming AI generated video, you know, five years down the

00:39:02
road? I think it'll probably mirror

00:39:04
your personal life, right. So, I mean, I don't know, don't

00:39:06
know that well, but when my own like media mix of like what I

00:39:09
consume, yeah, I consume mostly podcasts, YouTube videos.

00:39:12
Sure. TikTok videos, Instagram

00:39:14
reels, Instagram photos. I also do read, but way less

00:39:17
than I did five years ago. And I think that'll be kind of

00:39:19
mirrored in the workplace. It could be hard to imagine

00:39:21
exactly what it's going to look like, but I truly think that

00:39:23
that is clearly what people prefer when no one just tells

00:39:26
them what to do. And so I think that's what we

00:39:28
want to kind of see in the in the in the workplace as well.

00:39:30
And I have this, you know, often very provocative to people idea

00:39:33
that, you know, your kids, kids are not going to be reading and

00:39:35
writing. They're going to be watching and

00:39:36
listening almost exclusively. And I think that is going to

00:39:38
happen. I'm teaching my daughter to read

00:39:40
right now. But I think at some point I

00:39:43
think text will just feel a bit like vinyls or something, right?

00:39:47
Or like, we'll do it sometimes because it's like a fun

00:39:48
nostalgic thing, but for actual information dissemination, it's

00:39:53
going to be like a very low bandwidth compared to those.

00:39:55
Where's Paul Graham? That's that's my first reaction.

00:39:57
Paul Graham has been beating the drum.

00:39:59
Writing is thinking. And so I guess my biggest

00:40:02
defense of writing and why this terrifies me the most is less

00:40:05
that like everybody, I mean, it's good.

00:40:08
Reading is how you learn to write and writing is how we

00:40:11
think. And so if we don't have reading

00:40:12
and writing. Human intelligence as we know it

00:40:14
today collapses. That's not necessarily defense

00:40:17
to the lure of video, which I agree is very strong, but

00:40:21
definitely this vision is to me one of much stupider human

00:40:26
beings. Well.

00:40:27
Do you feel like right now in this podcast, you're not

00:40:30
thinking, you're not learning like I do?

00:40:32
I feel like I have to like, you know, think about what you guys

00:40:34
are saying and concoct my, you know, responses like on the fly.

00:40:39
I just think like, if, if I could constantly talk to other

00:40:43
humans, you know, whether they're actually AI's or not,

00:40:48
like, I don't, I think I'd still be learning, right?

00:40:50
Like, so I guess I kind of understand what he's saying.

00:40:53
Especially because podcasts are an intellectually lazy

00:40:56
medium compared to writing. Writing you really have to

00:40:59
figure out your argument. Like, I could just be I'm most

00:41:01
fun on a podcast being loud, boisterous, disagreeing,

00:41:04
slipping around. Like the All in guys have the

00:41:06
most popular podcasts in the world.

00:41:08
Writing requires you actually, like, organize your thoughts.

00:41:10
Somebody could break down an argument.

00:41:12
No, no, I Yeah. I think podcasting is a much

00:41:16
lazier way to engage with ideas than writing.

00:41:20
What about like a one-on-one tutor?

00:41:22
Just chatting, you know, one-on-one, I mean?

00:41:24
Socratic, you know, if you really push it, but it's just so

00:41:27
easy. Conversation lends itself to

00:41:31
being social and like slipping to the next thing.

00:41:33
And like This is why this whole, you know, there's a whole

00:41:36
Silicon Valley way of, you know, argumentation that's sort of

00:41:39
like we, we're following each other's ideas and like jumping

00:41:41
around. That's that's not how you would

00:41:43
write like a proof. And like, I just think if you

00:41:46
want, you know, argument and reasoning in sort of like proof

00:41:50
form writing is much better than talking.

00:41:53
Yeah, I just think we're talking, we're basically talking

00:41:55
about like the average person growing up in America, like how

00:41:59
will they learn in 12 to 15 years?

00:42:02
Like I definitely think there will be a lot more like

00:42:05
personalized tutoring that basically feels more like

00:42:09
working one-on-one with a human, you know, assistant, right.

00:42:14
And, and so less, less writing. I, I agree with that.

00:42:17
I don't know if it means no writing, but.

00:42:19
But is this fatalist? I, I, I sort of agree that

00:42:22
we are going to be pulled to less writing.

00:42:24
I'm just saying, do you think we'll be Dumber if we write

00:42:26
less? I don't know, man.

00:42:28
I think the percentage of Americans that like writes and

00:42:33
at all like, you know, essentially like rounds to 0

00:42:36
for, I would argue probably 98% of the population like.

00:42:40
And so I would sort of argue like you're already there in

00:42:43
this dystopia if you believe this is a dystopia.

00:42:46
And then on the other hand, yeah, I'm kind of more team

00:42:48
James here, which is I agree, writing is a form of thinking.

00:42:52
And I think having to prepare your thoughts in written format

00:42:55
allows you to distill and improve them.

00:42:57
But I also don't think it is the only way to think.

00:43:01
I mean, like thinking is thinking as well.

00:43:03
I mean, there is a whole, yeah, there's a whole genre of, you

00:43:06
know, shower thoughts, right? I mean, like Elon Musk always

00:43:09
says he comes up with his best rocket ideas while he's

00:43:12
showering and stuff. Like, is he writing them down on

00:43:14
the, you know, the squeegee, like, you know, on the wall of

00:43:17
the shower? I mean, like, it's possible.

00:43:19
I have always thought like, my best thoughts came from like,

00:43:21
exercising or like walking around or something.

00:43:24
You know, it's not writing, right?

00:43:25
And so I'm just not sure. I, A, believe, like, first of all, that

00:43:29
it makes no difference. Basically nobody writes in, you

00:43:32
know, the United States, I would argue, and then, B, I think there

00:43:35
are other ways to think. So I, I sort of believe Victor's

00:43:38
thesis and I, I think it's a little bit less dystopian, but,

00:43:41
you know, you guys are both, I think better writers than I am,

00:43:43
so you might find more value in it. But you write company memos.

00:43:47
You can. Yeah, but I don't.

00:43:48
Necessarily think that's my best thinking.

00:43:51
You know, I'm not like looking back on my company memos and

00:43:53
being like, oh, those are much better thinking than when I have

00:43:55
a good conversation with James. I mean, this is a, this is a

00:43:57
deep sort of question, but I think about a fair bit.

00:44:01
I, I, and I'm a pretty auditory person, like love audiobooks and

00:44:06
everything. But I think in terms of writing,

00:44:09
you can have great ideas in the shower, great ideas on a

00:44:11
podcast, But like, what's unique about writing is the emphasis on

00:44:16
revision. And I think what writing has is

00:44:19
that you have to sort of go back and say, Oh, does that sort of

00:44:22
make sense? And so, you know, I think, you

00:44:25
know, smart people, you know, like the sort of like first

00:44:30
burst of like this great idea and like that is like really fun

00:44:33
on a podcast and like a first draft of writing.

00:44:36
But I think like a tight argument is often in revision

00:44:39
and writing is unique there where you have to sort of face

00:44:43
your ideas and and sort of beat them down and reform them and

00:44:46
make them work. Back to Paul Graham, you know, Y

00:44:50
Combinator has basically a two-phase application process, right?

00:44:53
The first is super writing heavy. You, you answer all these

00:44:58
questions about your company. I think Max and I found that to be

00:45:01
a very helpful process in some ways where it kind of forces you

00:45:05
to like refine your startup idea.

00:45:07
But the actual like interview process is 10 minutes of them

00:45:11
asking you questions in a conversation, right?

00:45:13
And I also found that to be like a, a very thought provoking

00:45:17
experience. So I don't know what whether you

00:45:20
need both or not, but they it seems like we're definitely

00:45:23
gravitating to a world where conversation is gonna be more

00:45:28
prominent and and you'll be able to do that like sort of Socratic

00:45:32
style of. Pushing.

00:45:33
I agree, I think this like Socratic dialogue is definitely

00:45:36
another sort of valid and interrogating line of thought

00:45:39
that is a little different than podcasting. In podcasting,

00:45:42
I think there's often an agreeableness and it's sort of

00:45:45
like we're all going in the same place and like sort of like

00:45:47
improv or something, whereas like with a true like interview,

00:45:51
Socratic dialogue. And that's I think where

00:45:53
teaching can fit in where you're really like challenging that

00:45:58
that can help. The last thing I'll say is like,

00:45:59
you know, you can write with ChatGPT.

00:46:01
So I, I don't know that I necessarily think what's

00:46:04
happening with AI is going to totally hurt writing.

00:46:07
It's making a great writing tool that really, you know,

00:46:10
interrogates you much, much more available.

00:46:13
Yeah, it does seem like people are probably writing more than

00:46:15
they were in the past now with ChatGPT, because you have to

00:46:19
write full sentences, you know. Google was just about

00:46:21
keywords. You end up in these longer

00:46:23
dialogues. So I wouldn't be surprised if

00:46:25
like we're on like a slight uptick in in writing.

00:46:28
But I think the elites are probably in a downward trend.

00:46:31
I don't know. You read old, old like famous

00:46:34
people letters to their friends. I mean, I just feel like they're

00:46:37
like these long sort of reflections and some of these.

00:46:40
Right. But but I mean to come back to

00:46:42
the the overall point like was the quality of their thinking

00:46:45
like superior to ours? I just don't believe that.

00:46:47
Like, I think that if you look at any metric you can measure

00:46:50
over time, you know, there's this concept of like, the Flynn

00:46:52
effect, which is that IQ keeps going up on, you know, on

00:46:55
average over the decades, right? So like, yeah, sure.

00:46:58
I mean, John Adams and, like, Abigail Adams, like, wrote some

00:47:01
amazing letters. But like the, you know, the at

00:47:03
least the average IQ in their society was much lower than

00:47:06
today, even though, you know, the elites were all writing much

00:47:09
more. So I'm just not kidding the

00:47:10
elites. Are way less literary today.

00:47:13
I mean you. Sure, that for sure.

00:47:14
Of course, because that was kind of the only medium for

00:47:18
expression, right? Yeah, and we're going to movies.

00:47:21
Yeah, Yeah, Movies contain a lot of information, but I I remain,

00:47:27
you know. Pessimistic about the world,

00:47:29
basically. You remain. I know. I think AI is

00:47:33
going to democratize a lot of tools.

00:47:35
I think we're still in this period where this is allowing

00:47:38
many more people to get access to education than before.

00:47:42
So I think it'll be net good, but I think it's encouraging

00:47:45
some of us, like super elites, to be lazier than we should be.

00:47:50
Writing is a good way to think. This is a super elite podcast

00:47:54
for the record. Obviously I mean super elites,

00:47:57
yes. I just admit it.

00:48:01
Like nobody wants to say they're elite.

00:48:02
Anymore. Like, I plan to be at Davos this year with my fellow

00:48:06
super elites. Oh my God, I'm not trying to be

00:48:09
indulgent. I'm just trying to be honest

00:48:11
that we don't reflect like, you know, the average reading

00:48:14
writing consumption. Max, you know, you're, you're teaching

00:48:18
your daughter to read. That's sort of an inspirational

00:48:20
story. What is your?

00:48:21
What's your? What's the method to teaching or

00:48:23
display? Oh, I mean, this is this is the

00:48:25
beauty of it is it's an iPad game.

00:48:28
It's an interactive iPad game that's actually made by a

00:48:31
startup up in Seattle. I believe it's called Mentava.

00:48:36
And this is not a. Paid ad. This is not a.

00:48:38
Paid ad. And it costs a fortune. You're somehow

00:48:40
getting it for. Free.

00:48:41
I think we're getting it for free because my daughter is becoming

00:48:43
an influencer, which is also an important skill, they're making

00:48:46
videos of her playing the app. But yeah, I mean, it's awesome

00:48:51
because I think the screen time has become such a meme among

00:48:55
parents that it's sort of objectively bad to have any

00:48:58
screens in your kids life, especially in the Super elite

00:49:01
circles that I roll in. And this app is teaching my

00:49:06
daughter to read at, you know, years ahead of when she

00:49:09
would be learning these things in school.

00:49:10
And so I think it's just evidence to come back to the

00:49:12
more optimistic perspective on software and AI in general that,

00:49:16
you know, interactive experiences, even if they're

00:49:19
sort of purely educational, can be a really positive thing for

00:49:23
for kids on screens and using software.

00:49:26
And so I actually think that makes me get very long like AI

00:49:29
education and sort of software driven human improvement or

00:49:33
augmentation or whatever you want to call it, because she's,

00:49:36
you know, making progress at just a crazy rate compared to

00:49:38
school. So whether or not, you know,

00:49:41
videos or games are the way she learns how to read or learns

00:49:44
math eventually, I think there's clearly much better ways for

00:49:48
kids to learn things and, and in my opinion, a lot of it involves

00:49:51
software and AI. That's a much happier note to

00:49:55
end on. Thank you very much.

00:49:56
Alright. Well, we'll see you guys ahead

00:49:58
of the next Cerebral Valley AI Summit.

00:50:01
It'll be in San Francisco on November 12th.

00:50:04
I'm sure we'll have a lot to catch up on in late October or

00:50:07
November. Alright, we'll see you later.