Cerebral Valley: AI Agents Are Already Here
Newcomer Pod · June 06, 2025 · 00:47:21 · 43.36 MB


The Cerebral Valley AI Summit is right around the corner! To help you navigate the fast-evolving AI landscape ahead of the event, Newcomer Podcast is launching a special four-part series — co-hosted by James Wilsterman and Max Child of Volley. Get insider insights, expert analysis, and fresh perspectives on the trends shaping the future of artificial intelligence.


In this first episode, James, Max, and host Eric Newcomer dive into what it really means to be an AI agent — and explore how agentic AI could reshape the future of work and everyday life. From picking wedding outfits to writing code, they share personal experiences of agents in action and reflect on where this technology is headed next.


So — where is AI headed? In the second half of the episode, the trio revisits market predictions made by AI last November and puts them to the test using fresh data pulled by Deep Research. After a spirited round of forecasting, they return to their 2024 AI Fantasy Drafts to see whose lineup is raising, exiting, and, ultimately, leading in the race for AI dominance.


Our next episode focuses on AI's impact in voice and video, and may include a few more surprise games...


The 2025 Cerebral Valley AI Summit will be held in London on June 25th.


Timestamps:


00:39 — Intro & the scaling wall reversal

06:13 — How we use Claude and Deep Research

08:45 — Agents are here for the web search

14:44 — Coding agents as the breakout tool

24:24 — Update on last year's AI predictions

36:51 — AI Fantasy Draft


00:00:00
This episode is brought to you by Forethought.

00:00:03
Most companies build their customer experience in pieces.

00:00:06
Sales in one system, support in another, and onboarding

00:00:09
somewhere else. Forethought brings it all

00:00:11
together. Forethought is an AI system

00:00:14
made up of advanced agents that handle sales, onboarding,

00:00:17
support, and retention. Each team manages its own

00:00:21
agents. The customer sees one unified

00:00:23
experience. Forethought powers over a

00:00:26
billion interactions every month for brands like Scale AI,

00:00:29
Cohere, Airtable, and Upwork. Learn more at Forethought dot

00:00:34
AI. Hey, it's Eric Newcomer.

00:00:41
Welcome to the Cerebral Valley podcast.

00:00:44
Our occasional detour from the Newcomer podcast. With me are

00:00:50
now three-time hosts Max Child and James Wilsterman, the co-

00:00:55
founders of Volley and the co-hosts of the Cerebral Valley AI

00:00:58
Summit. Welcome to the podcast, guys.

00:01:00
Thank you, glad to be here. Eric, happy to be back, excited

00:01:04
about the conference coming up in London.

00:01:06
Yeah. So we, you know, always sort of

00:01:08
jump into the Cerebral Valley podcast ahead of our Cerebral

00:01:11
Valley AI summits. And we've got one June 25th in

00:01:15
London. People are like, oh, isn't it

00:01:17
hard to do an international conference?

00:01:19
And like, like everything in startups, it's easiest to do

00:01:22
hard things when you underestimate how difficult it

00:01:25
is until you're like, oh, yeah, we're just doing it.

00:01:27
Someone is figuring out taxes. We have an event team there.

00:01:31
I saw we have the ticket prices in pounds on the website so I.

00:01:34
Think exactly. Like someone was on top of that.

00:01:37
So a couple fun things for this episode for the long time

00:01:40
listeners. At the end of the podcast, we

00:01:43
will return to the startup draft, perhaps my favorite part

00:01:48
of the show. We have overtime accumulated

00:01:51
startups in our imaginary portfolio and get to scorekeep

00:01:55
how we're doing working backwards.

00:01:58
Before that, we're going to dig into some of our predictions

00:02:01
from the last series. This is sort of a mid year

00:02:04
snapshot, right guys, in terms of the predictions that we made?

00:02:07
Yeah. We made about 10 predictions

00:02:10
last year in November with the idea that they would come to

00:02:14
fruition within a year. So this is the mid year check

00:02:17
in, no official scorekeeping needed, but we will obviously be

00:02:22
competitive on our mid year check-ins as well.

00:02:25
In the latter half of the show, we'll be doing our, our games.

00:02:28
But I just wanted to start off, you know, we've been doing this.

00:02:32
We launched the first Cerebral Valley in March 2023.

00:02:36
ChatGPT had just come out and everyone was getting their heads

00:02:40
around models. Max and James, if you want to sort of walk me

00:02:42
through, how do you think of sort of the thematic evolution

00:02:46
of AI in that period? And where?

00:02:48
Where are we now as we're programming for Cerebral Valley

00:02:51
London? Last time we chatted, it was

00:02:55
the end of last year, which feels like a long time ago in

00:02:57
the AI world. But we were all having this

00:02:59
discussion of if we were hitting a scaling wall and if the models

00:03:02
would stop getting better. And I would say it feels like

00:03:05
every two weeks since then, something amazing has happened

00:03:08
in AI that's pretty nontrivial and

00:03:11
has been a great leap forward. I actually think the theme of

00:03:13
hitting the scaling wall aged really poorly, which we somewhat

00:03:17
predicted at the time. But you know, maybe we didn't

00:03:19
know how wrong we were. I mean, just some highlights.

00:03:21
We had, you know, GPT o1, the sort of first thinking model.

00:03:25
We had O3, which has brought thinking models into the tool

00:03:29
world. We've had Gemini really take

00:03:31
great leaps forward and kind of become a state-of-the-art model

00:03:33
system. We just had Claude 4 three weeks

00:03:36
ago, which a lot of people consider the greatest model

00:03:37
built so far, maybe the best coding model ever made.

00:03:41
We've had incredible leaps in image generation from Mid-

00:03:44
journey, Gemini, Flux, a bunch of other folks, and then of course

00:03:49
we're going to get to this in episode 2. Video generation has also been

00:03:52
unbelievable. VO3 in particular, which is

00:03:55
Google's new video model, I think, is the first sort of truly

00:03:59
realistic seeming video generation model in my opinion

00:04:02
and kind of a great leap forward there.

00:04:04
You know, Alexandr Wang at our conference in November was

00:04:08
probably the most prominent person arguing for a scaling

00:04:12
wall or a potential issue. I mean, he was talking his own

00:04:16
book. You know, he's in the post

00:04:17
training business. I do think there's an argument

00:04:20
that a lot of the progress, or at least some of the

00:04:23
progress we've seen has been about post training.

00:04:26
The models are a certain level of smart, but then they sort of

00:04:30
talk to each other, they're corrected in certain ways and

00:04:33
that's where they get more intelligence.

00:04:35
That seems to be the improvement we're seeing with O3 and the

00:04:40
Chain of Thought models. Yeah.

00:04:41
I mean, I think, I think Alexandr Wang, I give him a lot

00:04:44
of credit for what he was saying at our conference in November

00:04:47
because he was arguing that maybe we would see a scaling

00:04:51
wall to some degree in pre training, but we wouldn't see

00:04:55
performance levelling off at all. And I think that's exactly

00:04:59
what's happened. And to your point, like a lot of

00:05:00
the gains from thinking models have come from new post training

00:05:05
techniques, reinforcement learning, and a whole new

00:05:08
scaling paradigm I guess. Yeah, I mean, in the end, the

00:05:12
models have gotten really, really effing good.

00:05:14
And whether it's pre training or post training, they got good.

00:05:16
Yeah. Yeah.

00:05:17
I feel like the vibe was a little more like maybe this is

00:05:20
the end of AI progress, not just like pre training is over.

00:05:22
And to your point, there was a distinction drawn by some of the

00:05:25
people at the conference, but I think there was a maybe a more

00:05:27
negative tenor going forward from a lot of folks on stage.

00:05:30
And instead, I think it's been maybe the craziest 6 months in

00:05:33
the history of AI. I don't know.

00:05:34
It's felt like that to me. The interesting thing is, I

00:05:37
guess like in January, we had that DeepSeek moment, right?

00:05:40
We haven't talked about that. I forgot.

00:05:42
About DeepSeek. Too.

00:05:42
Yeah, yeah. Wow.

00:05:44
So to some degree that was maybe echoing some of these scaling

00:05:48
wall concerns because if you can get such high performance out

00:05:54
of, you know, open source models that are effectively competing

00:05:58
with the frontier U.S. companies, like maybe there is

00:06:01
an argument there. But then I guess, you know,

00:06:05
that's kind of died down a bit as we've seen both Anthropic and

00:06:09
OpenAI come out with superior models to some degree.

00:06:12
What models are you guys using? Claude or Gemini 2.5 for

00:06:16
coding and then for my own personal sort of research and

00:06:19
note taking and stuff. Probably O3.

00:06:21
That's the same for me. I've been using a lot of deep

00:06:23
research through ChatGPT. I think that's my favorite

00:06:27
product maybe of the last few months.

00:06:29
I don't have time. I use O3 a lot.

00:06:32
I use deep research some. I don't want to blow anyone up

00:06:34
here, but I had Deep Research write

00:06:37
a whole, like, political consulting memo for somebody I

00:06:40
was trying to get to run for office.

00:06:42
You know, it's like it's amazing.

00:06:44
I mean, there is a world where I would have gone and paid a

00:06:47
consultant to be like draw out for me, like when races will be

00:06:50
available and when, when they could run.

00:06:51
And it's like you just like, oh, in the morning you're having a

00:06:54
manic fit and you're like, oh, let's, let's see what ChatGPT

00:06:57
can do. It's insane.

00:06:59
It's crazy. This is a very nerdy, lame use

00:07:02
case, but I like to buy cheap wine that is still good.

00:07:06
And there was a secret wine from a local provider and they said

00:07:10
it's this anonymous secret wine that we're selling for 1/5 of

00:07:14
market price. But you know, if you bought it

00:07:16
at market price, it would be a $250, $300 wine.

00:07:19
And they were like, it comes from these amazing vineyards and

00:07:22
the West side of Napa and the foothills, blah, blah, blah,

00:07:24
blah, blah. And I just pasted the

00:07:26
description into deep research and I was like, figure out what

00:07:28
the secret wine is. And 20 minutes later it comes back.

00:07:32
It was like, obviously this is like this BV Latour 2022 cab or

00:07:37
whatever, and I was like, what? I've been super impressed with

00:07:40
ChatGPT multimodal for shopping. The three of us are all going to

00:07:45
the same wedding in France in a month or so, and I don't know if

00:07:49
you guys have looked at the required attire, but they're

00:07:53
half of it. Oh my God, my wife is very

00:07:55
concerned. Yeah, this hot tip is paste that

00:07:58
whole thing into ChatGPT, ask it to shop for you.

00:08:00
It'll be great. My wife and I.

00:08:03
This was a good idea. I mean, it's not perfect, I

00:08:05
won't lie. Like how like maybe one out of

00:08:07
10 suggestions are like completely off base, but the

00:08:11
shopping integration, it just, it's kind of showing where

00:08:14
things are headed like, you know?

00:08:15
What do you mean integration? Well, because it's actually like

00:08:19
searching the web, you know, doing an agentic workflow of

00:08:23
looking for these items and then it's pulling that information

00:08:26
back into ChatGPT inline in the chat.

00:08:29
There's links that link out, right?

00:08:31
Eventually I'm sure you'll be able to just like add that to

00:08:34
your cart within ChatGPT. It's a full shopping experience.

00:08:37
It's not just researching. Yeah, interesting.

00:08:40
He used the word. He used the word agentic.

00:08:43
Yeah. OK.

00:08:44
I mean, yeah, we're going to talk a lot, you know, over the

00:08:46
next couple episodes in terms of how AI can get put to use.

00:08:51
The topic I really wanted to get into this week before reviewing

00:08:55
our games and our scorekeeping is agents.

00:08:58
Like I feel like agents have been at once sort of the

00:09:02
buzziest thing in the backdrop of a couple events, but I

00:09:06
never quite hear what one is. And so I guess the first direct

00:09:10
question I want to ask is, is deep research an agent?

00:09:15
Is O3 an agent? Like what, what?

00:09:17
What is an agent these days if it's just delivering you a

00:09:21
report? I buy the agent definition that

00:09:25
an agent is, you know, an AI tool that can actually do stuff

00:09:29
for you, right? That can go through some sort of

00:09:31
series of steps involving actions and you know, quote-

00:09:36
unquote tool use is sort of one of the popular phrases these days

00:09:39
of, you know, using different tools.

00:09:42
Currently the only tools these agents can use really are

00:09:45
essentially like web search and you know, maybe pulling shopping

00:09:49
links and showing pictures, per James's discussion.
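
To make that definition concrete, here is a minimal sketch of the loop being described, with a hypothetical web_search tool and a placeholder call_model function standing in for any model API:

```python
# A minimal sketch of the "series of steps involving tool use" idea:
# the model repeatedly picks an action until it can answer. Both
# call_model() and web_search() are stand-ins, not any vendor's API.

def web_search(query: str) -> str:
    """Hypothetical tool: return search-result text for a query."""
    raise NotImplementedError("wire this to a real search backend")

TOOLS = {"web_search": web_search}

def call_model(messages: list) -> dict:
    """Stand-in for an LLM call. Expected to return either
    {"tool": "web_search", "input": "..."} or {"answer": "..."}."""
    raise NotImplementedError("wire this to a real model API")

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(messages)
        if "answer" in decision:            # model decided it's done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]      # model chose an action
        result = tool(decision["input"])    # actually do the thing
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."
```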

00:09:53
But I still fundamentally think like there is a big difference

00:09:55
between, you know, ChatGPT of 6 to 12 months ago where right,

00:10:00
you know, you ask it a question, it gives you an answer.

00:10:01
Essentially it's, you know, a text-in, text-out engine, right?

00:10:05
You put text in one side, it gives you text out the other

00:10:07
side, but it doesn't go do stuff that's.

00:10:09
part of that kind of web. It's like, yeah, yeah, yeah, exactly

00:10:12
right. Even the deep research that we

00:10:14
were talking about, I mean, I used to be a management

00:10:16
consultant for two years, 2 horrible years, and it can do

00:10:21
much better versions of what I did as a management consultant,

00:10:26
you know, in a matter of minutes, it could do 5 days of

00:10:29
world class management consultant level research on a

00:10:32
topic, right? And I guess if you don't think

00:10:35
that's agentic, I think you're a little crazy, like you're too

00:10:40
high, right? It's clearly running around.

00:10:42
I've seen some people use, like, time. It's like, if it takes

00:10:46
time, how long it takes, you know, if it's all going and

00:10:48
doing things and interacting with the world and coming back,

00:10:51
I want to throw down a gauntlet. To me,

00:10:55
and this could come soon, we'll, we'll be in the world of

00:10:58
agents once people are letting them run wild with their own

00:11:02
credit cards. Once agents are spending money without a

00:11:06
human check in, that's when we've got sort of real agents.

00:11:10
What do you think? So does it

00:11:11
not count in your mind if, like, my agent finds me a dope pair of

00:11:15
shoes and, you know, text me, hey, can I buy this?

00:11:18
And I'm like, yeah, go for it. No, it needs to transact.

00:11:20
It needs, it needs to do it without you.

00:11:22
Oh, truly? It'll be like this.

00:11:25
my great thing. That's like, that's an

00:11:28
agent. I mean, I understand like you

00:11:30
know, you, you book flights and an agent would have like asked

00:11:33
you before, by that definition. I feel like if it

00:11:36
does a restaurant reservation, maybe no money changes hands.

00:11:38
That's still an agent. I mean, I just think this

00:11:40
is, it's definitely an agent. I'm just saying that's like a

00:11:43
great employee solves the problem, right?

00:11:46
It's not like, oh, they come back to you and want all this

00:11:48
feedback. It's like do the thing like pull

00:11:50
the trigger. Like, once we can trust it to do that.

00:11:53
I'm just saying that would be a landmark moment that I don't

00:11:56
think is so far away. Or do you think that is far

00:11:58
away? No, I, I honestly think that if,

00:12:01
if you had the capability to do that already in ChatGPT, like

00:12:05
James did his clothing research, probably some people would be

00:12:07
like, yeah, fine, go do it. Like don't buy too many things

00:12:09
without asking me. But like, I think people would

00:12:11
already be down. I honestly think the company's

00:12:13
reluctance is probably just mostly like they don't want it

00:12:16
to go off the rails and spend, you know, thousands of dollars

00:12:18
of people's money or whatever. But it's it's possible already

00:12:21
in my opinion. I don't know if it would work

00:12:23
yet. Like, and I think a lot of this

00:12:25
ties into context, how much ChatGPT knows about me.

00:12:29
I mean it it's obviously, you know, starting to build that

00:12:32
memory, but I don't think it knows enough without me

00:12:34
prompting it or, you know, having a very targeted list of

00:12:38
shopping ideas for this wedding to go just start buying me

00:12:41
stuff. I that'd be super interesting

00:12:43
once we get there, but I don't think it's ready for that yet.

00:12:47
Max just was sort of getting at this.

00:12:49
I mean, stop limits I think are key, right?

00:12:51
I mean, there's a degree to which algos, you know, I was

00:12:54
talking to a former banker about this the other day.

00:12:55
Like, you know, it's not crazy that we would let a machine, a

00:12:59
computer make payment decisions on its own.

00:13:02
Traders do it all the time. You just create some limits and

00:13:05
checks and hopefully have people hovering.

00:13:07
But like the algorithms have to move before a person could

00:13:10
react. And so that's happening there.

00:13:13
And so you can see with LLMs, it's like, OK, you build up a

00:13:17
trust up to the, you know, $500 limit.

00:13:19
And you're like, you can, yeah, it'll be interesting.
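
A rough sketch of that stop-limit idea, assuming a hypothetical purchase flow: buys under the cap go through on their own, anything bigger gets kicked back to the human.

```python
# Sketch of a stop-limit guardrail for an agent spending money:
# purchases under the per-item cap and daily budget auto-approve;
# anything bigger requires a human check-in. All names are hypothetical.

AUTO_APPROVE_LIMIT = 500.00   # per-purchase cap, per the $500 example
DAILY_BUDGET = 1000.00        # assumed running cap for the day

spent_today = 0.0

def request_purchase(item: str, price: float, ask_human) -> bool:
    """Attempt a purchase; return True if it goes through."""
    global spent_today
    needs_checkin = (price > AUTO_APPROVE_LIMIT
                     or spent_today + price > DAILY_BUDGET)
    if needs_checkin and not ask_human(f"Approve ${price:.2f} for {item}?"):
        return False              # human declined, nothing spent
    spent_today += price          # record spend and transact
    # place_order(item, price)   <- hypothetical commerce API call
    return True

# Example: auto-approves a $60 buy, escalates a $900 one to a human.
print(request_purchase("dress shoes", 60.00, ask_human=lambda q: False))
print(request_purchase("wedding suit", 900.00, ask_human=lambda q: False))
```

The interesting dial is how high that auto-approve limit can creep as trust builds, which is exactly the trader analogy above.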

00:13:22
I mean, how many chargebacks? Are chargebacks going to go up with

00:13:25
everybody releasing power to agents?

00:13:27
And you're like, oh man, we need to do more refunds when the

00:13:29
agents do something crazy. I mean, I think that kind of

00:13:33
gets into some of this like quote-unquote agentic web

00:13:36
discussion, which is like, can we redesign the web in a way

00:13:38
that enables more of this behavior?

00:13:39
Because I do actually think if you're a clothing retailer,

00:13:43
there probably is some kind of business model in which you let

00:13:47
people buy way too many clothes and then they send most of them

00:13:50
back or they cancel most of them before they ship or whatever.

00:13:52
Right? Like, which obviously is along

00:13:54
the lines of like a Stitch Fix or Trunk Club or some of these

00:13:57
other companies which, you know, weren't wildly successful

00:13:59
because I think the return fees are pretty punitive or, you

00:14:03
know, and you end up with a lot of fraud and, you know, damaged

00:14:05
clothes and stuff like that. But I do think there's an

00:14:07
interesting question of like, could you build some sort of

00:14:10
consumer commerce website where people's agents can like buy way

00:14:13
too much stuff, like basically and then cancel it or return it

00:14:17
or limit it or some in some way because people buying too much

00:14:20
stuff by accident. And it's it's probably good if

00:14:22
you're selling things like even if you have to find a way to let

00:14:25
them cancel it or return it. My first reaction to what you

00:14:27
were saying is, oh, this is like a workaround to allow agents to

00:14:31
spend when we're really going to unwind it, but it makes the

00:14:34
default spending versus not. It's like, oh, the sugar is

00:14:38
good, which is really good if

00:14:40
you're selling stuff. Yeah, yeah, yeah, yeah.

00:14:42
James, what's the agent use case you're most excited about?

00:14:45
I can't ignore coding. We haven't talked about coding,

00:14:47
which I think is like the most actually valuable, the

00:14:51
real one. That's why we want to talk about

00:14:52
it. It's boring. It's happening.

00:14:54
And that's something that actually works?

00:14:56
Oh yeah, yeah, yeah, OK. AI podcasts are only about the

00:15:00
future, you know. Yeah, I think, well, I just

00:15:04
think that there's a lot to unpack about the future

00:15:07
of coding. I mean, I am a CTO, I code when I

00:15:11
can and this agentic world has dramatically changed what I can

00:15:16
do in terms of prototyping and participating in the coding at

00:15:21
Volley and learning faster, right?

00:15:24
I mean, you just learn so much faster about different

00:15:27
technologies and tech stacks. And yeah, I think it's like if

00:15:31
you're not an engineer, you maybe understand this a little

00:15:33
bit or you've played around with things that are a little bit

00:15:36
more no-code, like Lovable.

00:15:37
Or Figma. Figma, right?

00:15:39
I mean, I think both Lovable and Figma are

00:15:41
speaking at our event and they are now competitors in the no

00:15:45
code world. Yeah, both non-coders and

00:15:48
coders, if you've dabbled with any of this, you are, you know,

00:15:52
receiving the future of like what could happen to all types

00:15:55
of computer work, white collar work, if you want to call it,

00:15:59
but it's happening first in the engineering space, and it's

00:16:04
pretty remarkable. I think Nat Friedman had a

00:16:06
really good analogy that I can't get out of my head about

00:16:09
agentic coding. And he talks about the idea that

00:16:11
you have, you know, you have this room full of interns who

00:16:16
are all like junior engineers basically, right?

00:16:18
And you assign each of them a task and they go off and try to

00:16:22
do it. And then when they get stuck,

00:16:24
they raise their hand and they say, hey, I need help here,

00:16:27
come, come help me. I'm, I'm stuck with this bug or

00:16:29
or this this, you know, issue the, the app's not working,

00:16:31
whatever. And he podcasted, I think about

00:16:35
six months ago and he was like, right now basically these

00:16:37
interns raise their hands like every 5 minutes in like human

00:16:40
time. Like so they do like 5 minutes

00:16:42
of work and they raise their hand and you've got to go help

00:16:43
them. And then they raise their hand 5

00:16:44
minutes later and you know, over and over and over again, he's

00:16:46
like, so you can't really have that many of these, you

00:16:48
know, imaginary interns going because you just end up

00:16:50
running around fixing their problems all the time.

00:16:52
You know, maybe only one really like because you're just

00:16:54
constantly having to give them feedback.

00:16:56
I do think like the frontier for like how much quote-unquote

00:17:00
human work these agentic coding tools can do now without you

00:17:04
having to run over and help them when they raise their hand is

00:17:06
like probably somewhere in the like 15 to 30 minute range now

00:17:10
where like, you know, they obviously come back to you very

00:17:13
quickly. Like they iterate through their

00:17:14
work very fast because they can type at a superhuman speed,

00:17:17
right? They can put out hundreds of

00:17:18
lines of code. But the sort of amount of human

00:17:21
level work I would say in my testing that they get stuck is

00:17:24
probably like, you know, some of that 15 to 30 minute range.

00:17:26
I'm confused, like are you saying 15 to 30 minutes of like

00:17:29
what it would take for a human intern?

00:17:31
What it would take like a good human engineer like, you know,

00:17:34
Yeah, I got it. You know, mid to senior software

00:17:36
engineer just just hammering away at code, right?

00:17:38
Yeah. Like how much code do you get

00:17:39
out between them getting stuck? Right.

00:17:42
Like, yeah. I would say yeah, maybe half an

00:17:43
hour of like human, human work, but.

00:17:45
You get it. You get it in 3 minutes or two

00:17:47
minutes. You get it in a minute, you know, or 30

00:17:49
seconds usually, which which is which is mind boggling.

00:17:51
But like the dream is you get, you know, 4 hours of human

00:17:55
work or 8 hours of human work or eventually, you know, weeks or

00:17:58
just starts fixing itself and you never go in there, right?

00:18:01
I mean, you know, I spent a fair bit of time vibe coding a month

00:18:05
or so ago. I do think there are some self

00:18:09
driving car aspects in the sense that like the last 5% or 10% of

00:18:15
a problem is very important. And like it's like, oh, it looks

00:18:19
close. It looks close, but it's like,

00:18:21
sure, it's really good when I'm like copying the Substack

00:18:25
design, dropping it in Lovable, like, oh, rebuild that.

00:18:28
But like at some point I feel like you've just been working

00:18:31
long enough and you have some minor tweak you want to make and

00:18:35
it just starts getting stuck and has no idea.

00:18:37
I mean, partially that's why I'm trying to do no code with no coding

00:18:41
knowledge. But I do think these programs

00:18:44
are really good at being enticing in the beginning, and

00:18:47
then they sort of get overwhelmed as the project

00:18:50
starts to expand. Maybe this project that you were

00:18:53
doing, you won't be able to ship because of that last 5%

00:18:56
problem or whatever. The fact that you even started

00:18:59
it is kind of amazing that you did that.

00:19:02
Like you wouldn't have been able to even start the project three

00:19:05
months ago or six months ago or something.

00:19:06
Sure. Yeah.

00:19:07
So I don't know, it's just kind of interesting, like where that

00:19:10
heads is. You know, I think eventually you

00:19:12
get to the point where it's a finished product and then

00:19:15
there's just like way more coding projects happening in the

00:19:17
world, right? And then, you know, I think

00:19:19
within companies that already have an engineering workforce

00:19:22
like it actually can get across that that last mile.

00:19:25
I think another important theme that always comes up is like if

00:19:30
we froze this moment in time, how much value is there that

00:19:34
people are still understanding versus depending on continued

00:19:40
progress in the models that's that's revolutionary.

00:19:42
Where are you guys on how much harvesting could be done with

00:19:46
what would already exist versus what we're waiting on?

00:19:50
I would say a lot like a lot of harvested progress.

00:19:54
Like I can't even think like trillions.

00:19:56
I don't know. I haven't decided if it's 10s of

00:19:58
trillions, but let's say trillions.

00:20:00
What's the measure? In value.

00:20:01
Oh, like GDP. GDP value of just harvesting the

00:20:05
stuff that we've done so far. But it's amazing you still see

00:20:09
people who are like, skeptic. I mean, I don't know.

00:20:12
I just don't want to be, you know, deluding myself here.

00:20:15
Yeah. I mean, I think like

00:20:16
fundamentally we've all had the experience, right?

00:20:19
Where something that used to take 8 hours now takes less than

00:20:25
a minute, right? I mean, like we've all had that

00:20:27
experience, right? And so if you just do the math

00:20:29
on that, you say, OK, I took something that used to be a 500

00:20:31
minute problem and now it's a one minute problem.

00:20:34
Unless you believe that thing had no value whatsoever, how

00:20:38
could you not see a 500X in knowledge work in front of you

00:20:41
and say, hey, maybe we haven't harvested all the value on this

00:20:45
500X improvement, on this thing I just did.

00:20:48
How could you not believe that that's going to have insane

00:20:50
ramifications throughout our world and and insane amounts of

00:20:53
value that eventually can be created.

00:20:55
You know, so I don't know. I just to me, I just look at

00:20:58
that, you know, 2 orders of magnitude differential we're

00:21:00
already seeing on labor and think it's got to be huge.

00:21:04
Here's a question for you guys. Like, I have a friend who's in

00:21:06
PE and he, you know, doesn't really, he's not really in the

00:21:09
tech world that much, but he, you know, tried ChatGPT, didn't

00:21:13
really, you know, find it super valuable.

00:21:15
He had never tried O3. And I just showed him what was

00:21:19
possible and he, like, you know, kind of changed his whole

00:21:21
opinion of what types of companies he should buy in the

00:21:24
future. Wow.

00:21:26
Well, I do think that's a real danger that there are, there are

00:21:29
things that I'm changing about my life and medical decisions

00:21:33
and lots of stuff off O3. It's so persuasive that we'll

00:21:37
never really be able to back out the psychological effect.

00:21:39
And, you know, it's all right. It's like, is it having an

00:21:41
effect? It's like, yeah, it's deeply

00:21:43
affecting big decisions in my life.

00:21:46
You know, just because it's like the thought partner, it's there

00:21:49
just like, you know, if you were to have the friend that's there

00:21:51
while you're soundboarding an idea, like that's going to be

00:21:54
dramatic. And I think just the fact that

00:21:56
it's in the loop. Wouldn't you say overall that

00:21:58
it's like a person giving good advice?

00:21:59
That's why I keep it in the loop.

00:22:01
Yeah, right. It's not like, but it will

00:22:03
be hard to know if it hallucinated.

00:22:06
I don't know. It's like, is it gonna?

00:22:08
Yeah, hopefully in a couple of years it'll chase after me, and

00:22:10
ChatGPT will be like,

00:22:13
I told you a couple of key things that I think you took to

00:22:15
heart, and now that I'm a little smarter, I'm gonna go

00:22:18
back on that. Yeah, we'll see.

00:22:20
Having lived through the iPhone experience of of having the

00:22:23
first iPhone in 2007 and having to spend the next four years

00:22:27
explaining to my friends why they had to get an iPhone and

00:22:30
why they. Should probably.

00:22:31
Consider getting an iPhone. I mean James, you had a

00:22:34
BlackBerry until what, 2011 or something like that?

00:22:36
Like 2010, 2011. James, I'm sorry, I was the

00:22:42
laggard too. I had a BlackBerry for a long

00:22:44
time. Yeah, and people would be like,

00:22:45
well, Max, what? Why?

00:22:47
Why do you, I need an iPhone. What do you do with an iPhone?

00:22:49
And I'm like, well, you can like browse the Internet.

00:22:51
And they're like, well, you know, I browse the Internet on

00:22:53
my phone. It's fine.

00:22:54
And I'd be like, well, these apps are pretty cool.

00:22:56
And they'd be like, what, like the app that makes it

00:22:58
look like you're drinking a beer and I'm

00:22:59
like, well, I'm not using that every day. But just

00:23:04
the last point I have to make on this BlackBerry thing.

00:23:05
I'm sorry, BlackBerry, which obviously is a dead company

00:23:09
today. No one you know has a

00:23:10
BlackBerry, right? Their sales continued to grow for

00:23:14
four years after the iPhone was released right?

00:23:17
So '07, '08, '09, '10, '11, BlackBerry sales continued to increase even

00:23:21
though we all today look back and even in the movie about

00:23:25
BlackBerry, they pretended that the day the iPhone came out it

00:23:28
was like, RIP BlackBerry. Sorry guys, you missed the

00:23:31
future. And I'm just pointing out that

00:23:34
in the real world, sales continued to grow for the next 4

00:23:37
years. And so I'm just saying that when

00:23:38
you're in one of these sort of Road Runner, Wile E. Coyote chases

00:23:42
the Roadrunner over the ledge moments where you take a second

00:23:45
before you look down and realize that gravity is pulling you

00:23:47
there. I think similarly with AI, we're

00:23:50
all like, holy crap, we just went off the ledge.

00:23:52
Like shit is about to get really dramatically different here.

00:23:56
And you can still stand there in the air for a year, two years or

00:24:00
three years or four years before gravity really hits, right.

00:24:03
And and I think to your initial question, like we are in that

00:24:07
period where even if nothing changed, like we're already off

00:24:11
the ledge, you know, stuff, stuff is going to be

00:24:14
dramatically different. No matter what happens from

00:24:16
here, which I know we all believe, you know, the models

00:24:18
are going to continue to get better.

00:24:20
So the pace of acceleration is going to be even higher.

00:24:22
It's a perfect endnote. Let's move the conversation to

00:24:27
our predictions from six months ago.

00:24:30
Basically, we're not going to get too in the weeds.

00:24:33
It was a fun discussion. James, do you want to take us

00:24:37
through them, What the question was, where we each landed and

00:24:40
then we'll give a quick reaction and then go to the next one.

00:24:43
Sure. Sounds good.

00:24:45
Just to clarify, back in November we asked Claude and

00:24:49
ChatGPT to generate these predictions for us including

00:24:53
providing probability estimates of how likely they were.

00:24:57
And then so our job was to take the over or the under on each

00:24:59
prediction. The first one was OpenAI ships

00:25:02
GPT 5 with a greater than 10 trillion parameter model. Max was

00:25:07
the over, Eric under and I took the over.

00:25:10
Thoughts. Where is it?

00:25:12
GPT 5 here. Yeah, so far.

00:25:14
So far I'm correct. Six months.

00:25:16
I mean, I think we were hung up on whether they'd use the name, or

00:25:20
maybe there was a high chance that they would abandon the

00:25:22
name. It seems like they're going to

00:25:24
do it, honestly. Right now the vibes are that they're

00:25:27
going to kill the O series and just make 5 the overall model.

00:25:32
I feel super good about the name and it's shipping this year.

00:25:35
At this point. I think the only thing that

00:25:37
could hit the under would be the 10 trillion parameters because

00:25:40
didn't Brad, like, the COO, literally say they were going to

00:25:42
ship GPT 5 this year and it was going to be called GPT 5?

00:25:45
Like I'm pretty sure he said that like last.

00:25:47
Seems like it. So I would still smash the over

00:25:50
the 10 trillion parameter thing is, I guess, the one.

00:25:52
Certainly I would buy the over now.

00:25:55
I might take the under because of that 10 trillion like I oh.

00:25:59
I'm sticking to my bet. Whatever, the bets are locked

00:26:01
in. Yeah, I can't be wrong.

00:26:03
I was trying to figure out like so Claude 4 Opus just launched

00:26:06
and there's no comment on the parameters that I can find, but

00:26:11
the best estimate I could get from Claude was 2 to 3 trillion,

00:26:15
so I don't know if that's right. Nice.

00:26:18
I mean a lot of the improvements seem to be post training and

00:26:22
chain of thought. And if they're

00:26:24
going to merge, you know, the O series with GPT 5.

00:26:28
It's partially because they think they need part of the

00:26:30
improvement off these reasoning models, and so maybe it's a sign

00:26:33
that the parameters aren't getting as big as we thought.

00:26:36
Also, it seems like adding that many parameters, you know,

00:26:39
creates huge issues with inference cost and just serving

00:26:42
them. It's expensive.

00:26:44
Yeah. Anyway, the next one was three

00:26:48
or more countries enact national rules regarding AI medical

00:26:52
diagnosis. Max, you took the under, Eric

00:26:56
took the over, and I took the over.

00:26:58
We had it at a 70% probability. You guys are crazy.

00:27:02
You guys are crazy. There's no way you think in the

00:27:04
next 6 months three countries are gonna enact regulations.

00:27:08
Has that happened so far, medical diagnosis?

00:27:10
Here's what I had. Oh, you.

00:27:14
Found something chachi BT is saying.

00:27:16
Yes, EU AI Act. Exactly, Yeah.

00:27:19
UK's MHRA, US, two regs online. ChatGPT thinks we're like in good

00:27:24
shape. All right.

00:27:25
Well you took 70% so never forget that I get 2 to one.

00:27:28
Yeah. I don't know how you're going to

00:27:29
do the overall math at the end, but I'm sure whatever.

00:27:31
Onward and upward. All right, Tesla Full Self-

00:27:34
Driving approved for unsupervised driving in one or

00:27:37
more US state, 40% probability. Max had the over, Eric had the

00:27:43
over, and I had the over. I think we're in good shape

00:27:46
there, right? Isn't Texas gonna happen like

00:27:49
yeah in June, supposedly. Like, yeah, in two weeks,

00:27:52
correct? But it does seem like right?

00:27:54
ChatGPT says not yet. Pilot cars on private roads in Texas.

00:27:57
No public permit. It's coming to Austin in June.

00:28:00
Now. I think this gets into the

00:28:02
letter of the prediction, though.

00:28:04
Is it a state law or city law, whatever.

00:28:08
OK, we'll come back to that in a few months.

00:28:11
AI will write the copy for greater than 50% of a major news

00:28:16
outlet's articles, 30% chance. Max had the over, Eric had the over,

00:28:22
and I took the under. I just feel like my under is

00:28:27
great here. I mean, nobody's.

00:28:28
Nobody's claiming this, nobody's AI-first

00:28:30
newsroom. Everybody is like, what's the?

00:28:32
Eric's gonna do this by the end of the year.

00:28:35
Just to go build a new media company.

00:28:37
I won't be major though. I don't think I'll qualify.

00:28:40
You could be major by the end of the year.

00:28:41
Why not? Apparently, the Chicago Sun

00:28:43
Times inadvertently ran an AI generated book list filled with

00:28:47
errors, sparking backlash. Clearly not paying for O3 there

00:28:50
I think. I just think we're gonna see so

00:28:52
many of these things, like hallucinations, like, aren't we

00:28:54
already seeing this from the Trump administration?

00:28:56
Like just random things that don't make any sense.

00:29:00
I certainly don't want, you know, AI to replace the

00:29:03
newsrooms. I'm just expecting a lot of like

00:29:05
stories over the next year about like academic papers and news

00:29:09
articles just having obvious hallucinations, right?

00:29:11
Because people are going to be using.

00:29:13
But shouldn't we be more disturbed that the Trump

00:29:15
administration is like, I feel like if Democrats were in charge

00:29:18
and they were releasing government reports that were

00:29:21
clearly written by AI, yeah, it would be like the biggest

00:29:23
cultural story of the moment. It's like, well, the government's

00:29:26
already phoning it in with Trump, it's like, oh, at least

00:29:29
they're using it. That is literally what I was going to

00:29:31
say. I was like, I would honestly

00:29:33
prefer the AI to be doing this than the miscellaneous Trump

00:29:37
administration employee who, like, knows nothing about

00:29:40
anything. I just want Sam Altman to give

00:29:42
every administration staffer free O3 so that they have O3

00:29:46
write reasonably intelligent fake

00:29:47
reports instead of real ones. I do think this core take that

00:29:51
the Internet split everybody's view and this is part of why

00:29:55
originally Marc Andreessen was so pro crypto and anti AI.

00:29:59
I do think AI is going to bring us potentially closer together

00:30:02
where people are asking Grok, like, is this bullshit true or

00:30:05
false? I mean, it's possible people

00:30:07
build crazier models, but for now, while the models are sort

00:30:11
of generally in agreement with each other about how the world

00:30:14
works, it could be a major force for cultural consensus over the

00:30:19
next couple decades. It's basically the network TV of

00:30:22
the Internet essentially, right? We're all watching the three

00:30:25
major channels and they all broadcast kind of like exactly,

00:30:29
but not family friendly content. And your take is that Marc

00:30:32
Andreessen? Marc Andreessen doesn't want to

00:30:35
bring us together. He wants weird, crazy shit.

00:30:37
Yeah, that's what crypto is all about.

00:30:39
Like, do whatever you want, like, and AI is a source of

00:30:42
conformity. But then Andreessen Horowitz

00:30:44
basically saw what was winning, and it was like, we need to

00:30:47
go where the money is. And, you know, that's how I see

00:30:49
the story playing out. I mean, they were resistant at

00:30:52
first. Khosla led the OpenAI venture

00:30:55
round. Anyway, keep going.

00:30:56
Didn't Peter Thiel say like AI was communist or something?

00:31:00
Yeah. Yeah, communist and crypto was

00:31:02
libertarian or whatever. Right.

00:31:04
OK, Fully AI scripted and AI rendered feature film gets a

00:31:08
theatrical release 25% chance. Max with the under, Eric with

00:31:13
the over and James with the under.

00:31:15
This was Eric using his insider knowledge of pay to play

00:31:19
tactics within the movie industry to try to grab us on

00:31:23
this, that someone's going to pay a theater chain to take their

00:31:25
movie even if it sucks. Basically, make sense?

00:31:28
I mean, we still have time. I had to double check this that

00:31:30
it wasn't a hallucination, but according to ChatGPT there is a

00:31:35
fully AI generated movie releasing soon, Pirate Queen

00:31:40
Zheng Yi Sao, billed as the world's first fully AI generated

00:31:44
feature film. Exactly.

00:31:45
That's going to get a festival run.

00:31:46
I gotta watch it. All right, moving on, more than

00:31:49
three major smartphone OEMs ship phones with AI co-processors

00:31:55
running 7 billion parameter plus models on device. 7 billion is

00:32:01
pretty high. The rumor mill on Apple is

00:32:03
saying they're going to be able to run 3 to 4 billion.

00:32:06
So even if you believe in James's claim that the chip

00:32:09
they've had in there for 12 years is an AI chip, it still

00:32:12
might not be able to do a 7 billion parameter model.

00:32:14
So we were specifying Apple, Samsung, Google, Xiaomi,

00:32:18
dedicated AI co-processors that run 7 billion parameter LLMs

00:32:23
fully locally. We had the probability at 75%. Two

00:32:29
unders, Max and Eric, and I took the over. And yeah, I was

00:32:33
counting the current technology as capable of running

00:32:38
those types of models. But yeah, to your point, Max, I

00:32:40
think 7 billion might be a bit high, right?

00:32:44
3 to 4 is what the rumor mill's saying for this year, but we'll

00:32:47
know in a few more months I guess.

00:32:48
Next. Anthropic releases a model

00:32:51
scoring over 90% on U Bar, the unified benchmark for AI

00:32:57
reasoning, which does not exist according to our own research

00:33:02
during the podcast recording last year. A complete hallucinated

00:33:07
prediction from Claude. How are you feeling about that

00:33:10
one guys? Great.

00:33:13
I should, yeah. Certainly raises red flags,

00:33:16
yeah. Moving on to #8: Fortune 500

00:33:22
firms, 5 or more, cut greater than 25% of their middle management

00:33:28
roles by the end of this year, crediting AI explicitly with a

00:33:34
25% probability. Actually interesting.

00:33:38
Here we have an over from Max, an over from Eric, and

00:33:44
an under from myself. I'm feeling pretty good about

00:33:47
the under. Yeah, yeah, look, I'm

00:33:49
gonna do the research on this. We're only asking for five firms

00:33:52
to cut 25% of only middle management.

00:33:54
So that's a pretty low bar.

00:33:56
According to my research with ChatGPT, this has not

00:33:59
materialized. Many large companies are

00:34:01
experimenting with AI and none have reported cutting 1/4 of

00:34:05
their management. Yeah, maybe we overestimated,

00:34:08
first of all how much we thought it would give them air cover for

00:34:11
all sorts of things. But yes, exactly.

00:34:13
Maybe they didn't want to lean right into the AI narrative.

00:34:17
If we got a tariff induced recession, this actually might

00:34:19
happen. So, OK, #9: DeepMind and Google

00:34:22
discover a new drug that clears phase one trials within 2025.

00:34:27
We gave that a 20% probability and Max took the under, Eric

00:34:33
took the under, and I took the under.

00:34:35
I think all looking pretty strong here.

00:34:38
And DeepMind spun out Isomorphic, which would be the startup

00:34:42
I think that would potentially do this.

00:34:43
So there's a there's a chance that even if it happens, we can

00:34:47
all claim technicality that it doesn't.

00:34:49
But I think it's not looking likely, right?

00:34:52
They have to be in phase one trials already.

00:34:55
Yeah, I mean I. Think we?

00:34:56
Yeah, I think that they are planning to enter trials by the

00:34:59
end of this year, so unlikely to have completed phase one trials.

00:35:04
And lastly, we have the international AI treaty with

00:35:09
greater than or equal to 15 signatories, including three of

00:35:13
the US, China, EU and UK, 50% probability. We all took the

00:35:19
under. Seems like a good bet so far.

00:35:22
Yeah, the AI believes too much in human institutions.

00:35:25
Right, because US, China both seem unlikely, right?

00:35:29
Is Europe, the EU, UK, viable? Getting the US, which under Trump

00:35:35
is now like no AI regulation and China which is we do what we

00:35:40
want. The fact that the EU and UK get

00:35:42
separate credit here doing a lot of work, but three of the four

00:35:46
seems high. Do you know, do you know what the

00:35:48
score is? So just pulling together the

00:35:50
scores, I asked ChatGPT to create a scoring system.

00:35:54
It came out with a formula inspired by Brier-style scoring

00:35:59
which I had never heard of. But yeah, that's how close you are

00:36:01
to the probability. Yeah exactly.

00:36:03
Yeah, seems like a good scoring system.
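
For reference, a Brier score is just the squared gap between a probability forecast and the 0-or-1 outcome, so lower is better. The exact formula ChatGPT produced isn't shown, but a plausible version for over/under picks might look like this (the convention for turning a pick into a forecast is an assumption):

```python
# Brier score: squared error between a probability forecast and the
# actual 0/1 outcome; lower is better, and a mean near 0 means well
# calibrated. The pick-to-forecast nudge below is our assumption.

def brier(forecast: float, happened: bool) -> float:
    return (forecast - (1.0 if happened else 0.0)) ** 2

def pick_to_forecast(stated_prob: float, took_over: bool) -> float:
    """Treat an 'over' as forecasting above the AI's stated probability
    and an 'under' as below it, clamped to [0, 1]."""
    nudge = 0.25 if took_over else -0.25
    return min(1.0, max(0.0, stated_prob + nudge))

# Illustrative numbers only, not the episode's real tallies:
picks = [
    (0.70, True, True),    # took the over at 70%; it happened
    (0.40, False, False),  # took the under at 40%; it hasn't happened
]
scores = [brier(pick_to_forecast(p, over), outcome)
          for p, over, outcome in picks]
print(sum(scores) / len(scores))  # mean Brier score for the player
```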

00:36:06
It gives 3rd place to Max, dinged for betting against the AI film,

00:36:13
better calibrated on his conservative bets like hardware

00:36:16
and policy. 2nd place Eric: solid instincts, slight overconfidence

00:36:21
on a few misses and in first myself.

00:36:25
Great balance of bold but accurate calls.

00:36:28
Thank you, ChatGPT. I love it, love it.

00:36:30
I think this AI film take is complete bullshit.

00:36:33
So I need to get the score adjusted for that.

00:36:38
This is just a check in, no medals awarded yet, but I will

00:36:44
take the pole position and see you guys in a few months where

00:36:49
we can do the final tally. All right, let's do our fantasy

00:36:53
draft. Max, you want to talk us through

00:36:56
the game and then we'll get into our picks.

00:36:58
Yes, we invented an ingenious game based on fantasy football

00:37:04
that allowed us to draft teams of startups into imaginary

00:37:09
rosters. We've done two different drafts.

00:37:12
We did one about a year and a half ago and one about six

00:37:14
months ago. We restricted the draft list,

00:37:18
the draft board as it were, to companies that had raised over

00:37:21
$100 million at the time. So if there's obvious omissions

00:37:25
that come up in your mind, it's probably because they hadn't

00:37:27
raised 100 million at the time. Cursor, Cursor

00:37:29
being the most obvious. I think, like, there were some, like

00:37:32
Waymo, that we didn't include because.

00:37:34
Yeah, we also excluded specific like chip based companies,

00:37:39
Chinese companies, I don't know for robotics, I can't remember

00:37:42
healthcare, anything we thought we were even dumber than

00:37:46
normal about we left off the list.

00:37:47
So we all drafted teams. We did a snake draft.
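
For anyone unfamiliar, a snake draft reverses the pick order every round, which, as comes up below, is how the third and fourth overall picks can land on the same person. A quick sketch:

```python
# Snake draft order: pick order reverses each round, so with three
# players round one goes 1-2-3 and round two goes 3-2-1, giving the
# third drafter back-to-back picks (picks 3 and 4 overall).

def snake_order(players, rounds):
    order = []
    for rnd in range(rounds):
        order.extend(players if rnd % 2 == 0 else list(reversed(players)))
    return order

print(snake_order(["Eric", "Max", "James"], rounds=2))
# -> ['Eric', 'Max', 'James', 'James', 'Max', 'Eric']
```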

00:37:51
Most notably, the first draft involved a discount that we had

00:37:57
one person had to take for getting the first pick because

00:38:00
the first pick was very obvious. It was Open AI.

00:38:02
Eric paid, I believe, $75 billion in handicap to draft.

00:38:07
OpenAI 1st. And I, I feel really good about it.

00:38:10
Aged pretty well since they're valued at 300 million right now

00:38:12
300 billion, would you say? 300 billion.

00:38:15
I'm sorry. Yeah, I, like, coughed.

00:38:16
Yeah, it doesn't matter. And I will say before we say our

00:38:21
teams, I have not yet had the first pick in any draft.

00:38:25
So I just want everyone to remember that when making our

00:38:27
teams, OK, I will go through my team first, which I will admit

00:38:32
upfront is in last place right now.

00:38:34
All right, my team, Databricks, my star worth $62 billion,

00:38:38
Cohere, AI model company, Modular, AI language company,

00:38:42
Scale AI, Sierra AI, Sakana AI and Hebbia.

00:38:47
And if you know all of those names, you are far too online.

00:38:51
I'm bullish on Sierra. I think that's, I think Scale

00:38:54
has room to run as well. Oh yeah, yeah, and obviously

00:38:57
Databricks isn't going anywhere. But Anthropic surpassing Data

00:39:01
bricks has been a bit of a sob story for my team because James

00:39:04
got Anthropic at crazy, crazy money, if I recall.

00:39:07
The 4th pick that wasn't even your third pick because it's a

00:39:10
snake draft. So it was me, then Max with Data

00:39:13
bricks, then James with you'll say in a second, which doesn't

00:39:17
make any sense, and then fourth with Anthropic.

00:39:20
It's so embarrassing in retrospect, just.

00:39:23
Just to clarify, like we drafted these teams originally

00:39:26
in 2023 and then we did, I don't know, we drafted it.

00:39:30
We had an add/drop waiver period last November and this

00:39:35
again is a mid year check-in. No adds, no drops, but

00:39:39
checking in on the teams. So Max, I have your score so far

00:39:43
right now at 93 billion, mostly because some of your teams have

00:39:48
not raised or exited since you drafted them.

00:39:51
I have no valuation on Sierra and Scale is at 25.

00:39:55
I'm somewhat optimistic on both of those.

00:39:57
I managed to somehow pick the only foundation model company in

00:40:00
the world that isn't wildly overvalued.

00:40:02
Cohere. SSI, Thinking Machines, Anthropic.

00:40:08
Like, just throw a dart at a board of foundation models.

00:40:11
You've got a $40 billion company.

00:40:13
But I'm... So Silicon Valley, the show, said

00:40:16
this from the beginning: no

00:40:17
revenue is so much better than revenue.

00:40:19
Cohere is a real business, so people can value it.

00:40:22
And SSI and Thinking Machines are a

00:40:25
dream. I have made a huge real

00:40:29
revenue... I think SSI has real revenue, but anyway.

00:40:32
Yeah, Max, what's your learning from this so far?

00:40:35
I would say we already knew this, but you know the winners

00:40:38
keep winning, right? Obviously Open AI swamps

00:40:41
everything else that's happened in the entire draft.

00:40:43
So we have a true power law, which is nothing else matters in

00:40:45
comparison to OpenAI even with the handicap, which

00:40:49
Which we knew, which we knew was the risk when we came into it.

00:40:51
Was happening, but regardless it still happened.

00:40:55
Secondly, I would say Databricks is, you know, merely a $60

00:41:00
billion company, but that looks lame compared to, you know, like

00:41:05
XAI being valued at 80. Like, you know, all right, all right,

00:41:08
let's not spoil it. James, you want

00:41:10
to go next. My team, with the first pick of

00:41:13
my draft that you guys were making fun of just moments

00:41:16
ago, Hugging Face. No value because they have not raised

00:41:21
since 2023. I've drafted them because they

00:41:24
had one of the highest valuations at the time of the

00:41:27
draft. They were valued, I think, over

00:41:28
a billion dollars. I thought they were valued at 4

00:41:30
or 4 billion dollars at the time, yeah,

00:41:32
something like that, yeah. Yeah.

00:41:33
OK, so Anthropic is giving me 61 1/2 billion in value. Replit hasn't

00:41:42
raised since the original draft. I exited Adept at 1 billion.

00:41:47
I snagged XAI with the first pick last November.

00:41:52
I'm so jealous of that.

00:41:53
Locked in 80 billion of value right there because they raised

00:41:57
earlier this year, and Runway also raised at a $4 billion

00:42:02
valuation. ElevenLabs also raised at a $3.3 billion valuation and

00:42:08
Poolside, no raise recently. Also one of those foundation

00:42:14
model companies that we have yet to really see appear on the

00:42:18
draft board, but I am happy with my overall team and my score of

00:42:23
close to $150 billion currently. You know, something I just

00:42:27
thought about, you're extremely lucky that XAI purchased X and

00:42:33
not the opposite way, because if it had been X that purchased XAI,

00:42:37
We'd be able to like, force you to disown whatever growth, but

00:42:40
now you get to benefit from this combined monstrosity, which

00:42:45
kudos to you. I would just say had I been able

00:42:48
to draft first, I would be in James's position of being in

00:42:51
second place with XAI, so I don't personally think that a

00:42:55
coin flip should be dictating my performance right now.

00:42:59
Woe is you. I know.

00:43:01
Woe is me. I will say, to give James credit here, I truly

00:43:05
believe that Anthropic is the pick of the draft or, you know,

00:43:08
so far. I think that just getting Anthropic at the fourth position

00:43:12
in retrospect looks insane. And so I think that that is the

00:43:16
the greatest. I don't even know if I call it a

00:43:18
sleeper, it's sort of a semi-sleeper pick, but that

00:43:21
clearly to me has had the most appreciation.

00:43:23
Eric, why don't you go? All right.

00:43:25
So yeah, I picked Open AI with a $75 billion handicap.

00:43:29
Now it's worth 300 billion. So I'm getting basically 225

00:43:33
billion for that. Inflection sold for 1.43

00:43:37
billion. Character sold to Google for 2.5

00:43:41
billion. Glean we're scoring at 4.6

00:43:45
billion, but rumored to be raising at 7 billion.

00:43:49
Mistral AI, worried about that one. 6 billion right now.

00:43:54
Perplexity. Oh man, I'm getting no credit

00:43:56
for that. That's going to be a good 10

00:43:58
right now, but it will be 14, apparently, according to the rumor,

00:44:02
so we'll see. Safe Superintelligence.

00:44:05
I knew this was buzzy, but I don't even know if I could have

00:44:08
seen this one raised at $32 billion already.

00:44:13
It's worth more than Perplexity. Like, that's insane.

00:44:18
Codeium sold for 3 billion. Do you want to explain that? It's,

00:44:23
they renamed to Windsurf, that's their name.

00:44:25
Codeium is Windsurf. Yeah, they sold to OpenAI and

00:44:28
then Harvey. No credit right now but rumored

00:44:31
to be raising at 5 billion. Total value $274.5 billion.

00:44:39
Yeah. I feel really good about this.

00:44:41
I mean, hysterically, as I think I mentioned on the last episode,

00:44:44
I wrote a bear case about OpenAI after this, at 157 billion, I

00:44:49
think, but whatever. So I'm getting it narratively

00:44:52
both ways. But yeah, I mean, I'm proud of

00:44:55
all my picks. I think even my sort of singles

00:44:58
are selling and I'm bullish on basically everything except I

00:45:04
would like to hear what's going on with Mistral.

00:45:07
But yeah, it's I mean, it's a power law business.

00:45:10
It's crazy that like I'm like, oh, glean that's that's a good

00:45:13
company. I was totally right that that

00:45:14
would be a good company, but it it doesn't really matter for my

00:45:17
my performance. Yeah, I mean I think you are

00:45:19
consistently hitting singles and double s, but you could have

00:45:22
nothing on your team except opening eye and be beating us by

00:45:25
100 plus billion dollars at this point.

00:45:26
So it. Doesn't, no.

00:45:28
Which is why I made a that we could not randomly assign the

00:45:31
first one and I I said we know. You were right.

00:45:34
So I'm complaining, I I'm not complaining.

00:45:36
You made the right decision 100%.

00:45:38
It's just it, it is remarkable. It's like the whole game is just

00:45:42
like, open. AI who drafted Open AI who

00:45:44
drafted one Open AI go back. To that episode.

00:45:47
Yeah, exactly. Yeah, No, we talked about it.

00:45:49
I mean, I think it was it would they were raising at 90 at the

00:45:52
time. And so you ended up with a $75

00:45:54
billion handicap on a $90 billion company, which seemed

00:45:57
like a reasonable deal to us. But you know, we were.

00:46:00
We were all wrong, obviously. Or at least we're wrong with us.

00:46:02
Check in. These do have to last, right?

00:46:04
It's like 5 years or something. We're we're like, yeah.

00:46:07
Five years, yeah. Yeah, we had a good shot with

00:46:10
Sam Hoffman getting fired of your team.

00:46:13
Going up sync. But but then he came back in

00:46:16
force. All right.

00:46:17
What's come on the market that you think we'll be looking at at

00:46:19
the end of this year? Thinking machines.

00:46:21
Cursor for sure. Manus.

00:46:24
Oh yeah, yeah, yeah. Thinking Machines, cursor and

00:46:27
manus are the ones that come to mind for me.

00:46:28
Yeah, I mean, it's, it's going to be a tight band because we're

00:46:30
we're picking them up and we're only interested in ones What

00:46:33
that. They have to have raised $100

00:46:34
million. All right, well, that's that's

00:46:36
basically our episode. We're gonna have two more before

00:46:40
the Cerebral Valley AI Summit in London on June 25th.

00:46:43
And then at some point, once we've gathered ourselves, gone

00:46:47
to that wedding in France we mentioned and relaxed a little,

00:46:50
we'll come back to you and give you our thoughts from the event.

00:46:53
I'm super excited about next week.

00:46:56
No pressure James, our our game master over here, but we're

00:47:01
trying to come up with some good concepts, but we'll be talking

00:47:05
about voice and video and certainly in light of what VO3

00:47:10
Googles new video creation model, it's an exciting time in

00:47:14
video. So see you next week.

00:47:16
Thanks guys. See ya.

00:47:18
Thank you.