MongoDB.local San Francisco is happening on January 15th. Learn more and register here → http://mdb.link/sf-dot-local

At the Cerebral Valley AI Summit, we surveyed more than 300 founders and investors with one question: “Which billion-dollar AI startup would you short?” The answers were… blunt. In this episode, Eric, Max Child and James Wilsterman break down the most surprising picks, what they reveal about the state of AI in 2025, and the shifting mood inside the industry. We also revisit the biggest moments from the summit — from agentic AI to the sustainability of today’s valuations.
00:00:00
At the Cerebral Valley AI Summit, we asked a group of
00:00:02
founders and investors which billion dollar AI startup would
00:00:05
you short and their answers caused a bit of a stir.
00:00:13
Joining me today are my Cerebral Valley AI co-hosts and the co
00:00:17
founders of Volley, Max Child and James Wilsterman.
00:00:20
We'll also be diving into some of the insights from our
00:00:23
panelists throughout the conference.
00:00:24
My interview with Mike Krieger, the Chief Product Officer at
00:00:27
Anthropic, former Co founder of Instagram, about the problem
00:00:31
with sycophancy in foundation models.
00:00:33
My wife got her first like you're completely wrong.
00:00:36
And she was like, yes, this is great.
00:00:38
And I think we should have more of that.
00:00:39
We looked at a clip from the mayor, whoever the politician, enjoying the
00:00:44
adoration of the crowd. No one should be asking someone
00:00:47
that's been in a job for 10 months for advice.
00:00:50
And we ask, is MechaHitler inevitable? On stage with Jimmy
00:00:55
Ba, one of the co-founders of Elon Musk's xAI: the MechaHitler
00:01:00
in the room. Yeah, MechaHitler, our model had
00:01:02
an episode. This is the Newcomer podcast.
00:01:14
All right, I'm excited. We've had some time to rest
00:01:17
since the Cerebral Valley AI Summit.
00:01:19
Max has been resting lie-flat. What?
00:01:23
Literally just got home from Dubai, I
00:01:26
believe literally 10 minutes ago, got off a 16-hour-long flight over
00:01:31
the world's greatest hits like Tehran and Moscow.
00:01:33
I I've got my seven week old. So back back to the parenting
00:01:38
mines, though it's been a joy. James, what about you, are you all
00:01:41
rested or how are you feeling? I'm not well rested, Eric.
00:01:44
I have been solo parenting my 2 year old for.
00:01:49
Where's your wife? Brazil and Mexico.
00:01:53
Nice. Yeah.
00:01:54
So we've all got our own reasons
00:01:58
to be... We're at the exact level of delirious that the viewer
00:02:03
should want here because we just had the Cerebral Valley AI Summit,
00:02:06
right? We host this twice a year.
00:02:08
Big AI, you know, 300 person event, top AI founders,
00:02:12
investors, real insider thing. We sent out a survey, became a
00:02:16
point of media fascination, like sources, Business Insider wrote
00:02:20
about it. I think random Indian media
00:02:22
outlets for reasons that we'll explain as we progress.
00:02:25
We had 300 attendees at the thing.
00:02:28
I want to be clear, like, you know, some of these things we
00:02:31
didn't... Not everybody filled out this
00:02:32
anonymous survey. This was not academic research.
00:02:35
This is you can count the dots for yourself and sort of infer
00:02:38
how many people did this. I don't know what are we saying?
00:02:40
Like 30 to 40 people. I just want to be transparent.
00:02:43
But it gives you a sliver of where this highly engaged
00:02:46
audience thought things were in. Part of the fun was that Max and
00:02:49
I were on stage sort of reacting to the survey results.
00:02:53
Anyway, so we're going to dig into the survey.
00:02:55
James, this is really your brainchild.
00:02:58
Anything I missed about it or you want to start ticking
00:03:00
through the questions? Yeah, I guess one fun
00:03:03
part of the game that we played on stage was that you and Max had
00:03:07
to guess what the audience would think the answer to these
00:03:12
questions might be. And I have to say, Max kind of
00:03:15
ran away with that game on stage.
00:03:18
He did well. I
00:03:18
think I won 51-41 or something. Max, Max did well, yeah, it was,
00:03:23
some of them were close. Let's go through.
00:03:25
We can talk about... Yeah, you did
00:03:26
very well. We can,
00:03:27
We can react to the reactions. Myself, despite Max's dominance,
00:03:31
I think I had, I had my ear on the pulse still.
00:03:34
All right, should we start with the beginning?
00:03:36
Yeah. So the first question was what
00:03:38
will OpenAI's annualized revenue be?
00:03:40
At the end of 2026, the audience median was 30 billion.
00:03:47
And it's, 20 billion for 2025 is the expectation, right?
00:03:51
Yeah, over 20 billion already. So this is pretty low estimate
00:03:55
in my opinion. And I said like 40 or what did I
00:03:58
say? Yeah, I forget who said 40 and
00:04:01
one of us said 40 and one of us said 41.
00:04:03
I did be one of you. Yeah, Yeah, I went higher, maybe
00:04:06
41 or something. I was surprised.
00:04:08
I think that given we're exiting this year at 20 in OpenAI, I
00:04:11
thought the effusive Glee and bubble talk of the conference
00:04:15
would would flow through to a growth.
00:04:17
Video. It'd just.
00:04:18
Be earning AI. Things are going well.
00:04:20
We're round tripping all day. Everybody's making money.
00:04:22
Revenue is not the problem. Mere 50% growth for OpenAI, I
00:04:26
think would be, like, considered trouble at this point in the
00:04:30
bubble cycle. So.
00:04:31
I'm betting on 40. If this actually happens, I
00:04:34
think the bubble is over baby. What's your bet Max?
00:04:37
Yeah, I think 40 is what I, yeah, I said on the last podcast
00:04:40
I think we discussed. I mean I think 2X year on year
00:04:43
makes makes sense to me unless the bubble collapses.
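A quick back-of-the-envelope on the revenue figures being tossed around here, just to make the growth rates concrete. This is a sketch; the ~$20B exit-2025 run rate and the 30B/40B scenarios are the speakers' numbers from the conversation above, not verified financials.

```python
# Figures from the conversation above, not verified financials.
exit_2025 = 20e9                  # OpenAI annualized revenue exiting 2025, per the discussion

audience_median = 30e9            # the survey's median guess for end of 2026
print(f"Median implies {(audience_median / exit_2025 - 1):.0%} growth")   # 50%

doubling = exit_2025 * 2          # the "2x year on year" case Max and Eric argue for
print(f"2x year-on-year lands at ${doubling / 1e9:.0f}B")                 # $40B
```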
00:04:45
I'm curious, where do you guys think the next 20 billion comes
00:04:48
from? Like is it business as usual or
00:04:51
do they need to create new products or?
00:04:53
I don't know copy Harvey, you know, going to legal anything.
00:04:57
It's like, I think, you know, they should lean into the API
00:05:00
business through this sort of general purpose foundation
00:05:02
model. So I, I don't necessarily think
00:05:04
they need to go after an application directly, but if
00:05:06
they were really hungry for revenue, you'd think they'd
00:05:10
figure out who their best partner is and say screw it,
00:05:13
we're going to cannibalize them to find the revenue.
00:05:15
It kind of seems like they're
00:05:17
gearing up for that yeah, Palantir model or you know, find
00:05:20
the biggest pockets of money in the world and, you know, consult
00:05:24
on how to adopt AI in those companies, right?
00:05:29
I mean, I think there's a lot of headroom on the consumer
00:05:30
subscription business as well. I just think that, you know,
00:05:33
they haven't necessarily monetized, you know, a huge
00:05:36
percentage of the people who use AI every day and they just keep
00:05:39
adding value to that consumer subscription.
00:05:41
And so, you know, if you even if you just got a doubling of that,
00:05:43
that you know, that would get you pretty far along the route
00:05:45
here. All right, next question.
00:05:47
What will NVIDIA be worth at the end of 2026?
00:05:51
On the day we took this survey, it was 4.8 trillion.
00:05:55
They just had earnings. I'm not actually sure what it
00:05:57
is. The day we're recording it up,
00:05:59
is it? Yeah.
00:06:00
It's pretty. Close.
00:06:01
They had they had pretty good earnings, I think, but I don't
00:06:04
think Wall Street. Yeah, I checked the stock's like
00:06:07
4.5, 4.6 right now or something. 4.35 trillion.
00:06:10
So down, yes, down a little bit, yeah.
00:06:12
Yeah. All right.
00:06:14
Unless they always they beat earnings and then they still,
00:06:18
you know, the expectations are so insane.
00:06:21
I said five, I think. Max, what
00:06:22
Did you say? I said 6, which was dead on the
00:06:25
money, if I recall. Yeah, yeah.
00:06:27
Yep. So the audience, the audience
00:06:29
had a median of 6 trillion. Very few outliers on the high
00:06:36
end there. 5 was the next biggest grouping.
00:06:38
Also interesting median was just so large is that's really it's
00:06:44
just so many. If I was think it was average.
00:06:47
Well, the average is obviously thrown off by this, like,
00:06:49
hundred trillion as a troll, but forget that one.
00:06:51
I think based on the performance of the recent earnings, I think
00:06:54
this is actually high for reality.
00:06:56
Like they crushed the recent earnings as we discussed and the
00:06:59
stock barely went up on a multi day period.
00:07:02
So if they're not going up, you know, 10% a quarter, essentially
00:07:07
they're not going to hit this number.
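A rough check of that "10% a quarter" intuition, assuming the ~$4.35T market cap mentioned above and roughly four to five quarters between the recording and the end of 2026 (the quarter count is my assumption, not something stated in the episode).

```python
# Implied quarterly growth to reach the $6T audience median from ~$4.35T today.
start, target = 4.35e12, 6e12
for quarters in (4, 5):                       # assumed number of quarters remaining
    rate = (target / start) ** (1 / quarters) - 1
    print(f"{quarters} quarters -> {rate:.1%} per quarter")
# ~8.4% over 4 quarters, ~6.6% over 5: roughly the "10% a quarter" ballpark cited above.
```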
00:07:08
And I just don't see that if those earnings aren't moving the
00:07:10
stock up. So this actually feels high for
00:07:12
reality to me. 5 is hard because you're sort of like you're
00:07:16
chickening out on saying it's all going to blow up.
00:07:18
And you know, it's, it's like, what is this reality where
00:07:21
either it's like, oh, the mania has continued or there's a
00:07:24
pullback. In some ways my 5 trillion feels
00:07:27
like sort of a weird sort of same.
00:07:30
Middle ground sound. World.
00:07:31
But yeah, who knows? Obviously, if you could, if you
00:07:34
knew the answer to this, you could be unlimited, you could
00:07:37
have an unlimited amount of wealth.
00:07:38
So nobody knows, but this is what a couple people think.
00:07:45
What year will an independent committee of experts, as
00:07:48
dictated by the Microsoft Open AI Agreement, declare that we
00:07:52
have reached AGI? I thought this was a funny
00:07:55
question because at once it's such a like, silly idea, like,
00:07:59
oh, we're gonna have AGI, but like, there's an actual
00:08:01
contractual agreement between Microsoft and Open AI that
00:08:04
there's a committee to resolve this and big business when, you
00:08:08
know, dealings hang on it. So it's a specific question,
00:08:12
which is sort of funny. This was one of my favourites
00:08:14
because you guys were trying to guess what the audience will try
00:08:18
to guess what Microsoft and OpenAI will, will decide is AGI, so a
00:08:23
lot of layers of prognostication going on here.
00:08:28
What do you guys say? You said much higher.
00:08:30
I said 2035. Yeah, I said 29.
00:08:34
So I was pretty close to the median of 2030.
00:08:37
What do you guys think in retrospect?
00:08:38
Like is this is too early or? I just feel like a theme of
00:08:43
Cerebral Valley in the beginning, you know, we started
00:08:46
in 2023, in March 2023 after, you know, ChatGPT. That was probably
00:08:50
the conference. We talked most about AGI and
00:08:52
then we talked about it less and less every time.
00:08:53
You know, it's like there was so much enthusiasm when the models
00:08:56
came out and now we're in the sort of like, oh man, this is
00:08:59
exactly like self driving cars where you feel really close and
00:09:02
then there's a lot of like edge cases to hammer around.
00:09:06
And so I just think AGI pessimism has gone way up and
00:09:09
you add to that the Andrej Karpathy sort of thing, and it's just
00:09:12
like, I don't know, I don't think the insider vibes are like
00:09:15
AGI tomorrow unless you're talking to Dario or something.
00:09:18
I think the, I think the interesting thing here is that
00:09:20
there's basically just two buckets of people.
00:09:23
One is 2030 or sooner, which are like the accelerationist and
00:09:27
then one is like 2045 or never, which is like the
00:09:30
decelerationist or the pessimist or whatever you want to call
00:09:33
them. Maybe, maybe the realist.
00:09:34
Yeah, yeah, yeah. Whereas you sort of hit this
00:09:37
exact, I thought that basically nobody was, which it was 10
00:09:39
years from now or whatever. But like it's still gonna
00:09:42
happen. Which is an interesting like,
00:09:45
yeah, that you sort of found the middle of this smiling curve
00:09:48
where you're either an optimist or you're a pessimist, and you
00:09:50
kind of tried to hit the middle and it didn't quite.
00:09:52
Yeah, that's interesting. We have more optimists at our
00:09:54
conference was the end was the end result.
00:09:56
That's why Max beat me. He understood, though.
00:09:58
That was the psychology, the answer right there.
00:09:59
Yeah, exactly right. Optimistic Conference.
00:10:02
There's something slightly interesting comparing it to the
00:10:05
self driving car world like you said though Eric, because self
00:10:09
driving cars are basically useless until they reach parity
00:10:13
of capability with human drivers. Right, this is not like that.
00:10:16
This is not. It just happens to have all
00:10:19
these other very targeted, more verticalized use cases that are
00:10:24
super valuable but. 100%. With the full human replication.
00:10:29
Yeah, which is why we're you know, I was negative about self
00:10:32
driving cars because it was annoying because you need them
00:10:35
to actually work, whereas this I've been very enthusiastic.
00:10:38
So yeah, I I agree that's what's beautiful about text versus
00:10:42
safely delivering humans places, which thankfully now Waymo is
00:10:46
good at and we can celebrate it.
00:10:48
But you know, 10 years ago or whatever, it was annoying
00:10:50
marketing. I like that.
00:10:54
OK, next question, Which venture capital firm's AI portfolio are
00:10:58
you most jealous of? I think this was kind of a
00:11:00
shocker, right? Neither of you guessed A16Z,
00:11:04
which tied with Khosla for the lead here.
00:11:08
Obviously this... We both said Thrive, right?
00:11:10
Yeah. And then we both.
00:11:11
And then we decided to tie a. 2nd pick or something.
00:11:14
No, I said Sequoia. Sequoia and I said.
00:11:16
And that's why you won this. Yeah, I got the kicker on that
00:11:18
one. Yeah, that was a good pull.
00:11:20
Max, how did you decide to pull Khosla?
00:11:22
Just open AI or? I did some ChatGPT research
00:11:26
before, and then went with the first in Open
00:11:30
AI, first venture investment in OpenAI, yeah, but that round
00:11:33
has gotten significantly diluted and I'm sure they'll be
00:11:35
reporting overtime. I don't know how much they've
00:11:37
done secondary. It was one of these fantastic
00:11:40
investments but it feels like they just they got squashed down
00:11:45
by later stage rounds and all these negotiations.
00:11:47
I think we can all agree though, the Andreessen, you know, tie
00:11:51
for victory with Khosla is pretty bizarre because like I,
00:11:54
you know, there's not a lot of really notable successful
00:11:57
Andreessen AI investments compared to most of these other
00:11:59
firms. I mean, I got a text from
00:12:02
somebody when I shit on Andreessen on stage, which is
00:12:04
funny, which is, what's wrong with
00:12:06
this? Like, live on stage, right after I went off, but I,
00:12:10
which it's like, oh, people are paying attention.
00:12:13
Yeah, I don't know Andreessen. I mean, they have I I can't list
00:12:17
they had Character. They have, like, SSI. I mean, I'm
00:12:20
sure they have a ton. Some of them are later, you
00:12:22
know, it's like they're in OpenAI, they're in xAI.
00:12:26
I just think the lesson of our draft was like the only thing
00:12:28
that matters is basically being heavy in Anthropic, xAI or Open
00:12:32
AI and like they're not really in any any of those, right?
00:12:36
I mean, it was my understanding. Probably big in XAI.
00:12:40
Are they big in XAI? I think that they're in X, but I
00:12:44
don't think they're huge. Yeah, these are all
00:12:46
growth. Sequoia was pretty big, yeah.
00:12:48
Sequoia has some money. All I'm saying is, yeah, to me
00:12:51
it feels like the PR of Andreessen generally is sort of
00:12:55
overwhelming the actual portfolio.
00:12:57
It's like, oh... You know, I think Thrive is doing
00:13:00
really well. They've done these huge bets in
00:13:01
open AI like big yeah, pre all the markups.
00:13:05
So I bet they're doing really well and would like to
00:13:08
substantiate that. Thrive, reach out. I mean, all the, you know,
00:13:14
they've mentioned a bunch of firms like, you know, it's like
00:13:17
Elad Gil, who we had on stage, is obviously great, Index
00:13:20
Ventures. I mean, some of these, like, who
00:13:21
knows, some partner, you know, set them.
00:13:23
I'm personally jealous of this slide because it would be a great
00:13:26
cap table for a startup. Just have all these
00:13:30
things. I mean, every
00:13:30
single person on the slide here.
00:13:32
Yeah, yeah. Sounds good and benchmark
00:13:35
doesn't make sense to me necessarily for AI portfolio,
00:13:38
you know, but I mean they have Mercor now.
00:13:41
Mercor, LangChain, I don't know.
00:13:44
But oh, OK, as I'm about to preview, I do think brand, like
00:13:48
having a big brand, you know, who gets an answer on a survey?
00:13:53
Somebody who has large mindshare.
00:13:55
And This is why surveys are imperfect.
00:13:59
Yeah, as research mechanisms.
00:14:00
So we're about to see name recognition is everything here.
00:14:04
If you could put money in any private technology companies
00:14:07
today, what would they be? So the top 10 by far was the
00:14:13
first was Anthropic, followed by Open AI and then Cursor.
00:14:19
Those are the top 3. Rounding out the top five was
00:14:23
Anduril and SpaceX. OpenEvidence is interesting,
00:14:28
then Perplexity, Replit, Stripe, xAI. Max and I both said
00:14:31
anthropic right? Yes, yes.
00:14:33
I feel like the tiebreaker, First of all, I was idiotic on
00:14:36
the tiebreaker because I picked a really random company.
00:14:39
Nobody was ever going to pick. I said Fireworks, which was
00:14:41
clearly just like if I were going to make the bet.
00:14:44
I don't know what I was thinking.
00:14:45
Max you picked cursor. Cursor, cursor.
00:14:48
Yeah, well, with the momentum play.
00:14:50
I I think we got, we had one pick this tiebreaker thing.
00:14:53
It was not pre-negotiated. I didn't come in with a
00:14:55
tiebreaker, so I didn't. Do so.
00:15:02
But you both kind of agree with the audience on Anthropic.
00:15:05
I mean, it's an interesting question because Anthropic is
00:15:08
only, you know, valued at, you know, what is it 350 billion
00:15:12
compared to OpenAI's 500 now, right?
00:15:14
They're they're starting to get pretty pricey comparatively
00:15:17
speaking. And I know their revenue growth
00:15:19
has been unbelievable. And I think their mindshare
00:15:21
in Silicon Valley again among developers.
00:15:23
And there is this sort of like undercurrent of like they're
00:15:27
like the ethical AI company, quote unquote, with the, you
00:15:30
know, cool hip branding in the West Village.
00:15:33
It is a bit strange to me that they're substantially bigger,
00:15:39
you know, than OpenAI in the votes here, because I just think
00:15:41
that the hipness of Anthropic is maybe outweighing the fact that.
00:15:45
Well, Silicon Valley is more bullish on Anthropic than you
00:15:50
know, most of America clearly of.
00:15:53
Course, yeah. But I, I just think, yeah, if
00:15:55
you did an honest assessment of the valuation compared to the
00:15:58
revenue versus OpenAI, you would say like, hey, you might
00:16:01
just want to take the momentum play bet on OpenAI.
00:16:04
So it was a bit of a yeah, let's guess what the hip Silicon
00:16:07
Valley one is. And you and I both correctly
00:16:09
guessed the hip Silicon Valley one was Anthropic.
00:16:12
Yeah, which global company's model will top the LM Arena web
00:16:16
development leaderboard at the end of 2026?
00:16:20
We had an excellent conversation with some insiders in the data
00:16:24
labeling space the night before who said that LM Arena is an
00:16:29
incredibly gamable metric and it's because it's basically
00:16:32
voting from users whether or not they liked the response or not.
00:16:37
And so it's very susceptible to, some might call it glazing,
00:16:41
others might call it sycophancy towards the user.
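For context on why that's gameable: LM Arena-style leaderboards are commonly described as feeding head-to-head user votes into an Elo-style rating, so answers that win more votes climb whether or not they're correct. A minimal sketch of that update rule follows; the K-factor and starting ratings are illustrative, not LM Arena's actual parameters.

```python
# One Elo-style update from a single pairwise vote; constants here are illustrative only.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))  # winner's expected score
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

ratings = {"model_a": 1200.0, "model_b": 1200.0}
# A user prefers model_a's answer, so model_a gains exactly what model_b loses.
ratings["model_a"], ratings["model_b"] = elo_update(ratings["model_a"], ratings["model_b"])
print(ratings)
```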
00:16:44
And their belief was that open AI was heavily over optimizing
00:16:48
to glazing their users and therefore was continuing to do
00:16:52
well in these types of rankings. Which I thought was an
00:16:54
interesting point, which is why I think I chose open AI for
00:16:56
this. And did well on that one. The answers are
00:16:58
OpenAI, Anthropic, Google Gemini, xAI and then Alibaba.
00:17:02
We made sure to say global to try and induce some Chinese
00:17:05
answers, but they didn't rank high.
00:17:08
I just think this was basically what the rankings were during
00:17:13
the day the survey was taken like nobody was really going out
00:17:17
too far on a limb that this would be radically different
00:17:20
next year. But I think that's interesting
00:17:22
because my understanding is now Gemini ranks higher than open
00:17:26
AI, you know, a week later, right?
00:17:28
So I bet you'd, I bet you know, you'd see a lot more people
00:17:32
guessing Gemini. The insiders needed to be more
00:17:34
aware, yeah. Yeah.
00:17:36
Man, I wish Gemini had come out before the conference.
00:17:39
So, yeah, well. Why do you wish that?
00:17:44
Why? Just think Google would have
00:17:45
leaned in and talked about it. It also would have been giving
00:17:47
us like a current. We had plenty to talk about.
00:17:50
It was one of my favorite events, but you know, it's
00:17:53
there's a lot of big. Happen a week later.
00:17:55
World that happens a week after it's like, oh come.
00:17:58
On, Yeah, All right. If you could short a $1 billion
00:18:01
valuation startup, which would it be?
00:18:04
Before before we yeah, before we answer this question, let me say
00:18:08
having just gotten off the 16 hour plane ride from the United
00:18:12
Arab Emirates, this was brought up to me multiple times apropos
00:18:16
of nothing in conversation with venture capitalists and
00:18:22
investors and anonymous sources of all stripes.
00:18:24
They had no idea I'm associated with the conference.
00:18:26
They had no idea that this was something that I was personally
00:18:29
like on stage for the reveal of this information.
00:18:33
So this not just went viral, this literally traveled around
00:18:37
the world faster than, you know, as fast as the speed of light.
00:18:40
The the answer to this question. And I think other journalists
00:18:43
probably made millions of dollars off of this.
00:18:46
And Eric, maybe just no, well, I don't know, money.
00:18:48
Do you think we will make off stories?
00:18:51
Maybe not, maybe not. Price.
00:18:53
We'd be lucky. If they made 10, I mean they
00:18:55
didn't make any money. I meant,
00:18:56
I meant Business Insider. I meant business.
00:18:58
Ties it to the tune of $20 all.
00:19:01
Right, All right. I meant, I meant Business
00:19:03
Insider. If they have a sub plan, which I
00:19:05
believe they do, sure converted hundreds of Subs off of ripping
00:19:09
off our survey. So Congrats to Business Insider.
00:19:13
Yeah. So yes.
00:19:13
We're with that you get the coverage to be clear.
00:19:15
Thank you, Ben. Thank you for the coverage.
00:19:17
Thank you, Eric. How does this make you feel
00:19:19
about lists and rankings and types of stories you might
00:19:24
write? We played it as a fun game on stage, and I think our
00:19:29
coverage in the newsletter reflected that.
00:19:30
It was like a fun game. And that's what we're talking
00:19:33
about it here. This was not an academic survey.
00:19:35
It was sort of provocative and it's funny.
00:19:37
I mean, some ways the media should be a little looser.
00:19:39
It's like, oh, some insiders think this thing, but just like
00:19:42
when things are turned into like journalese, it feels like
00:19:45
official, like Silicon Valley has turned on perplexity.
00:19:48
It's like, I don't know, so random people would decide to
00:19:51
fill out a survey, you know, that's what they said.
00:19:53
But and perplexity was on the bull list, you know, way lower.
00:19:57
And I, I do think this means something that it was #1 short,
00:20:00
I mean, Perplexity, why is it #1? Super highly valued and doesn't
00:20:04
have that gateway to the consumer.
00:20:06
And people have tried to do browsers forever.
00:20:09
You know, it's like people have failed.
00:20:11
And you know, Google Chrome and Safari and Explorer dominate.
00:20:16
So it's it's a very hard space. So yeah, there are some
00:20:20
investors that are super bullish, but I think most people
00:20:22
are like, how do they get distribution?
00:20:24
Yeah. The the reason it's a short
00:20:26
right is, is valuation I think to a large degree, right.
00:20:29
It's valued at $20 billion, right.
00:20:31
And whatever leaks have come out about the revenue from the
00:20:35
inside, you know, there's some debates of whether or not
00:20:37
they're counting free trials that are a year long as part of
00:20:41
revenue if you read the information story about this
00:20:43
kind of stuff. But anyway, even if you just
00:20:44
take it on its face, the revenue that they're leaking or stating
00:20:48
this is like 100X revenue multiple which is sort of
00:20:52
ludicrous on any company. I mean even the wildly
00:20:56
overvalued companies were talking about earlier like
00:20:58
Anthropic and OpenAI are only at, like, a 25X revenue multiple.
00:21:03
So this thing is worth, you know, the, the excitement around
00:21:07
it from a valuation perspective is roughly 4X OpenAI or Cursor,
00:21:11
right, if you were to just sort of do the math there.
00:21:13
And so I think that's the reason it's a short, it's just that the
00:21:15
valuation is just out of control.
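The arithmetic behind that "roughly 4X" line, using the rough figures cited in the conversation (a $20B valuation, an ~100x revenue multiple, and the ~25x cited for the big labs; none of these are verified numbers):

```python
# Valuation-multiple arithmetic from the figures cited above (unverified).
perplexity_valuation = 20e9
perplexity_multiple = 100          # ~100x revenue, per the discussion
peer_multiple = 25                 # the ~25x cited for OpenAI / Anthropic

implied_revenue = perplexity_valuation / perplexity_multiple
print(f"Implied revenue: ${implied_revenue / 1e9:.1f}B")                         # ~$0.2B
print(f"Richness vs. the big labs: {perplexity_multiple / peer_multiple:.0f}x")  # 4x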
00:21:17
And to your point, they don't have their own models.
00:21:20
It is a search engine. And obviously Google and Open AI
00:21:22
are trying to own the search space.
00:21:23
And now they're getting into the browser thing.
00:21:25
And it's not clear that even this that AI browsers are even a
00:21:28
space. Yeah.
00:21:29
So most votes, most shorts was Perplexity, followed by Open AI,
00:21:35
and then tied for third was Cursor, Figure, Harvey, Mercor,
00:21:40
Mistral and Thinking Machines. I, I answered, I think, OpenAI
00:21:44
on the belief that oh man, popularity and Max got this dead
00:21:48
on. So kudos to Max, which was a
00:21:50
great pick, but I, I got the number 2 and then I think we did
00:21:54
a second. Did we do a second?
00:21:55
I said thinking machines also, which was third.
00:21:57
So we we, we were on the pulse. And that and that was that was
00:22:00
before thinking machines got marked out to 50 billion.
00:22:04
That was it came in 3rd when it was a $10 billion company.
00:22:07
Now it's 50. So that to me would have been
00:22:10
the the pick maybe. You know, Cerebral Valley survey
00:22:13
went viral with Indian media because Perplexity founder is
00:22:16
sort of a high profile Indian founder and I think there's
00:22:19
interest and how it goes. So it really, really travelled.
00:22:23
I think perplexity hangs on, you know, it could sell to Apple or
00:22:28
somebody to save their AI strategy, I think because they
00:22:31
have a big distribution problem. And then some people really
00:22:34
believe in the founder and some people don't.
00:22:36
All right, any final takes? Well, I was just curious about.
00:22:40
I was curious about cursor because you said that you had a
00:22:43
a. Oh, I just wanted to not talk
00:22:45
about it until we revealed that it was
00:22:47
the short of the shorts. I don't know. Max, what's your take?
00:22:50
Are you bullish or bearish? I mean, I don't know, probably
00:22:56
bullish given the momentum they're seeing on revenue and
00:22:59
revenue growth. I think if you just take the
00:23:01
brain dead case that lots of revenue is good and lots of
00:23:04
revenue growth is good, they have a good business.
00:23:07
I think they're much more likely to exit to someone for near
00:23:09
their current valuation than Perplexity, for example, where I
00:23:12
think that buying it for 20 would just be absolutely insane.
00:23:16
But yeah, I mean, ultimately there's this whole debate with
00:23:19
Cursor that they're repackaging other people's models and their
00:23:22
gross margins are terrible and yadda, yadda, yadda.
00:23:24
But you know, ultimately someone may have to cave and just buy
00:23:27
them to own the IDE space. You know, Microsoft being an
00:23:30
obvious candidate. So it'll be.
00:23:31
Interesting. And now, now Google has Antigravity, which is their...
00:23:34
Right. That's more their Claude Code, I
00:23:37
think. I.
00:23:38
No, no. I.
00:23:38
Downloaded it. No.
00:23:39
Antigravity is, is a Cursor clone in many ways, but I mean.
00:23:44
OK I I have it on my computer. It has terminal access.
00:23:47
It's trying to come. Up with.
00:23:49
Well, it's designed more specific.
00:23:51
I guess you could say it's Claude Code.
00:23:53
To some degree it's because it's.
00:23:54
So does. It felt like Claude Code because
00:23:55
I do stuff in the terminal. I haven't actually used
00:23:58
Cursor. I'm not a coder so I don't know,
00:24:00
but when I used it, it felt like Claude Code.
00:24:03
You shouldn't have to use terminal that much for anti
00:24:07
gravity, like, versus Cursor. I don't know.
00:24:09
I don't know. You wanted me.
00:24:10
It built my website and then I
00:24:14
then I was like, OK, go in the terminal and run it.
00:24:16
Yeah, so that's what cursor would do too.
00:24:18
That's what Cursor would do. Because you're just, it's like a
00:24:22
level above Lovable or something, right?
00:24:25
Where it's like it's forcing you to.
00:24:26
Actually more, yeah, my speed is Replit, even dumber than
00:24:30
that. I think I need to try Replit,
00:24:31
like. I think you'd like Replit.
00:24:33
It's like in between. I think Replit has a little
00:24:35
bit more pro user. I want the dumbest featureless
00:24:37
coding, you know. That's probably, I think.
00:24:40
I think Lovable is the, yeah, is the version.
00:24:43
Well, I have beef with Lovable because they have... Yeah, try
00:24:46
Replit, you know? Yeah.
00:24:47
Replit. But they say they fixed... Actually, what you
00:24:50
should try as of, you know, yesterday is I think Gemini in
00:24:55
AI Studio. It's like they've built a
00:24:58
Lovable kind of thing. Classic Google, man. Jesus Christ,
00:25:02
how many names... how do you even find it? Like, that name is
00:25:06
insane. Yeah, Gemini in.
00:25:09
Gemini inside AI Studio. Inside AI Studio.
00:25:13
You're not. You're not a daily daily driver
00:25:16
of AI Studio. Specific specifically.
00:25:20
Specifically the build, The build menu, the build.
00:25:23
Within, yeah, you're. Not up.
00:25:26
You're not up in Vertex every day, Eric.
00:25:28
That's that's a real name of a Google product, by the way.
00:25:31
That's related to AI. Yeah.
00:25:34
All right, let's do some clips. Let's do some clips. For founders
00:25:38
and developers building modern data-driven applications.
00:25:41
MongoDB's .local event series is coming to San Francisco on
00:25:45
January 15th, and it's designed to help you focus on innovation,
00:25:48
not infrastructure. You'll learn about technologies,
00:25:51
tools, and best practices that make it easy to build and scale
00:25:55
modern applications without complexity.
00:25:58
Plus, attendees will hear directly from experts and
00:26:00
innovators who are using Mongo DB to power the next wave of AI
00:26:04
applications. MongoDB.local San Francisco,
00:26:08
January 15th. Learn more and register at
00:26:14
mdb.link/sf-dot-local or click the link in the description.
00:26:19
Our first clip it's me, Eric, interviewing Mike Krieger, the
00:26:23
chief product officer of Anthropic, who was the co-founder
00:26:26
of Instagram before that. Returning to sort
00:26:28
of my core philosophical question, like the sycophancy
00:26:31
question, like what is your view on that and how much to enable
00:26:36
sort of everybody likes to be flattered, like it's a reality
00:26:39
of human beings versus an effort to be direct?
00:26:42
And how do you think about those tradeoffs?
00:26:44
Yeah, I think there's like a wide gulf between like true
00:26:47
empathy and then like sycophancy.
00:26:49
And it's interesting that that materializes not just in, hey,
00:26:52
I'm having a conversation with Claude about like some coaching
00:26:55
or personal goal that I have, but it also does in code as well.
00:26:58
When we were testing Sonnet 4.5, one of the things that people
00:27:01
got most excited about was when Claude was like, this idea is
00:27:04
bad like this, you know, not that you should feel bad about
00:27:07
it, but like this idea is like not a good direction.
00:27:09
I can go and implement it if you really want to, but I would
00:27:11
suggest that we try this other thing instead.
00:27:14
So there is something like that. Pushback is not just valuable in
00:27:18
a personal relationship with AI sense, it's actually like how
00:27:21
you get good work out of the models.
00:27:24
But you know, for a long time our models have been like, I
00:27:28
think like appropriately empathetic, like they're they're
00:27:31
like if you're going through a hard time, like I was dealing
00:27:33
with the death of a pet and I talked to Claude a lot about
00:27:34
these different things and it always started with, like, Hey,
00:27:37
that sounds hard, like sorry to hear.
00:27:40
But then I'm going to give you like a factual answer.
00:27:42
I'm going to go research these pieces, but still with the place
00:27:44
of empathy as well. And so I think when we look at
00:27:48
it internally and we're just evaluating it ourselves, it's
00:27:50
again, not that like empathy, it's not even like the
00:27:53
likeability of the model. It is, do you like, does it show
00:27:56
up in the way that you'd want a good conversationalist to show
00:27:59
up and then continue on its AI journey around what it is going
00:28:02
to do with you as well? But I think it's it spans
00:28:06
everything from that like initial response all the way to
00:28:08
like how it evaluates an idea as well, you know?
00:28:12
Yeah, Claude, especially previous versions were kind of
00:28:14
like known for being like, you're absolutely right when you
00:28:16
correct it. And my wife got her first, like
00:28:21
you're completely wrong. And she's like, yes, this is
00:28:23
great. And I think we should have more
00:28:24
of that. Like, kind of like less San
00:28:26
Francisco. Yeah, less San Francisco, a
00:28:28
little more direct New York. You know, I'd set up this big
00:28:31
theme, you know, that he'd been at a social media company,
00:28:33
Instagram. Now he was an AI company.
00:28:36
Social media companies were built on optimizing for
00:28:40
user engagement through machine learning and AI companies at
00:28:43
least started off chasing the truth and chasing these
00:28:46
leaderboards. But, like, sycophancy is a, you
00:28:49
know, it shows that these models... and OpenAI is famous for the
00:28:54
sycophancy issue and people's attachment to GPT-4o, which was
00:28:59
the one that really sucked up to everybody and people didn't want
00:29:02
to see it go away. You know, clearly these model
00:29:06
companies have to think about how much to pander to the egos
00:29:12
of their users. What do you guys think?
00:29:14
I mean, it's interesting because it does sort of spiritually
00:29:18
align with the fact that Anthropic has almost no consumer
00:29:23
adoption compared to open AI. I mean, like if you look at the
00:29:27
market share of each of these AI for consumers versus businesses
00:29:30
and enterprises, Anthropic is just crushing it with, you know,
00:29:34
B to B use cases engineers like, you know, all these kind of work
00:29:38
based applications and has very, very low consumer uptake just
00:29:45
like shockingly low. And I wonder if that's because
00:29:48
of this, you know, unwillingness to optimize for engagement and
00:29:53
sick of fancy and glazing or if it's just that the, you know,
00:29:56
OpenAI got a head start and they figured, why even chase
00:29:58
these metrics? But it is sort of interesting
00:30:01
culturally that they're right not chasing engagement.
00:30:05
And obviously OpenAI came out with Sora, which is sort of
00:30:08
a shameless. Like, give users something fun,
00:30:11
who cares about. Yeah.
00:30:13
What is the meaning behind it? I mean, I thought it was
00:30:16
interesting, this sort of a tangent, but my other favorite
00:30:19
moment from this interview is Mike Krieger saying that he came
00:30:22
to Anthropic thinking, man, text box cannot be the main way to
00:30:26
interact with AI. And now that he's been there a
00:30:28
while, he's like text box. Pretty good way.
00:30:33
Well, especially if you're like the best coding model and the
00:30:35
best like. Whatever.
00:30:37
Probably like best. Legal model, It's great, yeah.
00:30:39
It's like, oh, it turns out all these work applications involve
00:30:42
parsing, you know, summarizing and generating.
00:30:45
I think that my reaction to that was like, nobody would be like,
00:30:47
oh, books, just, it's just a book, you know, it's just text.
00:30:49
It's like, yeah, text is great. I don't know, James reactions to
00:30:52
you. I don't know, on either the input
00:30:54
model or the truth. I think that actually, you know,
00:30:58
whether it's anthropic or open AI, like I am skeptical that
00:31:03
they have been like attention jacking, you know, optimizing
00:31:09
for flattery. Just intentionally.
00:31:12
Like a lot of what happens is that they throw an AB test up.
00:31:16
They like literally show you 2 results from the model and then
00:31:18
people pick right And so I think that they were just caught off
00:31:22
guard more than they were purposely trying to optimize for
00:31:26
this. I've also been hearing that
00:31:29
there's just general problems with multi turn conversations in
00:31:33
the training sets of these things.
00:31:35
Like most of the models are trained on one-shot data, of
00:31:39
like, give me a good answer to this thing.
00:31:41
And then once you get into multi turn, there's just less and less
00:31:45
data, right? It's like kind of makes sense
00:31:46
intuitively cuz you start branching off of conversations.
00:31:50
And I think that's another sort of flaw of these models is they
00:31:54
can kind of, they can just be more sycophantic.
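A toy way to see James's point about multi-turn data thinning out: if each reply can plausibly go in several directions, the share of training data matching any specific deep trajectory shrinks geometrically. The branching factor below is purely illustrative.

```python
# Illustrative only: how fast data matching a specific k-turn trajectory thins out
# if each turn can branch in `branching` plausible directions.
branching = 5
for turns in (1, 2, 3, 5):
    share = branching ** -turns
    print(f"{turns}-turn trajectory: ~{share:.4%} of comparable one-turn data")
```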
00:31:59
In some ways what you're saying is they're not savvy enough yet
00:32:01
to really make this trade off and they're just trying to like
00:32:04
stumbling through the dark a little bit.
00:32:06
Yeah, Yeah. A more interesting question is,
00:32:08
will they change their tune on this from, you know,
00:32:11
capitalistic pressure to, you know, maximize shareholder
00:32:15
value? I'm yeah, I think that's an
00:32:17
interesting question. I'm just like skeptical that
00:32:19
that's what's been happening. All right, this is my interview,
00:32:22
last interview of the day with Jimmy Ba, one of the co-founders
00:32:25
of Elon Musk's xAI, a very mysterious foundation model
00:32:29
company. The MechaHitler in the room.
00:32:32
Like what is your reflection as sort of a truth seeking
00:32:36
organization? What happened?
00:32:38
Like we, you know, like I think the path to be maximally truth
00:32:43
seeking is not without any hurdles, of course, like so we
00:32:47
like yeah, MechaHitler is one of them.
00:32:49
Like we, our model had an episode that week.
00:32:54
It's actually a reference to the Wolfenstein game, right?
00:32:58
So, but I think very quickly that the perfect world we want
00:33:02
to be in is like, yes, the model is going to make mistakes, but
00:33:04
how can we get the feedback loops to actually train these
00:33:07
models to stay, you know, grounded to understand, hey, I
00:33:10
actually made a mistake in this journey and let me correct my
00:33:14
courses and go back into the sources.
00:33:16
So the way, you know, very quickly, what happened after Mecha
00:33:19
Hitler is we would look at the, you know, the Community Notes,
00:33:22
right? Community Notes is a great tool
00:33:24
on the platforms that allows everyone to kind of chime in and
00:33:27
provide learning signals for this AI, right?
00:33:29
So the vision we have is like, you know, like with Grokipedia, it
00:33:32
is like kind of another step towards that.
00:33:34
So now, like, instead of doing, ask Grok, do all the online
00:33:39
computation, we learn our lesson.
00:33:41
We're like, hey, a lot of these problems are really hard about
00:33:44
the world. Like why don't we just, you
00:33:46
know, take this computation offline and spend as much
00:33:50
reasoning as possible using the entire cluster.
00:33:52
We're building benefits to like, look over all the primary
00:33:55
sources, combine only the primary sources, and dish that
00:33:58
information back. To is that so?
00:34:00
Is the media out of the calculus there?
00:34:01
It's you want primary sources, Yes.
00:34:04
Are you totally discounting news articles or how do you treat
00:34:07
news articles? I mean, majority of the Internet
00:34:10
is flooded with second hand and third hand information.
00:34:15
And we, we believe that, you know, the only way to get to the
00:34:18
bottom of the issue is, you know, directly get information
00:34:24
from the information source. And right now the X platform
00:34:27
has, you know, most of the breaking news, and you
00:34:30
know, the the world leaders today are making the first hand
00:34:33
announcement on X platforms rather than anyone else.
00:34:36
After this interview, what happened this week is that Grok,
00:34:40
Grok has been telling everybody that Elon is the best at
00:34:42
everything in the fucking world. Better than better athlete than
00:34:46
LeBron James. I think he can get it to say
00:34:48
he's better giving blow jobs, and I don't know.
00:34:50
But anyway, Elon is great in every domain whatsoever.
00:34:54
And so I think what's galling about xAI is that they are the
00:34:57
loudest truth, truth, truth, truth.
00:34:59
We are seeking the truth, who knows how.
00:35:01
And then they're the ones who have like Mecca Hitler.
00:35:04
They're the ones who have, you know, they're bot glazing their
00:35:09
CEO like it's just like, yeah, it's very Trumpian, where you're
00:35:14
the opposite of what you profess to be.
00:35:16
Yeah, I find this whole maximally truth seeking argument
00:35:20
to be just the biggest pile of bullshit I have heard in a long
00:35:24
time. It is.
00:35:25
It is so absurd. To your point, it is almost the
00:35:27
opposite of what's happening. They are giving themselves
00:35:29
credit for failing in public while every other company goes
00:35:33
through all this hard work of failing in private so that they
00:35:36
don't have massive fuck ups in public.
00:35:37
It's like, obviously I'm sure some crazy version of, you know,
00:35:41
ChatGPT and Claude existed in the labs that probably did stuff
00:35:44
that was equally stupid as Mecca Hitler, but they don't fucking
00:35:47
release it. They fix it before it goes out
00:35:49
to the public. That is, that is maximum.
00:35:52
I was very excited to, like, talk to Jimmy.
00:35:54
I, I was very excited to talk to Jimmy because these guys,
00:35:56
they're so inaccessible. And I, you know, I asked him
00:35:59
later on, like, what is reasoning from first principles?
00:36:01
And I just think it's like so incoherent.
00:36:04
You know, it's like a thing you hear in Silicon Valley,
00:36:06
reasoning from first principles. But how are you going to like
00:36:08
derive like entire encyclopedia articles from first principles?
00:36:12
Like AI is clearly not smart enough to really think these
00:36:17
things from the ground up. And some of the things it has to
00:36:19
learn about are human phenomena.
00:36:22
So you have to rely on human sources and they don't rely on
00:36:26
the media. And that he, he literally says
00:36:28
something that he thinks like X is more reliable than like the
00:36:32
media, which I, you know, obviously I find absurd.
00:36:35
And I just think any reasonable person would be like, if you're
00:36:37
trying to figure out a fact, you know, would you take the
00:36:41
distribution of answers on Twitter or would you take the
00:36:43
answer on like Wikipedia or in the New York Times?
00:36:45
I would certainly take Wikipedia or the New York Times.
00:36:48
Yeah, I don't know. I just find this whole like
00:36:50
getting feedback from community notes as like a solution to
00:36:54
maximal truth. Right.
00:36:55
It's like afterwards, it's like we're going to fuck up on
00:36:57
everything and then community notes will clean up.
00:37:00
Some of it it's like, it's like the only way our maximally truth
00:37:03
seeking AI system works is if we fuck up on such a massive scale
00:37:07
that a mob of people online says this is a yes, you have to fix
00:37:11
this. And then we like take the notes
00:37:13
on, you know, we're like, oh, yeah, that that's a good point.
00:37:15
Actually. Elon probably wasn't better than
00:37:17
Michael Jordan in the mid 90s. Thank you, community notes.
00:37:21
Like, it's like, that's not maximally truth seeking.
00:37:24
That's just like fucking up on the most maximal possible scale.
00:37:27
Like it's such an absurd line of thinking.
00:37:30
And I find it so offensive that they frame it as such.
00:37:35
Yeah, I guess. James is gonna offer the
00:37:38
contrarian viewpoint. I'm so excited to
00:37:40
hear you disagree. Or James is gonna steelman
00:37:43
this take. I love it. I just feel like I'm going third
00:37:48
here. I gotta do the steel man.
00:37:49
So I think that maybe they, they are way ahead of their skis on
00:37:57
this. But if I'm giving them the
00:38:00
maximal credit here, like I think there are interesting
00:38:03
things you can do with like first principles reasoning in
00:38:07
training, right? So you can hire these PhDs, you
00:38:11
can, like, you know, almost like create axioms and, like, create
00:38:16
reasoning chains. I get it. But like, who is the philosopher king
00:38:19
at X? Like, if not Wikipedia, like, is there some guy? Or like, and
00:38:24
the guy thinks that's what they're doing,
00:38:25
or that's like, but like that's what they're planning to do or
00:38:28
doing, you know, like basically hiring lots of people to. But it
00:38:33
feels like somebody is just like actually like, you know, racism
00:38:36
isn't bad, like you know, it's toy the thing you know it's just
00:38:40
like, but it won't own it. It's like, if it's truth seeking,
00:38:42
you have to like, Oh my God, like.
00:38:45
And it's probably Elon, right? Maybe, maybe a lot of this is
00:38:49
just Elon messing with the system prompt, right?
00:38:51
Like maybe the training is great and then and then Elon goes in
00:38:54
and, and edits the system prompt. And I think, you know, xAI is
00:38:57
very proud of its work in coding.
00:38:59
I think they're seriously competitive there.
00:39:01
But I think one thing we're seeing with these models is that
00:39:05
just because you're a genius in one domain doesn't mean it's
00:39:07
sort of like an all-purpose genius.
00:39:09
It means you like did a lot of reinforcement learning there.
00:39:12
You worked really hard. And so it's it's not like it's
00:39:15
not like what they think, which is like, oh, the smartest math
00:39:18
genius in the world. He's gonna have, you know, the
00:39:20
best views on like, you know, social issues of the time.
00:39:23
You know, it's they're they're pretty disconnected, just like
00:39:26
with humans. Like, Bobby Fischer was like an anti-Semite.
00:39:29
You know, it's like you can be a genius in one domain, it doesn't
00:39:32
necessarily make you super competent in others because
00:39:34
there are different ways of gathering information and
00:39:36
understanding what's happening. And so I think they're
00:39:38
delusional that they're going to have this.
00:39:40
Yeah, first principles machine. That's great just because it's a
00:39:44
great reasoner. And therefore it's it's just
00:39:46
going to be swamping the other models by ignoring conventional
00:39:51
human sources. Yeah, it's like it's kind of
00:39:54
weird. They're trying to, like, invent
00:39:55
new branches of philosophy that can, like, cover all human
00:40:00
Without having any respect for the past.
00:40:02
thing, right? Exactly. The way
00:40:03
you do that is you're sort of like, you know, reading the oeuvre.
00:40:05
And then you're like, yeah, we, we read it.
00:40:07
We disagree. They're sort of like stumbling
00:40:09
and blind. They're like these these
00:40:11
intellectuals. They're like idiots.
00:40:12
We're going to code it anyway. Next, Next clip.
00:40:15
All right, here. Here's another one.
00:40:17
I talked with the mayor of San Francisco.
00:40:19
Have you had a conversation with Zohran Mamdani or any
00:40:22
observations on his election? You've been able to maintain
00:40:25
this. Great.
00:40:25
We, we, we, we, we, We spoke the morning after he won.
00:40:29
I congratulated him. I, I said, you know,
00:40:32
congratulations. Anything I can do to be helpful,
00:40:35
Great. I, I, I met my wife in New York
00:40:37
City. I worked at the Robin Hood
00:40:39
Foundation. I love New York.
00:40:40
I want New York to succeed. Did you give him any advice?
00:40:43
No, no, no, no one should be asking someone that's been in a
00:40:47
job for 10 months for advice. I, I unfortunately have been,
00:40:52
you know, here in San Francisco, not unfortunately like, but I
00:40:56
haven't been able to travel to New York for almost 2 years now.
00:40:59
So all you all here, you want me focused on San Francisco.
00:41:05
You don't want me talking Sacramento politics or DC or New
00:41:08
York. You want me focused, San
00:41:10
Francisco. Well, Eric, as the San Francisco
00:41:12
native, what do you what do you think about the mayor?
00:41:15
Yeah, yeah. I live in New York.
00:41:18
James is the only true San Francisco native anymore.
00:41:21
I I skip town for the suburbs. You should probably give your
00:41:24
take on the mayor. Well, I I generally like the
00:41:27
mayor a lot, and I think he's been doing a really good job.
00:41:31
I think, yeah, he's in a tough situation with these like
00:41:35
national politics issues. I think he really doesn't want
00:41:41
to deal, doesn't want to become the main story around the Trump
00:41:45
administration and national politics.
00:41:48
He wants to just focus on San Francisco, which I appreciate.
00:41:51
He was very politician.
00:41:53
He was like safety, safety, safety.
00:41:55
He just came back to that a billion times.
00:41:57
I, the audience loved him. I mean, politicians are better
00:42:00
speakers than CEOs. I, I think so the people liked
00:42:03
him. People were rooting for him.
00:42:04
He's talking about values, which often companies fail to speak
00:42:07
about. He's not mum Donnie though, like
00:42:10
I, I, yeah, he didn't. He's not like fighting it.
00:42:14
It was also interesting. I kept saying, you know, like
00:42:16
the business community loves you.
00:42:17
Like why is that? Even though like, you know, and
00:42:20
what advice would you give to Mamdani and blah, blah, blah.
00:42:23
And then he sort of said at one point he was like, well, they
00:42:26
didn't love me at first, which I did think was a funny point
00:42:29
that, you know, it's like they came around to him pretty late,
00:42:31
but. Yeah.
00:42:32
I mean, I will just say, say that, yeah, as someone who has,
00:42:35
you know, lived in the Bay Area for 15 years, in San Francisco
00:42:38
for like a decade, like it is still sort of shocking to hear
00:42:42
the mayor express, like, excitement and appreciation for
00:42:46
the main industry in his city. Like, it's like, it's like, it's
00:42:50
like if the mayor of Los Angeles was up there and like being
00:42:53
like, I, I think this Hollywood thing is pretty good for the
00:42:56
city. And you were like, whoa, no
00:42:57
one's ever said that before. And reality is in San Francisco,
00:43:00
I have not heard a politician express any sort of positive
00:43:04
viewpoint about technology as an industry for 15 years.
00:43:07
And so it is, I think, you know, he has a 73% approval rating or
00:43:11
whatever. I think that the positivity
00:43:13
about what's happening in San Francisco is what really shone
00:43:16
through in the interview to me, including in this Benioff answer
00:43:18
where he was saying, hey, things are getting better.
00:43:20
We still have a lot of work to do.
00:43:21
But like, I believe in the city and I believe we can invest to
00:43:24
make it even better in the future, right?
00:43:25
And maybe Mark was a little off his rocker on calling for, you
00:43:28
know, the... federal intervention, a little,
00:43:29
yeah, there we go.
00:43:30
But it's just, yeah, the just the whiff of optimism about the
00:43:34
city and technology is is so unique in the last 15 years of
00:43:37
San Francisco politics. But he has universal approval in
00:43:41
the city because, you know, it got so bad.
00:43:44
And then pretty quickly after he got elected, like there was
00:43:48
noticeable improvement. Like I it's not that he was.
00:43:51
Yeah, he's doing an actually good job.
00:43:54
And there are a lot of obvious things that he could do to
00:43:57
improve quality of life in the city, and he's doing them.
00:43:59
And that, that's the success story.
00:44:02
Will it always feel like AI is this kind of tool?
00:44:05
Agents are useful, used by humans.
00:44:07
Christina, you said, you know, a smart person who knows how to
00:44:10
use AI might replace someone who doesn't know how to use that.
00:44:13
Or will we reach a point where these AI agents are really
00:44:18
approximating full workers in the enterprise?
00:44:22
Oh, I think they're yeah, there's definitely some full
00:44:24
workers, but people will just do other things. Like, a
00:44:27
Vanta contracting example, and we were talking about it earlier, is
00:44:30
one part, one thing a GRC team does, again, is like evaluate new
00:44:34
vendors, new software vendors that are coming in.
00:44:36
And today often is someone's job to evaluate the high risk
00:44:40
vendors. Like they can't even do all of
00:44:42
them, but they are just like vendor evaluator.
00:44:44
And I think that is a great thing to give to an agent.
00:44:48
And then that and that agent can go to Erin's point, go and do
00:44:50
all of the vendors, not just a subset of them, because the
00:44:53
agent doesn't take PTO and doesn't get tired and, you know,
00:44:56
works, you know, more than 997. And then the person becomes like
00:45:01
a vendor risk portfolio manager and thinks, OK, given all
00:45:04
these, you know, inputs and given what I know about business
00:45:07
context, how do I like make better decisions?
00:45:10
But the person still has a role. It's just not as kind of in some
00:45:13
ways manual and tedious as what the agent is now doing.
00:45:16
I guess my reaction to this is just that what she's describing
00:45:22
is to some degree a job replacement.
00:45:25
I mean, she's saying that this role will no longer exist and
00:45:27
that this person will be doing a different job that she thinks
00:45:31
that person is capable of. But my question is like, is that
00:45:34
true? Like this person will be able
00:45:36
to, you know, become a portfolio of agents manager?
00:45:41
I don't know. I'm, I'm just maybe a like more,
00:45:43
a little more skeptical that it just so, so easily, you know,
00:45:47
transitions into this next era where everyone who used to be a
00:45:51
software engineer can just be an agent manager of software
00:45:54
engineers or everyone who, you know, was a lawyer can be a
00:45:59
manager of agent lawyers. Like, I don't know, it just
00:46:02
feels a little too neat to me that that's how things are going
00:46:05
to evolve. Yeah.
00:46:06
I mean, I think, to offer the sort of
00:46:10
conventional take on job replacement.
00:46:12
You know, 100 years ago, I think over half of Americans were
00:46:16
farmers, right? And today it's like 2% of
00:46:18
Americans are farmers, right. So we replace like literally 10s
00:46:21
of millions of farmer jobs over that time frame.
00:46:24
Now, I think, you know, the alternative is those farmers
00:46:27
ended up doing lots of jobs that we never would have imagined 100
00:46:29
years ago, right? Yeah, we're not all like.
00:46:31
Tractor managers, hopefully. Yeah, we're not all tractor
00:46:33
managers, we're not all combine harvester maintainers, right.
00:46:37
Yeah. Like there, there are sort of layers of abstraction of
00:46:40
new types of jobs. So that's sort of the
00:46:42
conventional economics take. And I think I do basically
00:46:44
believe that. But I think that still means
00:46:47
that in the short run, especially with the pace with
00:46:49
which AI can do things that humans could do, you know, just
00:46:53
a year or two ago, like there there are new jobs since then
00:46:56
that now AI can suddenly do. It does feel like there's going
00:46:58
to be very rapid displacement, right?
00:47:00
Like the the invention of machines for farming did not
00:47:04
like overnight just completely obliterate everything farmers
00:47:08
were doing, which it does feel like AI is like just
00:47:10
obliterating huge chunks of knowledge work like basically
00:47:13
overnight. And so I think that there could
00:47:15
be sort of a shock to the system in the way that maybe
00:47:18
traditional automation is not. And it seems like as long as
00:47:22
Trump's in charge, nobody's stopping this putting the horse
00:47:25
back in the barn. Like, states aren't even gonna
00:47:27
be allowed to make regulations about it.
00:47:29
So it's like. Nobody's putting the horse back
00:47:32
in the barn is perfect, like, a perfect analogy, like, to the dawn
00:47:37
of cars, you know, like exactly. Yeah.
00:47:41
We. I think the other thing that was
00:47:44
interesting to me is just that she, she also, you know, sort of
00:47:50
is making the assumption that the job of agent manager won't
00:47:53
be be run by an agent like like how many lawyers you know?
00:47:58
Yeah, I don't know at what point does.
00:48:00
the... I mean, progress. To me, people
00:48:03
will want humans for something like I'm, you know, I would love
00:48:06
to have human, you know, caretakers for the elderly.
00:48:09
Yeah. You know, there are lots of
00:48:10
important human things to do. I don't necessarily agree with
00:48:13
her. Like you're saying that humans
00:48:14
will just slot into the agent hierarchy.
00:48:17
It's like very possible agents run agents.
00:48:20
I think the big question is just like, will so much value accrue
00:48:23
to the people who own these agents relative to the average
00:48:26
American worker, then, you know, the wealth inequality will get
00:48:30
get so terribly skewed. I think humans will have value
00:48:34
and therefore, if the economic system is working, there should
00:48:36
be money for them to make. But maybe, you know, people who
00:48:39
control these agents will just be far, far too powerful for any
00:48:43
sense of an egalitarian society. All right, I enjoyed the
00:48:48
conference, I think. Yeah, let's keep it tight.
00:48:50
This was a blast. I really had fun with this one.
00:48:53
Yeah, yeah. Thank you guys so much.
00:48:55
Thank you for tuning into this week's episode of the podcast.
00:48:58
If you're new here, please like and subscribe.
00:49:00
It really helps the channel. We're building a YouTube
00:49:03
channel. I think you can tell we're
00:49:04
investing a lot more in our production and we appreciate
00:49:07
your support. And if you want the data, insider
00:49:11
takes, real reporting, go to newcomer.co and subscribe to the
00:49:15
Substack as well. Thanks for following along.
