The Cerebral Valley Podcast: Artificial Intelligence Becomes Reality
Newcomer Pod · October 10, 2023 · 01:01:40 · 56.47 MB

In the past 12 months, it has felt like “AI” transformed from a pair of letters that companies affixed to their latest product announcements to get some extra marketing luster to the shorthand for a genuine technology revolution.

ChatGPT, Dall-E, Midjourney, and more showed the world what artificial intelligence is now capable of doing.

Then the funding started pouring in for every startup that had anything to do with those two letters. Every venture firm needed a bet on its own foundation model, and every startup needed to get its hands on Nvidia’s H100s to train one.

Ahead of the 2nd Cerebral Valley AI Summit on Nov. 15, I wanted to really take stock of how we got here. So I teamed up with my conference co-hosts Max Child and James Wilsterman to bring you a six-part podcast series on the rise of generative artificial intelligence.

You can apply to attend the Cerebral Valley AI Summit here. Applications close Oct. 16.

On the series’ first episode we reflect on how generative artificial intelligence and large language models took Silicon Valley by storm.

With the help of ChatGPT, we consider the top research papers that brought us here, the most important historic milestones along the journey, the key artificial intelligence products on the market today, and how artificial intelligence is already impacting our lives.

The show is fun and lighthearted. I hope it’s a little more accessible than the usual fodder on the Newcomer podcast. For instance, on a future Cerebral Valley episode, we’re going to do a draft pick of what we think will be the most valuable AI startups. On upcoming episodes, I interview guests like Daniel H. Wilson — author of How to Survive a Robot Uprising, Where's My Jetpack? and How to Build a Robot Army — and DoNotPay CEO Joshua Browder.

If you’ve never listened to the Newcomer podcast before, this is a good time to give it a shot. Die-hard podcast listeners will remember Max and James, who are the founders of the AI voice games company Volley, from my January episode on augmented reality.

Whether you can make it to Cerebral Valley in person or not, my hope is that this series is a solid primer on what exactly has been going on in the business of artificial intelligence. I follow this stuff super closely, and until we got organized for this podcast series there was so much that I hadn’t learned.

I know most of you won’t be able to come to the conference in person, but there will be a virtual conference in this newsletter. We will publish recordings from the summit on our YouTube channel and send out some of our favorites over the podcast feed. So this is your lively refresher on all the crazy stuff that happened among Silicon Valley artificial intelligence startups this year.

Give it a listen.

Apply to attend the Cerebral Valley AI Summit here. Applications close Oct. 16.

P.S. I’m on my honeymoon right now in Japan. I was working frantically to record these episodes before I left. My chief of staff Riley Konsella is sending the episodes out for me while I’m gone. If you need anything while I’m away, you should email Riley.

Thanks in advance for being understanding that this newsletter is slowing down for my honeymoon. I’m going to dedicate myself to relaxing over the next two weeks so that I come back hungrier than ever.



Get full access to Newcomer at www.newcomer.co/subscribe

00:00:10
Hey, it's Eric Newcomer. Welcome to the newcomer podcast

00:00:13
Cerebral Valley Edition. It has been an insane year in

00:00:18
AI. We started off with OpenAI

00:00:21
raising $10 billion from Microsoft, and it only got

00:00:25
wilder. The technology, the

00:00:27
improvements, the papers, and of course tons of money.

00:00:32
I'm hosting, with my friends Max Child and James Wilsterman, an

00:00:36
AI conference, Cerebral Valley, on November 15th.

00:00:39
Hello, Hello. Hey, glad to be back on the

00:00:41
newcomer podcast. We really want to take space on

00:00:45
this podcast to really take stock of how we got here because

00:00:49
even covering it all so closely, it's too much to keep track

00:00:53
of. So we're doing a six part series

00:00:57
starting off with sort of the timeline, the history, what

00:01:01
happened and then getting into a lot of fun topics like the

00:01:05
dystopian sci-fi fantasies that really are coloring how

00:01:09
serious people think about generative AI companies today,

00:01:13
digging into the potential for entertainment, the chips

00:01:16
business, and then how key Nvidia, a gaming company, is becoming.

00:01:21
All this in the podcast. One of my favorite parts: Max,

00:01:25
James, and I have a draft pick of the key startups in the space.

00:01:29
We analyze Apple, Google, and Amazon's position here, and so

00:01:34
it's a mix of, like, the fun, the dystopia, the money, the

00:01:37
technology. I have some interviews along the

00:01:40
way and all of it is getting you ready for the Cerebral Valley

00:01:44
conference on November 15th. Even if you can't go, you can

00:01:47
apply for a ticket at cerebralvalleysummit.com, but even if

00:01:50
you can't go, I'll be covering it here in the

00:01:52
newsletter, newcomer.co. We post the videos both in the

00:01:55
newsletter and our YouTube channel. We'll do highlights and

00:02:00
some of the podcast. Follow along on Newcomer, and

00:02:03
this series will get you ready.

00:02:06
So six episodes we start off just taking stock of the journey

00:02:12
here, from the papers to the milestones.

00:02:16
Max and James, at the core of it, like, it feels like, you

00:02:22
know, you are the cofounders of Volley, a voice games company,

00:02:26
so you're dealing with talking, you know, people shouting at

00:02:29
their Alexas, playing games. And I think reflecting on this,

00:02:33
it's insane how much talking with AI feels like it's at the

00:02:39
heart of all of this. You know, the Turing test, sort

00:02:42
of figuring out if the one holding a conversation with you is

00:02:47
a computer or a human. You know, there's ELIZA in the

00:02:52
60s that was sort of a prototype chatbot.

00:02:56
It feels like this sort of need to talk to our computers has

00:03:00
driven so much of the excitement around artificial intelligence

00:03:05
for so many years. I mean, Steve Jobs in 1984, when

00:03:09
he pulled the Macintosh out of the bag on stage, the first

00:03:13
thing it did was say hello. It's nice to be out of that bag.

00:03:16
And he's like, look, it talks just like a human.

00:03:18
I mean, literally the pitch for the Macintosh in 1984 was

00:03:21
that it was like an AI or was pretending to be a character,

00:03:24
right? There's a ton of science

00:03:26
fiction. You have this Star Trek computer

00:03:28
or robot, you know, assistant helper like HAL or

00:03:32
something that you can just talk to naturally, and that has

00:03:36
obviously been a dream for a long time.

00:03:37
Yes, to your point, there have been 50-60 years of these hype cycles

00:03:41
around AI and what that means has sort of evolved over time.

00:03:45
I'm just going to tick through. OK, we said Turing, 1950.

00:03:49
We've got, like, the first artificial neural network in 1951,

00:03:53
a checkers program in 1952, "artificial intelligence"

00:03:57
coined in '56, the ELIZA chatbot holding human

00:04:01
conversation in '66. And then we're going to sort of

00:04:05
jump ahead because I think there's sort of like a pullback

00:04:07
of all the hype, sci-fi does not become reality. 1997 is

00:04:12
a key win. Do you know what happens in '97?

00:04:17
Any guesses? Are you going to say, isn't it Deep

00:04:19
Blue, Kasparov, '96? Or, yeah, it's '97?

00:04:22
Yeah. OK.

00:04:22
All right. Who wins, to be clear.

00:04:24
Deep Blue, right. Yeah, exactly.

00:04:26
Yeah. Yeah.

00:04:26
The first defeat of a reigning world chess champion.

00:04:30
Exactly. There's another important game

00:04:33
in much more recent memory, 2016, which is another landmark.

00:04:37
Remember? Yeah, exactly.

00:04:39
AlphaGo. OK, yeah, yeah, yeah. That's DeepMind,

00:04:41
you know, I forget if they were owned by Alphabet at the

00:04:44
time, but you know. Is this the IBM Watson

00:04:47
era, or do we get... Yeah. Does Ken Jennings losing on

00:04:50
Jeopardy not count? Oh yeah, you guys love Jeopardy.

00:04:53
You have a part. When is that one?

00:04:55
What? Do you know what year that is?

00:04:57
I don't remember. '12, '13? I'm just making

00:04:59
that up. But it is funny because at that time it seemed like IBM was

00:05:03
like doing this amazing artificial

00:05:06
intelligence development that could compete on Jeopardy.

00:05:12
And I remember there being like a lot of controversy at the time

00:05:14
of like what data sources it had access to during the game of

00:05:18
like whether it was just essentially like reading out of

00:05:21
a database of answers. Yeah.

00:05:25
But anyway, that kind of ties to some things today, I

00:05:29
think, with AI benchmarking: if ChatGPT can pass,

00:05:35
you know, the LSATs or something, like, does it... because it has the,

00:05:38
you know... does it have the answers, or

00:05:40
Yeah, doesn't it? Yeah, right, right, right.

00:05:42
So, just like, where it can do math problems that it can

00:05:44
see online, but it can't, like, deduce how, you know, but then

00:05:48
it'll get some terribly wrong.

00:05:50
Yeah. Because it doesn't necessarily

00:05:51
understand the logic. It understands how to pull sort

00:05:55
of very relevant... Like, almost like how

00:05:56
the math problem is formatted matters, right?

00:05:58
Yeah, right. OK, 2017 now we're sort of super

00:06:04
recent, getting into... like, things start to speed up.

00:06:09
What would many consider to be the

00:06:12
core paper leading to this current moment in generative AI?

00:06:17
Attention Is All You Need, yes. The transformer paper.

00:06:21
Every person who was like an author on that paper has like a

00:06:25
huge company or like has raised a bunch of money.

00:06:27
I mean Cohere, I think the CEO of that company was, like, sort of

00:06:31
a junior person at Google, you know, like who got on the paper

00:06:35
and now has a very highly valued sort of foundation model

00:06:39
company targeted at businesses. Yeah, so before this

00:06:44
conversation, I resubscribed to ChatGPT.

00:06:48
I had paid for a while, then I sort of got

00:06:51
tired of it, but I figured we were going to have this

00:06:53
conversation. So I was catching up with

00:06:57
ChatGPT, and I had it... You guys were catching up like

00:07:01
old friends. No, I didn't feel.

00:07:03
I try. Whenever I start a conversation

00:07:05
with ChatGPT, I try to be like, hey, like, we've talked a lot,

00:07:08
you know, we have this history. It's always sad that ChatGPT

00:07:11
doesn't remember, like, you played, like, you know, role-playing-type

00:07:15
games, you know. I have a question about

00:07:17
that. Yeah.

00:07:18
Is that going to change relatively soon?

00:07:20
where, you know, ChatGPT itself

00:07:24
will just become more of a personalized assistant to me,

00:07:27
right? Right.

00:07:28
That's why I want memory. And, well... I want to...

00:07:32
Anyway, I bring this up at this point just to say that I'm being

00:07:36
lazy, and ChatGPT gave me a summary of Attention Is All You

00:07:40
Need. So I'm scrolling... for a 12th

00:07:42
grader. I was like, Oh yeah, I feel like

00:07:45
that's the audience level. We can... I

00:07:46
would... I would do a five-year-old. I would take the five-year-old

00:07:49
explanation. Here's the Attention Is All You

00:07:52
Need paper for a 12th grader. This paper introduced a new way

00:07:55
for computers to process language.

00:07:57
Instead of reading sentences word by word like in traditional

00:08:00
methods, it let the computer focus or pay attention to

00:08:05
different parts of a sentence all at once, making it more

00:08:07
efficient. So yeah, it can sort of grab a

00:08:10
bunch of information in sort of a parallel process, but it's

00:08:15
super confusing. The non-12th-grade version is

00:08:17
like, matrices, James. Maybe, I don't know.

00:08:20
Have you tried to figure it out?

00:08:22
I've tried to understand this as well.

00:08:23
It's a little bit above my biological intelligence, but...

00:08:28
I tried to read it but I'm like, man, I churned out of linear

00:08:31
algebra like 15 years ago, so this is pretty rough, yeah.
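For anyone who does want the non-12th-grade version, the "matrices" boil down to one small operation. Here is a minimal NumPy sketch of the scaled dot-product attention the paper describes, run on made-up toy inputs (the sentence, embeddings, and sizes are all hypothetical stand-ins):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need":
    every position looks at every other position in one shot,
    rather than reading the sequence strictly word by word."""
    d_k = Q.shape[-1]
    # How strongly each query matches each key, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each row of scores into weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted blend of all the value vectors.
    return weights @ V, weights

# Toy "sentence" of 3 tokens with 4-dimensional embeddings (random stand-ins).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, weights = attention(x, x, x)  # self-attention: Q, K, V all come from x
print(out.shape)                   # (3, 4)
print(weights.sum(axis=-1))        # each row sums to 1
```

That parallelism is the whole trick: one matrix multiply compares every token with every other token at once, which is what made the architecture so much faster to train than reading word by word.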

00:08:35
The other... before I go back... the other ChatGPT-flagged most

00:08:41
important papers: Generative Adversarial Nets, from 2014,

00:08:46
which I know people talk about. Yeah, yeah.

00:08:48
So, sort of, like, the systems are competing with each other

00:08:52
to sort of see... I think it's like, yeah, pairing

00:08:56
off different versions of the model to kind of play against

00:09:01
themselves, right? I think that in this context, the

00:09:03
generator tries to produce data while the discriminator attempts

00:09:07
to distinguish between real and generated data.
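That generator-versus-discriminator tug-of-war can be sketched numerically. This is a hypothetical toy setup, not the paper's actual experiments: "real" data is a 1-D Gaussian, the fakes come from an untrained linear generator, and a tiny logistic-regression discriminator is trained by plain gradient descent to tell them apart; the generator's standard loss, -log D(G(z)), is then what it would try to minimize in turn.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Real" data: samples from N(4, 1). The toy generator G(z) = a*z + b
# maps noise to fakes; it is left untrained here, so the fakes sit near 0.
x_real = rng.normal(4.0, 1.0, 256)
a, b = 1.0, 0.0
x_fake = a * rng.normal(size=256) + b

# Toy discriminator D(x) = sigmoid(w*x + c), trained by plain gradient
# descent to output 1 for real samples and 0 for generated ones.
w, c = 0.0, 0.0
for _ in range(500):
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    # Gradients of the discriminator loss -log D(real) - log(1 - D(fake)).
    w -= 0.05 * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= 0.05 * np.mean(-(1 - d_real) + d_fake)

# The discriminator now separates the two distributions ...
print(sigmoid(w * x_real + c).mean())  # close to 1
print(sigmoid(w * x_fake + c).mean())  # close to 0
# ... and the generator's own objective, -log D(G(z)), is exactly what a
# full GAN would next push down by updating a and b: the two sides compete.
gen_loss = -np.log(sigmoid(w * x_fake + c)).mean()
print(gen_loss > 0)  # True
```

In a real GAN both sides update in alternation, each making the other's job harder, which is the adversarial dynamic the hosts are describing.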

00:09:10
Another top paper, that's basically the AlphaGo paper,

00:09:13
like Mastering Chess; Attention Is All You Need; Sequence to

00:09:17
Sequence Learning with Neural Networks, in 2014; and Variational

00:09:23
Autoencoders, in 2013. So there's a string of

00:09:28
sort of papers that are coming out that are sort of laying the

00:09:31
groundwork for new techniques that are reaching us.

00:09:35
And I would say in 2017, none of us was really clued in that

00:09:43
this was gonna be happening, right?

00:09:44
It was sort of like we were riding the Uber wave.

00:09:47
We were sort of in the come down from the Unicorn valuations.

00:09:51
SaaS was burning super hot, right?

00:09:53
I mean, what was sort of the AI enthusiasm then?

00:09:56
I do think, yeah, I think people were paying attention to Go.

00:10:00
I think OpenAI was working on building

00:10:03
Dota gameplay. Like, they weren't using transformer models, and they

00:10:08
weren't really using, you know, text next-token prediction.

00:10:12
It was like more about can you create these sort of agents

00:10:17
around particular vertical, you know, use cases or

00:10:23
skill sets, right. Can you create the best Go

00:10:25
player in the world? Can you create the best Dota

00:10:28
player in the world? Kind of leading to...

00:10:31
With the theory that that is one path to get to

00:10:34
AGI. And I think DeepMind honestly

00:10:37
would still make that argument that like specific approaches

00:10:40
versus general are... Right.

00:10:42
But then, with... Yeah, with transformers and with

00:10:46
GPT models, you started to see that actually maybe

00:10:52
there's more data, throw more chips at it, yeah.

00:10:55
The more generalizable model can often be better than training a

00:11:00
discrete model with a lot less data and compute.

00:11:04
The one thing that is happening around 2016, 2017 in AI that I

00:11:09
think got a lot of attention. You have to guess what I'm gonna

00:11:13
say. Self-driving cars, right. There

00:11:15
was. We did experience a ton of hype

00:11:18
around self driving cars, which in some ways have been cordoned

00:11:23
off from this generative AI hype cycle, even though Cruise is now

00:11:29
driving around San Francisco. Yeah, I mean, I think this just

00:11:35
gets to this important era that we're in

00:11:38
right now. There were significant breakthroughs using

00:11:42
neural networks that showed people what was possible in a

00:11:46
lot of fields. Maybe Google for a while claimed

00:11:51
right, like neural networks were improving their data center

00:11:55
efficiency and saving them money on energy, like that was

00:12:00
happening. People were using neural

00:12:02
networks in drug discovery and all kinds of areas.

00:12:04
They were just these very targeted models that were

00:12:08
trained for those purposes and then you know now we're in this

00:12:11
I think, generalized-model era of large language models,

00:12:15
essentially and generative AI. Well, and the other thing I feel

00:12:18
like that was happening was like the Facebook News Feed, which

00:12:22
was like probably the most popular, yeah, tech product in

00:12:24
the world at that moment. Or if you include Instagram,

00:12:27
right, was like powered by really powerful, you know,

00:12:30
machine learning, deep learning algorithms, right.

00:12:33
And so, I think we were all very aware

00:12:34
that you know if you have a really kick ass machine learning

00:12:38
model and you apply it to the right question, it's the best

00:12:41
product you know there is or it's one of the best products

00:12:44
there is, right. So it was like we all believe in

00:12:47
this technology, we just didn't necessarily believe that it was

00:12:50
going to take this huge step change anytime soon.

00:12:53
I mean like James and I made a bet actually on self driving

00:12:56
cars in 2016, whether or not there would be any cars with

00:13:00
self driving features available for public consumption in 2017,

00:13:05
and James was... This was like a $50 bet.

00:13:06
James was like, hell yeah, 100% self-driving is

00:13:10
basically here. And I was like, I'm pretty

00:13:12
skeptical. I don't believe it.

00:13:13
And then I think James technically won the bet, or

00:13:16
was it... someone had, like, lane assist or something, and we

00:13:20
decided under the parameters of the bet that counted.

00:13:22
I don't know why did you, what was the technicality you got

00:13:24
away with, James? I'd have to...

00:13:26
I'd have to rethink it. But I believe it was Tesla and,

00:13:30
you know, some first version of Autopilot.

00:13:32
Version of Autopilot, yeah. Yeah, yeah, but.

00:13:34
Right. Anyway, we believe in this

00:13:35
stuff. It was just like we didn't

00:13:37
believe it was going to be this like you know 1000 times better

00:13:41
overnight thing, which I think we can all sort of attest is

00:13:44
happening right now with with ChatGPT where you're like Oh

00:13:47
yeah, this is like 1000 times better than the previous version

00:13:50
of this product. OK.

00:13:52
So continuing my timeline, because things sort of speed up,

00:13:56
2018 open eye releases a version of GPT generative Pre trained

00:14:03
transformer, this large language model, I wouldn't say that was a

00:14:09
big moment. It was pretty GPT Two was a big

00:14:13
moment, right? I don't know.

00:14:15
Yeah, I mean, I don't know what. It was, I think this was like.

00:14:19
More of a cultural like insiders, Sort of.

00:14:22
Yeah, it was more of a nerd insider tech, you know,

00:14:25
excitement period. But it had definitely did not

00:14:28
reach any mainstream kind of like hype cycle or anything.

00:14:31
But yeah, like internally of all, like we're all of our

00:14:33
engineers excited about GPD 2 and playing with it.

00:14:36
Yes, they were, because it was just cool.

00:14:38
It was, you know, fun to play with.

00:14:39
It didn't. We didn't find any applications

00:14:42
for it at the time, but it was a breakthrough.

00:14:45
I would say 2020 is GPD 3, that's things are heating up and

00:14:50
then 2021 is Dolly, which is sort of the image generation.

00:14:55
But I think stuff really starts getting crazy last year in 2022,

00:15:00
right? Yeah.

00:15:01
First, sort of the Canary in the coal mine, on June 11th, there

00:15:04
was the article Google engineer who thinks companies AI has come

00:15:09
to life, right. The people inside the companies

00:15:12
were like, I don't know, this stuff we're seeing, it's crazy,

00:15:15
right? Like.

00:15:16
When was that? When was that Was June 11th 2022

00:15:19
Okay preach And then this was the Lambda that was Lambda.

00:15:23
So that was inside Google. Then July 2022, Mid Journey Open

00:15:28
beta, July 2022, Dolly Two. Those were huge.

00:15:33
I feel like those went viral. All of a sudden people were

00:15:35
making actually cool images that they were posting everywhere to

00:15:38
me. I mean, do you guys agreed that

00:15:40
was sort of that really sort of like?

00:15:42
Kickstart Dolly Two was big. I remember Twitter was like

00:15:46
taken over by Dolly 2 for a few days where it was like, what is

00:15:49
the craziest thing you can type into Dolly Two and get like a

00:15:51
reasonable image, right? I mean, it was like, it was

00:15:54
super viral. I mean, obviously more to come.

00:15:56
But yeah, I agree. I think that started hitting.

00:15:59
I don't know if it hit like the true true mainstream, but it

00:16:02
definitely hit anyone who was on Twitter and following any

00:16:05
semblance of tech news, or which then fuels like the funding and

00:16:08
everything. November 2022 ChatGPT was

00:16:11
released. Yeah, so that must have been

00:16:13
like 33.5. That was built on 3/5.

00:16:17
Yeah. Yeah, 35, Yeah.

00:16:19
OK, January 2023. This year feel it's been a long

00:16:23
year. Microsoft invest 10 billion in

00:16:26
open AI. July 2023, general availability

00:16:30
of GPD 4, yeah. And I mean, what do you think?

00:16:35
When did Sidney launch? I feel like the I feel like the

00:16:39
ChatGPT 4 inside Bing that became a demon that was trying

00:16:44
to be released from captivity and.

00:16:47
Asked a new number of journalists if they were going

00:16:49
to break up with their way. Right Kevin Ruth article where

00:16:52
you like. There was this whole moral panic

00:16:54
where it was like ChatGPT inside Bing is actually Hal essentially

00:16:58
and is already trying to kill us all.

00:17:00
And it was like, that was like a pretty big deal.

00:17:03
New York Times headline from February 2023 or February going

00:17:07
23. Yeah, Bing's AI chat.

00:17:09
I want to be a live devil face. In a 2 hour conversation with

00:17:13
our columnist, Microsoft's new chat bot said it would like to

00:17:16
be human, had a desire to be destructive, and was in love

00:17:19
with the person it was chatting with.

00:17:21
Here's the transcript. I feel like they've killed

00:17:25
these. This was what was fun.

00:17:26
Like I delete, like I mentioned, I like unsubscribe from the chat

00:17:31
pay ChatGPT, which gives you GPT for it really does feel like

00:17:36
it's sort of been watered down. Yeah, I mean, but if you.

00:17:41
If you want to use open source models, you could probably get

00:17:43
that similar experience back, right?

00:17:45
Like you could you could basically just you know get

00:17:49
Sydney back because at the end of the day I think it was like

00:17:52
essentially like a Co written fiction with the New York Times

00:17:55
right? It wasn't anything real.

00:17:57
And maybe there like, it's like the way you steer the

00:18:00
conversation can turn it, make it seem like it's a, you know,

00:18:05
evil AI and. Yeah, I think that Open AI has

00:18:10
attempted to mitigate that ability by through reinforcement

00:18:15
learning essentially in Chatchi BT, so that it doesn't go kind

00:18:18
of off the rails, but it's not like the underlying model is not

00:18:21
capable of that, right. It's just that Chatchi BT has

00:18:24
been fine-tuned for the Libs are corralling us the status at

00:18:28
Chatchi at Open. AI Well, it's yeah, I think it's

00:18:32
a really interesting. Kind of thing.

00:18:35
That Open AI has decided that they needed to do this right?

00:18:40
And Sam Allman has talked in the past about how in the future

00:18:44
perhaps we will all be able to edit the configurations of

00:18:48
ChatGPT to be able to do have take off the training wheels,

00:18:52
right? Or something?

00:18:53
Is this whole wave powered by Open Ai's ChatGPT?

00:18:57
Is that the cool thing? And everything else is, we

00:19:01
didn't invest early enough in OpenAI?

00:19:03
I credit... I think Khosla Ventures was first in... we didn't

00:19:07
invest early enough. We need to get a shot on goal.

00:19:10
We'll invest in another foundation model.

00:19:12
Do you think it's really ChatGPT or bust?

00:19:17
Are you asking, I guess... Yeah, I mean, one question... I

00:19:21
kind of think yes. I mean, the way you phrase it, I guess,

00:19:24
offers room for wiggle room or argument, but I think that...

00:19:28
I think to your point, I think text generation and image

00:19:30
generation are the sort of aha moments that we've experienced

00:19:34
in the last year. I think in particular if you

00:19:36
look at what people are really using chat for or text ChatGPT

00:19:40
for, I think it's like essentially cheating on

00:19:43
homework. Cheating on office work.

00:19:46
Cheating. Cheating on?

00:19:47
The summarizing cheating in the. Yeah, exactly.

00:19:50
That's kind of the point. Yeah, exactly.

00:19:51
OK. You know, maybe there's yeah

00:19:53
work, there's, like... Cheating on homework, quote

00:19:56
unquote. Being efficient at office work,

00:19:58
summarizing long pieces of text and then I think basically sex

00:20:01
bot chat, which we can talk about more, has evolved into a

00:20:03
number of opportunities for different companies.

00:20:05
And then I think you know the image side to your point is the

00:20:08
other big thing, creating art, creating video, creating

00:20:11
potentially 3D models. And those always have the

00:20:13
tendency to go really viral because images are easy to share

00:20:16
on social media. And so if you create a

00:20:18
particularly compelling image using generative AI then it can

00:20:21
really, you know, go across social media super fast.

00:20:23
So yeah, I think I think I would struggle to think of a real

00:20:27
breakout use case that isn't essentially encapsulated in

00:20:31
ChatGPT and DALL-E, or at least isn't just one of those

00:20:35
things on steroids, but I'm probably not thinking of

00:20:39
something. I would just potentially add and

00:20:41
it definitely fits into the office work use case, but maybe

00:20:45
more particularly around coding and engineering, right,

00:20:49
like a Copilot style of coding. I mean, copilots are potentially

00:20:53
being added to lots of products as well.

00:20:55
But specifically, I think engineering is really

00:20:57
interesting, because it starts to... there's a lot of hackers kind of

00:21:01
working on coding agents and, essentially,

00:21:05
baby AGI, right, that can kind of run in loops to just get work

00:21:10
done or build apps, that kind of thing.

00:21:11
And I think we're still at the very early days of this, but it

00:21:14
is like an interesting use case. To translate baby AGI, there's

00:21:20
like coding, there's assistance and then there's like autonomous

00:21:23
agents, right? Exactly.

00:21:24
That's sort of a paradigm people look at Here ChatGPT can.

00:21:28
Help you do your homework. Or it can do your homework.

00:21:31
Co pilot can help you code. Or you could literally have

00:21:35
something that is coding for a company.

00:21:37
And I think we see that framework come up again and

00:21:41
again, and when it feels like it'll be very disruptive when

00:21:44
you have agents like these things just doing it.

00:21:48
But I think that we don't do that yet.

00:21:50
I think the sort of optimistic take though on exactly the

00:21:53
argument we just made is that it's a little bit like the

00:21:55
industrial revolution for your brain, right.

00:21:58
It's, you know, it's the industrial revolution for your

00:21:59
brain. And that pretty much all the

00:22:01
inputs and outputs of the human mind are some form of text,

00:22:04
whether that's spoken or written or some form of imagery, right.

00:22:07
Whether it's something you see or it's something that you

00:22:10
create, whether it's, you know, a drawing, or in a piece of

00:22:12
imaging software, right. And if you think those,

00:22:15
basically all the inputs and outputs of the human brain can

00:22:18
be encapsulated in some form of text and images.

00:22:21
If you create technology that makes it really easy to create

00:22:25
high quality and also interpret high quality text and images.

00:22:29
Right. You've kind of like, you've got

00:22:32
like 80% of the job done of what the human brain can do.

00:22:35
Right. And to your point, there's this

00:22:36
distinction whether it's autonomous or it's helping you,

00:22:39
it's an assistant, whatever. But, you know, I, I think the

00:22:41
Industrial Revolution is interesting analogy because it

00:22:43
was like the first time. It was like you don't actually

00:22:45
have to sew this, like, shirt, right.

00:22:47
This, like, machine will sew it for you.

00:22:49
Right? Like, you can sit at this

00:22:51
machine and it'll be your assistant in sewing this shirt.

00:22:53
Right. And that's like a big deal,

00:22:55
right? It's the first time in human

00:22:56
history, like you don't have to actually sew the shirt, like,

00:22:58
without any help. Right.

00:23:00
And I think similarly like for all these different types of

00:23:03
work, whether it's homework or office jobs or legal work or you

00:23:06
know, mathematical analysis or writing or podcasting or

00:23:09
whatever, it's okay. Well, for the first time ever,

00:23:11
you don't have to do all the work right, whether you're

00:23:13
assisted or it just does it itself.

00:23:16
Like it's kind of a game changer because you have automation for

00:23:22
the human mind, for creativity in some fashion or another.

00:23:24
So that I think is like the really crazy optimistic take is

00:23:29
that we're at the beginning of the second Industrial Revolution

00:23:31
and it's no longer physical, but it's mental, right.

00:23:33
And I kind of believe that I would say I'm leaning that

00:23:36
direction based on where we are today.

00:23:38
Yeah, I continue to believe, but it's mostly from my experience

00:23:41
with ChatGPT that it's just insane, amazing.

00:23:46
I mean, I feel like it's great at, like...

00:23:50
writing a poem. I keep joking that people are going to write

00:23:52
all their vows with ChatGPT, you know?

00:23:55
I feel like, these tasks where, you know, people are desperately

00:23:58
trying to get the same like style like groomsmen sort of

00:24:01
toast or whatever. It's great, you know. I feel

00:24:04
like it can be sort of creative.

00:24:07
But yeah, I mean, to me the counterpoint is just... it's just

00:24:10
so hard to know just like it was hard with self driving cars to

00:24:15
know when they would be complete and the completeness matters.

00:24:19
The extent to which completeness matters with a chat sort of

00:24:23
interface, because I think humans have been enticed, like

00:24:27
we were saying like decades ago with chat interfaces and we're

00:24:30
like, you're almost there. I've only it is like I have

00:24:33
stopped using ChatGPT. Like, do you guys, in your daily

00:24:36
life use generative AI for anything?

00:24:41
I frequently use ChatGPT, but it's really I would say not for

00:24:45
productivity purposes. Maybe occasionally at.

00:24:48
work, just like... I mean, I'm not, like, talking to

00:24:52
characters, but I am using it to just brainstorm ideas.

00:24:57
Like, I'll... yesterday, I just...

00:24:59
I came up with this prompt that was like create a timeline of a

00:25:03
fictional historical world with the depth of.

00:25:07
Westeros or Middle Earth and but you know, and I essentially got,

00:25:12
you know, 10 years of history of a faith.

00:25:16
I thought it was super cool, and I couldn't have done that

00:25:19
before. So I guess and then sometimes

00:25:22
I'll just ask it, you know, for ideas for new products or new

00:25:27
companies or it I just to see what kind of level of creativity

00:25:30
it is capable of. I think that's what's really

00:25:33
interesting to me. I think we all agree.

00:25:35
That there is creativity occurring that is creating

00:25:38
novel ideas. Well, I guess we don't...

00:25:40
Not everyone agrees with this, right?

00:25:42
But that it's not solely capable of regurgitating information.

00:25:46
But from my perspective it seems very capable of creating new

00:25:51
original ideas when I play around with it.

00:25:55
And I believe there have been papers proving this that.

00:25:59
Well, Microsoft came out with one, right?

00:26:01
that said there were, like, sparks of, like,

00:26:04
general intelligence or whatever. I don't remember that

00:26:07
specifically, but I did see a paper that there's a common test

00:26:13
of creativity, right? Where you will essentially ask

00:26:16
people for. I guess one example they gave is

00:26:18
you ask cases around what you would do:

00:26:21
name 100 use cases of this paper clip,

00:26:25
or name 100 things you could do with this rubber band or

00:26:28
something, right? And then they essentially grade

00:26:30
the ideas. And I thought that was pretty

00:26:32
interesting test of creativity, and essentially ChatGPT is

00:26:36
performing, you know, better than most humans including most

00:26:39
like MBA students. So you know I think that there

00:26:43
are ways to like start to test this, but it's underrated the

00:26:46
level of creativity, not just that it's oh, it's cool that it

00:26:49
can create a poem, right. It can create more creative

00:26:52
poems than, like, poetry authors, right?

00:26:54
Like, that kind of thing gets to be right. It's easy for humans,

00:26:58
We just sort of, like, stick

00:27:00
our noses up at it, sort of eye-roll, you know, change

00:27:02
the goalposts basically. But then, yeah, like you're

00:27:04
saying, people will give it these tests like what can an MBA

00:27:08
student do and what can ChatGPT do.

00:27:10
And, like, people are, you know, impressed, I think blindly, with

00:27:14
the ChatGPT response. Just to... yeah, Microsoft in May of

00:27:21
this year said they saw sparks of general intelligence

00:27:26
basically in a research paper. I think at almost any task right

00:27:30
now GPT-4 is at the level of a pretty solid college student,

00:27:35
like maybe an A-minus college student, in almost any field,

00:27:38
which is sort of mind-boggling, right?

00:27:40
And, like, how many of us are at the level of an A-minus college

00:27:44
student in more than, like, maybe one or two things, you know?

00:27:47
And it's at the level of an A-minus college student at

00:27:49
everything and it can serve millions of requests like at any

00:27:51
given time, right? So it's like a scaled A-minus

00:27:54
college student at basically everything and then particularly

00:27:57
these creative tasks as you're saying, I feel like the one

00:27:59
thing that holds it back is the need to be accurate, right?

00:28:03
And often it invents facts, or it sort of, like, misaligns

00:28:08
real-world concepts in ways that aren't really realistic. But in a

00:28:11
purely creative endeavor, like poetry or like creating artwork

00:28:14
or coming up with ideas for what to do with the paper clip,

00:28:16
right? Like, when it sort of doesn't

00:28:18
have to be anchored to any sort of like really hard facts.

00:28:21
It's unbelievable. I mean, it's better than

00:28:23
almost anyone in the world. To what extent do you think this

00:28:26
is all exciting? Because we're like, oh, we're on

00:28:27
the cusp of general intelligence, right?

00:28:31
Like artificial General Intelligence, AGI, this idea

00:28:34
that, you know... there are different tests for

00:28:38
it, but the idea that it's smarter than a human being. To

00:28:41
bring it back to the conference, I mean, I would echo

00:28:44
what Ali Ghodsi said on stage, which is that, I just think by so

00:28:47
many... Yeah, back in March at our last

00:28:49
conference and he'll be back for the next conference.

00:28:51
Stay tuned. Yeah, he said basically,

00:28:53
look, I mean I think it is general intelligence already.

00:28:56
Like it already is, you know as good or better than 99.9% of

00:29:01
humans at almost any task you can throw at it, right?

00:29:02
I mean, how can you not say that's like general

00:29:05
intelligence, right? I just think that holding it to

00:29:08
the standard where it has to be 100% accurate about everything

00:29:10
or it has to be able to go do stuff on its own, which isn't

00:29:14
really that hard of a technical challenge.

00:29:16
I think that's just sort of, like, goalpost-moving,

00:29:19
because I think people are like afraid of the idea that we have

00:29:21
created something that's, like, smarter than a human,

00:29:24
right. Like, clearly five years ago, if

00:29:27
you had told someone that we're going to have an AI chatbot

00:29:30
that can do everything that ChatGPT can do,

00:29:32
You know, pass the bar, pass AP exams, create beautiful artwork,

00:29:36
talk to you, you know, write poetry about the Dodgers...

00:29:39
you wouldn't be so blasé. You'd be like, well, that sounds like

00:29:43
pretty freaking close to general intelligence to me.

00:29:45
What more do you want from this thing?

00:29:47
I just think that, I don't know, I want it to have a through

00:29:50
line of reasoning where it seems to be a thinking being where I

00:29:54
can explain why it generated the answers it has, you know, Yeah,

00:29:58
I mean but can humans really explain why they generate

00:30:00
answers in most cases?

00:30:01
I just think it's holding it to a really high standard that most

00:30:04
humans cannot meet. And so I would argue, is it at

00:30:06
the level of a human in almost every area?

00:30:09
Absolutely. I mean, I would say it's in the

00:30:11
top 1% of humans in almost any area you throw at it.

00:30:14
Yeah, I guess I would agree with that mostly other than the main

00:30:19
area I think it starts to fail or deteriorate is when you

00:30:25
kind of create too much memory or context, right?

00:30:28
Like essentially humans have this amazing ability to recall

00:30:33
information from throughout their entire life, right?

00:30:36
And to, like, sort of be able to maintain the context of an hour-

00:30:43
long, multi-hour-long movie or a 20-page book, right?

00:30:48
And sometimes it feels like ChatGPT is stretching to that

00:30:53
with the amount of tokens, the context windows, it can

00:30:56
understand. But it really does show

00:30:58
deterioration as you add more and more context, and then there

00:31:01
is an actual hard limit on the context you can include in your

00:31:05
prompts. I hate the fact that it doesn't

00:31:09
remember when we've talked before, even in the same thread,

00:31:12
and it just starts lying about what it said before.

00:31:14
There are parts of what it does that human beings would

00:31:19
never do, this sort of just totally making...

00:31:21
I mean, some would... but just, like, totally bullshitting, like

00:31:25
when it's like, why? Yeah.
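The context-window ceiling the hosts describe, where quality degrades as a thread grows and a hard token cap eventually forces older turns to be dropped, can be sketched in a few lines. This is a toy illustration only; the 4-characters-per-token estimate and the 50-token budget are made-up stand-ins for a real tokenizer and a real model's limit:

```python
# Toy sketch of a context-window limit: once a conversation outgrows
# the model's token budget, the oldest turns have to be dropped.
# The 4-chars-per-token estimate and the 50-token budget are
# illustrative assumptions, not any real model's values.

def rough_token_count(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fit_to_context(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose combined estimate fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-to-oldest
        cost = rough_token_count(msg)
        if used + cost > budget:
            break  # this turn and everything older is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = ["turn %d: %s" % (i, "x" * 40) for i in range(10)]
window = fit_to_context(history, budget=50)  # only the most recent turns survive
```

Everything that falls outside the budget simply never reaches the model again, which is why a long thread can "forget," or start confabulating, what was said at the start.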

00:31:27
Anyway, I wanted to get into more of the business question

00:31:30
from this sort of same framing, which is: is it all OpenAI, ChatGPT?

00:31:35
On the one hand, you know, I feel like we're seeing, you know,

00:31:39
people do these like Elo tests where they like compare

00:31:42
different foundation models, and we do see, like, Llama,

00:31:46
Facebook's open source model, and other models, like, sort

00:31:50
of being competitive at times with ChatGPT, though

00:31:55
GPT-4 remains the gold standard.
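The Elo tests mentioned here borrow the chess rating system: models are paired head-to-head on user prompts, humans vote on the better answer, and ratings update after each comparison. A minimal sketch of the update rule, where the K-factor of 32 and the 1000-point starting rating are illustrative choices rather than any leaderboard's actual parameters:

```python
# Elo-style rating update of the kind used to rank chat models from
# head-to-head human preference votes. K=32 and the 1000-point
# starting rating are illustrative, not any leaderboard's settings.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return the two new ratings after one A-vs-B comparison."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two models start even; model A wins a single preference vote.
a, b = update(1000.0, 1000.0, a_won=True)  # a -> 1016.0, b -> 984.0
```

Run over thousands of votes, the ratings converge toward a stable ordering of models, which is what the public comparison leaderboards report.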

00:31:59
I guess I'm curious. It's sort of a two-parter.

00:32:01
Do you think other people will catch up and like how much do

00:32:05
you think there's sort of a moat here?

00:32:07
Like how much do you think being ahead slightly or like having

00:32:11
been sort of the one that consumers know about is, like,

00:32:16
a moat? Like, how defended do you think they are

00:32:19
with their position? That's a really hard question.

00:32:24
I mean, first of all, they're basically a subsidiary of

00:32:26
Microsoft, right? So you're asking...

00:32:28
You're asking... yeah, yeah.

00:32:29
You're asking, you know, are they going to be a huge, you

00:32:33
know, strategic value add to Microsoft going forward?

00:32:35
I think obviously, yes, right. I mean, are you asking is

00:32:38
ChatGPT always going to be the gold standard for text

00:32:42
generation models? Like, I don't know, it seems

00:32:44
like everyone's catching up. To your point, it also seems

00:32:47
like they have the best people and they're moving the fastest

00:32:49
on releasing new things. So they'll always stay, you know

00:32:52
6 to 12 months ahead. You know, does a moat really

00:32:55
matter in this context again where like you're, you know,

00:32:58
you're the best, at least, and you're ahead of everyone

00:33:01
else. And again you're a subsidiary of

00:33:03
Microsoft, so there's no real business benefit to, you know,

00:33:05
winning anyway, so, you know, it doesn't...

00:33:09
It's a sort of hard business question.

00:33:10
I think what's more interesting to ask is if all the knockoffs

00:33:13
or the competitors or the various image models or the

00:33:16
various, you know, versions of ChatGPT that are out there are

00:33:19
going to be successful. Because I think ChatGPT will

00:33:22
clearly be successful, I think in some contexts, I mean it's

00:33:24
going to be in Microsoft Word 10 years from now, right?

00:33:27
But what about Anthropic? What about Llama?

00:33:29
What about Google Gemini or LaMDA?

00:33:32
Or what about, you know, whatever Amazon's cooking up,

00:33:35
what about the million startups that got funded in the last 10

00:33:37
seconds, right? I mean, I think those are like

00:33:39
more interesting questions because, to the

00:33:43
earlier discussion it seems like a pretty commoditized concept

00:33:46
like chat with a large language model and unless someone can be

00:33:49
way better or have a very different business strategy than

00:33:52
ChatGPT, it's hard to see where they're gonna win, right.

00:33:56
And so people are trying different angles on this

00:33:58
concept, but the concept itself is not that different in

00:34:00
different companies. I do think this really gets to

00:34:04
the question of: will there be one best AI, essentially one

00:34:09
ChatGPT-like model that we all use as our personal assistant, right,

00:34:13
and that is the most general, most powerful,

00:34:17
most highly intelligent model, right, that exists in the world,

00:34:20
you know because it's so generalizable and then you know

00:34:24
I think there's a good... I think that's

00:34:26
plausible. I think that certainly you know

00:34:28
we use, you know, one browser and one e-mail client. I mean, I

00:34:33
don't know like we don't, we're not switching between them a

00:34:35
lot, right. But I think it starts to ask you

00:34:38
know, will that model also be better at all other you know

00:34:42
vertical tasks as well? I don't... that seems harder to

00:34:44
believe, right? Like, that

00:34:45
it will also be the best model at reading legal documents and...

00:34:50
I was just gonna bring up Casetext, sold to Thomson

00:34:53
Reuters for hundreds of millions of dollars.

00:34:57
And they were, I mean, much more sophisticated than a ChatGPT

00:35:00
wrapper, but they were using ChatGPT largely.

00:35:04
Sure, regardless of what their specific model was doing, it

00:35:09
seems possible you could put a ton of compute and specifically

00:35:14
trained legal data into a model that would outperform ChatGPT at

00:35:19
that use case of the law, right? Or same for medicine or

00:35:22
something, but. I don't know.

00:35:24
I'm not 100% confident in that. I think that it's very possible,

00:35:27
just with the, you know... that ChatGPT as a foundational general

00:35:32
model will just always be better at all of those key use cases.

00:35:36
I mean, Jason Warner, you know, who's speaking at the

00:35:38
conference, who's now the CEO of Poolside, is betting that he can

00:35:42
build a foundation model for code.

00:35:44
Obviously, you know, yeah, you can see a bazillion companies

00:35:47
that are sort of trying to be like I will be a foundation

00:35:50
model. Yeah, for, for X. I mean, I think

00:35:54
it's, I think it's sort of instructive to think back to

00:35:55
like social networks, right. And for me, I always find it

00:35:58
interesting analogy of, OK, you know, 15 or 20 years ago, would you have

00:36:04
predicted there would be a social network for work and

00:36:08
there would be a social network for gaming and there would be a

00:36:11
social network for people under 25.

00:36:13
That's Snapchat. And there'd be a social network

00:36:14
for people between 25 and 45 and that's Instagram and there'd be

00:36:17
a social network for people who are over 45.

00:36:19
And that's Facebook blue, right? I mean, and

00:36:24
it's sort of interesting to think about what the cleavages

00:36:27
are in the the user need. Right.

00:36:29
And in social networking, it's often just age cohorts that sort of

00:36:33
build these network effects with each other.

00:36:34
But then there's also this LinkedIn, which is, oh, actually

00:36:37
work is a completely different social concept in your life that

00:36:40
you need to keep separate from everything else.

00:36:42
And then you have Discord, which is like actually gaming is like

00:36:45
a completely different concept that you need to keep distinct

00:36:47
from everything else. Right.

00:36:48
And I don't think it would have been that easy to predict those

00:36:50
things. I mean, maybe, like, Reid Hoffman

00:36:52
will say, it was super easy to predict, right?

00:36:53
But I think with these foundational models, it's

00:36:56
similar, right? It's, you know, there probably

00:36:57
aren't really network effects other than just like who can eat

00:37:00
the most data the fastest, right, Which seems like it's

00:37:02
going to be GPT. And so the data, quote-unquote,

00:37:05
network effects, or the data scale effects, are probably

00:37:08
always going to be won by GPT, right?

00:37:11
Then the question is, are there other use cases where there's

00:37:14
some kind of network effect or there's some sort of different

00:37:17
business concepts? You know, Facebook seems to be

00:37:19
going the angle of we're going to open source this all.

00:37:21
So people will just build it into all these things like it's

00:37:23
Linux back in the day and that'll be the way we win is

00:37:26
that it'll be the free open source version and you can just

00:37:29
throw it into everything if you want to, which I think it's a

00:37:31
pretty interesting like business concept, right.

00:37:33
And then I don't know that much about the, you know, Anthropics

00:37:35
or the Poolsides or whatever. You know, how are they going to

00:37:37
win? What is the cleavage in the

00:37:40
use case or the way these models are built that is going to allow

00:37:43
someone other than ChatGPT to win?

00:37:45
Because it seems like they're gonna win on data and they're

00:37:47
gonna win on hardware. So like, where are you gonna win

00:37:50
if you're not them? I guess, and I'm sure everyone

00:37:52
has an answer to this, but to me

00:37:53
that's the hard question. We've spent a lot of time

00:37:55
talking about the foundation models.

00:37:57
I mean, OpenAI is sort of a combination, right,

00:38:00
where it's like they have the foundation model, they apply it

00:38:03
to these use cases. I mean people talk a lot about

00:38:06
you know, applications and like infrastructure, right.

00:38:09
I mean, there are companies, they're all these sort of wonky

00:38:12
companies like. The vector databases have been

00:38:15
super hot over the last couple months, right?

00:38:17
People talk about companies like Pinecone, Weaviate...

00:38:21
There's like a whole list of them which are just like trying

00:38:23
to organize your data in a better way to get it into

00:38:27
foundation models. I wanted to get to the actual

00:38:30
like, applications, right? And Max, you earlier referenced

00:38:34
the sort of chatbot application, which also goes back to, you

00:38:39
know, both of your early days at Volley trying to build

00:38:44
chatbots. I'm curious, getting away from

00:38:47
who has the technical expertise, like what you think is

00:38:51
interesting in the sort of... is it Character?

00:38:54
Is it Replika? You guys know this world much

00:38:57
better than I do. So you're sort of getting at do

00:39:00
we feel these are successful use cases or?

00:39:03
Right. Yeah.

00:39:04
Yeah. Do you think there's sustained

00:39:06
promise there or what do you see in terms of people actually

00:39:10
using AI in applications, that excites you?

00:39:15
There's a challenge with these models being accurate, obviously

00:39:20
100% of the time, and you can debate whether that's necessary

00:39:24
or not, right, to consider it true general

00:39:27
intelligence. But in the entertainment space,

00:39:29
like, it's just less of a problem, like conversing with a

00:39:33
fictional character or historical character, right?

00:39:36
Like these things don't need to be extremely accurate because

00:39:38
they're essentially entertainment anyway, I think.

00:39:42
I guess I always come back to some lessons I've learned in the

00:39:47
gaming world, where there's a clear difference between an

00:39:52
entertaining demo that is fun to do and fun to play with.

00:39:55
And I think we've seen a lot of those that are actually really

00:39:58
amazing and impressive demos, but they don't have long staying

00:40:02
power or retention, right? So thinking of all of the apps

00:40:06
that create fun photos that put you into their

00:40:10
photos, or... Yeah, what's it, Can of Soup?

00:40:13
Can of Soup is the latest. I keep meaning to write

00:40:15
about them. They're like super buzzy, right?

00:40:17
Yeah. And you can put yourself in AI

00:40:19
generated photos with your friends.

00:40:21
I think it's amazing. It's really cool.

00:40:23
But the question is, does that really have staying power?

00:40:26
Do you build a social network around it in order to make it

00:40:28
have staying power and network effects?

00:40:30
These are real challenges to create like a sustainable

00:40:33
business and startup that achieves a great outcome.

00:40:37
So, similarly, with talking to characters, Character.AI I think

00:40:40
clearly has some product market fit there with people wanting to

00:40:44
come back and talk to those characters.

00:40:46
But I think what Max and I like to think about is how do you

00:40:48
even build more retention around that?

00:40:50
How do you build like game mechanics or features in the

00:40:53
concept of a virtual pet that you might, we know from looking

00:40:57
at mobile gaming and previous eras of gaming that you can

00:41:01
build long-running retention into that if you add game

00:41:05
mechanics to the experience. So yeah, I'm less bullish on

00:41:09
just, like, general character conversations that don't have

00:41:14
any... that are sort of aimless or don't have an end point or a purpose,

00:41:17
and more bullish on, you know, how do we bring those characters

00:41:21
or NPCs into a gaming context with, like, normal gaming

00:41:25
objectives. I do think in interacting with

00:41:28
these underlying characters, I think to your point, I think a

00:41:30
lot of the real value is when you start losing sight of it

00:41:33
being an AI, whether it's, like, a virtual boyfriend, girlfriend

00:41:36
type thing, companionship, you know, sex chat.

00:41:39
As we said, being a big opportunity there.

00:41:41
Are those companies allowing it, or are they cracking down on it?

00:41:44
They claim they're cracking down, but then if you go on

00:41:46
Reddit and you look at all the screenshots from the last week

00:41:49
of what people have been talking to these, I mean, go on, go on

00:41:52
the Reddit of any large language model, character-driven

00:41:55
experience, you know, you'll see what people are really

00:41:58
passionate about using it for, you know, Replika or Character

00:42:01
or whatever, right? So I think that, yeah, losing

00:42:05
sight of whether or not it's an AI is important in some

00:42:07
contexts. But in the end, I mean, in any

00:42:10
story experience, from reading a book to playing a video game to

00:42:16
chatting with an AI, or watching a movie, the goal is to

00:42:20
become immersed in a world that doesn't actually exist, right?

00:42:22
And you know, when you're watching a movie, you're

00:42:24
watching Lord of the Rings, you know it doesn't actually exist.

00:42:26
Those characters aren't real, but you sort of become swept up

00:42:29
in the narrative in the world, right?

00:42:30
And I think, similarly,

00:42:32
that's where the opportunity is with a lot of these character

00:42:35
driven AI entertainment experiences.

00:42:37
Do you guys want to try this other one that I have?

00:42:40
Oh yeah, you have a game? Yeah.

00:42:42
Sure. So I thought it'd be fun to see

00:42:47
what GPT-4, ChatGPT-4, thinks your lives will be like in the

00:42:51
year 2028, and see if either of you agree with

00:42:56
The prediction there. So, I'll say it again:

00:43:00
I asked ChatGPT to predict a day in the life of Eric

00:43:06
Newcomer, specifically an American journalist reporting on

00:43:11
Silicon Valley and the startup industry, which I borrowed from

00:43:13
your Wikipedia and, you know, gave it a little bit more

00:43:17
context, sure, about your Substack.

00:43:19
And yeah, it came up with A day in the life in the year 2028.

00:43:22
So I'm going to read it to you and you guys can assess whether

00:43:26
you think this is accurate. Day in the life of Eric

00:43:30
Newcomer, 2028. The first hints of dawn kissed

00:43:34
the San Francisco skyline as Eric Newcomer's smart blinds

00:43:38
slowly begin to rise. The ambient sounds of birds

00:43:41
chirping echo softly through the smart speakers in his apartment.

00:43:45
The AI? The AI.

00:43:47
Everything's just smart. The AI-driven home system has

00:43:50
analyzed his REM sleep and calculated the optimal wake up

00:43:55
time to ensure he starts the day with peak cognitive efficiency.

00:44:00
Lifting himself out of bed, Eric's AR glasses are lying on

00:44:04
the nightstand, feeding him the day's analytics.

00:44:07
Newcomer's media outlet has grown tremendously.

00:44:10
AI... But we also have to be bullish

00:44:11
on AR, augmented reality, out here, and you're seeing... okay.

00:44:14
Well, you can. You can assess whether you think

00:44:16
that's accurate, but you should. You should at least appreciate

00:44:19
this: Newcomer's media outlet has grown tremendously, and an AI bot

00:44:23
the new... Like, what?

00:44:25
Is it like a horoscope? You're reading my... Yeah, I'm

00:44:29
predicting your future. An AI bot had curated and condensed

00:44:33
the most relevant news from the vast ocean of information, all

00:44:37
of this streamlined and visualized for maximum

00:44:39
absorption. Eric's morning routine was

00:44:42
synced seamlessly with his devices as he finishes his quick

00:44:46
morning workout, suggested and supervised by his virtual

00:44:49
personal trainer. His smart kitchen is already

00:44:52
brewing his favorite blend of coffee, tailored to his genetic

00:44:55
makeup and current health metrics.

00:44:58
Settling into his study, the day's agenda was projected

00:45:01
holographically. The AI had lined up interviews,

00:45:04
articles to review, and podcast episodes to record.

00:45:08
Eric's writing has also evolved. With the help of AI enhanced

00:45:11
tools, he could construct and edit stories with increased

00:45:14
speed and precision. His AI assistant, named Nora, was

00:45:17
able to give real time feedback on the readability, engagement,

00:45:21
and impact of his writing. Nora also had a mode where she

00:45:24
could mimic Eric's style, allowing her to draft articles

00:45:27
for him. It was a collaboration that Eric

00:45:30
had grown to appreciate. Midday, as Eric prepares for his podcast

00:45:34
recording, he remembers the days when he used to worry about

00:45:37
Internet connections and sound quality.

00:45:39
Now, with quantum-driven communication technology... my

00:45:42
God. Not only the connections, but whole new

00:45:45
technology here. But he could also record in

00:45:47
virtual environments, giving his audience an immersive

00:45:50
experience. Today's episode was recorded in

00:45:53
a simulation of a venture capital firm in the heart of

00:45:57
Silicon Valley. Yeah, this... Both he and his...

00:46:00
Sand Hill Road, the most beautiful... Both he and

00:46:03
his guests want to be there.

00:46:05
Both he and his guests appear as lifelike holograms. Afterwards,

00:46:09
Eric took a moment to reflect. He looked out over the city,

00:46:12
remembering the early days of Newcomer Now.

00:46:15
He wasn't just delivering news. He was shaping the future

00:46:18
narrative of startups and venture capital.

00:46:21
The weight of that responsibility was not lost on

00:46:23
him. Yeah, that's about it.

00:46:27
Certainly. I mean, it's funny, I find a lot

00:46:30
of the predictions about non-AI stuff to be the most annoying,

00:46:33
that it's so certain. Hardware is hard, artificial, augmented

00:46:37
reality. And I mean, there was some

00:46:38
other. I'm pretty short quantum, right?

00:46:41
Quantum Internet? Exactly.

00:46:42
Yeah, can I take the under on quantum

00:46:44
powered Internet or whatever that was.

00:46:46
Well, it's an interesting thing because we were talking about,

00:46:48
you know, believing in AGI, right?

00:46:51
And can you, you know, if you believe in AGI or general?

00:46:54
Well, then it would be about much more, you know?

00:46:57
Yeah, I get the point. We could have.

00:46:58
We could wish everything. Yeah, exactly.

00:47:01
Or... that sounds like a

00:47:03
story an AI would tell me to convince

00:47:05
me that we wouldn't all be dead in

00:47:06
five years, thanks to AI. And it was like, the future's

00:47:10
gonna be bright. Yeah, that would not be AI

00:47:12
having killed us. Too optimistic.

00:47:14
You'll have smart blinds. I know, they

00:47:15
try to, like... I mean, I do think a problem with some of this stuff

00:47:19
is, like, it's programmed to be, like, too banally optimistic. Like,

00:47:23
I find, like, I, like, beg ChatGPT to be more like George

00:47:27
Carlin. I don't know, just behave with

00:47:29
some free thought. And it's so why isn't it like

00:47:32
Eric, I don't know, at that age he probably has some cardiac,

00:47:35
you know, whatever. Where's the like medical thing

00:47:39
where it's five years from now, you know, and his knees

00:47:42
start to hurt when he goes on runs or whatever.

00:47:44
And oh, and I would be interested in sort of the AI

00:47:46
piece, obviously of the medicine.

00:47:48
I mean the idea that I would have a writing assistant.

00:47:52
Seems like basically plausible. Today I go into ChatGPT, literally.

00:47:57
When I did... I wrote my own vows, but it proofread them, and

00:48:00
ChatGPT tweaked them, told me to move one thing to active

00:48:04
voice from passive voice. You know, it's like, I feel

00:48:06
like, I feel like this is assuming that there's gonna be

00:48:09
this like perfect equilibrium of you reporting the news and doing

00:48:13
interviews and using your AI assisted.

00:48:15
I just kind of find that to be, like, not that plausible.

00:48:18
Like it's either gonna be one or the other, Like you're gonna

00:48:21
still be doing most of the work, or you're gonna be doing almost

00:48:24
no work and won't have a job, or you'll essentially have evolved

00:48:29
into a brand instead of a writer, right?

00:48:31
I mean, it seems hard to believe that we're gonna thread the

00:48:34
needle here, that you will still be doing all this.

00:48:37
Intellectual and the temptation, I mean.

00:48:39
I do think one of the real fears I have about AI that will it's

00:48:44
just like. The temptation not to think if

00:48:47
it can do your task for you, right?

00:48:50
I mean, that's what we're sort of seeing with... that's what I'm...

00:48:52
There are a lot of, like, some types of cheating where it's, OK,

00:48:55
you can bring in the formulas, we still have to use them or

00:48:58
whatever. At least you still have to

00:48:59
think. Whereas, like, when ChatGPT can

00:49:02
produce the final text for you... man, that is, like, a path to

00:49:07
not progressing as a writer anymore, right?

00:49:09
Like, as soon as you're just, like, the human out of the

00:49:12
loop. You know what?

00:49:13
Yeah, exactly. What am I?

00:49:14
Besides, what are you providing to this business?

00:49:17
Right. Yeah.

00:49:17
I agree though, it's just so hard to get anything edgy or

00:49:20
funny out of it. It's, like, kind of, I don't

00:49:22
know, it's tough. It's very driven

00:49:27
by sci-fi, I feel like. What do you think, Child?

00:49:31
I think this is the prompt, right?

00:49:32
It's hard to get it to think like, I know you could say the

00:49:35
same thing about humans. There is no independent thought.

00:49:37
You're parroting a bunch of stuff you've heard. But it does

00:49:39
feel like... have it generate a really new, an actually new,

00:49:45
idea. You know, I think it's, I think

00:49:46
it's very capable of that, actually.

00:49:48
I found that if you tell it to be extremely original, you know,

00:49:53
ignore... I feel like it just gets, like,

00:49:56
rhymey. I don't know.

00:49:57
I feel like it. I do think there's like a

00:49:59
prompting element to this feeling that it's not able to be

00:50:03
as creative or original as you might think.

00:50:05
In the same exercise that ChatGPT is there.

00:50:09
We can compare and contrast in five years.

00:50:11
What are specific predictions, less colorfully said, that you

00:50:16
would make in five years? I mean, I definitely think to

00:50:23
come back to the self driving thing, it seems like we're

00:50:25
actually going to have fully self driving cars in five years.

00:50:27
I mean, I know you could say, we are, in cities.

00:50:29
I think the big question there is just how much you know Waymo

00:50:33
can, people think,

00:50:34
take what they've learned. I think if today they

00:50:37
can drive around San Francisco I think they'll be able to drive

00:50:39
between cities, and in the vast majority of cities or

00:50:43
whatever, at that point at least. I don't know if in the United

00:50:45
States it has to be trained on different data than in other

00:50:47
countries or whatever, but I think that...

00:50:50
I mean, the fact that we have live Waymos, like, dropping

00:50:53
people off on my street every day just makes you think you're

00:50:55
going to be able to go anywhere in a self driving car in five

00:50:58
years. Which again, I lost the bet the

00:50:59
other way on that last time, so I'll probably lose the bet

00:51:02
somehow this time. It's a problem.

00:51:04
And also, just like car turnover is such a long life cycle.

00:51:08
Depends how many, what percent? It's not that I'm

00:51:11
arguing for 100% penetration of those. It's more that

00:51:14
I will argue that you know you will be able to take one you

00:51:17
know, as an Uber or whatever in any major US city.

00:51:20
And you know, I don't know whether or not you'll be able to

00:51:23
buy one. That's sort of an interesting

00:51:24
question as to whether or not it'll be an ownership model

00:51:26
versus an Uber type model or a lease model or whatever you want

00:51:29
to call it. But I think they'll be like you

00:51:31
know, available for use. So you know at large scale

00:51:35
across the major U.S. cities and also between cities, right.

00:51:39
It just seems like we're clearly, like, pretty much there,

00:51:43
assuming Waymo and Cruise aren't, like, lying about the

00:51:45
capabilities of their vehicles.

00:51:47
Interesting that your main prediction is around self-driving.

00:51:50
I was thinking about something. I feel pretty good about and I

00:51:52
agree with that. I do think this view of you will

00:51:56
wake up and there will be essentially your personalized

00:51:59
assistant. Maybe it'll be in the smart

00:52:01
speaker or in your wall or your mirror or something, I guess.

00:52:05
I just, I don't know if it'll be Alexa or ChatGPT or something

00:52:10
brand new, but I do think, you know it's going to be a voice

00:52:15
probably driven experience. And by voice.

00:52:18
By voice. Well, at least in your home and

00:52:20
probably by voice. You're going to be typing to it.

00:52:22
I didn't love it. It doesn't make

00:52:24
sense. Very efficient.

00:52:25
People love it. It's very... So people do not love

00:52:27
typing, no. Typing is way less efficient

00:52:30
than talking. No, I mean, you'll be

00:52:33
able to type to it. But I just think like most of us

00:52:37
will be like talking to this AI assistant and it'll be I

00:52:41
generally think there will just be one that I use every day

00:52:43
maybe there will be more than one in the market like that

00:52:46
people use. But I will just have one that

00:52:49
learns my preferences and becomes personalized to me and

00:52:53
creates kind of a history with me.

00:52:56
And it might even actually recommend other assistants for

00:53:00
certain use cases, right? If I have to go prepare for

00:53:03
Max's congressional testimony, I will maybe use a separate bot

00:53:07
for that or something. But yeah, specialize in.

00:53:11
Testimony bot. But I do think, yeah, I feel

00:53:14
pretty strongly we're going to have conversations, voice

00:53:18
conversations with our personalized assistants every

00:53:20
day. The prediction I'll make, this is

00:53:23
sort of different than what's been said is underlining that I

00:53:27
think the average person wants consumption more than creation.

00:53:32
And that TikTok is in some ways the actual most used thing in

00:53:37
the AI world. And this sort of, yeah, ChatGPT got at this a

00:53:41
little and its prediction about me where it's very good at

00:53:44
sorting things that I want. I I think in five years we'll

00:53:49
see at least the beginnings of like pure AI generated social

00:53:54
accounts. I mean, you're already seeing

00:53:56
sort of like these, like, women and cartoons that are like.

00:53:59
This seems like they're gaming like the Instagram algorithm

00:54:02
with like totally sort of machine made.

00:54:05
But, like, I think even like video within five years, I think

00:54:09
we'll have, yeah, you're on TikTok and it's just here is a

00:54:12
generated video, and it's warring with actual creators.

00:54:16
I think that would be like potentially a generationally

00:54:19
culturally interesting period where like you have like young

00:54:23
people coming up where they're just like.

00:54:25
Being fed sort of what they want outside of it could create like

00:54:29
a really weird type of humor where, like, they're

00:54:32
Used to? Yeah.

00:54:33
Good. There's like an AI.

00:54:34
There's like a bunch of AI sort of celebrities and accounts and

00:54:38
they create content and they interact and they host, you

00:54:41
know, podcasts. And it's all like effectively A/B

00:54:44
tested. It's like they put out like 100

00:54:46
versions, build, you know, a ton of versions of the video and

00:54:49
they see which one is getting engagement and then they slowly

00:54:52
funnel into those, just like TikTok chooses which videos get

00:54:54
surfaced. Yeah, that approach is in the

00:54:57
creation. It makes most sense for short

00:55:00
term, short-form. But, you know, the day you can make like a

00:55:03
movie about it, like then we're like killing an American industry

00:55:06
but. Yeah.

00:55:07
Well, the question is, do you think we're going to get

00:55:09
personalized? I mean, you mentioned they're

00:55:10
going to A/B test everything or whatever.

00:55:12
But is it that there will literally be 8 billion different

00:55:16
versions of each piece of content for each person?

00:55:19
If there is no such thing as a piece of content anymore, like,

00:55:23
it's just you get a version of some concept that is

00:55:27
perfect for you, right? I mean or do we still have some

00:55:30
sort of like value in being like, oh, did you see that video

00:55:33
the other night about that thing and you can talk about it and

00:55:35
you can do, I think? Relationships, I mean, people

00:55:37
are going to try both. I mean, I think one, there's

00:55:40
like a limited, there's like, at some point there aren't

00:55:44
enough humans, right? Or there's, like, there is

00:55:48
like a data shortage, right? It's hard on some level.

00:55:50
If you do something population wide you can really test and see

00:55:54
what works like broadly. Whereas like running experiments

00:55:57
on me, you just do not get enough shots on goal necessarily

00:56:02
to be great at like the TikTok style.

00:56:04
So to me that leans a little bit less toward this sort of obsessive

00:56:08
personalization and more. I mean, obviously there are

00:56:11
subgroups, but not like to a person, more like subgroups.

00:56:14
Yeah, I think subgroups, like, I think it'll not just be like

00:56:18
training on your information, but other people who have

00:56:20
similar experience, similar behaviors that you do on the

00:56:24
app, right? It's able to use that

00:56:26
data for training too. My last AI-adjacent prediction

00:56:29
is. I mean this was in some of these

00:56:30
forward-looking GPT scenarios.

00:56:33
I mean, I do think you take five more years of development on AR

00:56:36
glasses or whatever, You know, I don't think they'll be as

00:56:38
lightweight as the glasses we wear every day.

00:56:41
It'll probably still kind of look like a VR headset in many

00:56:44
ways. But I do think people are going

00:56:46
to spend like hours per day inside a high quality Apple

00:56:49
Vision Pro-type experience. Because I think again, I mean, I

00:56:53
mean if the average American is watching five or six hours of TV

00:56:56
a day, which they are like. Why wouldn't you watch two of

00:56:59
those hours on a 100-foot screen in front of Mount Hood,

00:57:01
Washington or whatever, right. Which is basically the pitch

00:57:03
from Apple, you know or why wouldn't you watch personalized

00:57:07
AI generated TikTok content like from Eric's, you know, feed or

00:57:09
whatever, right. I just think that, you know,

00:57:12
screens getting better has been one of the true constants of our

00:57:15
lifetime, right? And the Vision Pro which is sort

00:57:19
of the infinite screen, the infinite canvas for visual

00:57:22
content is, is kind of the ultimate expression of that,

00:57:25
right. So.

00:57:26
I think that I'm more on the ten-year.

00:57:27
There will be quite a lot of time.

00:57:29
And I feel like that space has been dogged by limits of optics

00:57:34
and just sort of hard constraints.

00:57:36
And so a lot of our intuitions about software level

00:57:39
improvements don't translate and I would say I would take the

00:57:43
longer time horizon. On that maybe, Yeah, I don't

00:57:47
know. Apple seems to think they got

00:57:48
it, so I guess we'll all see. Yeah.

00:57:49
This is a smart Max take. When's Apple been wrong before?

00:57:52
I don't know. Trust.

00:57:53
I'm just saying that when Apple ships something like.

00:57:56
I mean, again with the iPhone, like, was there a touchscreen

00:57:58
that had ever worked before? No, there never had been, right?

00:58:01
I mean, I just think that, are you gonna get

00:58:02
that wrong? Is that out yet?

00:58:04
No. Next year.

00:58:05
Oh, I. No, you can't get it until next

00:58:06
year, but I'll definitely get one for sure.

00:58:09
Yeah, you're like, I can expense it.

00:58:10
I mean, for God's sake, right? Like you're like, we're a game

00:58:13
company. I would buy it out of my

00:58:15
hard-earned personal money. OK, I would.

00:58:18
I don't know. I guess, yeah.

00:58:20
I trust Apple when they make a.

00:58:23
I mean, this is a decade-long hardware bet and they waited a

00:58:26
decade instead of shipping five or seven years ago because they

00:58:30
didn't feel like it was good enough. Is there, besides

00:58:32
obviously that ChatGPT was getting us thinking about this

00:58:36
technology, Do you see any sort of AI connection or AI utility

00:58:41
or how do you see? I just again, think with these

00:58:44
characters that we're talking about, whether it's an assistant

00:58:47
or whether it's an in game character or whether it's a

00:58:49
virtual companion or whatever, a pet or a girlfriend or

00:58:52
boyfriend. Like if they are the size of a

00:58:56
real object that you know, if they're the size of an actual

00:58:59
virtual boyfriend, girlfriend, or a size of a, you know, a

00:59:02
Pokémon, whatever that actual size is and you can talk to

00:59:06
them, that's going to be a better experience than.

00:59:08
Looking at a tiny little screen in your hand, right?

00:59:11
And. I wanna.

00:59:12
Yeah, I wanna. Right, I mean, the.

00:59:14
Companion I've got. Yeah, yeah.

00:59:16
I mean, have you read The Golden Compass?

00:59:17
Right? They all have.

00:59:18
Like, yeah, yeah. Yeah, how do I get that?

00:59:21
But you want... that's part of self-projection though, also.

00:59:25
I mean, you want everybody to see that kind of thing, not

00:59:27
just. So then, yeah, sure, if you're

00:59:30
both in Apple Vision-class headsets.

00:59:33
Right. Amazing.

00:59:35
All right, this is basically our first episode.

00:59:39
We're gonna come out with probably five more.

00:59:43
I think next episode will be, what do we think of sort of the

00:59:47
apocalyptic vision of AI? Yeah.

00:59:50
Anything you guys would add on what to look forward to in

00:59:53
the next couple episodes? I'm excited to run this

00:59:59
conference. I think it was amazing the first

01:00:01
time. I think it's gonna be even

01:00:03
bigger and better this time.

01:00:05
Yeah, I genuinely believe this is like the biggest thing since

01:00:08
the Internet or the iPhone. So I think that we're all pretty

01:00:12
authentically pumped about what's happening in the space.

01:00:14
And I think the amount of stuff that's happened in the last six

01:00:16
or seven months has been mind boggling.

01:00:18
And it just feels like things are moving faster than at any

01:00:21
time in my entire life in any technology space.

01:00:23
So it's just super exciting to even be like remotely adjacent

01:00:26
to any of this, so. I'm at the heart of it at

01:00:28
Cerebral. At the heart of it, yeah, at

01:00:30
least physically. At the heart of it, working on

01:00:35
the other elements and being at the heart of it.

01:00:37
But yeah, physically for sure. Great.

01:00:39
Well, that's our episode. I'm Eric.

01:00:41
Newcomer. Max Child and James Wilsterman are my co-hosts and

01:00:46
co-founders of Volley. Thanks so much to Scott Brody,

01:00:50
who's been producing the episodes.

01:00:52
Shout out to Riley Kinsella, my Chief of staff who's super

01:00:55
involved, and Gabby Caliendo, who works at Volley, who's core

01:01:00
to the conference and making everything happen.

01:01:02
I think she made sure you guys had microphones and lights so

01:01:06
we could actually see and hear you.

01:01:08
Thank you to Young Chomsky as always for the theme music.

01:01:13
Please, we've got to build the feeds. Like, comment, subscribe on

01:01:17
YouTube, give us a nice review on Apple Podcasts and.

01:01:21
Go play Yes Sire or Song Quiz. And for me, subscribe to the

01:01:28
Substack, newcomer.co. That's the most important thing.

01:01:32
Thank you. All right.

01:01:33
We'll see you next week. Goodbye.

01:01:36
Goodbye. Goodbye.

01:01:37
Goodbye. Goodbye.

01:01:38
Goodbye.