AI Kills Us All (with Daniel H. Wilson)
Newcomer Pod · October 17, 2023 · 01:15:08 · 68.8 MB


What’s so crazy about this moment in artificial intelligence is that many of the most credible voices in AI think there’s a real chance that this all turns out really, really badly.

Anthropic CEO Dario Amodei recently pegged his “chance that something goes really quite catastrophically wrong on the scale of human civilization” between 10% and 25%.

That’s comforting.

Applications to attend the Cerebral Valley AI Summit close TODAY October 17.

Apply right now to be considered for an invitation!

On the series’ first episode we reflect on how generative artificial intelligence and large language models took Silicon Valley by storm.

So in our second episode of the six-part Cerebral Valley podcast, Max Child, James Wilsterman, and I played out the doomsday scenarios. We talked a lot about science fiction and how writers have imagined artificial intelligence turning dystopian.

In the second half of the episode, I talked with science fiction author Daniel H. Wilson. He wrote the books How to Survive a Robot Uprising, Where’s My Jetpack?, and How to Build a Robot Army. Wilson has also consulted with the military to help them game out how dystopian technologies might unfold.

Of course, even in the Anthropic CEO’s estimation, the most likely scenario is probably a more boring one: artificial intelligence doesn’t try to secretly destroy us as we sleep in our beds.

But the fact that there’s a chance is certainly worth considering.

I open our conversation with the parable “The unfinished fable of the sparrows” from Nick Bostrom’s Superintelligence.

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.

“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”

“Yes!” said another. “And we could use it to look after our elderly and our young.”

“It could give us advice and keep an eye out for the neighborhood cat,” added a third.

Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”

The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

Give it a listen.

P.S. I’m on my honeymoon right now in Japan. I was working frantically to record these episodes before I left. My chief of staff Riley Konsella is sending the episodes out for me while I’m gone. If you need anything while I’m away, you should email Riley.

Thanks in advance for being understanding that this newsletter is slowing down for my honeymoon. I’m going to dedicate myself to relaxing over the next two weeks so that I come back hungrier than ever.




00:00:10
Hey, it's Eric Newcomer. Welcome back to Cerebral Valley.

00:00:14
This is Episode 2. I'm here with Max Child and

00:00:17
James Wilsterman. We are going to dig into.

00:00:20
We're calling this episode AI Kills Us All, which will give

00:00:25
you a taste. It's not the small

00:00:28
questions. You know, what to do about

00:00:30
privacy or like you know who should be in charge.

00:00:34
It's like, what happens if we get artificial general

00:00:37
intelligence? What, how worried are we about

00:00:40
sort of the big doom and gloom? Guys?

00:00:43
Great. Great to be back with you.

00:00:46
Great to be back. Thanks for having us.

00:00:47
Great to be here. After my conversation with Max

00:00:49
and James, I will be interviewing Daniel H. Wilson,

00:00:53
the author of How to Survive a Robot Uprising.

00:00:55
Where's My Jetpack?, and How to Build a Robot Army.

00:00:58
And the bestseller, Robopocalypse.

00:01:00
So someone who has spent a lot of time imagining the doom and

00:01:05
gloom of machines getting powerful.

00:01:09
So he's an expert in this

00:01:11
speculation where we are mere amateurs, you know, with of

00:01:14
course, the help of ChatGPT. I was, you know, looking,

00:01:19
wandering the Internet, looking for the great thinkers on AGI

00:01:28
coming, and one of them is Nick Bostrom, who's been writing

00:01:32
about this for years, I think. So this piece is the essay from

00:01:35
his book Superintelligence. The essay is called The

00:01:41
Unfinished Fable of the Sparrows, which is very

00:01:45
enjoyable, and I think you'll get a lot out of it.

00:01:50
It was the nest building season, but after days of long, hard

00:01:53
work, the sparrows sat in the evening glow, relaxing and

00:01:57
chirping away. We are all so small and weak.

00:02:00
Imagine how easy life would be if we had an owl who could help

00:02:03
us build our nests. Yes, said another, And we could

00:02:07
use it to look after our elderly and our young.

00:02:10
It could give us advice and keep an eye out for the neighborhood

00:02:13
cat, added a third. Then Pastus, the elder bird,

00:02:17
spoke. Let us send out scouts in all

00:02:19
directions and try to find an abandoned owlet somewhere.

00:02:23
Or maybe an egg. A crow chick might also do.

00:02:26
Or a baby weasel. This could be the best thing

00:02:29
that ever happened to us, at least since the opening of the

00:02:32
Pavilion of Unlimited Grain in yonder backyard.

00:02:36
The flock was exhilarated and sparrows everywhere started

00:02:39
chirping at the top of their lungs.

00:02:41
Only Scronkfinkle, a one-eyed sparrow with a fretful

00:02:45
temperament, was unconvinced of the wisdom of the endeavor.

00:02:48
Quoth he: This will surely be our undoing.

00:02:51
Should we not give some thought to the art of owl domestication

00:02:55
and owl taming first, before we bring such a creature into our

00:02:59
midst? Replied Pastus: Taming an owl sounds like an exceedingly

00:03:03
difficult thing to do. It will be difficult to find an

00:03:07
owl egg, so let us start there. After we have succeeded in

00:03:10
raising an owl, then we can think about taking on this other

00:03:14
challenge. There is a flaw in that plan.

00:03:17
Squeaked Scronkfinkle. But his protests were in vain,

00:03:20
as the flock had already lifted off to start implementing the

00:03:24
directives set out by Pastus. Just two or three sparrows

00:03:28
remained behind. Together, they began to try to

00:03:31
work out how owls might be tamed or domesticated.

00:03:35
They soon realized that Pastus had been right.

00:03:37
This was an exceedingly difficult challenge, especially

00:03:40
in the absence of an actual owl to practice on.

00:03:44
Nevertheless, they pressed on as best they could, constantly

00:03:48
fearing that the flock might return with an owl egg before a

00:03:53
solution to the control problem had been found.

00:03:58
All right, what do you guys make of that?

00:03:59
It's not quite Plato, but it's...

00:04:04
Heavy-handed, a little on the nose. I feel like this might

00:04:12
have been more, you know, mind-blowing in 2014

00:04:16
when it was written, I guess, but nowadays it's just

00:04:19
like, oh yeah, that's what we're talking about all the

00:04:21
time. But I do think, I do think this

00:04:27
obviously gets at this question of, you know, should you ever,

00:04:31
should you even go down this path at all, right.

00:04:33
I think that's the core question in trying to develop

00:04:37
AGI. Or is it so risky that you

00:04:41
should, you know, not do that at all

00:04:43
unless you are 100% confident you figured out how to tame the

00:04:47
owl, right, in advance. Is that your guys' reading as

00:04:51
well? Yeah, well, I think just to like

00:04:54
rewind one second, I think coming back to this like AI

00:04:57
kills us all point. I think there's two big

00:05:00
questions in the sort of will AIs kill us all narrative,

00:05:03
right. Or maybe three.

00:05:05
I guess one will be are we capable of developing an AI

00:05:09
that's smarter than humans? Like, two will be, if we're capable

00:05:14
of developing an AI smarter than humans, would it want to kill

00:05:17
us? And then three being, if it

00:05:20
wants to kill us, can it figure out some way to kill us?

00:05:23
Basically, right. I mean like and I think those

00:05:25
are all kind of three different questions and maybe we all like

00:05:28
agree on this podcast on, like, a certain one or two or three

00:05:32
or different elements of them, but like.

00:05:34
I think it's helpful to. Yeah.

00:05:35
To break it down that way because.

00:05:37
And if you do that, this, this story, the parable, the Fable

00:05:40
that I just read sort of assumes a couple of those, right.

00:05:43
It assumes by using an owl, it assumes that such a thing could

00:05:48
exist, right. And that it, it's a dangerous

00:05:52
being that we actually need to figure out in its essence, it's

00:05:55
dangerous and we need to figure out how to control it before we

00:05:58
create it, right. So it sort of assumes into the

00:06:00
story certain things exactly that you're framing up are open

00:06:04
questions. I think they're open questions.

00:06:05
I mean, in the parable, basically the owl both is smarter and more

00:06:08
powerful than the sparrows, definitely wants to apparently

00:06:13
kill or very likely potentially wants to kill the sparrows and

00:06:15
certainly has the capability given that it's bigger and

00:06:17
stronger or whatever, right. So to your point, it's sort of

00:06:19
like you know, the answer is kind of written into the story,

00:06:22
whereas I think in real life like I think can we build AI

00:06:25
smarter than us. Part one is actually a pretty

00:06:27
interesting and contentious question and then the other ones

00:06:29
are also pretty interesting and contentious as well.

00:06:31
But just to start on part one like.

00:06:34
Do you guys think we can build an AI that's smarter than

00:06:36
humans? I guess like, let's be very

00:06:40
specific about how we define that, right?

00:06:42
Because I think it's a pretty critical question.

00:06:45
Come on. Well, what's your definition of,

00:06:49
I guess, smarter than humans? Is that a?

00:06:52
Are you saying super intelligence?

00:06:53
We're talking about, say, an IQ that has never been achieved

00:06:56
by a human, right? You know, I.

00:06:57
I think that like I think there's kind of this concept of

00:07:01
AGI, which is better than, I would say, like, the

00:07:06
median human at, you know, most tasks and then there's super

00:07:11
intelligence, right? Which is, you know,

00:07:14
better than the best humans at most tasks, right.

00:07:17
Is that how you guys think about it as well?

00:07:19
Yeah. Let's say better than the best

00:07:20
humans at any mental task, or any relevant mental task, right?

00:07:25
I guess I believe that this is going to happen, I think.

00:07:29
You know, it would be naive to say, well it won't happen.

00:07:32
So then it's just a matter of a question of like when it will

00:07:35
happen and whether there are any kind of domains where it feels

00:07:40
like it won't happen for a long time, right?

00:07:43
Like are there, you know, any domains that we really think you

00:07:47
know it's going to take much longer to achieve super

00:07:49
intelligence than other domains, right.

00:07:53
So maybe it will be the best coder in the world.

00:07:57
But will it also be the best screenwriter?

00:07:59
Are those things gonna happen at the same time or are they gonna

00:08:02
happen, you know, at different times?

00:08:05
Yeah, I mean, first of all, you know, AI is smarter than the

00:08:11
smartest humans in some domains right now.

00:08:14
already. Chess, Go, games particularly.

00:08:18
Right. But that is not a general

00:08:20
intelligence. No, no, I'm just saying, just.

00:08:22
Yeah, but in specific domains it's better.

00:08:24
I I think it will continue to cleave off domains, right.

00:08:28
I think like sort of beat all humans at the LSAT type thing

00:08:32
seems very soon. I I think what feels far away is

00:08:37
sort of and it fits into the fears of an AGI is sort of the

00:08:42
strategic planning like what an AGI looks like that sets like

00:08:45
broader priorities across like in a game it's like you want to

00:08:49
win at chess, like straightforward, but when it has

00:08:52
to you know sort of solve general optimal outcomes.

00:08:56
Like it feels like we are so far from a being that could do that.

00:09:01
Or a computer that's interested in sort of deciding whether it

00:09:05
should play chess or do something else and like why

00:09:08
especially if we're not again just articulating what the goals

00:09:11
are. So I guess I'm, I'm not as

00:09:14
bullish on in the next decade a sort of coherent overall sort of

00:09:19
being that feels like smarter than a human.

00:09:22
I mean the next decade, though. I mean, if we're already

00:09:24
negotiating this down to the next decade, it would seem like

00:09:27
you're pretty confident that it's gonna happen.

00:09:28
Like lifetimes. I think it's gonna happen, OK.

00:09:30
So you think in our lifetimes that will happen?

00:09:32
Yeah. What about my my earlier

00:09:35
distinction though, Eric, do you think, you know, an AGI

00:09:38
that is a general intelligence that is basically better than

00:09:41
the median human at most things, or equal to it, is more

00:09:46
likely to happen in the next decade.

00:09:48
Is that five years? I just think it's hard to do it.

00:09:51
I just don't see it as sort of an overall agent or, like, an

00:09:55
overall decision, you know what I mean?

00:09:57
I I see it like winning at a bunch of discrete tasks.

00:10:01
OK, sort of a sign. So this kind of gets at more of

00:10:04
like your definition of AGI potentially would require some

00:10:09
sort of Turing test type thing that it can really just convince

00:10:13
you that it is a human. Or I guess I'm trying to think

00:10:15
of like an overall thinker with like sort of priorities and sort

00:10:20
of. But at the end of the day, like,

00:10:22
just to make any predictions about this, it has to kind of be

00:10:25
a falsifiable assertion of like, what is AGI, right?

00:10:30
And it's not that easy to to define that except by some sort

00:10:34
of like test, right? Otherwise you end up in this

00:10:37
like era, or this pattern we're always in, where we are

00:10:41
constantly moving the goal posts on like what AGI is, right?

00:10:44
Right. Yeah.

00:10:46
I mean, I think you guys are both, like, in the

00:10:49
same place, which I am, which is like it's very hard to figure

00:10:51
out the timeline, but like to me it's definitely happening

00:10:54
sometime in the next 100 years, let's say.

00:10:56
Whether it's 10 or 20 or 50 or 100, it is a little bit harder

00:11:00
to guess, but like it is kind of a math problem, right?

00:11:03
Where we have brains that are just some giant collection of

00:11:06
neurons, right? And there's, you know, millions

00:11:09
and millions of them or whatever, whatever order of

00:11:12
magnitude it is, right? And essentially we're training

00:11:15
these large language models on, you know, a neural network in

00:11:18
some ways, right? Which, you know, it's not a one

00:11:21
to one copy of how a neuron works in

00:11:24
our brain, which is way more complicated, but.

00:11:26
Fundamentally, you're sort of like, if I can just throw

00:11:28
numbers at this problem of like building a brain essentially and

00:11:32
like get more GPU cores, like just throw absolute just

00:11:37
computing power at this thing. Like the story of our entire

00:11:39
lives has been computing power just keeps going up

00:11:41
exponentially, right? You know, Moore's Law or

00:11:43
whatever, right? And so to me, it seems like

00:11:46
you'd have to believe there's something fundamentally

00:11:48
different about the way a human brain works or a neuron works.

00:11:51
To the way you know a neural network or you know the way GPU

00:11:55
transformer model works to believe that we can't just throw

00:11:58
numbers at this thing until it gets smarter than humans.

00:12:01
And so I guess I do believe that we throw enough numbers at it

00:12:03
and get smarter than humans. I mean, humans do have

00:12:06
multiple sort of parts of the brain that developed in

00:12:09
different ways that interact with each

00:12:11
other. Yeah.

00:12:12
But they're still kind of built on the same cell structure.

00:12:15
I mean, yeah, yes, but you could do that in an AGI

00:12:18
too, right? You could have different lobes

00:12:19
or whatever of the imaginary intelligence, right?

00:12:21
And then broadly, we just really don't understand how to pinpoint

00:12:26
the experience of human consciousness, like qualia or

00:12:30
whatever. I talked about this at length

00:12:31
with Reid Hoffman when I interviewed him, and, like, so, but

00:12:36
we're going to, but we're very reluctant because we can

00:12:38
experience our own consciousness to attribute it to animals, and

00:12:42
we really have no like road map for how we'd ever identify it in

00:12:46
machines. And that's sort of different

00:12:48
than the Super intelligence thing, but it's certainly part

00:12:50
of what you want in the sort of like does it have like a real

00:12:53
existence sort of question, which I don't even have a road

00:12:56
map for how we'd identify. Coming back to the core topic of

00:12:59
the episode, like, will AI kill us all?

00:13:01
Like, it doesn't really matter if it's conscious or not, right?

00:13:04
If it ends up having the power and the desire to kill us,

00:13:06
right. So like, I do think that's an

00:13:09
interesting question and I don't know, maybe that's a different

00:13:11
episode. But like to me that's not,

00:13:13
that's not worth getting hung up on in the like is it going to

00:13:15
kill us all thing like definitely.

00:13:17
I mean, this sort of classic example being the paper clip

00:13:21
maximizer, right? The machine, the fear that we

00:13:25
just program this really smart, you know, LLM and tell it like,

00:13:29
oh, your only objective is to maximize paper clips, you know,

00:13:34
And then it goes about it and then it's like, well, the humans

00:13:36
are hurting sort of the maximization of paper clips.

00:13:39
Let's get rid of them. I mean, famously, like I, Robot by

00:13:43
Asimov. It's like robots are supposed to

00:13:46
protect humans, but then humans all kill each other.

00:13:49
So it's like, oh, we need to protect the humans.

00:13:51
We sort of need to enslave them in order to ensure their

00:13:54
protection, you know. So it doesn't necessarily

00:13:58
need to have the depth of thinking.

00:13:59
You know, humans would see sort of what we would

00:14:02
perceive to be flawed reasoning in those things, but it would

00:14:05
still. But you could see why a machine with

00:14:08
certain objectives could operate.

00:14:10
Well, I think it's sort of like saying that these

00:14:13
artificial intelligences can have totally different

00:14:17
orthogonal objectives that don't require a consciousness, right?

00:14:21
Like they just for some reason develop a different objective.

00:14:24
Than we have as humans, and they're capable of bringing

00:14:29
about that objective to the detriment of humanity,

00:14:32
basically, right? Yeah, I mean, a nuclear weapon

00:14:35
doesn't have consciousness, but it's still very effective at

00:14:37
killing people, right? That's its mission, so.

00:14:41
Yeah. I mean, so I, it seems like we

00:14:43
all agree, OK, we're going to achieve super intelligence at

00:14:45
some point in the next, let's say, century or an intelligence

00:14:47
that's smarter than humans and enough domains that it's, it's

00:14:50
meaningfully, you know, more capable than we are, right.

00:14:52
Yeah. So second question, like, you

00:14:56
know, would it want to kill us? I think this is like one of the

00:14:58
most interesting ones and you alluded to the sort of paper

00:15:01
clip maximization as one example of how it might just end up

00:15:04
killing us as like a side effect of some other mission it's on.

00:15:07
Right, but. I guess A, like, how likely do

00:15:10
we think that is? And then B, like, do we think

00:15:13
that we'll be able to constrain its missions in some way so that

00:15:17
it wants to keep us alive, you know, even if it's going off and

00:15:21
executing at scale? I do actually, you know, worry

00:15:25
about this. I guess that the AI will have

00:15:29
some orthogonal objective that is detrimental to humans.

00:15:33
And then there's like a secondary question of like, can

00:15:35
we do anything about that, right.

00:15:37
I mean, to read between the lines of what you're saying,

00:15:39
you're like, if Anthropic and OpenAI alone are left in charge,

00:15:42
we'll be fine. But if Facebook's Llama runs

00:15:45
around, we're screwed. I mean, that's sort of, I mean, I'm

00:15:47
interested to see what Mustafa, who's going to speak at Cerebral

00:15:50
Valley, you know, he's been much more reticent about open source,

00:15:54
potentially, because of this lack-of-control issue.

00:15:57
I do think that eventually if we start to see, you know, the

00:16:03
risks like that, we all agree that.

00:16:06
You know, that we're headed towards this superintelligence

00:16:08
and we start to see how dangerous that may or may not

00:16:12
be. Like, I just don't think we know

00:16:13
yet how dangerous it will be. Like it's obvious to me like you

00:16:17
want to maintain the ability to regulate this.

00:16:20
And you know, just a lot of people, I think I saw Brian

00:16:24
Armstrong from Coinbase, you know, tweet that he just thinks

00:16:27
there should never be any AI regulation or I don't know if he

00:16:30
said never, but he was basically like regulation has such a poor

00:16:33
track record around innovation. Like, I just think that's sort

00:16:36
of naive without us knowing yet, like how dangerous it can be or

00:16:42
not. You just need to have the best

00:16:44
AI on our side. You know, like when the killer

00:16:46
one comes, we need to have the benevolent one.

00:16:48
The best defense against a bad guy with an AI is a good guy

00:16:52
with an AI. That's how Americans

00:16:55
actually think. You know, like.

00:16:57
The right to bear AI, baby, Yeah.

00:17:01
That was kind of Sam Altman's perspective when he started Open

00:17:04
AI. Like, I mean, at least my

00:17:05
reading of it when he was getting

00:17:07
started was calling it open AI because he believed putting AI

00:17:11
in the hands of humanity democratizing it was the best

00:17:15
way to go about things. And I think he's changed his

00:17:18
mind because of this exact thing that I'm saying, which is it's

00:17:21
actually pretty dangerous or potentially dangerous.

00:17:24
The fact that OpenAI isn't open at all, I know, is like

00:17:26
something that has been beaten to death, but it's just still

00:17:29
ridiculous to me. It's hilarious, kind of, yeah.

00:17:32
Yeah, cuz it's a historical artifact or

00:17:34
something, right. But I mean, I guess

00:17:37
putting aside the regulation thing and the open source versus

00:17:40
closed source thing, I just think that if you believe that.

00:17:43
A substantially diverse group of people from different countries

00:17:47
within America, open source, closed source, can develop a

00:17:49
super intelligence, right? Like if you believe there's

00:17:52
going to be more than like five of these, which I think there

00:17:55
would be like in any scenario in which this actually happens,

00:17:58
like isn't it just very likely that one of the five or ten or

00:18:02
hundred or thousand of these is a bad AI?

00:18:05
I mean, I just don't.

00:18:06
I don't think it matters if you have 999 aligned good AI and one

00:18:09
bad one, if that one bad one is capable of destroying the world,

00:18:12
right? I mean like.

00:18:13
Again, I just think that mathematically, if it's

00:18:17
possible to create superintelligence, like, someone is

00:18:20
going to create a super intelligence that's really bad,

00:18:21
right? And I'm not sure like our

00:18:23
existing analogies around, like, guns or nuclear deterrence or

00:18:27
whatever sort of game theory you want to play out here like apply

00:18:30
where, like, if any one bad AI can get access to the tools it

00:18:34
needs to kill everyone, it will succeed, right?

00:18:38
Like, I don't think all the good AI can stop it, but maybe

00:18:40
I'm going too far out on this. I mean, I agree with it.

00:18:43
I think so. So your take is that obviously

00:18:46
yes, the answer is yes. You know, if we achieve super

00:18:50
intelligence, there's a pretty big risk to that.

00:18:53
I mean, in that world, you would have to create some

00:18:56
way for the good AI to regulate the bad AI.

00:18:58
Basically, like if you believe we're in a world where humans are

00:19:01
no longer in the driver's seat, right?

00:19:03
We're in the back seat, right? Like you now have to basically

00:19:05
try to figure out how to make sure that the AI that are in the

00:19:07
front seat can control the other ones.

00:19:10
I mean part of what Max is basically referencing just, you

00:19:13
know, to give a little context, is Eliezer Yudkowsky.

00:19:17
I mean, I actually pulled a quote, you know: To visualize a

00:19:20
hostile superhuman AI, don't imagine a lifeless

00:19:23
book-smart thinker dwelling inside the Internet and sending

00:19:26
ill-intentioned emails. Visualize an entire alien

00:19:28
civilization thinking at millions of times human speeds,

00:19:32
initially confined to computers in a world of creatures that are

00:19:35
from its perspective, very stupid and very slow.

00:19:38
A sufficiently intelligent AI won't stay confined to computers

00:19:41
for long. In today's world, you can e-mail

00:19:43
DNA strings to laboratories that will produce proteins on demand,

00:19:47
allowing an AI initially confined to the Internet to

00:19:50
build artificial life forms or bootstrap straight to

00:19:53
postbiological molecular manufacturing.

00:19:55
I don't know. My point of view is just like, I

00:19:59
think him and sort of what you guys are getting at, it just

00:20:01
feels like you're like, oh it's super intelligent.

00:20:03
It's like God, like, you know. I still think, OK, it's way

00:20:06
smarter than us. But we're like a bunch of

00:20:09
intelligent beings, even if it's smarter than us.

00:20:11
I mean, it requires that it's going to act stealthily and

00:20:15
sort of like start manufacturing like a biologic version.

00:20:18
You know, like, I just feel like it's more likely that we're

00:20:20
like, this thing doesn't really listen to us and like, but then

00:20:24
like the GPU clusters are like, it's trying to fire them all.

00:20:27
And it's just like, oh, there's only so much computing power

00:20:30
that, you know, it's still a being that requires like

00:20:33
resources. You know, like humans are

00:20:35
smarter than everything else. We still die. There are

00:20:37
just, like, physical limits to things, and that

00:20:42
it's not going to have, like, God-like powers.

00:20:44
It's going to be limited by the amount of compute it can access

00:20:47
and sort of our cooperation at various points.

00:20:50
Like, I don't know. I don't think it's zero to

00:20:53
God. I feel like you're painting.

00:20:55
Correct me if I'm wrong, but you're painting a picture where

00:20:57
we have created super intelligence.

00:21:01
And you agree with Max's point that some of them are misaligned

00:21:05
and probably would want to kill us if they could.

00:21:07
However, you're saying that humanity's fine because those

00:21:13
misaligned AIs can't get enough resources.

00:21:16
Is that accurate? Right, I mean, it's just hard to know, right?

00:21:20
We don't. We don't.

00:21:21
We don't know what to do. That's definitely a gamble.

00:21:24
I mean, even if you believe that you're like.

00:21:28
You've gotten pretty close to the edge.

00:21:33
Well, I'm definitely a fatalist on AI, if we're

00:21:36
getting to like our deep, like what is our position.

00:21:38
I just think, like many people in tech, that the

00:21:42
regulation is pretty bad and, like, only going to

00:21:46
stop the good guys, that this is going to keep going and that I'd

00:21:50
rather the best people we have try to figure it out rather than

00:21:54
like operate in the shadows. Yeah, I tend to agree with you

00:21:58
and I probably would agree that we don't need a lot

00:22:02
of regulation right now on, you know, who can train

00:22:08
these models or something like that.

00:22:11
I also wouldn't rule out, like, needing this in the future if, in

00:22:14
a year from now or two years from now I start seeing you know

00:22:17
a lot of, like, evidence that the open source community

00:22:20
has created super intelligent misaligned AI, right.

00:22:24
And I think people are kind of, you know, going a little bit too

00:22:28
far in the direction of saying this, you know, we don't need

00:22:31
any regulation here.

00:22:33
When it's a new technology, we just don't know what it's going

00:22:36
to be like. We don't, you know, and it is

00:22:39
a little bit like. When we were, you know, putting

00:22:43
the right to bear arms in the Constitution without knowing

00:22:45
what arms meant in the future, right?

00:22:48
Like how, how, how dangerous they may or may not be.

00:22:52
So I think that I'm just a little nervous about, you know,

00:22:55
assuming too much about whether we will need regulation in the

00:22:59
future. I mean, to come back to

00:23:02
Yudkowsky for a second, I mean he is sort of the progenitor of

00:23:06
this whole AI doomerism concept or whatever.

00:23:09
I mean, I've read a bunch of his stuff. He seemed to almost, like,

00:23:11
at one point, threaten blowing up GPU.

00:23:14
Right, yeah. Well, he had a Time article where

00:23:17
he was saying in his world of protecting humanity, like one of

00:23:23
the only things you would have to do would be to.

00:23:27
You know, protect, you know, prevent rogue data centers from

00:23:30
training models. And, you know, it'd be

00:23:32
justifiable to like, bomb them essentially.

00:23:34
And he was saying, you know, the entire kind of geopolitical,

00:23:41
you know, consensus should be that it's worse to train these

00:23:45
models at some level than it is to, like, worry about nuclear

00:23:49
proliferation. I think.

00:23:52
I mean, having read a bunch of his stuff, I feel like he has

00:23:54
pretty well convinced me of sort of all three of these

00:23:58
things. I guess: that we'll have a super

00:23:59
intelligence, that some of them at

00:24:03
least will desire to harm us, and that they'll figure out some way

00:24:07
to do it. I guess I'm sort of like I'm a

00:24:09
little bit of a fatalist in the sense that I I think this is

00:24:11
basically an unstoppable train at this point that has like left

00:24:14
the station. Like, I don't think

00:24:16
Bombing data centers is like a viable solution to this problem

00:24:20
or breaking GPUs or whatever, certainly not in, like, the

00:24:25
human political universe that we actually operate in today.

00:24:28
I think my attitude is just that, like, luckily, I guess,

00:24:34
the future is really hard to predict, right?

00:24:36
I mean, I know that's like a very obvious point, but if you

00:24:39
had told someone in 1940 that we were going to invent nuclear

00:24:42
bombs and they had really thought through all the,

00:24:45
you know, the potential problems in the future caused by those

00:24:48
devices, I think they could have been pretty fatalistic and I

00:24:51
think that would have been a pretty accurate read on like how

00:24:53
bad it was that we invented nuclear bombs, right?

00:24:56
Like knock on wood, so far we're still here.

00:24:58
So my fear is that all the stuff Yudkowsky

00:25:01
and some people say is going to happen.

00:25:02
Nuclear bombs are good, like they brought about like peace.

00:25:06
We haven't had world wars, you know. But my hope

00:25:09
is that we get lucky, and that something about this future that

00:25:13
Yudkowsky or the AI doomers are sketching out,

00:25:16
there's some missing link in the logical chain, or there's some

00:25:20
piece of the way deterrence or alignment or whatever ends up

00:25:24
working, that we get our ass saved.

00:25:27
But I'm kind of betting on luck at this point.

00:25:29
Like, I don't really think we're capable of stopping this.

00:25:31
So we're all fatalists, basically.

00:25:35
I mean there is like the human cloning, you know, people

00:25:38
brought this up to me. Like human cloning is an example

00:25:41
where people have, you know, America isn't cloning humans.

00:25:44
It seems like we could. Like, why? Why is that an area where self-

00:25:49
regulation seems much more possible?

00:25:53
I feel like there are areas where we have stopped.

00:25:56
That's still pretty hard, right? I mean, like, I bet that will

00:26:00
happen sometime in the next 50 to 100 years, right?

00:26:02
I don't know. I just feel like I don't know

00:26:04
how much desire there is for that.

00:26:06
But I I feel like once it's doable, people will do it.

00:26:08
My last thought, on one positive future scenario that I sort of

00:26:11
believe is possible is like, you know, we live on this planet

00:26:15
with lots of other organisms and creatures as you guys described.

00:26:18
You know, we have pets, we have dogs, whatever.

00:26:21
Like. Why do we have dogs?

00:26:23
Like, is there any really good reason?

00:26:24
Like the question is like, could we end up being the pets of the

00:26:27
AI? Like, could they want

00:26:29
to keep us alive for some reason?

00:26:30
And that we're a little bit different?

00:26:31
We're a little bit interesting to them, more fun to have around

00:26:34
in some scenarios, but like, in the end we're the AI's pets, you

00:26:37
know? Or or maybe we're the ants and

00:26:39
they just don't care to kill us all because it would be such a

00:26:41
pain, You know, right? I think that's more likely than

00:26:45
not. Right.

00:26:46
I think that is the upside scenario, is like we're

00:26:49
pets or we're ants or something compared to this.

00:26:51
More likely than not just the general, like the AI does not

00:26:55
hate us, you know what I mean? Like both.

00:26:57
We helped create it. Like we set a lot of its initial

00:27:01
value system, presumably, like it, you know, relies on us

00:27:05
for a lot of things early on, like, I don't know, maybe a lot of

00:27:08
reasons. Yeah, maybe we're just different

00:27:10
in some sort of interesting way, right, in that we're not, we're

00:27:13
not wired exactly the same. And so, you know, it's just

00:27:15
interesting to have us around because we act a little bit

00:27:17
differently or or something like that.

00:27:19
And AI is not like an evolutionary being, like, you

00:27:22
know, we are driven by these like evolutionary prerogatives

00:27:25
that make us sort of competitive and you know, worried about

00:27:29
other sort of genetic codes.

00:27:32
It it's just such a separate type of thing.

00:27:35
It's like hard to extrapolate from, I don't know, even beyond

00:27:40
humans like animals to what an AI would be like.

00:27:44
OK, so I guess we all agree we're all gonna die or be AI's

00:27:50
pets. But it seems like we haven't

00:27:53
really narrowed in on how likely this is to occur in any

00:27:56
meaningful time frame. Like I think that to me is

00:27:59
pretty interesting. You know, if this happens

00:28:02
in 10 years or even 20 years or, you know, I think that's pretty

00:28:06
different from how I might go ahead and live my life, you

00:28:11
know, for the next decade, I guess.

00:28:13
How about you guys? Like, do you think it's even

00:28:15
worth thinking about that you know right now and how you live

00:28:18
your life or not really? I don't know.

00:28:23
I don't know. I just,

00:28:24
I just, I can't,

00:28:25
I can't decide, like, if I would act in any different ways.

00:28:28
I think, you know, being close to it is appealing and like you

00:28:31
know, this is a technology that I believe in, unlike crypto.

00:28:35
So it's like oh run to where, you know, the actual interesting

00:28:39
thing that could really revolutionize humanity is. Like, I

00:28:43
want to be around it, I guess selfishly.

00:28:45
So, you know, host an AI conference with you guys. Run

00:28:48
into the burning building. Will AI kill us all?

00:28:54
In some ways is great PR for the AI industry, even though it's

00:28:59
sort of bleak, because it suggests, it takes as a premise,

00:29:03
that artificial intelligence is at a really amazing point and

00:29:08
super powerful and is poised to like be extremely disruptive.

00:29:12
And so if you're an investor and you're like, well, I can't save

00:29:15
the world, I will try to profit off of it.

00:29:17
In the good scenario, Will AI Kill us all?

00:29:20
Is definitely a good motivator to like deploy capital into AI.

00:29:24
Like, what do you make of the fact that sort of the doomerism

00:29:27
is like great PR for the actual existing, you know, for vector

00:29:34
database companies, it's a good message, you know, I

00:29:37
don't know, what do you make of this?

00:29:38
This is like war profiteering. Is the motivation here?

00:29:42
No, I'm just saying it's like, you know, the media loves it.

00:29:45
Like Ezra Klein, you know, talks about it all the

00:29:47
time. But like in some ways it is a

00:29:49
good, it's good marketing for the AI. It could be just, like, you know, a

00:29:55
fairly mundane technology that we're nowhere close to the jump

00:29:58
to AGI, like the Reid Hoffman interview.

00:30:01
He's not willing to sort of commit to any near time

00:30:05
horizon that AGI is coming. It could just be like it's a

00:30:09
fairly incremental technology. I think it's a good fundraising

00:30:13
pitch to say, hey, like, we're trying to prevent, you

00:30:17
know, AI from killing us all. Like, we're the good AI guys,

00:30:20
right? I mean, like, yeah, you're like,

00:30:21
this is life or death, doing this technology that is

00:30:24
so powerful. Powerful.

00:30:26
So people are worried that it's going to kill us all, you know,

00:30:29
I mean, I feel like that's a good, like, you know, it's a

00:30:33
it's a conversation that's good for industry.

00:30:35
Is there an analogy to another industry where you think

00:30:38
that this has occurred? I'm just.

00:30:40
I'm not as convinced, I guess. Like, is it great for the oil

00:30:44
and gas industry right now that, you know, climate change is?

00:30:48
Well, you could argue, you know, like sometimes, like when

00:30:52
parents, like, freak out about, like, teen stuff, that's

00:30:55
obviously not that dangerous for them.

00:30:58
That in some ways it, like, drives teens closer to it.

00:31:01
And that it's like this is totally misunderstood by the

00:31:04
sort of authorities, like. But it gets it in the

00:31:08
news all the time. It reinforces that this is

00:31:10
something that's sort of going on, you know?

00:31:14
I mean I think it's like pros and cons to your point, like I

00:31:17
think it might drive a lot more regulation in a shorter time

00:31:20
frame than they would have otherwise. And

00:31:23
to your other point, it will probably be kind of dumb

00:31:25
regulation. So, like, you know, I don't

00:31:27
know. You know, the European Union has

00:31:29
done a terrible job of privacy regulations, and I'm just

00:31:33
excited to see what they gin up for AI, I mean, I think.

00:31:37
Hasn't Italy banned ChatGPT right now?

00:31:40
It's like it's like already you're like, God, these European

00:31:43
regulators are just off the chain.

00:31:45
Yeah, I mean so pros and cons, right.

00:31:48
Banned in Italy, maybe you can raise a couple hundred million dollars with

00:31:51
a with a slide deck at this point.

00:31:53
So you know, you get you get both, I guess.

00:31:56
I guess, I don't know, if I was Sam Altman, like, would I

00:32:00
want the level of AI doomerism occurring right now to continue?

00:32:05
Probably not. Probably not. To be clear, just

00:32:08
my actual position is that, like, things are very exciting

00:32:11
right now. That this is a legitimate

00:32:14
question, that will AIs kill us all is legitimately grounded in

00:32:18
like all, you know, sci-fi content and thought experiments

00:32:21
about where this is going. So I don't think it's made-up,

00:32:24
but I think a happy coincidence is that this sort of question is

00:32:28
fundamentally good PR. That's all I'm saying.

00:32:31
So I think one interesting topic is

00:32:33
AI and sci-fi, right? Obviously a number of the most

00:32:36
successful films, you know, TV shows, books, everything in

00:32:40
history have been built around sci-fi, which often is driven by

00:32:44
this idea that there's an evil killer AI basically, right?

00:32:46
And I think the examples that come to the top of my mind, and you

00:32:49
guys might have other ones, are like, you know The Matrix, you

00:32:52
know, I, Robot, as you mentioned the Terminator series, Blade

00:32:56
Runner. Her, you know,

00:32:59
being a sort of off-kilter example, but still important, I

00:33:02
think. And what's interesting is almost

00:33:04
all of these ones other than Her that I mentioned involve

00:33:08
robots as well as AI, right? It's like the humanoid

00:33:12
version of the AI is essential to you know, killing us all

00:33:17
right, we're we're in Terminator.

00:33:18
There's a good. Storyteller.

00:33:20
Yeah, right. Well, yeah, it might just be

00:33:21
more cinematic to fight a big metal robot.

00:33:23
So, you know, that's something. But I guess.

00:33:26
And then in Her you sort of have this

00:33:29
Scarlett Johansson AI that at the end kind of ascends to a

00:33:33
higher plane and like leaves leaves the main character

00:33:38
Joaquin Phoenix behind, right. And I think that, to me, that

00:33:43
representation seems more likely than the killer robots one, and

00:33:46
that, as I said, maybe we're just pets and they just sort of

00:33:49
get bored with us and leave us behind, or we're the little,

00:33:51
you know, dogs that they leave alive, right?

00:33:53
But I guess, what do you guys think about, well, A, what's

00:33:56
your favorite sci-fi representation?

00:33:58
And then B, do you think robots are an

00:34:02
essential part of these AI-kills-us-all narratives, and/or will

00:34:06
the robot elements integrated with AI happen anytime soon?

00:34:10
Well, I can answer with my favorite.

00:34:12
I mean, I think The Matrix is my favorite.

00:34:16
I was going to say The Matrix. The Matrix buys into

00:34:19
exactly what you're saying. The Matrix doesn't actually say

00:34:21
it's killer robots. It finds a way to basically

00:34:24
put killer robots on the screen. So it's a fun movie, but it's

00:34:27
like, oh, it's a computer program, which is obviously the

00:34:29
world I feel like a super intelligence would live in.

00:34:33
Like, what does it really need to manifest in the real

00:34:37
world that much? So, yeah.

00:34:40
And obviously I'm interested, you know, in the following

00:34:43
conversation with somebody who's written a lot about robot

00:34:46
apocalypse. Because yeah, my intuition is

00:34:48
that they're going to exist mostly in the digital

00:34:51
world. Terminator 2 is the other one

00:34:53
where I think the robot part is interesting, but I don't know

00:34:58
how much you remember about the historical world building of the

00:35:02
AI takeover, which is: Skynet is an AI that achieves super

00:35:06
intelligence that then takes over the nuclear arsenal

00:35:10
of the United States and then bombs Russia or whatever, and

00:35:14
then we end up in nuclear apocalypse basically, right?

00:35:16
So the initial manifestation of the AI apocalypse in in

00:35:20
Terminator and Terminator 2 is a purely software-driven

00:35:25
death scenario, right? There's no need for

00:35:27
robots in the sort of Skynet back story, right?

00:35:31
It can hack into a system that ends up destroying the world,

00:35:34
which is, like, realistic, right?

00:35:35
Because then you don't need any physical elements, right?

00:35:38
Of the sort of AI doomer narrative, right?

00:35:41
You don't need the robots, you don't need the the crazy

00:35:43
biohacking stuff or whatever. You just need to hack into the

00:35:47
Pentagon essentially. And it's like game over at that

00:35:49
point, right? So I always thought that was a

00:35:52
very insightful perspective on how an evil AI could kill us all

00:35:57
and then it would just be a pure software takeover, right?

00:36:02
James, did you have a favorite? Well, I wanted to dig

00:36:05
into The Matrix a little bit more, yeah.

00:36:08
But I think, you know, a lot rests on whether we think that's

00:36:14
humanity's future as well. Like are we going to be like

00:36:18
almost, you know, by choice plugging ourselves into the

00:36:23
Matrix? Like, is this neural link kind

00:36:25
of interface you know within that 50 to 100 year time frame

00:36:29
or even sooner, right? You know, does our normal

00:36:34
day-to-day life as humans you know become more of a simulation

00:36:40
you know, in our lifetimes. I think that's pretty

00:36:43
interesting for how the future unfolds.

00:36:48
So do you guys believe that is possible, like, or do you think,

00:36:53
you know, brain interfaces are sort of so far away?

00:36:56
I think brain interfaces are 30 plus, 30 plus years

00:37:01
away. I kind of agree.

00:37:04
I mean, I think it's possible maybe in our lifetime, which,

00:37:08
you know, God willing is 50 to 60 more years, but it does seem

00:37:12
like we're pretty far away on neural interfaces.

00:37:15
But you know, who knows? I feel like I'm I'm more excited

00:37:18
to come back to the previous episode about like glasses and

00:37:22
maybe contact lenses at some point as a sort of, you know,

00:37:25
augmented interface for existence rather than, like,

00:37:29
plugging in. But you never know. I mean, obviously Elon believes

00:37:32
in it. So Elon believes, like, this is

00:37:34
the path out of the AI doomerism that we've

00:37:38
been talking about: we are augmented,

00:37:40
basically. We sort of merge our brains.

00:37:42
Yeah, with AIs, and that probably requires some sort of

00:37:47
brain interface or, you know, Matrix-like environment that

00:37:50
you're living in. Seems hard to be able to.

00:37:53
I agree I'm not. I'm not a problem.

00:37:55
Sort of like the you you think that that basically?

00:37:59
Is not sufficient to. I just don't think it answers

00:38:02
much. The like, will it kill us?

00:38:03
Like, oh, it's like yielding our brains to it almost means like,

00:38:07
are we even running the show? Like,

00:38:08
it raises a whole bunch of other questions where we could be

00:38:11
undermined just from the connection.

00:38:13
Independent. Like it's true, Yeah.

00:38:16
And it also, like, pretty quickly, if you believe in super

00:38:19
intelligence, kind of defeats the purpose of having

00:38:22
that brain power. Like why?

00:38:25
Why do you need? Yeah, why would they?

00:38:26
Why would they want to be hanging out in our brains,

00:38:28
It's like, oh cool, I've got this brain that's not

00:38:31
as good as mine. Like, maybe consciousness is

00:38:34
this unique human charm that nobody else has, and we can give

00:38:37
the machines a taste of it. I don't know. Just, succinctly:

00:38:42
Do you think in the next 100 years AI will kill us all?

00:38:46
Yes or no? I will say no.

00:38:52
I think it will kill some of us, but not all of us.

00:38:56
That's a good way to think. That's a good... yeah.

00:38:57
I think, like, more than 1,000, less than 10.

00:39:04
Oh, more, I don't know. I think more. That's a very

00:39:07
tight. Yeah.

00:39:09
That's not a good range. I think I was smart to give

00:39:11
it. Yeah.

00:39:12
Yeah, yeah, yeah. Some of well, you came up with

00:39:13
some of us. Some of us is the right feels

00:39:15
like clearly. It feels like even a mistake.

00:39:17
Will AI intentionally... well, it was kill us all.

00:39:21
I say definitely no kill us all that's.

00:39:23
I say definitely there'll still be some humans at the

00:39:25
end. I can't say definitely. I mean, it seems pretty un, you

00:39:30
know, well if we do all get killed, we'll be dead and so

00:39:33
there's nothing to gain from the prediction.

00:39:35
Whereas if we live, I was correct, and so there was

00:39:39
good utility in the prediction. So I don't know why I would

00:39:41
ever go the other direction. This is like Pascal's wager.

00:39:44
With like super intelligent AI, like, all right, this was fun.

00:39:50
I mean, this is a dream, getting to hang out and talk

00:39:53
about this and call it work. Welcome.

00:39:59
Hey, welcome to the second segment. Here I've

00:40:02
got author Daniel H Wilson, author of How to Survive a Robot

00:40:06
Uprising, Where's My Jetpack, How to Build a Robot Army, and

00:40:11
Robopocalypse. The title of this, or at least

00:40:15
the working title of this episode is Will AI Kill us All.

00:40:19
So given your interest and work, we wanted to talk with

00:40:24
someone who's really like been thinking about this for a long

00:40:28
time. I'm curious, like, what

00:40:31
first got you interested in this sort of dystopian question of

00:40:35
sort of the machines coming for humanity.

00:40:39
Yeah, well, so I just grew up with science fiction, right?

00:40:42
Like a lot of people. So initially, I was just interested

00:40:45
in reading, you know, any type of science fiction I could read

00:40:49
or watch, any type of movie. I just loved robots.

00:40:52
And so ultimately, though, I loved robots so much that I

00:40:57
studied robotics. So I ended up going to Carnegie

00:40:59
Mellon. I did a whole PhD in robotics.

00:41:02
And while I was at Carnegie Mellon, you know, I'm surrounded

00:41:05
by roboticists, I'm surrounded by robots.

00:41:07
We're in the high bay. We're just in the lab.

00:41:11
And nobody was really trying to build robots that would destroy

00:41:15
the world. I noticed.

00:41:17
Nobody says they're trying. Generally, nobody comes out and

00:41:20
says it. But, like, there's this really

00:41:22
stark difference between how robots are portrayed in pop

00:41:27
culture and, you know, the actual

00:41:32
mechanics of building robots and why people are building them and

00:41:34
how they're explaining, you know why they need the money to build

00:41:37
them and all that. And so, really,

00:41:39
they just have this super bad, you know, reputation.

00:41:41
Right. And so I thought that was funny.

00:41:45
So then when I was still in grad school, I wrote How to Survive a

00:41:48
Robot Uprising, where I was just like, all right, I'm gonna take

00:41:51
them seriously, right? Just, OK, fine.

00:41:53
If this is what everyone's expecting.

00:41:55
So I went to, you know, the people that were building legs,

00:41:59
and I asked, you know, how would you trip a robot?

00:42:01
How would you get away? I went to the people doing

00:42:04
sensors, the people doing all different things.

00:42:07
And I just asked those questions and I put it all, you know,

00:42:09
tongue in cheek into this book. Of course.

00:42:13
Then you start to look at like actual military applications.

00:42:15
You start to see how robotics and AI are, are, you know, being

00:42:20
weaponized in some cases. And then, you know, that was

00:42:23
more robo apocalypse much. No tongue in cheek like I I was

00:42:27
like, OK, let's. You're starting to believe it.

00:42:29
You're like, OK, this. No, I never, really.

00:42:32
I mean, honestly, I like...

00:42:34
the killer robot meme, really, for me, just gives you a lot of

00:42:38
latitude to think about humanity and think about, you

00:42:42
know, what makes us people, what makes them robots.

00:42:45
You know, do you think, like, as a

00:42:49
person and somebody... do you think robots are likely

00:42:51
to kill human beings? Or, like, do you see this as a

00:42:54
fictional exercise? No, well, look, I.

00:42:57
I kind of subscribe to the... lately I've been thinking a lot about

00:43:00
something called the psychotic ape

00:43:02
theory, or the killer ape theory. Which is, think about the

00:43:05
opening scenes of, like, 2001: A Space Odyssey.

00:43:09
Like, you know, they touch the monolith.

00:43:13
It gives them a leap forward in evolution.

00:43:15
And what do the apes do? Well, they figure out how to use

00:43:17
weapons to bash each other's brains in.

00:43:19
And there's this kind of notion that humanity sort of evolves

00:43:23
technologically when we're trying to kill each other or

00:43:26
stop each other from killing each other, you know. And so.

00:43:30
So, yeah, so I've been thinking a lot about that, and

00:43:32
basically you look at that and you realize we will use any

00:43:35
technology to kill each other. So really it's the psychotic ape

00:43:38
that you need to worry about, it's not the... and to a lesser

00:43:42
extent the, you know, the capitalist ape. The known killer

00:43:46
species versus the sort of imagined killer, who's killed

00:43:49
more people than anybody else, right?

00:43:51
You know, you're going to want to look at, yeah, the person

00:43:55
across the table from you, but. I think that in terms of being

00:43:59
used as weapons, you know, obviously robots can be

00:44:03
extremely dangerous in that way. I mean I'm curious sort of you

00:44:08
know as a technologist somebody's thought about this

00:44:10
like robots versus like the large language model, sort of

00:44:15
totally software based being that maybe reaches super

00:44:19
intelligence, like, I see why for, like, especially film, why

00:44:23
like robots are great because you can see them.

00:44:25
I'm curious in terms of like the actual like thought exercise and

00:44:28
what you think is likely whether you imagine robots or the sort

00:44:33
of potential threat being just software.

00:44:36
Yeah, well, so I think of this as, like, a

00:44:38
consumer, like a product design issue, right?

00:44:41
Like, you know, I mean, look, human beings are

00:44:44
comfortable with a certain amount of risk in our lives.

00:44:46
You know, what, 40,000 people get killed driving around in cars,

00:44:51
you know, every year I've got a garbage disposal in my kitchen.

00:44:54
I mean, if I put my hand in the wrong place in my own home, like

00:44:57
I won't have a hand anymore, right?

00:44:59
But there's also, the devices are built in order to

00:45:04
try to make them as safe as possible, right?

00:45:06
So if you think of that as a hardware problem on the hardware

00:45:09
side, you know, you're trying to design a consumer product that's

00:45:12
not gonna harm people. And that's just, I mean,

00:45:16
people do that every day, right? We've been doing that for years

00:45:18
and years. It's a really well

00:45:20
understood kind of task. And I think if you're building

00:45:23
hardware, it's an easier task because then when you move to

00:45:27
software it becomes much more complex, right, in terms of

00:45:31
what the harm might be. So, for instance, right now I

00:45:34
live in Portland, Oregon, and in Seattle up the road, there,

00:45:38
the school system is suing,

00:45:41
like, Meta, right, because they have documented harm, that using

00:45:46
social media has harmed children.

00:45:48
It's like the Philip Morris thing again, right?

00:45:50
I mean, they know for a fact using this product causes harm

00:45:54
to children. They die.

00:45:55
They commit suicide. So.

00:45:58
If you identify that, I mean, that takes a little while

00:46:01
to connect those dots, right?

00:46:04
It's not the same as if they were just selling toasters that

00:46:07
were electrocuting people like you understand pretty clearly

00:46:10
like, OK, where's the danger there, right.

00:46:11
I mean, you can see if a robot is causing physical violence

00:46:15
against someone. And so here's where it gets, like, even more complicated,

00:46:19
right? So, so let's say by the way,

00:46:21
we've got chat bots. Purely software.

00:46:24
Yes, they're going to... people are going to get led down

00:46:26
crazy rabbit holes by these things and there's going to be,

00:46:29
I predict, extremely harmful scenarios occurring from people

00:46:35
just being told whatever they want to hear, and eventually

00:46:38
being potentially radicalized or whatever causing harm in the

00:46:41
community. It won't be the robot doing it

00:46:44
directly, but it, or, sorry, the chat bot or the large

00:46:47
language model. But then you think about the

00:46:49
synthesis of these two things as well.

00:46:51
So, like, think about a self-driving car.

00:46:55
So now I mean it gets really complicated because you've got

00:46:58
brains and you've got the hardware.

00:47:00
And so the question becomes like, OK, the car crashed, like

00:47:04
whose fault is that? And so right now that's

00:47:06
something we're working out, you know in the court systems and

00:47:09
and we're using all of our existing machinery of our

00:47:12
society to try to sort that out and figure that out.

00:47:17
So, you know, I would say those are really the three

00:47:19
areas, you know, pure software, the hybrid and then just the

00:47:22
purely mechanical problem. To have, like, a robo-apocalypse or,

00:47:28
like, to have robots, it requires, like, the idea that

00:47:32
there's some like super intelligence, right.

00:47:34
I mean, all of that that's predicated on the idea.

00:47:36
Yeah, in pop culture, you gotta have the singularity before.

00:47:39
I mean, that's just kind of like ticking the box, right?

00:47:42
I don't know how realistic that is, but.

00:47:45
Right. Well, that's I guess to me like,

00:47:47
you know, I know it's fiction, but like, yeah, it just feels

00:47:51
like if we have this super intelligence, isn't that a

00:47:54
threat enough without it taking sort of robotic form? Or, I guess,

00:48:00
a particular question: you know, there's all

00:48:00
These predictions about when the singularity's gonna happen and

00:48:03
how it's gonna be exponential, so it's gonna go slow and then

00:48:06
happen fast. And, like, it's really interesting

00:48:09
because you know I was, I have a degree in machine learning.

00:48:12
I mean I was studying it before they were calling it machine

00:48:14
learning, like, when it was called knowledge discovery and

00:48:17
data mining and all these different things.

00:48:19
And so you look at that, and what was happening early on was

00:48:23
there were all these different approaches to trying to mimic

00:48:26
intelligence, right. And we would use a whole suite

00:48:30
of these things. And what happened was neural

00:48:32
networks in the last few years jumped out ahead.

00:48:36
And I would argue, although I'm sure other people would argue

00:48:39
with me, but I would argue there was no amazing scientific

00:48:43
breakthrough, right? What happened was processors got

00:48:47
faster and we had access to just a ton of data.

00:48:50
And so that data and those processors just are brute

00:48:54
forcing what are fairly simple algorithms that have been well

00:48:58
known for a long time and they're getting this kind of

00:49:03
intelligentish behavior out of it.

00:49:06
And so that model doesn't really sync up with the

00:49:11
way I think people thought AI was going to go, right.

00:49:14
We didn't think, oh, we're just going to use some old algorithm

00:49:17
and just throw more processing power at it until it gets good.

00:49:19
Like there's more GPUs. We're going to get it, yeah.

00:49:23
And so I don't know if it's...

00:49:25
So what I'm trying to say is, if you look at that, and we just, I

00:49:29
mean the way to improve chat bots I guess is to keep throwing

00:49:33
more data. And keep throwing more

00:49:35
processing power. But I don't feel like there's

00:49:38
any type of singularity that's going to come out of that.

00:49:40
I feel like it's like, Oh yeah, it's just, I feel like that our

00:49:42
ChatGPT, yeah. Or yeah, it'll just be like a more convincing

00:49:46
ChatGPT. I mean, it's just really just

00:49:51
regurgitating everything that it's, you know, in a very smart

00:49:54
way that it's read on the Internet, which by the way, I

00:49:57
mean, God, that's the worst of humanity, right?

00:50:00
I mean, what's it got?

00:50:02
It's trained on... I know, by the way, right?

00:50:03
They're all desperate to train it off

00:50:05
Reddit, for instance. Yeah.

00:50:07
Oh my God. You know, Reddit can be OK, but

00:50:11
there are smart people there. But yeah, you're looking at

00:50:14
the Internet and it's like, oh Lord. But anyway, I don't

00:50:17
really, me personally, I don't see that.

00:50:19
I see that as a kind of a dead end.

00:50:21
I don't feel like that's headed toward the singularity

00:50:24
necessarily. Yeah.

00:50:25
Is there any existing dystopian work that you find most

00:50:29
plausible? I mean, you know, there's... you

00:50:33
know, we talked about The Matrix.

00:50:34
There's sort of the Terminator movies.

00:50:36
Like I don't know, like, yeah, I'm sure you've spent a lot of

00:50:39
time in that world. Is there one that you

00:50:42
find like, oh, this is the most sort of real-world?

00:50:46
Well, I mean, I don't think that the purpose

00:50:49
of science fiction is to predict the future necessarily, but I

00:50:53
would say you know, if you're thinking about.

00:50:57
I think the most boring stuff is actually the most realistic, and

00:51:01
in a lot of ways, look for the bad books... no, I mean, no,

00:51:04
actually, that's not true.

00:51:05
Like for instance I would say Terminator, which is obviously

00:51:08
great, but think about that villain. Think about Skynet.

00:51:13
Skynet's just a dumb, dumb computer program that just has

00:51:17
decided whatever it wants to destroy all humans.

00:51:20
I mean, and there's no reason for it.

00:51:22
There's no It's just like it just wants to kill everybody.

00:51:26
I mean that is so boring. You couldn't get away with an

00:51:28
actual human antagonist who is that simple.

00:51:32
People would be like, well, why? And so if you think of

00:51:39
that as its end goal, that's not very super intelligent, is it?

00:51:42
Kill all humans? I mean that's like Bender level

00:51:45
right? So I would say that, you know,

00:51:49
in a lot of ways it is fairly realistic if

00:51:51
somebody just programmed a computer and told it

00:51:54
just a super simple, boring goal like that.

00:51:57
Kill everybody. Then, you know, I feel like that's fairly

00:52:00
realistic. To step away from the yeah

00:52:04
dystopian question for a second I'm curious just like as an

00:52:07
author one like are you using ChatGPT at all like in any way

00:52:13
or any other sort of LLM tools? Well, so first of all:

00:52:19
All of my novels are in the training data, which means

00:52:23
hopefully I'll be part of a lawsuit.

00:52:24
Yeah, I know. I was going to ask you about

00:52:26
that next. OK?

00:52:27
I asked. I went into

00:52:28
ChatGPT and I said, hey, write me a short story in the vein

00:52:32
of Daniel H Wilson. And it, like, immediately wrote.

00:52:34
I didn't say anything else. And it really wrote about

00:52:36
robots, right? It wrote, it wrote science

00:52:38
fiction. It was really clear that it had

00:52:40
been trained on my stuff. I'm like, tough.

00:52:42
Well, that's bullshit. First of all, I never gave

00:52:44
anybody permission to do that. And second of all, like, I

00:52:49
think it's so funny. So I had... I'm not going to name

00:52:52
the corporation because I don't want to get into trouble.

00:52:54
I had a large, large corporation approach me before ChatGPT,

00:52:59
before OpenAI released, before these GANs became like a

00:53:04
big deal. So this is maybe, probably...

00:53:08
Six months to a year before everybody knew about that and

00:53:12
that came out and OpenAI really just broke the whole

00:53:15
deal. And it was a researcher

00:53:18
that called me and they wanted to know if I would test out this

00:53:21
new program that they had that was basically ChatGPT, and

00:53:25
they said we want you to use it to help you write.

00:53:30
And I'm like, OK, so then what? How will it help me write?

00:53:33
And they say, well... And these are just such

00:53:36
sweet researchers that don't have any GD idea about what the

00:53:40
real world is, what the hell is going on in the real world.

00:53:43
And they're like, it'll help you be creative.

00:53:46
And I'm like, oh, creativity, right?

00:53:50
That thing. I hate doing that thing that I

00:53:53
spent the last 20 years teaching myself: to take

00:53:57
what's in my head and put it in the world by

00:53:59
memorizing all these really boring writing skills which are

00:54:02
just pure hell. And then they want to come in

00:54:04
and take the one good thing. I'm like, why do you think

00:54:07
writers write? You think because we love

00:54:10
typing. We like pushing letters.

00:54:13
I'm like, what the hell are you talking about?

00:54:15
I was like, you need to be prepared for everyone that you

00:54:18
give this tool to, to give you super negative feedback.

00:54:21
Because they wanted to include me in, like, this report about

00:54:25
how this tool would be used, and they wanted to publicize this

00:54:27
report as well. And so they they went back and

00:54:31
they came back and they said, hey, we got great news.

00:54:33
I got permission from, like, the program manager on this to

00:54:36
include your feedback, even if it's negative.

00:54:41
And I was like, you know what? I'm gonna just take a pass on

00:54:44
this. I wish you luck with your

00:54:45
project. Right.

00:54:46
So, man, no. You know what I think about,

00:54:50
man? The thing that cracks me up is I

00:54:54
used to really hate on Asimov, because I was like, robot

00:54:59
psychologists? What kind of bullshit is that?

00:55:01
Like, you program robots, right? I spent all this time, you know,

00:55:04
programming and learning all of these programming languages and

00:55:07
and everything you've got to do in order to speak to

00:55:11
the robot mind, because it doesn't speak English, right?

00:55:14
And I used to think that the idea of a robot psychologist was

00:55:17
the dumbest thing ever. The idea of a positronic brain,

00:55:20
by the way, which is made fully, and it's

00:55:23
just boom, it's done. It's crystallized.

00:55:25
You can't go back and change it, right?

00:55:27
And that's... And by the way, that's exactly

00:55:29
what happens. So neural networks are black

00:55:31
boxes. You can't go in and fiddle with

00:55:33
it because all the weights make no sense, like the human brain.

00:55:36
So it becomes like psychology. And now if you

00:55:39
want to, for instance... Yeah.

00:55:40
If you want to game, like, a chatbot, if you want to

00:55:46
trick it into doing something, you totally have to use

00:55:48
psychological things. It's totally psychology.

00:55:51
I mean, bravo, Asimov.

00:55:54
Bravo, sir. Are you... do you think you will, like, try

00:55:57
to sue? Like if somebody comes to you

00:55:59
and says we're going to sue these guys for.

00:56:01
Oh yeah. Oh, absolutely.

00:56:02
No, this has to happen. Look, I mean, I get people

00:56:07
trying to make money. I get, you know, capitalism.

00:56:11
I understand it. And this is a case where you

00:56:14
know, they have to be sued.

00:56:17
I mean, because we're creating... this is a

00:56:19
completely new domain. And it's not OK to rip people

00:56:23
off and then chop up their stuff and train an algorithm.

00:56:26
And so our country or whatever is going to have to figure that

00:56:30
out and get that down in law. And that's going to require,

00:56:33
yeah, courts, I mean, so absolutely they're all going to

00:56:35
get sued. Do you think

00:56:39
sci-fi and dystopias are bad for AI? Like, even talking to ChatGPT,

00:56:45
it's funny when you ask it to imagine things it's been trained

00:56:48
on all that. Exactly, it parrots all that.

00:56:50
And so it can sound really spooky like it feels like you

00:56:54
know, you're like, this is imagination, it's fiction,

00:56:57
like. But I do think people, you know,

00:56:59
seriously turn to science fiction when they try and game

00:57:02
out where sort of a technology we don't really understand,

00:57:05
that's developed faster than we expect is progressing.

00:57:09
So yeah, is it, is that bad in some ways?

00:57:11
Do you do you regret that? I think it says a lot more about

00:57:15
us than it does about them, them being the robots, you know. Like,

00:57:20
what I've been thinking about lately.

00:57:21
So I don't know if you've read my books, but I'm from

00:57:25
Oklahoma. I grew up in the Cherokee

00:57:27
Nation. I'm a Cherokee citizen and I

00:57:28
write a lot. There are a lot of Native

00:57:30
characters in the stuff I do. And so I think a lot about

00:57:33
technology from like a native perspective.

00:57:35
And one thing I've been thinking a lot about lately is just kind

00:57:39
of like how a lot of this super negative, like robots and also

00:57:44
first contact, like aliens, they always show up and what do they

00:57:48
do, man, they do exactly what all the colonizer civilizations

00:57:54
did to indigenous people all over the world.

00:57:56
So we got a civilization, we got a society, a culture.

00:58:00
It's all built on colonization, right?

00:58:03
So it means a bunch of people with superior

00:58:06
technology showed up someplace, murdered everyone, destroyed

00:58:11
their culture. I mean, Independence Day,

00:58:13
they're blowing up monuments. So you think that is like

00:58:17
resource extraction, right? What,

00:58:19
are they stealing our water, or stealing our air? And

00:58:22
then just completely dominating other people's

00:58:25
bodies. You know, you think about

00:58:27
Invasion of the Body Snatchers. And just like so, that type of

00:58:31
fear, I think, is cooked into our civilization.

00:58:35
Based on our origins and it comes seeping out into our pop

00:58:39
culture in a lot of different ways, but in particular in

00:58:43
science fiction, we see it so much.

00:58:45
Do you think we should stop? You know, there even

00:58:49
are AI companies that say, oh, we should pause for six months.

00:58:52
I mean, originally it sounded like you were saying you didn't

00:58:54
think just adding more, you know, compute was gonna like

00:58:58
create this super intelligence. But yeah, do you, do you have

00:59:01
the impulse that, you know, we should slow down or stop this

00:59:04
research if we don't know what's going to happen?

00:59:06
Yeah, well, I don't think that's what they're afraid of.

00:59:08
I don't think they're afraid of us crossing the singularity

00:59:11
threshold or something. I mean, they're just worried

00:59:13
about... So, for instance, yeah, I will

00:59:16
tell you that the US military is actively doing a lot of threat

00:59:21
scenarios that involve this kind of technology being weaponized,

00:59:26
this sort of disinformation on a mass scale, hugely distracting

00:59:31
events that could occur. I mean at this point I mean you

00:59:35
could get a phone call from someone. It could be

00:59:38
just like in Robopocalypse, you know, where everybody gets a

00:59:41
phone call from somebody that they're related to and they

00:59:44
trust, and they all get told to go to a certain place. It's a bad, bad

00:59:48
situation. I mean, all that stuff is

00:59:50
starting to happen. Right.

00:59:53
I really focused on this idea of it acting independently, but

00:59:56
obviously, just like... no, it's always going

00:59:59
to be bad people. And so, first of

01:00:01
all, I don't think they're worried about that

01:00:02
singularity. They're worried about this being

01:00:04
weaponized And in terms of putting a pause on it, I mean,

01:00:08
yeah, I mean, why not? I would say only put a pause

01:00:12
on it until we figure out the rules around it.

01:00:16
But the fact is, that's not how the United States

01:00:19
works. Like, we don't fix anything

01:00:22
until it's broken, right? I mean, I just

01:00:25
don't have a lot of confidence that they're going to say, OK,

01:00:29
six month pause and we're going to work out all the laws and

01:00:32
everything's going to be all ready to go in six months.

01:00:35
Like, hell no, that's not going to work.

01:00:37
So I mean, I think it's a nice idea, but I'm extremely

01:00:42
dubious and skeptical that it could ever have any actual, you know,

01:00:46
useful effect. I mean, other countries

01:00:48
are still going to work on it. I mean, there's the, you know,

01:00:49
sort of China boogeyman where it's like you can't stop the

01:00:52
whole world from working on it. And so there's that, honestly,

01:00:55
though, there's the other side of that coin,

01:00:58
which is, you know, thank God that they're actually interested

01:01:00
in privacy in Europe, right? Like, at least there's some

01:01:04
people standing up and saying, hey, like, we don't just have to

01:01:07
accept this status quo, just 'cause you're a giant billion

01:01:10
dollar corporation or trillion dollar corporation,

01:01:14
just... Something you referenced

01:01:15
earlier. Does the military come to you to

01:01:17
help game out sort of technological?

01:01:20
Yeah, I've done a little bit of that.

01:01:22
It's pretty fun. So I'm not the only one who's

01:01:25
like, oh, we should, like, get some guy

01:01:27
who thought about it in fiction

01:01:30
to just sort of say what could happen in the real world.

01:01:32
Yeah, I'm not the only one. But there are...

01:01:33
there are. And honestly, they're training

01:01:36
their own science fiction writers, right?

01:01:38
I will tell you this: if you're a general in the

01:01:42
Air Force or whatever, if you're some kind of higher-up in

01:01:45
the military, you get a lot of white papers that are describing

01:01:49
the technological capabilities of the enemy or or of various

01:01:52
munitions and things like that. And these papers are very dry

01:01:56
and just, like, very... you know, it's much better,

01:02:00
I think at least it's useful to have somebody actually

01:02:05
creatively tell a story that's gonna stick with you, cuz that's

01:02:08
the way humans communicate, right?

01:02:09
Through stories. And then the

01:02:12
general is sitting there thinking, OK, now I've just

01:02:15
read a story about, you know, an actual person being impacted by

01:02:19
this technology in that way. And you can really visualize

01:02:22
what it means. And it's more than just facts

01:02:24
and figures. When you say the military

01:02:26
is training science fiction writers,

01:02:27
do you mean those authors of the boring white papers?

01:02:30
Or you mean like true science fiction writers?

01:02:32
No, I mean, literally, if you're

01:02:36
in the military, there are certain programs you can

01:02:38
sort of sign up for where you get taught.

01:02:41
Yeah, where they bring in

01:02:42
science fiction authors to help teach the military

01:02:47
people how to sort of, yeah, take those super real world

01:02:51
scenarios and write them in an engaging way so that you have

01:02:54
sort of a case study and you say, look, here's one way this

01:02:57
could be employed. And that's just good.

01:03:00
That's just good science fiction writing.

01:03:01
That's how that is. But they would prefer not to

01:03:04
outsource it to, you know, hippies like me.

01:03:09
I'm curious how much sort of the

01:03:12
technological developments we're seeing right now like you know

01:03:15
Midjourney, whatever, are influencing what

01:03:19
you plan to write? Or do you think, like, science fiction will

01:03:22
change based on what we see is possible?

01:03:26
Yeah, I mean, absolutely. Is this changing science fiction

01:03:29
every day? For sure.

01:03:31
And look, I think it was a watershed moment.

01:03:34
Like whenever you really just finally just talk to it and you

01:03:37
realize, oh, this thing is like, yeah, this thing can just talk

01:03:41
to me. And by the way, right now

01:03:43
there's this huge rush, as, you know, like to plug this stuff in

01:03:48
anywhere you can. It's like, oh, I don't know,

01:03:50
what is it? It's like

01:03:52
mayonnaise, you know, they're just putting it on.

01:03:53
We just discovered this thing. Let's put it on everything.

01:03:55
Right, right, right. It's desperation, yeah.

01:03:58
It's going to taste like crap on a lot of stuff, but like it's

01:04:01
going to work out in some places.

01:04:03
One thing I see, which I find interesting, is it occupying the role of

01:04:09
almost a homunculus, right? You think of a homunculus as

01:04:12
like a little tiny person that's inside of you kind of driving

01:04:15
you around or you think of like, who's the bad guy from

01:04:17
SpongeBob? You know, he's always driving a

01:04:20
giant robot, right, The little guy.

01:04:22
So if you think of ChatGPT, it's like

01:04:27
the homunculus. You stick it inside of a

01:04:28
hardware platform so you can just tell ChatGPT here's a super

01:04:33
simple programming language that allows you to drive around this

01:04:38
autonomous vehicle or allows you to drive around this humanoid

01:04:42
robot platform. And then you tell it in English,

01:04:46
hey, go make a sandwich. And then it looks at all of its

01:04:50
little controls and it translates that and says, OK,

01:04:53
I'm going to drive this thing in there because it kind of has

01:04:56
the right idea about what making a sandwich is.

01:04:58
But a hardware platform is just a bunch of, like, XYZ

01:05:03
coordinates where you're going to put your limbs, and

01:05:05
that translation right there, that's a spot where

01:05:09
ChatGPT is kind of getting just like plastered in, you know,

01:05:12
just slap it in there and it solves all those problems.

01:05:15
And so that's kind of interesting because now you're

01:05:18
going to see ChatGPT having the ability to interact with the

01:05:21
real world via like whatever delivery bots or like or drones

01:05:27
or or humanoid robots. And so that's where you maybe

01:05:31
get into a little bit of consumer trouble.

01:05:33
Again, it's like an autonomous car, right?
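The homunculus idea described above can be sketched as a minimal loop: a language model plans in a tiny, restricted command language, and the hardware layer executes only the commands it recognizes. Everything here, the command set, the `plan` stub standing in for a real model call, is a hypothetical illustration of the pattern, not any vendor's robot API.

```python
# Hypothetical sketch of the "homunculus" pattern: an LLM turns an
# English instruction into commands in a super simple language, and the
# robot layer validates and executes them. plan() is a stand-in for a
# real model call.

ALLOWED = {"MOVE_TO", "GRASP", "RELEASE"}  # the whole "simple programming language"

def plan(instruction: str) -> list[str]:
    """Stand-in for an LLM that translates English into commands."""
    if "sandwich" in instruction:
        return ["MOVE_TO kitchen", "GRASP bread", "MOVE_TO counter", "RELEASE bread"]
    return []

def execute(commands: list[str]) -> list[str]:
    """Hardware layer: run only well-formed commands, reject the rest."""
    log = []
    for cmd in commands:
        verb, _, arg = cmd.partition(" ")
        if verb in ALLOWED and arg:
            log.append(f"ok: {verb} {arg}")
        else:
            log.append(f"rejected: {cmd}")
    return log

log = execute(plan("hey, go make a sandwich"))
```

The narrow, validated command set is the interesting design point: the model can "have the right idea" about a sandwich while the platform only ever sees a handful of checkable verbs.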

01:05:36
Is there anything you've written where,

01:05:38
based on how technology has developed, you feel like, oh, it

01:05:42
feels less plausible than when I wrote it?

01:05:44
Or, like, how much forecasting of what's

01:05:48
possible do you see as, like, part of your... Yeah, well, I think

01:05:53
that, you know, what happens is you look at what's out there in

01:05:56
the world and you run with it and you start thinking of all

01:05:59
the different ways it could be. And that's where science fiction

01:06:01
is fun, right? 'Cause suddenly you're like, I never

01:06:04
really thought of it like that. But but there it is.

01:06:06
And dystopias are fun too, because, you know, it's

01:06:09
dangerous. It's exciting.

01:06:11
And so the difference with my writing is that I had

01:06:17
this degree in robotics. All my friends from when I was

01:06:20
20, they're all running robot corporations.

01:06:24
They're driving the Mars rovers. That's my cohort, you

01:06:28
know, all my old friends, that's what they're

01:06:31
doing. And so I feel like I have a

01:06:33
little bit of a sneak peek. So like for instance when I

01:06:35
wrote Robopocalypse, that was 10 years ago or whatever, and

01:06:40
I was watching and seeing the technology that was 5 or 10

01:06:43
years out. So it's all come true.

01:06:46
In fact, that's the joke, because Robopocalypse is

01:06:49
still with Spielberg at Amblin, and, you know, development

01:06:53
continues on this movie. But, like, what we've

01:06:56
told Spielberg is hey man, let's do this before it becomes a

01:07:00
historical documentary. All of this stuff is, you know,

01:07:04
so in Robopocalypse, and in Robogenesis, the sequel,

01:07:07
which came out a few years later.

01:07:09
There are autonomous vehicles. There is essentially ChatGPT as

01:07:14
personal assistants on phones.

01:07:16
I mean, I wrote that before Siri came out.

01:07:20
Wow. It was 2000.

01:07:22
The book came out in like 2011 or something a long time ago.

01:07:25
But anyway, so yeah, I mean, my latest novel is The

01:07:31
Andromeda Evolution, which is a sequel to Michael Crichton's The

01:07:34
Andromeda Strain. He passed away, and I did this

01:07:37
in cooperation with his estate.

01:07:40
And, you know, that one was again very dialed in, in

01:07:43
terms of, just because it's Crichton, the government,

01:07:47
like exactly which scientists would be going where, how they'd be

01:07:50
chosen. And I mean, locking in those details

01:07:54
is really important for a techno-thriller, you know,

01:07:58
that certain genre of science fiction. Do you give in,

01:08:01
sort of, yeah, to the professional need to be

01:08:04
forward-looking. Do you have any sort of

01:08:08
predictions about where, like, culture is going, or what

01:08:13
you really think will be the more sort of

01:08:15
essential change? Like, I mean, it does feel like, you

01:08:18
know, like self-driving cars and stuff like that

01:08:21
feel close. Certainly in San Francisco.

01:08:24
I mean, I think culturally human beings are always the wild card.

01:08:29
The technology is not that hard to predict usually.

01:08:31
I mean, you can usually see what people are up to 5-10 years out,

01:08:34
you know what they're going for, right?

01:08:37
But then with human beings, you never know, like autonomous

01:08:39
vehicles. I was much more sort of bullish

01:08:41
on that. I thought those things would

01:08:43
already be here, being used in a bigger way.

01:08:47
But as it turns out, one person gets killed and humans freak

01:08:51
out, even though 40 or 50 thousand people are dying every year with

01:08:54
human drivers. But then the robot screws up

01:08:57
once and everybody loses their mind and they shut them all

01:08:59
down. That's like, I didn't really

01:09:01
predict that. I thought we had a higher

01:09:02
tolerance for that, or we cared less. So, like...

01:09:07
So it's really tough to predict the human element.

01:09:09
And I think that with ChatGPT, man, things can go a lot of

01:09:13
different ways, right? The movie Her is interesting in

01:09:18
terms of falling in love with it.

01:09:19
I think that's totally going to happen.

01:09:22
It can. It just tells us what we want to hear, right?

01:09:25
And so I think that that is just so dangerous and just such a

01:09:29
siren song. For instance, the researcher who

01:09:33
thought that, at Google...

01:09:37
Like, something? Yeah, he thought

01:09:39
that program was sentient. And did you just...

01:09:42
I mean, when you read those transcripts, you can just see

01:09:45
he's leading it. He's telling it what he wants to

01:09:48
hear, and he's rewarding it every time it comes

01:09:50
back and tells him a version of what he wants to hear.

01:09:53
It's almost like he said write me a short story where you're a

01:09:56
sentient AI. And it has read all those stories.

01:10:00
I know that it's read mine. And so like, you know, it's

01:10:03
going to jam those things out. And so I mean, think about that,

01:10:07
right? Like if you're just sitting and

01:10:11
you've got sort of mass disinformation campaigns that

01:10:14
are amplifying certain ideas. I mean, man, the

01:10:18
potential for divisiveness I think is just super scary.

01:10:23
And in fact that's probably the biggest thing I'm afraid of is

01:10:27
that we just end up in these crazy echo chambers.

01:10:30
Like, it just becomes a very effective propaganda tool, or, you

01:10:34
know, becomes, yeah, man, really good at just...

01:10:37
And the thing that makes it so dangerous is that it's a mass-

01:10:41
scale, individualized attack. So normally when you scale up to

01:10:46
a mass level, you get a one-size-fits-all type scenario, right?

01:10:50
You know, it's not that effective. In this case, man,

01:10:53
You can scale it all the way and then it's going to be

01:10:56
individualized for each person and it's like that's pretty

01:10:59
crazy. That's pretty scary.

01:11:00
I mean the best case scenario of that is just we end up buying a

01:11:03
bunch of shit we don't need because we're being advertised

01:11:06
to. That's the best case.

01:11:07
We get much better... Yeah.

01:11:09
I mean, it's like advanced micro-targeting. I mean, you know,

01:11:12
exactly, with elections, right? It used to be like, well, at least

01:11:15
you had to use the same message for everybody and then it was

01:11:17
like OK, we can break it down. And then you know with Facebook,

01:11:20
you know, obviously, yeah, there was the micro-targeting.

01:11:23
What you're saying is now it's even easier to actually create

01:11:26
the content for the sort of niche. And the thing is, it's

01:11:29
very human-like, right? So one thing that I'm kind of, I

01:11:33
don't know... After COVID, right?

01:11:36
We all got very used to interacting with machines and

01:11:39
everything that used to have a human element to it.

01:11:43
They're trying to get that out, right.

01:11:44
So it's a service. You want it, I give you the

01:11:47
money, you give me the thing, no chit chat.

01:11:50
And I need it now. And I don't want any bull,

01:11:53
right? And so we did that.

01:11:55
We were doing that online and we're doing that also with

01:11:57
lifelike machines that are talking to us like people.

01:12:01
And then we go out into the real world.

01:12:02
You know, if we're ordering off of like these smart whatevers

01:12:05
and, you know, these chat windows popping up and stuff

01:12:07
like that, I think it's only going to become more of that: human-

01:12:11
like machines selling us stuff. The problem, though, is that then

01:12:15
you go and you know, people are being really rude to like their

01:12:19
baristas and being really rude. Why?

01:12:22
Because we're trained. We can be so mean to, like... we're

01:12:24
trained. It's a classic: like, you're

01:12:26
typing into support, like, go fuck yourself, be human.

01:12:29
It's a machine. And then it turns out it's like

01:12:31
a real person. Yeah.

01:12:32
Oops. Oh, oops.

01:12:33
Yeah, but then also like, oops, right?

01:12:35
Do you worry about,

01:12:37
like, being nice to the AI systems? Like, I don't

01:12:42
know, there's some people like, oh, you should be nice to

01:12:44
it because it could be sentient someday.

01:12:48
I mean, yes, I do say please and thank you.

01:12:51
I got rid of the. I got rid of Alexa.

01:12:53
I have a Sonos now, but I feel like it's slightly less evil.

01:12:57
But well, I say, yeah, I say please and thank you.

01:13:00
Not because I care about it memorizing whether I was polite.

01:13:03
Robots will not give any... they don't care if we're polite.

01:13:06
They're not going to. But I'm modeling an

01:13:09
interaction with a human-like entity to my children and to

01:13:13
myself. And so do I want to be the kind

01:13:16
of person who's a jerk and shouts, you know, at the... No.

01:13:21
Like, I don't want my children to be like that either.

01:13:23
So I say please and thank you. I try not to be.

01:13:26
I mean, sometimes it is. It gets a little bit annoying

01:13:29
whenever it gets everything wrong.

01:13:30
All right, my final question: will AI kill us all?

01:13:35
Like, in the sort of end of

01:13:38
humanity chances, where do you put sort of robots

01:13:42
and AI being sort of the cause of our demise?

01:13:46
You know, I don't. I think again, it's those

01:13:50
psychotic apes. You know, if you're gonna read

01:13:52
a story about the role of machines

01:13:56
in the demise of Man, I would read There Will Come Soft Rains

01:14:01
by Ray Bradbury. And they'll just be keeping on

01:14:05
keeping on trying to do their thing.

01:14:07
And it'll be us. We kill each other in a nuclear

01:14:09
war, but the machines keep going.

01:14:12
Keep on trying to do their thing, man.

01:14:15
Daniel H. Wilson, thank you so much for coming on the show.

01:14:17
It was really great to chat with you.

01:14:19
Cool, man. It was a pleasure.

01:14:21
That's our episode. Thanks

01:14:22
So much to Max Child and James Wilsterman of Volley, my co-

01:14:25
hosts. I'm Eric Newcomer.

01:14:27
Shout out to Scott Brody, our producer; Riley Kinsella, Chief

01:14:32
of Staff for Newcomer; Gabby Caliendo at Volley, who's

01:14:35
helping so much with the conference behind the scenes.

01:14:38
Oh, and of course to young Chomsky for the wonderful theme

01:14:41
music. This episode is part of the

01:14:43
Cerebral Valley series that I'm doing on the Newcomer Podcast.

01:14:49
You can follow along on my Substack at newcomer.co where I'm

01:14:53
publishing each episode, or you can follow it on YouTube.

01:14:57
Or Apple Podcasts. Or wherever you get your

01:14:59
podcasts. Thanks so much.

01:15:01
Goodbye. Goodbye.