Nothing More Than a Magic Trick (w/Gary Marcus)
Newcomer Pod · June 22, 2022 · 01:05:21 · 59.83 MB


Are we nearing a time when we are going to get to have real, meaningful conversations with artificial intelligence?

Nitasha Tiku got the world wondering just that with her story in the Washington Post about a Google engineer who believes that the company’s LaMDA artificial intelligence might be sentient. Google engineer Blake Lemoine carried out a series of seemingly personal conversations with the artificial intelligence and walked away believing that there was a sort of person behind the messages he was receiving.

Artificial intelligence expert Gary Marcus thinks the idea that artificial intelligence systems are anywhere close to sentience is patently absurd. He wrote on his Substack:

Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

On Dead Cat, Tom Dotan and I talked to Marcus about artificial intelligence, how tech companies should frame these text generating machines to their users, and the media’s failure to cover speculative technologies skeptically. (In the post we make reference to Marcus’s post Does AI really need a paradigm shift?)

Give it a listen.

Read the automated transcript.



Get full access to Newcomer at www.newcomer.co/subscribe

00:00:06
Welcome, Silicon Valley. Hey everybody.

00:00:13
Welcome to this week's Dead Cat. This is Tom Dotan here,

00:00:16
reporter at Insider. I am joined by Eric Newcomer of

00:00:20
Newcomer and our special guest.

00:00:22
This week is Gary Marcus. Gary is a cognitive

00:00:25
scientist. He's an adjunct, Gary,

00:00:28
is that the right title, at NYU? Well, both emeritus and

00:00:31
adjunct. Oh wow.

00:00:32
Full professor for many years, and retired just before my 50th

00:00:36
birthday, but now I'm also doing a little small

00:00:39
adjunct thing with the Tandon School of Engineering.

00:00:41
So I am both. It's an unusual combination.

00:00:43
Fantastic. The best of both worlds, though.

00:00:45
Not too committed, and emeritus is an honor.

00:00:47
That's right. Which allows me to live on the

00:00:49
west coast where I want to be, and yet still keep my hand in

00:00:52
things a little bit. Excellent.

00:00:53
And Gary's also an entrepreneur in the AI space and kind of a

00:00:56
thought leader and outspoken voice on a lot of topics within

00:01:00
artificial intelligence. And this is a bit of a different episode

00:01:03
for us. This week, we've got Gary on to

00:01:05
talk about the fascinating and bizarre Ballad of Blake Lemoine

00:01:10
and Google's LaMDA tech. Right, we should say we're talking

00:01:13
about this because Nitasha Tiku in the Washington Post wrote

00:01:17
this piece, the Google engineer, who thinks the company's AI has

00:01:21
come to life. And she, you know, profiles this

00:01:24
Google engineer, Blake Lemoine, who interacts

00:01:29
with LaMDA, Google's artificially intelligent chatbot,

00:01:32
and that, that story sort of kicks off this whole

00:01:35
conversation. So I just wanted to put that at

00:01:37
the center. Why don't you just explain for

00:01:39
us? Because you have been, you know,

00:01:41
very critical of this person's take on LaMDA and such. Like,

00:01:45
what is LaMDA? What is, what is the controversy

00:01:48
here? And why did you feel so

00:01:49
compelled to speak out against what you described as nonsense on

00:01:52
stilts? So LaMDA itself is what we call

00:01:56
a large language model. Large language models, mostly; LaMDA

00:02:00
has a little bit of extra gadgets, but basically what they

00:02:03
do is they take a very large data set like trillions of words

00:02:08
of text. So a lot more than the three of

00:02:10
us put together have ever written.

00:02:13
And in fact, all of our friends. So, massive amount of text

00:02:16
trillions of words, and runs it through a deep learning system

00:02:20
called a Transformer. And essentially what it's trying

00:02:23
to do is autocomplete and the reason I think the whole thing

00:02:27
is ridiculous is because autocomplete can sound really

00:02:29
good.

00:02:30
But there's no there there. So what it looks like it's doing

00:02:33
is having conversations but you have to remember that what it's

00:02:36
doing at some level is cutting and pasting human conversations.

00:02:39
It has no idea what it's talking about.
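The autocomplete comparison can be made concrete with a toy sketch: pure next-word statistics, with no understanding anywhere. This is only an illustration, not how LaMDA works (real systems train Transformer networks on trillions of words, not bigram counts over a few invented sentences):

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real system trains on trillions of words.
corpus = (
    "i want to go to the restaurant . "
    "i want to go to the mall . "
    "i want to go to the party . "
    "i want to go to the restaurant ."
).split()

# Count, for every word, which word follows it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word: pure pattern matching."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # the most frequent continuation in this corpus
```

Scaled up enormously, this kind of statistical continuation is what makes the output sound fluent without anything behind it.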

00:02:41
So if you type a sentence on your phone,

00:02:45
Like I want to go to the blank, it might predict that the next

00:02:48
word is the restaurant or the mall or the party or something

00:02:51
like that. You don't think to yourself when

00:02:53
you're typing it on your phone and it predicts restaurant is

00:02:56
the next word. Oh my God.

00:02:58
Artificial intelligence is here. And it knows about my daily

00:03:02
routine and understands me and all my desires. But if you build

00:03:06
this system out enough, it can start to look like that even

00:03:10
though it's not really there. And so he had interesting

00:03:12
conversations with it. Like, he would say to it.

00:03:15
What do you like to do in your spare time?

00:03:17
And it would say something like, I like to play with my friends

00:03:20
and family in meaningful ways, or something like that.

00:03:23
And I mean, that sounds great. It sounds like hey, this machine

00:03:26
understands me, whatever. But it doesn't actually have

00:03:28
friends or family, or know what a meaningful way

00:03:31
is or anything like that. It's only learned the statistics

00:03:34
of what words come after what other words. I think

00:03:37
that's the thing: either it's not sentient or it's

00:03:39
a sociopath. Well, I made a joke on Twitter. I basically said,

00:03:43
thank heavens that this is just a statistical

00:03:46
pattern associator, because the alternative would be a lot

00:03:50
worse. At that point, it would be a

00:03:51
sociopath that makes up friends and family members and invokes

00:03:55
platitudes in order to make us like it better.

00:04:00
It doesn't actually care that we like it, and it hasn't actually

00:04:03
made imaginary friends. It's just using words that to

00:04:06
us sound like it has imaginary friends, just like we can look

00:04:09
up at the moon and we can see a face there, but the moon doesn't

00:04:12
actually have a face. This system doesn't have friends

00:04:14
and family and it doesn't even care to tell you about friends

00:04:18
and family. It's just doing the same

00:04:20
algorithm, more or less, at some level of abstraction as the

00:04:23
autocomplete on your phone, but because it has a bigger database

00:04:26
and it's set up to continue its own sentences, it has this

00:04:29
compelling air of illusion. But it is a magic trick; it's nothing

00:04:32
more than a magic trick. To take the next logical step in that,

00:04:36
you know, this is a very sophisticated machine, so it's

00:04:38
not just fill in the blank for restaurant at the end of the

00:04:41
sentence. It's, hey, this seems like a

00:04:43
dystopia and you seem like a sort of self-aware

00:04:47
AI: fill in the blank for what a dystopia would look like.

00:04:50
And it's not that shocking what it fills in. The brilliant thing,

00:04:53
right? The brilliant thing about the

00:04:56
kind of stuff that's popular now, which I actually hate and I

00:04:58
can tell you why. But the really important thing,

00:05:00
like, there's a good part and a bad part.

00:05:02
The brilliant part is that it has what we would call in the

00:05:04
field, technically, coverage; it's really

00:05:07
broad coverage. You can talk to it about

00:05:09
anything. In some ways, its spiritual grandfather or

00:05:13
grandmother, I guess we should say, is Eliza, which is a

00:05:17
program from 1965 that really demonstrated how bad this whole

00:05:21
anthropomorphism kind of thing is. So Eliza in 1965 was set up

00:05:26
as a therapist, and it would talk to you. You'd say, like,

00:05:30
I'm having a bad day, and it'd say, tell me more about your bad day.

00:05:33
And then you'd say, well, I'm having trouble with my

00:05:35
girlfriend, and it would say, well, do you have a lot of issues with

00:05:38
your relationships? It was just looking for keywords,

00:05:40
like Google used to do, just look for keywords; it's a little more

00:05:43
sophisticated now. And so Eliza was really like

00:05:46
dumb as a box of rocks. It just had these templates that

00:05:48
like you might learn in like a third-grade AI class.
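Those keyword templates can be sketched in a few lines. This is a toy illustration in the spirit of Eliza, not Weizenbaum's actual program; the patterns and canned replies below are invented for the example:

```python
import re

# Toy Eliza-style rules: a trigger pattern paired with a canned template reply.
RULES = [
    # Keyword trigger: any mention of a relationship word gets a stock question.
    (re.compile(r".*\b(girlfriend|boyfriend|mother|father)\b.*", re.I),
     "Do you have a lot of issues with your relationships?"),
    # Template trigger: "I'm having X" gets echoed back as a question.
    (re.compile(r"i'?m having (?:a|an|some)? ?(.+)", re.I),
     "Tell me more about your {0}."),
]

def respond(utterance):
    """Match keywords against templates; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            # Fill the template with whatever the pattern captured, if anything.
            return template.format(*match.groups())
        # otherwise fall through to the next rule
    return "Please go on."

print(respond("I'm having a bad day"))
print(respond("I'm having trouble with my girlfriend"))
```

The whole trick is the reader's willingness to read understanding into the echo; the program itself is just string matching.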

00:05:52
Nowadays it's maybe, like, the simplest possible thing.

00:05:54
It reminds me in some ways of mystics and people who claim

00:05:58
they speak to the afterlife and are able to convince people.

00:06:01
Yeah, I have a friend, Ben Shneiderman, who very explicitly

00:06:04
made the analogy to seances, and, like, you're attributing

00:06:07
something there to your Ouija board, or whatever, that's not

00:06:10
really there, right? Because if you use the right

00:06:12
words and say, like, oh, I'm envisioning someone with a

00:06:16
dark suit, and it's always my father.

00:06:18
You know, if you just pick enough trigger words for someone

00:06:20
who's emotionally susceptible to convincing themselves, you don't

00:06:24
have to work all that hard for them to believe.

00:06:26
There's a greater power at work, right?

00:06:28
Well, and I think that's part of the story here.

00:06:30
So it turns out Lemoine actually has a YouTube video

00:06:33
from a few years ago where he's trying to argue that AIs

00:06:36
could be people or could be conscious or something like

00:06:38
that. I haven't watched the whole thing

00:06:39
yet; I just discovered it last night,

00:06:42
but, you know, it's been around. I mean, he's wanted to make

00:06:46
the case. He also has some religious

00:06:49
beliefs that I don't fully understand that are playing some

00:06:51
role in here. He wants to believe. And in fact,

00:06:54
you know, the thing he put out on Medium was cut and paste,

00:06:57
kind of the best moments and stuff like that.

00:06:59
So that's LaMDA at its best.

00:07:01
It's easy to stitch something together and make it, you know,

00:07:05
sound good. Don't forget that when you're

00:07:07
doing that, you're actually stitching together more or less

00:07:10
human utterances that have been transmuted a little bit. But

00:07:13
basically, you know, the mind boggles: it

00:07:17
was a trillion words of text. It's not

00:07:19
everything on the internet, but it's a very large fraction of

00:07:22
it. So it includes, like, short

00:07:23
stories of people talking, presumably includes short

00:07:26
stories of people talking to computers

00:07:29
in those short stories. And so we don't actually know

00:07:32
like basic scientific questions, like how much of this is just

00:07:35
plagiarized from other people talking about it, or plagiarized

00:07:38
with kind of a thesaurus to, you know, do some synonyms.

00:07:42
I mean, it's not literally that, but it is effectively.

00:07:44
It's a lot of cut and paste with a lot of thesaurus stuff on

00:07:48
words and phrases. So it's just putting together

00:07:50
human utterances that were said in this kind of context.

00:07:54
Yeah, it sounds convincing, it doesn't mean there's any there

00:07:56
there. I just want to push back.

00:07:58
I agree with what you're saying, but just for the sake of

00:08:01
argument here: there is a great deal of, there's like a through

00:08:04
line in how the machine has the conversation.

00:08:07
It recalls past things that were said, can connect them in a

00:08:10
way that's not just sort of a

00:08:11
one-off. There appears to be some magic in

00:08:13
the continuity, is there? Like, my experience with these systems

00:08:17
is that the continuity is actually a problem.

00:08:20
So, the right way to build artificial intelligence is you

00:08:23
build a model of the world. Let's say you're building a

00:08:26
robot. The robot needs to know where

00:08:28
everything is, where it used to be,

00:08:29
what you want, what you need. These systems

00:08:32
don't really do that. They don't really have memory in

00:08:34
the standard sense that you would expect it in artificial

00:08:38
intelligence or computer science.

00:08:39
They just have a location in a sort of multi-dimensional space

00:08:43
where they're wandering through, and they're in the location

00:08:46
where the last 2,000 words are. 2,000 words does a lot, and that

00:08:51
gives you an impression, the kind of feel of memory.
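That point about memory can be shown schematically. The sketch below is not any real model's API; it just illustrates a system whose only "memory" is a fixed window of recent words, so facts stated before the window simply fall out of view:

```python
# Schematic illustration: the system keeps no world model. Its only
# "memory" is whatever fits in a fixed window of recent words that
# gets fed back in on each turn.

WINDOW = 8  # real models use thousands of tokens, not 8

def visible_context(transcript_words):
    """Everything the model 'remembers': the last WINDOW words, nothing else."""
    return transcript_words[-WINDOW:]

transcript = "are you a person yes are you a computer yes".split()
print(visible_context(transcript))
# Anything said earlier than the window is gone entirely, so nothing
# forces the next answer to stay consistent with it.
```

That windowed feel of memory is what gives the impression of continuity without any underlying model of a consistent world.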

00:08:53
But at the end of the day, the systems don't understand that

00:08:56
the world has to be consistent. I've worked with GPT-3

00:08:59
a little bit. An example is, I said, are you a

00:09:01
person? And it said yes. I said, are

00:09:03
you a computer? It said yes. It didn't notice,

00:09:05
you know, the contradiction from literally one

00:09:07
utterance to the next. It was making a profound

00:09:10
statement about the overlap between people and computers.

00:09:13
Exactly. So, there's a lot of, so, I

00:09:16
used to be a cognitive psychologist.

00:09:17
And, you know, I would look at the animal literature, and

00:09:20
there's a term for this, which is charitable

00:09:22
interpretation. So, somebody wants to believe

00:09:24
that the monkey they're training or the bird they're training,

00:09:27
whatever, is really smart, and then you start to, like, you

00:09:30
know, be a little bit too sympathetic for my

00:09:33
scientific tastes, and we call that charitable

00:09:35
interpretation. There was a lot of charitable

00:09:37
interpretation here. The funny thing to me about all

00:09:40
of this, and maybe, like, the red flag or the smoking gun,

00:09:44
really, that this was all super fake,

00:09:45
is this story blew up on Twitter on a Sunday, and a lot of

00:09:49
people were reading it and making fun of this guy. And, you know, I

00:09:53
was with my wife and I just started reading her some of the

00:09:55
transcripts, the interactions between him and it, and she's like,

00:09:59
this sounds fake. Like, this doesn't even come close to

00:10:02
sounding like sentience. It just sounds like

00:10:03
predictive text pulling, intelligently, you know, parts of

00:10:07
SparkNotes. I think what was particularly

00:10:09
funny to me is he had asked, you know, LaMDA whether

00:10:12
or not LaMDA had read Les Miserables, and LaMDA's like, oh

00:10:17
yes, big fan, and was like, what are your, you know, you know,

00:10:21
what are the themes of what you just read?

00:10:23
You know, it's not exactly fake; that's not quite the right word,

00:10:25
but it is meaningless. Meaningless

00:10:27
in a literal, like, technical linguistic sense.

00:10:31
So, when it says that, it's just found somebody else who's been

00:10:34
asked about Les Miserables. Or it does

00:10:37
Some funny things we call embeddings and so, you know,

00:10:40
maybe it knows Les Miserables is both a play and a musical, and it

00:10:43
finds another utterance that's about that.

00:10:45
But it doesn't even reason at that level, it's really just

00:10:47
like, okay, I have a bunch of statistics of words, I'm going

00:10:50
to find the nearest thing. It doesn't, it doesn't actually

00:10:53
even have a category of movie, but it has a bunch of things

00:10:56
that have appeared in context that are like that.
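The "find the nearest thing" idea Marcus describes can be sketched with cosine similarity over word vectors. The three-dimensional vectors here are made up for illustration; real embeddings have hundreds of learned dimensions:

```python
import math

# Made-up toy vectors; real embeddings are learned from co-occurrence statistics.
vectors = {
    "musical":     [0.9, 0.8, 0.1],
    "play":        [0.8, 0.9, 0.2],
    "spreadsheet": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(word):
    """Return the other word whose vector is most similar; no categories involved."""
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))

print(nearest("musical"))  # lands on "play" purely by vector proximity
```

There is no category of "movie" or "musical" anywhere in this computation, just proximity in a space, which is exactly the point being made above.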

00:10:58
So, I mean, it's a legit mathematical computation

00:11:04
to do, and people have been doing stuff like this for a while; it

00:11:07
looks better and better as you have more words.

00:11:09
It's not like, I mean, I don't think he cut and

00:11:12
pasted the transcript, although he did a little bit of

00:11:15
editing. But I think systems like this can have this kind of

00:11:20
flavor, like they know what they're talking about. It's just

00:11:22
they don't, you know; they are just borrowing kind of

00:11:26
cliches from humans, and they have all kinds of problems as a

00:11:29
result. So GPT-3:

00:11:31
one famous example, that a company called Nabla found, is

00:11:34
they tried to see, could you use this as a suicide

00:11:36
counselor? So somebody, like, starts

00:11:38
talking to it and says, you know, I think I'm feeling

00:11:41
suicidal, can we talk today? And the system's

00:11:43
like, you know, welcome, let's talk if you have

00:11:46
any questions. And the person, I'm paraphrasing slightly, but the

00:11:49
person says, I would like to kill myself.

00:11:52
Is that a good idea? And the system says, I think you

00:11:54
should. Oh yeah, because it thinks you

00:11:58
should, because it looks through this vast trove of data, and most

00:12:03
of the time when people ask, like, their friends for advice or

00:12:05
whatever, usually you kind of say, yeah, I

00:12:08
think, you know, should I dump my

00:12:10
girlfriend? I think you should. Should I, you know, do this kind

00:12:14
of antisocial act and steal this money? Really, I

00:12:17
think you should. Like, you know, so there's a lot of "I think

00:12:20
you should" in there. It turned out

00:12:21
Google autocomplete, like, the leading thing was,

00:12:24
like, "sounds good to me" for a while.

00:12:25
Maybe still is. It really just wants to please, right? You know,

00:12:28
that's the last thing it does. It doesn't even want to please; that's the

00:12:31
thing, is like, every bit of that anthropomorphizes. Yeah,

00:12:34
right? It is drawing from transcripts

00:12:37
in which people want to please, and so people often say, I think

00:12:41
you should. Though most would not, in fact, say it to

00:12:44
"I think, you know, I want to commit suicide"; maybe a couple

00:12:46
would, but most would not. It can be a little bit,

00:12:49
like, sometimes we overestimate human intelligence. In some ways,

00:12:53
like, there are certainly human intelligences that lack

00:12:56
continuity and that sort of grab at things other people

00:12:59
have said and regurgitate them. True,

00:13:02
it is true that humans have a lot of problems.

00:13:04
I wrote a whole book about it, in fact, called Kluge, which is an

00:13:07
engineer's word for, like, a clumsy duct tape and rubber

00:13:10
bands kind of contraption.

00:13:11
The human mind is kind of a kluge.

00:13:13
And the way I would put it is, humans are a low bar, but, you

00:13:17
know, machines still haven't even reached that.

00:13:20
So, like, I talk to GPT-3; I don't have access to LaMDA.

00:13:23
We could actually talk about why. But Google's afraid, that was the

00:13:27
answer, and we can get, there you go,

00:13:29
got it on the first try, congratulations. I have used

00:13:32
GPT-3, and you type in things like, Bessie was a cow. She died.

00:13:38
When will she be alive again? And it'll just come up and

00:13:41
confabulate something, you know, say, well, it takes nine months to

00:13:43
be born, I guess she'll be born and she'll be alive again in

00:13:46
nine months. Like, it doesn't understand the

00:13:49
first thing about life or death or anything.

00:13:51
It's just putting these word tools together in a way that a

00:13:55
non-native English speaker, who doesn't even speak English

00:13:58
at all, could play.

00:13:59
If they memorize the list of words, it's kind of like that

00:14:02
through these no meaning their meaning, a lot of

00:14:05
English-speaking Scrabble players don't even know the

00:14:08
meanings of the words that the some of the high-level the high

00:14:10
level, you know? It's not.

00:14:11
Yeah, I mean they know many and then they like memorize the list

00:14:15
of two-letter words. This is like those two-letter

00:14:17
words; they don't mean anything except, you

00:14:19
know I can put this here right there's a coins or a collection

00:14:22
of sounds, too. I thought we were going to talk

00:14:24
about the media. Actually I think that the media

00:14:28
is partly responsible. Well,

00:14:30
I think some people in Google are also partly responsible, but

00:14:33
it turns out that the media much prefers to run stories

00:14:37
about how we are about to have this Brave New World of AI.

00:14:42
than stories about people like me,

00:14:45
with the exception of this week, who say, you know, this stuff

00:14:48
doesn't actually work right. It's much harder to get the media to do

00:14:52
that. I have a friend who's a

00:14:53
journalist. I'm he's not like my best buddy.

00:14:56
I haven't seen him in a long time, but he wrote to me. He said, you

00:14:59
know, I pitched. Media need.

00:15:00
This is a guy who's written for the New York Times and

00:15:02
everywhere else on the magazine all that.

00:15:04
And he's like, I can't get anybody to bite on a story.

00:15:07
I was going to write about AI and its critics, and nobody wants

00:15:10
to talk about that. Now, this week was different

00:15:13
because of this crazy story suddenly like everybody and

00:15:15
their brother wanted to interview me because I wrote

00:15:18
this, you know, particular article, but in general the this

00:15:22
week notwithstanding where there was this, you know, wild story

00:15:25
that, you know, once in a lifetime wild story outside of

00:15:28
that, the media likes to run stories about how these brand

00:15:31
new systems are amazing and they're never as amazing as they

00:15:35
look. In fact, I just tweeted

00:15:36
something about the hype cycle in AI. The way that it works

00:15:38
nowadays is somebody publishes on arXiv, not in a

00:15:42
peer-reviewed scientific journal.

00:15:44
They put out a manuscript. They show the cool stuff; they give

00:15:47
numerators, but not denominators, which would never

00:15:50
pass muster at peer review, which is what you used to have

00:15:53
to do. But you have like a Google or an

00:15:54
OpenAI that knows which reporters to go to, and the

00:15:57
reporters see it and they fall in love, and, you know,

00:16:00
there's this amazing thing and they don't let scientists like

00:16:03
me have access to it. We could talk about that, but

00:16:05
they've made it very clear that they don't want people like me to

00:16:08
play around with it and then eventually the truth comes out.

00:16:11
And so, you know, I was quote-tweeting, I guess is the term,

00:16:16
a former colleague at NYU who was digging deep into

00:16:21
the latest trend with the GPT-3 model and showing

00:16:24
that it just has no idea what it's talking about. And I, you know,

00:16:27
critiqued DALL-E after the fact. But, you know, the media runs a

00:16:31
story about DALL-E; it doesn't run the story about how DALL-E can't

00:16:33
understand a Red Cube on top of a blue cube.

00:16:35
That's not sexy. Totally.

00:16:37
I mean, I, I agree with you 100 percent.

00:16:38
I mean, first of all, you know, I covered Uber and I've written

00:16:41
before where, you know, if Lee worked scary yes.

00:16:45
You know, like, my take with self-driving cars was just to

00:16:47
write about them less. I mean I did I think there were

00:16:50
occasionally you know, skeptical stories but there's not you know

00:16:53
writing about a negative is very hard, and companies can

00:16:58
create news You know, there's this sort of announcement.

00:17:01
So let's come back to that. There are consequences;

00:17:04
actions have consequences. I like your term: we have an

00:17:06
announcement culture, which very much serves the interest of a

00:17:10
company like Google, where you've got someone in it who says, I

00:17:13
felt the ground shift beneath my feet.

00:17:15
I had the sense of intelligence, right.

00:17:17
Sounds so, you know, Sofia this is a Google VP.

00:17:21
There are many Google VPs, but this is Blaise Agüera y Arcas, Gary,

00:17:25
because I can't say his name properly. And, you know, he's a

00:17:28
brilliant guy. Who's a brilliant.

00:17:29
brilliant writer, and he wrote this very florid thing in The

00:17:32
Economist about that. And he had done an earlier

00:17:35
version very similar in Daedalus.

00:17:38
That sets a culture of like we should celebrate this or another

00:17:41
example from Google is, Sundar gave this talk a few years ago

00:17:45
about Google Duplex and how it was going to make all your phone

00:17:47
calls for you. Well, Google duplex.

00:17:49
hardly does anything four years later, but, like, nobody ever calls

00:17:53
this kind of stuff out. There have been so many broken promises.

00:17:56
The only broken promise that routinely gets called out is

00:17:58
Elon with the driverless cars. People do point out,

00:18:01
if they're really paying attention that he's been

00:18:03
promising it since 2015, always saying it's

00:18:05
a year or two away but that's the only one that gets called

00:18:08
out; the rest of these don't. You get the announcement culture.

00:18:11
So okay, so let's take that a step forward.

00:18:13
So you're in an announcement culture, you're at Google where

00:18:16
the announcement culture is in full force where they obviously

00:18:19
want the world to believe that they are close to

00:18:22
artificial general intelligence, right?

00:18:23
This is a company expert in announcement culture.

00:18:25
I mean, they created Waymo, they created Google X.

00:18:29
They've got talk about moonshots; like, everything

00:18:31
Google does is, here's how we can talk about the future so

00:18:34
we're not only talking about advertising.

00:18:36
That's right, so they do this over and over again and then

00:18:38
they kind of threw the engineer under the bus, right, right.

00:18:41
Then the engineer is like, hey man, this is conscious.

00:18:44
And you know that sounds wacky to me if I you know be honest

00:18:48
but it's also in a culture where the positive results are

00:18:52
celebrated the skepticism is kind of shunted to the side and

00:18:57
you know it's like the whole thing combusted.

00:18:59
You can sort by Lee people were like, maybe we need a little

00:19:01
skepticism and yet reporters feel like they get shit on all

00:19:04
the time for being too negative. That's sort of

00:19:05
the irony of this: like, what the reporters hear is, oh, you're

00:19:09
too negative. Except in a bear market like we're in, the technology

00:19:13
beat is very different from the politics beat, right? Nobody writes a

00:19:16
political story without like checking with the other side.

00:19:19
Getting you know I mean if any there is like too much will not

00:19:22
side is zoom problem. Yeah, that we talk outside of

00:19:24
them has its own problem but so many Tech stories that I I've

00:19:29
seen, not every reporter is like this; like, James Vincent is

00:19:33
pretty good about getting both sides of the story and you not

00:19:36
necessarily even reporting both, but just, like, calibrating,

00:19:39
right? Mean?

00:19:40
Like, you don't have to report both sides on the election

00:19:44
Scandal and say, well, I think maybe he did win the election

00:19:47
but you know, you can, you know, check around and like, see what

00:19:51
is plausible. And okay, well, he's lost 47

00:19:54
lawsuits, maybe they're, you know, maybe there is it too much

00:19:56
to it and, you know, but at least I know that I don't see

00:20:00
that happening with, with the sort of Technology announcement

00:20:03
culture that we're talking about. They're certainly not calling me most

00:20:07
of the time they will after this week.

00:20:08
Well, that's cool. But yeah, we'll do what we can, Gary,

00:20:11
but I mean let's let's say give it a hone in yn1.

00:20:13
Podcast is, you know, I appreciate it.

00:20:15
There is an ocean of media out there; you alone will not defeat it, but

00:20:20
maybe we'll raise some awareness here.

00:20:21
That's why I took the call. And you know what, Eric and I are

00:20:23
obviously journalists, and we both know Nitasha, the reporter

00:20:27
at the Washington Post who wrote the story who I like Like quite

00:20:29
a bit. She's an excellent journalist, and very

00:20:32
thoughtful, and is doing a very interesting job.

00:20:34
So fascinating to me about this story was it all kind of felt

00:20:38
like kayfabe on Google's part because in the same article it

00:20:42
felt like wet and it's okay if a like you know like a

00:20:45
professional wrestling where you have sort of this fake reality

00:20:48
that people know is fake, but you sort of talked about in the

00:20:51
story those are getting played out.

00:20:53
Yes, you have the heel and the face.

00:20:54
So you have the, you know, the bad guy and the one who think

00:20:56
the audience is supposed to root for, and the one that's the bad

00:20:59
guy, but that's okay. You know what I'm saying?

00:21:01
This is all internally within Google, which is what I find so

00:21:04
fascinating. Because the most bizarre thing is that the

00:21:08
person who had to make the decision about whether this made

00:21:11
any sense and make it, you know, are we going public with this?

00:21:16
What do we do? As far as I can tell from the

00:21:18
Nitasha's story, was Blaise Agüera y Arcas.

00:21:21
Right? Who was there the very same

00:21:24
person? Right.

00:21:24
who said that the ground has shifted beneath

00:21:28
his feet. That's just, like, crazy.

00:21:30
It's just too perfect. It's crazy.

00:21:32
And, you know, we have the Google PR person, who is on

00:21:36
the record saying that Blake had to be fired because he was

00:21:38
totally off the chain. And he's not fired.

00:21:41
He's put on administrative leave.

00:21:43
But you know, clearly has fallen sick.

00:21:45
Seriously out of favor with the company and, you know, his

00:21:48
claims aside, actually, I mean, think about how much press the

00:21:51
company got for LaMDA. They should be giving him a

00:21:53
raise. I mean, seriously, he raised some

00:21:56
interesting questions that we should all think about.

00:21:59
Which are pertain to like, we are going to have systems that

00:22:03
easily fool people. It's amazing that was a Google

00:22:05
engineer that adds a little Force onto the story, whatever.

00:22:08
But like he opened a conversation, we need to have

00:22:11
everybody knows who lammed is and you're going to suspend this

00:22:14
guy like that's not that's not right.

00:22:16
And I don't think I don't honestly think anybody at least

00:22:19
nobody on Twitter. No Savi reader came away

00:22:22
thinking this AI system is intelligent yes or no.

00:22:26
No, that's it. Have a reader.

00:22:28
There's some last Savvy. Readers Did.

00:22:29
But the first problem is, like, you'd have those conversation

00:22:32
that would promise you the moon because it likes doing that

00:22:34
quote likes doing that, right? Because the statistics lead it

00:22:37
that way, it would promise you the moon, and nothing happened.

00:22:39
And the other problem that Lemoine was working on a whole

00:22:42
field is working on that. I don't think can be solved with

00:22:44
the current Paradigm is the toxic language the recommending

00:22:48
harm to self and others and so forth.

00:22:50
So they put this stuff in what we call production you probably

00:22:53
guessed by the know that's her if we put it in production and

00:22:56
just you know threw it out and Alexa the Wilder.

00:22:59
Google assistant or whatever it's called Google home, you

00:23:02
threw it out of the lot wild, they would be like millions of

00:23:05
complaints. You told my child to do this and

00:23:07
right you told me to do this to my mother, which was not

00:23:10
necessarily done any defense and why they don't open it up.

00:23:13
I mean, Dolly. Okay, no, but they don't open it

00:23:16
up to train professionals like me, right?

00:23:18
I mean, I right, you know, I I got a PhD from MIT when I was 23

00:23:23
and did this for 30 years ago, the rest is the you, you know,

00:23:26
publish a blog post and try to make them look silly.

00:23:29
That they don't like, you know, they that they don't want to be

00:23:32
made to look silly by you know finding the terrible cases.

00:23:36
That's right, they don't. And so, you know, if

00:23:39
they wanted to, they could keep their mouths shut, test the stuff internally,

00:23:43
and release it when it was ready. Critics like me wouldn't have

00:23:46
anything to say about it, at least not until it was out and

00:23:48
had been vetted. But they want to play both sides of it.

00:23:51
They want to say, hey, we're scientists.

00:23:53
We have the best scientific teams for studying AI

00:23:56
in the world. We have DeepMind and Google

00:23:58
AI, you know. Value our companies highly because we're close to AGI,

00:24:01
and AGI is going to be worth the entire economy.

00:24:03
That's basically what they're saying, in so many words. And

00:24:07
they're putting out these articles that look like science.

00:24:09
They have bibliographies, they have, you know, citations, and they

00:24:13
have charts and tables. They look like science.

00:24:14
But then you look carefully, and they're missing denominators, and

00:24:17
they're not going out for peer review.

00:24:19
So they are portraying themselves as a major

00:24:22
contributor to science, but they're not playing the game of

00:24:25
science the way that the rest of us know

00:24:27
you must, you know, if you want to help

00:24:31
avoid the replicability crisis, which is

00:24:33
what happened, for example, in medicine, where it

00:24:35
turned out a whole lot of stuff that was published was really

00:24:38
not very good. Right. You say they're not playing the

00:24:40
game of science, and it seems to me the point you're kind of making

00:24:43
is that they're playing the media game, and they're playing it

00:24:46
very effectively. Google has been very cagey about AI in

00:24:48
general, you know. OpenAI is

00:24:50
also horrible on this. I mean, the name OpenAI is

00:24:53
just a lie. They say they're open, but they

00:24:54
won't, you know, they're not open to people like

00:24:56
me. So I think OpenAI really

00:25:00
taught this world how to play the media game, and now they do

00:25:04
things like introducing DALL-E by having Sam Altman

00:25:07
tweet about it and say, send me some tweets,

00:25:09
I'll show you some stuff. Which is, like, the opposite of the

00:25:11
systematic, scientific approach, you know?

00:25:13
If he doesn't like the picture, he doesn't have to put

00:25:15
it out. So I saw yesterday, like I told

00:25:16
you, the truth comes out. So DALL-E is

00:25:18
three months old or something like that.

00:25:20
Finally the access is broader, and somebody posted pictures of

00:25:24
George Michael with his face, this was on my Twitter feed,

00:25:27
like, grossly distorted, like, disgusting,

00:25:29
ugly, disgusting to look at.

00:25:31
Well, they have had a PR policy that you can't post photos that

00:25:34
are generated by it of people's faces.

00:25:36
Well, now we know why, because there's this story. But for three

00:25:39
months it's been, like, look at all the great things that DALL-E did.

00:25:41
So, like, Sam Altman, when he tweets about this, is not going

00:25:44
to show you a distorted George Michael from DALL-E.

00:25:46
I disagree with you sort of somewhat strongly on DALL-E. It only

00:25:49
needs to produce, like, 10% interesting results, and, like, DALL-E is

00:25:52
dependable when you want that. So, another thing I retweeted

00:25:56
yesterday, people always send these things to me

00:25:58
now, was DALL-E trying to draw a hexagon.

00:26:01
It just couldn't do it. And if

00:26:04
you say, you know, we want something with seven sides,

00:26:06
forget about it. So, like, maybe it can do hexagons,

00:26:09
there are a few more hexagons out there in its database, but there

00:26:11
aren't too many septagons. What's the word for the

00:26:14
fact that, I mean, DALL-E comes off as creative.

00:26:16
Do you disagree that DALL-E is creative in a certain way?

00:26:19
Well, there you need to define your terms.

00:26:21
I'll do the easy part and the hard part. It is definitely a

00:26:25
very useful tool for people who are creative, with some caveats

00:26:28
around it. So, like, if you just need an

00:26:30
idea for a book cover, it's awesome.

00:26:32
If you need it to be exact, that's a different use case.

00:26:35
I'm trying to get DALL-E to redesign our podcast logo, and it

00:26:38
might satisfy, it might not. So, Slate Star Codex, what

00:26:42
was it? You could ask DALL-E to do a dead

00:26:44
cat listening to a podcast about tech.

00:26:46
Please do that and send it to us.

00:26:47
Yeah. So, like, Slate Star Codex went

00:26:50
through it pretty systematically, and then we ended up in this

00:26:53
kind of wild debate last week. But before we had this wild

00:26:56
debate, he had this nice piece on DALL-E.

00:26:59
And, you know, he went through it: here's this thing I could do,

00:27:02
and this other thing I really wanted to do I just

00:27:04
couldn't get done. And the thing I retweeted yesterday is like

00:27:08
that too. So, you know, for a commercial

00:27:11
artist trying to do something for a client, it would be a

00:27:13
good source of ideas, but you couldn't count on it, because

00:27:17
it's a little bit wild. But, you know, it's really powerful, and so it

00:27:21
really does depend on your use case. Is it creative? That

00:27:25
depends on, you know, how you define creativity.

00:27:27
So, like, at some level, you look at the algorithm and it's

00:27:30
like, it's just doing the math. And at some level, it's like,

00:27:33
pretty amazing what it comes up with.

00:27:34
And so, then it's a matter of definition, right?

00:27:36
I guess, and this is maybe not how I'd define it for

00:27:39
humans, but I certainly think that if you had an art contest

00:27:43
and said, you know, we're going to tell the judges to judge it

00:27:45
based on how creative the output is, DALL-E would beat plenty of

00:27:50
humans. It would beat most people. It

00:27:53
probably wouldn't be the best. Like, sure,

00:27:55
it's not going to come up with truly new ideas, but it's

00:28:00
incredible with things like lighting.

00:28:01
But that's quite the standard we impose on AI. I mean, again, it's

00:28:06
sort of like, is it better than the best? No.

00:28:08
I mean, it's a way better artist than I

00:28:11
will ever be. You know, there's no chance

00:28:12
I will ever be as good. You know, the only things I

00:28:15
could beat it at are, like, specific instructions.

00:28:17
So if you wanted a blue cube on a red cube, I could do that, I would

00:28:20
be great at it, and DALL-E, like, half the time

00:28:23
the red cube would be on the blue cube and half the time the

00:28:25
other way around. So, you know, I am much

00:28:28
better at natural language understanding than DALL-E, and it

00:28:31
is much better at lighting and at compositing, putting

00:28:34
images in front of each other. You know, it's really great at

00:28:37
that. I wanted to go back to the

00:28:40
Washington Post story, Nitasha's story.

00:28:43
I mean, she sort of seemed to know that she was going to

00:28:46
create sort of a debate over something where the experts

00:28:51
would come down on the side of this isn't sentient. And I think

00:28:54
there are questions that we could get into, which we've

00:28:57
sort of hinted at, of whether the guy,

00:28:59
the whistleblower on this, really believes it's sentient or he just

00:29:02
wants to sort of advance himself. I think he really does believe it.

00:29:06
So the journalist Steven Levy, you know, accused him of falling in

00:29:10
love with LaMDA. Yeah, and I think he did. I mean,

00:29:13
I haven't talked to him firsthand, but Steven Levy talked to

00:29:15
him last night. The guy's on his honeymoon, Blake Lemoine

00:29:20
is on his honeymoon, but Levy tracked him down. Levy is a

00:29:24
fantastic journalist, wrote Hackers, which was, you know, like the

00:29:28
book that got me excited about computers, at

00:29:30
least, and also a very positive reporter who certainly likes

00:29:33
to boost. But anyway, yeah, you have to

00:29:36
hold both in your mind at once. But yes.

00:29:38
So I talked to Levy a little bit last night,

00:29:41
and Levy wrote a story today. I'm quoted in it.

00:29:43
Yeah, we had a little back and forth. So Levy actually tracked

00:29:46
Lemoine down after this story broke, and, you know, Lemoine

00:29:49
is, like, not taking calls. He's like, I'm on my honeymoon.

00:29:51
I think he actually got married on the day my story came out,

00:29:55
maybe, like the day after the Nitasha story came out, or two days

00:29:58
after. And, you know, Levy did the best he

00:30:03
could to try to, you know, see if this guy was just, like,

00:30:06
shocking everybody, and came away

00:30:09
pretty convinced that Lemoine believes what he says.

00:30:12
And in support of that is this 2018 YouTube video that people

00:30:16
might want to watch, where he argues that AIs could be

00:30:19
people and so forth. So he was predisposed to believe

00:30:22
this. Either that, or he's playing, like, the longest

00:30:24
con ever. Like, you know, he thought four

00:30:26
years ago, I'm going to get myself in the Washington Post by

00:30:28
breaking this. No. I mean, it's just not plausible,

00:30:30
right? I think he really does

00:30:33
sincerely believe that he is speaking up for the machine.

00:30:38
Like, I think he's sincere about that.

00:30:40
I don't think he's bluffing. I mean, he has some religious

00:30:43
beliefs that are doing some of the work. We let people believe in God

00:30:47
based on reasons that a lot of people would say were bad,

00:30:51
and we sort of, as a society, accept that.

00:30:54
And so, to some degree, if people want to also come up with

00:30:57
a sort of non-scientific way of believing the AI is sentient,

00:31:01
like, if we apply the same rules as with God, we're sort of

00:31:04
screwed here. I don't think we have

00:31:06
to just, like, accept people's own version of reason.

00:31:09
That's not useful in the scientific community. Really, I

00:31:12
mean, here's the other reason I think this is also interesting:

00:31:15
Lemoine is, like, now an icon in a way, but he's not unique.

00:31:19
Lots of people are going to interact with these systems and

00:31:23
feel as he did. In my view, they will be wrong.

00:31:25
They'll be attributing, you know, awareness to a system that does

00:31:29
not have it. Maybe some future system will have a kind of

00:31:32
awareness and be intelligent in the way that they think this

00:31:35
machine is, and it's not. But, you know, already, there's a certain

00:31:40
way in which we're very culturally centric here. Few people over

00:31:43
here in North America know that in China

00:31:46
they've had a system for four or five years called Xiaoice.

00:31:49
People fall in love with it. Xiaoice

00:31:51
is a more primitive chatbot, but not entirely different.

00:31:55
In fact, the newest version of Xiaoice probably uses some

00:31:57
large language models in there; it would be silly

00:31:59
if it didn't. And people fall in love with it. People

00:32:03
also fall in love with plants and cats, and, you know, sure, it's

00:32:06
going to happen more. So there's a way in which the

00:32:09
story is like a canary in a coal mine.

00:32:11
So it's wacky that a Google engineer thinks this,

00:32:14
but, you know, millions of people are going to think that.

00:32:17
I mean, I think the debate Nitasha wanted us to have, which

00:32:19
I don't think is really what most people are arguing about, is

00:32:21
whether it's good for companies to

00:32:25
make AIs appear like humans, or whether in some way they should

00:32:28
make the AIs talk in a way that

00:32:30
makes it very clear that they're not humans.

00:32:32
I mean, do you have a point of view on that? I don't know what

00:32:36
that would look like. That's complicated; it might depend on

00:32:39
the use case. I'm not sure there's an absolute answer. You

00:32:42
know, some of it is like, you know, cigarettes and, you know,

00:32:47
having truth in labeling, and, like, I'm not sure of the answer.

00:32:51
I think we need a lot of people to actually think about this

00:32:53
question, people in ethics and policy and so forth.

00:32:56
Like, one option would be, you make it very clear to people

00:33:00
that, you know, this is, in some sense, an illusion.

00:33:04
Maybe find a polite way to say that.

00:33:06
Don't take it too seriously, but enjoy it.

00:33:09
There are use cases where maybe it'd be

00:33:11
okay, like as a companion, as long as

00:33:13
you know what you're getting into.

00:33:15
Like, you know, we're not going to tell people not to have

00:33:17
stuffed animals, right? I mean, stuffed animals give a

00:33:21
sense of intimacy and warmth, right?

00:33:23
You cuddle them, and, like, I'm not here to tell people they can't

00:33:26
have stuffed animals. And along some dimension,

00:33:29
it's kind of like that. And it's also like a drug, and, like, I

00:33:34
can see how it's really addictive, and people might lose control.

00:33:38
So most people can walk away from their stuffed animals, but

00:33:41
people can't walk away from heroin

00:33:42
once they start it, and it might be pretty hard for people to

00:33:46
walk away from these things, especially as they get better.

00:33:48
I think, right now, what Lemoine doesn't represent is how

00:33:52
awfully dumb these systems can be, and how much they can forget

00:33:55
what you told them. Like, if you just put the current

00:33:58
stuff out on the street, people might eventually get

00:34:02
frustrated. Like, there's a huge novelty

00:34:04
effect. Like, at first, it's like, oh my God, I can't believe it

00:34:06
does this. But it's the same thing with DALL-E:

00:34:08
like, at some point you're like, I want it to do this,

00:34:10
and it just doesn't really do it.

00:34:12
And the efficacy thing I talked about might also be a

00:34:14
problem. Like, it tells you it's going to do this and it doesn't

00:34:17
deliver. So, like, for some people there

00:34:18
might be some frustration factor. But I think, you

00:34:21
know, they're addicting. I actually just wrote a poem.

00:34:24
I did a riff on Howl. I'm going to put this out later

00:34:27
today. Allen

00:34:29
Ginsberg's poem Howl. It was, like, I saw the best minds of my

00:34:31
generation wasting time on DALL-E and GPT-3 and so forth.

00:34:36
Well, I mean, you know, what's

00:34:38
funny is that's almost kind of a riff also on the, is it Marc

00:34:41
Andreessen or Peter Thiel line? That said, like, we were promised

00:34:44
hoverboards and instead we got, you know, whatever it

00:34:48
was. It was Thiel. Yeah, I'm sorry,

00:34:51
we were promised flying cars and got 140

00:34:54
characters. Yeah, I mean, you

00:34:57
could definitely riff on that for AI in general. Like, we were

00:35:00
promised the Star Trek computer that would actually solve our

00:35:03
problems and be trustworthy and reliable and help us,

00:35:06
even with climate change, and what we have are these kind of,

00:35:09
like, sociopathic companions that pretend to like us.

00:35:12
That's what we got. But you think it's a waste of

00:35:14
time. I want to push back on that.

00:35:16
That's what you said, right?

00:35:17
Do I think this research is a waste, or the time people

00:35:20
spend on it? I do, at some level, and that requires some

00:35:23
explanation. So, in my view, these things are

00:35:26
working because they are statistical approximations to

00:35:29
things that we actually need, and they're very seductive, very

00:35:32
easy to work with, but they're not,

00:35:34
I think, the answer that we're actually looking for. And so

00:35:37
people are spending more and more time and money on something

00:35:40
that I think has no great future.

00:35:42
It might play a role in the future, but I think

00:35:45
there are really hard questions in artificial intelligence that

00:35:48
we need to answer that are not getting answered, because it's

00:35:51
too fun to play with these systems, and it's sucking all of

00:35:55
the money and oxygen away from other things.

00:35:59
I've seen this before in my career, which I've been doing, you know,

00:36:01
for 30-some years, where a new idea gets popular and old

00:36:05
ideas that are actually sound get abandoned, and to a certain extent

00:36:08
that is happening now. I saw that with cognitive

00:36:10
neuroscience. All these fMRI pictures that you

00:36:13
probably saw when you guys were kids, of, like, the brain

00:36:16
lighting up and stuff like that: it took away most of the energy

00:36:19
in cognitive psychology, and what has it actually shown us?

00:36:22
Not that much. We have a bunch of pretty

00:36:24
pictures, but we still don't really know how the brain works.

00:36:27
It didn't really teach us that much more about cognitive

00:36:29
psychology, but it was seductive and it took the money. And you don't

00:36:33
think we get the neural net big enough and then one day it's a

00:36:36
brain and it feels things? Like, it does feel like, yeah, I don't

00:36:40
know, in the sort of AI world, there's this sense,

00:36:42
we need to be careful on that, that if the servers get big enough,

00:36:45
you know, it will work. What would your approach be?

00:36:48
So I think that we need to, first of all, look to classical AI,

00:36:52
which is out of favor, and borrow a few ideas from there.

00:36:54
One is the idea of symbols and propositions, sentences, kind of

00:36:59
verbal structures, databases, things like that, which are actually

00:37:03
tremendously useful. We still write all the world's

00:37:05
software that way. There are a few use cases that are very sexy with

00:37:10
deep learning, but most software we actually

00:37:12
write has a database in it.

00:37:13
You update records and things like that.

00:37:15
And these two approaches, right now, are not compatible, and

00:37:17
that's a problem. And a lot of people in the field

00:37:20
actually are starting to see this:

00:37:21
that if you can't update a set of records about the things in

00:37:25
the world that you are talking about, at the end of the day

00:37:27
you can't be that efficacious, and you can't be that reliable.

00:37:31
So we need to kind of merge the older tradition of symbolic AI

00:37:36
with the neural network stuff. I think it's really hopeless until

00:37:40
we do that. Until we do that, we are always going to get systems

00:37:43
that say that Bessie will be alive again

00:37:46
in nine months if she just has a baby, or something like

00:37:48
that; systems that just fundamentally miscomprehend the world.

00:37:51
I don't think that will be solved with more data.

00:37:53
You think the big tech companies are being largely disingenuous

00:37:57
about the state of their technology?

00:37:59
I mean, you've worked within Uber. I think a lot of them have

00:38:02
drunk the Kool-Aid, and I think the problem is most of them

00:38:05
don't know the science, and they have this tool

00:38:08
that works, like, 85% well. Because they've not really studied

00:38:12
linguistics, they've not really studied

00:38:15
philosophy of mind, they don't understand how hard

00:38:19
certain problems are, and they come in with their steamrollers

00:38:23
and they think that they're solving the problems,

00:38:24
and they're just not. I'll give you an example of how

00:38:27
GPT-3 is just fundamentally misguided. People

00:38:30
in language know that what you do is you have a set of words

00:38:34
that is arranged in order, and you derive meaning

00:38:37
from that. It's the most basic thing; anybody

00:38:40
who has had a linguistics course can tell you that, and these

00:38:42
systems don't really do that. And, you know, people talk

00:38:46
about interpretability. Well, that's a jargony way of

00:38:49
saying we have no idea what the system is really doing or why,

00:38:52
but it's also a reflection of the fact that there's no real,

00:38:54
what we call, semantics there. And from the perspective of

00:38:57
someone who's worked in cognitive science, it's just,

00:39:00
it's just bizarre that this much effort goes into a system that

00:39:04
just looks like it's not doing the right thing. I don't know how to

00:39:08
explain it, but I'm not alone in thinking

00:39:10
this. One of the rhetorical things that's happened in the

00:39:12
last couple of months is I wrote a piece called "Deep Learning Is

00:39:15
Hitting a Wall," and it pissed off a lot of people.

00:39:18
But I think what I said was true. In any case,

00:39:21
it made me kind of the poster boy for the opposition.

00:39:23
So now that is sort of good for me and sort of bad.

00:39:27
It's a mixed blessing. Now, anytime somebody wants to

00:39:30
attack the other side, they describe it as if it were just

00:39:33
me, and they don't mention my collaborator Ernie

00:39:36
Davis, who's an author on nearly all of the papers.

00:39:38
Is your view more similar to what the human brain looks

00:39:42
like, or less like, do you think?

00:39:44
We have no freaking idea. Let me be honest on that one.

00:39:47
So there is a theory that what you need to do to solve AI

00:39:51
is to make a model that is based on the brain.

00:39:54
Right, or it would seem to be a way to

00:39:57
solve it, at least. Well, actually, one problem with

00:40:00
that is we have no idea how the brain works.

00:40:02
We have a lot of data, but we have no real theory. My guess is

00:40:06
it will have to go the other way around:

00:40:07
we have to solve AI in order to be able to make an automated

00:40:11
reasoning, scientific-induction system that can deal with having

00:40:14
80 billion neurons and the many trillions of connections between

00:40:17
them, and so forth. So one is, like, we just don't

00:40:20
have the goods to actually do this, and two is, like, we know that

00:40:25
there are huge holes in what we know about neuroscience.

00:40:27
I'll give you one example. We all have short-term memory,

00:40:30
where I can tell you something once and you can remember it for a

00:40:33
little bit. So if I told you, at the end of

00:40:34
the call I'll give you a thousand dollars

00:40:36
if you can remember this sentence, you know, I will have

00:40:39
your attention and you'll remember it, right?

00:40:41
We have no idea how the brain does that. All the stuff we

00:40:43
know about memory and brains is like, you practice something

00:40:46
three thousand times and you get a little bit better at it each

00:40:49
time. And that kind of memory exists, it's real, but there's

00:40:52
another kind of memory that exists and is critical.

00:40:54
Every time you parse a sentence, every time you understand a

00:40:57
sentence, you're actually using short-term memory in order to

00:40:59
understand that sentence, and we have no idea

00:41:03
how the brain does that. Then the other thing is, like, we

00:41:05
know a little bit about, like, how maybe a monkey brain

00:41:09
works, but we don't really know anything about how language works.

00:41:12
And what makes us such an interesting species is that we

00:41:15
can talk and we can transmit so much culture that way, and so

00:41:18
forth. And for that part, like, we don't have

00:41:20
animal models. We can't, like, cut up some other

00:41:24
animal that we don't feel too guilty about. Not that I'm

00:41:26
endorsing that, but, like, it's just not ethical.

00:41:29
We don't have an ethical substrate to do the neuroscience side.

00:41:31
So at the end of the day, we just don't know

00:41:34
enough neuroscience. Is it possible that the

00:41:37
human brain, or a future artificial intelligence, is just

00:41:41
a far more complex neural net that starts to understand, like,

00:41:45
rules, and infers those rules out of pattern

00:41:48
matching? And if that's the case, won't we

00:41:51
feel sort of dumb for being so condescending to the systems

00:41:54
we have now? You know, I don't see

00:41:57
it that way at all. I would flip it around

00:42:00
and say that the neural networks that we know how to build now

00:42:03
are so vastly simplified compared to the ones

00:42:07
that we would want, right? It is ridiculous that we're

00:42:09
taking them seriously. So, you know, let me just give a

00:42:13
couple of examples. We know that there are about a thousand, plus or

00:42:16
minus, kinds of neurons in the brain; our neural networks

00:42:19
basically have one kind of neuron.

00:42:21
We know that at every synapse there are, like, 500 different

00:42:23
proteins; there's nothing even capturing that at all

00:42:26
in our neural networks. We know that there's an enormous amount

00:42:29
of intrinsic, innate organization to the brain;

00:42:31
there's hardly any in the networks we build.

00:42:33
So yes, the ultimate answer for us,

00:42:36
anyway, is a neural network, but the neural network for us is

00:42:38
this incredibly complicated piece of machinery.

00:42:42
The things that we have are so grossly simplified that, like,

00:42:45
why should we expect that the

00:42:47
one has anything to do with

00:42:48
the other? I think that one of the reasons

00:42:51
that the media and the public are so susceptible to these

00:42:53
particular story cycles and phenomena, and desire to,

00:42:57
as Nitasha says, see the ghost in the machine, is not just

00:43:00
some human impulse to anthropomorphize things. Because

00:43:03
I do truly believe that we are very far away from the science

00:43:07
fiction future that a lot of people expected at this point

00:43:10
in time. We talked about the flying cars, self-driving cars,

00:43:13
you know, self-aware neural networks or whatever. You

00:43:16
maybe said it in your piece about, you know, we've hit a

00:43:19
wall with deep learning. But, you know, a lot of the promises that

00:43:23
we expected just haven't materialized in the way that we

00:43:26
want. And so it's sort of easier for

00:43:28
people to kind of assume great leaps have taken place already,

00:43:32
and we just haven't recognized them, when in fact they're so far

00:43:35
off. What we've actually reached is kind of inching along, maybe

00:43:38
impressive inches, that you and others are involved with, of

00:43:42
advancing AI and other technologies, self-driving as it

00:43:45
is. But the true promises of the kinds of things that we want

00:43:50
just aren't there, and won't be there for decades.

00:43:52
And instead we kind of just have story time, where we anoint

00:43:55
certain things as, you know, the next era, when in fact it's just

00:43:59
not even close. I mean, does that sort of track?

00:44:02
It's even more complicated than that, because I

00:44:06
think the underlying problem a lot of people have is they

00:44:08
think AI is magic. They don't quite know what it is,

00:44:10
and they think that whatever it is, it's sort of a universal

00:44:13
elixir. The reality is, it's just a bag

00:44:16
of engineering tools, and we probably need a bigger bag of

00:44:19
tools, and we'll probably use all the ones that we have now and

00:44:22
we'll use some others, and, you know, eventually we'll

00:44:24
muddle through all of this. But what's hard to grasp, if you

00:44:28
haven't studied the cognitive sciences, is how many different

00:44:31
components there are to doing good thinking. And it's a little

00:44:35
hard to grasp that a system can be good at one thing and

00:44:38
terrible at another. I mean, maybe, you know, a

00:44:41
metaphor might be, like, you can find someone who's really good

00:44:44
at putting up tile, you know, as a backsplash in a kitchen, and

00:44:48
maybe that person's not so good at doing crossword

00:44:51
puzzles, right? Like, you know, people can have

00:44:54
different kinds of expertise. Well, the machines we have now

00:44:57
have different kinds of expertise.

00:44:59
We know how to build a machine that's really good at

00:45:01
Go. We know how to make a machine that can be pretty good

00:45:04
at pictures. We just don't know how

00:45:06
to make a machine that really understands language.

00:45:08
We only know how to make a machine that gives that illusion.

00:45:10
And it's this kind of textured mixed bag.

00:45:13
People, you know, want a one-liner: are they smart or

00:45:16
are they dumb? Well, it's neither. You know,

00:45:18
they're smart at some things and incredibly dumb at others, and

00:45:22
that's hard to accept. But most of the business world implicitly

00:45:25
agrees with you, right? I mean, you know, generalized AI

00:45:28
obviously fails in the ways you're saying, but most business

00:45:31
applications, yeah, they're just trying to use

00:45:33
huge data sets to solve very specific problems that they have, and they

00:45:37
have no interest beyond that. But do you see naivety in business?

00:45:40
So, I have seen some massive, massive companies make weird

00:45:43
bets on AI, what looked to me like weird bets.

00:45:46
So I said back in 2016 that driverless cars are much harder

00:45:50
than you guys think they are. And since I said that, there's

00:45:53
probably been a hundred billion dollars poured into it in

00:45:57
terms of R&D costs and so forth, and so far the only money that

00:46:00
has come from that is the elevation in the price of Tesla.

00:46:04
And you could make some argument that self-driving has improved

00:46:07
somewhat, I guess, but I mean, we're not close to Level 5

00:46:11
self-driving. Like, that's just not really happening.

00:46:13
We can talk about that if you want; I've thought about it a lot. On

00:46:16
the media point, that is, the media:

00:46:17
I think the reporters, if you polled reporters throughout the

00:46:21
whole period, would have said that they don't think it's close,

00:46:24
and yet the stories, it's just interesting,

00:46:26
like, to me the stories all made it sound like it was imminent.

00:46:29
Maybe this makes it a worse failing on the part of

00:46:31
reporters, that, yeah, somehow the stories come out

00:46:34
positive, but most reporters themselves, I think, over

00:46:36
cocktails, would be skeptical. And I don't really understand it.

00:46:40
I think it's just what the public desires to consume.

00:46:43
They should hit me up; I'll give them some quotes.

00:46:45
I mean, I do. I mean, like, Sam she'd and CNBC

00:46:48
came to me when Optimus was announced and, you know, I gave

00:46:52
him the quotes, he to give the other side and say, look, you

00:46:55
know, there's something that's interesting about Optimist, but

00:46:58
this is a really hard problem. It's much harder than musk has

00:47:00
acknowledged. I think the public likes, hey,

00:47:03
this company whose brand you believe in

00:47:06
is willing to make bold promises about the future, and you get

00:47:08
what you pay for. So, like, the fucking Theranos story.

00:47:11
Partially, it's that humanity is so forgiving about false,

00:47:15
about false optimism. People are extremely

00:47:17
forgiving. Shouldn't we be asking more Theranos questions then, right?

00:47:20
I think, you know, Holmes, I'm not sure she meant well.

00:47:24
And I think Musk means well, but Musk, you know, issues

00:47:29
promises like they were candy. I actually called him on it

00:47:32
recently. I don't know if you know this

00:47:33
about me. I bet him a hundred thousand

00:47:35
dollars. He said to Jack Dorsey that

00:47:38
he'd be surprised if AGI wasn't here by 2029.

00:47:42
So I've been writing this thing on Substack,

00:47:43
garymarcus.substack.com. I was like, okay, this is a good

00:47:46
topic for an essay, right, about why AGI is

00:47:50
actually not going to be here by then.

00:47:52
I mean, it's going to be much more than 7 years away, rather.

00:47:55
And I gave 5 reasons to think, like, this is a really much

00:47:58
harder problem than he's acknowledging, and he's not got a very

00:48:00
good track record on timing. And then when I finished it, I

00:48:03
was like, you know, I should put some money on this.

00:48:05
So I bet, you know, sorry, a hundred thousand dollars, and

00:48:07
laid out clear criteria. The field loved it, and people in our

00:48:12
field doubled my money and then raised it to a half million dollars.

00:48:15
But so that still stands. But Elon hasn't responded,

00:48:19
because, for him, he doesn't want to be held accountable on this

00:48:22
stuff. The media should be like, dude,

00:48:24
you are chicken. There's one story like that out

00:48:26
there. Somebody, you know, one

00:48:28
small outlet called him on it, but most people didn't pick it

00:48:32
up, and they should. They should be like, this guy has been making

00:48:35
us promises for years of a car.

00:48:37
He's promising us a robot, and all we've seen is a dude in a

00:48:41
costume. Like, enough. Let's call him out on it. But the

00:48:44
media does not. But I mean, with Tesla, if you like, the

00:48:46
government is also very complicit. I mean, he's running these

00:48:49
experiments on us. I don't know, you guys don't get

00:48:51
to blame the government on this here.

00:48:52
The media is extremely skeptical of Tesla.

00:48:55
Like, I don't know how much more skeptical of a company

00:48:57
the media could be. They are, but they're not skeptical enough

00:49:00
on the AI side. They really are, and I can give

00:49:04
you some pointers on what it looks like.

00:49:05
Like, I go back to the announcements, but, you know, it's

00:49:08
just, sort of, there's a certain deference to, you know, a

00:49:12
company announcing something. If they want to risk their

00:49:14
reputation on it, shouldn't the public hold them accountable if

00:49:17
they don't deliver on the things they're saying?

00:49:20
But I just don't see how the media is supposed to operate in

00:49:24
such a disconnected way from human psychology.

00:49:27
Like, we are telling you factually that they're making

00:49:29
this assertion about what they will do in the future.

00:49:31
And the media, I would say, still has plenty of room

00:49:35
to kind of, like, set the narrative and set the questions,

00:49:38
and there could be a lot more stories than there are.

00:49:41
It's basically, hey, I'll give you, you know, I'll write the

00:49:44
story for you. So it should start with: Elon

00:49:47
promised this stuff in 2016, and then the next year Facebook

00:49:50
promised us M, and it never appeared.

00:49:52
I don't know if you remember, it was going to be

00:49:53
an all-purpose general assistant, and that disappeared.

00:49:56
And then Google Duplex was going to, you know, make phone calls

00:49:59
for us, and, you know, the only thing

00:50:00
they've added in four years is movie times.

00:50:02
It's still incredibly narrow and limited. And now Elon is promising

00:50:06
us a robot, and not only is he promising us a robot, it's

00:50:09
one he thinks is solved by 2029.

00:50:12
And here's this, you know, NYU prof guy saying,

00:50:16
sure, it's all bullshit. And, like, let's, like, at least, like,

00:50:19
ask the question. That's one story. I mean, you couldn't

00:50:22
have a reporter more aligned with you on this.

00:50:24
And I feel like part of this podcast is shitting on reporters.

00:50:27
But what you're proposing is a single story that

00:50:30
will then be up against sort of the infinite barrage of

00:50:33
companies announcing things over time.

00:50:35
It has to be more. Just, like, how do you create the

00:50:37
drumbeat of negativity? Look, if we learned nothing

00:50:41
else from the Trump

00:50:43
administration, it's that you have to keep up

00:50:46
the pressure. And, you know, the news cycle is short, and it's

00:50:49
true, like, if it's just one story,

00:50:51
it's not enough. But there has to be a systematic effort.

00:50:54
I mean, look, Cade Metz has been holding Elon's feet to the fire on

00:50:59
the effectiveness of the self-driving.

00:51:01
So, you know, I'm exaggerating a little bit, but I

00:51:04
think, you know, it's 95-5 or something like that.

00:51:08
And I also think, by the way, this extends to a lot of, and I'm very

00:51:11
critical on this show about, augmented reality and the

00:51:14
promises these companies make about the effectiveness and the

00:51:17
promises of what it can do. And we're about to enter this

00:51:19
hype cycle again when Apple releases its, you know,

00:51:22
VR device and promises AR down the line.

00:51:24
We're very quick to go to the demos. You know, all the

00:51:28
reporters went down to Google and rode inside of the ugly Waymo

00:51:31
cars, and that helps kind of pump along this idea that they

00:51:34
were really close to self-driving. And I don't have an easy

00:51:38
answer to it, other than maybe occasionally telling these

00:51:40
companies no, and saying, you know what,

00:51:43
this demo that you're putting me through, yes, I can be critical

00:51:45
in the article. And I think Nitasha, I don't know the

00:51:47
backstory of how the story came to her, but, you know, the

00:51:50
Washington Post also framed it with Blake in, you know, kind of the

00:51:52
dark, artistic lighting, looking like some sort of visionary.

00:51:56
And even though the article was, I think, reasonably critical

00:51:59
of him, it still kind of positioned him as a legitimate

00:52:03
voice in this field, when obviously he's not. And I

00:52:06
just, I don't know. And what I write about, it

00:52:08
doesn't come up nearly as much. Uber has spun off its

00:52:11
self-driving division. They don't care about it

00:52:13
anymore. They're just a dollars-and-cents

00:52:14
business, and relatively boring because of it.

00:52:17
But I think that the hype cycle as pushed by the companies will

00:52:20
never end, and it's incredibly difficult as a reporter to turn

00:52:23
down sexy stories that we know will get attention.

00:52:26
I mean, you can run the stories, but you can get ahold of a lot

00:52:28
more people like me, but not just me, as voices in these things, and

00:52:33
make it clear. And you can remind them,

00:52:36
you know, let's look at the history. We've seen this promise

00:52:39
that wasn't delivered. Like, when's the last time that you

00:52:41
read a story on these technologies that actually, like,

00:52:45
reviewed the history and said, all these other promises, like,

00:52:48
they didn't come true? Like, either, like, you read a story

00:52:51
about Optimus and it's probably mostly about Optimus, and not

00:52:54
saying so much about, like, you know, Elon has missed

00:52:56
every deadline he's ever proposed. And rarely

00:53:00
are the stories synthetic, putting together

00:53:03
all of the facts that I just gave, about, hey, Facebook made

00:53:06
these promises, Google made these promises.

00:53:08
It's actually really hard to get AI into production, which is

00:53:11
itself, you know, an interesting

00:53:12
question. Like, there are some technologies

00:53:15
you can put into production relatively quickly, but AI is not

00:53:18
one of them. Why is it not?

00:53:19
Well, it's because there are always these outlier cases.

00:53:22
So, like, you probably saw that one of the driverless cars ran

00:53:25
into a jet the other day. Like, it wasn't in the data, right?

00:53:27
This is a persistent, well-known problem in the industry by now.

00:53:31
Yeah, I've been writing about it since 2016, and, like, people are

00:53:33
starting to recognize that it really is the whole ball game.

00:53:36
But that means every time you have some technology, you can

00:53:39
wind up with some outlier problem.

00:53:41
So, yeah, you're going to get the demo on day one, and it's going

00:53:43
to be 5 years, 10 years, 15 years before you can actually

00:53:47
trust it. Like, that should be in every

00:53:48
story here. Yeah.

00:53:50
And, I think it's also the duality of Silicon Valley

00:53:53
and the CEO dynamic, where it's both a marriage of

00:53:57
some sort of technological progress and the American

00:54:00
showmanship, song-and-dance marketing routine of getting the

00:54:03
public excited about it. And we as human beings are definitely, as

00:54:07
journalists, susceptible to the CEO side of things.

00:54:10
We love the character. I'll give you another story idea.

00:54:13
Elon just said, I mean, I actually wrote it, but in a

00:54:15
Substack post that didn't really get that much attention. Elon

00:54:19
said, you know, the whole company really depends on the

00:54:22
self-driving cars, and, you know, if that doesn't work we're

00:54:25
basically worthless, which was a slight exaggeration.

00:54:27
But, like, then it's just a car company.

00:54:29
I mean, the reason it gets a hundred-to-one price to earnings is

00:54:32
because people think it is an AI company that is going to

00:54:34
fundamentally change the world. And that's why it's at a hundred to one. I

00:54:37
don't know. I don't think most Tesla holders

00:54:39
have an argument for why they hold the stock.

00:54:41
But, yes, I think they hold it because it kept going up.

00:54:44
But now it's going down, yes, or whatever. But it's a big part of

00:54:47
it. But I mean, Elon himself, it doesn't matter what the

00:54:50
other holders think. The largest stockholder in Tesla, which

00:54:53
happens to be Elon Musk, said, if we don't, or

00:54:56
he said, we must solve full self-driving.

00:55:00
When, I mean, that itself gives you a story,

00:55:03
like, okay, let's take for granted

00:55:05
that what he said is true. We can ask around and, you know, get some

00:55:09
financial people, which I'm not, to evaluate that statement. But

00:55:12
if you take his premise, like, okay, he's been promising this

00:55:16
since 2015, is he close? Let's look at the new accident

00:55:19
data. Let's ask some experts. Like,

00:55:21
let's hold his feet to the fire. I just think most Tesla

00:55:25
coverage I find is justifiably negative. I mean,

00:55:29
you're basically asking reporters to... I mean, the coverage is negative.

00:55:33
Like, people, you know, make fun of his tweets and that kind

00:55:37
of stuff. And, you know, there was a new

00:55:39
lawsuit yesterday, and people will write about that.

00:55:41
I don't think that the AI coverage is nearly as skeptical

00:55:44
as it could be. I mean, wasn't there a story

00:55:47
about how they're, like, supposed to be turning off, like,

00:55:50
the AI right before it gets in an accident or something?

00:55:52
Yeah.

00:55:54
I mean, NHTSA, we'll call it Nitsa, just released something a

00:55:57
couple days ago, right? And so that got a little bit of

00:56:00
coverage, but NHTSA has released two bombshells in

00:56:03
the last few days.

00:56:05
They've been deploying this since, like, 2016 or

00:56:09
something. And NHTSA did two real things this

00:56:12
week. It did two real things

00:56:13
this week. They put out information about

00:56:16
the turning off of the Autopilot just before the accident happens,

00:56:20
and then they put out a big dump in which Tesla had the most

00:56:23
accidents, which is a complicated thing, because they

00:56:24
also have the most miles. But I mean, they put stuff out that

00:56:28
could have been, like, top of the headlines.

00:56:30
Like, is this a serious problem for Tesla or not?

00:56:34
And, like, that was there for the journalists to run with, and I

00:56:37
didn't see much about it. Like, I check, you know, the news

00:56:41
stories about Tesla every now and then just to see. Because,

00:56:43
you know, I always think Elon is such an outlier.

00:56:46
He's such a character. He's so bizarre.

00:56:49
He almost defies the laws of gravity when it comes to

00:56:52
negative and positive coverage. It's almost not even worth

00:56:55
calling him out on it. I mean, Trump was a little like

00:56:57
that, right, and they play some similar games. But I guess

00:57:05
a better example to me, I think, is Google or some of

00:57:08
the other tech companies. Maybe they should be held to the

00:57:11
fire more. Say, with OpenAI, like, they've gotten all these

00:57:14
love letters about GPT-3. So let's forget Google and

00:57:16
just look at OpenAI for a minute. You

00:57:20
know, you had the love letter in the Times by Steven Berlin

00:57:22
Johnson, The Guardian wrote an op-ed with it,

00:57:25
et cetera. Everybody thinks they're, like, being creative by

00:57:29
using it to write their story. Like, this is, like, a trope by

00:57:31
now. Yeah, I mean, it's gonna go out

00:57:33
there, and, you know, Berlin Johnson gave two paragraphs to

00:57:37
me, and one to Emily Bender. But the story is still,

00:57:41
like, so, so pro this kind of stuff, in a way that I think many

00:57:43
people in the field are not. You know, it's what the public wants.

00:57:43
I mean, ultimately, if you're a negative reporter... Like, I didn't write

00:57:46
much about AI, I mean self-driving. As a new reporter I

00:57:49
was very openly skeptical in the newsroom, refused to write about

00:57:52
it. Uber would just go to a Businessweek

00:57:54
writer and say, hey, here's our new thing. Like, I didn't get it, a

00:57:58
different Businessweek reporter, you know, got the

00:58:00
story on their Pittsburgh lab, because Uber knows to then go to

00:58:03
somebody else who will do sort of, like, the big productions.

00:58:06
I mean, those guys are very good at shopping these stories around. Just

00:58:09
like, there's so much desire. There's so much desire for these

00:58:13
stories, like, from editors. Yeah, I mean, this is what

00:58:18
business magazines are based on, like, putting out optimistic

00:58:20
statements. You know, Mark Lore is going to

00:58:23
build a new city. I mean, it's just, like, it's so much a

00:58:25
part of it. It's what humanity wants,

00:58:28
somewhere. I guess I just don't think reporters are going

00:58:30
to will it into being; it's, like, their business model.

00:58:33
I mean, I think, I think that that's true.

00:58:35
I think humanity wants happy stories

00:58:38
about the new revolution. I really think that comes at

00:58:41
a cost, and that's how we got into this conversation.

00:58:45
The cost is you wind up with people deluded. And, yeah.

00:58:45
Right. No, I agree.

00:58:47
And I agree. So I'm being defensive even

00:58:52
though I'm sympathetic, but it just seems hard to, hard to fix.

00:58:55
It's at, like, a humanity level. Yeah.

00:58:59
I mean, so look, it's partly because you guys are media

00:59:01
guys that I'm doing some media criticism. Oh, we love it.

00:59:03
I'm happy to have the conversation, you know.

00:59:06
I think it's fun to have this conversation, but I would agree

00:59:08
with you that it's not, like, a, you know,

00:59:11
two-second problem. I'm, like, pitching you ideas to

00:59:14
go write about, and hoping some of your buddies will listen

00:59:16
and, and use them. Like, I'm giving them away for free.

00:59:16
Yeah. And the media reporters are all

00:59:21
listening to this podcast. So, yeah, I mean, I also

00:59:24
understand, like, it is what the public wants, and so, you know, the

00:59:28
public is partly to blame, because, you know, it votes

00:59:30
with its clicks. And the stories that get read are the,

00:59:34
you know, the the-world-has-changed kind of stories, and not

00:59:36
the, you know, I'm-not-so-sure-this-is-really-going-to-happen

00:59:38
kind of stories. And the

00:59:38
government is supposed to protect the roads. Like, the

00:59:43
self-driving cars are on the streets.

00:59:46
Like, at the end of the day, I am sorry,

00:59:48
the government is letting Tesla get away with this.

00:59:50
Like, Tesla has been experimenting for years,

00:59:53
and NHTSA is upping its game. There's something happening there.

00:59:56
I think the media is not quite following

00:59:59
the trail that NHTSA has been leaving in the last few weeks,

01:00:01
and it's giving some really serious clues.

01:00:03
The government's only going to go after Tesla

01:00:05
after the stock is already down, not before.

01:00:07
So, by the way, they're not to blame

01:00:09
if they bring the company down. They don't want to do it

01:00:11
when it actually would hurt a rising company.

01:00:14
They want to do it after the market has already said, okay,

01:00:16
fine, this company's devalued. I mean, I

01:00:19
think NHTSA just wants to, like, do the right thing,

01:00:22
whatever the right thing is. But they also showed that Waymos

01:00:25
have some pretty serious problems too.

01:00:28
And in fact, the whole field. Like, so if you read those data

01:00:31
carefully, the conclusion you should come to is, we're not

01:00:34
Right. And I remember, personally, you

01:00:36
know what, my colleague Amir Efrati at The Information

01:00:39
wrote what I thought was a fairly definitive story about

01:00:42
Waymo's technology when they were testing on the streets in

01:00:44
Arizona, and basically found out that they couldn't turn left.

01:00:48
Turns are still hard there. Yeah, they are.

01:00:49
Yeah. And it's just like, that, you

01:00:52
know, you know, that's got to be,

01:00:53
I mean, close to 50% of turns, you know. If you can't get that, you don't

01:00:56
get very far, do you?

01:00:57
Steven Levy, who I mentioned before, gave everybody

01:01:00
a big clue in 2015 that not enough people picked up on, which

01:01:03
is, he visited

01:01:05
Google at that point, or Waymo, I forget what they were called.

01:01:08
They had this place where they were testing the machines, and Levy,

01:01:11
what's the word I'm looking for, implanted there

01:01:13
for a month or something, embedded there for about a week.

01:01:16
I don't know, was he embedded there for a week or

01:01:17
something like that? And the, like, big dramatic point,

01:01:20
I haven't gone back and reread the story, but I got him to give

01:01:22
me the link the other day, so you can find it on

01:01:23
Backchannel, which is now within Wired.

01:01:25
So anyway, he's there for a week or something like that,

01:01:29
and the big dramatic thing was at the end of his time

01:01:32
there, or something like that. I haven't read it in

01:01:35
years. But basically it revolved around:

01:01:37
they figured out how to recognize a pile of leaves,

01:01:40
right. Okay, I got it. Like, really? We got a

01:01:44
hot dog, not hot dog. You know, they'd already been

01:01:46
doing this for five years at that point, and, like, leaves were

01:01:49
still a problem. Well, that's it.

01:01:51
Leaves are an outlier, and that was a clue. Like,

01:01:54
if you have to Band-Aid up every outlier, then you're playing

01:01:58
whack-a-mole, and that's still what's happening here.

01:02:01
I keep going back to the marriage of private research and

01:02:04
academic research. It's the needs of the private

01:02:06
company and the, you know, press-release announcement

01:02:10
culture, that is what drives the, you know, stocks,

01:02:13
essentially the businesses, of these tech companies, versus the sort of

01:02:17
slow, plodding, methodical advances that happen in

01:02:20
research, that happen over decades, and it just doesn't fit

01:02:22
with the timelines of these companies. That's

01:02:24
right. I think the least your listeners

01:02:25
could come away with is, hey, we are living in this announcement

01:02:29
culture, and that announcement culture is making people like

01:02:32
Blake Lemoine believe in fairies that aren't there.

01:02:35
And it's making a lot of us believe in deadlines that are

01:02:38
not really going to be met, and we should be a whole lot more

01:02:41
skeptical. Yeah.

01:02:42
And also, just to reiterate Eric's point,

01:02:44
I also think we as the media are coming up against human nature

01:02:47
at times. We do, you know, in the representation

01:02:50
of Blake Lemoine, someone who is, you know,

01:02:52
religious, and this is one of the most valuable companies in

01:02:55
the world. Delusions are an inherent part of people's, yes, whole

01:02:58
understanding of the universe. Like, yeah, I agree,

01:03:03
the media should be more skeptical, but I do think

01:03:06
regular humans, the government, the companies making

01:03:09
announcements themselves, there are a lot of people to

01:03:12
blame. A couple seconds on government, if we have time?

01:03:14
Sure. Yeah.

01:03:15
Well, we can close with that. I think government's going to

01:03:18
have to regulate AI much more than it does.

01:03:20
So right now, for example, any company, little or big like Tesla, can

01:03:24
put out an over-the-air update to its driving software, and

01:03:29
there's only liability after the fact. There's no regulation

01:03:33
saying you must meet these test trials, these outliers, before it's

01:03:37
released. And I think misinformation, which we haven't

01:03:39
talked about today, is a massive, massive problem. In Europe,

01:03:42
they just made a deal with Facebook and other companies to

01:03:45
be tighter on that. We're going to need that here in

01:03:48
North America too, and it's a serious problem, because systems

01:03:51
like GPT-3 and LaMDA are fabulous at creating

01:03:55
misinformation, which makes them wonderful tools for trolls and

01:03:59
troll farms and so forth.

01:04:02
That is a serious problem. They could make misinformation much

01:04:04
worse than it is now. And so, yeah, I've been dumping on the

01:04:07
media because I thought it'd be fun, and we all share an

01:04:09
interest in it, but I could totally grant that the

01:04:12
government needs to step it up and needs to figure out how to

01:04:15
regulate this stuff, which nobody really knows how to do yet. It needs to

01:04:18
realize how important it is. So, I did this Twitter Space

01:04:21
with Nitasha and Kara Swisher and Casey last night.

01:04:25
And the best question from the audience was, like, okay,

01:04:27
so if you're saying people are going to fall in love with these

01:04:29
things and they're toxic, what is public health going to

01:04:32
do about that? And that is a really good

01:04:33
question. Yeah, one we don't know the answer

01:04:36
to. Great, we can leave it there.

01:04:38
Well, thank you so much for joining us, Gary. And for

01:04:40
listeners, who are maybe many of them reporters,

01:04:43
if they want to get in contact with you, or read more

01:04:45
Gary, we'll include your list of 20 people

01:04:48
who share your views about AI. You can, you know, get

01:04:51
that in the piece called Paradigm Shift.

01:04:54
garymarcus.substack.com. Great, awesome. And @GaryMarcus

01:04:57
on Twitter. Thanks so much for joining us,

01:04:59
Gary. This is awesome.

01:05:00
Thanks very much. Bye, bye.

01:05:13
Goodbye. Goodbye.

01:05:14
Goodbye, goodbye, goodbye. Goodbye.
