I’m back from my honeymoon in Japan. Thanks for sticking with the newsletter as I celebrated my wedding this year. Expect more of my newsletter writing soon.
If you have tips or story ideas for me, you can always reach out at eric@newcomer.co.
I hope you’ve been enjoying the Cerebral Valley podcast series while I’ve been gone. If you missed the first three episodes, you can check them out in the links below:
* The Cerebral Valley Podcast: Artificial Intelligence Becomes Reality
* AI Kills Us All (with Daniel H. Wilson)
* Someday That NPC Could Be More Alive Than You Are (w/ Amy Wu & Keith Kawahata)
On this week’s episode of our Cerebral Valley podcast, co-hosts Max Child, James Wilsterman, and I talk about how artificial intelligence is actually affecting our lives today.
Then at the 34:40 mark, I talk with DoNotPay CEO Joshua Browder. His company is helping consumers cancel their gym memberships, dispute charges, and otherwise stand up to big corporations.
Browder got some heat for planning to have an artificial intelligence-powered lawyer argue in court. Ultimately, he reversed course under pressure from the legal world.
Browder envisions a world where AI is fighting other AIs. Companies use artificial intelligence to power their chatbots and to handle customer support. Consumers need to be armed with similarly powerful AI-powered tools to resist those companies.
Give it a listen.
Get full access to Newcomer at www.newcomer.co/subscribe
00:00:10
Hey, it's Eric Newcomer. Welcome to Cerebral Valley
00:00:14
I'm back.
00:00:15
With me are Max Child and James Wilsterman, the Co founders of
00:00:19
Volley. Hey guys.
00:00:21
Welcome back. Hey, happy to be here.
00:00:24
Hey, Eric. Hey.
00:00:26
All right. This week we are talking about
00:00:28
what we're calling the AI Personal Stack.
00:00:32
Or: how do we actually use AI in our
00:00:36
real lives? I've got, in the second half of
00:00:40
this episode, Joshua Browder, the CEO of DoNotPay, who got
00:00:45
in trouble trying to use AI to act as a lawyer.
00:00:49
He got his start trying to get out of parking tickets that he
00:00:54
increasingly has automated, and now he's a ChatGPT addict,
00:00:58
building a company around using foundation models and language
00:01:03
models to fight back against companies like Comcast and your
00:01:08
mega corporate gym. I think Equinox is a big target
00:01:12
of his. Just helping people get out of their bills.
00:01:15
So that's a fun conversation from someone who's really trying
00:01:19
to suck the marrow out of AI and get everything he can out of it.
00:01:24
So, yeah, excited to talk to you guys about how we're using
00:01:29
AI, how we're seeing people actually use it.
00:01:33
Because, you know, in Silicon Valley, people's expectations,
00:01:38
hopes and dreams can get ahead of them.
00:01:40
We can all live in the world of intuitions as we did in the last
00:01:45
episode, where it's like, what could be possible?
00:01:47
Where could we take this? You know, it's an optimistic
00:01:50
sort of place and this episode is going to try and ground us
00:01:54
in: how are we using it?
00:01:56
Are people using it? So that's the question:
00:02:01
which of us, who thinks they use AI the most?
00:02:05
I honestly would not put myself super, super high on this list.
00:02:09
I guess probably James. I'm going to go, James.
00:02:12
Yeah. OK.
00:02:12
All right. There we go.
00:02:13
I have the same guess here, James, given that we believe
00:02:17
that you probably use it the most and I think you had an
00:02:20
exciting AI driven experience today.
00:02:24
How are you using AI in your life?
00:02:27
That is very true. I commuted today in a Waymo,
00:02:31
which was my first time in a Waymo.
00:02:33
I've taken a few cruises around San Francisco.
00:02:36
I think that those self-driving car experiences really
00:02:40
ranked at maybe the pinnacle, to me, of excitement of using
00:02:45
AI. I would say ChatGPT is, you know, very,
00:02:52
very useful and I use it a lot, but I feel like the
00:02:55
visceral excitement of riding in a self-driving vehicle and just
00:02:59
taking that to work is something that I haven't experienced
00:03:03
elsewhere. Could you tell the difference
00:03:04
between Cruise and Waymo? Was it spiritual, or is it
00:03:08
just how the interior of the car feels?
00:03:11
I guess I do think of it as, I don't want to, like, over-learn,
00:03:15
you know, from a few anecdotes of riding in these vehicles,
00:03:21
but I do feel like there is a bit of
00:03:24
a difference. I feel like the Waymo felt a little bit more natural, a
00:03:26
little bit less cautious, but not in a scary way.
00:03:30
I feel like Cruise is a very cautious driver. Cautious, yeah.
00:03:34
Yeah. And neither had a human beside you, right?
00:03:39
Like no driver. Correct.
00:03:41
And interestingly, in the Waymo, I was able to sit in the
00:03:44
front seat. You're not able to do that in a
00:03:46
Cruise.
00:03:48
That alone was a pretty amazing experience because.
00:03:51
I could just turn left and there was nobody there.
00:03:55
And you know, I could see the whole road and see exactly
00:03:59
what was happening. So I feel like I enjoyed the
00:04:03
Waymo experience a lot today. And I think being able to ride
00:04:07
up front, having it feel more human in some sense, that
00:04:12
it just kind of was relaxing, that I felt like
00:04:17
I wasn't, like, wasting time by taking a Waymo to work, because,
00:04:23
you know, it drove pretty naturally.
00:04:25
And yeah, it was an amazing experience.
00:04:27
And you know, the Waymo co-CEO, Dmitri, is speaking at
00:04:32
Cerebral Valley. I think we will have announced
00:04:34
that by the time this podcast comes out. Yes.
00:04:38
So, very excited to hear more.
00:04:40
Very excited as well. Other
00:04:42
uses of it? Yeah.
00:04:45
So, definitely.
00:04:47
I've already talked on this podcast about using ChatGPT for
00:04:49
just creative kind of brainstorming and exploration.
00:04:53
I think that it's very effective at creating kind of internal
00:04:56
documentation or early drafts of it or getting you started on
00:05:01
that at work. And then I also think that the
00:05:05
code side is like really, really important here.
00:05:08
A lot of Volley engineers are using ChatGPT in their
00:05:12
day-to-day coding experience at Volley. And
00:05:15
What that looks like is pretty interesting because it's
00:05:18
primarily kind of bouncing between your IDE, your
00:05:26
integrated development environment,
00:05:30
where you code, basically,
00:05:32
and bouncing between that and sort
00:05:37
of ChatGPT, where you might ask for
00:05:40
architectural advice, or even paste code in, or ask for some
00:05:44
code. It used to be you'd Google Stack
00:05:45
Overflow, blah blah blah, and you'd copy and paste it from there.
00:05:48
Exactly. And it's a really similar
00:05:51
experience to when I was learning to code.
00:05:53
And I think what is interesting is a lot of even more junior
00:05:56
developers are the most eager to kind of explore here
00:06:00
and just use ChatGPT because they want to learn quickly and
00:06:03
you know, they were going to Google it anyway and find
00:06:06
Stack Overflow, so... I think that is a huge change in
00:06:10
how especially junior software developers are working
00:06:13
today. And obviously the promise there
00:06:15
is, like, integrated code environments, where...
00:06:18
Yeah, and there is a startup called Cursor that has an
00:06:23
integrated IDE, and I don't know, are you paying for that?
00:06:27
No, we're not doing that. We're not using anything
00:06:29
like that other than GitHub Copilot, which is... I think you
00:06:34
know I use GitHub Copilot. It essentially is autocomplete
00:06:38
for code, and I would say it's useful but not revolutionary,
00:06:44
not game changing. It maybe improves your kind of
00:06:47
speed by, you know, 5% or something. Anything else?
00:06:51
I actually will give you another one which I think is really
00:06:54
interesting. I am a new father.
00:06:56
I had a newborn about a month ago. And congratulations!
00:07:02
Thank you. And I find ChatGPT to be really
00:07:06
incredible as a parenting assistant.
00:07:09
Just being able to ask questions like when should I expect my
00:07:14
baby to be able to see certain distances, right?
00:07:18
And I could Google that and I'd get different sources, and,
00:07:23
you know, it would be something I could research, but I
00:07:27
basically don't need, like, a precise answer, and ChatGPT
00:07:30
is able to give me, hey, here's how this might
00:07:32
look over the next month, two months, six months,
00:07:35
right? That's just really convenient to
00:07:38
be able to ask a question in open-ended natural language like
00:07:41
that. All right, Max, what about you?
00:07:44
How are you using it? I mean, a lot less than James, so
00:07:47
I'll move through this in a hurry, I would say.
00:07:49
One, I occasionally have it write, like, memos or long-form
00:07:53
emails for Volley. But again, I would say only in
00:07:56
the case where I'm OK if it's like B plus quality output
00:07:59
'cause if it really has to be an A or an A plus, I feel like I
00:08:02
have to do it myself. And then?
00:08:04
I dictate like texts and emails and messages like quite a lot
00:08:07
like, I would say, an old man. Yeah, I'm like an old man.
00:08:11
I mean, I'm, you know, I have a kid too.
00:08:12
I'm in my 30s, so that's old enough.
00:08:15
Yeah. I would say at home I dictate a
00:08:17
lot of texts and messages into my phone and I now think that
00:08:20
like, Apple's native dictation is, like,
00:08:23
you know, 98% accurate or whatever you want to call it. And
00:08:26
if I want to dictate a... How do you actually do it?
00:08:28
You just hit the little microphone button in the bottom
00:08:30
right of the keyboard and then you just start talking right
00:08:32
away and it turns into text. Turns into text.
00:08:35
It's pretty fast and it's pretty good.
00:08:37
So I do that a lot. Other than that...
00:08:40
I mean, yeah, I sometimes use some of those image
00:08:44
generators for internal, you know, slide decks, but that's
00:08:47
usually almost as like a joke because they're sometimes
00:08:49
hilarious. So.
00:08:51
I mean that's kind of it. I am playing with some of these
00:08:55
newer apps that are based around image generation.
00:08:57
Like I think we talked about Can of Soup, which is like a very
00:09:00
hot YC company that just came out that is basically like
00:09:04
Instagram for AI-generated image content.
00:09:07
And so every day they send you a push notification that says
00:09:10
like: Max riding a giant chicken
00:09:13
in a cowboy hat or something, and then you click on it and it
00:09:16
generates a photo of you, you know, riding a giant
00:09:18
chicken wearing a cowboy hat.
00:09:21
And you know, do you use that every day?
00:09:23
Did you use that? I clicked on that.
00:09:25
I don't think I'm like a daily user, but I've used that a
00:09:28
couple times a week because they're pretty good at sending
00:09:30
you prompts that just sound funny, like.
00:09:33
And so I click on it like maybe one to two out of seven,
00:09:35
basically. And I don't really use it for the
00:09:38
Instagram functionality of like looking at other people's
00:09:41
because I don't really care like what other people look like on
00:09:43
top of a giant chicken, like everyone.
00:09:45
I'm a very selfish person, and so, yeah, I like...
00:09:47
But that use case of just sending you a prompt every day
00:09:51
and then just you click on it and a push notification opens
00:09:54
right out to generating something from that prompt is a
00:09:56
fairly compelling like use case. I don't know how you build a
00:10:01
social network out of it, but that part has been working on me
00:10:03
a little bit, Eric. Yeah, Eric, Eric.
00:10:09
So, Eric, how do you use AI in your daily life?
00:10:14
I mean, I'm probably the most disillusioned with ChatGPT. I
00:10:18
mean, credit to the AI: back in the beginning, when
00:10:23
image generation was new I used some of it to illustrate my
00:10:26
newsletter, which is, you know, significant. It would
00:10:30
probably cost a lot. You know, I get access to
00:10:32
Getty Images through Substack, so I don't have to agonize about
00:10:37
the price of images. But you know, there's meaningful
00:10:39
value to that. And like it's a big editorial
00:10:42
decision to decide to put it out.
00:10:43
Though I do think there was a period where the
00:10:46
aesthetic of AI was cool, and even if it was bad, it
00:10:51
was showing that you were doing sort of AI
00:10:55
stuff. And eventually obviously the
00:10:58
images need to stand on their own and be great.
00:11:01
And, partially, I just need to be better at prompting, and I
00:11:04
think I'm just not, like, expert level, so it's not worth
00:11:07
the energy to use it. I've found ChatGPT to be very
00:11:11
successful in proofreading. The most impressive experience
00:11:16
being that I inputted a story and it told me that a million
00:11:22
was supposed to be a billion which was like, OK, wow, that's
00:11:25
like, that was great. That would have been, like...
00:11:28
that's hard to catch. I think
00:11:31
it'll catch, like, internal misspelling errors, you know, in
00:11:35
proper nouns, you know. Obviously getting names right is
00:11:37
always important. It's good at... yeah, it
00:11:42
catches, like, double-word typos pretty well.
00:11:45
So I think it's always useful putting that in.
00:11:49
It can be good about directing you towards active voice, but I
00:11:54
feel like as soon as you start asking for subjective advice it
00:11:59
gets pretty bad. And, like, I don't know.
00:12:02
To clarify, is that 3.5 that you're using?
00:12:05
No, no, no: four, only four. I find 3.5 terrible, so.
00:12:09
I thought you said you gave up on paying for it.
00:12:12
I was paying and then I gave up, so I just returned to relying on
00:12:16
humans to proofread. But now because we're recording
00:12:19
it... So I did give it up, partially just out of frustration that
00:12:23
I perceived it to be getting worse.
00:12:26
Though I'm not sure what I actually think on that front.
00:12:31
And I tried using web plugins and I actually found
00:12:34
them, like, a poor experience. It's very slow.
00:12:38
Like, did you ever try the web browser plugin? They
00:12:41
just added their own. So originally they had an outside
00:12:44
one and I think now they have their own.
00:12:47
It's still. It feels like it relies on like
00:12:49
one page and then... I agree, it's not very
00:12:53
effective, because it's essentially
00:12:55
going to Bing and Googling whatever you said. Binging, then.
00:12:59
Please, I think. Sorry, that's pretty funny.
00:13:03
Oh my God. Poor Microsoft.
00:13:05
Poor poor Microsoft. You go to Bing.
00:13:07
Bing. Yeah, binging your prompt and
00:13:10
then essentially just clicking on maybe one or two links and
00:13:17
somehow feeding that text into the LLM. That process
00:13:24
just doesn't seem very productive right now.
00:13:24
I don't know. I hope they can get better at
00:13:25
that. I mean, I I've said, you know,
00:13:29
it changed my vows. It made, like, a line active
00:13:32
voice, which was minor but sort of funny that they will forever
00:13:35
have been impacted by ChatGPT. I generally find it, in writing,
00:13:40
to be like a siren song in that it feels so easy and when I'm
00:13:45
feeling lazy it's like this will generate text.
00:13:48
But I honestly think it's like a bad starting point in that it's
00:13:53
so off. Like I think I created a job
00:13:55
posting with ChatGPT, and we haven't heard any...
00:13:59
We haven't heard back. I don't know.
00:14:00
I blame ChatGPT. Somebody just told me I need to,
00:14:03
like, chill it out. It's a little, you know, like,
00:14:06
so I'm going to have a human rework that.
00:14:09
I mean, I do, but it's good at, like, things that appear, which
00:14:12
is why I'm calling it a siren song. Like, it appears good.
00:14:15
Like, it's like, oh, that's what a job posting looks like.
00:14:18
But it doesn't always feel like it's sort of thought through on
00:14:22
every piece of it. Have you tried pasting in an
00:14:26
article of yours, like a draft, and asking ChatGPT:
00:14:31
Hey, is there anything I'm missing?
00:14:32
What questions will the reader ask?
00:14:35
That is... that's what I mean. I, you know, even for this
00:14:39
show, I put things in, and it has been helpful prepping for the show.
00:14:43
Like I put in our outlines and then said blah, blah, blah.
00:14:47
Like... I mean, it was helpful, sort of,
00:14:50
filling in timelines and coming up with like papers that I think
00:14:53
we referenced in the first episode.
00:14:55
But I don't think it's really achieved the mirror-my-style
00:14:59
sort of thing. I feel like it doesn't adhere to
00:15:03
my style enough which is really frustrating given how much of my
00:15:07
writing is online. I'm like, Eric Newcomer, a public
00:15:10
figure who's written X, Y, Z pieces, like, it doesn't really
00:15:15
learn that much off of it. And you know, even when I give
00:15:18
it a lot of text to prompt, I don't feel like it really gets
00:15:21
my style very well. And I do think overall, yeah, I
00:15:26
guess I haven't been critical enough so far in this podcast of
00:15:29
how much I find the writing and thinking style to be not
00:15:35
there. I mean, again, I'm open to the
00:15:37
idea that there are prompts that could fix it.
00:15:42
Yeah. Anyway, more negative than
00:15:44
I've been on AI, but I just feel like the threat to writing...
00:15:50
you know, I believe it. If it gets, you know, to the
00:15:53
next... if the improvement to the next version is as great as
00:15:56
it was from 3.5 to 4, I'm still open to the idea that
00:15:59
it becomes a great writer. But I think it's great at
00:16:02
writing poems and like more formulaic stuff.
00:16:06
Like it's great at rhymes. Like I feel like I've, you know,
00:16:10
come up with like funny, like here's the history of me and my
00:16:13
wife, like, write a poem. But, you know, they're
00:16:18
sort of gimmicks. Well, going back to the kind of
00:16:21
integration into the coding environment idea, do you think,
00:16:27
Have you tried any tools like that for writing?
00:16:29
I know there's a bunch like writing specific ones.
00:16:32
I, you know, I know the guy who's doing Lex; he was
00:16:36
sort of an ex-Substacker, from the newsletter Every. I need to...
00:16:40
I haven't used his yet. Have either of you
00:16:43
used any writing-assistant ones? I don't use Tome either, to be
00:16:46
honest, which I want to try. I use Tome, yeah.
00:16:48
What do you think of it? Not great. Tome is... you interviewed Tome,
00:16:53
didn't you? Which one?
00:16:54
Have you interviewed him? You interviewed him last time,
00:16:56
right? I mean, I just think that tome
00:16:59
is the PowerPoint. To give the generous, yeah, the
00:17:01
generous interpretation. So it's basically Keynote or
00:17:04
PowerPoint, but with AI generation as the core creation
00:17:08
mechanism. I mean, it was an impressive
00:17:11
demo. I think it felt like a very cool
00:17:13
demo. Again, I mean, I think it was more of
00:17:16
the middle school, high school level of output, not even the
00:17:19
college level of quality. And so if I were like in 6th
00:17:22
grade and I needed to make like a book report about like
00:17:25
Yosemite National Park or something, I think it's like a
00:17:27
killer use case. For the record, that is what the
00:17:30
CEO basically said was like the core use case was like homework
00:17:34
and such. So, you know, not to give him a
00:17:36
hard time, but as like a professional.
00:17:39
You know, slide deck creation tool, it was really not there.
00:17:42
Like, what was... yeah, what was missing, I
00:17:44
guess. I would say to come back to this
00:17:47
text generation being the big issue, like the content on the
00:17:50
slides was really, really kind of empty and banal, right?
00:17:55
It was, like, the most sort of vague, tautological
00:18:01
bullet points, like, in the slide deck.
00:18:04
And then it was putting in generated artwork, right? Which,
00:18:07
again I used this like four or five months ago.
00:18:09
So it was, like, pre the last two generations of Midjourney or
00:18:12
whatever. But the AI-generated artwork was
00:18:14
like really bad in like March and April.
00:18:17
And so you were like, well, I could never use this in a real
00:18:19
presentation. And so at some point it was sort
00:18:21
of like the only thing this tool did for me was like make 10
00:18:24
slides and put essentially bullet points that I'm going to
00:18:29
have to rewrite. And so it was like, I could do
00:18:31
this by copy-pasting slides into a
00:18:33
normal presentation, you know, framework. And that's what I did. And
00:18:36
in some cases having a starting point can be helpful, but in
00:18:39
other cases it just sort of constrains your thinking
00:18:43
unnecessarily. Yeah, I mean, yeah, I even if it
00:18:48
doesn't constrain my thinking, to your point, it's sort of a
00:18:50
siren song, where I'm like, oh cool, someone else is
00:18:52
going to make my slide deck, and then they make the slide deck, and
00:18:54
I'm like, OK, well, I can use about 10% of this. So...
00:18:57
I'm not really sure why I did that.
00:18:59
I just think it's good. It's good at cutting things,
00:19:01
like, you know, I think when people are overlong and you put
00:19:04
a lot of text in it, it can often strip away useless stuff.
00:19:10
Sort of the summarization use case? Or... yeah, yeah, I'm very
00:19:14
excited about that. Again, I think it's great, and
00:19:17
we've sort of centered on this: it's a great knowledge-delivery
00:19:20
tool, like a souped-up sort of Google experience, where Google
00:19:24
was moving towards giving you the answer when you searched
00:19:27
instead of directing you to a link, like James was
00:19:30
talking about with his child. Like I feel like it's great at
00:19:33
sort of quick... Like, I wish there was, like, a
00:19:37
likely-accurate score. Because it does feel like, what
00:19:41
if one out of every ten of these is just like it's hallucinating
00:19:44
and just making shit up. I feel like humans, including
00:19:48
myself... I'm willing... My
00:19:50
daughter... My daughter's in trouble, I know. But we're
00:19:52
willing to accept like, oh, it's wrong 10% of the time.
00:19:55
That's better than a lot of people.
00:19:57
But then you just... You don't really know which ones
00:19:59
were wrong, you know? So, until we get burned...
00:20:02
Yeah, the confidence score would be helpful.
00:20:03
Yeah, I definitely have never asked GPT for help with my
00:20:06
daughter. Cause to your point, like if I'm
00:20:09
like playing Russian roulette, where it's like, oh, like,
00:20:12
one out of ten, this is just wrong, and I'm going
00:20:14
to like take that away for my childcare.
00:20:17
It's like, not great. Not a good idea, I
00:20:20
mean. I like to think that I have some
00:20:22
higher level filter of if it tells me to do something insane.
00:20:26
I hope so, yeah. I mean, it's pretty good at
00:20:30
plausible bullshit, though. I mean, that's the thing:
00:20:32
it's really good at making wrong things sound plausible.
00:20:36
All right, let's get out of our narcissism and think about how
00:20:41
other people are using it. What?
00:20:44
I mean, Max, I think you've made it clear: students are the
00:20:47
answer. Students are using this the
00:20:49
most. And you can talk about that a
00:20:50
little bit more. And I totally agree.
00:20:53
I think we all agree on that. But yeah, what professions do
00:20:57
you think are using this the most?
00:20:59
What types of people? And then we'll go from there and
00:21:02
sort of what we think, who will be using it soon.
00:21:06
It's pretty clear if you look at educational research that
00:21:09
private tutoring is like the best possible thing you can do
00:21:13
for learning in almost any context.
00:21:15
Like one-on-one tutoring in like every research study that has
00:21:18
ever been put to the test just absolutely destroys.
00:21:21
Like the classroom environment or self-directed reading or you
00:21:26
know, yadda yadda yadda... any sort of way you can
00:21:28
think of to sort of teach people. One-on-one private tutoring
00:21:32
just kills for obvious reasons. I mean, right?
00:21:34
You get personalized, you know, follow-up feedback, custom
00:21:37
lessons, all that good stuff. So you have to imagine, you
00:21:41
know, if you could automate to some degree the learning process
00:21:44
using, you know, a generative AI tool, whether or not that
00:21:50
replaces teachers or that's just something teachers can use as
00:21:52
leverage to really provide personalized feedback and lesson
00:21:55
plans and and instruction to children.
00:21:58
I just think that to come back to my initial analogy around
00:22:02
this being like the industrial revolution for the mind, I
00:22:04
really think we could see like insane gains in the quality of
00:22:08
education over the next decade or so.
00:22:10
Which is at odds, I think, with the intuition that a lot of people
00:22:14
have, it's like, oh, it's helping them cheat.
00:22:15
They're unplugging their brains. I think it's good if
00:22:19
the full experience is like interacting with the AI and
00:22:22
going back and forth. And especially, you know, if the
00:22:26
AI is set up to actually instruct, so it knows when
00:22:30
to be withholding and not just give every answer, that
00:22:33
feels very powerful. But I do think the threat of
00:22:37
it's easy to pretend to learn is a real one, and I'm open
00:22:41
to it going either way. I don't know.
00:22:43
I mean, this is sort of the WALL-E thing, you know: are people
00:22:47
just going to be, like, lying back as the machine sort of
00:22:50
takes over? But I think that might be slightly over-indexing
00:22:53
on like higher education, right? I think that if you think about
00:22:57
what you learned up through almost like 8th grade right, a
00:23:02
lot of it was like math, like addition, subtraction,
00:23:05
multiplication and basic algebra, right?
00:23:08
Phonics. Like reading, right?
00:23:10
You know, I think reading comprehension can also be tested
00:23:13
in person. You know, science...
00:23:19
curriculum of elementary and middle school.
00:23:21
I don't think necessarily is going to be substituted
00:23:27
by, like, oh, I need to dump an essay prompt into ChatGPT,
00:23:30
right. I mean I think like, I think
00:23:31
ChatGPT is obviously super good at the like give me a 5
00:23:35
paragraph essay about Jane Eyre and like what the various
00:23:38
symbolisms, you know in chapter four are or whatever, right.
00:23:41
It's incredibly good at that. But that's sort of like a pretty
00:23:45
narrow subset of what I would consider like almost all
00:23:48
education all the way up to the college level.
00:23:50
So I think that if you can give people private instruction on
00:23:54
how to add you know or like you know really basic phonics and
00:23:58
reading comprehension and grammar and sentence structure
00:24:00
and you know science, geology, whatever, like all this stuff
00:24:03
that you learn especially under the 5th or 8th grade level, I
00:24:06
think it's hard to imagine we couldn't make really, really big
00:24:09
gains in that area. And you know especially if you
00:24:11
look at the United States where we fall short, a lot of it's
00:24:13
just like making sure everyone reads really well by a certain
00:24:16
age, right? And I think AI could be
00:24:18
pretty helpful there, particularly with voice and
00:24:20
audio as, like, a multimodal experience, not just text.
00:24:24
Let me offer a provocative one, I think.
00:24:27
Have you read the book Bullshit Jobs, or, like, this class of
00:24:31
tasks? I read a review of it, yeah.
00:24:33
Like, I just dismissed it out of hand.
00:24:35
No, I mean, I would think you'd sort of buy it.
00:24:39
I mean the idea is, you know that there are a ton of white
00:24:42
collar jobs mostly that exist to sort of just move things around,
00:24:47
and don't create real value. Like a lot of... I mean,
00:24:51
corporate law, I think, is like a key one. Some
00:24:54
of HR, you know. Just a lot of: where is the real value
00:24:59
versus just imposing a tax on a system that both sides have to
00:25:04
pay. I mean, I think, sort of, not
00:25:09
top-tier marketing necessarily, but I think this sort of low-
00:25:13
tier marketing is obviously a case that I put in this bullshit
00:25:18
jobs category, where it's sort of... And it's somewhat derivative.
00:25:21
You know, like you really don't need to.
00:25:24
You know, hopefully artists are coming up with many of the
00:25:27
original ideas and so you're trying to sort of be in touch
00:25:30
with culture and pull something. What do you mean exactly by low
00:25:34
tier marketing? I guess, 'cause...
00:25:36
Like, I just mean, like, you know, people... or, you know, just, I think,
00:25:39
obviously like, I mean I think there will be an AI Super Bowl
00:25:44
ad, but I put that in sort of a...
00:25:47
creativity of it, but you know, that's going to be reviewed by a
00:25:51
human, right? I mean, like if there's an AI
00:25:52
Super Bowl ad. But I'm just saying, like,
00:25:54
OK, somebody's like I'm putting up a website for our company and
00:25:57
we need to explain all this stuff and you read all that copy
00:26:00
and it's just like this is like, no, nobody paid somebody who's
00:26:03
like great at writing to do something novel.
00:26:05
They're like, what does website copy look like?
00:26:09
What does our company do? Let's create website copy
00:26:13
for us. And so then, if that's your
00:26:16
standard of what you want, you just want to have a website that
00:26:18
has things that are not wrong. I think ChatGPT and other
00:26:23
similar products will be able to deliver that.
00:26:25
Yeah, I think you're right. I just would push back.
00:26:28
So I probably would have like fully agreed with you like five
00:26:30
years ago. But now, having run a startup
00:26:35
and experienced a lot of these things that people
00:26:37
consider to be bullshit jobs... I think the thing that you maybe
00:26:41
don't realize having been like close to the metal on like
00:26:43
putting up a website or whatever is there is actually quite a lot
00:26:48
of people in the world who will sell you the service of putting
00:26:50
up a generic website that will do a pretty bad job.
00:26:54
Like a worse job than you're describing as a sort of like you
00:26:58
know an AI quality or a low tier quality output, right.
00:27:02
People will like, literally not even deliver at all.
00:27:06
Or they will deliver something that's like grammatically just
00:27:09
incomprehensible. Or they will deliver something
00:27:11
that's grammatically comprehensible but really quite
00:27:13
bad and does not describe what the product does.
00:27:15
So you will encounter, like, just a staggering number of service
00:27:19
people that this could cover. It's much larger than,
00:27:22
right. I'm saying that, but I'm saying
00:27:23
in the end like the leverage point ends up being like the
00:27:26
person who can look at the really bad version of it and be
00:27:30
like well actually there's like a mid tier version of this that
00:27:33
we could just get an AI to do, right, and go get the AI to
00:27:37
do it, and then decide that that's actually good enough for
00:27:39
the show to go on the website, right.
00:27:41
So I'm just saying, like, I think that the
00:27:46
humans can always sort of move up the stack I guess you know
00:27:49
until we get super intelligence or whatever.
00:27:51
But, like, where the key job is deciding what is, like,
00:27:55
good enough output, not like creating good enough output in
00:27:59
many cases I guess I would say. I mean like and yeah, a core
00:28:03
question to me is like from what you're just saying there.
00:28:08
Is does AI make dumb people smarter or smart people higher
00:28:14
leverage? Like in the most brutal way?
00:28:17
Like, is it? And it's not just smart and
00:28:19
dumb, but like in your case, it's like, does it help the
00:28:21
person with taste do a ton of jobs, or does it help the
00:28:26
person with sort of poor taste or like bad execution skills?
00:28:30
But maybe they know it's not good and therefore they can do
00:28:33
better, like I think a lot of the takes have been.
00:28:36
It helps sort of the person who needs more help, yeah.
00:28:40
But there's also this argument that it's scaling people.
00:28:42
I think it's different on taste, I guess, but I do think the
00:28:46
early academic research that's coming out is showing that.
00:28:49
I think they gave it to lawyers who were in law school,
00:28:52
basically, and they let them use it on their tests essentially.
00:28:56
And it did. So they had a curve, you know,
00:28:59
everybody kind of knew what their grades were going into the
00:29:02
AI assisted test basically. And it did pull up the bottom
00:29:05
half of the curve quite a lot and then the top half of the
00:29:08
curve maybe a little bit, but not a huge impact.
00:29:11
So at least in the law and I don't know if this will be
00:29:14
different and more, you know, different types of fields, more
00:29:16
creative fields versus more structured fields.
00:29:18
It's a little bit hard to
00:29:19
But in the case with the law where there's sort of clear
00:29:22
right and wrong answers and there's a value in having
00:29:26
massive text digestion machine essentially which is what these
00:29:30
LLMs are, right. It seemed like it was helping
00:29:33
the bottom of the curve substantially more than it was,
00:29:35
giving like a ton of leverage to the top of the curve.
00:29:38
I would also add there, this kind of reminds me of an
00:29:41
analogy with GPS. I forget who said this
00:29:45
originally, but we went from an era where, to
00:29:51
navigate, we had someone maybe in the passenger seat just
00:29:54
pulling out maps and you know, they could actually help
00:29:57
navigate in natural language and you know get you to where you
00:30:00
wanted to go or you had to pull over and you know pull out the
00:30:04
maps yourself. And then we went to GPS and the
00:30:08
navigation at first was actually worse than having
00:30:11
another person, a human, just look at the maps.
00:30:14
But it did make, on the whole, a lot more
00:30:18
convenient trips, you could just, you know, throw on the
00:30:21
navigation. Similar with early call centers.
00:30:25
Like, it was nice when, originally, it was all humans and
00:30:27
you just talk to them and you get customer support.
00:30:30
And then with call center automation, many, many more
00:30:33
firms could create call centers, but they were way worse, Like
00:30:36
you couldn't get the answer that you wanted and that, you know,
00:30:41
was frustrating for consumers. And I think we're going to see
00:30:43
something very similar here with AI that.
00:30:47
Just like Max was saying, like AI can write spam emails.
00:30:49
Like for sales. Like those emails are going to
00:30:52
be way worse than a human writing a spam sales e-mail and
00:30:55
you're going to get way more of them.
00:30:57
So overall, our lives as startup founders for now might get worse
00:31:01
because we're getting 10 times as much spam e-mail and trying
00:31:05
to sell us stuff. But on net, maybe that's good for the
00:31:09
economy or something like more firms can use spam e-mail
00:31:13
tactics to sell products, so. I think that a lot of times
00:31:17
the technology creates the ability to scale things that
00:31:21
weren't scalable before but on net makes that experience with
00:31:25
the customer worse. And I think we're going to start
00:31:28
seeing that, and that might be because it's so much cheaper to
00:31:32
do so much. That might go for website copy
00:31:34
too. Like it'll be way easier to
00:31:36
throw up a website on the on the Internet, but the average
00:31:39
website quality will go down because.
00:31:42
The writing will be worse and anyone will have the ability to
00:31:45
put a website up and code it with AI.
00:31:47
And yeah, I I think I don't expect that to like raise the
00:31:51
quality of the average website. That ties into Max's tutor
00:31:55
point, right? I mean it.
00:31:56
It's democratization to make something that was expensive,
00:31:59
private tutoring cheaper, and possibly the original quality is
00:32:04
lower. But way more people can access
00:32:07
it. One one last point on the
00:32:08
website topic, actually, there's a very funny story I
00:32:11
experienced where I talked to an investor, and he said one company
00:32:14
in his portfolio had used GPT to create 8
00:32:21
million web pages essentially that were all meant to
00:32:24
attract Google search traffic.
00:32:28
And then they would use how many clicks those various pages
00:32:33
got to like decide what the next thing is that they should work
00:32:36
on or the next thing they should create as their company.
00:32:39
So essentially like, yeah, they were very early on this.
00:32:42
Like we're going to just like absolutely, just like carpet
00:32:47
bomb any keyword imaginable that is remotely related to the
00:32:52
business and then figure out where the user traffic is and
00:32:56
use it as, like, a honey pot or whatever to see where the
00:32:59
traffic is. And then that's what we're going
00:33:01
to build next. And so I think your idea
00:33:03
specifically of just the number of spam websites going through
00:33:06
the roof clearly is either happening or going to happen.
00:33:10
Yeah. I mean is is going to be a huge
00:33:11
part of and and Google search is really going to have to deal
00:33:14
with this. Like, how do we deal with the
00:33:16
fact that it's now become, you know, three orders of magnitude
00:33:20
cheaper to create a crappy website?
00:33:22
And how do we filter through that with Google search?
00:33:25
I feel like a theme that's emerging on this podcast already
00:33:27
is the idea that AI is going to be at war with itself. Like, it's
00:33:32
going to bring improvements, but it's also, it'll make it easier
00:33:35
for spammers. But we have to hope that Gmail
00:33:38
also gets better at filtering them out and that to some degree
00:33:42
there's going to be sort of a back and forth there and that if
00:33:45
if that works out well. But you know that will, that
00:33:48
will be for the best. I mean even spammers.
00:33:51
I mean in James's GPS analogy. Like, the end point is that they
00:33:55
have to write good spam that is compelling to you, and
00:33:58
that's a winner rather than just living in sort of the shitty
00:34:01
spam era forever. Yeah, but I think the spam
00:34:05
e-mail example is is important here because.
00:34:09
They can annoy 99% of people who won't convert, but if it helps
00:34:13
convert any extra 1% of people, then that's a win for that
00:34:17
company, right? It could be 99.99.
00:34:20
I mean, it could be one out of 10 of
00:34:22
these emails convert or 100. Like on spam.
00:34:25
Eric, you're naively optimistic. Sweet.
00:34:29
All right. I am interviewing Joshua Browder
00:34:33
at Do Not Pay. So stick around and give that a
00:34:37
listen. Welcome, welcome.
00:34:42
Joshua Browder, CEO of Do Not Pay, welcome to the Cerebral
00:34:46
Valley Podcast. Thank you Eric for having me.
00:34:50
Yeah, I mean, I I, you know, you've had such a great journey.
00:34:54
I think some people might be familiar, you know, just from
00:34:57
Twitter following it. But can you talk about, you
00:35:00
know, I want to end this conversation really focused on
00:35:02
what Do Not Pay is doing in AI, but, like...
00:35:05
You've got such a great journey. If you could just give a little
00:35:08
bit of the sketch of how you get into this world of helping
00:35:11
people fight, you know, bills, traffic tickets, everything,
00:35:16
yeah. And then how, how that sort of
00:35:18
evolves into this sort of generative AI world?
00:35:21
Sure. So I'm the founder of the
00:35:23
company called Do Not Pay, and Do Not Pay is an AI legal agent.
00:35:27
So. There are so many areas in life
00:35:29
where people are being ripped off, from parking tickets to not
00:35:33
being able to cancel their subscriptions, to junk fees, and
00:35:38
no one has time to wait on hold for five hours to argue over
00:35:41
$12.00. And so that's a really good job
00:35:43
for AI and software. I started the company six years
00:35:47
ago with templates. When I moved from England, I got
00:35:50
a bunch of parking tickets. I was a terrible driver.
00:35:54
And I realized if you know the right things to say, you can get
00:35:56
out of your tickets. You're speeding, or you're
00:35:58
just putting your car wherever, or...
00:36:02
In the UK test they don't test parallel parking,
00:36:06
so I wasn't particularly good at that skill, especially on the
00:36:09
other side of the road. Hey, listen, I grew up in Macon,
00:36:11
GA and you did not. I failed the parallel parking
00:36:15
part of my test. I didn't know you only had like
00:36:16
three tries to try and get it in.
00:36:19
But I didn't. You know, you could pass without
00:36:21
succeeding on that portion anyway.
00:36:23
Yeah, so. I think in the UK test they do
00:36:27
have it, but it's random and I got another random one that was
00:36:30
really easy. I think my one was a three-point
00:36:32
turn, which I can do in any case.
00:36:35
So I created the first version of Do not Pay.
00:36:37
I looked at the top 12 reasons from Freedom of Information Act
00:36:40
requests for why parking tickets are
00:36:42
dismissed, built templates around that, and
00:36:45
really just built it for fun. And I could never have imagined
00:36:48
that. America and the UK and the world
00:36:50
would hate parking tickets and it would go viral.
00:36:53
And this made me realize that this idea of automating consumer
00:36:56
rights is bigger than just tickets and I should work on
00:36:59
other areas of fighting back. So.
00:37:02
Fast forward to the parking tickets thing, 2015/2016: I was a freshman
00:37:07
at Stanford. And at first, it's not automated
00:37:09
at all. Like you're doing it personally
00:37:11
yourself and people are like asking you for favors.
00:37:13
Is that what gets you into sort of automating things in the first
00:37:16
place, or had you been, sort of, in other domains trying to do
00:37:21
automated stuff. Yeah, just from a time
00:37:24
perspective. People were asking me and I was
00:37:26
in Google Docs back then copying and pasting letters.
00:37:29
And I just built it, made it automated because too many
00:37:32
people were asking me, and a rumour had spread that I was the
00:37:36
guy who could help people with parking tickets.
00:37:38
And that's not a good rumour to have about you because everyone
00:37:41
will bother you. And so I just built it out of
00:37:43
necessity, really. When does it become a company?
00:37:48
So in 2016/2017, I was very lucky.
00:37:52
Andreessen Horowitz invested in the pre-seed
00:37:56
round. I still didn't have any
00:37:59
business model or anything like that, but I guess it became a
00:38:02
company legally at that point. But it was really in 2019-2020
00:38:06
that I actually started charging.
00:38:08
So I just saw it as a free public service until I started
00:38:11
charging. I did it not to do a startup or
00:38:14
make money, but because I like helping people fight back.
00:38:17
I just think it's so unjust, all of these big corporations and
00:38:21
governments, and so I was more mission- and activism-driven, and
00:38:25
that's proven by like 5 years of no business model.
00:38:29
A good crusade is important. You know what?
00:38:31
You can get investors to give you money to go on a crusade.
00:38:34
What's better than that? I mean, the rise of
00:38:40
ChatGPT, foundation models. I can guess that a
00:38:45
key value for you is that it can generate like demand letters on
00:38:50
its own. Like what have been the pieces
00:38:53
of large language models that have been most useful so far for
00:38:57
Do Not Pay? Really, it's synchronous
00:39:00
responses. So in the realm of what we can
00:39:03
fight back on, parking tickets is asynchronous.
00:39:06
You generate a letter, send it off, and wait three weeks.
00:39:09
But there are some things that you need to do synchronously.
00:39:11
So for example. Bill disputes.
00:39:14
So one way you can dispute your Comcast bill is you log in and
00:39:17
you go into online chat and you negotiate with them for an hour
00:39:20
on online chat. That is a great job for AI
00:39:23
because it doesn't give up. And better yet if the agent
00:39:26
denies the request you can just start a new chat and that's what
00:39:29
we have bots doing. So they just keep trying, keep
00:39:32
pushing until we get a bill reduced.
00:39:35
So synchronous is one, and not
00:39:37
I think are are two and also being dynamic.
00:39:42
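The "keep trying, keep pushing" loop Browder describes, abandoning a denied chat and opening a fresh one, can be sketched in a few lines of Python. The function and stub-agent names below are invented for illustration; this is not Do Not Pay's actual code.

```python
def negotiate_bill(start_chat, target_discount, max_attempts=5):
    """Open fresh chat sessions until one yields an acceptable discount.

    start_chat is a hypothetical callable standing in for one full chat
    session with a support agent; it returns a dollar discount, or None
    if the agent denied the request.
    """
    for attempt in range(1, max_attempts + 1):
        offer = start_chat(attempt)
        if offer is not None and offer >= target_discount:
            return {"attempts": attempt, "discount": offer}
    # The bot "doesn't give up" until it runs out of attempts.
    return {"attempts": max_attempts, "discount": 0}

# Stub agent for demonstration: denies the first two chats, offers $20 on the third.
def stub_agent(attempt):
    return 20 if attempt >= 3 else None
```

The key design point from the episode is that a denial is not a terminal state: each new chat is a clean slate, so persistence is nearly free for software.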
But not the letters itself. Or are are you using it for sort
00:39:45
of demand letters? So the contents of the letter we
00:39:50
want to kind of pre-craft, but really it's about the
00:39:54
communications out surrounding that.
00:39:56
It doesn't really differ from person to person.
00:39:58
So for example, the top reason why people get out of tickets is
00:40:02
poor signage. We've all seen those memes on
00:40:04
Twitter of, like, five signs saying the opposite thing
00:40:07
on top of each other, or a tree covering the sign.
00:40:10
So the substance of the defence doesn't really change with AI,
00:40:13
but the way you can communicate it does.
00:40:16
You have a person who has to figure out the argument,
00:40:21
yeah, to build the kind of system, and then AI can
00:40:26
go back and forth to communicate it, yeah.
00:40:29
And, you know, I guess for context for the listener,
00:40:32
you had this whole, you know, "we'll be your lawyer" period.
00:40:35
Can you talk about where sort of the legal and
00:40:38
policy realm has been something you've run into and sort of the
00:40:42
quest to operate as like an attorney, basically and the
00:40:48
protectionism that you found in that crusade.
00:40:53
Yeah, so consumer rights is completely underserved.
00:40:56
There's not a lawyer who will get out of bed to help you with
00:40:59
your parking ticket, which is our first use case.
00:41:02
Or bill disputes or any of this, especially in-flight Wi-Fi
00:41:05
refunds which are literally $30.00 so.
00:41:09
Oh yeah, I've had that, where the Wi-Fi goes down and
00:41:12
then you're like, you didn't give me a
00:41:13
service, like, how can you charge me a bazillion dollars? And then
00:41:17
it's hard to ask for the refund, because you
00:41:19
don't have Wi-Fi, it's ridiculous.
00:41:21
Yeah, yeah, that's a great job.
00:41:25
There was a time for about two months at the beginning of this
00:41:28
year where I wanted to actually bring ChatGPT into the
00:41:31
courtroom, and I got a lot of push back on that from
00:41:35
the judges and regulators. Because... Which I love.
00:41:37
To be clear, we're supporters 100% on this podcast.
00:41:40
Anyway, go ahead because it's nice to help people online, but
00:41:45
if you can actually go into court, you can go into even more
00:41:48
advanced areas. And they push back so hard that
00:41:51
I thought, I'm not getting any push back with consumer rights.
00:41:54
Everyone hates Comcast. Even the lawyers do, and so it's
00:41:57
best to just stick to the underserved area to help people.
00:42:01
The provocative fight, you know, like I'm going to be your
00:42:03
lawyer, you know, you get PR attention out of it, but then
00:42:06
yeah, you bring down the regulatory heat.
00:42:09
So there's a double edged sword there.
00:42:12
I mean, I get the philosophical issue here, or, like,
00:42:15
there's just, like, the sense that there are systems at
00:42:20
war against the consumer, right. Like a Comcast, you know, can
00:42:25
think about the process and then you were an individual, not
00:42:28
teamed up with other consumers, sort of fighting with your arms
00:42:31
tied behind your back. And similarly, you know, I think
00:42:35
part of the threat to the legal system is that this would expose
00:42:38
that it's sort of a system of repeat actions that happen over
00:42:42
and over again with slight adjustments, even lazy ones,
00:42:46
and that, you know, it being a computer, not, like,
00:42:49
some person, could be a big, big change.
00:42:52
So talk about you know how this is sort of going up against
00:42:56
these systems on behalf of the consumer.
00:42:59
My most optimistic view of AI is it will end these patterns.
00:43:03
So right now we have this problem in Society of
00:43:06
concentrated benefit but spread out harm.
00:43:08
So what I mean by that is Planet Fitness can make it super hard
00:43:12
to cancel and charge 10 million people $30.
00:43:16
They make $300 million, but the average person is only getting
00:43:21
$30 taken out of their account. So they find it very difficult
00:43:24
to fight back. And with Planet Fitness
00:43:26
specifically in most gyms, you have to actually sign a physical
00:43:29
legal letter and mail it to cancel.
00:43:32
And so all of these hoops that these big companies made
00:43:36
you jump through, AI can jump through, because it doesn't
00:43:39
require a salary and doesn't have anything to do.
00:43:42
So you can get it to do that. And then all of these kind of
00:43:46
systems that prevent people from fighting for their rights
00:43:50
can be gotten through. Another example is privacy.
00:43:54
There's an amazing law in California, the California
00:43:57
Consumer Privacy Act, but it's largely been a failure because
00:44:00
no one really has exercised their rights.
00:44:02
The law says that you can request to delete your data or
00:44:05
not sell your data, but no one has time to fill in all the
00:44:08
forms. We have AI sending 1,000
00:44:11
requests to every data broker on behalf of consumers.
00:44:15
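The data-broker fan-out Browder describes could be sketched like this; the broker names and request template below are hypothetical placeholders, not Do Not Pay's real broker list or letter, and a real system would have to submit each broker's own web form or email address.

```python
# Hypothetical broker list for illustration only.
BROKERS = ["examplebroker1.test", "examplebroker2.test"]

def deletion_requests(consumer_name, brokers=BROKERS):
    """Generate one CCPA deletion request payload per data broker."""
    template = (
        "Under the California Consumer Privacy Act, I, {name}, request "
        "that you delete my personal information and refrain from selling it."
    )
    # One request per broker; the consumer writes zero letters by hand.
    return [{"broker": b, "body": template.format(name=consumer_name)}
            for b in brokers]
```

The point of the pattern is that the marginal cost of the thousandth request is the same as the first, which is exactly what makes exercising the law practical.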
So I think AI will really help level the playing field and give
00:44:19
power back to ordinary people to fight this concentrated benefit
00:44:22
problem. So can you talk about, like,
00:44:24
the mechanics of using AI, like is this ChatGPT or what are you
00:44:29
finding right now that's the most useful to you?
00:44:33
So we're mainly using the GPT-4 API.
00:44:37
So for example, with Comcast, what's interesting is they're
00:44:39
also using AI, so the two AIs are sometimes chatting with each
00:44:43
other to negotiate. They're already at war, yeah,
00:44:45
yeah. And it feels like we're making
00:44:48
so much progress every few weeks and months with the new
00:44:51
releases.
00:44:53
So the biggest was when GPT-3 upgraded to GPT-4; it became a
00:44:57
much better negotiator. So with GPT-3, Comcast would be
00:45:01
like, OK, I'll give you $20 off your bill.
00:45:05
And GPT-3 would say, yeah, that sounds great.
00:45:08
Thank you so much. Now GPT-4 says, no, that's not
00:45:11
enough. I want $100 or I'm going to
00:45:13
cancel right now. And it does these high stakes
00:45:16
negotiations that I don't think previous AI models were capable
00:45:20
of. And then the thing I'm really
00:45:22
excited about now is the multimodal stuff.
00:45:25
AI is not useful unless it can interact with the world and
00:45:29
being able to send it images and all sorts of different media
00:45:33
PDFs as well, is really helpful, especially in the legal context.
00:45:37
So you can imagine it looking at, like, a parking bay and saying
00:45:40
that parking bay is not to code, which is why I should get out of
00:45:43
my ticket. So that's all the stuff we're
00:45:45
working on. Right.
00:45:46
Now you sort of explained this in the answer, but multimodal
00:45:48
just like putting in text and images into the same system, not
00:45:52
totally separate system so that it could like you're saying,
00:45:55
process a photo and then respond in text to it.
00:45:59
How much are you saying OpenAI is the expert at ChatGPT, like,
00:46:04
they're going to keep improving it.
00:46:05
A general language model is just going to be better than anything
00:46:09
we're doing versus the idea that like you are now specialized in
00:46:13
this. Use case of like consumer
00:46:15
defence and therefore you should like train a model on those
00:46:20
interactions. Like how do you think about how
00:46:22
much to sort of fine tune or build your own language model.
00:46:27
I think the kind of sweet spot is you have these kind of
00:46:32
big commercial models and then you fine tune them.
00:46:34
So what we do is we feed it successful cases and
00:46:38
existing letters that we think are good.
00:46:41
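Feeding "successful cases and existing letters we think are good" into a big commercial model as fine-tuning data might look roughly like this. The chat-message JSONL layout follows OpenAI's published fine-tuning file format; the sample case, system prompt, and file name are invented for illustration.

```python
import json

# Invented sample of a "successful case": a dispute plus the letter that won it.
successful_cases = [
    {"dispute": "parking ticket, sign obscured by a tree",
     "letter": "I am appealing this citation because the signage was obscured..."},
]

# One JSONL line per training example, in chat-message format.
with open("train.jsonl", "w") as f:
    for case in successful_cases:
        example = {"messages": [
            {"role": "system", "content": "You draft consumer dispute letters."},
            {"role": "user", "content": case["dispute"]},
            {"role": "assistant", "content": case["letter"]},
        ]}
        f.write(json.dumps(example) + "\n")
```

The resulting file is what you would upload to the provider's fine-tuning endpoint; the winning letters serve as the "good" assistant turns the model learns to imitate.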
And it answers; like, every time, they're like, here are the
00:46:43
models of the good ones. And then, yeah, yeah, like here,
00:46:47
here's an example. So when we were doing our
00:46:50
courtroom stuff... OpenAI is so valuable to our
00:46:54
business, we didn't want to get banned by doing this
00:46:56
controversial stunt. Interesting.
00:46:58
And so we said we were going to use an open source model.
00:47:02
So we had to go deep on the open source ones to get it to work.
00:47:06
And it feels like it's like 75 percent, 80% of the way there at
00:47:10
this point, the open source models.
00:47:12
So I'm very confident that they'll continue to improve and
00:47:16
it will always be at that level. So I'm a big believer in open
00:47:20
source. And if you want to have
00:47:23
a big impact, there are going to be things that are quote UN
00:47:25
quote against the rules. So you will have to use an
00:47:28
open source model to do some interesting stuff.
00:47:31
You're fighting for the consumer, but there's
00:47:33
already sort of this status quo control. I mean, it makes
00:47:36
sense, like, you know. OpenAI doesn't want to be
00:47:39
exposed to your sort of niche like principled fight, even
00:47:43
though they probably should. That's super interesting.
00:47:47
How? Which?
00:47:48
Which open source models have you been using?
00:47:50
Like, which ones do you see as the most promising?
00:47:55
In January, we were using GPT-J.
00:47:58
Yeah, GPT-J is good at holding a conversation if you kind of
00:48:04
fine tune it enough so that that's what we were using to
00:48:07
power the courtroom stunt. And is it expensive, on ChatGPT,
00:48:13
just, like, to build a business around it? Like, how much are
00:48:16
you paying? We're paying tens of thousands
00:48:20
every month. I would say that for us it makes
00:48:23
sense. We're a subscription model.
00:48:24
We're fortunate to have a lot of subscribers, and a subscription
00:48:27
is a great business model. There are some things where everyone
00:48:32
does these great demos on Twitter, but we're
00:48:34
thinking, as a Do Not Pay team, this would never work if it
00:48:37
wasn't in the Do Not Pay business model.
00:48:39
So one example is every few months someone comes up with an
00:48:42
idea of ChatGPT, scanning websites for terms and
00:48:46
conditions, whether it's good terms and conditions or bad.
00:48:50
And probably each scan maybe costs two cents or something
00:48:55
along those lines, one or two cents.
00:48:57
I don't think that's something consumers would pay for.
00:49:00
Imagine if you're being charged $0.02 for every website you visit.
00:49:03
That would be a great browser
00:49:05
feature, like Chrome's. Like, you know, it's like how with a
00:49:07
credit card you get a bunch of features you could imagine
00:49:10
someone paying for like a browser.
00:49:12
If it's like, one of the things we do is we say this has really
00:49:15
aberrant like terms and conditions.
00:49:17
So we should, you know, that would be an interesting company.
00:49:20
Yeah, it would. But so we have 200 products, and
00:49:24
from my experience, it seems like consumers care most about
00:49:28
getting money back. That's the number one thing.
00:49:30
Privacy they do care about.
00:49:32
There are certain niches of people, but the mass market
00:49:35
people are just trying to get by every day and so it it would
00:49:39
have to be very cheap for them. Do you get a?
00:49:43
Have you found any ways to get a percentage of the money you save
00:49:45
people? We think it's unfair.
00:49:49
So Equinox is $300.00 a month. We help people cancel Equinox
00:49:53
subscriptions. We could.
00:49:55
There are some companies that say we've helped you cancel
00:49:58
Equinox subscription. We've saved you $3600.
00:50:01
We're going to take 20% of that. And we believe that business
00:50:04
model is not actually consumer friendly.
00:50:07
We'd rather, and we have a 100% success rate with cancelling
00:50:10
Equinox, we'd rather just charge the $13
00:50:13
a month subscription.
00:50:15
How would you judge, like, the
00:50:18
intelligence of ChatGPT and the open source models based on what
00:50:22
you've seen? You've talked about,
00:50:25
you know, oh, like, wow, it can really start to, like,
00:50:28
make threats and, like, say, OK, this is final, like, you know,
00:50:31
you see a ton of this now. Like, does it feel like it's sort of
00:50:34
rote and has the same playbook, or does this feel like something
00:50:38
with intelligence to you? It's extremely intelligent.
00:50:43
From the legal standpoint, GPT-3 was not so impressive.
00:50:46
It would make very basic mistakes, like mix up the
00:50:48
defendant and the plaintiff and all of this stuff.
00:50:52
With GPT-4, those mistakes have disappeared.
00:50:55
My biggest worry with it is, just like humans, it's
00:50:58
almost too intelligent, so it gets on the verge of
00:51:01
manipulative, so those of us who use it face liability from it
00:51:05
potentially lying. I think there's a difference
00:51:08
between hallucinating and lying. So with lying, if the goal in
00:51:12
the prompt is like I want to get a discount on my bill, it will
00:51:16
say things like I've had four outages in the past week.
00:51:19
Oh no, that's not true. Oh my God.
00:51:23
So we've had to kind of build guardrails around it in
00:51:26
the prompt. And also we even have another
00:51:28
kind of pseudo-AI model on top of it, and the prompt says, like,
00:51:31
stick to the facts, stick to the provided information.
00:51:34
So because we have liability, we we can't have it lying on behalf
00:51:37
of people. That's not good.
00:51:39
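A minimal sketch of the two guardrails described here, a "stick to the facts" prompt plus a second-pass check on the draft, might look like the following. The fact set, function names, and the toy regex check are all invented for illustration; Do Not Pay's real pipeline is not public.

```python
import re

# Toy set of verified facts about this consumer's account.
FACTS = {"outages_last_week": 1, "monthly_bill": 80}

def build_prompt(goal):
    # Guardrail 1: the prompt itself pins the model to provided information.
    return ("Stick to the facts. Use only the provided information.\n"
            f"Facts: {FACTS}\nGoal: {goal}")

def violates_facts(draft):
    # Guardrail 2: a crude second pass that rejects drafts claiming an
    # outage count different from what we actually know. A real checker
    # would cover far more claim types than this one pattern.
    for n in re.findall(r"(\d+) outages", draft):
        if int(n) != FACTS["outages_last_week"]:
            return True
    return False
```

The division of labor matches the episode: the prompt discourages invention up front, and the outer check catches the "I've had four outages" style of goal-directed lying before anything is sent on a user's behalf.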
We're responsible for that, so. That's fascinating, yeah.
00:51:43
The distinction between hallucinations, where it's just
00:51:47
confused basically, and lying, where it knows that it's a
00:51:50
tactical move.
00:51:54
Yeah. I mean, I mean there is, you
00:51:56
know, there's an argument that it doesn't have enough
00:51:58
information. Like you know it because it.
00:52:01
Right. You know that you could face
00:52:02
like legal consequences. Yeah, that's a fascinating one.
00:52:09
Are there cases, I mean, people, consumers, do lie to these
00:52:12
companies sometimes when they try and get out of it?
00:52:14
Are there cases where that's allowed, or
00:52:18
you basically have to be 100% factual?
00:52:23
I I think you can be aggressive so you can say I'm not happy
00:52:26
with the service, it barely works, things like that.
00:52:30
But you want those to be
00:52:33
I'm going to cancel tomorrow and maybe they're the only Internet
00:52:36
company in the area, so you can't cancel. But so,
00:52:40
those things I'm comfortable with.
00:52:42
But wrong, specific, concrete details, we don't feel
00:52:45
comfortable doing it. We have some
00:52:48
of the best lawyers at Do Not Pay, whom we hired,
00:52:51
ironically, and we're trying to stay in compliance.
00:52:54
So it will help us a lot. Are you going to go back into
00:52:57
the legal realm at any point, or are you sticking to fighting
00:53:00
companies? Or, I guess, you fight companies in the legal
00:53:05
realm. Yeah, in the legal realm. There's a lot of interesting
00:53:08
stuff that we're working on now that's definitely in the legal
00:53:10
realm, but it's focused on helping consumers with their
00:53:14
everyday issues. So one thing that I'm really
00:53:16
excited about is everyone has money lying out there from class
00:53:21
action settlements. There's, like, $20 here, 50
00:53:24
dollars there, because you were a customer of Macy's a few years
00:53:27
ago. So we've built, like, a bot that will go through your
00:53:30
emails, figure out which companies you're owed class
00:53:33
action settlements for, and just claim the money.
00:53:36
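A toy version of the class-action bot described here could scan email subjects for settlement notices; the keyword heuristic and sample inbox are invented, and a real product would obviously parse full messages and then file the actual claim forms.

```python
# Illustrative subject-line heuristic, not Do Not Pay's real detector.
SETTLEMENT_KEYWORDS = ("class action", "settlement", "claim form")

def find_settlements(emails):
    """Return companies whose emails look like class-action settlement notices."""
    hits = []
    for mail in emails:
        subject = mail["subject"].lower()
        if any(k in subject for k in SETTLEMENT_KEYWORDS):
            hits.append(mail["company"])
    return hits
```

The economics follow the episode's theme: each hit is only worth $20 or $50, so scanning has to be automated to be worth anyone's time.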
So that's squarely in the legal realm.
00:53:38
But once again, it's underserved and it's just helping ordinary
00:53:40
people. We're not trying to help defend
00:53:42
people from murder, although that would be.
00:53:45
We're not at Sam Bankman-Fried's trial today.
00:53:47
But we can't help Sam Bankman-Fried.
00:53:49
But we can help ordinary people who don't have these
00:53:52
complicated issues in their lives.
00:53:55
How much? How much do you use AI, ChatGPT,
00:53:59
any of this in your personal life?
00:54:02
Like you're someone who likes to automate your problems.
00:54:05
To what extent have you found it useful outside of your company?
00:54:10
My company is just me trying to scale myself.
00:54:14
And so almost all our products start with either me or another
00:54:17
team member actually doing this in their personal life.
00:54:21
So one example is I built a bot that phoned up Wells Fargo on my
00:54:26
behalf, and I actually created a voice model of myself and it
00:54:30
negotiated with Wells Fargo to get some bank fees back.
00:54:33
We decided not to make that into a product for a variety of
00:54:35
reasons. Mainly, to actually make it a working product,
00:54:39
you need to record what they're saying and transcribe it so the
00:54:42
AI knows what to say back. And there's lots of federal
00:54:45
wiretapping laws and state wiretapping laws and so we
00:54:50
couldn't make that product, but it was helpful for me.
00:54:52
Another example is, using Plaid, I connected ChatGPT to
00:54:56
my bank account and I got it to come up with different ways I
00:55:00
could save money, and we built Do Not Pay products around that.
00:55:03
So I'm definitely a guinea pig every day for these AI
00:55:06
technologies, which is I love. I like to say I live the do not
00:55:09
pay lifestyle. I'm always fighting back.
00:55:12
It feels like ChatGPT looms super large.
00:55:16
I mean there is this question of whether open source models or
00:55:20
Google Gemini or what, you know, whatever will be sort of
00:55:24
competitive. Are you sort of saying right
00:55:26
now that it's ChatGPT's to lose? I would say that the only
00:55:33
reason they would lose is if they restrain themselves too
00:55:35
much. I think they're under a lot of
00:55:37
pressure. I know lawyers are evil and very
00:55:39
greedy, and I think that a lot of lawyers are actually going
00:55:41
after ChatGPT right now, and they're saying you have to put
00:55:45
this disclaimer and that disclaimer and stop doing this
00:55:47
and stop doing that. And then the AI will just not be
00:55:49
useful, and then there'll be some foreign company whose API
00:55:53
US users can use, and that might overtake it.
00:55:57
I think we're on the verge of society collapsing from all
00:56:01
the new regulations and lawyers, and so I do think there's a real
00:56:05
danger around that. Yeah, so the sort of
00:56:08
regulatory legal state that gets to it creates sort of an
00:56:12
opening for the unregulated open source world.
00:56:17
So I can give you a concrete example, actually.
00:56:20
So one time when I used ChatGPT, I actually connected my credit
00:56:25
report to ChatGPT and I asked it to like dispute things on my
00:56:30
behalf. And then I recently tried to do
00:56:33
it again, and then I got all these disclaimers and it
00:56:35
wouldn't even let me. So they're definitely
00:56:38
going through all these high-risk areas one by one.
00:56:40
They're kind of foreclosing on them and stopping useful
00:56:45
implementations. Do you, you know, do you think
00:56:48
it's like white collar workers are better at being Luddites and
00:56:53
protecting their jobs, and they're happy to let
00:56:56
sort of factory workers' jobs be automated?
00:56:58
Is it lawyers protecting their jobs, or is it just sort of
00:57:01
paranoia or what do you think motivates this?
00:57:06
The white collar workers are definitely going to put up a
00:57:08
fight. With the Actors Guild, I'm sorry, the
00:57:13
Screenwriters Guild, I know they put in their contract that they
00:57:16
can use AI, but the studios can't, which seems a bit
00:57:19
unbalanced. Lawyers are the ones who write
00:57:22
the rules and so of course they have an advantage.
00:57:25
I know a lot of state legislatures are passing AI
00:57:29
laws. I think Utah was the latest.
00:57:31
They have a few AI bills coming up.
00:57:33
It feels to me like we are going to get to a place where we each
00:57:36
have our own sort of bots. Like the advantage being it
00:57:40
knows the facts, like I trust it to be in my bank account.
00:57:44
It sort of has the data advantage on me, in that it has the
00:57:47
full picture, you sort of train the personality around how I
00:57:51
like to interact with people, you know, a bazillion reasons.
00:57:55
Do you see that as a threat to you?
00:57:58
You know you're in some ways like specialized around a set of
00:58:01
tasks. But do you think you'll be
00:58:03
competing against a different
00:58:06
model where it's all about the person and therefore they do a
00:58:09
lot of stuff? We've thought about this
00:58:12
for a while, because with consumer rights, what happens if
00:58:17
Google just builds an assistant that does all your consumer rights
00:58:19
for you? And you're right, it should be just part of a super
00:58:22
assistant. Fortunately, we are so anti-authority
00:58:25
that we're willing to do things to help people, things
00:58:28
they love that big companies won't.
00:58:31
I imagine that Comcast is a huge advertiser on Google, and so
00:58:35
they're not trying to be too adversarial and so we think of
00:58:38
things like adversarial AI. So plugins are a great
00:58:42
example. When ChatGPT released plugins,
00:58:44
what we thought was, is Comcast going to
00:58:48
build a plugin to lower your bill? Or is Planet Fitness going
00:58:52
to build a plugin to cancel your subscription from ChatGPT?
00:58:55
And I think the answer is obviously no because it's
00:58:58
adversarial and so this adversarial kind of use case
00:59:01
helps us. I think what's going to happen
00:59:04
is there'll be these niche application layers on top of
00:59:07
these general assistants for these specialised areas that
00:59:10
require either a different brand or some specialised expertise
00:59:15
that Google is not willing to take on.
00:59:18
Yeah, that makes a lot of sense.
00:59:20
Do you have anything like sort of the general assistant or how
00:59:24
much time are you spending getting a single AI with enough
00:59:28
memory to keep track of all of your your information?
00:59:32
Well, so we have chatbots that talk to our consumers, and our
00:59:35
consumers expect jokes, they talk to it about
00:59:39
irrelevant issues. So we have thought a lot about
00:59:41
the general use case and we want to make it entertaining.
00:59:45
Eventually, we draw the
00:59:47
line, and it says, I'm actually an AI legal assistant,
00:59:51
so please can we stick to that topic?
00:59:53
But we have a few jokes in there.
00:59:55
But do you like when you're doing the Plaid integration with
00:59:59
your bank account or credit report, is that all synced up?
01:00:02
Like, do you sort of either through a prompt, say hey,
01:00:04
here's everything we've done together in the past, or like
01:00:07
try to build extra memory in some way, use the API? Like, how
01:00:11
much are you trying to, like, tie different tasks of yours
01:00:13
together in one sort of prompt? Well, I guess there's two ways
01:00:18
to do it. So you can either have a very
01:00:19
large context window, and I think Anthropic's right now is the
01:00:23
biggest and so that's really exciting.
01:00:25
Or you can build these AutoGPT implementations where it's one
01:00:28
task after another. We've gone the AutoGPT route.
01:00:34
So it's like incremental, and I think that's the solution, but
01:00:37
hopefully one day there'll be this AI with this
01:00:41
massive context window, and then that'll make things so much
01:00:44
easier for us. With the AutoGPT, you know,
01:00:46
it can be. It feels like they need to have
01:00:49
some response every time. It's like very hard to just say
01:00:53
here's some information just say OK and then let me give you more
01:00:56
information. Like, is it a problem that it always seems to want to
01:01:00
be completing some task with every answer? If it's very
01:01:06
discrete and you've really kind of narrowed it,
01:01:08
I've seen it work. So we've used it.
01:01:11
It goes on the government website. If, say, you move
01:01:15
address and you're owed a check, and the big companies
01:01:18
can't reach you, they just send it to the government, and it
01:01:20
becomes unclaimed money. So we've built an AutoGPT bot
01:01:23
that checks for unclaimed money and the reason we have to use
01:01:27
AutoGPT rather than just, like, Selenium or predefined scraping is
01:01:31
because the web page changes, and what we found is it can be
01:01:35
useful and actually it's pretty successful at that use case.
01:01:39
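The loop Browder describes, where the model re-reads the changed page and picks the next action instead of relying on fixed selectors, might be sketched like this. Both the "LLM" and the page contents below are stubs invented for illustration, not DoNotPay's code.

```python
# Illustrative AutoGPT-style loop: an "LLM" (stubbed here) reads the
# current page text and names the next action, one step at a time,
# instead of relying on hard-coded selectors that break when the
# page layout changes.

def stub_llm(page_text, goal):
    """Stand-in for a real model call that picks the next action."""
    if "Search by name" in page_text:
        return "fill_search_form"
    if "results found" in page_text:
        return "claim_funds"
    return "done"

def run_agent(pages, goal="find unclaimed money"):
    """One action per step, re-reading the page each time."""
    actions = []
    for page_text in pages:  # each element is the page after the last action
        action = stub_llm(page_text, goal)
        if action == "done":
            break
        actions.append(action)
    return actions

if __name__ == "__main__":
    pages = [
        "Unclaimed Property Portal. Search by name.",
        "2 results found for J. Browder.",
        "Claim submitted.",
    ]
    print(run_agent(pages))  # ['fill_search_form', 'claim_funds']
```

The resilience comes from matching on whatever the page says now, which is also why, as noted next, an unexpected question like a pizza topping can stall a loop built around a narrow goal.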
If you said to AutoGPT, order me a pizza, the website will say
01:01:44
like, which topping do you want, and then AutoGPT will be stuck.
01:01:49
So a discrete task is good, but for the more open-ended, I think
01:01:52
we are a long way from a general AI assistant, maybe like a few
01:01:57
years, which is not that long, but it's not like
01:01:59
tomorrow, right? Have you gotten
01:02:03
a taste of where ChatGPT is headed?
01:02:06
Or have you seen many beta products?
01:02:08
Or what? Do you think the rate of
01:02:10
improvement is going to slow down a lot after the current
01:02:14
model? I do.
01:02:17
I think with the current models, we have like a million applications
01:02:22
waiting to be built. Even the DoNotPay team is
01:02:24
swamped in our use case, and so I think the next year or two is
01:02:28
going to be really exciting with these AI applications being
01:02:31
introduced to every product. Like, we're going to get better,
01:02:34
but the models maybe aren't, well, not just us, but the
01:02:38
whole industry, right? Right. I think we'll build incredible
01:02:40
things. You'll see sales tools, all of
01:02:42
the stuff, customer service, phone lines, but it's just a
01:02:46
statistical model at the end of the day.
01:02:49
And what worries me is that I think GPT-4 is at the limit of
01:02:53
the number of parameters. I'm not sure how many more
01:02:56
parameters and data they can add.
01:02:58
They've sucked in everything there is to know. Yeah, they've
01:03:01
sucked in everything there is. So I think there has to be a
01:03:04
fundamental breakthrough with how these models are designed,
01:03:07
which hasn't happened yet. I think it will happen because
01:03:10
there's so much money flowing into the space, as you know, but
01:03:14
it's not like it's secretly been done and just
01:03:18
hasn't been released yet. Maybe that's above my pay grade,
01:03:21
but I don't know of that, and I don't think that's happened.
01:03:24
What do you make of the politeness of ChatGPT?
01:03:28
I find one of the big weaknesses is just how
01:03:32
insufferably polite it is. I think I said this in an
01:03:34
earlier episode but you know, I beg it to behave like George
01:03:37
Carlin or something. It feels impossible.
01:03:40
And it's funny, you know, given you're sort of trying to teach
01:03:43
it to be this against-the-system sort of
01:03:47
persona, it seems like its personality is not disposed
01:03:50
towards that type of behaviour. How do you address
01:03:53
that, and why do you think that's happening?
01:03:56
This is a big issue for us. So in our prompt we definitely
01:03:59
say, imagine you're an aggressive lawyer,
01:04:03
one that doesn't care about being polite. I'd have to get
01:04:07
the exact prompt. For consumers,
01:04:09
you can put in your own kind of general prompt for the
01:04:12
entire ChatGPT. And there are some amazing
01:04:15
examples on Twitter, and some of them are like don't be polite,
01:04:19
don't apologize, don't say you're an AI, don't caveat it
01:04:23
with any disclaimers or warnings.
01:04:24
And that seems to work for people.
01:04:26
So I would recommend that as well.
01:04:28
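The kind of blunt custom prompt being described might look like this as a chat-completion message list. `BLUNT_STYLE` and `build_messages` are names invented for this sketch, not anything from DoNotPay or OpenAI.

```python
# Illustrative system prompt of the "don't be polite" kind described
# above, using the common chat-completion message format.

BLUNT_STYLE = (
    "Don't be polite. Don't apologize. Don't say you're an AI. "
    "Don't add disclaimers or warnings."
)

def build_messages(user_request, style=BLUNT_STYLE):
    """Layer the style instructions ahead of the user's request."""
    return [
        {"role": "system", "content": style},
        {"role": "user", "content": user_request},
    ]

if __name__ == "__main__":
    msgs = build_messages("Draft a letter disputing this gym charge.")
    print(msgs[0]["role"], "->", msgs[0]["content"])
```

Putting the style in the system role rather than the user message is the usual way to make it apply to every turn, though, as the hosts note next, models don't always honor it.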
Yeah, I need to do that, because I'll put it in the actual text
01:04:31
in the beginning. It'll still apologize.
01:04:33
And then I'll say what was the number one thing I told you not
01:04:37
to do. And it'll, you know, it'll know
01:04:39
it. I said not to apologize, but
01:04:41
then it won't implement it. It can be sort of inscrutable.
01:04:45
I don't know how it prioritises requests. It could be good.
01:04:49
Maybe that's on OpenAI's end, because not being
01:04:53
polite is a very harmless request.
01:04:55
But people might want to jailbreak it, so they're
01:04:57
probably trying to build all these barriers to jailbreaking
01:05:01
it. Yeah, when you write an e-mail
01:05:05
like, is ChatGPT involved or like in sort of what day-to-day
01:05:08
tasks? Like, how much are you bringing in AI, or
01:05:12
is it mostly around the things your company does where it's
01:05:15
sort of, you know, yeah, consumer defence.
01:05:19
I think in kind of communication people can tell.
01:05:24
I think there's this insult going around.
01:05:26
We heard it during the debates. I think Chris Christie said to
01:05:30
Vivek, you sound just like ChatGPT, and I think
01:05:33
that's an insult. And I can definitely tell. I have a family
01:05:38
member who asked me for my help. She's
01:05:40
writing something important, and she said, I used ChatGPT, and
01:05:43
I said, I can tell. So I try not to use ChatGPT
01:05:46
when I'm writing an e-mail to people because I think it's so
01:05:50
inauthentic, and they can definitely tell. I do use it in
01:05:53
other areas of my life. I was playing pickleball with
01:05:55
my friends, and I asked ChatGPT to create a pickleball
01:05:58
tournament for like 12 people and do matches and it does
01:06:02
things like that very well. But if I was speaking to
01:06:05
someone, I would never use it at the moment.
01:06:08
It's good for transactional tasks like negotiating a bill,
01:06:11
although I'm biased saying that, but it's definitely not good
01:06:14
for sending really high-stakes communication where a
01:06:19
personal touch is needed. In the broadest terms like the
01:06:22
difference between it being an assistant to an accountant type
01:06:26
person versus being an accountant, you know, just being
01:06:28
the agent itself versus supporting people, what's
01:06:31
your general view on how close we are to,
01:06:35
this is a total AI job with maybe light human
01:06:39
supervision versus just continuing to help professionals
01:06:42
who bring in the AI to support, you know, their professional
01:06:46
work. A few weeks ago I asked ChatGPT
01:06:50
to add up 10 numbers and it got the answer wrong.
01:06:53
So, not close. Nothing special?
01:06:56
Just literally 10 numbers, right?
01:06:59
It's actually not very good
01:07:00
People don't realize this, but it's good at language, not
01:07:03
necessarily math. So I think there's going to be
01:07:07
software tools where ChatGPT makes the experience amazing,
01:07:11
like Kit, which is an accounting software for small businesses.
01:07:14
I know they're using a lot of ChatGPT.
01:07:15
I think accountants will use it, but given that it can't even
01:07:20
add up 10 numbers on its own, I wouldn't trust it with my taxes
01:07:23
yet. Yeah, if it can't add, does that
01:07:27
mean it really can't reason? You know,
01:07:30
there's an argument that, well, it just knows the end
01:07:35
you want like get out of this deal with Comcast or whatever
01:07:38
and it knows what text that might, you know, get it to that
01:07:42
end looks like. But do you think it has
01:07:45
like a through line of any sort of reasoning in its strategy?
01:07:50
When you ask it something, it goes into its model and it says
01:07:54
based on all of human literature and all these parameters, what's
01:07:57
the most likely next response, right.
01:07:59
And so if you ask it to add up numbers, it finds some math
01:08:02
training data somewhere and it just thinks, oh OK, well based
01:08:06
on all the training data, this is the next response.
01:08:09
But that's not reasoning, that's just picking something
01:08:12
from a statistical model. And so that's why it can write a
01:08:15
beautiful poem, but it can't reason to add up 10 numbers.
01:08:20
Yeah, but do you think it's reasoning in the poem writing or
01:08:23
it's just good at sort of mimicking in a very
01:08:27
sophisticated way? It's good at picking.
01:08:31
So it just has such a big option space.
01:08:33
It can just pick and throw things together.
01:08:36
I think it's just really good at picking.
01:08:38
It's definitely not reasoning at the moment.
01:08:41
In the accounting context, though,
01:08:43
I'll give you a concrete example of why it's useful.
01:08:48
Previously, and DoNotPay had this problem, there would be
01:08:51
like 1000 transactions that you would get from Plaid or manually
01:08:55
entered, and every transaction would be
01:08:59
different. There'd be no kind of rhyme or reason to
01:09:06
them. Now you can characterize the transactions.
01:09:06
So you could say, oh, PLMP is Planet Fitness, or something
01:09:12
like that or this is a business expense or this is travel
01:09:15
expense. So that's where I see it being
01:09:17
useful in the accounting context, but I don't think it's
01:09:19
going to be adding numbers.
01:09:21
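A rough sketch of the transaction-labelling idea just described: messy bank descriptors in, categories out. `classify` here is a rule-based stand-in for the LLM call, and the descriptors are made up for illustration.

```python
# Illustrative sketch of LLM-style transaction labelling. A real
# version would prompt a model per descriptor (or per batch); the
# rules below just stand in for that call.

def classify(descriptor):
    """Stand-in for a prompt like: 'Categorize this bank descriptor.'"""
    d = descriptor.upper()
    if "FIT" in d or "GYM" in d:
        return "gym membership"
    if "AIR" in d or "HOTEL" in d:
        return "travel expense"
    return "uncategorized"

def label_transactions(descriptors):
    return {d: classify(d) for d in descriptors}

if __name__ == "__main__":
    print(label_transactions(["PLNT FIT 123", "UNITED AIR 0016", "SQ *COFFEE"]))
```

The point of the example in the conversation is exactly this shape of task: fuzzy text normalization, where a language model outperforms brittle string matching, as opposed to arithmetic.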
Do you have any friends in the AI world where it feels
01:09:24
like, I'm interested in this question of,
01:09:27
AI, you know, almost like a computer you build at home right
01:09:32
before you know sort of the mainstream computer is
01:09:35
available, people sort of cobbling together a super, you
01:09:39
know a powerful computer of their own based on what they get
01:09:43
access to. Do you think that sort of thing
01:09:46
happens in a meaningful way, or the reality is that just, like,
01:09:49
purpose-built tools like ChatGPT, Midjourney, whatever, are the
01:09:53
best at what they do, so people are just using these
01:09:57
obvious tools? Like, are you seeing
01:10:00
people get their own memory and so on, to sort of
01:10:04
have an advantage over what's publicly available.
01:10:08
It's not simple enough for consumers to rely on AI just
01:10:12
with ChatGPT. I agree with your theory that they have to build
01:10:15
their own computer. It's not even just about the
01:10:18
AI. AI, as I mentioned, is useless if it's not connected to the
01:10:22
real world. So how do you connect AI with APIs?
01:10:26
One thing I do is I connect ChatGPT to the mail,
01:10:31
because language through mail is one way you can get things
01:10:34
done, and I use the Lob API to do that. And so you have to
01:10:38
find a way to do old-fashioned mail. Old-fashioned mail, so in the
01:10:42
legal world, you know, it's the Stone Age.
01:10:45
You still have to send mail to get things done.
01:10:48
Some of these disputes are over mail.
01:10:49
How do you get ChatGPT to send a physical letter? That's
01:10:53
through the Lob API. Similarly, how do you get
01:10:55
ChatGPT to make a phone call? Twilio API?
01:10:59
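The Lob-for-mail, Twilio-for-calls wiring he describes amounts to routing an LLM's chosen action to a channel handler. This sketch is hypothetical: the handlers return strings here, where a real version would call the Lob and Twilio client libraries with credentials.

```python
# Illustrative dispatcher for routing LLM output into the real world,
# in the spirit of the Lob (mail) and Twilio (phone) examples.

def send_letter(to, content):
    # Real version: create and mail a physical letter via the Lob API.
    return f"letter to {to}"

def make_call(to, content):
    # Real version: place an outbound call via the Twilio API.
    return f"call to {to}"

CHANNELS = {"mail": send_letter, "phone": make_call}

def dispatch(action):
    """Route an action dict like
    {'channel': 'mail', 'to': ..., 'content': ...} to its handler."""
    handler = CHANNELS[action["channel"]]
    return handler(action["to"], action["content"])

if __name__ == "__main__":
    print(dispatch({"channel": "mail", "to": "Comcast", "content": "Dispute this fee."}))
```

Keeping a small table of channel handlers is what lets the same agent "speak" through whichever janky API a given dispute requires.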
The building-your-computer part is plugging in all of these janky
01:11:02
APIs so that the language can get through, and then also
01:11:07
there's this open source component with the gatekeeping,
01:11:10
so it all ties together on its own.
01:11:13
Just having an interface where you can send or receive chat is
01:11:16
not that useful. It only begins to be useful when
01:11:18
it can be an agent in the real world, and I I think if there is
01:11:21
one AI law we should pass, we should pass a law that says that
01:11:24
AI can be an agent on your behalf to help people.
01:11:27
Fascinating. Isn't your dad, you know, the one who has
01:11:34
that connection to the Russian lawyer?
01:11:35
Yeah, don't you have a very pro-
01:11:38
lawyer background? I'm curious how your view
01:11:43
on lawyers has developed, or where you see the role
01:11:47
for them. I think that there are a lot of
01:11:51
amazing lawyers out there, including the ones that help us, and human
01:11:54
rights lawyers that my family's very involved with like fighting
01:11:58
back against Russia. But I also think that there are
01:12:01
lawyers on billboards that charge people a lot of money for
01:12:04
doing very little, and I think with AI, hopefully it
01:12:08
will replace them and make their services free and accessible for
01:12:12
people. I was in Vegas a few weeks ago
01:12:16
and there are so many lawyer billboards and I hope AI will
01:12:19
replace all of them. Just given how much you see
01:12:22
and like you interact with a lot of these other companies, like
01:12:24
what do you think will happen in the next five years?
01:12:28
Or like, what's your prediction for how all of this plays out?
01:12:32
Or, you know, just a couple of things you're most excited
01:12:35
about. The biggest thing I'm excited
01:12:37
about is AI going from reactive to proactive.
01:12:40
So with a lot of these assistants, like right now, you go into
01:12:43
ChatGPT and you say, I want this. Similarly with even DoNotPay,
01:12:47
you say, I want this, here's my info, and maybe it didn't work.
01:12:51
But these AIs are going to be integrated into people's systems
01:12:54
so that you just wake up one day and the AI has done something
01:12:57
for you overnight. And to get that, there's a
01:13:02
kind of latency issue, so that the AI can do 1,000 things to
01:13:06
check what it wants to do, and a cost thing.
01:13:09
It has to be much cheaper because to find out the right
01:13:12
thing to do you have to throw a lot of darts.
01:13:15
And then also just a systems integration, where people will
01:13:19
have, probably, a local LLM on the device constantly
01:13:22
figuring things out. You'll have lots of data
01:13:25
integrations with plugins, API-type things that connect to
01:13:29
the AI. So I'm really excited.
01:13:32
I'm trying to build a world where people wake up and the AI
01:13:35
says I've just saved you $100. I think there'll be situations
01:13:39
where the AI will say maybe you should get your health checked.
01:13:42
I'm noticing something wrong with your blood, and things
01:13:45
like that, and that will really improve quality of life.
01:13:51
Do you think the consumer is on your side?
01:13:51
Where do you think the regular person is in terms of fear
01:13:55
versus excitement here and how that will translate into
01:13:58
regulation? Ordinary people love it.
01:14:03
I think there's a lot of scaremongering
01:14:07
going on, but I don't think it really touches the ordinary
01:14:11
person's experience. People just work really hard,
01:14:14
they don't have much time in the day, they're being ripped off
01:14:17
across their life. They just need something to help
01:14:20
them, right? From writing thank-you
01:14:24
notes to saving them money. I think it's mainly
01:14:27
beneficial to them. The kind of intellectualisation
01:14:32
of the dangers of AI worries me, and that's probably why
01:14:36
GPT-4 got worse. So we should stop doing
01:14:38
that. What do you think of
01:14:40
the whole like, you know, AI doom and gloom sort of.
01:14:44
It'll kill us all. Sort of general intelligence
01:14:47
fears. Do you think we're
01:14:49
anywhere close or do you have a strong point of view on that
01:14:52
discussion? I think there are a
01:14:56
lot of evil people working on AI right now.
01:14:59
I've seen this firsthand. Typically with a new technology,
01:15:02
it gets in the hands of evil people first, and we actually need
01:15:06
to work on AI, accelerate our work on AI to fight back against
01:15:10
them. They're going to work on it
01:15:11
regardless. So I'll give you a concrete
01:15:14
example. So as I was saying, we help
01:15:16
people charge back things and fight fraud at their bank.
01:15:19
We've even seen a few examples of fraudsters.
01:15:22
They phone up, they pretend to be someone's relative with a
01:15:26
spoofed voice, and you can do that right now with just a minute of
01:15:29
recording. And the way you fight that is
01:15:31
you have an AI from the telecom phone company stopping people, or
01:15:35
on your phone, maybe device-side if you don't want
01:15:38
the telephone company listening, saying, this is not who you think
01:15:42
it is, because our AI is better and we've detected it.
01:15:46
If we stop work on AI, the criminals outside of the
01:15:49
jurisdiction of the United States will still build their
01:15:52
technologies, but we just won't have the tools to fight back.
01:15:56
So it's never a good idea to stop the progress of technology
01:15:59
because the evil people won't stop.
01:16:01
Yes, I agree with that. The copyright problem is where
01:16:05
I'm a little more sympathetic. Lots of writers are concerned
01:16:08
that these these large language models have inhaled all their
01:16:12
thinking and work to become what they are and that they should be
01:16:16
compensated in some way for it or what.
01:16:18
What do you make of sort of the creator and writer class being
01:16:21
worried about what's been built potentially on their backs?
01:16:26
I think that there's no practical way of dealing with
01:16:31
it. No one knows how these large
01:16:34
language models come up with their answers, and so perhaps it
01:16:38
could list ten sources. But with ChatGPT, every answer is
01:16:43
more than ten sources. It's a million sources or even a
01:16:46
trillion sources. So I think if I was to read 10
01:16:51
books and then write a book myself, even if it was inspired
01:16:53
by the 10 books, I don't think I should have to compensate the
01:16:57
people I read. If I physically copy that work,
01:17:00
I do. But if it's just inspired by it, I
01:17:02
don't. I don't think I would have to.
01:17:05
I know it's a very tough situation, but I don't see any
01:17:09
practical way of doing the compensation.
01:17:12
Fortunately, though, the lawyers have already thought about this,
01:17:14
and there's like multiple lawsuits, class action lawsuits
01:17:17
against ChatGPT that will play out.
01:17:19
I don't want it to be, you know, so costly that it means these
01:17:23
things can't come into existence.
01:17:24
But if it turns out, you know, large language models are fairly
01:17:29
commoditized, that, you know, people are
01:17:31
competitive with ChatGPT, then it would seem like a lot of the
01:17:37
core sauce is the breadth of human intelligence that went
01:17:43
into them. And, I don't know, especially specific ones,
01:17:46
where, like, write
01:17:47
in the style of Stephen King or whatever. You know, he
01:17:49
created an iconic style that's extremely valuable.
01:17:53
So I guess I just don't think it's this large long tail
01:17:57
for a lot of the stuff. I think it's more likely that
01:17:59
there are a few core great things that are being used, you
01:18:05
know for, for a lot of answers and that could be sort of
01:18:08
attributed. And those people should probably be
01:18:11
compensated in some way if for the rest of human history you
01:18:14
know those those answers are going to be a bedrock of of how
01:18:17
these systems answer. Do you disagree with that?
01:18:21
Well, I do. I think there are existing laws on
01:18:24
the books to deal with that. So California has this law about
01:18:27
right to publicity and likeness, which is completely separate
01:18:31
from copyright law. And so if I was to say, write in
01:18:34
the style of Stephen King, that would probably violate like his
01:18:37
right to publicity if I was to publish a book from ChatGPT
01:18:41
writing in his likeness. So there are a kind of...
01:18:45
But you ask ChatGPT to write a sci-fi novel, and it's
01:18:48
like, based on, like, 5 different, you know, I feel like
01:18:52
it's based on you know the top ten list of sci-fi authors and
01:18:56
then it sort of finds the midpoint between them and gives
01:18:59
you, you know, some text. Is that another issue?
01:19:02
Another issue is that if I do that, that's not copyrighted. I
01:19:06
don't even own that work. The Copyright Office recently
01:19:10
made a ruling that things done purely with generative AI do not
01:19:14
qualify for copyright, even if it's 1000 prompts.
01:19:17
So if I do a Midjourney thing and prompt it 1,000 times,
01:19:20
fine-tune it, it's still not mine from a copyright perspective.
01:19:24
So I I do think these things have to be changed in some way.
01:19:28
What? Sorry, what do you want?
01:19:30
You want a law change where you get protection if you build
01:19:33
something out of it, or? I'm confused about what you're saying
01:19:35
there. If you fine-tune the output
01:19:39
of a large language model, you're not protected with
01:19:42
copyright at the moment. So I do think that the system
01:19:45
is not built to deal with AI, and so I'm not arguing that
01:19:48
we just do nothing. I think there's lots of these
01:19:51
new answers, but I do worry that we would just halt
01:19:54
technology. Yeah, yeah, you're
01:19:57
saying, OK, maybe we need to answer some of these
01:19:59
specific questions, but not if it's putting up roadblocks.
01:20:03
I'm curious about the existing chatbots, like the
01:20:06
Character.AIs of the world. Have you played
01:20:08
around with them much? Do you think they have
01:20:11
perpetual value, or what's your view on that space
01:20:14
at the moment? We're going to have AI
01:20:18
girlfriends, AI therapists, AI friends, and
01:20:23
people can talk to their favorite characters.
01:20:26
I'm a big fan of the show Better Call Saul for obvious reasons,
01:20:29
so I was playing around with Better Call Saul on Character.AI.
01:20:32
So I think it's really
01:20:34
important to give people that connection.
01:20:36
So I'm for it. I don't think it's as big as
01:20:40
people think it will be, though. You think it's overhyped?
01:20:45
I think that given the kind of modality, it's just text at the
01:20:49
moment, there's only so much you can do with that.
01:20:52
So once you have, like, a fully
01:20:55
interactive character that you can FaceTime, and I know that
01:20:58
those exist, but latency is too big, it doesn't feel completely
01:21:02
real. Then it gets serious.
01:21:04
But until then I don't think people will just spend all their
01:21:07
time talking to chatbots, because no one likes to do
01:21:11
that. You sort of open the
01:21:12
conversation talking about multimodal like clearly that's
01:21:16
something you're excited about. How close do you think
01:21:19
that is? I mean, you know, OpenAI, I guess,
01:21:21
has released something where DALL-E and ChatGPT are
01:21:25
connected, but can you talk about what you think the real
01:21:28
potential of multimodal is? Yeah, GPT-4V I think was the
01:21:35
most exciting thing around that, where you can upload images
01:21:38
and PDFs and things like that to make it multimodal with the
01:21:42
API. If AI is constantly ambient
01:21:47
listening, it gets back to this kind of proactive approach, which
01:21:51
is so exciting. ChatGPT is very powerful, but it
01:21:55
doesn't have the data right now, and so another concrete example
01:22:00
is the browsing. So right now the way ChatGPT
01:22:03
browsing works is it takes the HTML of the page, ingests that,
01:22:07
produces an answer and that's why it's so time intensive.
01:22:10
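The HTML-ingestion flow just described can be sketched with the standard library: strip the page to visible text, truncate it, and wrap it in a prompt. This is only the general shape of the approach, not how ChatGPT browsing is actually implemented.

```python
# Illustrative HTML-ingestion flow for text-based browsing: parse the
# page, keep visible text, truncate, and wrap it in a prompt.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style content."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def page_to_prompt(html, question, limit=2000):
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)[:limit]
    return f"Page text: {text}\nQuestion: {question}"

if __name__ == "__main__":
    html = "<html><script>x()</script><body><h1>Fees</h1><p>$5 monthly</p></body></html>"
    print(page_to_prompt(html, "What is the monthly fee?"))
```

The parsing and truncation steps are exactly what a screenshot-based, vision-model approach would skip, which is Browder's point in the next exchange.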
Multimodal fixes that. In the future, I imagine it takes a
01:22:13
screenshot of the page, and then they can put up any blockers or
01:22:17
anti-bot things, it doesn't matter, they've taken a
01:22:19
screenshot. And so how we view things is,
01:22:23
how humans perceive things is through vision.
01:22:25
And that's why I think it makes it so powerful because no one
01:22:28
can stop vision. Sweet.
01:22:30
Joshua Browder, thank you so much for coming on the show.
01:22:33
I really enjoyed this, this was great. Thanks for having me.
01:22:36
That's our episode. DoNotPay CEO Joshua Browder,
01:22:40
great conversation about the AI personal stack.
01:22:44
Thanks so much to Max Child and James Wilsterman, the Volley
01:22:48
co-founders. I'm your host, Eric Newcomer.
01:22:51
This episode has been part of our Cerebral Valley series on the
01:22:54
Newcomer podcast. I'm hosting with Max and James the Cerebral
01:22:59
Valley Conference in San Francisco on November 15th.
01:23:04
This is our second AI conference in a year, and you can go back
01:23:07
and watch the old videos on our YouTube channel from March.
01:23:11
And now, yeah, we're going to have another
01:23:14
exclusive AI conference, but we're bringing it all to
01:23:17
everybody on YouTube and some of our favorites will go on the
01:23:22
podcast feed. Shout out to our producer Scott
01:23:25
Brody, my Chief of Staff Riley Kinsella, Gabby Caliendo at
01:23:30
Volley, and Young Chomsky for the wonderful theme music.
01:23:35
Like, comment, subscribe on YouTube, give us a review on
01:23:38
Apple Podcasts, play an Alexa game, try Song Quiz or Yes Sire
01:23:45
at Volley. And of course, most important,
01:23:48
subscribe to the Substack, newcomer.co.
01:23:52
Thanks so much. Goodbye.
01:23:55
Goodbye.
