We were delighted to kick off the 2nd Cerebral Valley AI Summit with Ali Ghodsi, CEO of Databricks, and Naveen Rao, co-founder of MosaicML.
Their encounter at our debut event in March led to Ghodsi buying Rao’s company, which had little revenue, for $1.3 billion. At our event on Nov. 15, the two discussed how the deal came together quickly after meeting at the conference dinner.
Ghodsi recounted how he started spending some time with Rao and thought, “these guys are pretty good,” and then by chance noticed an employee he respected poking around with MosaicML and offering a strong endorsement. Soon Ghodsi was on the phone with the head of his deals team, who told him “if you want to buy these guys you have to do it this weekend.” Rao said by that point “you kind of know he’s going to pop the question,” and once they worked out the money, the deal was done.
The two executives certainly seemed to be in harmony as they touted the potential benefits from their combination, which in simple terms will bring MosaicML’s expertise in building specialized generative AI models to Databricks’ corporate data platform products, essentially super-charging Databricks for the generative AI era.
They were eager to defend the idea of open-source foundation models that are specific to certain tasks, rejecting the notion that general-purpose models like GPT-4 will eventually swallow everything. (This conversation took place before OpenAI was thrown into chaos by its board of directors.)
Ghodsi called proposals to limit open-source models on the grounds that they'll be too easily exploited by bad actors a "horrible, horrendous" idea that would "put a stop to all innovation."
“It’s essential that we have an open-source ecosystem,” he said, noting that even now it’s unclear how a lot of AI models work, and open-source research will be critical to answering those questions.
Rao added that many of the people making predictions about how AI would develop are “full of s**t.” On the safety question, he noted that cost alone would stand in the way of any existential risks for a long time, and in the meantime the focus should be on real threats like disinformation and robot safety.
Give it a listen
Get full access to Newcomer at www.newcomer.co/subscribe
00:00:00
Hey, it's Eric Newcomer. Welcome to the Newcomer Podcast.
00:00:03
We've got a great episode for you this week.
00:00:05
Coming to you live from the Cerebral Valley AI Summit, I
00:00:09
spoke with Databricks CEO Ali Ghodsi and MosaicML founder Naveen
00:00:14
Rao. The duo met at the first
00:00:17
Cerebral Valley and eventually Ali acquired Naveen's company
00:00:21
for $1.3 billion. As we talk about on stage, this
00:00:24
conversation took place November 15th, but I think it's still
00:00:29
very relevant to you. So give it a listen before we
00:00:32
get to that conversation. A word from our sponsors, Oracle
00:00:36
and NVIDIA. Thousands of enterprises around
00:00:39
the world rely on Oracle Cloud Infrastructure, OCI, to power
00:00:43
applications that drive their businesses.
00:00:45
OCI customers include leaders across industries such as
00:00:48
healthcare, scientific research, financial services,
00:00:51
telecommunications and more. OCI also works with NVIDIA to
00:00:56
provide an AI training as a service platform for customers
00:00:59
to train complex AI models. Talk with Oracle about
00:01:02
accelerating your GPU workloads at the link in the description.
00:01:06
And now here's my conversation with Databricks CEO Ali Ghodsi
00:01:11
and MosaicML founder Naveen Rao. Welcome.
00:01:14
Thank you so much. You guys have been such big
00:01:16
supporters of the conference. Thanks for coming back and
00:01:18
getting on stage this time. Tell the story, to start off
00:01:21
with, just of the acquisition. You know, I know we've bragged
00:01:25
many times about how you met at Cerebral Valley, but how does
00:01:28
that turn into an actual merger or acquisition?
00:01:32
We met during the Cerebral Valley event.
00:01:34
But then you had this thing in the evening.
00:01:36
What was that? There was like a speaker
00:01:38
dinner. Yeah, you were there.
00:01:39
Speaker dinner and yeah, it was like 8-9 PM and that's when we
00:01:42
started like hanging out and talking about, hey, how's it
00:01:44
going? What are you doing?
00:01:45
You were telling me war stories, but I don't think I'm allowed
00:01:47
to, yeah. Please don't repeat those.
00:01:49
Keep them between us. And yeah, so I started talking to
00:01:51
Naveen about what he's doing. And I was like, oh, he seems
00:01:53
pretty legit, like, you know, and the business is pretty good
00:01:55
and, you know, so I started looking into more and more
00:01:58
MosaicML and what they're doing and, you know, everything
00:02:02
looked, you know, super awesome. And I remember, like, God,
00:02:04
these guys are pretty good. And then I walk out of my office
00:02:07
in SF and I see one of our key employees, Shangri.
00:02:11
He's sitting there on the laptop and I look at, I look over his
00:02:12
shoulder, see what he's doing. Hopefully he's working and you
00:02:15
know, and he's on the MosaicML website and I'm like, hey, what
00:02:18
are you looking at? And he's like, hey, these guys
00:02:20
just released, like, they just released
00:02:22
model serving on, you know, GPUs.
00:02:24
And I was like, wow, what? Is it any good?
00:02:27
He's like, yeah, this is very competitive pricing.
00:02:28
This will be difficult. Like this is pretty good.
00:02:30
And I was like, huh. So, like, all the stars are aligning.
00:02:33
And that day I called the guy who runs all of like,
00:02:36
acquisitions and so on for Databricks.
00:02:38
And I said hey, like this company I just saw yesterday.
00:02:40
And they seem pretty good. And he's like, Naveen at Mosaic.
00:02:43
I was like, yeah, yeah, yeah. He's like if you want that
00:02:46
company, you have to call him this weekend and buy it this
00:02:49
weekend. Yeah.
00:02:50
When you get that call, like, what's your thought
00:02:52
process? You built this startup for the long term, like,
00:02:56
yeah, what were you thinking when you got the call?
00:02:58
Well, you know, to use the dating analogy, right?
00:03:01
It's like you kind of know he's going to pop the question, you
00:03:03
know? How do I know it?
00:03:06
The smirk on his face as he called me.
00:03:07
You could see him. He was like, sitting there.
00:03:09
Like he knows. And this had happened to you
00:03:10
before, with your last company. Yeah.
00:03:12
And it always happens faster than people think.
00:03:14
It's like one of these things, like once everything aligns,
00:03:16
it's like, go. You made the decision, go, right?
00:03:19
So I kind of knew what I was going to think through a little
00:03:22
bit, but it wasn't a "no, Ali, I don't want to do this."
00:03:25
It was like, no, no, no, this actually makes a lot of sense.
00:03:27
Let's talk about how that could work.
00:03:29
And basically, I think in the first conversation, I even said,
00:03:31
like, I think it's going to come down to economics.
00:03:33
Can we make this work? I'll quote you exactly.
00:03:35
You said this makes sense. How much of the upside am I
00:03:37
getting? That was the exact
00:03:39
quote. It was like, it depends on how much of the
00:03:42
upside I'm getting. Yeah.
00:03:44
Well, you know, you got to think about these things a little bit.
00:03:46
I have investors to keep happy, right.
00:03:48
And then you know you have some news today.
00:03:50
I mean, it sort of lays the groundwork for the integration.
00:03:53
Can you talk a little bit about how AI actually fits into
00:03:57
Databricks' business? So, we basically have to,
00:03:59
now having spent six months together, figure out what is the
00:04:02
strategy of the company, how do we combine the two companies?
00:04:04
OK, so they're awesome at generative AI.
00:04:07
We have a data platform. How do the two go
00:04:09
together? And it turns out that basically there's something you
00:04:12
can do that's pretty unique and I think this is what's gonna
00:04:14
happen with all these data platforms.
00:04:16
With data platforms, I mean Databricks, Snowflake, BigQuery, you
00:04:18
name it, all these. In the future, I think what will
00:04:21
happen is that you will basically leverage something
00:04:23
like what they had, which we call data intelligence, where you can
00:04:26
generate these generative AI models for each of your
00:04:30
customers. And they really understand
00:04:32
deeply the semantics of the data of the enterprise or
00:04:36
organization. So for each customer you have,
00:04:38
the models that you create understand exactly the jargon,
00:04:43
the priorities, everything. So what that enables is that
00:04:47
basically today in a large organization that uses a data
00:04:50
platform, you need to have people who know how to code the
00:04:52
right Python or SQL. But with this you can basically
00:04:56
enable anyone who can speak English or any natural language
00:04:59
to ask questions and you can get them the answers.
00:05:01
So I think it changes everything.
00:05:02
We call it Data Intelligence Platform.
00:05:04
So you can, in plain language, query Databricks against your
00:05:08
data, right? Is that available now or is that
00:05:11
it? It is, and I mean it's being
00:05:12
improved continuously. So it's early days still, but I
00:05:15
think the concept of making data interactive is really what
00:05:19
brought this together and made it happen.
00:05:21
Because it's a very natural thing.
00:05:23
Like you want to customize, personalize that interaction.
00:05:26
And you want to make it something where you can start
00:05:29
driving business value across the company.
00:05:31
Which means you can't just have people who know how to write SQL
00:05:33
queries. And do you see actually
00:05:35
providing foundation models as part of your business?
00:05:39
Absolutely, yeah. I mean, foundation models I look
00:05:41
at as a starting point for customization.
00:05:43
I mean, sometimes it warrants building a whole new model from
00:05:46
scratch depending on what the types of data are.
00:05:48
Like if it's very particular kinds of jargon
00:05:51
or very particular kinds of knowledge that have to be
00:05:52
embedded.
00:05:53
Which, by the way, that's your sport.
00:05:55
That's what you guys did for a living at MosaicML, building large
00:05:58
language models from scratch. That's right.
00:06:00
On custom data that enterprises had, right?
00:06:03
Exactly, yeah. And you know, we want to make it
00:06:04
easy. So where we see patterns, where
00:06:06
different companies use things, we're going to build great
00:06:09
starting points for that. There will always be cases where
00:06:11
people need to do a higher level of customization.
00:06:14
Are you trying to commoditize foundation models?
00:06:16
Like how much do you think foundation models will be
00:06:19
something where there's a lot of value to unlock or it's just
00:06:22
sort of a starting point for for other businesses?
00:06:24
It's a little bit of both. I mean it depends on what your
00:06:26
use case is, right. I mean if you go and invest lots
00:06:29
of money like OpenAI or these other companies, they're
00:06:32
going after like a very different kind of market than we
00:06:34
are. I think enterprises need a lot
00:06:36
of customization. It's very different than
00:06:37
consumer and I think the cost to do so actually is
00:06:42
commensurately high. Maybe the volume of inferences
00:06:44
is also commensurately higher, but.
00:06:47
It's a starting point, but it's ephemeral.
00:06:49
Like, they have a time to live: six months, eight months,
00:06:52
maybe a year. Your model is irrelevant because
00:06:54
there's going to be better architecture, going to be better
00:06:56
data, better data cleaning techniques, this kind of a
00:06:59
thing. Who do you think are the top one or two right
00:07:00
now in quality? I mean, GPT-4 is still the king,
00:07:04
right? I mean, I don't think anyone's
00:07:06
beat it yet. But the thing is, in the
00:07:07
enterprise, I mean, this is what he's saying.
00:07:09
You know, in the companies where you work, there are three-letter
00:07:12
acronyms that no one understands what they stand for, right?
00:07:15
If you ask ChatGPT, it won't know what that is.
00:07:16
You have specific data that's super confidential in your
00:07:19
organization. You know ChatGPT doesn't know
00:07:21
what that is. And by the way, when you ask
00:07:23
questions from that AI, you want it to really be accurate.
00:07:26
There's going to be a need for organizations to custom train at
00:07:28
least or fine tune or modify. Is that RAG?
00:07:31
I mean I feel like everybody in the valley is talking about
00:07:33
retrieval-augmented generation. Is that, you think, the key, or do you
00:07:36
think it is fine tuning and having a sophisticated?
00:07:40
It's a part of it. I mean all of these are ways to
00:07:42
customize the outcome, right? Sometimes RAG makes a lot of
00:07:45
sense when you have permissions that change on particular data,
00:07:48
when you have data that's updated continuously, that's a
00:07:50
great use case for RAG. We look at this on kind of a
00:07:52
spectrum of techniques. It's not like there's one thing
00:07:55
that's going to, you know kill everything else.
00:07:57
They all have different power and
00:08:00
capabilities. And keep in mind for RAG, you
00:08:02
also have a large language model and you're combining it with a
00:08:04
vector search database. So it turns out, if you
00:08:06
actually custom train the large language
00:08:09
model to be really good at RAG, you actually get even
00:08:12
better results. So we're doing that as well.
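To make the pattern they're describing concrete, here's a minimal sketch of a RAG pipeline: embed the question, retrieve matching documents from a vector index, and hand them to a language model as grounding context. The embed, vector_index, and llm helpers are hypothetical stand-ins for whatever components you actually use, not any specific Databricks or MosaicML API.

```python
# Minimal RAG sketch. All three helpers (embed, vector_index, llm) are
# hypothetical placeholders for your embedding model, vector search
# database, and language model.

def answer_with_rag(question: str, embed, vector_index, llm, top_k: int = 4) -> str:
    # 1. Retrieve: embed the question and pull the most relevant documents.
    query_vector = embed(question)
    documents = vector_index.search(query_vector, k=top_k)

    # 2. Augment: stuff the retrieved text into the prompt. This is how
    #    continuously updated or permission-filtered data reaches the model.
    context = "\n\n".join(doc.text for doc in documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. Generate: a model custom-trained to follow this retrieval format,
    #    as Rao suggests, tends to ground its answers better than a generic one.
    return llm(prompt)
```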
00:08:15
The accuracy issue, right, you sort of signaled that this is
00:08:18
sort of just getting started. So you know some people are very
00:08:21
reluctant to roll something out without sort of certainty.
00:08:24
I mean you have like medical organizations in some cases
00:08:27
using it. How do you think about how
00:08:29
accurate the queries are, even against customer data?
00:08:32
And how did you think about whether to wait for perfect
00:08:35
accuracy versus let people sort of try it and experiment?
00:08:39
Well, I think this is actually one of those clear bright line
00:08:42
differences between enterprise use cases and consumer, right.
00:08:44
If you're doing something where accuracy doesn't matter or you
00:08:46
just you know it's a writing assistant or something like that
00:08:48
you can get away with a lot more stuff, whereas an
00:08:51
enterprise we can't. But at the same time we need to
00:08:54
get people used to the flow and thinking about it like this.
00:08:57
I think having, you know, sort of techniques where you
00:09:00
can have some suggestions from the AI, but alongside what
00:09:03
humans do for now. That's probably a good paradigm.
00:09:06
As we get more and more accurate, you learn to trust the
00:09:08
systems and then that can be something that can start taking
00:09:10
over. But right now it's not really.
00:09:12
I don't think you should turn loose one of these systems on
00:09:15
mission critical outputs, you know.
00:09:18
And so in the Data Intelligence Platform, what we've
00:09:20
done is that yes, you can ask in English,
00:09:24
and it, like, gives you the
00:09:26
answer. But there is a box you can click
00:09:27
on and it gives you the query that it actually ran.
00:09:29
So you can have someone go under the hood: if you
00:09:32
want to be dead sure that this is correct,
00:09:35
that person can audit and look at it.
00:09:37
OK, let me look at the query under the hood in SQL. Yeah,
00:09:40
this looks good. We can put it in the board deck.
00:09:42
You know, this is our financial prediction for next year and
00:09:44
we're not going to get fired.
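As a rough illustration of the audit flow Ghodsi describes (a sketch under assumptions, not the actual Databricks implementation): the key design choice is returning the generated SQL alongside the natural-language answer, so a human can inspect the query before trusting the result. The text_to_sql, run_query, and summarize helpers here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AuditableAnswer:
    answer: str  # plain-language result shown to the user
    sql: str     # the generated query, exposed so a human can audit it

def ask(question: str, text_to_sql, run_query, summarize) -> AuditableAnswer:
    # Translate the English question into SQL (hypothetical LLM-backed helper).
    sql = text_to_sql(question)
    # Run the generated query against the data platform.
    rows = run_query(sql)
    # Summarize the rows in plain language, but keep the SQL attached:
    # this is the "box you can click on" to check the query under the hood.
    return AuditableAnswer(answer=summarize(question, rows), sql=sql)
```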
00:09:46
At the first Cerebral Valley, it was like OpenAI versus the open source world.
00:09:49
Or it's like can we sort of cobble together enough open
00:09:52
source projects to sort of fight against OpenAI?
00:09:55
On the one hand, you know, I think with Facebook and Llama,
00:09:58
we've seen sort of strong offerings.
00:10:01
On the other hand, GPT-4 still seems sort of invincible in
00:10:05
terms of like being the smartest AI in the room.
00:10:08
I'm curious, are you guys still all in on open source?
00:10:12
I know it's key to your identity.
00:10:13
You released Dolly. Like, what's the state of contributing
00:10:17
to open source and how it fits into the strategy of Databricks.
00:10:20
100%. I mean, you know, we think it's super important that
00:10:24
researchers around the world do open research.
00:10:27
And we have these open models that we can understand because
00:10:29
we don't really understand these things.
00:10:31
I mean, we understand how we built them, but we don't
00:10:32
understand why they exactly work, right?
00:10:34
It's kind of, a little bit like the... Isn't that terrifying
00:10:36
to you as a CEO?
00:10:38
It is terrifying, yeah. So, but how do we
00:10:40
understand it then? Is it to have two companies that
00:10:42
have two secret models that they don't want to share anything
00:10:44
about? Or do we want the researchers,
00:10:47
all the sort of labs around the world, to spend time
00:10:50
trying to understand what's going on and make progress
00:10:52
towards understanding how these things work and how we can
00:10:54
control them and how we can align them and all those kind of
00:10:56
things? So we think open source is
00:10:58
essential. And actually unfortunately I
00:11:00
would say, I mean I don't know if you agree or not, but it
00:11:03
seems there are talks now, you know, in certain
00:11:06
circles that maybe we should ban open source large language
00:11:09
models altogether. There's discussions in many
00:11:11
countries where this is kind of coming up, which I think is
00:11:15
horrible, it's horrendous.
00:11:16
It's going to kind of put a stop to all innovation and
00:11:20
it'll just kill off the whole ecosystem and it won't help us
00:11:23
understand what these models do. So I think it's absolutely
00:11:27
essential that we have an open ecosystem that continues to
00:11:29
thrive. I mean, it's ironic because by
00:11:31
closing off models, you're actually going to ensure the
00:11:35
thing that you were trying to prevent, right?
00:11:37
Because I think we all believe in the AI world that like, these
00:11:39
models are weird. They can do potentially
00:11:42
very damaging things. I don't know how that's going to
00:11:44
manifest. I think the people who profess
00:11:45
to know are kind of full of shit, to be honest with you.
00:11:49
Because we really don't know. And that's OK.
00:11:50
But the the way we're going to figure this out is through many
00:11:53
minds, many people, researchers in academia and different
00:11:56
companies, building solutions, putting those into the world,
00:11:59
seeing how they work, and then figuring out how to make them
00:12:01
better. We have to increase access, not
00:12:04
limit it. Keep in mind also, all the big
00:12:06
innovations that we're leveraging today that made these
00:12:08
possible were done in open research.
00:12:10
It was before the, you know, shutdown of all of this stuff.
00:12:12
It was pre-2020, releasing the transformer paper, right?
00:12:15
Yeah, public, public research. Just to follow up on one thing
00:12:20
you said, do you think a country will ban open source?
00:12:22
Like do you see that in the cards where some country is
00:12:25
actually going to move and do that?
00:12:27
I don't know. I hope not.
00:12:28
There are serious talks. If I had to say, from the
00:12:30
information I have behind the scenes, it seems in some
00:12:34
countries the camp that's winning is the anti-open-source
00:12:37
camp right now, because you know you have the biggest companies
00:12:40
saying, hey, this is super
00:12:43
I'm creating it and what I'm creating is super dangerous.
00:12:45
So please regulate me. And then, you know, the
00:12:47
regulators are like, OK, they're telling me to regulate them.
00:12:49
And, you know, the media is writing about how, you know,
00:12:52
dangerous this could be. So they're freaking out as well.
00:12:55
The public's freaking out. So, you know, it's kind of
00:12:57
pointing in that direction. So right now, I actually think
00:13:00
that it's going in the wrong way.
00:13:01
I hope we can stop it. Because just trusting two
00:13:04
companies to figure this out or four companies to figure this
00:13:07
out, I don't think it's the right way.
00:13:08
Yeah, you guys want to be team open source, you're a big,
00:13:11
sort of respected company. How did that fit
00:13:14
into your thinking on the Biden executive order?
00:13:17
Did you guys stake out a position?
00:13:19
What is the position? Did you think that was sort of a
00:13:22
good middle ground, or how do you think about the executive
00:13:24
order? This is just the first sort of step, right?
00:13:27
In the writing, the executive order
00:13:28
has some limits and so on. It didn't actually weigh in on
00:13:32
whether open source is going to be allowed or not.
00:13:33
It kind of mentions it, making weights freely available.
00:13:36
So we hope that if this becomes law, the continuation of this
00:13:42
absolutely is not going to sort of ban or put a stop to open
00:13:45
source because that's going to be essential to figure it out.
00:13:47
And also by the way, there's worries about what about other
00:13:50
countries, they could just pick up this open source stuff.
00:13:52
It'd be like basically giving away our IP to other countries.
00:13:55
But what would you rather have: that all countries in the world
00:13:58
are leveraging your technology stack or that they're building
00:14:01
their own proprietary thing that they
00:14:03
have, right? So we hope that
00:14:06
it's going to continue to support open source, open
00:14:08
research, but we don't really know, yeah.
00:14:11
And I think for the executive order, there were some good
00:14:13
things. I think NIST is a good natural
00:14:15
home for some of this stuff, which I thought was a good
00:14:18
thing. Focus on the organization that's
00:14:19
going to oversee it. That's
00:14:20
right, under the Commerce Department, and I think that
00:14:24
made a lot of sense. I think focus on transparency,
00:14:26
maybe some data tools or lineage kind of stuff.
00:14:29
There are opportunities here in terms of market.
00:14:31
Which is a good thing. The places where I think it
00:14:34
might be kind of, not dangerous, but like not super relevant, is
00:14:38
when you start putting compute limits in because these things
00:14:40
change constantly. Something that was really big a
00:14:42
year ago is really not that big anymore.
00:14:45
So I think it was 1e26 FLOPs. I mean, OK, what precision, what,
00:14:48
you know, there are so many different ways that
00:14:50
can be interpreted. So I don't think it's a great
00:14:52
idea to start dictating these kinds of limits, because we just...
00:14:55
So are you, do you oppose it on that? Well, I'm not.
00:14:58
I'm opposed to that part of it. I don't think it dictated in a
00:15:00
hard way. It was kind of soft.
00:15:02
I guess for me it's still an open question.
00:15:04
How quickly can those limits be changed?
00:15:06
Are they going to have to go through a whole government
00:15:08
committee or something like that and take a year to change or is
00:15:10
it something that's that can be highly variable?
00:15:13
And then the limits are pretty high right now for this year or
00:15:15
next year. But yeah, in five years you look
00:15:16
back and it's, you know, it's silly, probably.
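For a sense of scale on that threshold, here's a back-of-the-envelope calculation. The per-GPU throughput and cluster size below are assumed order-of-magnitude figures chosen for round numbers, not official specs, which is exactly Rao's point about precision and interpretation.

```python
# Rough arithmetic on the executive order's 1e26-operation reporting threshold.
# GPU_FLOPS and CLUSTER_GPUS are assumptions, not official figures.

THRESHOLD_OPS = 1e26     # total training operations that trigger reporting
GPU_FLOPS = 1e15         # assumed sustained ops/sec for one modern GPU
CLUSTER_GPUS = 10_000    # assumed cluster size
SECONDS_PER_DAY = 86_400

gpu_seconds = THRESHOLD_OPS / GPU_FLOPS               # 1e11 GPU-seconds
days = gpu_seconds / CLUSTER_GPUS / SECONDS_PER_DAY   # ~116 days

print(f"~{days:.0f} days on a {CLUSTER_GPUS:,}-GPU cluster")

# Halving numeric precision (say FP8 vs. FP16) roughly doubles effective
# throughput, so the same threshold can mean very different wall-clock
# costs -- hence the question "what precision?" when interpreting the limit.
```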
00:15:19
Are you worried about existential risk or is that just
00:15:22
a way to hype up the industry? I mean, Dario at Anthropic gave
00:15:24
an interview where he said he thought there was like 20%
00:15:27
chance something like really bad happens and he runs, you know,
00:15:30
one of the top foundation model companies.
00:15:32
Do you feel that way or how much risk do you see?
00:15:35
This is what I'm saying. It's like, hey, I'm going to
00:15:37
build a huge model. And by the way, it's going to
00:15:39
take over the world, you know, regulate me.
00:15:41
What do you think of that? And I
00:15:43
think that 20% number is... But we're not exactly on the same
00:15:46
page, I think, with existential risk.
00:15:48
I love disagreement. Let's go.
00:15:49
That's OK. Are you with your boss right
00:15:50
now? I think you're, I mean, I don't
00:15:53
want to put words in your mouth. You're more in the camp of like,
00:15:55
you know it's way less. And I'm a little bit like, you
00:15:58
know, maybe it's a little bit more than, you know.
00:15:59
But we're both actually on the side of... Let me kind of
00:16:03
like argue sort of his side. I do think that we're protected
00:16:06
right now because these things can't reproduce themselves.
00:16:09
We're simply protected by the following thing.
00:16:11
It costs a lot of money and it takes a lot of time to train GPT
00:16:16
4. OK.
00:16:17
And the scaling laws say if you want an even better model,
00:16:20
you better spend even more money and even more time, right?
00:16:24
Just to get more intelligence. OK, so therefore I'm protected.
00:16:28
These things are not going to reproduce themselves.
00:16:30
Now if it was the case that you could, for one cent... It costs too
00:16:34
much. But if it costs one cent, and if
00:16:37
it took one second to produce GPT-4, you can see, then you
00:16:40
would start making a for loop, run a genetic algorithm,
00:16:44
improve itself, and you could see how you could sort of get to
00:16:46
this autonomous reproduction, but without reproduction.
00:16:49
You know, I think we're safe, actually.
00:16:51
We should still do research and understand this stuff.
00:16:53
But I don't think the existential risk is like immediate or that
00:16:55
we should stop all research or stop all activity that's going
00:17:00
on. It's sort of exaggerated
00:17:01
and I think that 20% number is sort of just completely random.
00:17:04
I don't know what you think. Yeah, I think that 20% is
00:17:06
way, way overcalling it. What we all kind of agree on is
00:17:08
that there is some eventuality where AIs will become as
00:17:12
intelligent, if not more intelligent than humans.
00:17:13
I don't think anyone will really
00:17:14
argue with that. It's more the time scale.
00:17:16
And I look at it like, OK, hang on, If you start putting these
00:17:19
restrictions on now, you're actually going to make it so
00:17:21
that fewer people are working on this problem.
00:17:23
That's a bad thing. That's what I'm worried about.
00:17:25
I think at some point that will be relevant, especially when it
00:17:28
becomes one cent to do a GPT-4 style model.
00:17:31
Also, a lot of the time, the things I've seen have been
00:17:34
rhetoric that's saying like, oh, I can see a model escaping and
00:17:36
this and that. Really, you can look at that in
00:17:39
terms of what is true today. Computer viruses are self-
00:17:42
replicating. They can actually escape.
00:17:44
They can do all of these things. Is this really a new problem, or are
00:17:47
you just slapping AI on it and making people scared about it?
00:17:49
That's a problem I have. So let's look at the
00:17:52
things that are real threats that we have today, and I
00:17:55
think disinformation might be one of them.
00:17:56
Let's focus on those, right. Let's focus on robotic safety.
00:17:59
Like, I mean there was, I think, a factory worker that was
00:18:02
killed by an automated robot just two weeks ago.
00:18:06
So these are real threats in the short term. What's going
00:18:09
to happen in 20 years? I don't know.
00:18:11
Let's... Or the use cases, you
00:18:12
know, for putting these AIs into weaponry.
00:18:15
Absolutely. Maybe we should look at that.
00:18:16
Maybe we should regulate that, right?
00:18:18
Maybe we shouldn't go crazy with that.
00:18:19
All right, I want to try and thread a sort of nuanced
00:18:22
question. I mean, on the one hand, being
00:18:25
more of an AI company seems great for the Databricks story
00:18:28
on a march towards going public someday.
00:18:30
I feel like there's a huge appetite for like an AI company.
00:18:34
On the other hand, it's really expensive to run.
00:18:37
Like how did you think about the trade off in terms of actually
00:18:40
deploying stuff when it's very costly?
00:18:42
And how did that fit into your calculus of maybe moving towards
00:18:45
an IPO? And can you give us any sort of
00:18:47
update on where that stands? Yeah, the IPO plans got smashed
00:18:51
with the acquisition. He destroyed the...
00:18:56
He destroyed, he's destroying the P&L
00:18:58
every day. Did it delay, did it delay when you may go
00:19:00
public? No, it doesn't actually. We
00:19:02
run, we're very careful about this stuff.
00:19:03
You know it's all part of the plans and you know the way I
00:19:06
would think about it is we just have a much bigger budget to
00:19:08
absorb these things. So you know it's different for a
00:19:11
startup with 5-10 people that doesn't have any revenue yet to
00:19:14
spend $100 million on GPUs. You know, our budget, annual
00:19:17
budget with or without these things was in the billions
00:19:20
anyway. So you can absorb these things.
00:19:22
Plus, these guys did a really good job
00:19:25
of, I mean, their revenue was growing really fast.
00:19:26
So they're also selling the GPUs.
00:19:28
So we're cooking. Does Databricks go public next year?
00:19:32
That's a great question. We are watching the public
00:19:34
markets. We looked at the recent IPOs
00:19:37
that happened. Actually they haven't been smash
00:19:40
hits. It's kind of wobbly.
00:19:42
So you know when the time is right and the markets are open,
00:19:44
we will go. It's not something that we
00:19:46
obsess over right now to be honest.
00:19:47
We're just, you know, there's so much demand for AI.
00:19:49
We just want to satisfy that and continue innovating.
00:19:52
Do you have a Microsoft partnership right now?
00:19:53
We do. And what's the relationship?
00:19:55
That's great. Thanks for asking.
00:19:57
All right. You know, I'm a
00:19:59
diligent reporter at the end of the day.
00:20:01
OK, last one, just real quick, because we're out of time.
00:20:04
But give us a piece of advice for people here who want to make
00:20:07
like a deal like you guys did. What is?
00:20:09
What is the advice you give in terms of building the
00:20:11
relationship or doing something like that?
00:20:13
Come to this event and go to the party afterwards. Come
00:20:15
to the party and sell your
00:20:17
company for a billion dollars, right?
00:20:18
That's what happens. And answer the phone calls on
00:20:20
the weekends. You know, I think these
00:20:22
events, it's serendipitous, right?
00:20:25
There's no good way to really architect this stuff to happen
00:20:28
but I think relationships matter.
00:20:29
I think face time, getting to meet people and know them,
00:20:33
actually really matters and that's how we build trust.
00:20:35
I mean, honestly the reason this happened is that when I
00:20:38
interacted with Ali and the other co-founders of
00:20:40
Databricks, and they interacted with us, I was like, we're not
00:20:42
going to agree on everything and that's OK, but we're going to
00:20:45
figure the shit out. Whatever it is, we're going to...
00:20:47
I'm confident we can figure it out.
00:20:49
That's what matters. Can I give some insight?
00:20:51
Scoop. I think, like, build an awesome company, right?
00:20:53
You know, when we looked at Mosaic, we were like, wow, they
00:20:55
get really good people. You know, they're sort of, we're
00:20:58
the same DNA, so that was important.
00:21:00
So hire phenomenal people. That matters.
00:21:02
Second, they had an awesome business, you know they were
00:21:04
growing like crazy. You were at like $1 million ARR in
00:21:06
January and there was $20 million ARR by the time of the acquisition,
00:21:09
May or June. We were talking in the April, May
00:21:12
time frame. So you know the business was
00:21:13
growing. So you had actually a good
00:21:14
business model foundation to it as well.
00:21:17
So I think you know, have a good business model and have great
00:21:19
people, then you know, people will be very, very interested.
00:21:23
Ali, Naveen, thank you so much for all your support and coming
00:21:25
here on stage. All right.
00:21:27
Next panel. Thank you.
00:21:31
That's our episode. Thanks so much for listening.
00:21:33
Shout out to Max Child and James Wilsterman, my Cerebral Valley
00:21:37
AI Summit co-hosts. Thank you to Riley Kinsella, my
00:21:40
chief of staff, and Gabby Caliendo at Volley, who's been instrumental
00:21:44
in putting the conference together.
00:21:46
Thanks to Young Chomsky for the theme music.
00:21:48
Please like, comment, subscribe on YouTube, give us a review on
00:21:51
Apple Podcasts, and please subscribe to the Substack,
00:21:55
newcomer.co. Thank you so much.
00:21:59
Goodbye.
