Today, we have a double episode for you — two conversations from the Cerebral Valley AI Summit.
Reid Hoffman was fresh off a meeting with President Joe Biden when he and I sat down on stage at the Cerebral Valley AI Summit on Nov. 15. He told us that working to get Biden reelected next year is one of his top priorities.
Then, I sat down with the ever-feisty Vinod Khosla. The investor called for a TikTok ban and more welcoming immigration policies while warning against open-source artificial intelligence projects.
Thousands of enterprises around the world rely on Oracle Cloud Infrastructure (OCI) to power applications that drive their businesses. OCI customers include leaders across industries, such as healthcare, scientific research, financial services, telecommunications, and more.
NVIDIA DGX Cloud on OCI is an AI training-as-a-service platform for customers to train complex AI models like generative AI applications. Included with DGX Cloud, NVIDIA AI Enterprise brings the software layer of the NVIDIA AI platform to OCI.
Talk with Oracle about accelerating your GPU workloads.
Hoffman Plans to Go Big for Biden
Hoffman, fresh off a meeting with President Biden, kicked off the afternoon with a strong endorsement of the President’s record. Hoffman praised Biden for his recent executive order on artificial intelligence.
Reid called himself “a 95%-98% supporter” of the executive order, endorsing provisions on reporting and monitoring, “red team” testing, and voluntary commitments by companies that might eventually be enforced via the Defense Production Act. But he pushed back on the idea that the FTC should be monitoring the AI industry for anti-competitive conduct.
“Startups are not being impeded right now,” he asserted, despite the apparent dominance of OpenAI and the mega-cap tech companies. Reid sits on the board of Microsoft, and offered that he was in fact “first money in” on OpenAI, through his personal foundation, but he’s not concerned about, er, his own companies having too much power. “I don’t think it constrains competition on any level.”
Hoffman is always happy to engage on policy, and I asked him what he thought about Marc Andreessen’s recent “techno-optimist” manifesto, which denigrates the very idea of government oversight. Reid said he was a techno-optimist too, and half-joked that Andreessen “quoted kind of liberally from things I’ve written and said” without any attribution. But Hoffman said that he’s not on board with Andreessen’s approach. “It’s kind of dumb to think that when you have major technologies there can’t be negative side effects,” he said, noting that all his AI projects have safety teams. “Tech can be amazing. Let’s be intentional about building.”
Khosla Wins Cheers from the Cerebral Valley Audience
Venture capitalist Vinod Khosla confirmed that his firm, boosted by an early stake in OpenAI, was about to close on $3 billion in commitments for a new fund. Valuations are high, he said, “but just because valuations are high doesn’t mean it isn’t a good time to invest.”
He’s not buying existential risk, calling it “nonsensical” talk from academics who had nothing better to do. But he’s long on China risk, saying the U.S. is in a “techno-economic war” with China and needs to fight hard. “I would ban TikTok in a nano-second,” he said, unlike his predecessor on stage, Hoffman, who Khosla said he very much admired. Khosla is firmly against open-source AI models as well due to the China risk.
Bio-risk and cyber risk are real concerns too, he noted.
But if China or rogue viruses don’t kill us, Khosla thinks the near-future is very bright: “I do think in 10 years we’ll have free doctors, free tutors, free lawyers” all powered by AI.
Khosla also gave a grudging endorsement of the Biden Executive Order, saying it was “okay.”
But like most Silicon Valley moguls, he has no time for antitrust issues. “We have to get people like Khan out of there,” he said, referring to the chair of the FTC (though misstating her name), calling her “crazy, left-wing.”
Khosla said he’s long believed that AI would force us to “redefine what it is to me human.”
Meantime, he plans another 25 years of VC investing, and if all goes well, maybe more.
Give it a listen
Get full access to Newcomer at www.newcomer.co/subscribe
[00:00:00] Hey, it's Eric Newcomer. This is the Newcomer podcast. We've got a live episode coming to you from the Cerebral Valley AI Summit on November 15th.
[00:00:08] We've got two conversations between me and prominent venture capitalists. This is two of my favorite conversations back to back.
[00:00:16] First, Reid Hoffman, the co-founder of LinkedIn,
[00:00:21] co-founder of the foundation model company Inflection, and Greylock partner,
[00:00:24] and then after Reid, we've got Vinod
[00:00:27] Khosla, founder of Khosla Ventures,
[00:00:29] co-founder of Sun Microsystems,
[00:00:31] and an always opinionated, spicy venture capitalist.
[00:00:35] Reid and Vinod took different stances on some issues.
[00:00:37] So we wanted to put them in conversation with each other.
[00:00:41] We did record the conversations on November 15th,
[00:00:44] so before the blow-up at OpenAI,
[00:00:46] so listen with that in mind, before we get to those conversations, a word from our sponsors,
[00:00:52] Oracle and NVIDIA.
[00:00:55] Thousands of enterprises around the world rely on Oracle Cloud infrastructure, OCI to power
[00:00:59] applications that drive their business.
[00:01:02] OCI customers include leaders across industries,
[00:01:05] such as healthcare, scientific research,
[00:01:07] financial services, telecommunications, and more.
[00:01:10] OCI also works with NVIDIA to provide an AI training
[00:01:14] as a service platform for customers to train complex AI models.
[00:01:18] Talk with Oracle about accelerating your GPU workloads
[00:01:21] at the link in the description.
[00:01:23] And now my conversations first,
[00:01:26] with Reid Hoffman and then Vinod Khosla.
[00:01:29] I wanted to start off, you know,
[00:01:31] we've been so AI focused that I think for the first half
[00:01:34] we forgot, oh, you know, the president is meeting
[00:01:37] with Xi Jinping today in San Francisco
[00:01:39] and this city is being taken over by politics
[00:01:42] and international relations.
[00:01:44] I guess have you talked to any of the people in town as someone who gets to connect with
[00:01:48] that sort of crowd?
[00:01:50] Well, so yesterday I actually met with President Biden, who obviously was not flying out
[00:01:55] here to meet with me, he was flying out here to meet with President Xi, but of the many
[00:01:59] things that I don't think Biden gets enough credit for is he's actually fairly intellectually
[00:02:03] curious. And so they did a whole bunch of very good work
[00:02:07] on the executive order and he was like,
[00:02:09] okay, was it good?
[00:02:11] What do we need to do next?
[00:02:12] One of the things that I think they did
[00:02:13] that was very smart is they put the primary locus
[00:02:16] within Commerce, because part of what Biden is thinking about
[00:02:20] is how do we make sure that this is really good
[00:02:23] for American industry, jobs, the American
[00:02:25] worker, and so forth, and that was the kind of conversation he wanted to have.
[00:02:28] So he was flying out here to meet with President Xi, but he was like, oh, you live out there.
[00:02:32] I can talk to you about AI.
[00:02:34] Come by.
[00:02:35] And so, of course, getting through the security cordon at the Fairmont was a bit of work.
[00:02:40] What did you say about the executive order?
[00:02:43] Are you an unabashed supporter, middle ground?
[00:02:46] What's your mood on the executive order?
[00:02:48] I would call myself a 95, 98% supporter.
[00:02:54] I think a lot of it's very good.
[00:02:56] I mean, the kind of notion of reporting and monitoring,
[00:02:59] being engaged in dialogue, revealing
[00:03:01] what's going on, that kind of thing is all,
[00:03:03] I think, part of good, smart, intelligent governance.
[00:03:07] I think red teaming is very good.
[00:03:09] I think getting red teaming by third parties is good.
[00:03:13] Starting this with voluntary commitments
[00:03:15] is a way of doing it and then gearing it up
[00:03:17] and using the Defense Production Act,
[00:03:19] I think is all very smart as ways
[00:03:21] of being kind of effective governance.
[00:03:23] The only part that I probably disagree with in part
[00:03:26] because, you know, I think a little bit of what
[00:03:29] Lina Khan's doing with the FTC is to say,
[00:03:31] big technology is bad.
[00:03:33] I actually don't think that's the case,
[00:03:35] not necessarily, just because it's big.
[00:03:38] Part of having big industries and businesses
[00:03:41] is they set global platforms.
[00:03:43] They can do everything from American industry to
[00:03:47] American soft power and influence in the world.
[00:03:50] And the line in there that's like, oh, the FTC should be making sure
[00:03:53] that there's nothing here that's anti-monopoly.
[00:03:56] If you actually look at a lot of the interesting work, which is
[00:03:59] not just happening with Inflection, but also OpenAI and
[00:04:03] Anthropic and a whole bunch of other things. You know, startups are not being impeded right now.
[00:04:07] Even though obviously there's great work from Microsoft, Google, etc. on this as well.
[00:04:11] So that was probably the principal piece of the EO that I disagreed with.
[00:04:16] But the stack of how do we navigate intelligently and
[00:04:21] get good data was, I think, smart.
[00:04:24] How much did China play the backdrop?
[00:04:26] I mean, there's this great foil for Silicon Valley,
[00:04:29] which is if you over-regulate us,
[00:04:31] China is going to out-compete us in AI,
[00:04:35] and that's going to be a threat to national security
[00:04:37] or economy and everything else.
[00:04:38] How much was China coming up in the conversation?
[00:04:40] Well, in my conversation yesterday, not very much.
[00:04:45] I did ask him some questions about what
[00:04:48] would be the set of things around his discussion
[00:04:51] with President Xi,
[00:04:52] and it was kind of questions of, OK,
[00:04:54] let's try to make sure that we have good relationships,
[00:04:58] that we're competing on a fair playing ground
[00:05:00] on a number of things.
[00:05:02] Let's still be focused on climate,
[00:05:04] a set of different things.
[00:05:05] But relative to China and AI, this is, I think,
[00:05:10] as momentous a moment in economic elevation as the steam engine.
[00:05:15] So, you know, I kind of, I call this the steam engine of the mind.
[00:05:19] And so as such, it's intensely part of how we elevate our industries.
[00:05:24] You know, one could choose not to use the loom.
[00:05:26] That would be a mistake.
[00:05:27] One could choose not to embrace the steam engine.
[00:05:29] That would be a mistake.
[00:05:31] Doesn't mean that there aren't a lot of challenges
[00:05:33] as you retool your industries and economy.
[00:05:37] What does that mean for industry jobs?
[00:05:39] You have to navigate that.
[00:05:40] Not to oversimplify where you stand,
[00:05:42] but do you think TikTok should operate in the United States?
[00:05:45] Yes.
[00:05:47] Although, so here's the general thing is we, as the US,
[00:05:52] say that we think that it's an important thing
[00:05:54] to have internet platforms broadly available,
[00:05:57] because by the way, historically, most of those internet
[00:05:59] platforms have been US ones.
[00:06:02] If you say, well, we're now uncomfortable
[00:06:04] with another internet
[00:06:05] platform operating here, that validates
[00:06:09] all other countries going, well, we're Brazil,
[00:06:12] and your internet platforms aren't operating here.
[00:06:16] I think the question with TikTok is not, I mean,
[00:06:20] we could say, here's what we're going to do
[00:06:22] in terms of general content governance across
[00:06:25] country borders, and then we should do a multilateral version of that.
[00:06:30] But I think the thing that we need to say is, obviously, look, you slant your
[00:06:35] local Chinese tech market away from US and other companies.
[00:06:40] Let Facebook manipulate you if we're going to have TikTok manipulate us.
[00:06:44] Yes.
[00:06:45] Or let's have a fair playing field between the two of them.
[00:06:48] Right.
[00:06:48] In terms of being a Democratic donor, maybe beyond even just
[00:06:52] the AI question.
[00:06:53] I mean, your sort of, I don't know,
[00:06:55] Bizarro World counterpart Peter Thiel on the right is saying,
[00:06:58] I'm going to swear off, like, donating this time.
[00:07:02] I don't know if it's oppo research coming out on him
[00:07:04] or just, like, he said in a recent interview
[00:07:06] that Trump was worse than he thought.
[00:07:08] Yeah, I'm curious how gung-ho you are for Biden
[00:07:11] and the Democrats in this reelection,
[00:07:14] and then maybe what you would make of Peter Thiel's comments.
[00:07:17] So, you know, one of the things, I think Biden,
[00:07:21] to his credit, has been so focused on doing the job,
[00:07:23] he hasn't been spending the time tooting his own horn.
[00:07:26] He's navigated a very difficult situation around Gaza,
[00:07:29] you know, with, you know, grace and aplomb,
[00:07:32] trying to navigate the multiple tragedies that are going on there.
[00:07:34] He brought all of Europe and a lot of the world
[00:07:38] into the Ukraine situation and dealing with it.
[00:07:41] The Inflation Reduction Act with climate
[00:07:44] is a very good thing. Post-COVID pandemic,
[00:07:46] economic recovery, you look at the job numbers, it's very good. He's got to be better at bragging.
[00:07:50] And more bipartisan legislation than in decades. All of this stuff is very,
[00:07:58] very good. So I think Biden, one can make a very strong case, has done a very good job over the last three years,
[00:08:07] and it's partially because of experience. Now, that being said, I think we have a
[00:08:14] prospective presidential candidate who has people he formerly described as his lawyers
[00:08:23] kind of turning and giving evidence against him,
[00:08:28] which is very reminiscent of mob-boss kind of behavior,
[00:08:32] and describing his political opponents as vermin,
[00:08:37] which is a very fascist-like way of doing this.
[00:08:40] Yes, and so it's like, I think it's super important.
[00:08:44] I think what we care about as American values in rule of law
[00:08:49] in democracy and a bunch of other things is on the table
[00:08:52] and I think we should be out there doing something about it
[00:08:55] so that's what I'm doing.
[00:08:56] Have you committed a dollar figure yet?
[00:08:58] Well, I never pre-commit dollar figures
[00:09:00] but I've already started an investment.
[00:09:02] The Peter Thiel piece, you didn't touch on that.
[00:09:04] Yeah, happy to do it.
[00:09:05] Sorry. I was just answering the first question.
[00:09:07] I know I asked you a long, too long of a question.
[00:09:09] So, well, look, I had a number of arguments
[00:09:12] with Peter about Trump.
[00:09:12] Matter of fact, it was probably the most ferocious
[00:09:15] set of arguments we've had, because I just couldn't
[00:09:19] understand why Peter couldn't see Trump as,
[00:09:24] you know, kind of a Chernobyl, the way that I see him.
[00:09:28] And, you know, part of Peter's and my
[00:09:31] friendship, all the way back to the Stanford days,
[00:09:32] was based on the value of public intellectual discourse,
[00:09:36] of making arguments.
[00:09:37] And I've learned a ton from Peter about, you know,
[00:09:40] kind of how to think about, you know,
[00:09:42] some of the interesting perspectives
[00:09:43] and in good in depth perspectives
[00:09:45] on conservative arguments from him.
[00:09:47] And then that part of the thing,
[00:09:48] I was like, Trump strikes me as, you know,
[00:09:50] how can you tell he's lying? His lips are moving, right?
[00:09:53] So it's like, what?
[00:09:56] Right.
[00:09:57] And so I was not surprised that he then went,
[00:09:59] oh shit, like this was worse than I expected.
[00:10:03] But I think we still have to deal
[00:10:04] with the ramifications of: we have a former liar-in-chief.
[00:10:09] We talked about actual regulation.
[00:10:12] Now there's this effort, Responsible Innovation.
[00:10:15] Hemant Taneja at General Catalyst is very involved.
[00:10:18] Do you have a stance on that?
[00:10:20] What do you think of sort of industry self regulation where that is positioned?
[00:10:24] Broadly, I think industry self-regulation
[00:10:27] is a good thing to generally do.
[00:10:29] Industry usually will calibrate well
[00:10:31] to what the cost and so forth of doing the stuff is.
[00:10:34] I think that doing the voluntary commitments
[00:10:36] from the White House was good.
[00:10:38] I think that saying, hey look,
[00:10:40] investors should be responsible here too,
[00:10:42] is a good motion.
[00:10:44] I think I kind of looked
[00:10:46] at it and kind of bogged down on not coming around to signing it, because one of the dynamics
[00:10:51] that I believe, and I know my partners at Greylock believe as well, is you don't impose
[00:10:56] things on companies. You only invest in good entrepreneurs who have good ethical compasses
[00:11:02] and good projects in the world. But you don't show up saying,
[00:11:05] well, I have a 10-item thing that you have to do.
[00:11:08] It's like, no, do the right ethical thing
[00:11:10] of the thing you're doing and we work and collaborate with you.
[00:11:12] So it just seemed a little dictatorial
[00:11:14] between kind of investors and founders,
[00:11:17] though we almost never do that sort of thing.
[00:11:20] It's like, we don't show up saying,
[00:11:22] well, you're gonna have a company culture
[00:11:23] with no sexual harassment. Well, if you're gonna have a culture with sexual harassment, we want nothing to do
[00:11:27] with you.
[00:11:28] Right.
[00:11:29] We don't need that sort of thing.
[00:11:30] You've just screened for that sort of ethics in the founder selection.
[00:11:32] Yeah.
[00:11:33] So that's how we operate on these things and we don't, we show up as the invited partners
[00:11:40] with our entrepreneurs versus the, oh, well, we have a set of things that you have to do
[00:11:44] what we tell you to do.
[00:11:45] What do you make of the other sort of extreme,
[00:11:48] the sort of accelerationist crowd?
[00:11:51] Andreessen Horowitz is, you know,
[00:11:52] really using this as a way to be like,
[00:11:54] we're wild, like anything goes,
[00:11:56] do you have a reaction to that?
[00:11:57] Do you think it's dangerous or?
[00:11:59] Look, so I am also a techno optimist.
[00:12:02] I found it entertaining that Marc
[00:12:06] quoted
[00:12:08] kind of liberally from some of the things
[00:12:10] that I write and speak about, like Homo techne
[00:12:11] and other things, without attributing.
[00:12:14] That's fine, no problem.
[00:12:15] Right.
[00:12:17] You know, and I've been kind of a techno optimist on AI
[00:12:21] from very early.
[00:12:22] It's part of, like, publishing Impromptu
[00:12:23] and all the rest, all the new OpenAI stuff I used early.
[00:12:26] Now, that being said, it's dumb to think
[00:12:30] when you have major technologies
[00:12:32] that there can't be negative side effects
[00:12:34] that you need to navigate around
[00:12:36] and be thoughtful about them.
[00:12:38] And so, for example, every project that I'm part of,
[00:12:43] OpenAI, Inflection, et cetera, does have a safety team that's focused on
[00:12:47] important topics, like don't make it
[00:12:51] easier to make bombs, right? Like, there's an easy one that
[00:12:55] everyone agrees with. And there's a stack of these things.
[00:12:59] They say, well, but then it'll talk smack about Trump, but not
[00:13:04] about Biden.
[00:13:05] Like, okay, I'm sure you can ultimately get them to crack jokes about both of them.
[00:13:10] I'm sure I could get light bulb jokes about both of them as kind of ways of doing it.
[00:13:16] But, like, for example, having an informed point of view that the 2020 election was not actually
[00:13:22] stolen, right?
[00:13:24] Or if it was stolen, then every American election is stolen,
[00:13:27] so it's a meaningless statement.
[00:13:28] You're sort of saying it can look biased
[00:13:30] if the facts on the ground are biased.
[00:13:32] Exactly. So it's like, okay, so, like,
[00:13:35] be more thoughtful about navigating this.
[00:13:37] It's not like whatever you can build with technology
[00:13:40] is grand, it's technology can be amazing.
[00:13:43] Let's be intentional about building that amazing technology.
[00:13:47] What is your view right now on existential risk?
[00:13:50] Ex-risk.
[00:13:51] I feel like there's almost like a backlash to it
[00:13:53] at this point where there are people
[00:13:54] who think it's a promotional technique for AI.
[00:13:57] Like where are you on the possibility that x-risk is real?
[00:14:00] So I think the people who articulate x-risk, existential risk, I presume everybody here knows that,
[00:14:08] are serious and earnest. And so I value their... Your Inflection...
[00:14:14] Yes.
[00:14:15] Yes.
[00:14:16] Mustafa, who we're talking to at the end of the day, seems more worried about some of these things.
[00:14:19] Yes. Although what you will find in talking to him is that he finds the x-risk people are occluding the real risks, which is: what is AI doing in the hands of
[00:14:26] humans? What does it mean for jobs? What does it mean for bad
[00:14:30] human actors doing stuff? The importance of those things, which he
[00:14:33] and I totally agree with, those are the risks to focus on
[00:14:36] versus x-risks. Now, the thing with x-risk is, you know, like
[00:14:39] that 22-word statement, that AI should be considered an existential risk
[00:14:42] along with climate change and so forth. The reason I didn't sign that statement is because when you look at climate change,
[00:14:48] pandemic, a bunch of things, those are just risks. They're just bad. AI might add some robot risk,
[00:14:54] but it also is: how do we solve pandemics? How do we improve climate? How do we deal with asteroids?
[00:15:00] All these things, AI is in the positive column. And the mistake that I think all the
[00:15:05] X-risk people make is they try to treat them as
[00:15:07] each solo versus a portfolio.
[00:15:10] So my view about X-risk for humanity is what is
[00:15:13] the portfolio of X-risk for humanity and as you're
[00:15:17] doing things, are you improving net portfolio?
[00:15:20] And I think AI improves the net portfolio.
[00:15:23] And so therefore I'm not one of the x-risk signers.
[00:15:26] They get to solve things.
[00:15:27] We don't know which way it's going to go.
[00:15:29] I certainly see the argument.
[00:15:31] OK, let's turn this next portion of the conversation
[00:15:33] to more of the business dynamics and the companies.
[00:15:37] Yeah, you were so involved with OpenAI early on.
[00:15:40] You've stepped off the board.
[00:15:42] What is, and they just had dev day,
[00:15:44] and I think it's come up at this conference already.
[00:15:46] Like, there's a sense that, oh, is that going to, like,
[00:15:48] destroy every startup?
[00:15:50] Like, is it going to hurt your investments?
[00:15:51] Like, what do you make of the power of OpenAI right now
[00:15:54] and where there's room to compete with them?
[00:15:57] So, I think OpenAI has obviously made a set of very smart bets
[00:16:02] about the scale application of large language models.
[00:16:05] And they have, with a bunch of genius and bright moves and people, created the leading
[00:16:14] edge of the drumbeat by which everything else is following.
[00:16:17] And that's awesome.
[00:16:19] I don't think it constrains competition at almost any level. You know, I think there are even people competing to offer frontier model APIs.
[00:16:29] OpenAI is not the only party doing that.
[00:16:32] I think that the question of there's going to be tons of different interesting bots.
[00:16:37] So ChatGPT is one of the great bots.
[00:16:39] I think there are going to be others, obviously, with Inflection and Pi as part of doing that.
[00:16:43] But I think they're going to be different, solving different needs.
[00:16:45] Now, if what your startup plan was,
[00:16:47] I'm going to be a thin wrapper
[00:16:49] on top of a company's API.
[00:16:53] That's a dangerous place, whether it's open AI,
[00:16:57] Google, Amazon, Microsoft, any of these things,
[00:17:00] you have to do something more substantive,
[00:17:03] whether it's like an enterprise integration, a network effect, a stack of technology that really adds, in addition to the
[00:17:10] API, those are the things you need to be doing.
[00:17:13] And by the way, of course, part of it is there's so much progress happening within these
[00:17:18] AI capabilities that you can't say, oh, well, I built my thing on GPT-3.
[00:17:22] Oh, shit, GPT-4 is so much better.
[00:17:25] So you need to be anticipating
[00:17:27] what's coming.
[00:17:29] Where do you see us right now in sort of maybe
[00:17:31] the S curves or whatever charting of AI improvement?
[00:17:35] Like do you think next year we're gonna enter
[00:17:37] sort of a flat lining period
[00:17:39] or you're seeing a lot of still acceleration
[00:17:42] or how do you think about the actual
[00:17:43] underlying technological improvement?
[00:17:46] So two things. One, I don't think we're going to asymptote yet.
[00:17:50] But one of the things that people frequently misunderstand is that when we get to
[00:17:54] each new level of scale, they think it's just: put in 10x compute,
[00:17:58] put in 10x data, press a button, and magic emerges.
[00:18:03] And it's like, it's a lot more work than that.
[00:18:06] And part of it is you get these different levels of scale.
[00:18:09] You have to figure out different set of techniques that
[00:18:11] cause it to work the right way.
[00:18:13] And so you can have the very first training run of GPT-4
[00:18:18] fail.
[00:18:19] And then they figured out some techniques to go, oh,
[00:18:22] if we do it this way and we do this,
[00:18:24] then we can make it work really well.
[00:18:26] And so I think we will have to do those things for GPT-5,
[00:18:30] 4.5, 5, and the kind of equivalents.
[00:18:33] But I think they're there.
[00:18:34] I think they're capable of doing that.
[00:18:36] Like, I think that's the way it will unfold.
[00:18:38] It's not a sure thing, but sometime in the next five or ten years,
[00:18:41] I think those things happen.
[00:18:42] And so I think we will certainly see something similar
[00:18:45] between three and four, between four and five.
[00:18:47] Next year.
[00:18:49] I don't know about next year.
[00:18:50] That would be huge to me, right?
[00:18:52] Three and four, four to five would be...
[00:18:54] But even 4.5 next year will be significant.
[00:18:58] Wow. Have you started to see any of that or...?
[00:19:01] I haven't yet, but I anticipate it from the buzz
[00:19:06] in all of this very leaky industry.
[00:19:09] Yes, exactly.
[00:19:10] I'm anticipating.
[00:19:11] You can dodge this one
[00:19:13] if you're not allowed to answer it.
[00:19:14] But, like, is OpenAI Microsoft's strategy?
[00:19:16] I mean, you're on the Microsoft board, right?
[00:19:18] Like, how much do you see OpenAI as
[00:19:20] the strategy for Microsoft in terms of AI?
[00:19:23] Well, it's certainly one of the major strategies.
[00:19:28] Obviously, Microsoft has a number of
[00:19:30] different business lines that OpenAI
[00:19:33] is not really in, where it's also doing these things.
[00:19:36] But I think the partnership between Open AI and Microsoft is going to be
[00:19:41] one of the epic partnerships that business school classes will be taught on for decades,
[00:19:47] just like Wintel, back in the day.
[00:19:50] It's a similar, massive alignment
[00:19:54] that's going to create all kinds of things.
[00:19:56] Or it could be Steve Jobs returning to Apple sort of situation.
[00:20:00] Is there a risk that OpenAI represents the future
[00:20:03] and Sam Altman becomes sort of the future of Microsoft?
[00:20:06] Well, both Sam and Satya have put a lot of energy
[00:20:10] into aligning the interests.
[00:20:12] And so that's the reason it's much more like kind of the
[00:20:15] Wintel kind of period, which is, I think they will both
[00:20:19] more or less succeed together or not succeed together.
[00:20:22] I think that's the way that that will play out.
[00:20:25] How do you think about your time, you know, you're stepping back... So, I think they will succeed.
[00:20:30] Yeah, just to be clear, I didn't mean to express hesitancy. I just think it was well
[00:20:34] aligned. No, actually now you're making me go back. You know, we're going to have
[00:20:38] Vinod on stage later; they ended up getting the first venture round. Do you feel any regret that you didn't get the sort of venture investment in OpenAI instead of personal?
[00:20:47] So I did. I actually was the lead financier, personally, through my foundation.
[00:20:54] I did talk to you.
[00:20:55] So that has a lot of equity still in...
[00:20:57] OK.
[00:20:58] Yeah.
[00:20:59] You know, so I was technically first money in.
[00:21:05] All right, so we'll give you your credit here.
[00:21:07] Yeah, that's fine, whatever, whatever.
[00:21:09] You know, now I did talk to Greylock about it and said,
[00:21:13] look, they don't have a go-to-market strategy.
[00:21:15] They don't have a business model, right?
[00:21:17] And look, part of our job with our LPs
[00:21:21] is to invest professionally, right, on this.
[00:21:25] So look, I think the technology is going to be really great, but I have no idea, which
[00:21:27] is the reason I'm doing it from my foundation.
[00:21:30] Right.
[00:21:31] But of course, knowing what you know now... look, all investing is much
[00:21:36] easier with crystal balls 10 years into the future.
[00:21:39] So what does that foundation give to?
[00:21:42] Is that part of your political giving or is that separate?
[00:21:45] It's a 501(c)(3).
[00:21:46] It's a foundation that invests in opportunity. For example,
[00:21:50] how do we get the various disadvantaged communities
[00:21:53] much more economic opportunity?
[00:21:55] How do you enable science? It's doing a whole bunch of things.
[00:21:59] How do you think about your time and what you're working on
[00:22:02] right now?
[00:22:03] Because you do so many things, wear so many hats.
[00:22:04] What's sort of the priority at the moment?
[00:22:06] So for me, the priority for this year and next year,
[00:22:12] well, next year especially, is obviously artificial intelligence,
[00:22:14] making sure we don't fumble all of the really great things
[00:22:18] that can help elevate humanity, and, with some regret,
[00:22:22] the 2024 election, because I think it matters to us and to the world.
[00:22:28] I would rather just be building, right?
[00:22:30] I'd rather just be doing all the kind of investing
[00:22:32] and technology stuff, but we need to win.
[00:22:35] So you're going to go all in on the election?
[00:22:36] Oh, yes, 100%.
[00:22:37] Are you primarying some Democrats?
[00:22:39] Not currently that I'm aware of.
[00:22:42] Right.
[00:22:43] I do have a whole political team and all the rest that also does other things for other people.
[00:22:48] Look, I'm fundamentally a centrist.
[00:22:50] What I would really like both parties to have is more of an incentive going towards the center.
[00:22:57] Like so.
[00:22:58] I think that's part of the corrosion that's affecting our society.
[00:23:03] Now, one of the things that is a very popular thing to say
[00:23:06] is that, wow, you know, both Biden and Trump are extreme choices.
[00:23:09] Like, Biden's a centrist.
[00:23:10] He's been a centrist for decades,
[00:23:12] his entire political career.
[00:23:15] So that's part of the reason why I strongly support him.
[00:23:18] Great.
[00:23:19] Thank you so much for coming on stage.
[00:23:21] This was awesome.
[00:23:23] Good bye.
[00:23:24] In our second-to-last conversation of the day, joining us on stage, venture capitalist
[00:23:28] Vinod Khosla.
[00:23:29] He's talking with Eric Newcomer.
[00:23:32] Vinod Khosla, thank you so much for sitting down with me.
[00:23:37] I read in the Wall Street Journal that you guys at Khosla Ventures are raising a $3 billion
[00:23:42] fund.
[00:23:43] Is that right?
[00:23:44] Yes, it's right.
[00:23:45] We're almost done.
[00:23:48] What... it's a time when many venture capital firms
[00:23:51] are sort of struggling or downsizing their fund size.
[00:23:55] I assume the OpenAI investment helped it along,
[00:23:58] but yeah, what was the decision to sort of expand,
[00:24:01] and what was sort of the fundraising environment like?
[00:24:03] You know, as Warren Buffett says,
[00:24:05] when others are fearful, it's time to be aggressive,
[00:24:08] when others are optimistic, it's time to be conservative.
[00:24:12] You know, one of the odd things about our fund,
[00:24:16] if you look back the last five or six years,
[00:24:19] our rate of investing hasn't changed.
[00:24:21] So in 2021, '22, we didn't double our rate of investing.
[00:24:27] Generally, it stayed about the same.
[00:24:29] I do think in this new domain of AI,
[00:24:32] it's time to be aggressive and both thoughtful and aggressive.
[00:24:37] It feels like you're deploying more capital than many firms right now.
[00:24:40] I feel like there's concern that, you know, with valuations,
[00:24:44] you're just paying
[00:24:45] a high price for AI companies, and that we don't know anything outside of AI,
[00:24:49] we don't know what the bottom is or what prices will look like.
[00:24:52] How do you think about valuing companies in such an uncertain environment?
[00:24:56] You know, there's two styles of investing, and, you know, Instacart recently had an IPO
[00:25:01] that really showed off both styles of investing. I'll get the series
[00:25:06] wrong, but the first three rounds were reasonable valuations stepping up, and we stepped up and aggressively
[00:25:14] invested. Now, we started with a million dollars at a $10 million post, so we bought it at a pretty
[00:25:21] good valuation; that seems very nice these days. Yeah, and then we invested in the next series and the next series, and the valuation got to a billion.
[00:25:29] And we decided not to invest. But what was happening was investors were doing momentum investing, so they're looking: oh, he's investing at a billion,
[00:25:40] he's investing at a billion, I'm in. That kind of follow-the-herd, momentum investing
[00:25:47] is different than when you're investing in fundamentals.
[00:25:52] And we were investing in fundamentals.
[00:25:54] By the way, around the same time, we invested in
[00:25:56] Instacart in the same timeframe, roughly,
[00:25:59] we also put a million dollars into DoorDash
[00:26:02] at, again, a $10 million post,
[00:26:05] because not too many people were investing
[00:26:07] in these areas back then.
[00:26:09] Right.
[00:26:10] And I mean, to translate, you know,
[00:26:12] early Instacart investors made money
[00:26:13] and later ones less so.
[00:26:16] So, can you talk us through, giving, you know,
[00:26:19] the AI audience, the investment in OpenAI?
[00:26:22] I mean, we had Reid Hoffman on stage, you know;
[00:26:24] he got in through his foundation,
[00:26:26] but you took the first venture round.
[00:26:28] What was the dynamic there?
[00:26:30] And how did you just have the conviction
[00:26:32] for what seemed at the time to be a science project?
[00:26:35] Well, you know,
[00:26:37] Reid is very forward-looking.
[00:26:40] So I'd say,
[00:26:43] of all the people in the venture business, he's high on my
[00:26:47] list of people I really admire in how he invests. So I admire him a lot.
[00:26:53] But a venture fund like Greylock probably wouldn't want to invest in what was a
[00:26:58] speculative realm. But the math was very simple. If you lose, you lose one
[00:27:03] times your money. If you win, you make 100 times your money.
[00:27:06] So you could place 50 bets.
[00:27:09] And if 49 of the 50 lose, you'd still do OK.
[00:27:14] But it was much more than that.
[00:27:17] I started writing about AI,
[00:27:19] I think it was 2011, Christmas.
[00:27:21] I wrote about: do we need doctors?
[00:27:24] And do we need teachers?
[00:27:27] With the idea that an AI tutor would do the right thing.
[00:27:30] And my wife has built just a beautiful AI tutor.
[00:27:33] That's free, by the way; it's in a nonprofit.
[00:27:35] So I don't much care whether something's in the for-profit
[00:27:38] or a nonprofit.
[00:27:40] My son's working at Curai on an AI doctor.
[00:27:43] I wrote about that in 2012
[00:27:45] What was clear to me by 2018, it was around this time,
[00:27:50] we made the decision, five years ago, to invest in OpenAI.
[00:27:53] We'd already invested in a couple of deep learning companies, actually,
[00:28:00] one or two that didn't work out and got
[00:28:08] sold for essentially acqui-hire kinds of prices. But the fact that the companies didn't work didn't dissuade us, because we
[00:28:13] fundamentally believed in the thesis.
[00:28:15] How much was it a bet on Sam Altman or the team versus the technology?
[00:28:20] It was clearly a bet on Sam. We knew Sam and thought he was awesome.
[00:28:26] We knew Greg and Ilya and spent a lot of time with the team,
[00:28:29] and we really liked the team there.
[00:28:32] But more than anything, I'd long held this belief.
[00:28:37] 2012, when I first talked about,
[00:28:41] can it replace essentially all expertise?
[00:28:44] If true, then the upper bound is unlimited
[00:28:48] and is great for humanity if that happens.
[00:28:52] And so for my point of view, the upside was huge
[00:28:55] and it was important to make it happen.
[00:28:58] Whatever the structure was.
[00:28:59] And to give you a hint, there was an article
[00:29:01] in the New York Times by a writer called Laura Holson.
[00:29:06] In the year 2000, I said at some point in the next 25 years, I forget what I said back
[00:29:14] in the year 2000, I said, A.I. will be so powerful, we will have to redefine what it means
[00:29:21] to be human.
[00:29:22] That was 2000.
[00:29:23] So, I was already dreaming.
[00:29:27] It's less of a crazy feeling now.
[00:29:30] The nonprofit structure of OpenAI,
[00:29:35] were you worried about just, like, the structure of the company? It's still sort of a mystery to people outside how it works.
[00:29:37] People get hung up on structure.
[00:29:42] That's the wrong way to look at it if you're talking about changing the world.
[00:29:44] Who cares about structure,
[00:29:46] who figured that out? We did. You know, when I worked with Sam at that time, the reasonable
[00:29:52] proposal he made made sense. There wasn't a lot of negotiation.
[00:29:55] Is there some limit to how much money you can make off of it?
[00:29:58] Yeah. They limited it because of the for-profit, nonprofit
[00:30:02] nature of the parent company, which is fine.
[00:30:05] If we make $5 billion on our $50 million investment,
[00:30:09] those are public numbers,
[00:30:10] I'm fine.
[00:30:12] Well, that's a good outcome.
[00:30:13] It seems like more than that; what is it?
[00:30:15] Hopefully. And this mattered to Sam too:
[00:30:18] he knew I cared about the impact that OpenAI would have,
[00:30:22] or an AGI would have.
[00:30:24] So now you want to go forward and invest in AI companies.
[00:30:27] OpenAI just had dev day, where it seems like it's coming
[00:30:31] for every startup.
[00:30:32] How do you think about where you can invest
[00:30:35] to get more shots on goal?
[00:30:37] Yeah. I was just talking to an entrepreneur outside.
[00:30:39] I said, what were the sessions like? He said
[00:30:43] some of the speakers were more like
[00:30:45] ChatGPT, and it could have done that talk.
[00:30:47] Brutal.
[00:30:48] Haha.
[00:30:54] It's true.
[00:30:55] People speak in generalities.
[00:30:58] And ChatGPT does that really, really well.
[00:31:00] But the question was, yeah, what startups are you investing in?
[00:31:07] Yeah, this is a very tricky time to invest.
[00:31:12] There's a lot of very high valuations and I've written about the fact that the very high
[00:31:17] valuations are bad both for the investors and the entrepreneurs.
[00:31:22] But just because a valuation is high
[00:31:25] doesn't mean it's not a good investment.
[00:31:31] So I'll give you an example.
[00:31:32] There's a lot of billion dollar valuations.
[00:31:34] We've looked at a lot of them, in fact most of them.
[00:31:38] But when Adept, sorry, not Adept,
[00:31:40] but Replit, came along,
[00:31:43] I thought, because of conversations with the founder,
[00:31:47] a great founder, and a fairly different mission for where they want to be in two years...
[00:31:54] We had a series of conversations about the future direction of Replit, and it's pretty different
[00:32:00] than what Replit was last year. And we invested at a billion-dollar
[00:32:05] valuation because I thought they would create something. Now, one of my... I
[00:32:09] have 10 predictions; I won't remember all of them, but one of them is
[00:32:14] within 10 years there'll be a billion people on the planet programming, and
[00:32:19] what I mean by that is writing code by using natural language.
[00:32:26] That's a large enough change
[00:32:29] that it's worth a billion-dollar valuation if it's successful.
[00:32:33] So it becomes more like: is Replit successful or not?
[00:32:36] And if it is, the valuation won't matter.
[00:32:40] For most of the people who are chasing somebody else,
[00:32:42] their valuation matters a lot.
[00:32:50] We've been sort of having a running conversation about existential risk throughout the day; Reid gave a nuanced stance.
[00:32:57] I'm curious what your view is of OpenAI, sort of, you know, causing some real problem in the world. You know,
[00:33:03] I'm frustrated with the academics
[00:33:06] who have nothing to do but be academic,
[00:33:08] and they think about academic risks.
[00:33:11] The chance of a sentient AI going wild in the next 10
[00:33:15] or 15 years is about the same as the chance
[00:33:18] of an asteroid hitting planet Earth in the next whatever years.
[00:33:23] I think this sentient AI talk is such
[00:33:27] nonsensical talk, and sensible people, like, when I was talking to
[00:33:32] Fei-Fei Li, she said there are more immediate risks to worry about. There are real
[00:33:37] risks that we should worry about, like bio warfare. This morning I was at an AI security summit talking about
[00:33:47] bio-risk. Now that's a real risk. There are cyber risks. There's a larger
[00:33:53] risk of falling behind China because President Xi has declared he wants to be
[00:33:59] the source of technological innovation in AI by 2030.
[00:34:06] So they will put a lot of resources.
[00:34:08] Now those kinds of dictatorial edicts
[00:34:12] don't tend to work very well,
[00:34:14] but he's declared that.
[00:34:16] And I do think, and I've blogged about this,
[00:34:20] we are in a techno-economic war with China,
[00:34:23] and we should do everything to win that war.
[00:34:26] So what do you think Biden should do,
[00:34:28] you or the team of people that are governing?
[00:34:30] My recommendation is to open up immigration to anybody with talent in this area.
[00:34:36] Talent is key.
[00:34:40] We have an advantage in the talent war, and that's what we should use. We should absolutely go after slowing down China as much as we can,
[00:34:52] so I like the restrictions we placed on China.
[00:34:55] Do you want to ban TikTok?
[00:34:57] I absolutely would ban TikTok in a nanosecond.
[00:35:01] Wow, okay, we got a real disagreement here.
[00:35:03] Reid said the opposite.
[00:35:05] Well, I, you know, I'm very clear in the US, companies influence politics.
[00:35:14] In China, politics influences companies with total control.
[00:35:19] It's a very different system.
[00:35:21] I'd be happy to debate Reid on it.
[00:35:24] There's no question it has huge influence,
[00:35:27] huge reach. And it has the ability to be controlled by the Chinese government.
[00:35:34] Well, to go one round with you, I think Reid's response would just be that what he really wants
[00:35:38] is American companies to be able to operate in China, and have parity.
[00:35:41] That won't happen. So you can dream all you want. There's no chance that happens.
[00:35:46] We've seen this for the last 30 years, since we opened up.
[00:35:51] I'm persuaded.
[00:35:54] What do you think of the following?
[00:35:57] I feel like the audience is on your side on this one.
[00:35:59] The biggest control point influencing everybody,
[00:36:03] from little kids to adults, the control
[00:36:07] point of consumer behavior, is TikTok, and we know it works for the Chinese Communist Party.
[00:36:15] At the other end, with 5G networks from Huawei, they have the ability to surveil about 60%
[00:36:23] of the globe's population, because their equipment is in the
[00:36:28] networks. I think we should be very, very worried, and not worrying about sentient risks.
[00:36:35] You know, Max Tegmark can do that all he wants. He doesn't need to do anything real.
[00:36:41] I love it. I love it. You know, your style is why we had you here. Are you donating?
[00:36:49] Are you going to get involved in this political cycle, or what's your stance there?
[00:36:53] You know, I tend to almost always contribute. I've contributed over the last 15 years to both
[00:36:59] Republicans and Democrats. So I look at the candidates, not the party. And I like this general idea of the No Labels party also,
[00:37:10] though I haven't contributed anything to it.
[00:37:12] Yeah, they're going to call you up after this to bankroll them.
[00:37:14] Have you talked to Manchin or anything like that?
[00:37:16] I'm not a fan of Manchin because he's been so opposed on climate.
[00:37:19] Oh, and he's very parochial about West Virginia coal.
[00:37:22] And so I'll never support Manchin, for that reason.
[00:37:27] And climate is a very large risk for the planet,
[00:37:31] just like losing on AI is a large risk for the country.
[00:37:34] And a few weeks ago, I wrote a blog about that,
[00:37:39] about our techno-economic war with China,
[00:37:41] and why we can't let AI regulation slow us down.
[00:37:45] I'm happy to argue that, and I was glad to see
[00:37:49] much of the AI executive order was
[00:37:53] considerate of this point of view.
[00:37:55] I spent plenty of time in DC talking to everybody about it.
[00:37:59] You're gonna forecast my questions.
[00:38:01] Yeah, do you think the executive order on AI was okay?
[00:38:05] I think it was okay.
[00:38:07] Do you worry it sort of signals worse regulation or you're optimistic about the situation?
[00:38:12] It's really hard to tell.
[00:38:13] We're coming into an election, yeah.
[00:38:15] It's going to be about the election not about what's right.
[00:38:18] That's the pragmatic part.
[00:38:20] Saying otherwise would be the equivalent of Reid saying,
[00:38:24] hey, American companies should be able to do X in China.
[00:38:26] Won't happen.
[00:38:28] You know, so I love Reid and respect him tremendously,
[00:38:34] but we don't have to agree on everything.
[00:38:37] I do think the next year is about getting elected and
[00:38:39] the next four years will be the important years.
[00:38:42] It's too early to tell.
[00:38:44] And we have to get people like Lina Khan out of there, the crazy,
[00:38:49] so left-wing, kooky, no economic rationality at the FTC.
[00:38:54] Do you hinge any sort of Biden donation
[00:38:57] on something like that?
[00:38:58] Or you can't do that?
[00:39:00] You know, there's 300 million people
[00:39:05] in this country. He's not going to...
[00:39:09] You can't say, I'll donate on condition of X or Y. Those never work.
[00:39:13] It's unrealistic.
[00:39:14] There's lots of pressures on lots of people.
[00:39:17] Maybe they'll be practical about it.
[00:39:20] We love your spicy takes; you've been doing this for 40 years.
[00:39:24] I interviewed you many months ago
[00:39:26] and asked you a similar question,
[00:39:27] but how much longer do you think you're gonna stay
[00:39:30] sort of an active investor, or spearheading that?
[00:39:32] If the life-extension
[00:39:34] efforts by Peter Thiel, or whoever else is doing it, pan out...
[00:39:40] No, seriously.
[00:39:42] I have a saying, you grow old when you retire,
[00:39:46] you don't retire when you grow old.
[00:39:48] I've seen too many people retire and grow old.
[00:39:51] So I clearly plan to do this, health permitting,
[00:39:55] for the next 25 years.
[00:39:57] And then I'll be Warren Buffett's age
[00:39:59] and he's still doing it.
[00:40:02] I mean, look, I'll do this; it's so much fun and so impactful and
[00:40:07] keeps me so engaged. I still work easily 80 hours a week, easily.
[00:40:12] You're always telling me about some random paper you've read. Yeah.
[00:40:15] Yeah. I wanted to sort of end with the last couple of questions of this talk
[00:40:20] just looking forward. Like, do you think, you know, GPT-5 will be a major improvement
[00:40:26] like we saw from three to four?
[00:40:27] What do you see in terms of where we are on the S-curve,
[00:40:31] and how much improvement do you think we're
[00:40:33] going to see from many of these companies?
[00:40:34] Yeah.
[00:40:34] Look, one, it's very hard to forecast.
[00:40:38] But I think what GPT-4 was to GPT-3...
[00:40:43] and I have no inside information,
[00:40:45] and frankly, even the people there at OpenAI couldn't tell you what GPT-5 will be.
[00:40:50] But with GPT-5, I expect... we haven't seen anywhere near the limits of AI capability.
[00:40:58] That's a reasonable assumption.
[00:41:00] And so, when I'm working with startups,
[00:41:03] I try and look at what five might have
[00:41:06] and six might have or what might happen
[00:41:10] when five helps design six.
[00:41:12] So GPT-5 probably will help design GPT-6.
[00:41:16] You got this exponential effect.
[00:41:19] And the question for all of you becomes
[00:41:22] which startups become road-kill in this process?
[00:41:26] And being thoughtful about that,
[00:41:29] I spend an incredible amount of time asking
[00:41:32] which startups should be invested in
[00:41:33] because they won't become road-kill in the...
[00:41:36] Any categories you would suggest?
[00:41:38] Well, there's a lot of categories.
[00:41:39] There's lots of... very, you know,
[00:41:41] if we create a billion programmers,
[00:41:42] you're gonna create real value no matter what the market is.
[00:41:46] That's a positive one.
[00:41:47] Yeah, but I'm also forecasting in 25 years
[00:41:50] we'll have a billion bipedal robots.
[00:41:52] That will create a massive industry larger than today's
[00:41:56] auto industry.
[00:41:58] And my bet is we'll have more than a million in less than 10 years.
[00:42:01] And somebody told me I was being too pessimistic.
[00:42:04] The last thing I wanted to ask you was just,
[00:42:07] what is your stance on open source right now?
[00:42:09] You've sort of been like,
[00:42:10] oh, it would be like open-sourcing the Manhattan Project.
[00:42:12] So, give us all your thoughts.
[00:42:13] I do think in 10 years we'll have free doctors,
[00:42:16] free tutors for everybody, free lawyers,
[00:42:20] so they can access the legal system.
[00:42:23] I'm in the process of writing a blog...
[00:42:27] That is a good note to finish on. But you will answer the open source question?
[00:42:30] ...on whether AI will lead to dystopia or utopia.
[00:42:33] Too many people are looking at the dystopia angle of this, the 10%
[00:42:38] probability of something bad happening,
[00:42:41] ignoring the benefits to humanity of AI,
[00:42:44] and the 10,000 startups that
[00:42:46] are going to do truly wonderful things.
[00:42:49] And my job is to help them navigate the path forward.
[00:42:54] Coming back to your open source question, I'm very much against open-sourcing AI.
[00:43:00] Keep in mind, we were the firm back in the 80s
[00:43:06] that literally started the open source movement at Sun.
[00:43:10] NFS was the first major piece of software
[00:43:13] that was open-sourced; Linux came later.
[00:43:18] So I was very much an open source fan,
[00:43:21] and a fan of what it adds to creativity.
[00:43:23] We were the first investors in GitLab.
[00:43:26] People forget:
[00:43:28] GitHub wasn't open source.
[00:43:30] It was for open source software.
[00:43:32] GitLab was both open source and for open source software.
[00:43:36] So, huge fans of open source.
[00:43:38] But in this techno-economic race with China,
[00:43:41] open-sourcing AI will help them.
[00:43:42] And if we can slow them down by six months or a year,
[00:43:46] I think it's good for America.
[00:43:48] Vinod Khosla, always entertaining and enjoyable.
[00:43:51] Thank you.
[00:43:52] Thank you.
[00:43:53] Thank you.
[00:43:56] That's our episode.
[00:43:57] Thanks so much for listening.
[00:43:58] Shout out to Max Child and James Wilsterman,
[00:44:00] my Cerebral Valley AI Summit co-hosts.
[00:44:03] Thank you to Riley Konsella, my chief of staff, Gabby Caliendo, and Volley, who have been
[00:44:08] instrumental in putting the conference together.
[00:44:11] Thanks to Young Chomsky for the theme music. Please like, comment, and subscribe on YouTube.
[00:44:15] Give us a review on Apple Podcasts and please subscribe to the Substack, newcomer.co.
[00:44:21] Thank you so much.
