Elon Musk’s Boisterous OpenAI Bid
Newcomer Pod · February 14, 2025 · 00:19:12 · 17.59 MB

Eric and Madeline unpack the biggest “deal that wasn’t” story of Elon Musk’s unsolicited offer to purchase OpenAI for $97.4 billion. With Sam Altman flat-out rejecting the offer on X, this feels less like an offer and more like a statement of Musk’s frustration with OpenAI’s continued conversion to a for-profit company that competes with him. Engineers may be feeling pressure to look for greener pastures, though, if Thrive Capital’s Joshua Kushner urging talent to stay put is any indication.

Then, they turn to Eric’s reporting on Lightspeed Venture Partners’ new fundraising documents, where the megafirm showed stronger returns on earlier funds ahead of its next big capital raise. They also unpack the AI Action Summit’s 180-degree swing from an AI safety forum to a conference dominated by CEOs and accelerationist world leaders. They close with a breakdown of the who’s who on the cap table of legal-tech startup Harvey’s latest Series D, and Mercury’s rumored new Sequoia-backed fundraise.


Produced by Christopher Gates

Music by Suno

[00:00:00] Hi, I'm Eric Newcomer. And I am Madeline Renbarger. And this is the Newcomer Podcast. Each week, Eric and I discuss the VC deals and the drama that went down. Let's do it. Here we go. Ooh, a loyal supporter of the Newcomer Podcast. I think they were our first advertiser, and Christina, the CEO, has been on the show.

[00:00:28] Anyway, without further ado, you're a startup founder. Finding product market fit is probably your number one priority. But to land bigger customers, you also need security compliance. And obtaining your SOC 2 or ISO 27001 certification can open those big doors. But they take time and energy, pulling you away from building and shipping. That's where Vanta comes in. Vanta is the all-in-one compliance solution, helping startups like yours get audit ready

[00:00:56] and build a strong security foundation quickly and painlessly. How? Vanta automates the manual security tasks that slow you down, helping you streamline your audit. And the platform connects you with trusted experts to build your program, auditors to get you through audits quickly, and a marketplace for essentials like pen testing. So whether you're closing your first deal or gearing up for growth, Vanta makes compliance easy.

[00:01:23] Join over 9,000 companies, including many Y Combinator and Techstars startups, who trust Vanta. Simplify compliance and get $1,000 off at Vanta.com slash newcomer. That's V-A-N-T-A dot com slash newcomer for $1,000 off. Welcome back to the show. I feel like all of the news is really front loaded this week. Monday kicked off with Elon Musk's bid to purchase the nonprofit wing of OpenAI.

[00:01:51] Elon has the capacity to be in every headline. You know, it's just like he's the guy running the country. His company Tesla is the one getting government contracts. He can't just be the one suing OpenAI. Now he has to be the one trying to buy OpenAI too. It's like, is he the one saying it absolutely needs to remain a nonprofit? And oh wait, if it's not going to be a nonprofit, I would like to own it. And of course, this is a man who accidentally bought a company in the past, you know, sort

[00:02:18] of put out what seemed like a not-quite-serious offer for Twitter, only to end up buying it. So you would think he'd have learned his lesson about acquisition offers that are messaging, because they can become reality. And potentially he maybe learned a lesson or two about overpaying for acquisitions, because this price point, $97.4 billion for the nonprofit wing, is much higher than what it was valued at even in reported deal talks, and even in the last closed round for OpenAI.

[00:02:47] I don't understand that. Doesn't the nonprofit own the overall company? The overall company's valuation is higher. Why isn't the nonprofit worth more than the overall company? I'm not sure about that. Actually, I don't understand how it could be. The valuation math is just... I read a take that it was basically also kind of calling the nonprofit's bluff, because if it's valued so high,

[00:03:14] then they have to explain why they're rejecting so much money. It's making the case of, OK, you know, where do you think the valuation really is? You know, I think he's just trying to make OpenAI's life as difficult as possible. I mean, though, I imagine he set a price that he thinks, if I got it for that, it'd be worth it. I mean, he's already learned that there's plenty of non-monetary value in controlling the world's news social network.

[00:03:42] And I'm sure there's plenty of value to Elon in controlling the most popular AI company in the world. So if you think of him as someone who has plenty of money and an insatiable appetite for influence, then there could be some credibility to the acquisition. But I do think mostly it's meant just to frustrate OpenAI, make it difficult for Sam Altman to turn this thing into a for-profit.

[00:04:09] I mean, Sam is just kind of outright rejecting this offer. I mean, he's, you know, quipped back that, yeah, he'll buy Twitter for $9.74 billion. Yeah, which is the offer with the decimal point moved over one place. So, you know, that's just a trolly reply. He's clearly not taking this seriously. He told Axios later this week that OpenAI is not for sale. So today, as we're recording, the news breaks that Musk is saying, you know, he'll withdraw

[00:04:35] the bid if OpenAI cancels its conversion to a for-profit. Never mind, we'll stop. So I think that makes it abundantly clear what this move is about. Though he might have given Altman ammunition that Musk is being disingenuous here. It's like, oh, if Musk says it really should never be anything but a nonprofit, then why is he open to buying it? Anyway, it's another, like, trolly move. Well, in other OpenAI news, Thrive's Joshua Kushner has been, you know, making the pitch to OpenAI staff to not leave, per the

[00:05:05] Information. It's the time-honored tradition of the mega unicorn saying, this thing's going to be a trillion-dollar company, and if you look at our valuation, this is what we're going to be worth. I mean, and they're not always wrong. You know, Uber did go on to become a super valuable company. Maybe not on the timeline that some people imagined when they got their equity. I think Joe Lonsdale was just reflecting on how Palantir had modeled out comp. So certainly not unprecedented. A funny situation.

[00:05:34] I mean, you know, all these highly talented AI software engineers are like, man, I have a once-in-a-generation skill, I don't even need to create a company to become extremely wealthy, and I need to maximize that. You have all these former OpenAI founders and executives recruiting people away. You've got Mira Murati starting Thinking Machines, clearly drawing a lot of OpenAI people away. And that's on OpenAI's mind.

[00:06:02] You have Ilya with Safe Superintelligence also pulling on the same people. Basically, the entire legion of OpenAI co-founders have since gone on to bigger, brighter things. And Anthropic, you know. I mean, that's the original one, right? The beauty of, you know, Mira's pitch right now is, oh, well, it's a new company, so you're going to have more upside. And so I think, you know, Thrive is trying to answer, well, if you think about

[00:06:30] how unlikely their success is versus the trajectory we're on, you know, it's just a funny exercise. And of course, there is overall a sort of collective action problem, which is that OpenAI's valuation sort of depends on all those employees staying. It might be in your rational interest to defect and join Mira, because then you get a larger share of the potential upside if some of the momentum shifts. But that undermines OpenAI.

[00:07:00] So there's a classic dynamic: we could all just stay at the same place and we could get rich, but we're all going to, like, go everywhere. Maybe we hurt the valuations of the collective companies, or we create more value. But they're getting paid plenty of money. I don't feel too bad. Yeah. Sounds like they're getting paid regardless. And, you know, it's just a question of how much better a deal they could get. Plus, all these engineers staying at OpenAI will get to work on GPT-5, as Sam Altman announced this week.

[00:07:29] This was a very confusing thread, Eric. Let's clear the air on what GPT-5 is, and then explain perhaps the most complicated product bundle I've seen in an announcement. Well, isn't he sort of swearing that they're going to stop having all these confusing different labels? And then even the explanation of what models there will be remains confusing. Like, I mean, I'm paying for the top tier. I have no idea which one I'm supposed to use.

[00:07:57] Like, when do you use the pro, the super pro version, versus o3? And, like, some of them have access to the web and some can take PDFs. So it's very confusing. I also don't understand, like, when you change models on a thread, how much of the memory carries over. It's just not a very good user experience as a hardcore user. For the hardcore podcast fans: on the Cerebral Valley podcast with Max and James, we had a

[00:08:24] lengthy debate over whether they would actually call this GPT-5, and our predictions about what would happen in our sort of forecasting episode. An important data point that they are going to use the name GPT-5, apparently. Yes. But what GPT-5 is in this transition maybe looks a little different. And Anthropic is doing a similar thing, too, with its next Claude release, to incorporate reasoning models into the larger package suite.

[00:08:50] And so it seems like now everyone is just productizing these models and trying to find the best way to get them to stick with this new compute power, really testing what people will pay for at these different tiers. I mean, Eric, as a pro user, do you select different models? I feel like the end result we want here is the models to be smart enough to pick for themselves how a query should be routed.

[00:09:17] And I think that's clearly on the horizon, where it's like, OK, you ask a question, and if it's practically a Google search, you get sort of the classic model. And if it's a very creative philosophical question or a math problem, you get a reasoning model. So I do think there's an element, which I can understand, where these are all very engineer-driven

[00:09:41] products, built by people who want a lot of optimization and are like, I can decide whether time is a preference or thinking is a preference. And so they come up with weird names, and, you know, they could use a little bit of a Steve Jobs, product-centric, we're-building-this-for-a-big-customer approach. But it's working, you know, it's working. And I think, you know, for OpenAI in particular, a foundational experience for them is like,

[00:10:09] oh, we had a release called ChatGPT, and it's one of the most popular products in America. So, like, it's often the case that everything you think you know about branding isn't true. And if you have a great product, people figure out how to make it work. And so it's sort of funny. I do think they should come up with clearer names, though, but it'll be fine. You got a peek behind the curtain at Lightspeed's venture returns. What did you make of these numbers? Yeah, I mean, I think Lightspeed is doing a good job.

[00:10:39] I think that was number one. We haven't really seen megafund performance outside of the actual LP class. So this is a rare, very exhaustive, to pat myself on the back, window into how one of these megafunds is doing. You know, I'd just done General Catalyst, but that one was sort of old. This is fresh. This is compared with Cambridge performance data, so we're seeing it benchmarked. So it's super useful. And I mean, basically, the early Lightspeed funds are very good.

[00:11:09] They're mostly first- and second-quartile funds. There's one or two, depending how you measure it, third-quartile funds. So Lightspeed, you know, is a super strong player. But what we're seeing with all these firms is their recent humongous funds just haven't returned a lot to investors. Obviously, you know, it takes time. But I do think it's amazing that we're seeing this sort of private-equitification of venture capital firms.

[00:11:36] They're becoming so large when they haven't even really proved to LPs that the multi-billion-dollar funds work. You know, their proof is based on a $500 million fund. But, you know, Lightspeed is clearly getting into the right companies. If you want to bet on tech and venture getting bigger, Lightspeed seems to make a lot of sense. They have good portfolio companies. Beyond even, you know, their initial wins, you know, when Jeremy Liew was there with Snap, recently they've been getting into some really good deals. They're in Mistral.

[00:12:05] Well, like, Glean is super important. I mean, I actually have, this is the level of depth, the most important companies by net asset value to Lightspeed. It's Wiz number one, Grafana Labs number two, Navan number three, then Glean and Anthropic. I think Anthropic is going to go up in the most recent thing. So yeah, time to open your wallet, expense a subscription, read our report, and understand finally how venture capital works.

[00:12:35] I think it's super interesting. Now, someone please just send me Andreessen Horowitz's returns, please. Please, that's all I want. Eric at newcomer dot co, I'll protect you. And then we want Sequoia. We've done things around Sequoia over the years, but we'd take any Sequoia returns. But those are the big, big ones. Meanwhile, while I've been in Lightspeed land, you've been actually watching the news of the week. JD Vance gave his speech in Europe.

[00:13:04] Like, what's, tell us what's going on in Paris? What's your read on that situation? Paris was interesting. The AI Action Summit, this isn't the first version of this summit that they've had, but it definitely was the first that strayed from the initial mission of AI safety. Where in the past this had been an AI safety focused event, this time it was all tech leaders, Sundar, Dario, Sam Altman, mingling with Narendra Modi and Emmanuel Macron.

[00:13:32] JD Vance was there and gave an opening speech on Monday. That was the America First AI speech, if you will. He had this one quote: the Trump administration will ensure that the most powerful AI systems are built in the US with American-designed and manufactured chips. Which is quite the statement to make at a world summit on AI. There's a funny dynamic in that America sort of wants Europe to get its shit together. It's like, oh, if you're such a great ally to us, like, deliver, pay for some of the

[00:14:01] defense stuff, like, build good companies. Don't just extract from our companies. Like, you know, I'm no Trumper, but I'm sympathetic to that message, in that it does feel like we go on vacation in Europe and it's like, oh, it's chill over here. But I'm like, where's everybody? I'm always like, where are people working? Whereas you go to New York and you know people are working, and you can see the results with the companies.

[00:14:26] So what was Europe's response to all this goading from J.D. Vance? After all the taunting, European leaders did make some sizable announcements this week that showed that they want to get in the game and are taking this seriously, even if it's, you know, on the later side. Macron and France announced billions of dollars in nuclear power that will be allotted to building data centers for AI.

[00:14:50] And French banner AI lab Mistral announced a partnership with the defense company Helsing to build defense-focused models, which is very American Dynamism of them. I must say, it sounds like safety's dead. That's basically what everyone told me. I mean, what? Dario at Anthropic put out an essay. I mean, I like Dario's writing, but I thought it was sort of a weird note, in this classic, I'm trying to be positive, but I'm sort of negative here. I mean, he seemed a little worried that, you know, people weren't talking enough about safety.

[00:15:20] It certainly seemed like he was holding back maybe a stronger opinion than what was published, but his essay was in some ways a soft, backhanded critique of the lack of safety in the geopolitical discussions around AI now. It seems like everyone is looking for their national advantage, and in a polarized world, that certainly makes sense. Kind of an almost Cold War industrialization of this technology, where if we don't do it, you know, DeepSeek in China will.

[00:15:48] And there was also quite a hush over the conference around DeepSeek, with Western leaders, you know, saying, oh, we need to really focus on this as a threat. Of course, the unique fear with AI, unlike nuclear weapons, is that maybe someday they will figure out how to fire themselves. You know, that's why it's like, oh, we need to be very mindful, because one day you could accidentally come up with the AI that is too smart for its own good.

[00:16:14] But yeah, we are not necessarily going to be keeping them in a box. I floated on this podcast in the past that it is very possible we look back at the Trump years and say, man, we were so worried about democratic norms and immigration, and I'm very concerned about those, but actually, oh man, it was tech, it was AI policy that mattered the most, because we were at a true inflection point in AI. And honestly, this is a data point that at least the Trump administration is somewhat aware of

[00:16:42] that, if one of the vice president's first speeches is on AI. It seems like, you know, J.D. Vance is someone of Silicon Valley. Even if he's, you know, throwing caution to the wind, they're clearly very mindful that they're living in an AI era that could swamp some of the other historical trends that they're involved in. Moving on to the deal of the week. Harvey, the darling of all legal tech, has raised $300 million at a

[00:17:07] $3 billion valuation, led by Sequoia and including Coatue, Kleiner Perkins, the OpenAI Startup Fund, GV, Conviction, Elad Gil, and REV. LexisNexis had to get in there too, because legal industry. But yeah, they're an AI legal assistant, effectively. So they build tools to help with drafting briefings, really a lot of content generation, and summarizing themes and leads.

[00:17:33] So, tasks that AI is very good at, you know, parsing through text-based data for certain points, pulling those out and summarizing them, and then subsequently drafting for lawyers, which is the next step. So they offer a suite of tools along the lines of what, you know, a legal assistant or a junior paralegal would do for a lawyer.

[00:17:56] Hopefully the days of minding every comma in your legal briefing will soon be over, and AI will solve some of the monotony for lawyers. Corporate lawyers are back, you know, they're making a lot of money these days. And now they've got software to automate the whole thing. It's a good time to be a lawyer. There's no shortage of copyright cases these days, I'll say that too, in our AI-focused episode. The honorable mention, the other deal that we have to shout out, is that Sequoia once again

[00:18:23] is in talks to lead an investment in Mercury, the digital banking company, at a valuation of more than $3 billion, which is a big step up. All of Newcomer's, not all, but much of Newcomer's banking is on Mercury. So as a customer, it's always nice. OK, Sequoia is culpable for this one, and they're getting more money. You know, part of what you want in a banking startup is to give people confidence that you're going to be around. And this is certainly one of those confidence-inspiring moments.

[00:18:53] That's our show. See you next week, and subscribe to the newsletter to read the Lightspeed post if you haven't yet. Get Eric's insights into megafund returns, only at Newcomer. All right. See ya.