This week, Tom breaks down his scoops on how the big foundation model providers are doing. And much to the chagrin of our resident skeptic, they're earning lots of real revenue! OpenAI is on track to crack over $12 billion in revenue this year, and Anthropic projects it will double its ARR to $4 billion by the end of the year. But have we hit the AGI moment? Eric relays his o3 experiments as evidence. Madeline gets into the perpetual VC optimism in spite of market turmoil and why AI has early-stage investors mostly unfazed.
In the second half of our show, Eric interviews Contrary’s Kyle Harrison about his viral foundation model market map and why AI has led many VCs to embrace startup polyamory.
[00:00:00] Tom published his first post. Tom's Newcomer byline. How does it feel? It feels like I'm born again. One of our mutual friends, I think Corey at The Information, was like, I'm going to call it "Do-Tom," whatever you write here is. Tom, a newcomer. I don't know. It's funny. Yeah, we haven't discussed branding on my stories, but if we could have some sort of hover graphic or something where, like, if the mouse goes over "Newcomer," my name kind of just flashes in. Yeah. Let's talk about it. I'm willing to work it into comp discussions.
[00:00:30] This episode is presented by Brex, the financial stack that founders and VCs can truly bank on. Imagine what your founders could do with their runway if they had a banking solution that had no minimums, no transaction fees, and 20 times the standard FDIC protection. Plus, they could earn an industry-leading yield while maintaining access to funds whenever needed. Brex simplifies financial services for startups so they can focus on building.
[00:00:57] Connect your portfolio to the financial stack that one in three U.S. venture-backed startups already use. Check out brex.com/banking-solutions. Your piece was basically, markets collapsing aside, it would have been a great business news period for the foundation model companies, which is not your disposition. It's almost like I didn't order you to come in and say, tell everybody how AI is doing great. You'd be the guy to be like, I don't know.
[00:01:26] I'm worried about everything. Eric's cracking the techno-optimism whip. Like, you've got to publish a positive story. Right. I wish you'd come in negative so it didn't sound like, oh, you're writing for us, the AI conference people, you're going to give a booster take. What convinced you that things were going well in the foundation model world? Yeah. I really wanted to come in with a negative piece and then quit right after, and say, like, Eric rejected my contrarian take, my contrarian market take. Got spiked. Yeah, exactly. Immediately.
[00:01:56] Like, this is who pays the bills here at Newcomer. No, on the contrary, Eric wants harder-hitting stuff than what I'm delivering. I'm actually very worried. I was talking to some people, and I just wanted to get a dipstick test of where AI was at the moment. And much to my chagrin, much to my disappointment, it was a very strong start of the year. You know, I have this quote in the story, but I was talking to an investor who was telling me: we all had in mind this idea that competition was the most important thing and there were
only going to be a small number of winners and we needed to back those winners and everyone else was going to be out of luck. And at least so far, it turns out that the market size is large and there are enough people on the consumer and business side who want to pay for this stuff that the businesses, you know, and we were looking mostly at foundation model companies, so the ones that have raised the most money but are building the tech that's powering
[00:02:50] almost all the applications, they had really good quarters and their business grew substantially. And, you know, in certain cases above initial projections, basically things are going very well in AI right now. And the only thing that could maybe derail it in the short term is the economy collapsing is the markets, you know, shitting the bed and everyone realizing that they're going to pull back on their spending because right now, at least they're buying into the hype. And so, yeah, that's the backdrop of it.
[00:03:19] It's like, we've got all these numbers. We have all these exclusive figures on how all the foundation model companies are doing. Yeah. Give us a tier ranking. Like, who's doing the best of these foundation model companies right now, per your numbers? Good week, bad week. Good quarter, bad quarter. It's kind of hard to dislodge OpenAI right now. They're really killing it, as much as it kills me, because it's a much better story when they're not. I'm sorry. But, you know, relentless product rollout pace. Yeah.
[00:03:49] We're going to talk about o3, which I'm extremely excited about. We'll talk about the actual tech after we talk about sort of the business, you know, revenue. Right. So the product stuff, which we'll get to. The revenue stuff: they are projecting that they could get as much as $12 billion, almost $13 billion, in revenue this year. And that seems very plausible based on their user growth. You know, Sam had this... I hadn't watched it until I was doing this story.
[00:04:13] Very funny interview at the TED conference where he kind of let slip that their users doubled in the last couple of weeks. And, you know, back of the envelope, you could do the math on that. Thanks to Studio Ghibli, right? I mean, it's got to be mostly that, right? Like, that's got to be a lot of it. I feel like the image generation has to just be overwhelming demand coming from that. Nothing like a product people actually like, where it's like, oh, let me show off using it to everybody. It's not like a social feature.
[00:04:41] It's like literally something so cool that it becomes like marketing for your product. But Anthropic was doing well too. It is not just OpenAI, right? Yeah. Anthropic, which we really dove into their numbers. They finished off 2024 with like $950 million in revenue. So a very solid year. And, you know, already at the beginning of the year, they're tracking to do an annualized $2 billion, well over $2 billion. And by the end of the year, if growth continues apace, it could be as much as $4 billion.
[00:05:08] So that's still, you know, a third of OpenAI's business, which is interesting, you know? And I think we're paying attention to the fact that one is like head and shoulders above the rest. But Anthropic has really established itself as the enterprise AI model maker of choice. And when it comes to coding, Claude is sort of where it's at. What's interesting with that is, as I was doing this piece, like literally we were getting close to putting the last touches on it.
[00:05:33] News breaks that OpenAI is in talks to acquire Windsurf, which is this big AI coding, not big, but they're, you know, a somewhat popular one. It used to be Codeium until like a couple of days ago. Yeah. I didn't even know about Windsurf, to be honest. Windsurf was the product, you know. They compete with Cursor, whose company is called Anysphere. But for some reason, this category is full of companies with products that are more famous than the underlying companies. Anyway, yeah.
[00:05:58] OpenAI is supposed to be buying it for $3 billion, which, you know, will scare the shit out of every application company. Or what do you guys make of that? I mean, my first take on it, because I actually did not realize this until I was, you know, digging into Anthropic's numbers. But those coding application startups, the... what the fuck is it? Anysphere? Anysphere. I've never called it that. They should just change it. Everyone calls it Cursor. And Codeium. Anyway, the naming is a mess generally across AI. So there is also a company called Anyscale.
[00:06:29] And, you know, I do all this conference organizing, and we did this Enterprise 30 list. So I accidentally emailed, like, Anyscale about Anysphere. And they both ended up on the same thread. So I was on this comical thread where both of them were replying, and I wanted one of them. I ended up talking to Anyscale just because it's like, oh, now we're on a thread. You're another interesting company. Like, let's do a call. But yeah, it's extremely confusing.
[00:06:56] That's your growth pitch to these people. It's like, since we're on the phone. Yeah. Since I mistook you for another company. It's like, now that you've wasted our time, you at least need to hear our spiel, you know? Anyway, we should have a whole episode on the shitty naming across all of AI. Like why o4-mini is the less powerful model next to o3. Absolute nonsense. Yeah, I would say, honestly, I feel like Anysphere, Anyscale has become like a recurring character on this podcast at this point. Just, like, pick a name.
[00:07:27] Yeah. Any port in a storm. So that acquisition could actually be hugely problematic for Anthropic, because one of the significant drivers of business for Anthropic is these coding startups that incorporate Anthropic's models into their applications. And that powers the whole thing. And you've got to assume, if OpenAI gets this acquisition across the line, they're going to cut out all of Anthropic's models.
[00:07:54] And I don't know exactly how much revenue that is. I should look into that one. But it's certainly an offensive move on OpenAI's part. So that's something to watch. That deal aside, Anthropic's in a great place. And there's a huge amount of confidence there. And again, for the second-place player to be in a good spot generally means good things for the health of the sector. I don't think it's just that Anthropic could lose the revenue.
[00:08:19] I think what matters is that OpenAI is going further into the application space, right? It's not just that ChatGPT is their application. It would seem they want to provide other applications, which means anyone built on top of OpenAI has their fears justified. I can reveal here for the first time that one sentence in Tom's story was written entirely by o3. You can pause here to go see if you can identify it.
[00:08:46] But it was: any AI app acquisition will jolt the remaining independents, confirming their fear that ChatGPT's parent won't settle for being just the brains behind everyone else's software. I had the most convoluted sentence and it rewrote it. You're like, ChatGPT's is better. I want to let listeners in. Okay, you kind of got it behind the scenes here. This is a line that Eric wrote, not a line that I wrote.
[00:09:12] Because anytime I put my language into ChatGPT and I say, can you make this sound better, it says: I can't. I truly can't. This is peak human intelligence. And it would be foolhardy for me to suggest anything. It just shuts off when you try. Yeah. I actually get a call from Brad Lightcap every time I put my wording in there, and they say, we'd actually like to refund you $20, sir. So it's the pro version of ChatGPT? I'm paying $200 now. Yeah. Yeah. This stuff's getting better.
[00:09:40] I think the market right now is really strong. And it pains me to say it. Well, what about... I mean, even in apps, you looked into Perplexity too, right? How are they doing? Briefly, we did. Yeah. Good. They're doing good. Good. Good. Yeah. I mean, the CEO of Perplexity, Aravind Srinivas, he's out there a lot. He likes promoting the company. And so he tweeted out that they had reached $100 million in ARR in March.
[00:10:07] But digging a little bit deeper into them, I found that from the investor side of things, they're expecting that to almost double by the end of the year. So they feel really strong about stuff. You talked about OpenAI as a big player. I mean, Google's Gemini 2.5 is hitting a bunch of benchmarks too. But Meta seems like it kind of fell off with Llama. Like, it hasn't been doing nearly as well. I know obviously there's reports of engineers taking the fact that they worked on the latest model off
[00:10:33] their LinkedIn, or clarifying that they did not work on the latest model on their LinkedIn, because it's just not doing as well. The accusation is that Llama 4 was built to do well on the tests, but it's not necessarily that big of an improvement. Yeah. I mean, first of all, I relate. There are certain jobs that I've had that I've taken off of my LinkedIn as well. You never know. So it's nice to know that AI researchers are just like us. Llama is kind of a huge question mark right now.
[00:11:02] People were very impressed with the earlier iterations of it. I mean, this was like a high-quality, open or free-to-use thing that basically became the standard open source model. And then DeepSeek came along and had deep reasoning capabilities, which I think really appealed to a lot of people. And then Llama 4 was supposed to come out and blow everything out of the water. And they had, like you were mentioning, Madeline, these benchmarks that look very impressive.
[00:11:27] But when people were using it, like when developers actually were trying to apply it, they were like, man, this sucks. You know, it's like very verbose, and its answers are much worse than what the competing ones are offering. And it got to a point where a VP of AI at Meta had to get out there and say, like, well, we're working through some bugs right now. We're trying to make sure that everything works for our users. And these accusations that we were training Llama on the test set so that it could perform
[00:11:56] well on the benchmarks are totally unfounded. And look, I don't know yet. And I'll look into what is or isn't true. But once you're doing that kind of damage control, you've kind of already lost. Like, it's a PR game to a degree. And so much of open source is like, hey, we're cool in this space. We're going to attract engineers. I mean, to some degree, Meta is just trying to kneecap everybody else and be like, we give it away for free, you know, so there are fewer giant tech companies that can compete.
[00:12:22] And if DeepSeek is playing that role, you know, Meta still wins by sort of making it hard for everybody else to compete. But yeah, Llama's going to need to rebound, I think, on the next one. And I'm kind of sympathetic to them, because with these models, you do these training runs and you kind of get what you get. And if it's not good enough, you do another one. And either you release it or you don't. And clearly, I mean, if it truly is as bad as some people are saying, you know,
[00:12:50] there was a lot of pressure on them to just release something. And, you know, they put out something that maybe wasn't very good. And that's the beauty of capitalism: everybody's competing. We should be happy, you know? I do feel like, separate from all of these numbers on how these companies are doing, looking at the technical specs, people are really excited about o3. Like, some people are saying we're hitting the AGI moment right now. Eric, you were AGI-momenting all over Twitter this week. You're like, oh, yeah.
[00:13:17] I was declaring on Twitter, like, oh, history will reflect that this is a huge moment, but the headlines are about everything else. But you were in our Slack last night at like 9 p.m. just being like, wow, how did it do this? You were like, hey. I told you guys, you're not obligated to engage with me when I'm just slacking away. But Tyler Cowen said AGI, seriously. He really thinks it's AGI. I was getting some pushback from AI diehards that are like, okay, o3 isn't so much better than Gemini and Anthropic's models.
[00:13:47] And I guess I'm not a student of which one is better. I think what's amazing to me about o3 is how great it is at, like, searching and sort of pulling a bunch of stuff together along with reasoning. I think, and we're going to make a concerted effort to prove this true, I think we can get a genuine scoop thanks to ChatGPT, which I think would be a landmark moment for reporters.
[00:14:11] I don't know how much that's sustainable. Versus, like, there's a lot of stuff that's sort of on the internet that search isn't as good at finding, and now ChatGPT, with its ability to analyze stuff, can sort of put pieces together. I don't know. I was going to say, scoops are like the one thing that AI doesn't have on writers, right? You know? Right, exactly. If they can do that, then it's AGI. That's what I'm saying, from my perspective.
[00:14:36] I actually don't think that they will be able to turn out scoops, because you pay $200 a month for ChatGPT, and that's much less than you're paying me for scoops. And I'm very concerned that you think $200 a month equals the scoops that I'm able to give you. So by that metric, I do not believe that they're able to get scoops. You're like, the market says otherwise. Yeah. Right. That's true. Investment-wise speaking, it's just not possible.
[00:15:04] Because otherwise, you would just pay for that and not me. It's coming for you. Yeah. I believe that's not true. And I believe it can't do that. And there's a huge amount of value in paying people to write scoops. Significantly more than the $200 a month that you're paying for it. It is partially tongue-in-cheek. But I mean, I do think there's a reality that Lovable can code. But I'm not a coder. And so I'm not seeing it through.
[00:15:32] And there's going to be the same way that ChatGPT can do some of these things. But if you're not like a craftsperson, you don't really have the taste for it. You don't know what to look for. It's sort of hard to really extract it. And so I think some of the threats of job replacement are overstated in these sorts of areas where there's a lot of taste and you have to know all the steps. Even if it does some of the steps well, it's only going to be a tool for the craftspeople. Yeah.
[00:16:01] I mean, what was interesting with the stuff that you were going on about on Slack: basically, it can find filings for venture performance that we hadn't found. It can find unlisted links that are posted by the LPs, the endowments and pension funds, where they have these filings already on the web, but you have to know the URL to find the specific filing document. So it's not fully gated, but it's...
[00:16:28] And it analyzed some filings from Amazon. I don't think we'll ever do this story. It analyzed some filings from Amazon that broke down the nature of Amazon's investment in Anthropic. That, you know, Reuters, or whoever had written the story, hadn't really gotten the full breakdown. So it argued that there were nuances of how the investment was structured that it caught on to, that reporters hadn't yet found. Which... So it's already doing, like, incremental scoops. Right.
[00:16:58] It's already doing, you know, like, actually, this is new information. It's arguing with its editor, like, no, this moves the needle, you know. That's honestly the strongest case for AGI I've seen yet. If it's already like a very self-conscious, you know, sensitive reporter that's trying to say, no, this is new intel. This is good stuff. Why do you think this is just the same thing Reuters did? I've never more deeply related to an AI.
[00:17:19] No, seriously though, I think there's an interesting case to make that if this somehow does get more wrapped up into the reporting process, and the pulling of public documents or semi-public documents is something that o3 or o3-pro can do pretty well, it puts more pressure on the aspect of reporting which is gathering information from human beings, and not just document diving, which is valuable. I feel like one of those woke people. I need to be snapping for you. I couldn't agree more with what you're saying. Yeah, and that's why we're on video too.
[00:17:49] Exactly. I mean... This is about the people. AI is about the people. AI makes us more human. There are two levels to it. One, what people are like isn't really on the internet. Maybe you see YouTube videos, but it's hard to... It's not a big data problem. It's a small data problem, where you need to talk to people, interact with people. So for getting the story of what people are like and how organizations are working, there isn't the data for the AI to consume.
[00:18:16] And then obviously humans are just better at understanding what's interesting about a person and telling a personal story. So I totally agree with you. Like, AI will be better at solving data stories and stuff where it's pulling together what's happening with, like, money and numbers. And reporters are going to have to focus on people more than ever. It's interesting to see how this tech will be incorporated into newsrooms, because there's a desperate move by all the top people,
[00:18:43] and I know this from my time at the Wall Street Journal, to force AI down the newsroom's throat and into the process, because it makes you seem futuristic and advanced, and you can pitch to your non-journalistic business execs that we're embracing AI. And so far, that's mostly meant using AI-generated talking points or bullet points at the tops of stories, which at best maybe removes one person whose job was to write those things. But it's not actually disruptive in the flow of news or reporting.
[00:19:13] So maybe what you were messing around with, Eric, will be closer to it. And that can actually change things. But aside from that, it's been pretty unimpressive. I think it's a good sounding board, a good proofreader. I mean, we still pay a proofreader, but it's going to be more and more helpful. But I agree that the attempts to use it can be heavy-handed. Do you think o3 and similar models represent AGI, as Tyler Cowen seems to say?
[00:19:38] Or, as crank Gary Marcus believes, is this nowhere close, can't even do math with decimal points, it's idiotic and there's a propaganda machine? I don't know, man. I don't know. The bar is always moving. I don't really know what it means. I mean, there are financial definitions for it. There are philosophical definitions.
[00:19:58] I have a feeling that this reverse engineering of AGI, to say we've actually had it for the last couple of years, or the last year or so, because of reasoning models, that's weak. I'm sorry. I want a lot more than that. AGI to me needs to be like in SimCity 2000, when a headline in the newspaper said nuclear energy exists and you're allowed to build nuclear plants.
[00:20:21] I'm most sympathetic to that, that it hasn't produced any real innovation or new ideas. And that is some barrier. But yeah, does that mean that the threshold for AGI has to be novelty? Because I feel like, while I, as a creative person, would love to say yes, that's the threshold and therefore you can't do what I do, there are plenty of things that it can do.
[00:20:47] And if your definition of AGI is to replace someone in the workforce at a competent level, it's getting closer and closer to being able to do that, to the point where this could be the AGI moment. Like, I think with what you were working on, Eric, just it pulling up filings and things like that, it can do stuff that analysts can do, hands down, already. I think it's safe to say I know a lot about venture capital, and I think OpenAI's model knows more than me. You know, it can just pull all the world's information and sort of synthesize it.
[00:21:16] It makes an occasional error, but so do I. Like, I think even now, in areas where I'm an expert, it has more context and information than me. I mean, I'll be honest. I've known Gary for a while. I probably should have muted him at this point, and it's more just out of respect for him as a person that I haven't, because I like him. But he's so fucking annoying on X. And I'm willing to keep that in the pod. He just is, even though I think he's a really smart guy.
[00:21:43] But I've seen some interesting stuff on there just pointing out its inability to do what would be basic logic that a human of average, and even maybe slightly below average, intelligence could do. There was this one picture, I don't know if you guys saw it, of these curvy lines pointing from a name to, like, a stick figure. And it was saying, like, you know, they put the image into o3 and asked it to identify the person with the color of the stick figure that their line was pointing to. And it got it wrong, completely wrong.
[00:22:12] And that's something that probably a third grader could do. And I think those sorts of mistakes are meaningful. Like, sure, it has research tool capabilities. And I think, you know, this gets into the copilot versus agent debate. Like, is it something that works as an assistant, or is it autonomous? And I think those mistakes, until they're fixed on a regular enough basis, stop me from getting too excited about human-level intelligence.
[00:22:40] It just doesn't mean that. All right. We've looked at how foundation models have been doing really well. Madeline, your story for Friday looks at how VCs are reacting to market forces. Well, they love to say that they do not react to the markets. Right, Eric. They love to say that they're, you know, thinking of the future, and that the markets now don't really interplay with what they're doing. True or false? Or what have you found?
[00:23:07] Charles Hudson of Precursor put it best and said the tailwind of AI is overcoming the headwinds of the market. At the late stage, there are delayed IPOs. We cannot disregard that. Obviously, that's happening. I would not say that the late stage is 100% affected. I'd say that there's more of a pause there. But at the early stage, deals are still flowing. Companies are getting backed. Most of the investors I talked to had not, you know, increased the pace of deals that they plan to do over the next few months and by the end of the year.
[00:23:37] But most of them also had not slowed down. The money is still flowing. It's just, of course, you know, the classic AI-versus-everybody-else story. Do you think they're in denial about how bad the economy could get? I mean, Ryan Petersen, who everybody in Silicon Valley trusts, who we had on the podcast, is like, if these tariffs don't get reversed, tons of businesses are going to die. Like, we're already seeing, you know, companies sell to their factory in China. Obviously those aren't tech companies. Those are like, you know, T-shirt manufacturers, probably.
[00:24:07] But that could have a huge impact. Like, how much do you think they're in denial about how bad the economy could get? I mean, I think that for the early-stage investors, it's not so much that they're in denial. They're just looking at a 10-year horizon. So in their mind, maybe the economy is fixed in 10 years. Who cares? This is going to be the next generational wave of technology, and we've got to get in on it right now. I'd say more at the Series B-plus level, there's a little more doom and gloom. It's not entirely rosy.
[00:24:33] I've heard a couple of rumors of term sheets getting pulled for companies in logistics and chip-making tools. Anything that kind of touches things that will be heavily tariffed and that is raising right now. It's not that they're not going to be able to raise. It's just looking a little tougher. Anything in retail is not looking so great. So yes, there are carve-outs, where there are things that are not doing well. But there's also, like, the consumer packaged goods companies, like the D2C marketplaces that were really big in the boom times.
[00:25:01] They haven't been doing super well, honestly, for a while. And this is just kind of the nail in the coffin for them, unfortunately. You know, if you're sticking to, like, VC, software companies, AI engineering, AI applications, those deals are fine. That has not slowed down at all. No one, especially in early-stage VC, is worried about the markets right now. It's just when you get to that sort of mid stage that people start to slow down a bit. If you had to do business, tech is where you'd want to do it.
[00:25:30] You know, the least physical goods possible. I mean, obviously, Tom, you've touched on the fact that software is not just ones and zeros. There is a hardware component that influences the cost structure. What's your view on how much worse things are going to get for the tech industry? I mean, I feel pretty good right now. Patriots are in control. You know, we've got the All-In podcast people mostly running policy.
[00:25:58] It's kind of hard for me to feel more confident than that. It seems like everything's gone according to plan. So I don't know where any of this doom and gloom is coming from. I mean, look, if you want to get things done, you've got to do things. It doesn't matter what they are. You just got to do things. And I think we're in the era of doing things. And I couldn't be more impressed with the administration's doing of things. Whether the thing that they do undoes the previous thing doesn't matter, as long as it was a thing that was done.
[00:26:22] And I also want to say, I'm very impressed with the number of Twitter ads that I'm seeing that then have community notes saying, this is a scam propagated by a dropshipping company. Well, I think dropshipping is getting killed. I've literally seen tweets like, Donald Trump, I voted for you, I love you, I worship you, you can do no wrong, but you're killing my business. Please. Yeah. Yeah. Trump's completely turning his back on the whole dropshipping community, I think, was a huge mistake.
[00:26:50] Granted, these people don't seem to have money, or it's not denominated in dollars, it seems. But I really think, like, Trump versus the dropshippers... I don't want to make any predictions, but I make sure I keep a close eye on the dropshippers. If farmers are going to get a bailout, maybe they'll do a targeted bailout for dropshippers. If there's a dropshipping bailout, that would be incredible. If there's a dropshipping bailout, we're cooked. It's over. I'm sorry.
[00:27:18] I want Liberation Day 2 to be a huge press conference with, like, everybody that's selling, you know, stuff that was dropshipped from Vietnam. Tom, that is the end state of America that I've been very excited about. So that's as much as I can offer on this one. Well, with that last little dash of Tom nihilism, I think we pretty much covered the whole gamut of news this week. We've got, you know, the foundation models crushing it in terms of business.
[00:27:46] We've got VCs excited. Markets be damned, except maybe a little bit worried. But we won't talk about it now because we're still doing deals. Eric, you had a nice chat with Kyle Harrison about VC polyamory. How did that go? Yeah, he did great thought leadership. He made a viral image on Twitter, the ultimate influencer. That's like a research report in venture.
[00:28:07] Anyway, so Kyle Harrison at Contrary, friend of the pod, came on to talk about venture capitalists investing in many, many foundation model companies and his chart of all the incestuousness in venture and foundation model companies. So it was a fun coda to Tom's reporting on the state of models. Kyle and I talked about how venture capitalists are playing all the money flowing into foundation model companies. Give it a listen. Hey, it's Eric Newcomer.
[00:28:34] I'm here with Kyle Harrison of Contrary, former investor at Index and Coatue. Thanks for joining me. Thanks for having me. We are here to talk about one specific thing: infidelity among venture capitalists. You had a viral image in the tech world that shows all the different investments that VCs are making into foundation model companies. And, you know, a lot of the VCs are making investments into competitors.
[00:29:02] So, fun topic. Do you want to just start off with, like, the impetus for making the graphic and then sort of walk us through what you found? It was kind of a vibe that I had been noticing over the course of several months, where I would see these fundraising announcements, and it would be a big firm talking about – you know, I think it was xAI that really triggered me, where I saw people that were like, you know, we're so excited to back you again, you generational company, building all that. And it was all this stuff about how great the business was.
[00:29:30] And I was like, wait, didn't these people invest in OpenAI or Anthropic or whatever? And I went and looked and was like, oh, they did. Like, oh, I guess people are making a lot of cross, multi-layered investments and stuff. And then an anon account that I really like, who's one of the best LP accounts on here, was like, OK, wait, is backing competitive companies allowed now in venture? And I was like, let me go look at the cross section. It's like, yeah, I guess so. Yes, clearly is the answer. Do you just want to walk us through it?
[00:29:58] Like Andreessen: you have SSI, Ilya's company; xAI; OpenAI; and Mira's Thinking Machines. What are some of the other interesting ones? Yeah, I mean, I think the biggest thing was that you've got – and I think even some of these I missed; like, this was not necessarily a purely academic exercise. But what's crazy is that – No, you made a screenshot. Now you are the scholar on this topic. That's right. That's the level of thought leadership. That's how it works on the internet. This is much more baked than most VC takes, with this screenshot.
[00:30:28] Anyway, go ahead. Yeah, no, Harvard sent me a degree in the mail. Yeah, exactly. I feel like the biggest thing is like you can kind of clearly see. And I think somebody actually retweeted this and pointed this out that it's like a very specific strategy. We're like the one who's most active is Andreessen, which is consistent. Like Andreessen is super broad and investing in a lot of different companies. Martin Casado, who I love, said on stage at Cerebral Valley when I was like, how do you know which segment of AI to invest in? He was just like, why not all of them?
[00:30:59] They're, like, openly going for coverage at this point. And apparently even in the key categories with competitors. Yeah, that's Andreessen. Andreessen, I think, is everywhere. You've got Sequoia, who is also quite active, a little less than Andreessen. So a little bit more demure or whatever. And then I think you have firms like Founders Fund. And Sequoia, just to – I mean, Sequoia is interesting because they obviously – with Klarna, they have these famous ones where they've done both.
They try, I think, to be loyal. But they would have more of a story of, like, you know, I don't know. It's getting pretty broad. There are a lot of them. I was going to try and do some spin that it's like, oh, they're, like, exclusive in that sort of relationship with the Roelof world. And then they invest in the leader, which is clearly OpenAI, in my opinion. I don't know. But yeah, with any Sequoia analysis – I think because it's also – you think about the story of, like – I don't know if you remember this, but they invested in Finix back in the day on the payment side.
[00:31:57] And Stripe was pretty upset about it. And so Sequoia was literally like, our bad, keep the money. We won't take an equity stake. It's just a donation. Right. So they're definitely more thoughtful about where they're investing. I think to Sequoia's credit, one of the things that they've been very deliberate about is that they are more selective in the earlier stages. Somebody else, I think, pointed this out in a retweet where it's like, once it gets into the later stages, you're largely just – you're garnering exposure to assets.
[00:32:27] Like it's not as much – once these guys are raising $10 billion plus rounds or whatever, it's like you're really just getting exposure. I think Sequoia is a little bit more protective of their reputation at the earliest stages. Lightspeed is another one. I mean, Anthropic is going to be one of their biggest positions overall. Like because, you know, I dug into their returns and that was like – totally didn't realize that. And then they're in Mistral and XAI.
[00:33:22] I think they're also in one other – somebody else – I can't remember which one, but somebody pointed out that maybe there is one on here that I missed. Because Lightspeed has also been super active. And I would argue that Lightspeed has been playing not necessarily, like, catch-up, but they're really trying to deliberately demonstrate that they want to be in this bucket of, like, the Andreessens and GCs of the world. Like, just be very, very active. What's funny – where is GC? GC and NEA.
Like, if you're going to do the Andreessen-style blast-them-all approach, shouldn't you be in the defining category of our time? Yeah, it's a good question. And I will defer back again to the lack of my – It's like, hey, low blow. They might be in five of these and I just didn't want to look up another Crunchbase profile. We also didn't look up everyone. Okay. They'll be like, no, we're sluts in this. Who are the serial monogamists here?
[00:33:52] Or who do you think of the sort of big-brand firms has sort of stuck to one bet? Well, I think that – and that is actually a sort of oversight on my side, like, in making this chart, because originally it was supposed to show the cross section. But I kind of just started at the bottom. I said, okay, OpenAI, who's in that? Boom, boom, boom. And I worked my way around with the companies, adding firms as I went. And what I realized when I posted it and it kind of popped off is that I had left Founders Fund on there despite the fact that they are actually monogamous in this particular strategy.
[00:34:23] And the other one that I didn't put on here, but that people called out, is Thrive. And I think that, again, going back to this, this is actually a pretty indicative sampling of different firm strategies. I think that Founders Fund and Thrive are famous for being really dedicated to running an incredibly concentrated portfolio and just, like, doubling down, tripling down into their winners. And Founders Fund, despite being perceived as, like, I don't know, perceptive and early – you know, Airbnb was like a Series C.
[00:34:52] Some of these, they're like, what is the momentum company that's going to define the generation? How can we, like, back up the truck into it? And, you know, clearly they see OpenAI here. Well, and I think that's one of the reasons why they're not afraid to admit when they did, even if they didn't get there as early as possible. Right. And probably in that vein of monogamous, I think Khosla is another one, where they've been all OpenAI all day for as long as possible. Like, you need to roll that company into you. Just ride it. Jesus, right. Yeah.
[00:35:22] That would be insane, I think, to diversify away. It's one thing – you know, I think a consideration for a lot of these is they're getting into OpenAI at such a high price that it fits into the growth story we're talking about, where they were, like, not traditional venture investors in OpenAI. They're just like, we can't miss the company of our generation. And then they're like, well, let's try and do real venture investments in other model companies if we get a shot at it. Yeah.
[00:35:46] So I think that – but I think to Founders Fund's credit, even if they're not there at the earliest days, they're willing to acknowledge that, like, hey, listen, this has clearly broken out as a category definer. And they'll say that too. Like, I've heard Trae say this multiple times, where they talk about, listen, our perspective is that there's one company. And, like, if you're going to invest in space and you didn't invest in SpaceX, you probably lost money. If you want to invest in defense and you didn't invest in Anduril, you probably are going to lose money. Like, that's their thought process. And so even if they're getting there a little late, they're going to back up the truck.
[00:36:17] And that's Peter Thiel's like, you want to be a monopoly business. It's sort of like, you don't want to be in a company that's creating a category necessarily because then you have all these competitors. You want to be the company in a category that chokes out all the oxygen for everybody else. That's right. I also think that there is a narrative here, and we can talk about this if you want to, that there is increasing differentiation. It's very subtle and it's going to start overlapping again. But there is some semblance of different companies trying to prioritize different places.
[00:36:46] We can talk about that if you want to. No, I think, yeah, let's talk about it, especially in light of OpenAI potentially purchasing Windsurf, formerly known as Codeium. That speaks to them demolishing everything. But it also speaks to the idea that if you're an investor, it's hard to know where these companies are going to go. They're sort of going to lean into where they're successful. Foundation models in particular – it's like, you could become an application, you could be infrastructure. It's hard to know.
[00:37:13] So are you really – no one is saying VCs shouldn't invest in Windsurf. And if Windsurf is part of OpenAI's turf, then it's sort of like the floodgates are open. What would you say about that? So I think that it's actually very, very similar to what happened with the hyperscalers, where you look at, like, the Amazons and Googles and Microsofts of the world. The difference was that their sort of amalgamation of a bunch of different facets happened long after they had been founded, long after they'd gone public.
The difference was – it's not necessarily – because even, like, think about OpenAI launching a social app. I had a friend of mine tweet and say, so are, like, network effects just dead? Like, does nothing mean anything anymore? All the business-y stuff that we've learned is nonsense? And it's like, if you think about it, it's not that any of the fundamental principles have changed. It's that the velocity, or, like, the speed of the thing happening, is just so much faster than it's ever been before.
[00:38:09] But at the end of the day, like if OpenAI does launch a social app and it does turn out to be successful, it's going to be for the same reasons why when Meta launched Threads, they also got up there pretty quick. It's because they have a big, broad, established install base they can benefit from. So it's exactly the same dynamics, but they can play into different strategies. And so when I think about like each company, each model company, I feel like they are trying to carve out their niche in the universe.
[00:38:38] Some are fuzzier than others. But I do think the clearest distinction is when you look at the projected revenue of specifically Anthropic versus OpenAI. I think it was like 2027 revenue estimates that somebody had leaked or put out or whatever. But it was basically like, hey, by 2027, Anthropic believed that they would have, I think, like 5x the API revenue that OpenAI was projecting. And it wasn't Anthropic crapping on OpenAI.
[00:39:05] It was OpenAI's forecast for 2027 compared to Anthropic's forecast. And Anthropic assumes they're going to have five times as much API revenue as OpenAI. And it becomes really obvious that OpenAI, whether they meant to do this or not, is going to become largely a consumer-driven business. Like, ChatGPT is their cash cow right now. Whereas Anthropic is trying to squarely plant itself in the API world, sort of the B2B world.
That's a clear distinction that's defining a lot of the decisions those companies are making. You know, I love Anthropic, and they're very competitive. But the sort of infrastructure API business is, like, under threat from open source, in a way that having a dominant consumer application is a type of moat that we're all familiar with.
[00:39:54] It feels like, besides OpenAI having a lot of users, everybody else is sort of like, we need to run faster than our competition to have sort of incrementally better stuff. Obviously, over time, we build customer relationships, sales motions, sort of product differentiation. But it is totally – I mean, that's absolutely the bear case on Anthropic: that progress is just too quick for you to get in and establish any real, meaningful, like, enterprise install base.
[00:40:22] Whereas I think on the consumer side, that's different, because you see these surveys that say something like 65, 75 percent of adults in the US – like, not in tech, but, like, period – everyone has used an AI product. The vast majority of them are ChatGPT.
[00:40:38] And I think about this all the time: when we look back at the end of 2022, when ChatGPT came out, what's so interesting is, when you talk to the people who were the sort of experts in this world at that point in time, that was not a technological breakthrough. Like, that wasn't something that a lot of the companies who were operating in 2022 couldn't have done. Like, they could have put a chat interface on their model, but they didn't. And OpenAI put it out as almost an experimental tool, to say, hey, this is neat, you can kind of play around with it.
And it exploded because it was people's first direct experience with generative text. Everybody was too afraid of the tail risks. And only OpenAI was willing to unleash AI onto us all in a cavalier way. Yeah. Well, I think they even think it was an accident, right? Like, they didn't expect it to do that either. Right. But I mean, clearly, I think some of the breakout of Anthropic was, you know, the rollout of ChatGPT and everything.
[00:41:29] I do think there's a lot of room for consumer adoption, just because I feel like I'm begging people to use OpenAI, or to use ChatGPT. Like, my family members – I'm like, it's great. I'm, you know, super excited about o3, spending a lot of time on it. And, you know, these companies clearly need to find ways to bring the product to people in a way that doesn't take as much work. And that's going to be a huge buildup. Going back to venture, because that's really the focus of this conversation.
[00:41:58] Do you think what we're seeing with the models is playing out in other categories? Or do you think this category is unique where VCs invest all over the place? So I had one of my much smarter than me friends kind of articulate this analogy of like, when you think about the dot com, there was sort of, you know, almost the three layers of what was happening. And that really drove like the distribution of the Internet.
And you basically have – there were folks on the infrastructure side, building physical, you know, almost like compute and stuff like that. Then you have the networking, where the Ciscos of the world get to broaden out and get access to the underlying tech. And then you had the applications, right? The pets.coms and stuff like that. But the difference there, when you think about how that is similar to today, is that you look at the chip companies – chip companies once again are ripping, right? The sort of underlying infrastructure is still powerful, regardless of what's getting built on top of it.
[00:42:57] That's very powerful. And so I think that segment has more problems with, you know, geopolitical stuff than it does like distribution. But what's really unique about this point in time that makes it really difficult for venture investors is that in the dot com, you had networking companies that were like, we're just trying to get this out into the world. And then you had applications coming and saying, we're going to get this package and deliver it to people as a service. Today, you have the networking layer, which is effectively the model companies also building the application.
So it's effectively as if Cisco were launching pets.com. It would be really difficult for pets.com to compete with that, you know. And, you know, maybe it falls apart because it wasn't the underlying thing. But today, AI is the underlying aspect of that product. And so when you see folks like OpenAI buying Windsurf, it's a perfectly logical extension, right? You see this sort of wave of companies. To reframe what you're saying a little bit: you're saying the non-monogamy, or the polyamory, with foundation models is polyamory with everything.
Because the models sort of are competing with everything that's investable and interesting right now. And so it's like, yeah, if you're doing applications, infrastructure, whatever, inevitably you're going to be on a collision course with your model investment. That seems okay. It's just, I think it's just the pure model-on-model that feels like, oh my God, like, what – you know, how many closed-source models do we need? I guess, is the backdrop question.
[00:44:18] Well, and I guess the question, I think the thing that people are sort of struggling with is that like the, not just the progress of individual companies, but the progress from model to model. I mean, you look at like people's reactions from Claude 3.5 to 3.7. And it's like, that's a misstep that like could make or break the game for a while, right? Like that, it can happen that quickly.
[00:44:43] Or, like, you know, I think somebody else tweeted this – this, like, constant we're-so-back, it's-so-over cycle of, like, oh, I actually think, you know, 4.5 is not very good. And then it's like, oh, o3 is incredible. And, like, it's so quick with each model that it's so much harder – that, like, at the end of the day, yeah, if you're a massive multi-billion-dollar firm, you've got to go get exposure to everything, because you don't know what's going to come out on top. This is what we want from capitalism, isn't it? Everybody's competing, lots of money flowing into an interesting space.
[00:45:12] And, like, yeah, you've got to be the best of the best if you want to extract value. I don't know. And it's a category where I'm, like, happy for the world to burn some capital, you know, trying to crack it. So I don't know. The problem with dot com was, like, there was no distribution – there wasn't the customer base yet when the internet was getting set up. Now, in all the other bubbles, it's like, well, there's still distribution to quickly get a bunch of customers. I mean, you invoke dot com.
Are you worried about, like, a big bubble? Cool. So, my biggest concern is this idea that, like, hype cycles are built on momentum. Value cycles, if you will – like, actual value creation – happen when you are able to push through all the hype and all the excitement and all the experimentation and get to the actual creation of business value.
And if there is underlying business value that's consistent and replicable and changing the way an organization works, then you can capture some of that value. My sense is that the volume of capital and the volume of hype is not commensurate with our ability to articulate and execute on actual business value. Like, those feel wildly mismatched.
And part of what made that possible this time is every enterprise in the world was willing to spend money initially to say, oh, how do we figure out AI? But that could be a sort of false signal on how much they stick around when they don't get the value. And that goes back to your question about, like, you know – the biggest thing that we have to unlock is figuring out how we get people to use this more consistently in a way that creates value. That's not just true of, like, you and I trying to convince our mom to use ChatGPT or whatever.
[00:46:59] That's also true. Yeah. We look too similar. We can't be saying something like that, but it's, but like that, that kind of stuff, like it's from the consumer side is also true on the business side. And I think all the time about like, yeah, we've seen crazy like scale ups. Right. But it's like, okay, what are probably the three companies that have made the most money from this pop? And it's like NVIDIA, which is like, okay, demand, demand, people need it, whatever.
OpenAI, which is, like, offering it to people to play around with and use. And then, like, Accenture, who's making an ungodly amount of money helping people figure out, like, what do we do with this? Like, how does this actually impact our business? But outside of that, there haven't been a ton of case studies, at least that I've seen, of, like, oh, John Deere is suddenly ramping up. You hear some stories – like, I think it was the, what, the Klarna story. Yeah. Klarna and Shopify, but some of it feels very aspirational.
Like, they want to be seen as getting it. Yeah. But it's like – I want – yeah, totally. Listen, I would love for that to happen. Like, I think the fears of, like, job destruction and stuff like that – I agree that it's the same as, like, Willy Wonka – or what's his name, Charlie, in the Chocolate Factory – his dad getting displaced by a robot and then getting a job fixing the robot that took his job. Like, there are lots of opportunities for evolution. I'm not afraid of that. I want that to happen. The question is just: is it capable of doing that?
And I don't know that we're there, which could set us up for a very painful pop. But you look back at the dot com, and, like, Amazon and Google came through the dot com. Like, there will be businesses that get built that are still generational businesses, despite the fact that they will participate in short-term pain. Sam Altman and OpenAI did reportedly try to scare investors off investing – you know, he had that list that was reported. Do you think that was just, like, a total failure, or any observations about that?
Well, so what was interesting is, if I remember correctly, one of the biggest sort of stinks that was made about that list was Glean – whether people could invest in both Glean and OpenAI. And here's the thing – so, to go back to the question of, like, you're not as worried about the infidelity between, like, the model and application layer.
Like, I actually don't think that you can distinguish, because at the end of the day, increasingly, it seems that it is going to be the same thing. Because, again, to your point about the API revenue – it's really difficult to believe that that's long-term defensible. Like, I thought it was fascinating when I dug into all of the codegen agents, right, where you look at, like, the Cursors and Windsurfs and Lovables and Bolts and Replits of the world and stuff like that. They're all using Claude. Like, at the time, they were using Claude 3.5.
This was a couple months ago, but they were all using that model. But it's like, tomorrow we can all switch over to this one, because now Gemini is the better codegen model or whatever. Like, that progress means it's going to be incredibly difficult for any of these companies to justify their existence at the model layer. Which means, well, where do you make money? You go pay $3 billion for Windsurf, because that's where you're going to make money, because people want to have a thing that they use.
The bigger question there, then, for Windsurf users is, like, okay, wait, but am I going to be stuck now on GPT models? And it's like, not if they want to keep that install base – like, they're going to have to let people use different models. And on that acquisition, people are like, oh, you can also train your models on the behavior on Windsurf.
So there's an argument that, you know, you could be OpenAI, leave customers allowed to use whatever model they want, and then learn a lot to make your model better, so that eventually they want to use your model again. Yeah. But if I think about, like, okay, if I'm OpenAI and I want to rule the world – and I'm not, because I do think also, like, again, going back to this, some of these companies will differentiate.
I think that SSI is going to sort of go off on this, like, AI safety, let's-save-the-planet stuff. And it's like, let's see how that goes. I don't know what that's going to evolve to look like. But it's clear that OpenAI is not – you know, again, maybe the narrative and the marketing is making AGI accessible for everybody or whatever. And it's like, right, they want to own the world and they want to make a lot of money. And they're trying to become a for-profit so that they can raise more money to make more money. Like, they're clearly trying to own the world.
And it's like, okay, if I'm OpenAI and I need to justify a hundreds-of-billions-of-dollars valuation, how am I going to do that? It's not going to be, I promise I'll keep making the next best model, in, like, a constant, you know, hamster wheel. It's going to be, I'm going to be the thing for your workplace. And I'm going to be the thing for your planning and your personal life and your AI agent and whatever.
I was going to say, initially, OpenAI is a uniquely problematic juggernaut from an investor perspective, because it is a nonprofit. And, you know, because it had the ousted CEO and everything. It's like, I can see why you'd want to sort of hedge your bets – that maybe it's sort of, like, a shakier dominant player than in past generations.
But then you reflect and you're like, wait, Uber had the Travis stuff. Like, Facebook – Mark Zuckerberg was, like, firing his whole executive team and had to, like, pivot to mobile. In some ways, the juggernaut-status company was never so clearly anointed as when we write the story. But, you know, going through it, you're like, man, OpenAI could totally fail to become a for-profit, and then I'd better have an investment somewhere else. I wanted to ask – and I guess let this be sort of the last question.
What, without talking your own book, should a founder do based on this information? Or, like, what do you think – if everybody's investing in everybody, like, do I need to go to Coatue and Thrive? Or is it like, I need to just accept it? Or what advice would you give to a founder faced with this polyamory? So, increasingly – and I don't know that I'd go so far as to say that polyamory is rampant in every category, at every stage, for every firm.
Like, everybody's different. But what we're seeing here is a symptom of something that you and I have talked about a lot before: this sort of capital agglomerator, like, rise of the, you know, Blackstones of our industry, right? Like, that's what's happening. This is a symptom. This is a – The mega funds, for lack of a better term. Yeah. Yeah. Like, this is a subsystem of that. And I think that in thinking about that – because it is true that they're going to do more, there are going to be more applications that they're investing in that may be competitive and whatever, because they're just trying to deploy capital.
What founders need to understand – and what I've spent three years writing about on my Substack – is that, increasingly, venture firms, or at least the business model of venture firms, have become more focused on asset management. That's not the marketing.
[00:54:10] Or your sounding board. There are firms and there are investors that you can go get who will be your confidant and your friend and support and whatever. And their firms are set up more aligned with your outcomes. These larger firms are not that. And so people need to take a, like a crash course in how public companies manage the asset managers that buy their stock, because that's the way you need to treat some of these firms is like that.
And that goes to things like information rights and strategy, and who you let into what round, and what percentage of ownership, and how much they control of your pro rata and your, you know, majority voting rights and stuff like that. Like, you need to manage them as the asset managers that they are and be very, very careful. It's not bad. It's a good weapon that you can use in a big fight, but you have to be really careful. Kyle Harrison, Contrary. Thanks so much for joining us. Thanks for having me.