Has AI Hit a Wall? A Debate from the Cerebral Valley AI Summit
The Newcomer Podcast · December 05, 2024 · 00:42:56 · 40.77 MB

I’ve been spending some of the afternoon chatting with OpenAI’s now fully released o1. So far, I don’t know that it feels like the super intelligent ChatGPT 5 that we’ve all fantasized about — but it’s smart and sophisticated. The new model helped me to game out potential stories and talk through problems. And of course it wrote me a poem and told me a couple dad jokes.

It looks like the biggest improvement in the new model may be in math and coding, where OpenAI is highlighting meaningful improvements over o1-preview.

It will take some time to digest the new version of the model and see what it says about the pace of AI advancement.

Before the latest update, Max Child, James Wilsterman, and I got behind our microphones to reflect on the Cerebral Valley AI Summit and give you some of our takeaways.

(Max has already hooked up the advanced voice mode of ChatGPT to his iPhone action button. Goodbye Siri, hello ChatGPT.)

Give it a listen.

Brought to you by Brex

Brex knows runway is everything for venture-backed startups, so they built a banking solution that helps them take every dollar further. Unlike traditional banking solutions, Brex has no minimums and gives startups access to 20x the standard FDIC protection via program banks.

Plus, startups can earn industry-leading yield from their first dollar — while being able to access their funds anytime. If you want to make sure your portfolio companies have a place to save, spend, and grow their capital, check out Brex here.

00:00 — Cerebral Valley AI Summit Overview

02:50 — Key Takeaways from Alexandr Wang's Talk

05:46 — The Wall in AI Foundation Models

08:57 — Dario Amodei’s Perspective on AI Progress

12:13 — Investing in AI: Insights from Martin Casado

15:06 — The Future of AI Agents and Voice Technology

17:56 — The Role of AI in Gaming and User Interaction

20:46 — AI in Enterprise: Trends and Predictions

24:03 — Challenges in Robotics and Home Automation

26:58 — Marissa Mayer on Google's Future in AI

29:51 — Final Thoughts and Future Outlook



Get full access to Newcomer at www.newcomer.co/subscribe

[00:00:03] Welcome to the Cerebral Valley Podcast. I'm Eric Newcomer. I'm here with Max Child and James Wilsterman, my Cerebral Valley co-hosts. Hey, guys.

[00:00:11] Good morning, Eric.

[00:00:13] Hey, James.

[00:00:14] Hey, Eric. Happy to be here. Excited to talk to you guys.

[00:00:17] Great. We survived the Cerebral Valley AI Summit. It was the week before Thanksgiving.

[00:00:22] We all checked out for Thanksgiving, and we're back here to give you the audio rundown of the most important things that happened if you couldn't be there.

[00:00:32] Max and James, why don't you start off with a favorite moment from the event off the top of your head?

[00:00:38] I think my favorite moment has to be Dario Amodei from Anthropic at the end, refuting the case that we don't have to worry about AI because it's just math by pointing out that all neurons in everyone's brains are just math.

[00:00:55] And therefore, we should be afraid of math because the math might produce a Hitler, for example, who just had a bunch of math going in his brain.

[00:01:01] Or he said Hitler was just math, basically.

[00:01:04] Yeah, Hitler was just math.

[00:01:05] That was basically the one-liner.

[00:01:06] His team, I don't think, was happy he said it that way.

[00:01:09] I did, as a moderator, try to be like, oh, that makes sense to me what he's saying.

[00:01:14] I have also had this thought in my mind whenever people say something is just math because you could say anything good or bad in human history is just math.

[00:01:21] Nuclear bombs are just math.

[00:01:22] Hitler is just math.

[00:01:23] Everything is just math fundamentally because all physics is just math.

[00:01:25] So I thought that was a good point because I always thought that's been a very dumb take that we shouldn't be scared of AI because it's just math.

[00:01:31] James, highlight?

[00:01:33] Sure.

[00:01:34] I mean, the first talk of the day I thought was great.

[00:01:36] You interviewed Alexandr Wang from Scale.

[00:01:39] I thought he was letting us in a little bit on the wall discussion in AI foundation model training because he sees it from all of these AI foundation model companies needing more data.

[00:01:52] And it seems like on pre-training we are hitting a wall on data.

[00:01:56] I think that was the clear message.

[00:01:58] Very interesting.

[00:01:59] We'll debate that in a second.

[00:02:00] I think that clearly defined the day.

[00:02:02] I'd say my, I mean, it's the friends we made along the way.

[00:02:07] I mean, I love everybody who is there.

[00:02:09] And I think, like, just a good group of founders and, like, getting everybody in a room and hanging out is a lot of fun.

[00:02:15] Man, when I started hosting events with you guys, like I did not know anything about the events business.

[00:02:21] Then last year, we had two Cerebral Valleys and I got married.

[00:02:24] And, you know, now we've done three events this year.

[00:02:26] I was like, oh, I'm a big believer in events.

[00:02:28] And it's like, oh, we had great chairs.

[00:02:30] So everybody sat close to each other in the front.

[00:02:33] And like, it felt like standing room only.

[00:02:36] And like, so that was, yeah, I like the vibes.

[00:02:39] I thought the vibes were really good.

[00:02:40] People were dialed in.

[00:02:41] I almost wanted to say like, go downstairs and like talk to each other.

[00:02:44] Like you don't have to sit here and watch everything.

[00:02:46] But obviously if you're putting on stage stuff, you're happy people want to watch.

[00:02:50] So yeah, I thought it was a good, good vibes.

[00:02:53] It was probably my favorite.

[00:02:54] The audience was dialed in.

[00:02:55] The audience was dialed in.

[00:02:56] I think people wanted to hear from the panelists, which is great.

[00:03:00] That means we have a great panel.

[00:03:01] This episode is presented by Brex, the financial stack that founders and VCs can truly bank on.

[00:03:08] Imagine what your founders could do with their runway if they had a banking solution that had no minimums,

[00:03:14] no transaction fees, and 20 times the standard FDIC protection.

[00:03:18] Plus, they could earn an industry-leading yield while maintaining access to funds whenever needed.

[00:03:24] Brex simplifies financial services for startups so they can focus on building.

[00:03:30] Connect your portfolio to the financial stack that one in three U.S. venture-backed startups already use.

[00:03:35] Check out brex.com/banking-solutions.

[00:03:41] So yeah, we're starting off.

[00:03:43] James, you started to hint at it.

[00:03:45] Alexandr Wang, CEO of Scale AI, started off the day and really framed a conversation for the whole conference,

[00:03:53] which is have we hit sort of a wall?

[00:03:56] And obviously a wall is a somewhat broad term, but he basically said-

[00:04:02] Can you explain the wall for people?

[00:04:03] Because I feel like we're using this vernacular.

[00:04:05] Yeah, yeah, yeah.

[00:04:06] We're insiders here, but what is the wall?

[00:04:08] The wall is this idea that foundation models aren't going to get smarter.

[00:04:13] You know, that fundamentally everything in AI world depends on sort of the power of models.

[00:04:20] And the ways that models are supposed to get smarter are either they throw more NVIDIA GPUs at it,

[00:04:26] basically, and just get bigger.

[00:04:28] And the money's there, it seems.

[00:04:30] So the money's there to spend on the GPUs.

[00:04:32] NVIDIA is producing them.

[00:04:33] Amazon and others are trying to get in the game.

[00:04:35] And so there's this real question of like, okay, given you have access to GPUs and money to spend on them,

[00:04:41] will you be able to improve the models?

[00:04:43] And nobody really knows besides OpenAI, Anthropic, and a handful of people.

[00:04:48] And so there's a lot of skepticism that that's going to be enough.

[00:04:51] So they need sort of new approaches.

[00:04:53] And then there's also this question of, given that OpenAI seemingly has like inhaled all of YouTube and like everything else on the internet,

[00:05:02] is there really more data to have, even if they're manually using companies like Scale to find more?

[00:05:08] Like, are they really going to train these models?

[00:05:11] So the wall is this building of the foundation models.

[00:05:14] Has that tapped out?

[00:05:16] And I mean, Max and James, yeah.

[00:05:18] How did you take Alexandr's response to that question?

[00:05:22] I mean, he was the first speaker, as you said.

[00:05:24] I would say, had I just watched him, I would have thought, oh, we're definitely hitting a wall.

[00:05:30] Like, he basically said it without saying it as in, yes, it's actually quite a struggle to get these models to the next generation, to scale up the data.

[00:05:39] You know, everyone's talking to me about how we can do this, how we can clean the data, how we can create more synthetic data, this kind of stuff.

[00:05:45] Like, it was a very short case for the next generation of models, which obviously worried me because I'm very excited about the next generation.

[00:05:52] I mean, he said it seems to be the case that we've hit a wall on pre-training.

[00:05:56] So the large cluster training on huge amounts of internet data that seems to have genuinely hit a wall, but we haven't hit a wall on progress in AI.

[00:06:03] So, James, you want to explain this progress in AI point?

[00:06:06] Yeah, I mean, I think he's making a distinction here that maybe we have hit a wall on pre-training performance.

[00:06:13] So maybe we've run out of data or, you know, just adding compute doesn't seem to be getting the gains that we would expect or saw in prior generations.

[00:06:23] But I think he's making the claim that that doesn't mean we're going to see diminishing model performance, let's say, next year or beyond,

[00:06:31] because we will be able to improve the models with test-time compute or synthetic data, like Max mentioned.

[00:06:38] We're talking a little bit here, too, about human reinforcement learning.

[00:06:41] I think some of it is happening in sort of this post-training where you built the model and you're sort of cleaning up around the edges, getting better data.

[00:06:50] And so it's sort of like, well, with Scale, we can get past the wall.

[00:06:54] But the classic foundation model approach, you know, basically "attention is all you need": run a bunch of transformer models and that's it.

[00:07:02] That approach, you know, the argument is, is hitting a wall.

[00:07:05] Well, it sort of makes sense in a sort of philosophical way that the models are about as good as they're going to get in terms of giving like quick answers to questions.

[00:07:16] Right. It's like we've scraped all the data on the Internet and we've scraped every video on YouTube.

[00:07:20] So we basically collected all of the sort of artifacts of human intelligence that you can write fairly quickly.

[00:07:26] And so these models are like about as good at quick answers as you could possibly get.

[00:07:30] Right. But what everyone was saying, and what you guys are sort of alluding to with this post-training concept, is the idea of thinking for longer periods of time and sort of ruminating on what a good answer to a question would be.

[00:07:43] And what they call, you know, chain of thought, and sort of iterating on that, is where the next generation of gains will come, which analogizes to humans in many ways.

[00:07:53] Right. Like, very, very few humans can give a quick answer to such a broad array of questions as the current generation of models.

[00:08:01] But we obviously have Einsteins and Oppenheimers and, you know, great literature, of course, Shakespeares and Jane Austens, where they had months and years and decades to sort of think about things and then put out these kind of finished products that moved the human race forward.

[00:08:19] Right. And it kind of seems like we're hitting this wall at like, yeah, this thing can instantly give you an answer at about the maximum level of human intelligence.

[00:08:26] But the big picture of the superintelligence, the next generation, will be: give the model a really interesting problem.

[00:08:33] Let it think about it for a day or a week or a month and then you'll get something really, really good out of it, which I think is kind of an interesting metaphor.
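To make the "think longer" idea concrete, here is a rough sketch of one common way to spend more test-time compute: sample several chain-of-thought answers and keep the most common final answer (self-consistency). This assumes the OpenAI Python client; the model name is a placeholder, and this is only an illustration of the concept, not how o1 works internally.

```python
# Sketch: trade more inference-time compute for a better answer by
# sampling several chain-of-thought responses and majority-voting the
# final line (self-consistency). Illustrative only; not o1's internals.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_more_thinking(question: str, n_samples: int = 5) -> str:
    finals = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-4o",   # placeholder model name
            temperature=1.0,  # encourage diverse reasoning paths
            messages=[{
                "role": "user",
                "content": question + "\n\nThink step by step, then put "
                                      "your final answer alone on the last line.",
            }],
        )
        finals.append(resp.choices[0].message.content.strip().splitlines()[-1])
    # More samples means more "thinking"; return the consensus answer.
    return Counter(finals).most_common(1)[0][0]
```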

[00:08:39] Podcast listener, you get an advantage over the conference attendee in that we can skip to the end of the conference where we had sort of the rebuttal.

[00:08:46] Well, literally, you know, first session: we seem to have hit a pre-training wall.

[00:08:51] Last session, Dario Amodei, the Anthropic CEO, you know, says basically we haven't hit a wall.

[00:08:57] I was among the first to document the scaling laws and the scaling of AI.

[00:09:01] Nothing I've seen in the field is out of character with what I've seen over the last 10 years or leads me to expect that things will slow down.
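For reference, the scaling laws Amodei is pointing to are usually written in a Chinchilla-style form (Hoffmann et al., 2022), where loss falls as a power law in model size and data. This is the standard formula from the literature, not something quoted on stage:

```latex
% Chinchilla-style scaling law: loss as a function of parameter count N
% and training tokens D, with irreducible loss E and fitted constants
% A, B, alpha, beta.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% On the "data wall" reading, if usable tokens D stop growing, the
% B / D^beta term stops shrinking no matter how much compute goes into N.
```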

[00:09:08] I think one challenge with his point of view is that it still seems like we need new ideas or new approaches to continue to improve things.

[00:09:20] Or, you know, neither Alexandr nor Dario is saying, yeah, just throw compute and scale at this and it will continue to work.

[00:09:28] I just think they're both saying the same thing, which is we need to do things slightly differently than we've done them before.

[00:09:35] But we have enough visibility into that that we feel confident the AI output will continue to improve.

[00:09:44] But Dario is more on the side of, the model companies have, like, good ideas and sort of an approach.

[00:09:50] They'll continue to put models out at, you know, the same pace.

[00:09:52] Maybe you felt like Alexandr was like, I don't know.

[00:09:55] I mean, I think we have an idea of how this could improve, but it's not actually guaranteed.

[00:10:01] I mean, they're both talking their books a little.

[00:10:03] I mean, I do think Anthropic is pretty dependent on being one of, or the, strongest model companies.

[00:10:11] Well, I think just to wrap this discussion, maybe Ali Ghodsi as well, because I think a big portion of his discussion was also, are we hitting this mystical wall?

[00:10:20] And he sees it from the data and infrastructure side.

[00:10:22] His take was it's just really effing expensive to train the next generation of models.

[00:10:29] And so possibly one of the reasons we're hitting the wall is that the next order of magnitude of cost is maybe not sustainable for any company, even the FAANG companies potentially, or certainly any company below the level of the FAANG companies.

[00:10:42] So I think he was a little bit short on the next generation of scaling really driving new models.

[00:10:49] But as you said, the kind of cherry on top of the scaling wall discussion was Dario being like, no, full throttle, ready to go.

[00:10:56] Then like the day after the conference, he's got a partnership with Amazon.

[00:11:00] I don't know.

[00:11:01] My sense is Databricks, you know, has seen the progress of Facebook's Llama and basically said, oh, that's a pretty good open source model.

[00:11:10] I don't know how much we need to be building our own models.

[00:11:13] So I think his strategy has been more like, oh, we're never going to be competitive relative to all the spending.

[00:11:18] But it seems like there are lots of signs that the hyperscalers are going to spend to build the models as long as they see evidence that that spending will produce an improvement on the trajectory we've seen so far.

[00:11:33] And that's sort of where this, you know, it becomes like, what are they seeing?

[00:11:36] Are they seeing signs that all that spending will deliver something?

[00:11:40] I think the next big hitter that, you know, you would like to dig into, given your kind of relationship there, is Martin Casado, right?

[00:11:49] The a16z VC who has gotten deep into the kind of political intrigue, and was probably the most prominent opponent of the California AI legislation, SB 1047.

[00:12:02] And I think specifically had an insight into what they look for in companies that are taking advantage of this new generation of models.

[00:12:08] So what was your take on Martin?

[00:12:09] Yeah, I mean, I think he had this line, you know, it's just like, well, actually, I'll flag two things.

[00:12:16] One, a little broader.

[00:12:18] It was sort of like, what's investable in AI?

[00:12:20] Like, who has the moat?

[00:12:22] And his take was sort of, you know, we don't know.

[00:12:25] And so we're going to invest in everything.

[00:12:26] And I, you know, joked that that was my sense of the Andreessen Horowitz approach to investing, which is just like, let's cover it all.

[00:12:34] And like, because you don't know what's going to be sort of the future.

[00:12:37] So I thought that was a funny, I mean, he didn't really like push back on my characterization.

[00:12:43] More specifically, I thought he did have a point. I mean, I thought he was great overall.

[00:12:47] And even that part, they might be right.

[00:12:49] Just, like, indexing AI is the right approach.

[00:12:52] But he sort of made the point that he thinks in sort of business applications, there's a strong chance that what's going to differentiate companies is the same stuff that differentiated them in the software or the SaaS era.

[00:13:06] It's just like, know your customer, know how to sell to them, you know, have both sides of the marketplace, have great integrations in the space.

[00:13:15] You know, it's not necessarily, like, have the best model or have the best coders in, I don't know, call center APIs.

[00:13:27] Like if you're going to build the next generation of software company, it might not be you have sort of the hardest core engineers.

[00:13:34] It's that you know call centers, you know what their customers want and how to sell to them.

[00:13:39] You have a strategy, start to build integrations.

[00:13:41] I don't know, what do you guys think about that idea?

[00:13:43] I mean, it's sort of a core, persistent question in Silicon Valley, which is like, how much do you need a technological edge, or can you just be a great sort of sales and product company?

[00:13:55] Yeah, I mean, I don't think any company has had like a sustainable software edge for the long term.

[00:14:03] It's like my odd take on Silicon Valley.

[00:14:05] I was arguing with a lot of VCs about a year ago when we were fundraising where they're like, oh, you know, do you guys have like the best generative AI for voice games?

[00:14:13] And I was like, no, like obviously OpenAI and Anthropic have the best generative AI.

[00:14:18] And, you know, I just think, similarly, we just forget our history so quickly.

[00:14:23] Like, you know, you didn't have to have the best cloud software to build a cloud based business.

[00:14:27] You didn't have to have the best, you know, database to build a business with databases.

[00:14:31] You know, you didn't have to have the best operating system to build an applications business on Windows or Mac or whatever.

[00:14:36] You know, I mean, there's so many layers of technology that end up being commoditized and there's a few big winners.

[00:14:41] And then folks who are building the applications on top often end up still being massive successes, but they're not driving these technological advantages.

[00:14:50] And these technological advantages often end up being, you know, capital and scale and kind of economies-of-scale advantages, which is, I think, where we're seeing the OpenAIs and the Anthropics just kind of running away from everyone: there's this, like, winner-take-all cycle where having the best model means you get all these customers, which means you get all this VC money, which means you get to pour it all into GPUs, which means you get the next-generation best model.

[00:15:11] And, you know, after two turns of that wheel, you're gone.

[00:15:15] No one can ever catch you, basically.

[00:15:17] Well, that's the hope, though.

[00:15:19] We've talked about how open source models have been surprisingly able to catch up.

[00:15:22] And so Facebook definitely is like trying to be the third person in that game, basically.

[00:15:26] Right.

[00:15:27] With Llama, the Android.

[00:15:29] Yeah.

[00:15:29] Martin also mentioned, which I thought was interesting, that he sees opportunity for small models.

[00:15:36] Right.

[00:15:36] Right.

[00:15:36] And I think we've seen this a lot in the last year, that there are companies who are able to compete by having small but very specifically focused vertical models.

[00:15:48] Right.

[00:15:48] Like music generation or image generation or video generation.

[00:15:52] Like, it does seem that that's more fertile ground for startups to train models in, those creative fields, maybe.

[00:16:00] And I don't know, maybe we'll see all of that get kind of eaten up by OpenAI and Anthropic as well.

[00:16:08] But so far right now, it does seem there's a lot of startups succeeding by training these smaller creative models.

[00:16:13] Yes.

[00:16:14] And presumably, a theme we've talked about on this podcast: models that are optimized for cost.

[00:16:19] It's like if you're focused on video or voice and you're doing it every day, you're like, oh, you know, it's not just about getting the best performance so we can show off.

[00:16:28] It's like doing it in a way that is sustainable and people can build businesses on top of or you can build a business on top of.

[00:16:35] Another thing that really stood out to me from Martin's interview with you, Eric, was this kind of question of, like, what is an agent?

[00:16:45] What are we going to see in the next year from agentic experiences?

[00:16:49] And he had like, I thought, a very good definition or at least like two definitions of agents.

[00:16:55] Right. One being that agents are what we think of as human agents, like a customer support agent or a travel agent.

[00:17:02] Right. Something that you can communicate with.

[00:17:05] It sort of replicates that experience of talking to a human to accomplish a task, that being kind of bucket A, and then bucket B being what a lot of people are excited about now.

[00:17:15] But what we haven't really seen yet is the ability for these models, or agentic models, to go and take action on your behalf over long periods of time.

[00:17:25] And I think something we'll see more and more is allowing these agentic systems to go, you know, use your computer, use a web browser, and just go do things on the Internet for you for hours and hours at a time.

[00:17:40] Maybe even coding, or using other human-like interfaces like a browser or a code IDE.

[00:17:48] Right. And actively using them without human input.

[00:17:52] So I'm excited for both those experiences.

[00:17:54] Curious if you guys had thoughts on his discussion of agents.
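A toy sketch of that second bucket, purely to make the loop concrete: a model repeatedly chooses an action, we execute it, and the observation feeds back in with no human input. The `llm_decide` and `run_tool` helpers are hypothetical placeholders, not any particular product's API.

```python
# Toy agentic loop: the model picks an action (browser click, shell
# command, code edit), we run it, and the observation feeds back in.
# `llm_decide` and `run_tool` are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # (action, observation) pairs

def llm_decide(state: AgentState) -> dict:
    """Placeholder: ask a model for the next action given goal + history."""
    raise NotImplementedError

def run_tool(action: dict) -> str:
    """Placeholder: execute the chosen action and return what happened."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 500) -> str:
    state = AgentState(goal=goal)
    for _ in range(max_steps):  # can amount to hours of wall-clock time
        action = llm_decide(state)
        if action.get("type") == "done":  # model declares the task finished
            return action.get("result", "")
        state.history.append((action, run_tool(action)))
    return "ran out of steps"
```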

[00:17:58] I mean, this sort of leads into my voice AI panel, but basically everyone said we're going to pass the audio Turing test next year,

[00:18:04] i.e., by the end of 2025, meaning we could be on the phone with a customer support agent and not know if they're a human or not the majority of the time.

[00:18:12] I buy that.

[00:18:14] Obviously, James, you and I have been prototyping a lot of voice AI game stuff recently, and it's unbelievably realistic.

[00:18:19] And I think that customer support is in the end fairly intellectually not very complicated.

[00:18:26] And so you could easily throw a fairly simple AI voice agent at customer support and handle 80 percent of support calls to the Apple or the AT&T phone numbers or whatever, like ASAP.

[00:18:38] The voice discussion I thought was super interesting.

[00:18:41] I mean, sort of give us a sense of, you know, this was a larger panel, the sort of characters on it, including yourself, and sort of where each of you sits in the voice world.

[00:18:53] You know, what motivates each of you in terms of voice and then what you thought was interesting out of it?

[00:18:58] Yeah. I mean, so voice AI, I do think, even though I'm very biased, is a hot topic this year.

[00:19:04] And I think we'll be going forward because talking to other people is obviously a huge portion of human civilization.

[00:19:10] And I think we're now able to replicate that.

[00:19:13] And so the people on the panel, we had two kind of tools slash infra companies.

[00:19:17] We had Cartesia, which does voice output, i.e. synthesized AI voices.

[00:19:23] And then we had Deepgram, which is mostly known for speech recognition, but is also adding kind of large language models and also voice output.

[00:19:31] And then on the kind of product and application side, we had Character, which is famous for letting people chat with Tony Stark in a web browser.

[00:19:39] But what was interesting here was they've added a voice mode where you can actually call Tony Stark and have a conversation with him as long as you want.

[00:19:47] Or you can type to him and he'll chat back to you.

[00:19:50] And then there was myself hosting the panel.

[00:19:52] We make voice AI games that are powered by chatting, with a game show host, or in medieval times convincing a knight to go slay a dragon, or something like that.
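The stack those companies map onto is the standard voice-AI loop: speech recognition, then a language model, then voice synthesis. A skeletal sketch, with all three helpers as hypothetical placeholders rather than any vendor's actual API:

```python
# Skeletal voice-AI turn: speech recognition (the Deepgram layer), a
# language model, then voice synthesis (the Cartesia layer). All three
# helpers are hypothetical placeholders, not real vendor APIs.
def transcribe(audio: bytes) -> str: ...       # speech-to-text
def chat(history: list[dict]) -> str: ...      # LLM reply to the dialogue
def synthesize(text: str) -> bytes: ...        # text-to-speech

def voice_turn(audio: bytes, history: list[dict]) -> bytes:
    history.append({"role": "user", "content": transcribe(audio)})
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})
    # End-to-end latency across these three hops is a big part of what
    # makes the "audio Turing test" hard: replies must land at human speed.
    return synthesize(reply)
```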

[00:20:02] Two interesting things about the panel really stood out to me.

[00:20:05] One was, as I alluded to earlier, everyone's saying we're going to pass this audio version of the Turing test next year.

[00:20:11] As in, you're going to be able to talk to an AI and not know it's an AI for extended periods of time in the next 12, 14 months, whatever.

[00:20:19] Everyone's take was basically this is going to revolutionize like the phone call as a meta concept, right?

[00:20:25] Like everything you can imagine doing a phone call for that isn't calling your friend or your mom or whatever is going to be completely revolutionized by the fact that

[00:20:33] a company is not going to need a person on the other end of the line, whether that's customer support or, you know, just getting the hours of a business or ordering from a restaurant or, you know,

[00:20:46] anything you could do a phone call for.

[00:20:48] We're no longer going to need humans.

[00:20:50] And that's going to be, I think, a huge, huge impact on the sort of cost structure of a ton of different companies.

[00:20:56] But also it might it might make people a lot more likely to call these companies to go solve their problems.

[00:21:01] That was interesting.

[00:21:02] And then the second interesting thing, more on the consumer side, which is where I focus: the Character AI CPO, Erin Teague, who spoke, said when they added the voice mode,

[00:21:12] they thought people were going to be calling these characters all the time.

[00:21:14] But they actually discovered that a lot of what people want to ask fictional characters is pretty embarrassing.

[00:21:20] Like, she specifically said in our preview that she had wanted to ask Martin Luther King why he had cheated on his wife, as such an amazing, interesting figure in history.

[00:21:32] And she said, I obviously would never ask the real Martin Luther King that question, but I really wanted to know from this character version of Martin Luther King.

[00:21:40] And so I asked him this question and he gave us a really nuanced, thoughtful answer about how, you know, he was on the road all the time and it was a really complicated existence.

[00:21:50] And he didn't have the best relationship with his wife.

[00:21:52] And, you know, he had failed morally in some ways while still being a civil rights leader, which I don't know.

[00:21:57] I thought that was a really interesting anecdote.

[00:21:58] Her point, though, was that people don't want to say this stuff out loud.

[00:22:02] They don't even want to hear themselves say this when they're speaking to an AI character.

[00:22:06] And so what's actually very popular with their voice mode is typing these kind of embarrassing questions in a box, whether on your computer or your phone, and then hearing the voice output from the character, because that creates this more kind of emotive experience and feedback with the character.

[00:22:21] So I thought that might be like the beginning of a kernel of insight of how people want to interact with AI characters.

[00:22:28] You know, maybe that's just specific to their company, but I don't know.

[00:22:31] I got a lot out of that discussion and I thought it was really, really interesting.

[00:22:35] A big question that you guys are probably facing as you make games: the sort of reasoning engines, language models, allow for these sort of unstructured interactions, where, you know, I ask Martin Luther King a question and he gives me an answer.

[00:22:53] But do you think consumers generally want that or do they want sort of, you know, a narrower path?

[00:23:00] You know, it's like, well, in some ways in the traditional video game world, it's like, do they want all these sort of like open world games or do they want sort of to be led along sort of a plot?

[00:23:11] I don't know.

[00:23:12] Do you think we're like taking the wrong thing from what LLMs can do where we're giving up on sort of the structure?

[00:23:19] Because we can't.

[00:23:20] Yeah.

[00:23:21] I would just make the point that like there's a lot of things you can do in games, but the mistake probably is to try to do everything at once with LLMs.

[00:23:29] Right.

[00:23:30] So you could like design all the assets with LLMs or you could have like these NPCs that you can talk to or you could even design the quests themselves with LLMs.

[00:23:40] All of those could be interesting if you do them really well.

[00:23:42] Right.

[00:23:43] Sort of in isolation.

[00:23:44] I do generally think, though, that to Eric's point, we at Volley definitely lean into the more structured experience.

[00:23:52] Yeah.

[00:23:52] Just, you know, as a one liner, I think we believe people want more structure.

[00:23:56] We think people want a goal and something they have to do and a mission, you know, to pursue at a game and that LLMs can make that experience more fun and more natural.

[00:24:06] I agree.

[00:24:08] That's what we are focused on at Volley.

[00:24:10] I think that obviously, from Character's success, people also just want open-ended companionship in their characters, and they want to create, like, never-ending friendships that they can just kind of, you know, mold and adapt however they want on any given day.

[00:24:28] So, yeah, I think there is a value in the unstructured aspects as well.

[00:24:34] I mean, to really step back, you know, I just feel like I'm faced with this question of, do I think AI is going to improve a lot next year?

[00:24:43] Like how much are we sort of high on our own supply or not?

[00:24:46] You know, you see lots of, you know, I was very skeptical of self-driving cars along the way.

[00:24:52] And now I'm an extreme bull; I wrote a very positive case in Newcomer and on The Free Press.

[00:24:59] But it's like, you know, Silicon Valley loves bubbles, and everybody involved sort of has an incentive to lean into them and just sort of speculate, you know, about the next step.

[00:25:13] But they can be wrong on sort of timing.

[00:25:16] So, yeah, I guess throughout this conversation, I'm sort of riddled, you know, by this debate we had on stage between sort of have we hit a wall or not.

[00:25:27] And I guess one thought I had listening to you guys is this sense that.

[00:25:34] Like, one bull case for AI is honestly that the open text chat is really hard for people to get value out of, you know.

[00:25:45] If you just have this bot, it's like, why don't I use AI a lot?

[00:25:48] It's like, well, because I have to sort of come up with a plan and figure out what I want.

[00:25:52] And I think, sort of tying a couple of pieces together, it's like Martin's point.

[00:25:55] It's like you need companies that build a very clear use case, like what you're saying about games.

[00:26:00] You need to give people sort of a roadmap to use it.

[00:26:03] I do think, you know, even if the reasoning doesn't improve as much as we want next year, there's just going to be a lot of, like,

[00:26:10] successfully leading the horse to water. It's like showing the consumer or the business how to use the AI we already have today in a much more productive way.

[00:26:24] Yeah, I 100 percent agree.

[00:26:26] I think that these tools are amazing, but they're very unstructured to your point.

[00:26:32] And you really have to be sort of a creative person to sort of get value out of these AI tools.

[00:26:38] You have to imagine what it might be able to do and then keep throwing spaghetti at the wall until it does something amazing in many cases.

[00:26:46] Obviously, one of the most overused lines in Silicon Valley.

[00:26:48] But, you know, the future is here.

[00:26:49] It's just not evenly distributed yet.

[00:26:51] Like everybody doesn't know about it yet.

[00:26:54] There's so much stuff the current generation of models can do that people don't take advantage of.

[00:26:58] And so much of that is going to be integrated into new types of applications or making existing applications much better, or just new types of user experiences that allow people to get more value out of these AI tools we already have.

[00:27:11] And as a lay user using OpenAI, I can be like, clearly, I want it to have a repository of my stories and do stuff based on it.

[00:27:18] But I don't have access to a team of engineers to sort of like go do that.

[00:27:23] And if, you know, a product is built that allows that, then even these use cases that I can sort of imagine will be sort of more at my fingertips than they are now.

[00:27:31] So anyway, that got very zoomed out.

[00:27:34] But I think, you know, that's a good point.

[00:27:35] Let's do our rundown; we're not going to hit every panel in such great depth.

[00:27:40] But I do think there were great nuggets out of them.

[00:27:42] And people maybe don't want to listen to the full conversations on YouTube, which, by the way, all these conversations are on YouTube.

[00:27:48] You don't need to hear our characterization of it.

[00:27:51] You can go and we encourage you to go listen yourself.

[00:27:53] But this is maybe, you know, a looser way to inhale everything that happened.

[00:27:58] And Max, do you want to kick off with sort of our digest of some of the other talks?

[00:28:03] Yeah, we're going to do a section called the lightning round where we bang through five or six panels in 30 seconds or less, maybe 60 seconds if we're really getting into it.

[00:28:12] Going to go in the order of the discussion and throw it to whoever has the juiciest nugget or the hottest take from that discussion.

[00:28:20] So first, Tim Tully gave a big presentation on AI in the enterprise.

[00:28:26] I think, Eric, I'm going to throw it to you.

[00:28:27] What's the one liner from Tim Tully's presentation?

[00:28:31] Enterprises are hedging their bets.

[00:28:34] They've learned from AWS and Microsoft and the hyperscalers, and they don't want to go all in with anybody.

[00:28:40] They're going to like, you know, we sort of saw it at the Goldman talk, Marco Argenti at Cerebral Valley, New York.

[00:28:46] It's like smart enterprises are trying OpenAI.

[00:28:49] They're doing Anthropic.

[00:28:50] They're using small models, and they're not going to like go deep with just one model.

[00:28:54] They're going to make sure that nobody has leverage over them, and they're diversifying.

[00:28:58] He also called out that Anthropic had made huge, huge gains against OpenAI in the last year, which I think is super important to know.

[00:29:07] And he also mentioned that coding was the number one, I think by far, use case for the enterprise in terms of these models.

[00:29:15] And maybe that goes hand in hand with Anthropic doing well against OpenAI in the last year.

[00:29:20] And this was Menlo, where Tim Tully is a partner. Menlo is an investor in Anthropic, but they'd commissioned this huge survey of enterprises to get the data.

[00:29:30] So hopefully they're just reporting back the data they found.

[00:29:34] All right.

[00:29:35] Next one.

[00:29:36] We had a biology panel, which was moderated by your Newcomer colleague, Madeline.

[00:29:43] The quick take from the biology panel is that models are going to make drug discovery much better.

[00:29:48] I think, you know, the public will believe it when we get the drug, but it seems like people in industry are super optimistic that especially small targeted models will help them discover new drugs.

[00:29:59] Obviously, we've talked on this podcast before that, you know, human regulatory burdens will remain, but finding new drugs could be miraculously important to human beings.

[00:30:10] Dario at Anthropic has made a similar case that he thinks biology is an area where models are going to be transformational.

[00:30:16] So a lot of optimism from that world.

[00:30:20] All right.

[00:30:21] Nice.

[00:30:21] Next one.

[00:30:22] One of James's panels: train, tune, or turnkey models.

[00:30:27] Yeah, we gave you a big one.

[00:30:28] This is broad.

[00:30:29] Big fan of alliteration.

[00:30:30] James, what are the key moments here?

[00:30:33] This panel represented different parts of, like, train, tune, or turnkey, right?

[00:30:37] You had Glean that was like, okay, we deliver you a solution.

[00:30:40] He's like, it works on day one.

[00:30:41] Then you have CoreWeave on the other end of the spectrum, which is like, we're going to help you sort of build your own models and get you access to compute.

[00:30:52] And then Together, which is sort of in between: we're going to sort of have a little more of a heavy-handed approach.

[00:30:57] I don't know.

[00:30:58] Did they see the world differently?

[00:30:59] Or did you feel like there was some consensus on what companies want?

[00:31:04] And I think the main consensus here is that they're all trying to figure out how to get into non-native AI enterprises.

[00:31:11] Right.

[00:31:12] So, you know, I mean, maybe Glean is kind of the furthest along there.

[00:31:18] But CoreWeave and Together, you know, primarily work with AI labs that need to train models and fine-tune or host the model by API.

[00:31:29] Right.

[00:31:29] But like, what are the use cases for the enterprise?

[00:31:32] I think like there's some interesting things.

[00:31:35] I think there's an example that Together has where they worked with The Washington Post to allow you to kind of chat with their archive of articles.

[00:31:45] So that kind of requires this fine tuning use case.

[00:31:48] And I think that's like a big question for the enterprise.

[00:31:50] Like, where does it make sense to allow fine-tuning, or to use fine-tuning to improve model performance?

[00:31:57] It's, you know, when you have proprietary data that the foundation models weren't trained on.
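For concreteness, supervised fine-tuning on proprietary data mostly means assembling prompt/response pairs. Below is a minimal sketch in the JSONL chat format that OpenAI's fine-tuning API accepts; the archive rows and filename are invented examples, and a real "chat with the archive" product would likely pair this with retrieval rather than fine-tuning alone.

```python
# Minimal sketch: turn a proprietary archive into supervised fine-tuning
# data in the JSONL chat format used by OpenAI's fine-tuning API.
# The rows and filename below are invented examples.
import json

archive = [
    {"question": "What did the investigative series conclude?",  # hypothetical
     "answer": "According to the archive, ..."},
]

with open("finetune.jsonl", "w") as f:
    for row in archive:
        f.write(json.dumps({
            "messages": [
                {"role": "system", "content": "Answer using the archive."},
                {"role": "user", "content": row["question"]},
                {"role": "assistant", "content": row["answer"]},
            ]
        }) + "\n")
```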

[00:32:04] Next one, how to train your robot.

[00:32:06] I thought this was super entertaining.

[00:32:08] James, you moderated this one.

[00:32:10] Robots in the home and at work.

[00:32:12] What did we learn about the future of AI robots?

[00:32:15] I thought that Jonathan Hurst, he's the chief robot officer at Agility Robotics, was super interesting.

[00:32:23] He's building humanoid robots primarily for warehouse use cases.

[00:32:27] And he just made a lot of great points about how hard it will be to introduce humanoid robots into more consumer environments like the home.

[00:32:38] A, they have to, like, be able to navigate every aspect of a house, including getting upstairs, and figure out how to help you unload your car, or take the groceries out and put them in the cabinets.

[00:32:54] Right.

[00:32:54] And, you know, you're going to put it in the refrigerator.

[00:32:55] Like, all of that is, like, very specific use cases that need to work flawlessly.

[00:33:00] But, like, more importantly, there's a big safety element of having, you know, 100 pound, 150 pound robot walking around your home with your kids.

[00:33:10] Every dystopian movie in the world warns us.

[00:33:13] Yeah.

[00:33:13] But, I mean, it's almost like the bigger risk is just kind of not that they will turn evil and go kill you.

[00:33:24] It's just that they'll fall over and crush you or something.

[00:33:27] You know, I think that was an interesting point.

[00:33:28] Of course, the constant irony of all of this is that we drive around in heavy metal cars that because they're human intelligence powered, we're fine with insane collateral damage.

[00:33:37] And maybe the odds are, you know, robots in your house probably aren't going to kill more people than cars do.

[00:33:43] Well, we had the head of onboard AI at Waymo also, Srikanth.

[00:33:52] And he made some great points about just how hard it is to collect data in these long-tail kinds of environments for robotics.

[00:34:03] Like, you know, there's just constantly new scenarios that they're seeing that, you know, you wouldn't experience in 10,000 lifetimes or something if you were just a normal human driver.

[00:34:15] And, you know, how do you handle that is a super interesting question in robotics.

[00:34:20] All right.

[00:34:21] Last one.

[00:34:22] Marissa Mayer.

[00:34:23] I got to interview.

[00:34:24] A highlight.

[00:34:25] Yeah.

[00:34:25] The legendary head of search at Google, CEO at Yahoo.

[00:34:29] She is building an AI-powered photo app, which is sort of like, I don't know, Instagram for groups or something.

[00:34:37] Still early on that app.

[00:34:39] So I think maybe the more interesting part of the discussion was her takes on Google having run search there for a while.

[00:34:44] I think Google's AI summaries replacing its links to websites is a big topic in the world right now.

[00:34:52] And also kind of upstarts like Perplexity or OpenAI adding search to their chat interface as kind of a disruptive force against Google, I think, are really interesting trends we're seeing.

[00:35:04] And her take.

[00:35:05] I mean, she is very loyal to Google, I would say.

[00:35:09] But I would also say that she said a lot of things that would scare me a lot if I worked at Google or were an investor at Google.

[00:35:16] Like she basically said, yeah, a huge portion of what I used to type in the Google search box.

[00:35:21] I just type into ChatGPT.

[00:35:24] I assume a lot of other people are doing that, too.

[00:35:28] On the kind of supply side, too, i.e., the websites that Google indexes, she said, yes, I'm sure more and more of that is being generated by AI already and not really adding new value to the Internet.

[00:35:41] Meaning that sort of the kind of information that Google sits on top of is degrading, or at least not improving, in the last couple of years, which is a big problem for Google.

[00:35:50] And then she was like, yeah, I mean, the kind of ad model doesn't work as well in a world where you just want the answer to your question rather than 10 possible answers.

[00:35:58] And maybe three of them are sponsored links or something.

[00:36:01] As a media person.

[00:36:02] Yeah.

[00:36:02] The idea that they're just like strangling the creation of the information that they rely on is, you know, an ongoing, fascinating dynamic.

[00:36:11] Yeah.

[00:36:11] I thought it was a pretty short case for the core business of Google because she was like, the only way you get out of this is by being like a mediator for shopping and Ticketmaster.

[00:36:20] And stuff and just directly accessing like inventory of goods from Google.

[00:36:25] And I was like, are these companies going to let Google do that?

[00:36:28] And she was kind of like, no, probably not.

[00:36:31] Like, it's really hard.

[00:36:32] Well, it almost suggests, you know, Google clearly isn't going this way, but it suggests Google should be on the side of getting people to other websites.

[00:36:40] It's like, oh, what do we do?

[00:36:41] Well, we get people to other websites, but instead they've decided like, no, we need to keep people on our page.

[00:36:47] Isn't the obvious biggest oversight so far that they just haven't created a Gemini assistant like within Google search?

[00:36:54] Like the.

[00:36:55] Just like a separate product instead of just trying to take over Google.

[00:36:59] Just put it in.

[00:36:59] No, no, not even a separate project.

[00:37:01] I'm just saying like right now.

[00:37:02] Yeah.

[00:37:02] You get these Gemini summaries at the top, but like, it's not a chat bot.

[00:37:07] Like, why not just put a chat bot into that search experience?

[00:37:10] Like, that's what people want.

[00:37:11] Just like side by side or just.

[00:37:13] I think so.

[00:37:14] Yeah.

[00:37:14] Just saying it.

[00:37:15] I feel like people want.

[00:37:17] I bet they're working on that.

[00:37:18] I mean, OpenAI is doing the opposite, right?

[00:37:20] OpenAI is putting a search engine in a chat bot.

[00:37:22] Right.

[00:37:23] Yeah.

[00:37:24] I think it's just really a struggle to disrupt their entire business.

[00:37:27] I mean, again, the word disrupt is really overused, but it's like they have the greatest ad business of all time.

[00:37:32] And putting a chat bot in the Google search box is probably quite bad for that business in many ways.

[00:37:37] And because you just can't show as many ad sponsored links.

[00:37:40] And so I'm sure they're working on what you're saying, James.

[00:37:44] But I would imagine they're very, very scared of destroying their entire business by launching this kind of product.

[00:37:50] So TBD, I thought that was super interesting discussion.

[00:37:53] I don't know.

[00:37:54] Eric, wrapping it up.

[00:37:55] Final takes.

[00:37:56] Final takes.

[00:37:57] I mean, I'd say we're four Cerebral Valleys in, in two years.

[00:38:00] At the first one, we were extremely bullish.

[00:38:03] It was early.

[00:38:04] We saw the promise of ChatGPT.

[00:38:06] I think we felt, like, super excited.

[00:38:08] I think, you know, by the second one that year in November, so November 2023, it was sort of like the big questions of AI and sort of these like big debates about like the world.

[00:38:25] I have a point about this.

[00:38:26] I have a point about this.

[00:38:27] You jump in.

[00:38:28] Sure.

[00:38:28] I mean, I think when we started this conference series, it was March 2023, right?

[00:38:34] And it was only a few weeks, I think, after GPT-4 had launched, right?

[00:38:41] But at that time, there was like a very large Overton window of what was being discussed.

[00:38:47] Like, a lot of the people at the conference were very scared that in the next two to three years, you know, we would all be dead or, you know, that AI was going to take over the world.

[00:39:00] And then other people, you know, were kind of really focused on the foundation models and how, you know, they were just going to, you know, take all the value in the ecosystem.

[00:39:12] Right.

[00:39:12] There's just like a massive Overton window of what was possible.

[00:39:16] And I think we've just seen that narrow.

[00:39:17] Like we've seen now, you know, more agreement in some sense.

[00:39:21] Like here are the safety risks, kind of what the risk profile is.

[00:39:25] And here's, you know, kind of where the opportunity lies.

[00:39:29] It's not just all within foundation models.

[00:39:31] And I think like just a general, you know, understanding of maybe, you know, what pace of advancement we're going to see in this industry.

[00:39:39] You know, it's not super clear to any of us, like, exactly how powerful GPT-5 will be and when it will arrive or something.

[00:39:48] But at least we have like a pretty good understanding of what is possible in this space.

[00:39:53] Whereas I think even a year and a half ago, that wasn't the case.

[00:39:56] I don't know.

[00:39:57] I'm very mixed.

[00:39:59] I think we sort of say two things at once.

[00:40:01] Like we say, okay, AI today is really good, but we need to sort of give people a roadmap how to use it.

[00:40:08] I think that's true.

[00:40:09] But I do think there's a reality in our heart of hearts that you sort of alluded to.

[00:40:13] We're all waiting for ChatGPT-5 or, you know, a super-powered o1, or Anthropic to leap ahead and really get this, like, moment again where we're like, oh my God.

[00:40:22] It's so smart.

[00:40:24] And I do think that's what the wall discussion is all about.

[00:40:27] We're sort of thirsting for another sort of jump, you know, leap.

[00:40:31] And I feel like it hasn't happened again.

[00:40:35] Nothing has happened sort of like the first ChatGPT launch.

[00:40:40] Yeah.

[00:40:40] I mean, I think the key question is, was ChatGPT like the iPhone, you know, and everything else was the Palm Pilot.

[00:40:47] And even 17 years later, we still basically have the iPhone. I mean, I had that first iPhone in 2007.

[00:40:55] It's not so different.

[00:40:56] I was a BlackBerry.

[00:40:57] Yeah.

[00:40:58] But I'm just saying there's only one iPhone moment, right?

[00:41:01] You could say like, OK, the iPhone 4 was like where they really got it all like working super well.

[00:41:06] Or, you know, you could pick your favorite iPhone, but there still was only really one iPhone.

[00:41:10] Right.

[00:41:10] I mean, you know, like, was ChatGPT it?

[00:41:14] Like, was that the Rubicon?

[00:41:15] Like, or is there kind of one more next big leap?

[00:41:19] Right.

[00:41:20] Or are there a couple more?

[00:41:21] Right.

[00:41:21] I think that my take is there's a lot more to come.

[00:41:24] I mean, OK, there's a lot because if there's more leaps of this scale to come in the near future, the world is going to be so fucking crazy compared to the life that we've had to date.

[00:41:35] That is the sort of big open question.

[00:41:37] Right.

[00:41:37] Are there more of these transformational moments coming?

[00:41:41] Because if so, like, I think we're already in a new era.

[00:41:44] Yeah.

[00:41:44] We've talked a lot about them today and I'm bullish on them in the next one to two years.

[00:41:49] You know, it's these agentic models.

[00:41:52] It's agents that go off and do things on your computer.

[00:41:55] It's like Max was talking about.

[00:41:57] You know, you're on phone calls with AIs much more often than you're on with humans.

[00:42:01] And that opens up a lot of new use cases.

[00:42:03] I think, you know, some of these small models like video and music, I think they're just going to get better and better.

[00:42:08] And we'll be able to make, you know, Hollywood quality movies and amazing, amazing music personally kind of for our own use.

[00:42:18] So I don't know.

[00:42:19] I think all of these could be like massive changes to sort of our day to day and society.

[00:42:24] And, you know, we're getting there pretty quickly.

[00:42:28] Great.

[00:42:29] All right.

[00:42:29] Well, we'll see where we are.

[00:42:31] We'll take stock, you know.

[00:42:33] Cerebral Valley will probably have one to announce for the summer and certainly we'll be back in San Francisco in November.

[00:42:40] So we'll keep an eye on it.

[00:42:43] Sounds good.

[00:42:44] Thanks.

[00:42:45] Thanks, guys.

[00:42:45] Yeah.

[00:42:46] See ya.

[00:42:46] See ya.