OpenAI has entered what insiders are calling Code Red! In this episode, we unpack why Sam Altman is suddenly sounding the alarm. Is this a genuine internal crisis, or a calculated move to set expectations ahead of a major product cycle next year?

We explore:
- What "Code Red" actually means inside OpenAI
- Whether the urgency is operational, political, or performative
- How leadership is using public pressure to reshape the company's narrative
- Why this dramatic messaging may signal a huge release in 2025
- The possibility that Sam is intentionally creating a "we turned the ship around" story
- How OpenAI's competitors (Anthropic, Google DeepMind, xAI) are interpreting this moment

OpenAI is still the most important company in AI, but the cracks, pressure points, and power moves have never been more visible. If this is really Code Red, the stakes are enormous.
00:00:00
They're like declaring a code red.
00:00:01
We're not winning on the product and now we're going to
00:00:05
make the... Welcome back to the Newcomer
00:00:17
Podcast, Tom here, Eric and Madeline.
00:00:19
We're going to talk now about OpenAI and the state of affairs
00:00:23
in the AI race, because it's been a very chaotic-seeming couple of
00:00:29
days. Alert! Danger! It's...
00:00:31
Yeah, yeah, there are flashing lights happening over in the
00:00:34
Mission St. Yeah, yeah, they're just running
00:00:37
through the corridor screaming and throwing papers in the air
00:00:40
at OpenAI. Because, yeah, the door locks
00:00:42
are coming down. You know, there's smoke going
00:00:45
through the OpenAI offices, it looks like.
00:00:47
Like in Alien, when they're locking down for the creature.
00:00:50
And, yeah, you know, there are people that are like, you're in or
00:00:52
out right now. Like, the water is filling into the rooms and
00:00:55
people's hands pressed up against the glass.
00:00:58
Yeah. So Sam Altman recently declared
00:01:00
a code red. Did you order the code red?
00:01:03
I did the job. Did you order the code red?
00:01:05
Goddamn right I did. I don't know why it's so
00:01:09
funny to me. He sent out a memo to employees saying that
00:01:13
they needed a code red alert on ChatGPT, that
00:01:16
there are a lot of competitive threats and the app is not
00:01:19
working as well as they wanted to right now.
00:01:21
And this came a couple weeks after he also kind of warned
00:01:26
employees that there was about to be a very bad news cycle
00:01:28
around OpenAI, as Gemini and Google in particular, you know,
00:01:32
were breathing down their necks. And they released a new model
00:01:35
that was really good. And everyone now is down on OpenAI.
00:01:38
Tom's literal first story in
00:01:41
Newcomer said yeah, Gemini, Google, they're in a better
00:01:45
place than you might think. Just yeah.
00:01:47
Yeah, yeah. Read the newsletter.
00:01:50
You could have made a bunch of money.
00:01:51
I'd love to take more credit for it, but I was just talking to,
00:01:54
you know, sources at Google who were like, yeah, we have a
00:01:57
really good model here. I don't know why everyone
00:01:58
decided that we were fucked. Like, you know, Google literally
00:02:02
invented key parts of the transformer technology.
00:02:06
Well, David Sacks and all the All-In guys are like, Google,
00:02:08
the culture is gonna ruin the whole thing.
00:02:10
It didn't make any sense. And all the hedge fund guys have
00:02:12
been super like standoffish about Google.
00:02:15
I don't understand it. Yeah, yeah.
00:02:18
Keep going. Yeah.
00:02:18
So what was the threat? Obviously, Obviously.
00:02:22
And they make money either way. That's the beautiful
00:02:24
thing with Google. They have Google Cloud, you
00:02:27
know, and they have search, so they can drive down the cost like
00:02:30
they do with, you know, their Workspace stuff.
00:02:34
You know they can screw over everybody else by offering
00:02:36
things that some people want to charge money for, right?
00:02:38
They're a monopoly, is what you're trying to say.
00:02:39
Yeah. You're saying they own title.
00:02:43
No. Is that improving the core line?
00:02:44
I can't say anything about that, but...
00:02:47
Yeah, it's like they have the full stack.
00:02:49
And I think what's been like clear in the last couple of
00:02:51
weeks is that like they're competitive at every part of it.
00:02:54
Like Google Cloud is a major player in cloud, which we all
00:02:57
knew. The TPUs, which people kind of
00:02:59
all forgot about, are like the only legitimate competitor to
00:03:01
Nvidia's chips. And now they have one of...
00:03:04
I don't think anyone captured how much TPUs were going to be a credible
00:03:06
competitor. Yeah, well, they've been working
00:03:09
on it for a long time. Like you compare that to like
00:03:10
OpenAI, which is very johnny-come-lately with all of these
00:03:14
things. They're basically like building
00:03:16
an airplane in the sky, or whatever that metaphor is.
00:03:18
Whereas like Google is literally the airplane and like, you know,
00:03:22
they they certainly were slow when it came to embracing it and
00:03:25
like had a lot of concerns ethically, which is interesting
00:03:28
even though... Of course, I hope people listening to
00:03:30
this podcast already know this fact, but "Attention Is All You
00:03:33
Need," the seminal AI paper, was
00:03:35
written by Google employees. Like, so they were, they were
00:03:39
there, they knew it. They gave it away for free.
00:03:41
They created this whole problem themselves.
00:03:43
And so, yeah, it was frustrating that they weren't, like, leading
00:03:46
the whole time. Like, you invented this category.
00:03:49
But you know, yeah, not surprising that they would
00:03:51
figure it out and catch up. Right.
00:03:54
And so what I guess is more interesting to me now is like
00:03:56
the state of affairs going forward because the idea that
00:03:59
OpenAI is fucked, it's literally just a... I mean, OpenAI
00:04:04
and Sam Altman assisted, but it's a creation of the media.
00:04:07
Nothing has fundamentally changed about the business. Like,
00:04:10
their model maybe isn't ranking as highly.
00:04:12
On... Traffic.
00:04:13
Also I think there are some traffic numbers.
00:04:15
Right. Traffic is a little down, yeah,
00:04:16
to ChatGPT. But I will say also, Sam Altman
00:04:20
is such a savvy media player. He knows that if he sends an
00:04:24
e-mail to the entire company that says Code Red in a memo,
00:04:28
that that's going to get leaked to a reporter like.
00:04:30
And it's very possible OpenAI leaked it itself.
00:04:33
Like a lot of these company memos, they know they're going
00:04:35
to come out and some of them the companies leak.
00:04:38
You know, I don't have proof either way on this one, but just
00:04:41
as a reporter, these company memos, especially in the
00:04:44
Facebook era, were coming out so often, but I think companies
00:04:48
were definitely handing them out to reporters because it's like
00:04:50
we're blasting them to all our employees.
00:04:52
This is the message we want out there.
00:04:53
And it gives the statements some sincerity that
00:04:56
you wouldn't get if you put out a press release
00:04:58
saying, oh, we're in trouble.
00:05:00
Like people would be like, what? But if you're like, it's to
00:05:02
employees, it's like, well, that's what they're saying
00:05:04
authentically inside. Then we metabolize it in the
00:05:07
culture. They wrote this for public
00:05:08
consumption, whether they leaked it or not, right?
00:05:10
Yeah, it was for the public. My belief is that they
00:05:15
didn't leak it on purpose. And I think the reason I
00:05:19
can say that confidently is that there actually weren't that many
00:05:21
outlets that reported it with their own reporting.
00:05:23
It's actually only been a small handful of them.
00:05:25
So if everybody says we have a source, then the company is
00:05:28
like, all right, yeah, yeah, that was true.
00:05:31
Here's the memo. Yeah.
00:05:32
Yeah, but that said, like, Sam is a smart guy who knows how the
00:05:36
media works, and when you do send something out, you know
00:05:38
it's going to happen. So the level of intentionality
00:05:40
is sort of irrelevant at that point.
00:05:42
But I think like Google also had a code red exactly 3 years ago
00:05:48
to this time where they were worried about the state of
00:05:52
Google search, and ChatGPT was taking off like
00:05:55
a rocket. And here's where we are now.
00:05:57
And I think Facebook in the past, you know, Mark has
00:06:00
declared various code reds over different things.
00:06:03
And I think actually over Google Plus, when that first came out,
00:06:06
there was a lot of concern over there.
00:06:07
So it's stupid to take from this like the beginning of the end.
00:06:11
And I saw some people tweeting that out.
00:06:12
It's like when you've declared a code red, it's already over.
00:06:16
You've already lost, you know, your men are already dead.
00:06:18
And it's like, we're so early in this.
00:06:22
And I hate using metaphor, but it literally is true.
00:06:24
Like if you can go from, like, an also-ran, like Google was a
00:06:27
couple of months ago, like when I wrote that piece, like
00:06:30
everyone thought Google was done and here we are months later and
00:06:34
they have the best model and everything.
00:06:35
So it's like a lot is going to change.
00:06:37
The thing that does sort of start to worry me about open AI
00:06:40
is like I do think they have too many plates spinning right now.
00:06:44
I think they are literally trying to become Google at a
00:06:47
time that Google already exists. And I don't know like where
00:06:50
their focus is going to end up being.
00:06:51
Like you've got the ChatGPT stuff, you've got the Stargate
00:06:56
stuff, you've got the Jony Ive stuff, you've got the
00:06:59
browser stuff. Like I'm probably missing 5 or 6
00:07:01
other things. They're trying to get into
00:07:02
advertising right now. The meme is they're trying to
00:07:04
get into ads. They're like declaring a code
00:07:07
red. We're not winning on the
00:07:09
product and now we're going to make the product worse with
00:07:12
advertising. Right, right.
00:07:15
Which, man. The whole advertising thing too.
00:07:17
You know, we had Alex Heath on while you were out, Eric.
00:07:19
And you know, I like Alex a lot.
00:07:21
He's an amazing, amazingly well-sourced
00:07:23
reporter. You can go read his newsletter.
00:07:24
Yeah, go read Sources, and his podcast is called Access.
00:07:28
But he was, you know, I thought a little bit too, I
00:07:31
think a little too credulous on the fact that ads
00:07:34
is going to be a big business for
00:07:36
OpenAI. I think actually ads are going to be very difficult for them
00:07:39
because ads rely on proof of concept and proof of success and
00:07:43
also linking out and clicking out.
00:07:44
And it's sort of proved that this stuff does not link out
00:07:46
very well. So why would you want?
00:07:48
I'm shopping for all my Christmas gifts in ChatGPT.
00:07:51
I don't agree. Well, anyway, this is a
00:07:53
different discussion. I just think it's going to be
00:07:55
complicated for them. I don't think it's a done deal
00:07:57
and I worry for them, if I were an investor, which is
00:08:02
increasingly everyone basically, you know, VCs on upwards that
00:08:08
like they are going to lose focus and what they're trying to
00:08:11
win on is like increasingly competitive.
00:08:13
And that to me is more like the red alert than, like, ChatGPT
00:08:17
is losing customers or something like that.
00:08:19
It's just like, what matters and what doesn't matter, and should
00:08:23
we be doing everything?
00:08:23
And I would be, I know we'll get to predictions in a later
00:08:28
episode, but I wouldn't be surprised if in the next year or
00:08:31
two you start seeing open AI dropping some of these ambitions
00:08:36
that something like the browser just sort of goes away.
00:08:39
It's not that necessary or I don't know.
00:08:42
I really don't have a lot of bullishness on the Jony Ive
00:08:45
hardware project. I think that is going to be
00:08:48
expensive and difficult, but that's where I would be
00:08:52
worried about them. If you're selling TAM, you need
00:08:55
to be able to do everything. I mean, there's some value in
00:08:58
OpenAI just being ready, if a category appears promising,
00:09:03
to lean in. It's like, oh, coding is big,
00:09:05
we'll lean into that. So you have projects sort of
00:09:07
spinning. Yeah, I don't know.
00:09:10
The piece that stuck out to me with all of this too, is that
00:09:13
each iteration of the startup boom, whether we should be
00:09:18
looking at applications or foundation models and how do we
00:09:20
build out products and how do the large labs also build out
00:09:23
products? Every time a new model comes
00:09:25
out, like this new Gemini model, the race is totally
00:09:29
reset. So all of the people that I
00:09:30
think have talked about, you know, will foundation models
00:09:33
ever be the moat? The answer seems to be no again,
00:09:37
as it just keeps climbing and chasing back and forth.
00:09:40
And every few months we get a new model that tops the
00:09:42
leaderboard, and then OpenAI's back at the top, and then
00:09:44
Google's back at the top. So it's not that... I don't think I
00:09:47
would count them out at all. Obviously I think it's still too
00:09:50
early, but it just becomes a lot trickier to see, you know, can
00:09:55
you be the everything company when the everything company
00:09:57
already exists? Like you need to figure out
00:09:59
applications beyond ChatGPT that work because you will not get
00:10:02
the moat with foundation models. Like, this
00:10:04
race seems like it's nowhere near done.
00:10:05
Which is why I think ads feels like the cart before the horse in
00:10:09
that, yeah, OpenAI, ChatGPT needs more clearly designed
00:10:15
applications before just, I don't know, going into
00:10:19
monetization. I want to say one...
00:10:22
I guess my last word on this is I do think it speaks to the
00:10:28
power of truth that OpenAI is saying code red.
00:10:31
I do think there's value in shaping your company's narrative
00:10:34
and just saying, yeah, we're in a hard place, because it
00:10:37
undercuts the media and the commentary.
00:10:40
And just like Twitter and regular people would all be
00:10:43
like, oh, open AI is in trouble because of Gemini.
00:10:47
But when you say, oh, it's a code red, we're in big trouble,
00:10:50
then you lower the bar for yourself.
00:10:52
So maybe it hurts your reputation, but instead of
00:10:55
having to fend off like, oh, Google, are they better than
00:10:59
you, you say, yeah, we really need to work hard.
00:11:01
And so then everything you do to sort of be slightly better than
00:11:06
a Code Red is now a positive again.
00:11:08
And so I think there's a lot of value in just, you know,
00:11:11
communication strategy and lowering expectations for
00:11:15
yourself so that you can much more easily slip over them.
00:11:18
If you think entirely through narratives, then you're setting
00:11:21
yourself up for your comeback. You're setting yourself up for
00:11:24
like, we girded down and now everything is great.
00:11:27
The same way Google sort of did when they had
00:11:30
their code red, and now everyone's, you know,
00:11:32
celebrating them. So genius move by Sam Altman,
00:11:35
where we applaud you, tip of the cap, for your great decision
00:11:39
to call yourself a code red. I don't
00:11:40
think Google's as savvy about the PR. I
00:11:42
think they're just building and trying.
00:11:45
I think Sam is very aware of, you know, how the narrative
00:11:49
is consumed. Yeah, cool.
00:11:52
Yeah. Well, fun to talk about this
00:11:54
more next week or whatever, when we do our predictions.
00:11:56
Yeah. We're gonna do predictions.
00:11:57
We're gonna have superlatives. I think it's gonna be a fun end
00:12:00
of the year for the Newcomer podcast, where we're trying to
00:12:03
split up segments so that you can listen to the things you're
00:12:07
interested in. We're super...
00:12:09
Hopefully you're watching this on YouTube.
00:12:10
If you're listening on the podcast, we're working
00:12:13
hard on our YouTube channel. So go subscribe on YouTube,
00:12:16
please like and comment, we're a small channel.
00:12:18
I'll come in and reply to your comments.
00:12:20
If they're not too mean.
00:12:22
So we appreciate all the support.
00:12:24
And of course, you know, we're at newcomer.co.
00:12:27
We write a Substack about startups and venture capital.
00:12:30
And, you know, if you want to be hearing about some of the stuff
00:12:33
early, maybe you like to buy stocks, maybe you're tracking
00:12:37
start-ups, maybe you're just fascinated by
00:12:39
an insider's take. You should subscribe to the
00:12:42
newsletter at newcomer.co.
