Issue No. 15: OpenAI
OpenAI’s narrative on safety with AGI is a masterclass in how even the right story can unravel when it’s not executed within the business.
There are two things you should know about the company right now:
OpenAI has done more than anyone to communicate that it’s building AGI with a safety-first mindset.
Over the last year, the company’s employees and the public have cast significant doubt on whether this is true.
It’s a stark reminder that a poorly executed narrative doesn’t just tarnish your brand, it can derail the business and leave you vulnerable to competitors.
That’s why I spent the last 3 weeks studying this company.
In doing so, I found four lessons every company should take to heart when it comes to its strategic narrative:
Your Narrative Is What You Say and Do
All Stakeholders Must Back Your Narrative
Keep Your Narrative in Its Lane
Narrative Isn’t Always About the Product
But before I unpack those, I want to share with you the backstory of what’s happening with OpenAI. Understanding this context will help make those lessons much more meaningful once we get to them.
So here’s how I’ve structured this issue:
First, I will break down the dynamics of the AGI category itself, laying out what’s at stake and what’s required to win
I’ll then share my view on what kind of narrative would give a brand in this space the best odds of winning the category
From there, I’ll contrast the story OpenAI is telling to the public with the drama that’s been happening with the company over the last year
I’ll then unpack those 4 lessons and leave you with some questions inspired by OpenAI’s situation that you can consider for yourself.
(P.S. In case you forgot, AGI stands for Artificial General Intelligence. Definitions vary, but according to McKinsey, it’s “AI with capabilities that rival those of a human.”)

OpenAI and its CEO, Sam Altman, have done more than any other frontier lab to take a public stance on AGI safety. Yet, with so many executives departing over safety concerns, we may not be hearing the full story. (Source: Fortune)
Last note before we dive in. This issue represents my perspective as someone who consults on narrative design and is looking at the company from the outside. I could be wrong in my assumptions. My goal is to provoke you to think deeply and differently about your own business, so you’re better equipped to win in your category.
Dynamics of the Race for AGI: Scarce Resources and a Theoretical Product
When you’re in an emerging category, like AGI, winning is always important. Research shows that most categories evolve toward a winner-take-most dynamic. Once that dynamic is in place, the leaders are very hard to unseat.
But unlike products in most emerging categories, AGI is a theoretical product. The race (at this stage) isn’t about capturing market share, it’s about seeing who can deliver on an idea and turn that into a sustainable business.
What’s more, AGI is one of the most resource-intensive pursuits of all time. Every company competing in this space is in a race to secure resources in three areas:
Compute. The compute needed to train and operate AI models is immense. Companies like OpenAI simply cannot get their hands on enough GPUs and related hardware. The energy requirements are nothing to sneeze at either.
Capital. As you might expect, accessing this compute doesn’t come cheap. Maybe that’s why earlier this year, OpenAI’s CEO Sam Altman proposed raising $7 trillion to fuel the future of this space. Furthermore, AI is not yet a profitable business. OpenAI is projected to lose $5B in 2024.
Talent. There is an extremely limited pool of qualified talent available to move AI forward. Recruiting this talent away from a competitor (or convincing them to stay) doesn’t just require money - it means convincing them that their talents will be put to the best use.
This is the main dynamic driving this space: winning AGI means securing these scarce resources ahead of competitors. But in this situation, brands don’t just have to tell a convincing story about the financial opportunity, they must also convince the world that their products are safe.
Public Perception on Safety Will Make or Break AGI
Even for a purely mercenary investor (one who cares only about profits), products perceived as unsafe generally don’t make good investments. They may not be adopted by society, they might be banned outright, or they might be heavily regulated. All of those outcomes can hinder profits.
In 2024, AGI’s safety is an open question.
It sounds silly to even write this, but there are plenty of people smarter than me concerned that AGI could result in the end of humanity. That’s an extreme outcome, but consider these other concerns that are being brought up today.
For example:
How do we deal with the ramifications of AGI, such as disruptions to the job market and income inequality?
How can we avoid AGI introducing bias or other types of bad output?
How do we avoid AGI acting maliciously or deceiving users?
How do we avoid allowing AGI to fall into the hands of bad actors?
How do we prevent humans from relying on AGI to do too much thinking for us, weakening our mental capacity?

Safety concerns around AI aren’t limited to OpenAI; they affect the whole category. That’s why I believe winning the race to AGI has as much to do with the public perception of safety as it does with the technology itself. (Source: NDTV)
Not to mention a more philosophical question - do we even want AGI in the first place? According to a 2023 poll, 63% of Americans want regulation to actively prevent superintelligent AI.
This is why I believe that public perception of AGI safety is what matters most at this stage.
If OpenAI can convince the world that its version of AGI is safe, then investors will be more willing to keep backing the company, regulators will be much less likely to treat the company unfavorably, talent more likely to join, and so on. Put another way, the company that does the best job of convincing the world that AGI is safe is the company most likely to secure the resources it needs to win.
That is the job that OpenAI should be focused on.
Let’s cut right to it then: Does OpenAI make a convincing argument that it has a handle on safety? The short answer is a loud “no.”
Public perception of AGI safety is what matters most at this stage.
OpenAI is Failing to Manage Public Perception of Its Safety
If you’ve been following OpenAI in the news, you’ve already seen how the company has failed to create confidence around its track record with safety.
Take a look at this series of events over the last 12 months. Does this sound like a company operating with “safety” at the forefront to you?
Nov 17, 2023. Sam is fired as CEO, amid claims of “outright lying” and giving “inaccurate info about the company’s safety processes” to the board. He was rehired a few weeks later, which included a reshuffling of the board.
May 14, 2024. Ilya Sutskever, co-founder and Chief Scientist who had voted for Sam Altman’s firing, resigns. A month later, he announced the launch of a new, safety-oriented AI company.
May 17, 2024. Jan Leike, a key researcher, resigns, saying, “Safety culture and processes have taken a backseat to shiny products… OpenAI must become a safety-first AGI company.”
May 17, 2024. The Superalignment team, focused on long-term AI safety, is disbanded less than a year after it was formed.
May 22, 2024. OpenAI policy researcher Gretchen Krueger resigns, also citing safety concerns.
May 26, 2024. Former OpenAI board members Helen Toner and Tasha McCauley publish an op-ed accusing Altman of “lying” and stating that self-governance is not a viable option for the company.
June 4, 2024. A group of former and current OpenAI employees publishes and signs an open letter about the lack of oversight at the company.
June 4, 2024. Former OpenAI safety researcher Leopold Aschenbrenner alleges he was fired for raising safety concerns to OpenAI’s board.
July 16, 2024. Anonymous employees allege that OpenAI launched GPT-4o while rushing the safety protocols it set up for itself.
August 8, 2024. Lawmakers request more information about how OpenAI handles whistleblowers and safety reviews, citing a “discrepancy between your public comments and reports of OpenAI’s actions.”
September 26, 2024. CTO Mira Murati resigns, citing a need to “create time and space for my own exploration.” Meanwhile, OpenAI’s chief research officer and its VP of Research also depart.
October 14, 2024. The “Godfather of AI” and Nobel laureate Geoffrey Hinton says he is particularly proud that his former student Ilya Sutskever fired Sam Altman, because “Sam is much less concerned with AI safety than with profits.”
October 24, 2024. OpenAI’s “AGI readiness czar” Miles Brundage quits, stating that neither OpenAI nor any other frontier lab is ready for AGI.
Whew.
All this, and I left out lawsuits from Elon Musk that accuse the company of abandoning its original mission, another from The New York Times for copyright infringement, poor reception around OpenAI’s recent plans to switch to a for-profit entity, and complaints against the company’s attempt to prevent criticism from employees.
For a company that needs to build trust in its safety practices, I’d say that wasn’t a good year. But what led to such a breakdown?

The resignation of Ilya Sutskever, co-founder and Chief Scientist at OpenAI, was just one in a string of similar resignations over safety concerns. After departing, Ilya founded a new company, aptly named “Safe Superintelligence, Inc.”
If you knew nothing else about the company, the obvious assumption would be that OpenAI simply did not make good efforts to develop and communicate its stance on safety – either internally or externally. But the opposite is true. OpenAI is easily the most verbose company on this topic. It makes its failure to create the perception of trust all the more perplexing.
Despite Skepticism (or Perhaps Because of It), OpenAI Over-Indexes on Communicating Safety
Let’s start with the safety practices OpenAI has published on its website. Here’s an abbreviated list of the efforts it has taken to position itself as a “safe” brand:
System Cards
System Cards are an attempt to objectively evaluate the safety and performance of its products, as well as the impacts these products may have on society. These are deep evaluations, written by and for a scientific audience. Here’s an example of how OpenAI assesses the risk of GPT-4o negatively impacting human interactions.
Preparedness Framework
The company has also introduced a Preparedness Framework, which outlines OpenAI’s approach to developing and deploying safe AI systems, especially as they approach AGI. The document is designed to guide the team on how to assess risks, put safeguards in place, and prepare for the societal impacts of AGI.
Superalignment Team
Similarly, the company had introduced a “Superalignment” team in 2023 (although, as you saw above, it was later disbanded). It had the following mission:
...the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction… We are assembling a team of top machine learning researchers and engineers to work on this problem.
Company Charter
The most visible piece of OpenAI’s narrative around AGI is its Charter. It’s a foundational document that “describes the principles we use to execute OpenAI’s mission.”
OpenAI’s mission is to ensure that artificial general intelligence (AGI)… benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
AGI Readiness Post
Similar to the Charter, this post explains the company’s views about AGI, laying out the following principles:
We want AGI to empower humanity to maximally flourish in the universe.
We want the benefits of, access to, and governance of AGI to be widely and fairly shared.
We want to successfully navigate massive risks… We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.
Research Publications
OpenAI publishes a massive amount of research, like this post about how new approaches to “red teaming” can improve safety with AI. While its research covers a variety of topics, a good deal of it covers the company’s understanding of safety risks and how to deal with them. Many are even academic research papers.

System Cards, like this one for GPT-4o, provide a standard for how OpenAI believes LLMs should be evaluated ahead of their release. It’s exactly the kind of material you would hope a company headed towards AGI would produce.
It’s an impressive effort.
You might think that this is just par for the course in the AGI space, but no. Google, Meta, and Microsoft have limited statements about safety. Claude and Cohere have limited content. x.ai (Elon Musk’s foray into AGI) doesn’t mention the word “safety” once on its website. Even Anthropic, which bills itself as an “AI safety and research company” has just a handful of articles on the topic.
But what’s published on the web is just half of the story.
Sam Altman is Vocal About Safety, But That Doesn’t Mean His Message is Believable
As if this documentation weren’t enough, Sam Altman himself is extremely vocal about the future of AGI and the need for safety. In many ways, he says the right things to build faith that his company takes safety with AGI seriously. For example…
May 16, 2023. “My worst fear is that our field can cause significant harm to the world. It’s why we started the company.”
June 26, 2023. “You shouldn’t trust me. No one person should be trusted here. I don’t have super-voting shares; I don’t want them. The board can fire me, I think that’s important… We think this technology… belongs to humanity as a whole.”
May 1, 2024. “I don’t want to minimize [the cataclysmic danger], I think they are very serious… I am worried about the rate at which society can adapt to something so new.”
June 27, 2024. “…at some point, we will do something like a UBI, or very long-term unemployment insurance, we will have some way of redistributing money in society, as people figure out new jobs…”
Sep 12, 2024. “We think it is important that our safety approaches are externally validated by independent experts, and that our decisions are informed at least in part by independent safety and risk assessments… we will continue to enhance safety precautions as our AI systems evolve.”
Oct 16, 2024. “…if things keep going like we think they're going to, it will require society to adapt at a rate that is more challenging… it's very unclear what to do about that.”
Oct 16, 2024. “OpenAI should not be making… determinations about the usage of a language model. There should be a process by which society collectively negotiates how we're going to use this technology.”

Sam Altman has been very vocal about the risks of AI, even asking lawmakers for more regulation on the industry. (Source: ABC News)
These statements essentially boil down to:
AGI is inevitable, so we should take this development seriously
I’m aware of the risks of AGI and believe we should prepare for them
We must consider how AGI will impact society
Proper governance around AGI is critical
But that’s not all he says, and this is where things get interesting.
Many of Sam Altman’s statements take attention away from safety by over-emphasizing the upside and downplaying the risks.
June 30, 2023. “I don’t think we should stop a technology that can end poverty.”
Dec 7, 2023. “AGI will bring the greatest period of abundance that humanity has ever seen.”
Jan 18, 2024. When AGI comes, “…people will have a two-week freakout, and then they will move on with their lives… in those first few years, it will change the world much less.”
Jan 18, 2024. "The technological direction we've been trying to push this in, is one we believe we can make safe.”
Sep 23, 2024. “With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; a defining characteristic of the Intelligence Age will be massive prosperity…”
Sep 23, 2024. “We expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think.”
Nov 4, 2024. “In 5 years… AGI will have come and gone, but society will change surprisingly little. The current shortcomings with existing models will just be taken care of by future generations.”
With these statements, he’s saying:
AGI is too good to pass up
The benefits matter much more than any downside
Society won’t really change all that much
Any problems can just be addressed by future versions of AGI
This is where my faith in OpenAI’s story around safety starts to break down.
From my perspective, I can’t help but see cracks starting to form in the narrative. There are too many contradictions to ignore. Either AGI is a huge risk or it isn’t. It will either cause massive change to society, or it won’t. It could result in a utopia or a dystopia, but not both. But you’ll hear all of these statements made, just at different times.

In pieces like “The Intelligence Age”, Sam Altman waxes poetic about the promise of AI: “With these new abilities, we can have shared prosperity to a degree that seems unimaginable today…”
Either willfully or out of ignorance, OpenAI’s CEO is sending mixed messages to the world about how we should think about safety with AGI. And I can’t help but wonder: if the public dialogue is filled with contradictions, what does the narrative inside the company look like?
I may never know.
All we can look at are the signals. But one thing is for sure: while OpenAI takes great pains to communicate a story about safety (even though sometimes it may send mixed messages), it has failed to build trust around its safety practices, both inside and outside the company.
Why? I have a few ideas.
5 Theories on OpenAI’s Disconnect Between Story and Reality
A quick disclaimer: these theories are only my speculation from looking at the company from the outside in. That said, these aren’t theories I’m pulling out of thin air – they’re all scenarios I’ve witnessed directly at companies I’ve worked with myself, or seen at other businesses covered in the press.
Theory #1: OpenAI’s Public Narrative on Safety is Only Window Dressing
This is the explanation I least hope to be true: OpenAI doesn’t really believe in what it says publicly, and its narrative around safety is just a way to stave off criticism or scrutiny.
Unfortunately, so many CEOs have broken public trust recently that we can no longer give them the benefit of the doubt. Remember Sam Bankman-Fried? He was supposed to be the good guy in crypto, always talking publicly about the need for regulation and to conduct business above board. Turns out he was a crook.
I sincerely hope that isn’t the case with OpenAI.
But as I noted above, there are many inconsistencies and sweeping generalizations in what Sam Altman says. I can’t help but see a CEO who likes to paint a rosy vision of the future, while subtly absolving his company of responsibility for any negative consequences that might emerge.
Whether this is Sam’s intention, or merely the result of a misguided or overly improvised communication strategy, I couldn’t say.
I hope I’m wrong.
Theory #2: OpenAI Lacks the Ability to Put Its Public Narrative into Practice
A more likely theory is that OpenAI is genuine in its beliefs and intentions, but lacks the ability to execute them.
Sam may not be the kind of leader who knows how to instill values and a philosophy into an organization. And perhaps he didn’t make the right hires - he brought on too many people who were technically brilliant but under-skilled in leadership and holding others accountable.
Furthermore, I’ve come across many well-intentioned companies that could not find the time and attention to take top-level values and translate them into actionable policies. With a new territory like AGI, that must be a difficult task.
A more benign explanation? OpenAI is navigating uncharted territory. With few precedents to guide them, they are simply trying to do the best they can in extraordinary circumstances.
Theory #3: OpenAI Has Mutually Incompatible Goals that Compromise Its Safety Narrative
There’s an interesting paradox with this category.
Every company in this space will do everything in its power to reach AGI first. That’s even their obligation to shareholders. Winning means getting those scarce resources of capital, compute, and talent and then moving as fast as possible.
However, a company won’t get those resources if it hasn’t convinced the world its products are safe. As many in this field have said already, safe AGI means moving slowly enough to get a handle on the situation.
So, if you’re OpenAI, do you move fast or do you move slow?
It’s a dynamic defined by the prisoner’s dilemma. If every player agrees to move slowly, then everyone benefits because the chances of a safe, beneficial AGI (vs a harmful or malicious one) are much greater. However, as soon as one player defects (and moves fast), the other players have no choice but to do the same.
That’s the situation we’re in now. Everyone is moving fast. With mutually incompatible goals, OpenAI is in a delicate situation.

At this point, AGI is a heated race. Slowing down is not an option. (Source: OpenCogMind)
Theory #4: OpenAI’s Narrative Overreaches
When you hear OpenAI talk about safety, it casts a pretty wide net. It addresses everything from bias in the model to AI’s effect on human relationships, income redistribution, environmental and energy concerns, existential threats to humanity, joblessness, and the list goes on.
And that’s part of the problem.
Attempting to cover so much ground undermines its credibility. No one expects a heart surgeon to address arthritis in your knee, nor do we expect CEOs to pontificate on social issues they have little expertise in.
Yet that’s what OpenAI too often tries to do.
When I hear Sam Altman posit solutions like Universal Basic Income and “some form of long-term unemployment insurance,” I lose faith. Frankly, he hasn’t earned the right to talk about those topics. He’s a smart guy, but he’s out of his depth here. If he were willing to say “I don’t know” a bit more often, he’d foster more trust.
Theory #5: Sore Losers and an Overly Vocal Minority
Finally, internal turmoil within OpenAI could be happening for another, more emotional reason. A few people got the short end of the stick and were overly vocal about a bad experience with their employer. Remember, people don’t go out of their way to talk to the press about how great their employer is. But if they feel slighted, you’ll hear about it.
It goes without saying, but these theories could all be true to one degree or another. But here’s what’s important: when a brand makes a concerted effort to communicate a narrative, but that narrative is not taking root inside the company, then there’s a major problem. The narrative isn’t being lived out.
If OpenAI doesn’t resolve this, it will soon find out the consequence of a poorly executed narrative. It will leave the door open for a competitor to win the AGI category and provide an interesting history lesson on what NOT to do.
But we don’t have to wait for this saga to play out fully to learn from it. There are already some lessons we can glean from what’s already happened.
Here’s What You Can Learn from OpenAI
We had to cover a lot of ground to get the full picture. Thanks for sticking with me.
As promised, here’s the full story on the lessons you can take from OpenAI. My hope is that these improve your own odds of creating the right story, executing it throughout your business, and winning your category.
Lesson #1: Your Narrative Is What You Say and Do
If you limited your view of OpenAI to its website, you might think that the company has a great narrative. And you’d be correct: it says many of the right things. It has a strong Point of View (POV) about the need for safe AGI and has even documented much of its research on how to improve safety.
But as we’ve seen, OpenAI doesn’t have it all figured out.
For reasons we may never know, company insiders don’t believe that OpenAI is taking safety seriously enough. There’s a disconnect between what the company says, and what it does.
Remember: a narrative only works when others see consistency between your words and your actions. Anyone can put all the right words on paper. Living them out is what matters.
A narrative only works when others see consistency between your words and your actions.
Lesson #2: All Stakeholders Must Back Your Narrative
Studying OpenAI has left me wondering if all its efforts to communicate safety were relegated to a marketing or corporate comms exercise. Because treating strategic work – like your narrative – as a departmental exercise will never do it justice. And it puts you at risk of being a company that can’t fulfill its promises.
Strategic narrative work must involve all the right stakeholders, especially the CEO.
For a startup, that may also include roles like the head of marketing and the head of product or engineering. For a public company, the heads of strategy, sales, brand, legal, or even finance might be added – and sometimes even a board member.
If you get the right people involved, and you facilitate healthy debate along the way, it creates a shared sense of ownership that isn’t possible if the narrative is a mere “marketing project.” Remember: alignment across your team isn’t a “nice to have”; it’s essential.
Lesson #3: Keep Your Narrative in Its Lane
A strategic narrative has a clear job to do: for starters, it must lay out why a brand is different and meaningful to its buyers.
For a brand trying to win a new category, like OpenAI, it must also paint a vision for the future of the category and show the world what the right solution should look like. If you don’t do enough to paint the full picture, your story won’t be compelling.
But your narrative can reach too far, too.
If your story attempts to address issues outside your brand’s sphere of expertise, it loses credibility. When your brand has too many opinions on areas it doesn’t have purview over, you risk sounding like that “know-it-all” uncle at Thanksgiving dinner whom no one wants to listen to.
You must take this into consideration when you communicate your narrative. Have a strong POV about the areas you’ve earned the right to speak to. Shut up about everything else.
Lesson #4: Narrative Isn’t Always About the Product
A narrative isn’t about getting people to buy a product. It’s about getting people to buy into an idea.
Regarding AGI, OpenAI’s current objective isn’t to get people to buy a product – it doesn’t exist yet. The objective is to secure resources to build AGI in the first place. The strategy for getting there is to convince the world that AGI is safe.
This has little to do with describing a product.
Instead, OpenAI must convince the world that it understands the risks of AGI, that it operates with transparency, that it’s willing to work with others to solve safety issues, and that it does what it says it will do.
Look, sometimes a strategic narrative can play at the product level. But not always. The stakes are different in every situation – it’s your job to recognize them.
Final Thoughts
Despite a poor grade on its safety narrative, I do commend OpenAI for its progress toward AGI. To say its work is exciting would be a huge understatement.
But its technical achievements only underscore why the ability to tell and execute the right story is so crucial - because it could all come undone if trust in the company erodes.
With so many unknowns, OpenAI’s success is not a foregone conclusion. Will it retain top talent? Secure enough capital? Convince regulators to allow AGI deployment? Falling short on any of these could put its future at stake.
With so many unknowns, OpenAI’s success is not a foregone conclusion.
There are two approaches OpenAI could take on the rest of this journey.
One would be to simply ignore issues with its narrative on safety and bulldoze through them as they come up. But that’s expensive. Fighting battle after battle has its costs, even if you win. Not just in terms of capital, but in terms of energy, momentum, and morale.
The other approach would be to rebuild trust by shoring up the company’s ability to tell the right story and execute it throughout the company.
It wouldn’t solve everything.
But it sure would be a good place to start.
3 Questions to Leave You With
You may not be working in AGI, but my hope is that this issue gave you a new perspective on how to approach your situation. Here are some additional questions to consider as you work on your own narrative:
If there were one message our brand needs to share with the world right now, what would it be?
For OpenAI, it’s simple: we can build AGI safely. If it gets that right, the rest of the journey gets much easier. For your brand, it might be something very different. What do you think it is?
Where might there be cracks in our company’s understanding and adoption of our story?
Your brand might tell a great story to the world, but does your team believe it? Do they understand it? Do they have a clear direction on how to execute it? Chances are, there are weak spots that need attention. Go find them.
As the leader of our business, how have my public statements helped or hindered our ability to win our category?
If you’re the CEO, or simply have a role that’s public-facing, the things you say and do can have an enormous impact on the perception of your company. Even choosing not to say anything sends a message. This is a good time to ask yourself - what have I said recently, and what impact might it have had?
Thanks for reading.
If we haven’t met before, I own a consultancy called Flag & Frontier.
CMOs and CEOs hire me to align their executive teams around the right category strategy and strategic narrative. If you’re interested in learning more about that, schedule a chat with me here.
Cheers ✌️

John Rougeux
Founder, Flag & Frontier
[email protected]
LinkedIn