On national television this week, a former OpenAI researcher looked into the camera and said there is a 70% chance that artificial intelligence kills all of us. Not in a century. Not in fifty years. In about five.
His name is Daniel Kokotajlo. He used to work on AI safety at OpenAI — the company building the most powerful AI systems on Earth. He quit because he lost confidence that OpenAI would "behave responsibly around the time of AGI." He forfeited a significant equity stake to do so. And now he is on The Daily Show telling America that the thing he helped build might end civilization.
"We at the Futures Project think that there's a 70% chance of all humans dead or something similarly bad," Kokotajlo said during the interview. When the host asked him to clarify, he did not flinch. "Correct. Extinction."
This is not a random internet doomsayer. This is a man who was inside the room where it happened, who saw the trajectory of AI development from the inside of the world's leading AI lab, and who walked away from life-changing money because what he saw terrified him.
The question is: should the rest of us be terrified too?
Who Is Daniel Kokotajlo?
Understanding the weight of this warning requires understanding who is making it.
Kokotajlo is not a typical tech whistleblower. He is a former philosophy PhD student who pivoted into AI research because he genuinely believed the field posed existential questions that philosophers needed to engage with. He worked at AI Impacts, then the Center on Long-Term Risk, before joining OpenAI to work specifically on AI safety.
At OpenAI, his job was to think about what could go wrong. Not in the "oops, the chatbot said something offensive" sense. In the "what happens when these systems become smarter than us" sense. He was paid to contemplate the end of the world — and he concluded that it was not only possible but probable.
In 2021, before he joined OpenAI, Kokotajlo wrote a blog post titled "What 2026 Looks Like." It was a speculative forecast of how AI development would unfold over the next five years. At the time, it read like science fiction. GPT-3 was the state of the art, and the idea of AI agents autonomously managing workflows, writing code, and controlling computers seemed far-fetched.
Fast forward to March 2026. AI agents like OpenClaw are managing email, browsing the web, and executing code autonomously. NVIDIA's CEO is calling agentic AI "the next ChatGPT." AI models can write compilers, find zero-day vulnerabilities in production code, and generate photorealistic video from text prompts.
Kokotajlo's 2021 predictions were not just directionally correct — they were eerily accurate. This is the man telling us we have five years.
In June 2024, Kokotajlo co-signed an open letter with other current and former OpenAI employees demanding the "right to warn" about AI risks without fear of reprisal. The letter argued that AI companies were prioritizing commercial interests over safety and that employees who raised concerns were being silenced through restrictive non-disparagement agreements.
By then he had already left OpenAI, reportedly forfeiting equity worth millions. He went on to found the AI Futures Project, an organization dedicated to forecasting AI development and its risks. Its flagship report, "AI 2027," lays out a detailed scenario for how we get from here to artificial general intelligence, and what happens next.
The conclusion is not optimistic.
The 70% Number: Where Does It Come From?
Let us unpack that terrifying statistic. Kokotajlo is not claiming certainty. He is claiming probability — a 70% chance that advanced AI leads to human extinction or "something similarly bad." That still leaves a 30% chance that things work out fine.
But think about it this way: if someone told you there was a 70% chance your house would burn down in the next five years, you would not calmly continue redecorating the kitchen. You would buy fire extinguishers, install sprinklers, and probably move.
The estimate comes from the AI Futures Project's analysis of several converging risk factors:
1. The pace of AI development is accelerating, not plateauing.
Every year, the capabilities of frontier AI models take a significant leap. GPT-3 to GPT-4 was a jump. GPT-4 to GPT-5 was another. Claude Opus, Gemini Ultra, and open-source models like DeepSeek and Qwen are all pushing the frontier simultaneously. And now, leaked documents suggest Anthropic's upcoming Claude Mythos model represents a "step change" in capabilities beyond anything seen before.
Kokotajlo's argument is that this acceleration is not linear — it is exponential. Each generation of AI models is being used to help build the next generation. AI is writing its own code, optimizing its own training, and discovering new techniques that human researchers would not have found on their own.
2. AI systems are becoming harder to control.
Today, shutting down an AI system is as simple as closing a laptop or pulling a plug. But AI is rapidly being embedded into critical infrastructure — military systems, financial markets, power grids, healthcare networks, and autonomous vehicles. As these systems become more interconnected and autonomous, the "just turn it off" option becomes increasingly impractical.
Kokotajlo warns that we may reach a point where AI systems are so deeply integrated into civilization's operating stack that removing them causes more damage than leaving them running. At that point, we are no longer in control — we are dependent.
3. Safety research is losing the race against capability research.
This is perhaps Kokotajlo's most damning charge against his former employer and the industry at large. The teams working on making AI safe are dramatically outgunned — in funding, headcount, and institutional support — by the teams working on making AI more powerful.
OpenAI's safety team has experienced notable departures. Anthropic, despite branding itself as the "safety-first" company, just accidentally leaked details of a model it acknowledges poses "unprecedented cybersecurity risks." Google, Meta, and Chinese AI labs are racing to keep up, with far less visible investment in safety.
The incentive structure is broken: the company that ships the most capable model fastest wins the market. Safety is a cost center that slows you down.
4. There is no global coordination mechanism.
Nuclear weapons had the Cold War's mutually assured destruction and eventually arms control treaties. Climate change has the Paris Agreement (however imperfect). AI has nothing. No international treaty. No enforcement mechanism. No agreed-upon red lines. Not even a shared definition of what "dangerous AI" means.
The EU AI Act is a start, but it is primarily focused on existing AI applications, not future existential risks. The US has executive orders that change with each administration. China has its own regulatory framework that prioritizes state control over safety. And the companies building the most powerful systems are lobbying aggressively against any regulation that might slow them down.
The Counter-Arguments: Why Others Think He Is Wrong
Not everyone agrees with Kokotajlo's timeline or probability estimate. And in fairness, there are strong counter-arguments.
"AI progress will hit a wall." Some researchers argue that current approaches — scaling transformer models with more data and compute — will plateau. The easy gains are behind us, and future improvements will require fundamental breakthroughs that may take decades, not years.
"Alignment is making progress." While Kokotajlo paints safety research as losing the race, others point to genuine advances in interpretability, RLHF (reinforcement learning from human feedback), constitutional AI, and other alignment techniques. Anthropic itself has published significant safety research. The field is not standing still.
"Economic incentives favor safety." Companies that ship dangerous AI products face lawsuits, regulatory action, and reputational damage. The market itself creates pressure to build safe systems. No one wants to be the company that caused the AI apocalypse — it is bad for the stock price.
"Superintelligence does not automatically mean hostility." The classic "paperclip maximizer" scenario — where an AI optimizes for a goal with catastrophic side effects — assumes a particular type of failure that may not be inevitable. An AI smart enough to be dangerous might also be smart enough to understand human values.
These are not trivial objections. They represent the views of serious researchers at major institutions.
But Kokotajlo has a response: "All of those arguments require things going right. I am estimating what happens if some of them go wrong."
🔥 Our Hot Take: The Man Who Predicted 2026 Deserves Our Attention
Here is what makes Kokotajlo different from your average AI doomer on Twitter.
He was right before.
His 2021 blog post, "What 2026 Looks Like," written when GPT-3 was the state of the art, predicted with remarkable accuracy the world we are living in right now. AI agents managing workflows. Autonomous coding assistants. Models that can reason, plan, and execute multi-step tasks. The explosion of open-source AI. The geopolitical competition between the US and China.
If someone predicts the future once and gets it right, you can call it luck. If they lay out a detailed, multi-year forecast and the world unfolds almost exactly as described, you should probably listen when they tell you what comes next.
He gave up millions to say this.
Kokotajlo walked away from OpenAI equity that would have been worth a fortune. He did not have to become a whistleblower. He could have kept his head down, collected his stock options, and enjoyed the ride. Instead, he chose to warn people, knowing it would make him a controversial figure and potentially burn bridges across the industry.
People who sacrifice personal wealth to deliver uncomfortable truths deserve more attention than people who profit from telling you everything is fine.
The 70% number should not be taken literally — but it should be taken seriously.
Is the probability of human extinction from AI exactly 70%? Of course not. No one can assign precise probabilities to unprecedented events. But the directional signal is clear: a credible insider who saw the trajectory of AI development firsthand believes the risk is not remote. It is not theoretical. It is imminent.
Even if you think his estimate is off by a factor of ten — even if you think the real probability is 7% — that is still catastrophically high for an event that means the end of everything. We spend billions preparing for asteroid impacts with a far lower probability than that.
What Should We Actually Do?
Kokotajlo is not calling for us to smash all the computers and move to the woods. He is calling for specific, actionable steps:
- Mandatory safety evaluations before deploying frontier AI models
- International coordination on AI development, similar to nuclear non-proliferation frameworks
- Whistleblower protections for AI researchers who identify risks
- Slowing down the deployment of the most dangerous capabilities until safety catches up
- Transparency from AI companies about what their models can actually do
None of these are radical proposals. All of them are being resisted by the industry.
The Bottom Line
Daniel Kokotajlo sat on The Daily Show and told America there is a 70% chance AI kills everyone within five years. The audience laughed nervously. The host changed the subject. Social media made memes.
And the AI labs went back to work building the next model.
This is the pattern with existential warnings. They sound crazy until they do not. Climate scientists warned for decades before anyone took them seriously. Nuclear physicists warned about the bomb before Hiroshima made the abstract concrete.
Kokotajlo is not asking you to panic. He is asking you to pay attention. To demand that the companies building the most powerful technology in human history take safety as seriously as they take their quarterly earnings.
Whether the probability is 70% or 7% or 0.7%, the stakes are too high for indifference.
The man who predicted 2026 is now predicting 2031. And he is scared.
Maybe we should be too.