The unthinkable has happened. In the span of 48 hours, the AI landscape has been fundamentally reshaped—not by a new model release, but by sworn enemies joining forces, and an underdog becoming king.
OpenAI, Anthropic, and Google—three companies that have spent years trashing each other's models, poaching each other's talent, and racing to outspend one another on compute—have suddenly decided they're on the same team. Their common enemy? Chinese AI labs they accuse of "stealing" their intellectual property through a technique called adversarial distillation.
And if that wasn't dramatic enough, Anthropic—the supposed "safety-first" underdog—just announced it's generating $30 billion in annualized revenue, officially overtaking OpenAI's $25 billion run rate. Oh, and they dropped Claude Mythos Preview, a model so powerful they're literally afraid to release it to the public.
The AI wars just entered a new phase. Buckle up.
When Rivals Become Allies
According to a Bloomberg report published April 6, the three US AI giants are now sharing intelligence through the Frontier Model Forum—the same industry nonprofit they founded with Microsoft back in 2023. The goal? Detect and stop what they call "adversarial distillation attempts" by Chinese competitors.
This is unprecedented. We're talking about companies that have been in a bare-knuckle brawl for market dominance, now swapping notes like classmates cheating on a test they didn't study for.
But the threat they perceive is serious enough to overcome their rivalries. Here's what's happening: Chinese AI labs are allegedly using a technique called model distillation to essentially clone the capabilities of US frontier models at a fraction of the cost. They pump vast quantities of prompts through OpenAI's GPT-4, Anthropic's Claude, or Google's Gemini, capture the outputs, and use that data to train their own "student" models that mimic the "teacher" models' behavior.
The result? Chinese models like DeepSeek's R1 that match Western capabilities but undercut them dramatically on price. We're talking about models that cost pennies on the dollar compared to their American counterparts.
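The teacher-student loop described above can be sketched in miniature. The code below is purely illustrative: the "teacher" is a stub function standing in for a frontier-model API, and the "student" simply memorizes captured outputs; real distillation trains a smaller neural network on the teacher's responses (often on its output probabilities, not just final text).

```python
# Toy sketch of model distillation. Everything here is a stand-in:
# query_teacher() represents calls to a frontier model's API, and
# ToyStudent represents the cheaper "student" model being trained.

def query_teacher(prompt: str) -> str:
    # In the scenario described, this would be a paid API call to a
    # frontier model. Here it's a trivial rule the student will imitate.
    return prompt.upper()

def build_distillation_set(prompts):
    # Step 1: pump large numbers of prompts through the teacher
    # and capture its outputs as training pairs.
    return [(p, query_teacher(p)) for p in prompts]

class ToyStudent:
    # Step 2: fit a "student" to the captured (prompt, output) pairs.
    # Memorization is the simplest possible stand-in for training.
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        for prompt, output in pairs:
            self.memory[prompt] = output

    def generate(self, prompt: str) -> str:
        # Returns the mimicked behavior for seen prompts,
        # and nothing for prompts it never observed.
        return self.memory.get(prompt, "")

prompts = ["hello", "claude", "gemini"]
student = ToyStudent()
student.train(build_distillation_set(prompts))
print(student.generate("hello"))  # mimics the teacher: "HELLO"
```

The economics in the article follow directly from this shape: the expensive part (the teacher's training run) is paid for by someone else, while the student only pays for API queries and a much smaller training job.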
DeepSeek Changed Everything
This isn't theoretical. The wake-up call came in January 2025, when Chinese startup DeepSeek dropped R1—a reasoning model that sent shockwaves through Silicon Valley. R1 performed near-parity with OpenAI's best models but was practically free to use.
Microsoft and OpenAI immediately launched an investigation. What they found—or at least what they allege—was that DeepSeek had improperly "exfiltrated" massive amounts of training data from OpenAI's APIs to build their model.
In February, OpenAI took their case to Congress. In a memo to the House Select Committee on China, they accused DeepSeek of "free-riding on the capabilities developed by OpenAI and other US frontier labs" and warned that the Chinese firm was already working on a new version using the same techniques.
The economic stakes are staggering. US officials estimate that unauthorized distillation is costing American AI labs billions of dollars in annual profit. When you're spending hundreds of billions on data centers and GPUs, watching competitors clone your work for essentially free is existential.
The National Security Angle
But this isn't just about money—it's being framed as a national security issue. And honestly? They might have a point.
The fear is that Chinese labs could use distillation to create models stripped of safety guardrails. Think AI systems that could help develop bioweapons, launch sophisticated cyberattacks, or spread disinformation at scale—all without the limitations that US labs have painstakingly built into their systems.
"Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen," the Straits Times reported.
The Trump administration appears receptive. The AI Action Plan unveiled in 2025 called for creating an "information sharing and analysis center" specifically for this purpose. But there are complications: antitrust concerns make the companies nervous about what they can legally share, and they're reportedly seeking clearer guidance from Washington.
Meanwhile, Anthropic Crowned Itself King
While everyone was watching the China drama, Anthropic executed the business equivalent of a coup.
On April 7, the company announced its annualized revenue run rate had hit $30 billion—up from just $9 billion at the end of 2025. That's more than a 3x increase in just over three months. For context, that's higher than the trailing 12-month revenues of all but 129 companies in the S&P 500.
OpenAI? They're sitting at roughly $25 billion. Anthropic has officially taken the crown as the highest-earning AI unicorn on the planet.
The customer growth is equally jaw-dropping. In February, Anthropic reported 500 business customers spending over $1 million annually. Today? That number exceeds 1,000—doubling in less than two months.
This isn't just growth. This is hockey-stick growth at a scale that defies conventional business logic.
Mythos: The Model Too Dangerous to Release
But Anthropic wasn't done. On the same day they flexed their financial muscles, they unveiled Claude Mythos Preview—and immediately announced they won't be making it publicly available.
Why? Because it's apparently too good at hacking.
Mythos is Anthropic's most capable model yet. Not because it was specifically trained for cybersecurity, but because its general coding and reasoning abilities are so advanced that they naturally extend to finding security vulnerabilities. According to Anthropic, Mythos has already identified thousands of zero-day vulnerabilities, including critical bugs that had been hiding in code for decades. Among them: a 27-year-old bug in OpenBSD, an operating system specifically designed for security.
The company is so concerned about potential misuse that they're launching Project Glasswing—a limited-access program where only select partners (Apple, Google, Microsoft, Amazon, Nvidia, CrowdStrike, Palo Alto Networks, and about 40 others) can use Mythos for defensive security purposes only.
"There was a lot of internal deliberation," Dianne Penn, Anthropic's head of research product management, told CNBC. "We really do view this as a first step for giving a lot of cyber defenders a head start on a topic that will be increasingly important."
The message is clear: this technology is a dual-use nightmare, and Anthropic is trying to get ahead of the inevitable race between defensive and offensive applications.
The Infrastructure Arms Race
Behind all these headlines is the relentless drumbeat of infrastructure expansion. You can't talk about AI in 2026 without talking about compute—and Anthropic just secured the bag.
Alongside their revenue announcement, Anthropic confirmed expanded partnerships with Google and Broadcom. According to a Broadcom filing, Anthropic will access 3.5 gigawatts of TPU-based AI compute capacity beginning in 2027. That's Google-speak for "a mind-boggling amount of AI chips."
This compute bonanza will power future Claude models and help meet what Anthropic calls "extraordinary demand from customers worldwide."
The message to competitors? We're just getting started.
The Bigger Picture: A New Cold War
What we're witnessing isn't just a business story—it's the AI equivalent of a Cold War arms race. The lines are being drawn, and they're not just economic.
The US-China AI rivalry has moved from subtext to headline. When OpenAI, Anthropic, and Google—companies that have been at each other's throats—are suddenly holding hands and singing kumbaya, you know something fundamental has shifted.
Here's the uncomfortable truth: Chinese AI labs have played the open-source game brilliantly. By releasing open-weight models that anyone can download and run locally, they've created an ecosystem that naturally undercuts the proprietary, API-based business models of US labs. DeepSeek R1 wasn't just a technical achievement—it was a business model grenade lobbed into Silicon Valley's carefully constructed fortresses.
The US response—forming an alliance to combat distillation—is necessary but insufficient. They're playing defense when they need to be playing offense. The real question is whether American AI companies can innovate fast enough to maintain their lead, or whether the cost advantages of Chinese models will eventually erode their market position entirely.
What This Means for the Industry
For businesses and developers, this landscape shift has immediate implications. The proliferation of capable, low-cost Chinese models means pricing pressure on US APIs will only intensify. At the same time, the alliance's focus on detecting distillation attempts suggests API restrictions and rate limiting may become more aggressive.
Enterprise buyers face a choice: accept the security and compliance risks of Chinese models for cost savings, or pay premium prices for US models with stronger guardrails. For regulated industries—healthcare, finance, defense—the answer is obvious. For startups and cost-sensitive applications, the math gets complicated.
The distillation arms race also raises questions about intellectual property in the age of AI. When a model can be effectively cloned through API queries, what does "proprietary technology" even mean? This is uncharted legal territory, and the courts will be sorting it out for years.
Our Hot Take
We're witnessing the moment when AI stops being a technology story and becomes a geopolitical story. The alliance has formed not because these companies suddenly love each other, but because they recognize a common threat to their collective survival.
As for Anthropic's rise? This is the most interesting subplot. For years, they positioned themselves as the "responsible" alternative to OpenAI—safety-first, cautious, perhaps a bit slower. But the numbers don't lie: $30 billion ARR means they've figured out how to scale while maintaining their principles (or at least their marketing about principles).
The Mythos release strategy is peak Anthropic—simultaneously showing off their technical capabilities while making a meta-point about responsibility. "Look how powerful our model is. Look how dangerous it could be. Look how carefully we're handling it." It's brilliant positioning.
But here's what keeps us up at night: if Mythos is too dangerous to release today, what happens when the next version is even more capable? And what happens when Chinese labs—undeterred by terms of service violations—train their own versions using the same distillation techniques the alliance is trying to stop?
The genie is out of the bottle. The question isn't whether powerful AI will proliferate—it's who controls it, who benefits from it, and whether we can keep it from being used against us.
The AI wars are just beginning. The alliance has formed. The stakes couldn't be higher.
Sources: Bloomberg, CNBC, TechCrunch, Sherwood News, The Straits Times, Gadgets 360, Fortune