There's an unwritten rule in the technology industry: if you're going to build the most dangerous AI model ever created, you should probably make sure the blog post about it isn't sitting in a publicly searchable data store. Anthropic, the $60 billion AI company founded by former OpenAI researchers who left because they thought their old employer wasn't being careful enough about safety, apparently didn't get the memo.
On March 26, 2026, Fortune reporter Bea Nolan discovered unpublished files tied to Anthropic's blog sitting in a publicly accessible data cache. Among the documents was a draft blog post describing a new AI model called 'Claude Mythos,' described by Anthropic's own materials as 'the most powerful AI model we've ever developed.' The documents had been left exposed due to what the company later called a 'human error in the configuration of its CMS tools.'
Within 24 hours, cybersecurity stocks were in freefall, Bitcoin was sliding, and the internet was doing what the internet does best: pointing out the delicious irony of a safety-focused AI company accidentally leaking its own doomsday weapon through a misconfigured WordPress equivalent.
What Leaked: Meet Claude Mythos and Its Evil Twin, Capybara
The leaked materials paint a picture of a model that represents a genuine leap beyond anything Anthropic, or arguably anyone else, has built before. According to an archived development page reviewed by Decrypt, Anthropic describes Mythos as follows:
'Mythos is a new name for a new tier of model: larger and more intelligent than our Opus models, which were, until now, our most powerful. We chose the name to evoke the deep connective tissues that link together knowledge and ideas.'
This is significant. In Anthropic's model hierarchy, Opus has always been the crown jewel: the most powerful, most expensive, most capable tier. Mythos isn't just another Opus update. It's an entirely new tier above Opus. Think of it as the penthouse floor that nobody knew the building had.
According to the leaked benchmarks, Mythos scored 'dramatically higher' than Claude Opus 4.6, the current top-of-the-line model, across three critical domains: software coding, academic reasoning, and cybersecurity. Anthropic didn't share specific numbers, but 'dramatically higher' from a company known for its measured, precise language suggests this isn't a marginal improvement.
But here's where it gets really interesting: the documents also reference a second version of the model, internally codenamed 'Capybara.' Capybara appears to be positioned as Mythos version two, an even more advanced iteration that Anthropic is approaching with what the leaked materials describe as 'extra caution.' If Mythos is the model that keeps cybersecurity executives up at night, Capybara is the one that gives them actual nightmares.
Anthropic's Own Words: 'Far Ahead of Any Other AI Model in Cyber Capabilities'
The most alarming aspect of this leak isn't that it happened; companies leak things all the time. It's what Anthropic's own internal documents say about the model's capabilities and risks.
According to the leaked draft blog, Anthropic wrote: 'Although Mythos is currently far ahead of any other AI model in cyber capabilities, it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.'
Read that again. Anthropic isn't saying competitors might eventually build something dangerous. They're saying they've already built it, and they believe others will follow. The company is essentially warning that the cybersecurity arms race between AI attackers and AI defenders is about to tip decisively in favor of the attackers, and that their own model is leading the charge.
The leaked materials elaborate further: 'In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses, even beyond what we learn in our own testing. In particular, we want to understand the model's potential near-term risks in the realm of cybersecurity, and share the results to help cyber defenders prepare.'
This is Anthropic telling the world, in their own words, that they've built something so powerful in the cybersecurity domain that they're afraid of what happens when it, or models like it, gets into the wrong hands. The plan was to release this information on their own terms, through controlled channels, with appropriate context and caveats. Instead, it got dumped into the public domain through a misconfigured CMS.
The Market Reaction: Billions Wiped in Hours
Wall Street's reaction was swift and brutal. Within hours of the leak becoming public, cybersecurity stocks experienced their worst single-day decline in months:
- Palo Alto Networks (PANW): Down approximately 7%
- CrowdStrike (CRWD): Down approximately 6.4%
- Zscaler (ZS): Down approximately 5.8%
- Fortinet (FTNT): Down approximately 4%
Even Bitcoin slid alongside software stocks, according to CoinDesk reporting, as the broader tech market digested the implications of an AI model that could fundamentally alter the cybersecurity landscape.
The selloff echoes an eerily similar event from just one month earlier. In February 2026, Anthropic unveiled Claude Cowork, an AI system designed to automate complex workplace tasks including contract review and compliance. That announcement triggered a broad selloff across software and professional-services companies that erased roughly $285 billion in market value.
As Nexatech Ventures founder Scott Dylan told Decrypt at the time: 'The market's response was a signal, not that AI agents will immediately replace these businesses, but that investors are finally pricing in the structural risk that foundation model providers can now compete directly with the software layer.'
Two massive market disruptions in a single month. Anthropic is now the only AI company in history that can tank entire market sectors by simply talking about what it has built, or in this case, by accidentally leaving the documentation in an unlocked filing cabinet.
The Irony Is Physically Painful
Let's talk about the elephant in the room. Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and several other former OpenAI researchers who left specifically because they were concerned about AI safety. The company's entire brand identity is built around being the responsible, cautious, safety-first alternative to the 'move fast and break things' approach of its competitors.
Anthropic created the concept of 'Constitutional AI.' They pioneered the Responsible Scaling Policy. They publish detailed model safety cards. They have an entire research division dedicated to AI alignment and interpretability. They are, by their own description, the adults in the room.
And yet, here we are. The company that worries about existential risk from superintelligent AI couldn't secure a data store. The firm that employs some of the world's foremost experts on AI safety had a CMS configuration so basic that a journalist could find unreleased product documentation through a public search.
This isn't a sophisticated hack. This isn't a state-sponsored espionage operation. This isn't even a disgruntled employee leaking to the press. This is someone forgetting to check a privacy setting, the digital equivalent of leaving your diary open on a park bench.
Futurism's headline captured the mood perfectly: 'Anthropic Just Leaked Upcoming Model With Unprecedented Cybersecurity Risks in the Most Ironic Way Possible.'
The irony cuts even deeper when you consider the timing. Anthropic recently reopened discussions with the Pentagon about potential AI applications for national defense. If you're pitching yourself to the world's most security-conscious customer as a trusted AI partner, accidentally leaking your crown jewels through an unsecured data store is... not ideal messaging.
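Anthropic has not disclosed which stack or setting was actually at fault, so any specifics are guesswork. But as a generic illustration of how small this class of mistake can be, here is what a world-readable object-store policy looks like in AWS S3 terms (the bucket name is hypothetical; Anthropic's actual infrastructure is unknown):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForDrafts",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-cms-drafts/*"
    }
  ]
}
```

The single value `"Principal": "*"` is the whole difference: it grants read access to anyone on the internet, including search crawlers. Cloud providers ship guardrails for exactly this failure mode (S3's Block Public Access setting, for instance, overrides public bucket policies account-wide), which is part of why incidents like this read as process failures rather than technical ones.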
What This Means for the AI Industry
Beyond the schadenfreude, the Mythos leak raises several serious questions that the entire AI industry needs to grapple with.
First: The capability overhang is real. Companies are building models that are significantly more powerful than what's publicly available. If Anthropic has Mythos, and is already working on Capybara, it's safe to assume that OpenAI, Google DeepMind, and others have comparable projects in their labs. The public-facing models we interact with daily are the tip of the iceberg. The gap between what exists and what's deployed is widening.
Second: AI's impact on cybersecurity is no longer theoretical. When the company that built the model warns in its own internal documents that it could 'exploit vulnerabilities in ways that far outpace the efforts of defenders,' that's not marketing hyperbole or academic hand-waving. That's a builder looking at what they've created and feeling genuinely concerned. The cybersecurity industry needs to take this seriously, not as a future risk but as a present reality.
Third: Market pricing of AI disruption is becoming a real force. Two selloffs in one month ($285 billion from Claude Cowork, and now another major hit from Mythos) suggest that financial markets are beginning to internalize the disruptive potential of frontier AI models. Every AI company announcement is now a potential market-moving event, not just for tech stocks but for the industries those AI models target.
Fourth: Operational security at AI labs is inadequate. If the company most associated with AI safety can't secure its own CMS, what does that say about the dozens of smaller AI labs racing to build powerful models? The AI safety conversation has been almost entirely focused on the models themselves: alignment, interpretability, guardrails. This incident is a stark reminder that mundane operational security failures can be just as consequential as any misaligned AI system.
The Controlled Release That Wasn't
An Anthropic spokesperson confirmed the model's existence to Fortune after the leak, stating: 'We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it.'
The company's plan was apparently to start with a limited early-access rollout aimed specifically at organizations working on cybersecurity defense. This is a responsible approach: give the defenders a head start before the attackers can get their hands on it. It's the kind of thoughtful, staged release strategy that Anthropic is known for and that the industry should follow.
But that strategy assumed Anthropic would control the narrative. Instead, the world learned about Mythos through a Fortune exclusive about a security lapse. The conversation isn't about how responsibly Anthropic is handling its most powerful model. It's about how irresponsible the company was with its data storage. The frame has shifted from 'Anthropic is being careful' to 'Anthropic can't even secure a blog draft.'
This matters because public trust in AI companies is fragile. Every incident like this erodes the credibility that labs need to maintain when they ask regulators and the public to trust them with increasingly powerful systems. If you want governments to let you self-regulate, you need to demonstrate basic competence in securing your own infrastructure.
🔥 Our Hot Take
This is the most important AI story of 2026 so far, and not for the reason you think.
Yes, Claude Mythos sounds incredibly powerful. Yes, the cybersecurity implications are genuinely concerning. And yes, the market reaction shows that investors are taking AI disruption seriously.
But the real story here is about the gap between AI safety rhetoric and operational reality. Anthropic talks the best game in the industry about responsible AI development. Their research is world-class. Their policies are thoughtful. Their communication is measured and precise. And none of that mattered when someone misconfigured a data store.
This is a lesson every AI company, and every organization deploying AI, needs to internalize: safety isn't just about the model. It's about the entire system around the model. The supply chain, the infrastructure, the human processes, the boring operational details that nobody wants to think about. You can have the most aligned AI in the world, and it doesn't matter if your deployment pipeline has a permissions error.
We're also watching closely to see how this affects the Pentagon conversation. Anthropic was positioning itself as a trusted partner for defense applications. Defense contracts require security clearances, ITAR compliance, and operational security standards that make the commercial tech world look like a children's playgroup. A CMS leak won't kill the deal, but it certainly won't help.
Our prediction: Anthropic will use this incident to accelerate Mythos's release timeline. The cat is out of the bag; everyone knows the model exists and roughly what it can do. Keeping it under wraps serves no strategic purpose now. Expect an official announcement within weeks, positioned as 'we were already planning to share this' (which, to be fair, they were).
In the meantime, if you're holding cybersecurity stocks, don't panic sell. The market overreacted. Powerful AI models are a threat to legacy cybersecurity approaches, but they're also the biggest opportunity the industry has ever seen. The companies that can integrate AI-powered defense capabilities will be worth more, not less. The dip is a buying opportunity, just maybe not on a Sunday when you should be having brunch.
What to Watch
- Official Mythos announcement: Expect Anthropic to formally unveil the model within 2-4 weeks. The controlled rollout strategy is still intact, but the timeline has been compressed.
- Pentagon talks: Watch for any signals about whether the leak affects Anthropic's defense positioning. Congressional hearings are not out of the question.
- Competitor responses: OpenAI, Google, and Meta will want to signal that they have comparable capabilities. Expect a flurry of benchmark announcements.
- Cybersecurity industry adaptation: The smart cybersecurity companies will pivot to 'AI-powered defense' messaging immediately. Palo Alto Networks and CrowdStrike already have significant AI capabilities; this could actually accelerate their strategic narratives.
- Capybara: The leaked documents reference Mythos version 2. If version 1 is this powerful, the next iteration could redefine what frontier AI looks like.
The AgentBear Corps is tracking the frontier AI race. Follow us for coverage of the models that move markets, and the humans who accidentally leak them.