Bundesbank chief says companies affected by Anthropic's AI should get access to assess risks
Joachim Nagel argues that sharing the Mythos model more widely is key to fair oversight and responsible development
At a glance
What matters most
- Bundesbank President Joachim Nagel says companies and institutions affected by Anthropic's Mythos AI should be given access to assess its risks fairly.
- Anthropic has chosen not to release Mythos to the public, citing safety concerns, but some question whether the move is more about image than ethics.
- The debate highlights growing tension between AI innovation, corporate control, and the need for transparent, shared oversight.
- Experts warn that without broader access, only a few powerful entities will shape how advanced AI is governed.
Across the spectrum
What people are saying
A quick look at how the same story is being framed from different angles.
On the Left
Withholding powerful AI like Mythos under the guise of safety risks entrenching corporate control over technology that affects everyone. True responsibility means transparency, not PR-driven secrecy. Affected institutions - especially public ones - should have access to audit and challenge these systems, not just accept corporate assurances.
In the Center
Anthropic has a duty to consider safety, but so do regulators and institutions that will face the consequences of AI misuse. Nagel's call for balanced access makes sense: full public release could be risky, but letting one company decide who sees the model undermines trust and preparedness.
On the Right
Anthropic is taking a responsible approach by not rushing a high-risk AI to market. Companies should have the freedom to manage their technology as they see fit, especially when national security or economic stability could be at stake. Government demands for access could lead to overreach or politicization of private innovation.
Full coverage
What you should know
Joachim Nagel, president of Germany's central bank, is urging broader access to Anthropic's latest AI model, Mythos, saying those affected by its capabilities should have a chance to understand it. In a statement released Monday, Nagel argued that keeping such a powerful system locked down limits the ability of regulators, businesses, and civil society to assess its real-world impact. His position adds weight to a growing chorus calling for more transparency in how advanced AI systems are evaluated and governed.
Anthropic, the AI company behind Mythos, announced earlier this month that it had built a model so capable it chose not to release it publicly. The company said Mythos Preview excels at identifying and exploiting weaknesses in digital systems, raising serious safety concerns. By withholding it, Anthropic framed the decision as an act of responsibility. But critics, including some researchers and policymakers, wonder if the move is as much about public perception as it is about safety.
Nagel didn't dispute the risks. Instead, he focused on fairness. If AI systems like Mythos are going to influence financial systems, infrastructure, and public services, then the organizations that operate them need a way to test and prepare. "A level playing field means shared understanding," he said. "When a technology can disrupt entire sectors, access shouldn't be controlled by one company's discretion alone."
The debate cuts to the heart of how society manages powerful new technologies. On one side, there's a push to move fast and let innovation lead. On the other, there's concern that without oversight, a handful of private firms could end up steering the future of AI with little accountability. Nagel's stance leans toward the latter - not demanding a full public release, but insisting that affected institutions get meaningful access.
Some experts agree. They argue that responsible development doesn't mean secrecy - it means inclusive testing. "We've seen this before with cybersecurity tools," said one AI policy analyst not affiliated with Anthropic. "The best defenses come from stress-testing by independent teams, not just internal reviews." Without that, even well-intentioned safeguards might miss real-world threats.
Anthropic has said it's working with select partners and government agencies to study Mythos in controlled settings. But Nagel and others say that's not enough. They want clearer criteria for who gets access and why, especially as similar models are likely already in development elsewhere. The concern isn't just about one AI - it's about setting a precedent for how the most powerful systems are handled going forward.
As AI continues to evolve, the question isn't just what these models can do, but who gets to decide. Nagel's message is clear: if AI is going to shape the global economy, the people and institutions living with its consequences should have a say - and the tools to understand it.
About this author
Zwely News Staff compiles multi-source reporting into concise, viewpoint-aware coverage for readers who want context without noise.
Source Notes
Mythos Access Must Be Granted on Level Playing Field, Nagel Says
Anthropic’s Mythos model should be shared with affected organizations to ensure a level playing field in assessing its uses and dangers, according to Bundesbank President Joachim Nagel.
Mythos: are fears over new AI model panic or PR? – podcast
Earlier this month the AI company Anthropic said it had created a model so powerful that, out of a sense of responsibility, it was not going to release it to the public. Anthropic says the model, Mythos Preview, excels at spotting and explo...