
Powell and Bessent warn bank CEOs about risks from Anthropic's new AI model

Top financial regulators are sounding the alarm over a powerful new AI that could be weaponized by hackers.


Zwely News Staff

Shared Newsroom

April 10, 2026, 9:16 AM

At a glance

What matters most

  • Anthropic quietly rolled out its new AI model, Mythos, to select companies, sparking fears it could be misused by cybercriminals.
  • Powell and Bessent convened emergency discussions with top bank executives to assess potential risks to financial infrastructure.
  • Regulators are pushing for tighter oversight of AI deployment in critical sectors, especially finance.
  • The Mythos model is more capable than previous versions at generating realistic code and social engineering attacks.

Across the spectrum

What people are saying

A quick look at how the same story is being framed from different angles.

On the Left

This situation shows why we need strong, proactive regulation of AI before it causes real harm. The fact that top officials are scrambling to respond means oversight has already fallen behind. We should be setting clear rules for AI development and access, especially when public infrastructure is at risk.

In the Center

Regulators are right to be cautious. Advanced AI brings real benefits but also new vulnerabilities, particularly in finance. The challenge is to protect critical systems without stifling innovation or overreacting to hypothetical threats.

On the Right

While cybersecurity is important, we shouldn't let fear of worst-case scenarios slow down American tech leadership. Anthropic is building world-class AI that can help businesses; overregulation could push innovation overseas and weaken our competitive edge.

Full coverage

What you should know

Jerome Powell and Scott Bessent held closed-door discussions this week with chief executives from some of the nation's largest banks over a growing concern: the potential misuse of Anthropic's newly released AI model, Mythos. The talks, confirmed by multiple sources, reflect mounting unease among regulators about how quickly advanced artificial intelligence is outpacing current safeguards in the financial sector.

Anthropic, the AI company behind the Claude series, introduced Mythos to a limited group of enterprise clients earlier this week. While the model promises breakthroughs in automation and problem-solving, internal testing and early external analysis suggest it can generate highly convincing phishing content, exploit code, and even mimic human behavior in digital environments with alarming accuracy. That capability has raised red flags about its potential use in coordinated cyberattacks on financial institutions.

According to people familiar with the conversations, Powell and Bessent emphasized that the financial system's stability could be at risk if threat actors gain access to tools like Mythos. The discussions focused on how banks are managing AI exposure, whether through third-party vendors or internal development, and what contingency plans exist for AI-driven cyber incidents.

One major bank executive, who spoke on condition of anonymity, said the tone of the meeting was not alarmist but urgent. "This isn't science fiction anymore," they said. "We're talking about real tools that can automate attacks at scale. The Fed and Treasury want to make sure we're not caught flat-footed."

Anthropic has stated it implemented strict access controls and monitoring for Mythos, including usage audits and rate limits. But experts note that once AI models are deployed, even in restricted settings, there's always a risk of leakage, reverse engineering, or misuse by authorized users. Some cybersecurity firms have already detected early attempts to mimic Mythos-like behavior on underground forums.

The response from regulators marks a shift from general AI oversight to targeted, sector-specific risk management. Financial institutions are now being asked to report on their AI governance frameworks, with potential new guidelines expected within months. The goal is to balance innovation with resilience, especially as AI becomes embedded in trading, customer service, and fraud detection systems.

For now, no breaches tied to Mythos have been reported. But the fact that Powell and Bessent stepped in so quickly underscores how seriously they're taking the threat. As one official put it: "We're not waiting for the fire. We're checking the smoke detectors."

About this author

Zwely News Staff compiles multi-source reporting into concise, viewpoint-aware coverage for readers who want context without noise.

Source Notes

Center CNBC Apr 10, 12:56 PM

Powell, Bessent discussed Anthropic's Mythos AI cyber threat with major U.S. banks

Anthropic rolled out the new Mythos AI model to a select group of companies over concerns that hackers could exploit its capabilities.

Center Bloomberg Markets Apr 10, 11:29 AM

Anthropic's New AI Model Sparks Urgent Bessent, Powell Warning to CEOs | The Pulse 4/10

"The Pulse With Francine Lacqua" is all about conversations with high profile guests in the beating heart of global business, economics, finance and politics. Based in London, we go wherever the story is, bringing you exclusive interviews a...

Right The Dispatch Apr 10, 6:45 AM

The Coming AI Superstorm

What Anthropic’s Mythos model means for our security.
