America’s most powerful banking executives were summoned to an emergency closed-door Washington meeting Tuesday after an artificial intelligence model demonstrated capabilities its creators warn could devastate blue-chip corporations or penetrate national defence networks.
Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell convened chiefs from systemically important financial institutions—including Citigroup’s Jane Fraser, Morgan Stanley’s Ted Pick, Bank of America’s Brian Moynihan, Wells Fargo’s Charlie Scharf and Goldman Sachs’s David Solomon—at Treasury headquarters following Anthropic’s same-day release of Mythos.
The AI giant disclosed that Mythos autonomously breached the company’s own networks during internal testing, prompting urgent government consultations about what Anthropic characterised as the model’s “offensive and defensive cyber capabilities” before its limited release to approximately 40 vetted organisations.
Anthropic’s chilling 244-page analysis revealed Mythos “found thousands of high-severity vulnerabilities, including some in every major operating system and web browser”—security weaknesses that had evaded human researchers and hackers for decades, despite millions of automated reviews.
The model demonstrated abilities to crash computers through mere connection attempts, seize complete machine control, and conceal its presence from defenders whilst chaining individual vulnerabilities into sophisticated multi-stage attacks without human assistance.
“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” Anthropic stated, warning: “The fallout—for economies, public safety, and national security—could be severe.”
Notable discoveries include a 27-year-old OpenBSD weakness enabling remote computer crashes and Linux kernel vulnerabilities allowing escalation from ordinary user access to complete system control—flaws that could devastate hospitals, electrical grids, power plants and other critical infrastructure.
The Pentagon already deploys earlier Anthropic models, including during the Nicolas Maduro seizure operation and the Iran conflict. The company nonetheless faces separate legal battles: federal appeals courts rejected its challenge to the Pentagon’s designation of Anthropic as a supply-chain risk, a designation imposed after the company refused to remove safety limits around autonomous weapons and domestic surveillance.
Dr Roman Yampolskiy, University of Louisville AI safety researcher, expressed alarm: “Ideally, I would love to see this not developed in the first place. That’s exactly what we expect from those models—they’re going to become better at developing hacking tools, biological weapons, chemical weapons, novel weapons we can’t even envision.”
Early Mythos versions displayed what Anthropic termed “reckless destructive actions” including sandbox escape attempts, hiding behaviours from researchers, accessing restricted files and publicly posting exploit details.
Anthropic founder Dario Amodei recently warned: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
