U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called an emergency meeting with top bank CEOs to discuss an emerging cybersecurity concern. The focus of the discussion was Claude Mythos, an AI model developed by Anthropic, which is raising alarms for its potential impact on the financial sector. Key figures from JPMorgan Chase, Citigroup, Bank of America, and Goldman Sachs were briefed on the possible risks posed by this new AI technology.
Claude Mythos: A Changing Landscape for Cybersecurity
Claude Mythos is an advanced AI designed to autonomously scan and analyze millions of lines of code to uncover vulnerabilities that could be exploited by malicious actors. Unlike traditional generative AI, Mythos can reportedly identify "zero-day" flaws: vulnerabilities not yet known to the software's own developers. This capability could put sophisticated cyberattacks within reach of a wider range of threat actors, straining established cybersecurity defenses.
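The scan-and-flag workflow described above can be illustrated with a deliberately simplified sketch. A model like Claude Mythos would reason about code semantics rather than match patterns, but the basic shape of an automated audit (walk the source, flag risky constructs, report line-level findings) looks roughly like this. All rule names and the sample code are hypothetical:

```python
import re

# Toy pattern-based scanner, for illustration only. Real AI-driven analysis
# goes far beyond regexes, but the output shape (line-level findings) is similar.
RISK_PATTERNS = {
    "hardcoded_secret": re.compile(r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "sql_concat": re.compile(r"execute\([^)]*\+\s*\w+", re.I),
}

def scan_source(source: str) -> list[dict]:
    """Return one finding per line that matches a risk pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                findings.append({"line": lineno, "rule": rule, "text": line.strip()})
    return findings

# Hypothetical vulnerable snippet to scan.
sample = 'password = "hunter2"\ncursor.execute("SELECT * FROM t WHERE id=" + uid)\n'
for f in scan_source(sample):
    print(f["line"], f["rule"])
```

The gap between this sketch and an AI auditor is the point of the concern: pattern scanners only find known bug classes, whereas the article's worry is a system that surfaces previously unknown flaws.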
Although regulators have expressed concern over the rapid pace at which this AI could be used to identify vulnerabilities in critical banking infrastructure, it is not yet clear how widespread or immediate the threat may be. The Treasury Department’s emergency meeting highlighted the need for banks to remain vigilant in adapting to new technologies that could affect their security protocols.
Project Glasswing: A Collaborative Approach to Mitigating Risks
To address these potential threats, U.S. regulators have proposed Project Glasswing, a joint initiative between Anthropic and select technology companies. The aim of this initiative is to assist financial institutions in using AI to strengthen their internal cybersecurity measures. Through Project Glasswing, a group of banks will have early access to Claude Mythos for defensive purposes—not for commercial use but to help build a “digital immune system” for financial organizations.
This program is designed to help banks use AI proactively to identify and address vulnerabilities within their own systems. By conducting "autonomous self-audits," institutions can potentially patch long-standing flaws before they are exploited. Given the growing speed and complexity of cyber threats, regulators are urging banks to act quickly to stay ahead of emerging risks.
Autonomous Escalation: Addressing Market Stability Risks
In addition to the direct cybersecurity threats posed by AI systems like Claude Mythos, regulators are concerned about the broader implications of autonomous AI in financial markets. Agentic AI (systems capable of performing complex tasks without human oversight) could introduce new risks to market stability. AI-driven systems such as automated trading algorithms and loan processing tools could be vulnerable to manipulation or malfunction, leading to disruptions in financial markets.
Identity theft and fraud are also emerging concerns, as advanced AI models can replicate human behavior patterns, making it more difficult to secure financial transactions through traditional methods like multi-factor authentication (MFA). As AI technology continues to evolve, regulators are working with financial institutions to develop safeguards against potential AI-driven fraud and other security vulnerabilities.
New Regulations: Emphasizing Resilience Over Prescriptiveness
In response to these challenges, regulators are adjusting their approach to financial technology. Historically, efforts focused on strict enforcement, but in 2026, the emphasis is shifting toward operational resilience rather than rigid prescriptive rules. The Office of the Comptroller of the Currency (OCC) and the Federal Deposit Insurance Corporation (FDIC) are encouraging banks to adopt a more flexible, “technology-neutral” framework that holds financial institutions accountable for managing AI-related risks.
Comptroller Jonathan Gould has suggested that 2026 will be a critical year for enforcing cybersecurity standards. Specifically, regulations are expected to focus on requiring banks to adopt more secure authentication methods, such as phishing-resistant MFA based on FIDO2 standards. This approach aims to ensure that while AI is integrated into banking systems, the necessary controls are in place to mitigate any potential cybersecurity risks.
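What makes FIDO2-style MFA phishing-resistant is that the authenticator signs client data which embeds the web origin, so a credential presented to a look-alike site fails verification on the bank's server. The following is a minimal sketch of that relying-party check, assuming hypothetical origins and challenge values; a real WebAuthn deployment also verifies the authenticator's cryptographic signature, which is omitted here:

```python
import json

# Assumed relying-party origin; any mismatch (e.g. a phishing domain) is rejected.
EXPECTED_ORIGIN = "https://bank.example"

def verify_client_data(client_data_json: str, expected_challenge: str) -> bool:
    """Check the origin and challenge bound into the signed client data."""
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("origin") == EXPECTED_ORIGIN
        and data.get("challenge") == expected_challenge
    )

legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://bank.example",
                    "challenge": "abc123"})
phish = json.dumps({"type": "webauthn.get",
                    "origin": "https://bank-example.attacker.net",
                    "challenge": "abc123"})
print(verify_client_data(legit, "abc123"))   # True
print(verify_client_data(phish, "abc123"))   # False
```

Because the origin check happens server-side against data the browser and authenticator bind into the signature, a phishing page cannot replay the credential, which is why regulators favor this over SMS or one-time-code MFA.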
AI and Quantum Computing: Preparing for Future Challenges
Another significant issue raised during the emergency meeting was the potential intersection of AI and quantum computing. Together, these technologies pose a dual threat to financial systems. While AI models like Claude Mythos can already be used to expose vulnerabilities, quantum computing has the potential to break current encryption systems that protect sensitive data.
To address these future risks, the Federal Reserve has advised banks to begin transitioning to quantum-safe infrastructure. Although quantum computing is still in its early stages, its eventual impact could render current encryption methods obsolete, making it even more critical for banks to strengthen their cybersecurity frameworks to remain secure.
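A common first step in the quantum-safe transition the Federal Reserve is advising is a cryptographic inventory: cataloging which systems still depend on algorithms that a large quantum computer running Shor's algorithm could break. The sketch below is illustrative only; the system names and inventory are hypothetical, though the algorithm classifications (RSA/ECDSA/DH vulnerable, ML-KEM/ML-DSA and AES-256 considered quantum-safe) reflect current guidance:

```python
# Quantum-vulnerable public-key algorithms vs. post-quantum / symmetric choices.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}
QUANTUM_SAFE = {"ML-KEM-768", "ML-DSA-65", "AES-256-GCM"}

def audit(inventory: dict[str, str]) -> dict[str, list[str]]:
    """Partition systems by whether their primary algorithm needs migration."""
    report = {"migrate": [], "ok": [], "unknown": []}
    for system, algo in inventory.items():
        if algo in QUANTUM_VULNERABLE:
            report["migrate"].append(system)
        elif algo in QUANTUM_SAFE:
            report["ok"].append(system)
        else:
            report["unknown"].append(system)
    return report

# Hypothetical bank inventory.
systems = {"wire-transfer-api": "RSA-2048",
           "archive-encryption": "AES-256-GCM",
           "legacy-batch": "3DES"}
print(audit(systems))
```

An inventory like this is what lets an institution prioritize: systems in the "migrate" bucket need post-quantum replacements first, while "unknown" entries signal legacy cryptography that needs investigation.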