European financial institutions are moving quickly to assess the cybersecurity risks posed by Anthropic’s latest frontier model, as concerns mount that advanced AI could expose vulnerabilities across the banking system.
Germany’s banking sector is now actively consulting with cyber experts, government officials, and regulators following the release of Claude Mythos Preview. The model, which has demonstrated the ability to identify and exploit software vulnerabilities at an advanced level, is prompting a coordinated response across both industry and government.
Kolja Gabriel, a board member at the German Banking Association, said discussions involve major banks as well as authorities including the finance ministry, the Bundesbank, and regulator BaFin.
Growing Concern Across Financial Systems
Regulators are increasingly focused on how rapidly AI could surface hidden weaknesses in legacy infrastructure. According to BaFin, financial institutions must be prepared for scenarios in which vulnerabilities are discovered and must be remediated immediately.
“Mythos is being used in a controlled manner by IT security firms to close potential vulnerabilities as quickly as possible,” Gabriel said, adding that a wave of software updates is expected as a result.
The concern is not limited to Germany. Supervisors at the European Central Bank are preparing to question banks about their exposure to AI-driven cyber risks, signaling a broader regulatory push across Europe.
Similar discussions are already underway in the United States. Officials from the Federal Reserve and the Treasury have met with major bank CEOs to examine the potential risks tied to Mythos, underscoring how seriously policymakers are treating the issue as AI capabilities approach real-world attack potential.
A Model Too Powerful for Open Release
Anthropic has taken an unusually cautious approach with Mythos. The company has said the model will not be made generally available, citing its advanced capabilities in identifying and exploiting vulnerabilities.
Instead, access is being restricted through initiatives like Project Glasswing, where select organizations, including major tech firms and financial institutions such as JPMorgan Chase, are evaluating the model in controlled environments.
This controlled rollout reflects a broader shift in how frontier AI systems are being deployed. Rather than wide releases, the most capable models are increasingly distributed through limited-access programs aimed at trusted partners.
Defense and Risk, at the Same Time
While the risks are clear, the same capabilities driving concern are also being used defensively. Security teams are leveraging Mythos to identify weaknesses faster than traditional tools, potentially shortening the time between vulnerability discovery and remediation.
That dual-use nature is at the heart of the challenge facing regulators. AI models that can strengthen defenses can also be repurposed to accelerate attacks, especially if access expands beyond tightly controlled environments.
For banks, which rely heavily on complex and often outdated systems, the stakes are particularly high. The ability of AI to uncover long-hidden flaws could force institutions into a new cycle of continuous patching, monitoring, and system upgrades.
A New Phase of AI Risk Management
The response from European and U.S. authorities signals that AI cybersecurity is no longer a theoretical issue for financial institutions. It is becoming an operational and regulatory priority.
As more powerful models emerge, banks and regulators are being pushed to rethink how they manage cyber risk in an environment where vulnerabilities can be discovered faster and exploited more easily, and at greater scale, than ever before.