Anthropic to Roll Out Mythos Access to U.K. Banks as Cybersecurity Tensions Escalate

Anthropic is giving U.K. banks controlled access to its Mythos model, marking a major step in the global rollout of AI-powered cybersecurity tools.

By Samantha Reed Edited by Maria Konash Published:
Anthropic expands Mythos to U.K. banks, underscoring rising AI-driven cybersecurity risks. Image: Chris Lawton / Unsplash

Just after regulators across Europe and the United States raised alarms, Anthropic is moving ahead with plans to give U.K. financial institutions controlled access to its most advanced cybersecurity model, Mythos.

The rollout, expected within days, marks a new phase of Project Glasswing, the company’s tightly controlled initiative designed to equip defenders with frontier AI tools before such capabilities become widely available.

A Rapid Escalation in AI Security

Mythos has drawn intense attention for its ability to identify and chain together zero-day vulnerabilities across major operating systems and web browsers.

That capability has triggered urgent responses from regulators. In the U.S., officials including Treasury leadership and the Federal Reserve have already held high-level meetings with major banks. In Europe, authorities such as the Bank of England, Financial Conduct Authority, and national cybersecurity agencies have been actively assessing the risks.

The concern is straightforward. A tool that can systematically uncover hidden weaknesses in critical infrastructure could dramatically improve defenses, but it could also accelerate attacks if misused.

Controlled Access, Not Public Release

Anthropic has made clear that Mythos will not be broadly released. Instead, access is being tightly restricted to vetted organizations through Project Glasswing.

Participants already include major players such as JPMorgan Chase, Amazon Web Services, Google, and Microsoft.

The addition of U.K. banks expands that circle, bringing one of the world’s most interconnected financial systems into direct engagement with frontier AI security tools.

A Double-Edged Tool for Banks

For financial institutions, Mythos presents both an opportunity and a threat.

Used defensively, it could function as a powerful red-team system, uncovering vulnerabilities before attackers exploit them. That could significantly shorten response times and strengthen resilience across complex banking infrastructure.

At the same time, the very capabilities that make Mythos valuable also increase the stakes. If similar tools become accessible to malicious actors, the scale and speed of cyberattacks could rise sharply.

Industry Braces for an AI-Native Era

Anthropic CEO Dario Amodei has described the current moment as a transitional period where risks are real but manageable with the right safeguards. The company has emphasized that the threat is no longer theoretical.

According to Anthropic’s regional leadership, engagement from U.K. bank executives has intensified in recent days, reflecting growing awareness that AI-driven cybersecurity is no longer a future concern.

The Bigger Shift

The rollout of Mythos into the banking sector signals a broader turning point.

AI is no longer just a productivity tool or research breakthrough. It is becoming a core component of national and financial security infrastructure.

For banks in the U.K. and beyond, the message is increasingly clear. The era of AI-native cybersecurity has begun, and adaptation is no longer optional.


EU Pushes Google to Share Search Data With Rivals Under New Rules

The European Commission is proposing that Google share search data with rivals, including AI chatbots, as part of enforcement under the Digital Markets Act.

By Samantha Reed Edited by Maria Konash Published:
EU may force Google to share search data under DMA, raising privacy concerns. Image: Guillaume Périgois / Unsplash

The European Commission is stepping up pressure on Google, proposing new rules that would require the company to share portions of its search data with competitors, including AI-powered services.

The move is part of ongoing enforcement of the Digital Markets Act, which aims to curb the dominance of large technology platforms and open up competition across digital markets.

Opening Google’s Data to Rivals

Under the proposal, Google would be required to give third-party search engines and AI tools access to certain search data. The rules would also define how frequently data must be shared, how it is anonymized, and how access is priced.

The goal is to allow smaller competitors to improve their own search services and better compete with Google Search, which remains the dominant gateway to information online.

Importantly, the rules could also apply to AI chatbots with search capabilities, a fast-growing category as companies race to integrate real-time information into conversational interfaces.

Privacy Concerns Take Center Stage

Google has strongly pushed back against the proposal, arguing that sharing search data could put user privacy at risk.

The company says Europeans rely on its platform for sensitive queries related to health, finances, and personal matters, and that handing data to third parties could weaken protections despite anonymization requirements.

This tension highlights a core challenge for regulators. Increasing competition often requires data sharing, but search data is among the most sensitive categories of user information.

A High-Stakes Regulatory Battle

The European Commission is now consulting stakeholders, with feedback open until May 1 and a final decision expected in July.

The proposal follows earlier charges against Google for alleged violations of the Digital Markets Act. While the company has offered concessions, rivals have argued those measures do not go far enough.

The financial stakes are significant. Under the DMA, companies can face fines of up to 10 percent of global annual revenue for non-compliance. Google has already accumulated nearly €10 billion in antitrust fines in Europe since 2017.

AI Changes the Equation

The inclusion of AI-powered search tools adds a new dimension to the dispute.

As AI chatbots increasingly act as intermediaries for information retrieval, access to high-quality search data becomes a competitive advantage. Regulators appear to be anticipating this shift, aiming to ensure that emerging AI players are not locked out by incumbents controlling critical data.

The outcome of the case could shape not just traditional search competition, but also how data is shared in the next generation of AI-driven services.


Anthropic Expands London Hub as AI Talent Race Heats Up

Anthropic is expanding its London presence with space for 800 staff, intensifying competition with OpenAI and other AI firms for talent in the U.K.

By Samantha Reed Edited by Maria Konash Published:
Anthropic expands London office to 800 staff, intensifying AI talent competition in the U.K. Image: Marcin Nowak / Unsplash

Anthropic is expanding its footprint in London, securing new office space that can accommodate up to 800 employees as competition for AI talent intensifies across Europe.

The company currently has more than 200 staff in the city, but the new expansion signals a long-term commitment to building one of its key hubs outside the United States.

A Growing AI Hub in London

Anthropic’s new office will be located in London’s Knowledge Quarter, a fast-growing cluster of AI research, startups, and academic institutions. The area already hosts major players including OpenAI, Google DeepMind, and Meta, along with companies like Synthesia and Wayve.

The move comes just days after OpenAI announced plans for its first permanent office in London, underscoring how central the U.K. has become in the global AI race.

Anthropic’s Pip White said the expansion reflects both the strength of local talent and the U.K.’s focus on AI safety and regulation, two areas increasingly shaping where companies choose to invest.

Momentum Behind the Business

The expansion follows a period of rapid growth for Anthropic. The company’s annualized revenue has surpassed $30 billion, with more than 1,000 enterprise customers each spending over $1 million annually.

Its valuation has surged as well, reaching $380 billion earlier this year, with reports suggesting investor interest at even higher levels.

Recent product launches have helped fuel that momentum, including its coding-focused tools and advanced models designed for tasks like software analysis and security.

Strategy and Geopolitics

The London expansion also comes amid shifting geopolitical dynamics. U.K. officials have reportedly been courting Anthropic more aggressively following its high-profile dispute with the U.S. Department of Defense over how its AI models could be used.

Establishing a larger presence in London gives Anthropic access not only to talent, but also to a regulatory environment that is actively shaping its approach to AI governance and safety.

Talent Is the New Battleground

As AI development becomes more compute-intensive and capital-heavy, talent remains one of the few truly scarce resources.

Cities like London are emerging as key battlegrounds, where top researchers, engineers, and product teams are increasingly concentrated. Companies are responding by expanding physical hubs, even as much of AI work can technically be done remotely.

Anthropic’s move makes clear that the race is not just about models and infrastructure, but also about where the people building them choose to work.


ASML and TSMC Forecast Strong AI Chip Demand as Spending Surge Continues

Strong forecasts from ASML and TSMC signal continued massive AI spending by tech giants, despite rising concerns over sustainability and supply constraints.

By Olivia Grant Edited by Maria Konash Published:
ASML and TSMC boost forecasts, signaling sustained AI chip demand despite supply constraints. Image: Louis Reed / Unsplash

Strong outlooks from ASML and TSMC are reinforcing a clear message across the industry: the AI infrastructure boom is far from slowing down.

Both companies raised their forecasts this week, pointing to sustained demand for advanced chips used in training and running large AI models. The signals suggest that major cloud providers are continuing to invest heavily in compute, even as questions grow about returns on those investments.

AI Spending Machine Keeps Running

Executives say demand is still being driven by hyperscale customers. Companies like Microsoft, Amazon, and Meta are expected to collectively spend more than $600 billion this year on data centers and AI infrastructure.

TSMC CEO C.C. Wei pointed to strong signals across the supply chain, noting that demand is not only coming from direct customers but also from their downstream clients. In practice, that means cloud providers racing to secure chips before competitors do.

This demand flows directly to chip designers such as Nvidia, AMD, and Broadcom, all of which rely heavily on TSMC’s manufacturing capacity.

The Bottleneck Problem

Despite strong demand, the industry faces a fundamental constraint. There are only a few companies capable of producing cutting-edge chips at scale.

ASML, which supplies the lithography machines required to manufacture advanced semiconductors, expects demand to exceed supply for the foreseeable future. That constraint is now affecting not just AI, but also smartphones and PCs.

TSMC echoed similar concerns, noting that capacity remains tight even as the company ramps up capital spending to expand production.

This dynamic has pushed companies toward long-term agreements to lock in manufacturing capacity, sometimes years in advance. Securing supply has become as critical as designing the chips themselves.

Shift Toward Advanced AI Chips

Another notable shift is where demand is concentrating. Increasingly, spending is moving toward high-performance processors used for inference, the stage where trained AI models generate real-world outputs.

This reflects the next phase of AI adoption. After an initial wave focused on training large models, companies are now scaling deployment across products and services, which requires massive inference capacity.

Boom With Questions Attached

Even as forecasts remain strong, investor pressure is building. Markets are increasingly focused on whether massive AI spending will translate into meaningful returns.

Some analysts warn that the current pace of investment could eventually face limits, especially if monetization lags behind infrastructure build-out.

For now, however, the signals from ASML and TSMC suggest the opposite. Demand is still accelerating, supply is constrained, and the race for AI compute is intensifying across the entire technology stack.


European Banks Scrutinize Anthropic’s Mythos Model Over Cybersecurity Risks

German banks and regulators are assessing risks tied to Anthropic’s Mythos model as concerns grow over AI-driven cyber threats to financial systems.

By Marcus Lee Edited by Maria Konash Published:
European banks probe risks from Anthropic’s Mythos AI, raising concerns over financial cybersecurity. Image: Maheshkumar Painam / Unsplash

European financial institutions are moving quickly to assess the cybersecurity risks posed by Anthropic’s latest frontier model, as concerns mount that advanced AI could expose vulnerabilities across the banking system.

Germany’s banking sector is now actively consulting with cyber experts, government officials, and regulators following the release of Claude Mythos Preview. The model, which has demonstrated the ability to identify and exploit software vulnerabilities at an advanced level, is prompting a coordinated response across both industry and government.

Kolja Gabriel, a board member at the German Banking Association, said discussions involve major banks as well as authorities including the finance ministry, the Bundesbank, and regulator BaFin.

Growing Concern Across Financial Systems

Regulators are increasingly focused on how rapidly AI could surface hidden weaknesses in legacy infrastructure. According to BaFin, financial institutions must be prepared for scenarios where vulnerabilities are discovered and need to be addressed immediately.

“Mythos is being used in a controlled manner by IT security firms to close potential vulnerabilities as quickly as possible,” Gabriel said, adding that a wave of software updates is expected as a result.

The concern is not limited to Germany. Supervisors at the European Central Bank are preparing to question banks about their exposure to AI-driven cyber risks, signaling a broader regulatory push across Europe.

Similar discussions are already underway in the United States. Officials from the Federal Reserve and the Treasury have met with major bank CEOs to examine the potential risks tied to Mythos, underscoring how seriously policymakers are treating the issue as AI capabilities approach real-world attack potential.

A Model Too Powerful for Open Release

Anthropic has taken an unusually cautious approach with Mythos. The company has said the model will not be made generally available, citing its advanced capabilities in identifying and exploiting vulnerabilities.

Instead, access is being restricted through initiatives like Project Glasswing, where select organizations, including major tech firms and financial institutions such as JPMorgan Chase, are evaluating the model in controlled environments.

This controlled rollout reflects a broader shift in how frontier AI systems are being deployed. Rather than wide releases, the most capable models are increasingly distributed through limited-access programs aimed at trusted partners.

Defense and Risk, at the Same Time

While the risks are clear, the same capabilities driving concern are also being used defensively. Security teams are leveraging Mythos to identify weaknesses faster than traditional tools, potentially shortening the time between vulnerability discovery and remediation.

That dual-use nature is at the heart of the challenge facing regulators. AI models that can strengthen defenses can also be repurposed to accelerate attacks, especially if access expands beyond tightly controlled environments.

For banks, which rely heavily on complex and often outdated systems, the stakes are particularly high. The ability of AI to uncover long-hidden flaws could force institutions into a new cycle of continuous patching, monitoring, and system upgrades.

A New Phase of AI Risk Management

The response from European and U.S. authorities signals that AI cybersecurity is no longer a theoretical issue for financial institutions. It is becoming an operational and regulatory priority.

As more powerful models emerge, banks and regulators are being pushed to rethink how they manage cyber risk in an environment where vulnerabilities can be discovered faster, exploited more easily, and at greater scale than ever before.

Nvidia Launches First Open-Source AI Models for Quantum Computing

Nvidia unveils Ising, a new open-source AI model family designed to tackle quantum computing’s biggest bottlenecks: calibration and error correction.

By Daniel Mercer Edited by Maria Konash Published:
Nvidia launches Ising, open-source AI models to improve quantum computing calibration and error correction. Image: Nvidia

Nvidia is pushing deeper into the future of computing with the launch of Ising, a new family of open-source AI models designed to solve two of quantum computing's hardest problems: calibration and error correction.

The models aim to bridge a critical gap. Today’s quantum systems are powerful but fragile, and scaling them into reliable, real-world machines depends on overcoming persistent noise, instability, and error rates. Nvidia is betting that AI, not just physics, will be the key unlock.

Turning AI Into Quantum Infrastructure

Named after the Ising model, the system provides tools that act almost like an operating layer for quantum machines. According to Nvidia, Ising models can deliver up to 2.5x faster performance and 3x greater accuracy in quantum error correction compared to traditional approaches.

The family includes two core components:

  • Ising Calibration: A vision language model that interprets quantum processor signals and automates calibration, reducing processes that once took days down to hours.
  • Ising Decoding: Neural network models that handle real-time error correction, a fundamental requirement for scaling quantum systems.

Together, they move AI closer to being the “control plane” for quantum hardware. CEO Jensen Huang described this as essential to making quantum computing practical.

From Fragile Qubits to Scalable Systems

Quantum computers rely on qubits, which are notoriously sensitive to environmental noise. Even small disturbances can introduce errors, making large-scale, reliable computation extremely difficult.

Ising directly targets this bottleneck by automating both calibration and error correction, processes that traditionally require intensive manual tuning and specialized expertise.

The models are also designed to integrate with Nvidia’s broader ecosystem, including CUDA-Q software and NVQLink hardware. This enables hybrid systems where classical GPUs and quantum processors work together in real time.

Open Source as a Strategic Move

Unlike many frontier AI systems, Nvidia is releasing Ising as open source. The models can run locally, allowing researchers and enterprises to maintain full control over sensitive data and customize them for specific quantum architectures.

This approach reflects a broader shift in AI infrastructure. Open models are increasingly used to accelerate adoption in specialized domains where customization and data privacy are critical.

A Bigger Bet on AI-Driven Science

Ising is part of Nvidia’s expanding portfolio of domain-specific AI models, joining systems like Nemotron for agents, BioNeMo for biotech, and Isaac GR00T for robotics.

The broader strategy is clear. Apply AI not just to software, but to foundational scientific and industrial challenges, from biology to robotics to quantum computing.

With the quantum computing market projected to exceed $11 billion by 2030, tools like Ising could play a critical role in determining whether the technology transitions from experimental promise to real-world utility.
