Anthropic Introduces Identity Verification for Claude Users

Anthropic is rolling out identity verification for Claude users to strengthen safety and compliance. The move introduces ID checks for certain features and use cases.

By Daniel Mercer, Edited by Maria Konash
Anthropic adds ID verification to Claude via Persona to curb abuse and meet compliance. Image: Farhat Altaf / Unsplash

Anthropic has begun rolling out identity verification requirements for users of its Claude platform, signaling a stronger push toward safety, compliance, and misuse prevention as AI systems become more powerful. The new process will apply selectively, with users prompted to verify their identity when accessing certain features or during routine integrity checks.

The verification system is powered by Persona, a third-party provider specializing in digital identity checks. Users are required to submit a government-issued photo ID and, in some cases, complete a live selfie capture using a phone or webcam. The process typically takes a few minutes and is designed to confirm identity without collecting unnecessary data.

Anthropic says the verification rollout is tied to broader efforts to enforce its usage policies and comply with legal obligations, particularly as advanced AI capabilities raise concerns about misuse. The company emphasized that verification data is used solely for identity confirmation and not for training AI models or other secondary purposes.

Accepted identification includes passports, driver’s licenses, and national ID cards, provided they are physical, valid, and clearly legible. The system explicitly rejects digital IDs, photocopies, or non-government credentials such as student cards or employee badges. Failed verification attempts can result from poor image quality, expired documents, or technical issues, though users are allowed multiple retries.

From a data handling perspective, Anthropic positions itself as the data controller, while Persona processes the information on its behalf. Importantly, identity documents and selfies are stored on Persona’s systems rather than Anthropic’s infrastructure. The company says all data is encrypted in transit and at rest, and Persona is contractually restricted from using the data beyond verification and fraud prevention purposes.

Anthropic also clarified that identity data will not be shared with third parties for marketing or advertising. Access is limited to verification and compliance workflows, with exceptions only in cases where legal obligations require disclosure.

Accounts may still face suspension or bans after verification if they violate platform rules, including repeated misuse, operating from unsupported regions, or breaching terms of service. Users who believe enforcement actions are incorrect can submit appeals for review.

Why This Matters

Identity verification marks a shift toward stricter governance in AI platforms. As models gain more advanced capabilities, companies face increasing pressure to prevent harmful use cases, particularly in areas like cybersecurity, fraud, and misinformation.

For businesses and developers, this introduces an additional compliance step that may affect onboarding and user experience. However, it could also improve trust in AI systems by reducing anonymous misuse and enforcing accountability.

For users, the tradeoff is clear: access to more powerful features may require sharing sensitive identity information, even if safeguards are in place.

Context

Anthropic’s move aligns with a broader industry trend toward tighter controls on AI access. Competitors like OpenAI have also explored verification, tiered access, and usage restrictions for advanced AI tools.

The rollout comes alongside Anthropic’s increasing focus on safety frameworks, including recent efforts to limit high-risk capabilities and introduce safeguards in newer models. As regulators worldwide examine AI risks more closely, identity verification may become a standard requirement across leading platforms.


Claude Opus 4.7 Launches With Stronger Coding, Vision Capabilities

Anthropic has released Claude Opus 4.7 with improved coding, vision, and reliability features. The update also introduces new safety controls for cybersecurity use cases.

By Daniel Mercer, Edited by Maria Konash
Anthropic unveils Claude Opus 4.7 with stronger coding, vision, and safety for enterprise AI. Image: Anthropic

Anthropic has announced the general availability of its latest AI model, Claude Opus 4.7, positioning it as a direct upgrade over Opus 4.6 with significant gains in advanced software engineering and multimodal capabilities. The release comes as the company continues to iterate toward more powerful systems, while cautiously testing safety mechanisms ahead of broader deployment of its more advanced Claude Mythos Preview.

Anthropic says Opus 4.7 performs better on complex, long-running coding tasks, allowing users to delegate work that previously required close oversight. The model shows improved instruction-following, with a more literal interpretation of prompts, which may require developers to adjust existing workflows. It also introduces stronger self-verification behavior, meaning it attempts to validate its outputs before returning results.

A key upgrade is in multimodal performance. Opus 4.7 can process images up to 2,576 pixels on the long edge, more than triple the resolution of earlier Claude models. This enables use cases such as analyzing dense screenshots, extracting data from diagrams, and supporting pixel-precise design workflows. Internally, Anthropic reports improved performance in domains such as finance, legal reasoning, and document analysis, including stronger results on third-party benchmarks measuring economically valuable knowledge work.
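For developers preparing images for upload, the practical consequence of a long-edge limit is a client-side downscaling step. The sketch below shows one way to compute fitted dimensions; the 2,576 px figure is simply the number quoted in this article, used here as an illustrative default rather than an official API constant.

```python
def fit_long_edge(width: int, height: int, limit: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most `limit`,
    preserving aspect ratio. The default limit is the figure quoted in
    the article, not a verified API constant."""
    long_edge = max(width, height)
    if long_edge <= limit:
        return width, height  # already within bounds, no resize needed
    scale = limit / long_edge
    return round(width * scale), round(height * scale)

# A 5152x2000 screenshot would be halved to fit the quoted limit:
print(fit_long_edge(5152, 2000))  # (2576, 1000)
```

Rounding the scaled dimensions independently keeps both sides integral while holding the aspect ratio to within a pixel.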

The model is now available across Anthropic’s ecosystem, including its API and integrations with platforms such as Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. Pricing remains unchanged from Opus 4.6 at $5 per million input tokens and $25 per million output tokens.
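At the quoted rates, estimating a workload's cost is a one-line calculation. The helper below uses the per-million-token prices reported in this article; actual billing terms may differ by platform and should be checked against the provider's pricing page.

```python
def opus_47_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost in USD, using the rates quoted in the article
    ($5 per million input tokens, $25 per million output tokens)."""
    INPUT_RATE = 5.00 / 1_000_000   # USD per input token
    OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A long coding session with 400k input and 80k output tokens:
print(f"${opus_47_cost(400_000, 80_000):.2f}")  # about $4.00
```

Because output tokens cost five times as much as input tokens here, workloads that generate long responses (such as autonomous coding runs) dominate the bill even when prompts are large.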

Anthropic is also introducing new controls for developers, including an “xhigh” effort level that balances reasoning depth and latency, as well as task budgeting tools to manage token usage in longer workflows. Additional features in its coding environment include an automated code review tool and expanded autonomous execution modes.

On safety, Opus 4.7 is the first model released under Anthropic’s new cybersecurity framework introduced with Project Glasswing. The system includes safeguards that detect and block high-risk cyber-related queries. For vetted professionals, the company has launched a Cyber Verification Program to allow legitimate security research and testing.

Why This Matters

The release reflects a broader shift in enterprise AI toward reliability and autonomy. Improvements in coding and long-task execution make models like Opus 4.7 more viable for real-world development workflows, reducing the need for constant human supervision.

Enhanced vision capabilities also expand AI’s role in design, analytics, and operations, where interpreting complex visuals is critical. At the same time, the introduction of cybersecurity safeguards highlights growing concerns about misuse as models become more capable.

For businesses, the combination of higher performance and unchanged pricing could accelerate adoption, particularly in software development, finance, and knowledge work automation.

Context

Anthropic has been steadily iterating on its Claude model family, competing with offerings from companies like OpenAI and Google. The company’s strategy emphasizes safety alongside capability, often limiting access to its most advanced systems while testing controls on intermediate models.

The mention of Claude Mythos Preview suggests Anthropic is preparing for a next generation of more powerful AI systems, but is proceeding cautiously due to potential risks, particularly in cybersecurity.

The addition of finer-grained control over compute effort and token usage also reflects an industry-wide trend toward giving developers more control over cost-performance tradeoffs, as AI systems are increasingly deployed in production environments.

Anthropic to Roll Out Mythos Access to U.K. Banks as Cybersecurity Tensions Escalate

Anthropic is giving U.K. banks controlled access to its Mythos model, marking a major step in the global rollout of AI-powered cybersecurity tools.

By Samantha Reed, Edited by Maria Konash
Anthropic expands Mythos to U.K. banks, underscoring rising AI-driven cybersecurity risks. Image: Chris Lawton / Unsplash

Just after regulators across Europe and the United States raised alarms, Anthropic is moving ahead with plans to give U.K. financial institutions controlled access to its most advanced cybersecurity model, Mythos.

The rollout, expected within days, marks a new phase of Project Glasswing, the company’s tightly controlled initiative designed to equip defenders with frontier AI tools before such capabilities become widely available.

A Rapid Escalation in AI Security

Mythos has drawn intense attention for its ability to identify and chain together zero-day vulnerabilities across major operating systems and web browsers.

That capability has triggered urgent responses from regulators. In the U.S., officials including Treasury leadership and the Federal Reserve have already held high-level meetings with major banks. In Europe, authorities such as the Bank of England, Financial Conduct Authority, and national cybersecurity agencies have been actively assessing the risks.

The concern is straightforward. A tool that can systematically uncover hidden weaknesses in critical infrastructure could dramatically improve defenses, but it could also accelerate attacks if misused.

Controlled Access, Not Public Release

Anthropic has made clear that Mythos will not be broadly released. Instead, access is being tightly restricted to vetted organizations through Project Glasswing.

Participants already include major players such as JPMorgan Chase, Amazon Web Services, Google, and Microsoft.

The addition of U.K. banks expands that circle, bringing one of the world’s most interconnected financial systems into direct engagement with frontier AI security tools.

A Double-Edged Tool for Banks

For financial institutions, Mythos presents both an opportunity and a threat.

Used defensively, it could function as a powerful red-team system, uncovering vulnerabilities before attackers exploit them. That could significantly shorten response times and strengthen resilience across complex banking infrastructure.

At the same time, the very capabilities that make Mythos valuable also increase the stakes. If similar tools become accessible to malicious actors, the scale and speed of cyberattacks could rise sharply.

Industry Braces for an AI-Native Era

Anthropic CEO Dario Amodei has described the current moment as a transitional period where risks are real but manageable with the right safeguards. The company has emphasized that the threat is no longer theoretical.

According to Anthropic’s regional leadership, engagement from U.K. bank executives has intensified in recent days, reflecting growing awareness that AI-driven cybersecurity is no longer a future concern.

The Bigger Shift

The rollout of Mythos into the banking sector signals a broader turning point.

AI is no longer just a productivity tool or research breakthrough. It is becoming a core component of national and financial security infrastructure.

For banks in the U.K. and beyond, the message is increasingly clear. The era of AI-native cybersecurity has begun, and adaptation is no longer optional.


EU Pushes Google to Share Search Data With Rivals Under New Rules

The European Commission is proposing that Google share search data with rivals, including AI chatbots, as part of enforcement under the Digital Markets Act.

By Samantha Reed, Edited by Maria Konash
EU may force Google to share search data under DMA, raising privacy concerns. Image: Guillaume Périgois / Unsplash

The European Commission is stepping up pressure on Google, proposing new rules that would require the company to share portions of its search data with competitors, including AI-powered services.

The move is part of ongoing enforcement of the Digital Markets Act, which aims to curb the dominance of large technology platforms and open up competition across digital markets.

Opening Google’s Data to Rivals

Under the proposal, Google would be required to give third-party search engines and AI tools access to certain search data. This includes defining how frequently data must be shared, how it is anonymized, and how access is priced.

The goal is to allow smaller competitors to improve their own search services and better compete with Google Search, which remains the dominant gateway to information online.

Importantly, the rules could also apply to AI chatbots with search capabilities, a fast-growing category as companies race to integrate real-time information into conversational interfaces.

Privacy Concerns Take Center Stage

Google has strongly pushed back against the proposal, arguing that sharing search data could put user privacy at risk.

The company says Europeans rely on its platform for sensitive queries related to health, finances, and personal matters, and that handing data to third parties could weaken protections despite anonymization requirements.

This tension highlights a core challenge for regulators. Increasing competition often requires data sharing, but search data is among the most sensitive categories of user information.

A High-Stakes Regulatory Battle

The European Commission is now consulting stakeholders, with feedback open until May 1 and a final decision expected in July.

The proposal follows earlier charges against Google for alleged violations of the Digital Markets Act. While the company has offered concessions, rivals have argued those measures do not go far enough.

Financial stakes are significant. Under the DMA, companies can face fines of up to 10 percent of global annual revenue for non-compliance. Google has already accumulated nearly €10 billion in antitrust fines in Europe since 2017.

AI Changes the Equation

The inclusion of AI-powered search tools adds a new dimension to the dispute.

As AI chatbots increasingly act as intermediaries for information retrieval, access to high-quality search data becomes a competitive advantage. Regulators appear to be anticipating this shift, aiming to ensure that emerging AI players are not locked out by incumbents controlling critical data.

The outcome of the case could shape not just traditional search competition, but also how data is shared in the next generation of AI-driven services.


Anthropic Expands London Hub as AI Talent Race Heats Up

Anthropic is expanding its London presence with space for 800 staff, intensifying competition with OpenAI and other AI firms for talent in the U.K.

By Samantha Reed, Edited by Maria Konash
Anthropic expands London office to 800 staff, intensifying AI talent competition in the U.K. Image: Marcin Nowak / Unsplash

Anthropic is expanding its footprint in London, securing new office space that can accommodate up to 800 employees as competition for AI talent intensifies across Europe.

The company currently has more than 200 staff in the city, but the new expansion signals a long-term commitment to building one of its key hubs outside the United States.

A Growing AI Hub in London

Anthropic’s new office will be located in London’s Knowledge Quarter, a fast-growing cluster of AI research, startups, and academic institutions. The area already hosts major players including OpenAI, Google DeepMind, and Meta, along with companies like Synthesia and Wayve.

The move comes just days after OpenAI announced plans for its first permanent office in London, underscoring how central the U.K. has become in the global AI race.

Anthropic’s Pip White said the expansion reflects both the strength of local talent and the U.K.’s focus on AI safety and regulation, two areas increasingly shaping where companies choose to invest.

Momentum Behind the Business

The expansion follows a period of rapid growth for Anthropic. The company’s annualized revenue has surpassed $30 billion, with more than 1,000 enterprise customers each spending over $1 million annually.

Its valuation has surged as well, reaching $380 billion earlier this year, with reports suggesting investor interest at even higher levels.

Recent product launches have helped fuel that momentum, including its coding-focused tools and advanced models designed for tasks like software analysis and security.

Strategy and Geopolitics

The London expansion also comes amid shifting geopolitical dynamics. U.K. officials have reportedly been courting Anthropic more aggressively following its high-profile dispute with the U.S. Department of Defense over how its AI models could be used.

Establishing a larger presence in London gives Anthropic access not only to talent, but also to a regulatory environment that is actively shaping its approach to AI governance and safety.

Talent Is the New Battleground

As AI development becomes more compute-intensive and capital-heavy, talent remains one of the few truly scarce resources.

Cities like London are emerging as key battlegrounds, where top researchers, engineers, and product teams are increasingly concentrated. Companies are responding by expanding physical hubs, even though much of the work can technically be done remotely.

Anthropic’s move makes clear that the race is not just about models and infrastructure, but also about where the people building them choose to work.


ASML and TSMC Forecast Strong AI Chip Demand as Spending Surge Continues

Strong forecasts from ASML and TSMC signal continued massive AI spending by tech giants, despite rising concerns over sustainability and supply constraints.

By Olivia Grant, Edited by Maria Konash
ASML and TSMC boost forecasts, signaling sustained AI chip demand despite supply constraints. Image: Louis Reed / Unsplash

Strong outlooks from ASML and TSMC are reinforcing a clear message across the industry. The AI infrastructure boom is far from slowing down.

Both companies raised their forecasts this week, pointing to sustained demand for advanced chips used in training and running large AI models. The signals suggest that major cloud providers are continuing to invest heavily in compute, even as questions grow about returns on those investments.

AI Spending Machine Keeps Running

Executives say demand is still being driven by hyperscale customers. Companies like Microsoft, Amazon, and Meta are expected to collectively spend more than $600 billion this year on data centers and AI infrastructure.

TSMC CEO C.C. Wei pointed to strong signals across the supply chain, noting that demand is not only coming from direct customers but also from their downstream clients. In practice, that means cloud providers racing to secure chips before competitors do.

This demand flows directly to chip designers such as Nvidia, AMD, and Broadcom, all of which rely heavily on TSMC’s manufacturing capacity.

The Bottleneck Problem

Despite strong demand, the industry faces a fundamental constraint. There are only a few companies capable of producing cutting-edge chips at scale.

ASML, which supplies the lithography machines required to manufacture advanced semiconductors, expects demand to exceed supply for the foreseeable future. That constraint is now affecting not just AI, but also smartphones and PCs.

TSMC echoed similar concerns, noting that capacity remains tight even as the company ramps up capital spending to expand production.

This dynamic has pushed companies toward long-term agreements to lock in manufacturing capacity, sometimes years in advance. Securing supply has become as critical as designing the chips themselves.

Shift Toward Advanced AI Chips

Another notable shift is where demand is concentrating. Increasingly, spending is moving toward high-performance processors used for inference, the stage where trained AI models generate real-world outputs.

This reflects the next phase of AI adoption. After an initial wave focused on training large models, companies are now scaling deployment across products and services, which requires massive inference capacity.

Boom With Questions Attached

Even as forecasts remain strong, investor pressure is building. Markets are increasingly focused on whether massive AI spending will translate into meaningful returns.

Some analysts warn that the current pace of investment could eventually face limits, especially if monetization lags behind infrastructure build-out.

For now, however, the signals from ASML and TSMC suggest the opposite. Demand is still accelerating, supply is constrained, and the race for AI compute is intensifying across the entire technology stack.
