Anthropic Seeks Dealmaker for Europe AI Expansion with Six-Figure Role

Anthropic is actively recruiting a senior executive to source and negotiate data center deals across Europe, signaling the company’s intent to expand its AI compute infrastructure beyond the United States.

By Olivia Grant, Edited by Maria Konash
Anthropic is hiring a London-based deal lead to secure data center capacity as its infrastructure expansion accelerates. Image: Christian Lue / Unsplash

Anthropic is moving to establish a European data center footprint, posting a London-based role focused on sourcing and negotiating compute capacity deals across the region. The position, titled Transaction Principal, carries a salary between £225,000 and £270,000 and is described in the job listing as critical to securing infrastructure that powers Anthropic’s frontier AI systems in Europe. The company declined to comment on the posting or its broader European data center plans.

The hiring push follows a period of significant infrastructure commitment in the United States. Earlier this week Anthropic announced it would spend more than $100 billion on Amazon Web Services over the next decade. The company also signed an expanded agreement with Broadcom this month covering roughly 3.5 gigawatts of computing capacity. A source familiar with discussions told CNBC that Anthropic is currently evaluating data center capacity deals directly from developers across multiple regions worldwide.

The advertised role requires experience in FLAP-D markets, an industry term for Frankfurt, London, Amsterdam, Paris, and Dublin, as well as in the Nordics and Southern Europe. The candidate will be responsible for developer outreach, term sheet negotiation, and managing the full transaction process for commercial capacity deals. Anthropic is also hiring for a comparable role in Australia, suggesting the infrastructure push extends beyond Europe.

The Stakes

For Anthropic, securing owned or contracted compute in Europe carries both operational and strategic weight. European data sovereignty concerns and emerging AI regulation under the EU AI Act create pressure on frontier AI companies to demonstrate local infrastructure rather than routing all capacity through U.S.-based cloud providers.

Establishing a regional presence also positions Anthropic to compete more directly with Microsoft, Google, and OpenAI for European enterprise contracts, where data residency requirements are increasingly a procurement factor. The Nordics, flagged explicitly in the job listing, have drawn particular attention from AI companies due to comparatively low energy costs, a significant variable when running large-scale model inference and training workloads.

Competitive Landscape

Europe’s AI infrastructure build-out is accelerating across the board. Microsoft last week secured additional compute capacity at an Nscale site in Norway and has committed billions to data centers in Portugal and Spain. Nebius announced plans in March to build one of Europe’s largest AI factories in Finland. Oracle has also outlined cloud infrastructure expansion in Italy. OpenAI, meanwhile, confirmed it halted its planned UK Stargate project earlier this month, citing energy costs and the regulatory environment, illustrating that not all European bets are paying off evenly.

U.S. hyperscaler AI infrastructure spending is projected to exceed $600 billion in 2026, and Anthropic’s European push appears designed to capture a share of the capacity being built to support that demand rather than remain dependent on third-party cloud allocation.


OpenAI Introduces GPT-5.5 as Its Most Capable Model for Real Work Yet

OpenAI has launched GPT-5.5, a new flagship model designed for coding, computer use, knowledge work, and scientific research, with stronger performance, lower token usage, and broader real-world autonomy than GPT-5.4.

By Maria Konash, Edited by AIstify Team
OpenAI has introduced GPT-5.5 as a new class of intelligence for real work, combining stronger coding, reasoning, and computer-use abilities with faster, more efficient performance. Photo: OpenAI

OpenAI has launched GPT-5.5, a major new model release that the company describes as its most capable system yet for real-world work. The model is designed to move beyond traditional chatbot interactions and into sustained execution of complex, multi-step tasks across software development, research, business operations, and data analysis.

The release reflects a broader shift inside OpenAI toward building systems that act less like assistants and more like collaborators. “GPT-5.5 is built for real work,” the company said, emphasizing its ability to plan, execute, and refine tasks across long time horizons while maintaining coherence and accuracy.

At its core, GPT-5.5 is optimized for coding, computer use, knowledge work, and scientific reasoning, areas where the company says previous models still required significant human supervision. The goal, according to OpenAI, is to close the gap between what frontier models can theoretically do and what they can reliably deliver in practice.

A Leap in Coding, Reasoning, and Execution

GPT-5.5 shows measurable gains across major industry benchmarks. On Terminal-Bench 2.0, which evaluates command-line workflows requiring tool use and planning, the model achieves 82.7 percent, up from 75.1 percent in GPT-5.4. On SWE-Bench Pro, a widely used benchmark for real-world software engineering, it reaches 58.6 percent, again improving on its predecessor.

These improvements translate into tangible gains for developers. OpenAI says GPT-5.5 is better at understanding the structure of large codebases, identifying root causes of failures, and implementing fixes that work across multiple files and systems. Early testers described the model as more reliable in “end-to-end engineering tasks,” where success depends on coordinating multiple steps rather than producing isolated snippets.

One tester noted that GPT-5.5 “feels like it understands the system, not just the code,” highlighting a shift toward deeper reasoning and contextual awareness.

The model also advances OpenAI’s broader push toward agentic workflows, where AI systems can independently complete tasks across tools. On OSWorld-Verified, a benchmark that measures real-world computer use, GPT-5.5 scores 78.7 percent, demonstrating its ability to operate software environments with minimal human intervention.

From Productivity Tool to Economic Engine

The company says the biggest impact of GPT-5.5 may be in knowledge work, where it can generate presentations, build spreadsheets, analyze data, and produce structured outputs at scale. On GDPval, OpenAI’s benchmark covering 44 occupations, GPT-5.5 reaches 84.9 percent, outperforming GPT-5.4 and approaching expert-level performance across a wide range of tasks.

“GPT-5.5 is better at producing real work products, not just answers,” OpenAI said. “It can generate deliverables that are closer to what a professional would produce.”

The model is also more efficient. OpenAI says GPT-5.5 achieves higher quality results with fewer tokens, reducing the number of iterations needed to complete a task. This efficiency lowers the cost of reaching a given level of output quality, even as the model itself becomes more advanced.

Inside OpenAI, the shift is already visible. The company reports that more than 85 percent of employees now use Codex weekly, applying AI to tasks across engineering, finance, marketing, and communications. In one example, teams used GPT-5.5 to analyze speaking request data and generate structured reports, saving several hours per week per employee.

“This is where AI becomes infrastructure,” OpenAI said, describing the model as a system that supports entire workflows rather than isolated tasks.

Advancing Scientific Discovery

Beyond enterprise use, GPT-5.5 is also pushing into scientific research. On GeneBench, a benchmark focused on genetics and quantitative biology, the model shows significant improvement over previous versions. OpenAI says it is better at exploring hypotheses, interpreting ambiguous results, and iterating across complex research workflows.

In one internal experiment, a customized version of GPT-5.5 contributed to discovering a new proof related to Ramsey numbers, a core concept in combinatorics. The result was later verified by researchers, illustrating how AI can assist in advancing mathematical knowledge under human supervision.

“We’re beginning to see AI meaningfully accelerate science,” the company said, while noting that human oversight remains essential.

Safety, Security, and Deployment

OpenAI also highlighted improvements in factual reliability and safety. GPT-5.5 reduces error rates compared to GPT-5.4 and includes stricter safeguards for high-risk domains, particularly cybersecurity. The company says it is expanding controlled access to cyber-related capabilities through its Trusted Access for Cyber program while maintaining tighter usage controls.

“Security and alignment are core to how we deploy these systems,” OpenAI said, adding that stronger guardrails are necessary as models gain more autonomy.

GPT-5.5 is now rolling out to ChatGPT Plus, Pro, Business, and Enterprise users, with GPT-5.5 Pro available for higher-complexity tasks. API access is expected to follow, with pricing set at $5 per million input tokens and $30 per million output tokens, while Pro usage carries higher rates.

The model was trained using infrastructure developed in collaboration with Microsoft and NVIDIA, leveraging Azure data centers and GPU systems including H100, H200, and next-generation architectures.

Toward AI That Can Work End-to-End

For OpenAI, GPT-5.5 represents more than another incremental release. It signals a transition toward AI systems capable of carrying real work from start to finish.

“Everything is controlled by code,” the company said. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of work.”

With GPT-5.5, OpenAI is betting that the future of AI will be defined not just by intelligence, but by execution — systems that can plan, act, and deliver outcomes at scale.

Anthropic Overtakes OpenAI with $1 Trillion Valuation on Secondary Markets

Anthropic has reached a $1 trillion implied valuation on secondary markets, surpassing OpenAI amid rapid revenue growth. The surge highlights strong investor demand but raises questions ahead of a potential IPO.

By Samantha Reed, Edited by Maria Konash
Anthropic reaches a $1 trillion implied valuation on secondary markets, surpassing OpenAI. Image: Anthropic

Anthropic has reached an implied valuation of $1 trillion on secondary markets, overtaking OpenAI to become the most valuable private AI company by this measure. The pricing, reported on platforms such as Forge Global and cited by Business Insider, reflects strong demand for limited shares. Anthropic’s rise comes as investor interest in artificial intelligence companies intensifies globally. The valuation, however, is based on private share trades rather than a formal funding round or public listing.

A key driver behind the surge is the company’s rapid revenue expansion. According to Bloomberg, Anthropic’s annualized revenue increased from $9 billion at the end of 2025 to $30 billion by March 2026, representing a 233 percent jump in one quarter. Much of this growth has been attributed to demand for AI-powered coding tools, a segment seeing strong enterprise adoption. The sharp increase in revenue has strengthened investor confidence and pushed secondary market pricing higher.

Supply constraints have also played a significant role in inflating valuations. Shares in Anthropic remain tightly held, with employees and early investors having limited opportunities to sell. This scarcity has led to competitive bidding among buyers, pushing some individual offers as high as $1.15 trillion, above the roughly $1 trillion average on secondary platforms. Such dynamics are common in private markets, where pricing can be influenced by limited liquidity rather than broad investor consensus.

What It Means

The implied valuation underscores how quickly capital is flowing into leading AI companies, particularly those demonstrating strong revenue growth. For businesses, this signals intensifying competition in AI tools, especially in high-demand areas like software development automation. Investors may view Anthropic’s performance as a benchmark for the sector, potentially influencing valuations of other private AI firms.

At the same time, the gap between secondary market pricing and expected IPO valuation highlights uncertainty. Reports suggest Anthropic is targeting a public offering in the $400 billion to $500 billion range, significantly below current private market estimates. If accurate, this discrepancy could lead to repricing when shares become publicly traded, affecting investor expectations across the AI market.

The Bigger Picture

Secondary market valuations have historically diverged from eventual public market outcomes. During the 2021 market peak, many private technology companies traded at elevated valuations before experiencing corrections of 60 to 70 percent between 2022 and 2024. This precedent suggests caution in interpreting current pricing as a definitive measure of long-term value.

Anthropic is reportedly working with major banks including Goldman Sachs and JPMorgan on a potential IPO as early as October 2026. The company’s eventual S-1 filing will provide clearer insight into its financials and valuation framework. Until then, secondary market activity offers a snapshot of investor sentiment, but not a final verdict on the company’s worth.


Microsoft Commits $18 Billion to Expand Australia AI Infrastructure

Microsoft will invest A$25 billion in Australia to expand cloud, AI, and cybersecurity capabilities while training millions of workers. The move strengthens the country’s position as a global AI infrastructure hub.

By Samantha Reed, Edited by Maria Konash
Microsoft expands AI investment in Australia, boosting cloud, cybersecurity, and workforce training. Image: Simon Ray / Unsplash

Microsoft announced a A$25 billion, or about $18 billion, investment in Australia to expand its digital infrastructure and artificial intelligence capabilities. The plan includes scaling its Microsoft Azure footprint in the country by more than 140 percent by the end of 2029. The investment, described as Microsoft’s largest in Australia, will also support cybersecurity initiatives and workforce training programs. The announcement comes as governments and technology firms accelerate efforts to secure leadership in AI and cloud infrastructure.

The initiative includes partnerships with Australian government agencies such as the Australian Signals Directorate and the Department of Home Affairs to strengthen protections for critical infrastructure. Microsoft also plans to train three million Australians in AI-related skills by 2028, aiming to broaden adoption across industries. The agreement builds on a prior A$5 billion investment announced in October 2023, previously its largest in the country. Microsoft executives signed a memorandum of understanding committing to national standards for data centers, including sustainability and alignment with national interests.

The investment aligns with Australia’s broader push to attract AI infrastructure spending. Under the government’s National AI Plan launched in December 2025, policymakers aim to create a more competitive and resilient AI-enabled economy. Other major technology firms have made similar commitments, including Amazon Web Services with an A$20 billion pledge and OpenAI with an A$7 billion investment. Australia has positioned itself as an attractive destination for such projects, citing a regulatory environment designed to balance oversight with innovation.

The Implications

Microsoft’s investment is expected to accelerate the expansion of AI and cloud infrastructure in Australia, increasing capacity for businesses and government services. For enterprises, greater access to cloud computing and AI tools could lower barriers to adoption and improve productivity. The workforce training component signals a focus on preparing employees for AI integration across sectors, from finance to public services. At the same time, closer collaboration with government agencies highlights rising concerns about cybersecurity risks tied to critical digital infrastructure.

The move also reinforces competition among global technology providers to secure regional footholds in AI infrastructure. As countries seek to localize data processing and reduce reliance on foreign systems, large-scale investments like this could influence where companies build and deploy AI services. For end users, improved infrastructure may translate into faster, more reliable digital services, though it could also bring tighter regulatory oversight.

Market Context

Australia has emerged as a key destination for global data center and AI investment, ranking among the top markets worldwide for infrastructure spending. Government policies have actively encouraged foreign investment while setting expectations around sustainability and national interest. Recent agreements with companies such as Anthropic on AI safety research further underscore the country’s strategic positioning.

Microsoft’s expansion builds on its existing presence, which included three operational data centers as of late 2025, with additional sites under construction in Sydney and Melbourne. The investment comes during a challenging period for the company’s stock performance, which has declined from recent highs amid broader market reactions to AI-driven shifts in the software sector. Despite these pressures, Microsoft continues to prioritize long-term infrastructure growth tied to AI demand.


White House Accuses China of Targeting US AI Labs

The White House has accused China of large-scale theft of U.S. AI intellectual property, citing coordinated campaigns targeting leading labs.

By Samantha Reed, Edited by Maria Konash
White House warns of China-linked AI theft campaigns targeting U.S. labs, raising security concerns. Image: Jack O'Rourke / Unsplash

The White House has accused China of conducting industrial-scale theft of intellectual property from U.S. artificial intelligence labs, according to a report by the Financial Times. The allegation is outlined in a memo authored by Michael Kratsios, director of the White House Office of Science and Technology Policy. The memo claims that foreign actors are systematically exploiting American AI innovation. The warning comes amid increasing geopolitical competition in advanced technologies and ahead of planned engagements with Chinese leadership.

According to the memo, the alleged campaigns involve the use of tens of thousands of proxy accounts to access and extract proprietary information. These efforts reportedly include “jailbreaking techniques,” which bypass safeguards in AI systems to expose underlying models or data. The administration plans to share intelligence about these threats with U.S. AI companies to strengthen defenses. This step reflects growing concern that current safeguards may be insufficient against coordinated and well-resourced intrusion attempts.

The accusations build on broader U.S. efforts to secure its technological edge in artificial intelligence. Since his reappointment in March 2025, Kratsios has been involved in shaping policies aimed at reinforcing American leadership in science and technology. These include initiatives related to federal research and development funding for fiscal year 2027 and promoting public-private partnerships. The memo positions AI security as a central pillar of national strategy, particularly as the technology becomes more commercially and militarily significant.

The Stakes

The allegations could lead to stricter controls on access to U.S. AI systems and increased scrutiny of international collaborations. For AI companies, this may translate into new compliance requirements, enhanced cybersecurity measures, and closer coordination with federal agencies. The move also underscores the vulnerability of advanced AI models, which can be probed or exploited through indirect methods such as account manipulation. For end users, tighter controls could affect how AI tools are accessed globally, potentially limiting openness in favor of security.

At a broader level, the claims are likely to intensify tensions between the United States and China in the technology sector. AI has become a critical area of competition, influencing economic growth, national security, and global influence. Actions stemming from these allegations could reshape supply chains, research collaboration, and regulatory frameworks across the industry.

Industry Backdrop

The memo arrives during a period of heightened focus on AI governance and security. U.S. policymakers have increasingly emphasized safeguarding sensitive technologies while maintaining innovation. Previous efforts have included export controls on advanced semiconductors and restrictions on technology transfers. Meanwhile, AI companies have rapidly expanded their capabilities, making them attractive targets for intellectual property theft.

Globally, governments are balancing openness in research with the need to protect strategic assets. The reported tactics, such as large-scale account creation and system manipulation, reflect evolving methods used to access proprietary technologies. As AI systems grow more powerful and widely deployed, securing them has become a central challenge for both industry and policymakers.


Google Adds Gemini AI Overviews to Gmail for Workspace Users

Google expands Gemini AI Overviews to Gmail and Drive, helping Workspace users summarize emails and retrieve insights faster across conversations.

By Samantha Reed, Edited by Maria Konash
Google brings AI Overviews to Gmail with Gemini, enabling instant summaries and answers across Workspace. Image: Rubaitul Azad / Unsplash

Google is expanding its AI-powered Overviews feature to Gmail, allowing users to summarize emails and retrieve answers across conversations using natural language. The update, announced at Google Cloud Next, extends capabilities already available in Google Search into workplace communication tools.

Powered by Gemini, the feature enables users to ask questions directly within Gmail and receive concise summaries drawn from multiple emails and threads. Instead of manually opening and reading messages, users can query topics such as project updates, invoices, or travel details and get immediate answers.

The rollout reflects Google’s broader strategy of embedding AI across its productivity ecosystem, making AI-assisted workflows the default experience for many users.

Turning Email Into a Queryable Knowledge Base

AI Overviews in Gmail effectively transform inboxes into searchable knowledge systems. By aggregating information from multiple conversations, the tool can provide structured summaries of ongoing discussions, helping users quickly understand context without navigating individual messages.

Google said the feature is designed for common workplace use cases, including tracking project milestones, reviewing feedback, and extracting key information from long email threads. It will be enabled by default for users with Gemini for Workspace and relevant smart features turned on.

This builds on earlier AI integrations in Workspace, where Gemini has been used to draft emails, summarize documents, and assist with meeting notes.

Expanding Across Workspace Products

In addition to Gmail, Google is making AI Overviews in Google Drive generally available after a beta period. The feature allows users to summarize files and extract insights across documents stored in Drive.

The rollout spans multiple customer segments, including business, enterprise, education, and consumer plans such as Google AI Pro and Ultra. This broad availability signals Google’s push to standardize AI features across its entire user base.

AI Becomes the Default Interface

The introduction of AI Overviews as a default feature highlights a shift in how users interact with information. Rather than manually searching and reading, users increasingly rely on AI to synthesize and present answers.

While some users remain cautious about relying on AI for critical information, adoption continues to grow as these tools become more integrated and capable.

For Google, embedding AI directly into core products like Gmail and Drive strengthens its position in the productivity software market, where competition is intensifying around AI-driven workflows and automation.
