Anthropic Taps CoreWeave to Scale Claude AI Deployment

Anthropic has signed a multi-year deal with CoreWeave to power its Claude AI models, expanding compute capacity as demand for AI infrastructure surges.

By Olivia Grant | Edited by Maria Konash
Anthropic partners with CoreWeave to scale Claude, expanding compute for rising AI demand. Image: Growtika / Unsplash

Anthropic has signed a multi-year agreement with CoreWeave to expand the infrastructure supporting its Claude family of AI models, as demand for large-scale compute continues to surge. The deal will bring additional capacity online starting later this year, enabling Anthropic to run production workloads at greater scale across its growing base of enterprise and developer customers.

The partnership positions Anthropic at the center of a rapidly expanding AI infrastructure ecosystem. As one of the leading developers of large language models, the company is increasingly reliant on high-performance cloud providers to meet rising demand. CoreWeave, which specializes in AI-optimized cloud infrastructure, will provide the computing resources needed to train, deploy, and operate Claude models in real-world applications.

Under the agreement, Anthropic will use CoreWeave’s platform to support production-scale workloads, benefiting from performance and reliability tailored for modern AI systems. The rollout will take place in phases, with the potential to expand over time as demand grows. The move adds another infrastructure partner to Anthropic’s network, which already includes major cloud and hardware providers.

Expanding Compute to Meet AI Demand

The deal reflects the scale at which Anthropic is operating. Its Claude models are being adopted across startups, enterprises, and developers, driving significant compute requirements. As AI applications move from experimentation to deployment, infrastructure has become a critical bottleneck, pushing companies to diversify their cloud partnerships.

CoreWeave said it now supports nine of the top ten AI model providers, highlighting the concentration of demand among a small group of leading developers. For Anthropic, adding capacity through CoreWeave helps ensure it can continue scaling without being constrained by a single provider or platform.

Infrastructure as Competitive Advantage

The partnership underscores a broader shift in the AI industry, where access to compute is becoming as important as model performance. Companies like Anthropic are increasingly competing not just on the quality of their models, but on their ability to deliver reliable, high-speed inference and training at scale.

Specialized AI cloud providers such as CoreWeave are emerging as key players in this landscape, offering optimized environments designed specifically for machine learning workloads. These platforms can deliver higher efficiency and performance compared with general-purpose cloud infrastructure, making them attractive partners for AI labs.

For Anthropic, the agreement is part of a broader strategy to secure the infrastructure needed to support rapid growth. The importance of this strategy is underscored by parallel moves across the industry, including Meta Platforms committing an additional $21 billion to CoreWeave for AI cloud infrastructure. As adoption of AI models accelerates, large-scale investments like these highlight how access to high-performance compute is becoming a defining factor in the competitive landscape.


What Do We Really Think About AI? This Movie Tries to Answer

A new documentary featuring top AI leaders explores the tension between optimism and fear surrounding artificial intelligence, highlighting public uncertainty about its future.

By Samantha Reed | Edited by Maria Konash
New AI documentary spotlights industry leaders debating risks, reflecting public uncertainty about AI’s future. Image: Sam McGhee / Unsplash

A new documentary, The AI Doc: Or How I Became an Apocaloptimist, is bringing the debate around artificial intelligence to a broader audience, exploring both the promise and anxiety surrounding the technology. Directed by filmmaker Daniel Roher alongside Charlie Tyrell, the film premiered in theaters on March 27 and follows Roher’s personal journey as he grapples with the implications of AI while preparing to become a parent.

The documentary features interviews with some of the most influential figures in AI, including Sam Altman, Dario Amodei, and Demis Hassabis. The filmmakers conducted dozens of on-camera interviews and hundreds more off the record, aiming to capture a wide range of perspectives across the industry. Despite outreach to many high-profile figures, including Mark Zuckerberg and Elon Musk, not all agreed to participate.

Rather than focusing on breaking news, the filmmakers chose to explore deeper, more enduring questions about AI. Early in production, rapid developments in the industry, including leadership turmoil at OpenAI, made it clear that chasing headlines would quickly date the film. Instead, the project centers on fundamental issues such as what AI is, how it works, and what it means for society.

Between Optimism and Fear

A central theme of the documentary is the polarized way AI is often discussed. According to the filmmakers, public perception tends to swing between two extremes: AI as a transformative force for good or as an existential threat. The film attempts to guide viewers through that tension, presenting a more nuanced view that acknowledges both possibilities.

Producers said one of the most revealing aspects of the process was asking experts to explain AI in simple terms. Even highly accomplished scientists and executives struggled to distill complex concepts into accessible explanations, underscoring the gap between technical understanding and public awareness.

A Broader Public Conversation

The filmmakers said audience reactions have highlighted how differently people perceive AI depending on their background. Screenings have sparked discussions ranging from skepticism about the technology’s impact to concerns about its concentration among a small group of companies.

The project also reflects a shift in how AI is entering public discourse. As tools like ChatGPT and Claude become more widely used, people are interacting with AI systems directly, often without fully understanding how they work or their limitations.

For the filmmakers, the takeaway is less about providing definitive answers and more about encouraging broader participation in the conversation. As AI continues to evolve rapidly, they argue that its future should not be shaped solely by technology companies, but by a wider public engaged in questioning, debating, and understanding its impact.


Anthropic Rolls Out Claude Cowork With Enterprise Controls

Anthropic has made Claude Cowork generally available across paid plans, adding enterprise controls and analytics to support company-wide AI deployment.

By Daniel Mercer | Edited by Maria Konash
Anthropic expands Claude Cowork with enterprise controls, analytics, and integrations for large-scale adoption. Image: Claude

Anthropic has made its Claude Cowork assistant generally available across all paid plans, alongside a new set of enterprise controls designed to support organization-wide deployment. The update reflects growing adoption of AI tools beyond engineering teams, as companies increasingly integrate assistants into everyday workflows such as reporting, research, and internal collaboration.

Claude Cowork, a desktop-based AI assistant for macOS and Windows, is positioned as a non-developer counterpart to Anthropic’s coding tools. Unlike browser-based chat interfaces, it can access local files directly and integrate with enterprise systems, enabling more context-aware workflows. Early usage data shows that the majority of activity comes from non-technical teams, including operations, marketing, finance, and legal, where employees are using the tool to handle supporting tasks around core business functions.

To support broader rollout, Anthropic has introduced governance features aimed at IT and admin teams. These include role-based access controls, allowing organizations to define which teams can use specific AI capabilities, as well as group-level spending limits to manage costs. The company has also added usage analytics, enabling administrators to track adoption patterns, active users, and workflow trends across teams.
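
To make the shape of these controls concrete, here is a minimal sketch of how role-based capability gating combined with group-level spending caps could be modeled. All class, field, and group names are hypothetical illustrations, not Anthropic's actual admin API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the admin controls described above: each team gets a
# set of allowed AI capabilities plus a monthly spending cap. Names and the
# policy shape are illustrative assumptions, not Anthropic's real schema.

@dataclass
class GroupPolicy:
    allowed_capabilities: set[str]
    monthly_spend_cap_usd: float
    spent_usd: float = 0.0

    def can_use(self, capability: str, est_cost_usd: float) -> bool:
        # A request passes only if the capability is enabled for the group
        # and the estimated cost fits under the remaining monthly budget.
        return (capability in self.allowed_capabilities
                and self.spent_usd + est_cost_usd <= self.monthly_spend_cap_usd)

    def record(self, cost_usd: float) -> None:
        self.spent_usd += cost_usd

# Example: finance can run reporting workflows; legal gets file access only.
policies = {
    "finance": GroupPolicy({"file_access", "reporting"}, monthly_spend_cap_usd=500.0),
    "legal": GroupPolicy({"file_access"}, monthly_spend_cap_usd=200.0),
}
```

An admin layer like this lets IT answer both "who may do what" and "how much may it cost" with one policy object per team.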

Enterprise-Ready Controls and Visibility

The update places a strong emphasis on visibility and control. Claude Cowork now integrates with OpenTelemetry, allowing organizations to monitor AI activity through standard security and observability tools. Events such as tool usage, file access, and connector interactions can be tracked and analyzed, helping companies maintain oversight as AI becomes embedded in workflows.
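
As a dependency-free stand-in for the audit trail described above, the sketch below emits structured JSON events for tool usage and file access. The real integration emits OpenTelemetry spans; plain log lines are used here to keep the example self-contained, and all event names and fields are illustrative assumptions.

```python
import json
import logging
import time

# Stand-in sketch for an assistant audit trail: structured events for tool
# usage, file access, and connector interactions, serialized as JSON lines
# that observability tooling can ingest. Field names are assumptions.

log = logging.getLogger("assistant.audit")

def emit_event(kind: str, **attrs) -> str:
    """Build, log, and return one structured audit event."""
    record = {"event": kind, "ts": time.time(), **attrs}
    line = json.dumps(record, sort_keys=True)
    log.info(line)
    return line

# Example events mirroring the activity types mentioned above.
emit_event("tool.invocation", tool="zoom_connector", mode="read_only")
emit_event("file.access", path="/reports/summary.xlsx", action="read")
```

In a production setup, each `emit_event` call would instead open an OpenTelemetry span with the same attributes, so the events flow into whatever tracing backend the organization already runs.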

Anthropic has also expanded its connector ecosystem. A new integration with Zoom enables the assistant to pull meeting summaries, transcripts, and action items directly into workflows. Administrators can configure permissions at a granular level, including restricting write access while allowing read-only interactions. These controls are designed to address concerns around data security and unintended actions by AI systems.

From Tools to Workflows

The rollout highlights a broader shift in how organizations use AI. Rather than asking isolated questions, employees are increasingly delegating multi-step tasks to assistants. Early adopters have used Claude Cowork to automate processes such as performance reviews, incident response workflows, and internal reporting dashboards by connecting the tool to systems like Slack, Jira, and internal databases.

This transition from query-based usage to task execution mirrors trends seen in developer tools, where AI agents are taking on more complex responsibilities. For Anthropic, expanding Cowork across all paid tiers positions the company to capture a wider share of enterprise demand.

As AI assistants become more deeply embedded in business operations, the focus is shifting from raw capability to governance, integration, and reliability. Claude Cowork’s expansion reflects that evolution, with Anthropic aiming to balance increased adoption with the controls needed to manage AI at scale.

OpenAI Launches $100 ChatGPT Pro Tier to Boost Codex Usage

OpenAI has introduced a $100 ChatGPT Pro tier offering 5x higher Codex usage limits, targeting developers amid rising competition in AI coding tools.

By Daniel Mercer | Edited by Maria Konash
OpenAI unveils $100 ChatGPT Pro tier, expanding Codex access to boost developer adoption. Image: Emiliano Vittoriosi / Unsplash

OpenAI has introduced a new $100-per-month ChatGPT Pro subscription tier aimed at developers, offering significantly expanded usage limits for its Codex coding tools. The move adds a mid-range option between its $20 Plus plan and $200 Pro tier, as the company looks to attract more “vibe coders” and professional developers amid intensifying competition in AI-powered software development.

The primary appeal of the new plan is a fivefold increase in Codex usage compared to the Plus tier. Codex, OpenAI’s agentic coding system, allows users to generate, review, and execute code using natural language. Under the new pricing structure, Pro users receive substantially higher limits across both local and cloud-based tasks within rolling five-hour windows. For example, GPT-5.3-Codex usage increases from 30–150 local messages and 10–60 cloud tasks on Plus to 300–1,500 local messages and 100–600 cloud tasks on the $100 plan.

OpenAI said the new tier reflects strong demand from developers who need higher throughput for longer or more complex coding sessions. CEO Sam Altman noted the plan was introduced in response to user feedback. At the same time, the company is adjusting usage patterns on the Plus tier, shifting toward more distributed access throughout the week rather than extended daily sessions. This effectively reduces peak usage flexibility for lower-tier subscribers while encouraging upgrades.

Competing for Developers

The launch comes as competition in AI coding tools intensifies, particularly with Anthropic. Anthropic’s Claude-based coding products have gained traction in enterprise environments, contributing to rapid revenue growth and setting new benchmarks for autonomous coding systems.

OpenAI’s strategy appears designed to counter that momentum. By offering higher usage limits at a mid-tier price point, the company is targeting developers who require more capacity than casual users but may not need the full $200 plan. The move also follows OpenAI’s hiring of Peter Steinberger, creator of the OpenClaw agent framework, signaling a broader push into agent-driven development workflows.

Shifting Economics of AI Coding

The new pricing structure reflects the changing economics of AI-powered development. High-usage customers, particularly those running automated agents or working with large codebases, can quickly exceed the cost assumptions of lower-tier subscriptions.

Anthropic recently tightened restrictions on how its subscription plans can be used with third-party tools, pushing developers toward API-based pricing. OpenAI, by contrast, is positioning its plans to accommodate heavier usage directly within its subscription model.

The introduction of the $100 tier suggests a broader segmentation strategy, with pricing aligned more closely to usage intensity. As AI coding tools become central to software development workflows, companies are increasingly competing not just on model capability, but on pricing flexibility and developer experience.

The move highlights how the AI coding market is evolving into a high-stakes battleground, where access, limits, and economics may be as important as the underlying technology itself.


Will You Be Able to Invest in OpenAI? IPO Plans Suggest Yes

OpenAI to include retail investors in IPO as demand surges, highlighting shift toward broader ownership and funding for massive AI infrastructure plans.

By Samantha Reed | Edited by Maria Konash
OpenAI plans to reserve IPO shares for retail investors, widening access. Image: Aditya Vyas / Unsplash

OpenAI is planning to allocate a portion of shares to individual investors in its expected initial public offering, marking a notable shift in how major AI companies approach public markets. Chief Financial Officer Sarah Friar said the company saw strong demand from retail participants during its latest private funding round and intends to include them in a future listing. The move comes as OpenAI prepares for what could be one of the largest IPOs in the technology sector.

The company has already tested retail appetite through private placements facilitated by banks including JPMorgan Chase, Morgan Stanley, and Goldman Sachs. OpenAI initially aimed to raise $1 billion from individual investors but ultimately secured roughly three times that amount, underscoring intense interest in the company’s growth. According to Friar, demand was so high that one bank’s systems briefly failed after opening investor access to financial materials.

OpenAI’s valuation has surged alongside this demand. The company was recently valued at $852 billion following a record-breaking funding round, up sharply from earlier estimates. While Friar declined to confirm a specific IPO timeline, she indicated that the company is preparing operationally to function like a public entity. Reports suggest a potential listing could occur as early as the fourth quarter.

Broadening Ownership in AI

The decision to include retail investors reflects a broader effort to democratize access to high-growth AI companies, which have historically been dominated by institutional capital. Friar said widespread ownership is important for building trust, particularly as AI becomes more deeply integrated into everyday life.

The approach echoes strategies used by other high-profile companies. Friar pointed to her experience at Block, as well as examples from Tesla and SpaceX, where retail participation played a role in shaping investor engagement. SpaceX is also expected to reserve a significant portion of shares for individual investors in its anticipated IPO.

Funding the Compute Race

OpenAI’s push toward public markets is closely tied to its capital needs. The company plans to spend as much as $600 billion over the next five years on semiconductors and data centers, reflecting the growing importance of compute infrastructure in AI competition. Friar described compute as the company’s most critical asset, directly tied to product performance and revenue growth.

Enterprise adoption is also accelerating. According to company executives, enterprise customers currently account for about 40% of OpenAI’s revenue and are expected to reach parity with consumer revenue by 2026. The shift is driven by businesses moving beyond basic productivity use cases to deploying AI systems that manage complex workflows and teams of autonomous agents.

The scale of OpenAI’s ambitions highlights a broader trend across the industry, where access to capital and infrastructure is becoming a decisive factor. By opening its IPO to retail investors, the company is not only tapping a new funding source but also signaling a more inclusive approach to ownership in the AI era.


OpenAI Limits Access to Cybersecurity AI Model Over Misuse Risks

OpenAI plans to restrict access to a powerful new cybersecurity-focused AI model, reflecting growing concern over misuse as capabilities approach real-world attack potential.

By Marcus Lee | Edited by Maria Konash
OpenAI limits release of advanced cybersecurity model, citing rising AI attack risks. Image: Adi Goldstein / Unsplash

OpenAI is preparing to limit access to a new artificial intelligence model with advanced cybersecurity capabilities, signaling rising concern among AI developers about the risks of misuse. The model, still in development, is expected to be released only to a small group of vetted organizations, according to reports. The approach mirrors recent moves by Anthropic, which restricted access to its Mythos Preview model due to similar concerns about its ability to identify and exploit software vulnerabilities.

The shift reflects a broader turning point in AI development. Models are increasingly capable of autonomously analyzing code, discovering weaknesses, and even generating exploits. OpenAI has already begun testing controlled access through its “Trusted Access for Cyber” program, launched earlier this year alongside its GPT-5.3-Codex model. The initiative provides selected organizations with access to more advanced and less restricted systems for defensive cybersecurity work, backed by $10 million in API credits.

Security experts say the capabilities now emerging represent a fundamental change in the threat landscape. AI tools that were once limited to assisting developers are now approaching the level of skilled human hackers. This raises the risk that such systems could be used to target critical infrastructure, including energy grids, water systems, and financial networks. Industry leaders warn that the timeline for widespread availability of these capabilities may be measured in months rather than years.

A Shift Toward Controlled Deployment

The decision to restrict access highlights a growing tension between innovation and safety. AI companies are under pressure to advance model capabilities while preventing misuse. Limiting access to trusted partners allows developers to study risks and refine safeguards before broader release.

This approach resembles established practices in cybersecurity, where vulnerabilities are disclosed gradually to allow time for patches before public exposure. Some experts argue that staggered deployment of powerful AI models may become standard as capabilities continue to advance.

At the same time, there are limits to how much control companies can maintain. Researchers note that existing publicly available models are already capable of identifying certain vulnerabilities, suggesting that the underlying capabilities are spreading across the industry.

An Irreversible Turning Point

The move by OpenAI and Anthropic underscores a growing consensus that AI has crossed a critical threshold in cybersecurity. Once these capabilities exist, they cannot easily be contained. Even if leading companies restrict access, similar models are likely to emerge elsewhere.

For enterprises and governments, the implication is clear: defenses must evolve quickly. Organizations may need to adopt AI-driven security tools at scale to keep pace with increasingly automated threats.

While it remains unclear whether OpenAI will eventually release the model more broadly, the current strategy reflects a cautious approach to a rapidly changing risk environment. The balance between openness and control is likely to remain a defining issue as AI systems become more powerful and more widely deployed.
