Microsoft Commits $33B to Nebius and Neocloud Deals to Ease AI Crunch

Microsoft has committed $33 billion to neocloud partnerships, including a $19.4B deal with Nebius, to secure GPU capacity and avoid looming AI infrastructure bottlenecks.

By Samantha Reed | Edited by AIstify Team
Microsoft’s $19.4B agreement with Nebius anchors its $33B neocloud strategy to secure GPUs and stabilize AI supply. Photo: Nebius Group NV

Microsoft is doubling down on infrastructure to confront the global shortage of AI compute. The company has committed $33 billion to a series of neocloud deals, including a $19.4 billion agreement with Nebius, to guarantee access to advanced GPUs. The strategy is designed to protect Microsoft’s AI roadmap against capacity constraints and ensure it can continue scaling models like GPT-powered assistants and enterprise AI systems.

The Nebius partnership will provide Microsoft with more than 100,000 Nvidia GB300 GPUs, directly supporting internal research and customer-facing AI tools. By diversifying through neocloud partners, Microsoft can expand compute without building every new data center itself, a process that has become increasingly expensive and energy-intensive.

These contracts highlight the rise of “neoclouds,” infrastructure providers that operate massive GPU clusters outside the traditional hyperscale players. For Microsoft, the model reduces capital pressure and secures guaranteed supply at a time when GPU demand far exceeds production.

Strategic Leverage Through Neoclouds

The $33B neocloud push marks a shift in Microsoft’s strategy. Rather than solely investing in its own data centers, the company is treating compute as a critical resource to lock in through long-term supply deals. This provides stability in both cost and capacity as AI demand surges globally.

The Nebius deal is the largest of the agreements, but Microsoft is also exploring partnerships with other providers to diversify geographic reach and risk. Neocloud arrangements give flexibility in markets with strict data governance rules, enabling compute to be located closer to data sources while still integrating into Microsoft’s Azure ecosystem.

By securing these partnerships early, Microsoft also gains a competitive edge over rivals. Locking in GPU access ahead of projected shortages could allow the company to deliver AI products consistently while competitors scramble to source hardware. This strengthens Microsoft’s positioning not only against other cloud giants like Amazon and Google but also against fast-rising AI platform companies.

Risks and Industry Implications

Despite its scale, the neocloud approach carries risk. Microsoft will need to ensure that external GPU partners deliver consistent performance, uptime, and compliance. Latency, interoperability, and service-level agreements may all pose challenges as workloads expand. If a partner underdelivers, the knock-on effects could disrupt Microsoft’s AI services.

For neocloud providers like Nebius, these contracts are both an opportunity and a financial burden. Building and maintaining GPU megafarms requires vast capital, high energy consumption, and strong supply chain management. Margins are often thin, and any disruption in chip supply or export policies could strain operations.

Industry-wide, Microsoft’s $33B move may accelerate a new race. Other major players — from Google to Meta — could pursue similar arrangements, driving up competition for neocloud capacity and pushing valuations higher. Instead of competing solely on models, the next phase of the AI race may be fought over who controls the compute supply chain.

Looking ahead, Microsoft is expected to expand its neocloud partnerships further, potentially investing directly in providers or co-developing infrastructure. As AI continues to consume unprecedented levels of GPU power, those who secure capacity first may hold decisive influence over the industry’s future.

Anthropic Leak Reveals New Claude Mythos Model

A data leak at Anthropic exposed internal documents, including details of its upcoming Claude Mythos model, which the company describes as a major leap in AI capabilities.

By Samantha Reed | Edited by Maria Konash
Anthropic leak reveals Claude Mythos, an advanced reasoning and cybersecurity AI model. Image: Anthropic

Anthropic has confirmed details of a forthcoming AI model after a security lapse exposed internal documents, revealing what the company describes as a significant advancement in its Claude family of systems.

The leak, caused by a configuration error in Anthropic’s content management system, made nearly 3,000 unpublished assets publicly accessible. The exposed data included draft blog posts, images, and internal PDFs. Security researchers identified the issue and alerted the company, which then restricted access.

Anthropic said the incident resulted from “human error” and described the materials as early drafts intended for future publication.

New Model Tier Above Opus

Among the leaked documents was information about a new model referred to as Claude Mythos, internally codenamed “Capybara.” The model is expected to introduce a new tier above Anthropic’s current lineup, which includes Opus, Sonnet, and Haiku.

According to the draft materials, the new system is designed to be more capable than the existing Opus models, particularly in areas such as coding, academic reasoning, and cybersecurity. Anthropic confirmed it is developing a next-generation general-purpose model and described it as a “step change” in capability.

The addition of a higher-tier model suggests Anthropic is continuing to scale its systems in response to growing competition in advanced AI, particularly in enterprise and technical domains.

Cybersecurity Concerns and Controlled Release

The leaked documents highlighted cybersecurity as a key area of focus for the new model. Anthropic reportedly considers its capabilities in this domain to be significantly ahead of existing systems, raising concerns about potential misuse.

To address these risks, the company plans to limit early access to organizations focused on cybersecurity defense. This approach is intended to allow institutions to strengthen protections before broader deployment.

Anthropic has previously taken steps to mitigate misuse of its models, including blocking attempts to use its tools for cybercrime. The enhanced capabilities described in the leaked materials indicate a growing emphasis on both offensive and defensive implications of AI systems.

Broader Implications and Internal Exposure

In addition to model details, the leak revealed plans for internal events, including an invite-only gathering for European business leaders. The exposure of such materials underscores the risks associated with managing sensitive information in rapidly evolving AI organizations.

The incident comes at a time when Anthropic is expanding its influence in the AI sector, with increased enterprise adoption and ongoing infrastructure investments. It also aligns with broader strategic developments, as the company is reportedly targeting an IPO as early as October while intensifying its enterprise push and scaling infrastructure to compete more directly with OpenAI.

While Anthropic has moved quickly to secure the exposed data, the leak provides an early look at its next-generation model strategy. It also illustrates how operational vulnerabilities can expose critical information in an industry where technological advances are closely watched.

SoftBank Secures $40B Loan to Expand OpenAI Investment

SoftBank has secured a $40 billion bridge loan to deepen its investment in OpenAI and accelerate its broader AI strategy.

By Samantha Reed | Edited by Maria Konash
SoftBank secures $40B loan to double down on OpenAI and AI infrastructure. Image: insung yoon / Unsplash

SoftBank Group has secured a $40 billion bridge loan to fund its growing investments in artificial intelligence, including a deeper commitment to OpenAI, as competition intensifies across the sector.

The Japanese investment firm said the unsecured loan will be used to support its AI strategy and general corporate purposes. The financing, which matures in March 2027, was arranged by a group of major lenders including JPMorgan Chase, Goldman Sachs, Mizuho Bank, Sumitomo Mitsui Banking Corporation, and MUFG Bank.

The move marks one of SoftBank’s largest financing efforts in recent years and highlights founder Masayoshi Son’s renewed focus on AI following a period of volatility in the company’s Vision Fund performance.

Expanding Partnership With OpenAI

SoftBank has been steadily increasing its exposure to OpenAI, the developer of ChatGPT, as generative AI adoption accelerates globally. The company previously committed $30 billion to OpenAI through its Vision Fund 2, positioning itself among the largest investors in the space.

The new financing is expected to further strengthen that relationship, as SoftBank seeks to capitalize on the rapid growth of AI-driven applications and infrastructure. OpenAI, backed by Microsoft, has emerged as a central player in the industry, attracting significant enterprise demand and investor interest.

SoftBank and OpenAI have also collaborated on large-scale initiatives, including the Stargate Project, which aims to invest up to $500 billion in AI infrastructure in the United States over four years. The project reflects the increasing importance of computing capacity and data centers in supporting advanced AI systems.

Strategic Shift Toward AI Infrastructure

The loan underscores SoftBank’s broader strategy to position itself at the center of the AI ecosystem, spanning both software and infrastructure investments. The company has signaled plans to deploy substantial capital into AI-related projects, including a previously announced $100 billion investment in U.S. technology and infrastructure.

This approach aligns with a wider industry trend, where companies are investing heavily in data centers, chips, and cloud platforms to support the growing computational demands of AI models.

SoftBank’s renewed focus on AI comes after years of mixed performance from its Vision Fund, which saw both significant gains and losses across technology investments. By concentrating on AI, the firm is betting on a sector widely viewed as a key driver of future economic growth.

The scale of the financing also reflects the capital-intensive nature of AI development. As companies race to build more powerful systems, access to funding and infrastructure is becoming a critical competitive factor.


Google Expands Search Live With Voice and Camera AI Globally

Google has expanded Search Live globally, enabling users to interact with AI using voice and camera through its Gemini-powered search experience.

By Daniel Mercer | Edited by Maria Konash
Google launches Search Live with Gemini, enabling real-time voice and camera search worldwide. Image: Google

Google has expanded its Search Live feature globally, bringing real-time, conversational search capabilities to users in more than 200 countries and territories.

The feature, powered by the company’s Gemini 3.1 Flash Live model, allows users to interact with search using voice and camera input. The rollout extends access to all regions where Google’s AI Mode is available, marking a significant step in the company’s effort to transform search into a more interactive experience.

Search Live enables users to ask questions verbally and receive spoken responses, with the ability to continue conversations through follow-up queries. The system is designed to support natural, back-and-forth interaction rather than traditional keyword-based searches.

Multimodal Search Experience

A key component of the update is multimodal interaction. Users can activate their device’s camera to provide visual context, allowing the system to analyze real-world objects and deliver relevant guidance.

For example, a user can point their camera at a physical object or task, such as assembling furniture, and receive step-by-step suggestions along with links to additional resources. The feature integrates with Google Lens, enabling seamless transitions between visual search and conversational AI.

The underlying Gemini 3.1 Flash Live model is designed to support real-time responses with low latency. It is also inherently multilingual, allowing users to interact with Search Live in their preferred language without switching settings.

Shift Toward Conversational Search

The global rollout reflects a broader shift in how users interact with search engines. Instead of typing queries and browsing results, users can now engage in continuous conversations that combine text, voice, and visual inputs.

Google said the feature is intended for scenarios where typing is impractical or inefficient, such as when users need immediate assistance or are interacting with physical environments.

The expansion also highlights increasing competition in AI-powered search, as companies race to integrate conversational interfaces into everyday tools. By embedding these capabilities directly into its core search product, Google is aiming to maintain its position in a rapidly evolving landscape.

Search Live is available through the Google app on Android and iOS devices. Users can activate the feature by selecting the Live option within the search interface.

The update comes alongside broader efforts to deepen personalization in Google’s AI ecosystem. The company is expanding Personal Intelligence across Search and Gemini to all U.S. users, enabling more tailored responses by connecting data from services such as Gmail and Photos, further reinforcing its shift toward context-aware, AI-driven experiences.


Anthropic Wins Court Ruling Against Pentagon Ban

A U.S. federal judge has blocked the Pentagon’s attempt to ban Anthropic’s AI tools, allowing continued use of its systems during an ongoing legal dispute.

By Samantha Reed | Edited by Maria Konash
Anthropic wins court ruling to block Pentagon AI ban, keeping Claude in use. Image: Tingey Injury Law Firm / Unsplash

Anthropic has secured an early legal victory in its dispute with the U.S. government, after a federal judge blocked efforts by the Pentagon to halt the use of its artificial intelligence tools.

Judge Rita Lin ruled that directives issued by President Donald Trump and Defense Secretary Pete Hegseth, which sought to immediately suspend the use of Anthropic’s systems across government agencies, could not be enforced while the case proceeds.

In her decision, the judge wrote that the government’s actions appeared aimed at “crippling” the company and suppressing public debate over how its technology was being used by the military. She described the move as potentially constituting “classic First Amendment retaliation.”

Continued Use of AI in Government

The ruling allows Anthropic’s products, including its Claude AI models, to remain in use across federal agencies and by contractors working with the Department of Defense. The decision avoids an immediate disruption to systems that had become embedded in government workflows.

Anthropic had filed the lawsuit earlier this month after being designated a “supply chain risk” by the Pentagon, a classification that would have barred its technology from government use. The designation followed public criticism of the company by senior officials.

The case highlights the growing reliance of government agencies on AI tools for tasks such as data analysis, operational planning, and software development. Removing such systems would have required a complex and potentially lengthy transition to alternative providers.

Legal and Strategic Implications

The court’s decision underscores the legal complexities surrounding government intervention in the rapidly evolving AI sector. It also raises questions about how national security concerns intersect with constitutional protections and commercial competition.

Anthropic said it was pleased with the ruling but emphasized its intention to continue working with government partners to ensure safe and reliable AI deployment.

The dispute reflects broader tensions between policymakers and technology companies over the use of advanced AI systems in sensitive environments. As AI becomes more integrated into defense and national security operations, regulatory and legal frameworks are still evolving.

The outcome of the case could have wider implications for how the U.S. government evaluates and restricts technology providers, particularly in areas involving emerging technologies and strategic competition.

For now, Anthropic’s tools will remain operational within government systems, maintaining continuity for users while the legal process moves forward.


Anthropic Targets October IPO Amid Intensifying AI Competition

Anthropic is exploring an IPO as early as October, as both it and OpenAI accelerate enterprise strategies and infrastructure investments ahead of potential public listings.

By Samantha Reed | Edited by Maria Konash
Anthropic eyes IPO amid AI race, signaling strong enterprise growth and investor demand. Image: Anthropic

Anthropic is exploring a potential initial public offering as early as October, signaling a new phase in the artificial intelligence industry as leading companies prepare to tap public markets.

According to a Bloomberg report, the company has begun early discussions with major investment banks, including Goldman Sachs, JPMorgan Chase, and Morgan Stanley, about leading roles in the listing. While plans remain preliminary, reports suggest the IPO could value Anthropic at more than $60 billion.

Anthropic was last valued at $380 billion following a $30 billion funding round completed in February 2026. The round included participation from global investors such as GIC, Coatue Management, MGX, D.E. Shaw, Dragoneer, Founders Fund, and ICONIQ.

Enterprise Focus and Infrastructure Expansion

Anthropic has positioned itself as a leading provider of enterprise-focused AI systems, with its Claude models widely adopted across businesses and government applications. Strategic partnerships with major technology companies have supported its rapid growth.

The company counts Google and Amazon among its largest investors, alongside Microsoft and Nvidia, which joined in earlier funding rounds. These relationships provide access to advanced computing infrastructure and cloud platforms critical for scaling AI models.

Anthropic has also committed to investing $50 billion in building its own data center infrastructure across the United States, including projects in Texas and New York. This move reflects a broader industry trend toward vertical integration, as companies seek greater control over computing resources.

Earlier this year, Anthropic faced regulatory challenges when the Pentagon flagged the company as a potential supply-chain risk. The issue was resolved after a court order blocked the proposed restrictions, allowing Anthropic to continue operating within government contracts.

Parallel IPO Plans at OpenAI

Anthropic’s IPO considerations come as rival OpenAI is also preparing for a potential public listing. The company has been increasing its focus on enterprise applications, which are seen as a key driver of long-term revenue.

OpenAI is refining its investment strategy, targeting approximately $600 billion in compute spending by 2030. The company has also projected revenue exceeding $280 billion by the end of the decade, reflecting expectations of continued growth in AI adoption.

Both companies are competing to secure enterprise customers and scale their infrastructure, with strategies that include partnerships, custom deployments, and large-scale capital investment.

The potential public listings of Anthropic and OpenAI highlight the maturation of the AI sector. As companies transition from private funding to public markets, investor scrutiny is expected to increase, particularly around profitability, cost management, and long-term growth.
