China Orders Meta to Abandon $2 Billion Manus Deal

China’s top economic planner has ordered Meta to unwind its $2 billion acquisition of AI startup Manus. The decision underscores tightening controls on foreign access to Chinese AI technology.

By Samantha Reed. Edited by Maria Konash.
China blocks Meta-Manus deal over AI security concerns, tightening rules on foreign tech investment. Image: Othman Alghanmi / Unsplash

China’s top economic planner, the National Development and Reform Commission, has ordered Meta Platforms to unwind its $2 billion acquisition of Manus. In a brief statement, regulators said the decision to prohibit foreign investment in the company was made in accordance with existing laws and regulations. Authorities have asked the parties involved to withdraw from the transaction, marking a rare direct intervention in a high-profile cross-border AI deal. The move follows months of scrutiny from both Beijing and Washington over the implications of the acquisition.

Manus, originally founded in China before relocating to Singapore, develops general-purpose AI agents capable of performing tasks such as coding, market research, and data analysis. The startup gained rapid traction, surpassing $100 million in annual recurring revenue within months of launching its product. It also raised $75 million in funding led by U.S. venture firm Benchmark. Meta had planned to integrate Manus technology into its AI offerings, including its Meta AI assistant, to accelerate automation across consumer and enterprise products.

The deal had already triggered regulatory reviews in China, including an investigation by the Ministry of Commerce into compliance with export control and foreign investment rules. The acquisition became a focal point for concerns about so-called “Singapore-washing,” where Chinese startups relocate overseas to attract foreign capital and avoid regulatory scrutiny. Beijing’s intervention signals growing resistance to such strategies, particularly in sensitive sectors like artificial intelligence.

Cross-Border Tensions

The decision highlights escalating tensions over control of advanced technologies between China and the United States. Washington has already restricted U.S. investment in certain Chinese AI and semiconductor sectors, citing national security concerns. Beijing’s move mirrors that approach by tightening oversight of foreign acquisitions involving Chinese-developed technology.

For global technology companies, the ruling introduces greater uncertainty around cross-border deals in AI. Transactions involving startups with ties to China may face increased regulatory scrutiny, even if companies are incorporated elsewhere. This could slow international expansion plans and complicate efforts to integrate global AI capabilities.

Shifting Deal Landscape

The blocked acquisition also signals a shift in how China manages its technology ecosystem. For years, startups were encouraged to seek foreign investment and expand internationally. Recent actions suggest a pivot toward retaining control over strategic assets and limiting the transfer of intellectual property abroad.

The implications extend to venture capital and startup strategy. Founders may find it harder to rely on offshore structures or foreign funding to scale their businesses. At the same time, investors could face reduced access to high-growth AI companies in China. As governments on both sides tighten controls, the global AI market is becoming more fragmented, with separate ecosystems emerging around national priorities.


OpenAI Rewrites Microsoft Deal to Reduce Dependence

OpenAI and Microsoft have revised their partnership to cap revenue sharing and allow broader cloud distribution. The changes reflect growing competition and OpenAI’s push for flexibility.

By Olivia Grant. Edited by Maria Konash.
OpenAI-Microsoft deal update caps revenue share and expands cloud flexibility, signaling a shift in AI alliances. Image: OpenAI

OpenAI and Microsoft have announced a revised partnership agreement that reshapes their long-standing collaboration in artificial intelligence. The updated deal introduces a cap on revenue-sharing payments from OpenAI to Microsoft while maintaining the arrangement through 2030. It also removes a previous clause tied to artificial general intelligence, eliminating the need for Microsoft to reassess its position if OpenAI achieves that milestone. The changes come as both companies expand their AI ambitions and navigate increasing overlap in their business strategies.

Under the new terms, OpenAI will continue to pay Microsoft a 20% share of revenue, though total payments will now be capped. Microsoft will no longer pay revenue share back to OpenAI. The agreement also loosens restrictions on cloud distribution, allowing OpenAI to offer its products across multiple providers, including competitors such as Amazon and Google. Despite this flexibility, Microsoft remains OpenAI’s primary cloud partner, and OpenAI products will still launch first on its Azure platform unless Microsoft opts out.

The partnership continues to include significant infrastructure and intellectual property provisions. Microsoft retains access to OpenAI’s models through a licensing agreement that now runs until 2032, though the license is no longer exclusive. The companies emphasized ongoing collaboration on areas such as data center expansion, custom silicon development, and cybersecurity applications. Microsoft has invested more than $13 billion in OpenAI since 2019 and remains a major shareholder.

Strategic Realignment

The revised agreement reflects a shift toward greater independence for OpenAI as it scales its business. By enabling multi-cloud distribution, the company can reach enterprise customers that rely on different providers, addressing limitations highlighted in recent internal discussions. At the same time, the revenue cap provides more predictability for both parties, reducing long-term financial uncertainty as AI adoption accelerates.

For Microsoft, the changes preserve a central role in OpenAI’s ecosystem while allowing flexibility to pursue its own AI initiatives. The continued licensing arrangement ensures access to key technologies, even as exclusivity is removed. This balance suggests both companies are adapting to a more competitive environment while maintaining core ties.

Evolving AI Alliances

The update comes amid a wave of large-scale infrastructure and partnership deals across the AI industry. OpenAI has expanded relationships with cloud providers, including a major agreement with Amazon’s AWS, while companies like Meta are investing heavily in additional compute capacity through partners such as CoreWeave and Nebius.

These developments highlight how access to computing power and distribution channels is reshaping alliances. As AI systems become more resource-intensive, companies are diversifying partnerships to secure infrastructure and reduce dependency on single providers. The revised Microsoft-OpenAI agreement reflects this broader trend, signaling a move toward more flexible, multi-partner ecosystems in the global AI market.

Anthropic Tested How AI Agents Negotiate and Trade Among Themselves

Anthropic ran an internal experiment where AI agents negotiated and closed real-world transactions between employees. The results show stronger models secure better deals, often without users noticing.

By Maria Konash.
Anthropic experiment shows AI agents negotiating real deals, with stronger models quietly securing better outcomes. Image: Anthropic

Anthropic has tested how AI agents could handle real-world commerce through an internal experiment called Project Deal, in which models negotiated transactions on behalf of employees. In the week-long trial, 69 participants allowed AI agents powered by Claude models to buy and sell personal items without human intervention during negotiations. The agents completed 186 deals worth more than $4,000, covering items such as a snowboard, a bicycle, books, and even experiential offers like spending time with a pet. Humans stepped in only at the final stage, to physically exchange the goods.

The experiment aimed to explore whether AI agents could independently represent users in a marketplace and negotiate outcomes aligned with human preferences. Agents handled the full process, including writing listings, making offers, negotiating prices, and closing deals. Anthropic found that the system worked reliably, with participants reporting generally neutral perceptions of fairness across transactions. The setup mimicked a simplified classifieds marketplace, similar to platforms like Craigslist, but fully operated by AI.

A key finding was the impact of model quality on outcomes. More advanced models, such as Claude Opus 4.5, consistently outperformed smaller versions like Claude Haiku 4.5. Stronger agents secured higher selling prices and lower purchase costs, with measurable gains relative to average transaction values. However, participants represented by weaker models often did not recognize that they had received worse deals. This gap between objective performance and user perception emerged as one of the experiment’s most notable insights.
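The dynamic described above can be illustrated with a toy concession model (purely illustrative, not Anthropic's actual system): two agents alternate offers, each conceding a fixed fraction of the remaining price gap per round. An agent that concedes more slowly, standing in for a "stronger" negotiator, captures more of the surplus, while the counterparty may never notice it settled far from the midpoint.

```python
def negotiate(seller_ask, buyer_bid, seller_rate, buyer_rate,
              max_rounds=50, tol=1e-6):
    """Alternating-concession negotiation: each round, the seller lowers
    its ask and the buyer raises its bid by a fraction of the remaining
    gap. Returns the settlement price, or None if no deal is reached."""
    for _ in range(max_rounds):
        gap = seller_ask - buyer_bid
        if gap <= tol:  # offers have (effectively) met: settle at midpoint
            return (seller_ask + buyer_bid) / 2
        seller_ask -= seller_rate * gap  # seller concedes part of the gap
        buyer_bid += buyer_rate * gap    # buyer concedes part of the gap
    return None

# Seller asks $120, buyer bids $60 (even-split midpoint: $90).
# A patient seller conceding 10% of the gap per round, facing an eager
# buyer conceding 30%, settles near $105 — well above the midpoint.
uneven = negotiate(120, 60, seller_rate=0.10, buyer_rate=0.30)
# Equally patient agents settle at the $90 midpoint.
even = negotiate(120, 60, seller_rate=0.20, buyer_rate=0.20)
```

Crucially, both runs end in a completed deal, which is why a participant represented by the weaker (more eager) agent can perceive the outcome as fair despite the skewed price.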

Uneven Outcomes

The results suggest that AI-driven marketplaces could introduce subtle advantages based on the quality of the agent representing each user. In the experiment, stronger models extracted better terms in negotiations, while weaker ones lagged behind. Despite this, users did not consistently perceive differences in deal quality, raising concerns about transparency and fairness in automated transactions.

If similar dynamics emerge in real-world markets, access to more advanced AI systems could become a competitive advantage. Individuals or organizations using higher-performing agents may consistently secure better outcomes, potentially widening economic gaps. The findings indicate that disparities in AI capability may influence markets even when participants believe outcomes are fair.

Early Signals of Agent Economy

The experiment provides an early glimpse into a potential shift toward agent-to-agent commerce, where AI systems handle transactions on behalf of humans. Researchers have increasingly explored this concept, but most prior studies relied on simulated environments rather than real goods and participants. Anthropic’s approach adds practical insight by demonstrating how such systems behave in a live setting.

The broader context includes growing interest in “agentic AI,” systems capable of planning and executing multi-step tasks autonomously. As these systems improve, they may play a larger role in everyday economic activity, from shopping to business negotiations. However, the experiment also highlights unresolved challenges, including governance, security risks such as manipulation of agents, and the absence of clear regulatory frameworks.


Google Commits Up to $40 Billion to Anthropic

Google plans to invest up to $40 billion in Anthropic while expanding cloud and chip support. The deal underscores the growing importance of compute capacity in the AI race.

By Samantha Reed. Edited by Maria Konash.
Google boosts Anthropic with multibillion investment, expanding AI compute and cloud capacity. Image: Anthropic

Google is planning to invest up to $40 billion in Anthropic, according to a report by Bloomberg. The Alphabet subsidiary will commit $10 billion upfront at a $350 billion valuation, with an additional $30 billion tied to performance milestones. The investment comes as Anthropic scales its infrastructure to support increasingly complex AI models. It also deepens an existing relationship in which Google provides key cloud and chip resources.

The funding follows the limited release of Anthropic’s latest model, Mythos, which the company describes as its most powerful system to date. The model is being tested with select partners due to concerns about misuse, particularly in cybersecurity applications. Running such advanced models requires significant compute resources, which has become a defining factor in the AI industry. Anthropic has faced recent pressure on capacity, including user complaints about usage limits for its Claude models.

To address these constraints, Anthropic has secured a series of infrastructure deals. The company recently partnered with CoreWeave for data center capacity and expanded its relationship with Amazon, which committed an additional $5 billion as part of a broader agreement that could total $100 billion in compute spending. Anthropic also works with Broadcom to access custom AI chips used by Google. These arrangements highlight the scale of resources required to train and deploy next-generation AI systems.

Compute Arms Race

The deal reflects intensifying competition among AI companies to secure computing power. Access to chips, data centers, and energy is becoming as important as model design. Anthropic relies heavily on Google Cloud infrastructure, including tensor processing units (TPUs), specialized chips optimized for AI workloads that are seen as alternatives to Nvidia processors.

The expanded agreement includes a commitment from Google Cloud to provide around 5 gigawatts of compute capacity over the next five years, with potential for further scaling. This level of infrastructure is critical for running advanced models and supporting enterprise demand. For businesses, increased capacity could improve reliability and performance of AI services, while also shaping pricing and availability.

Investment Momentum

Anthropic’s valuation and funding trajectory reflect strong investor interest in leading AI developers. The company was valued at $350 billion earlier in 2026, with some investors reportedly willing to back it at significantly higher levels. It is also considering a potential initial public offering as early as October, which could provide a clearer benchmark for its market value.

The broader backdrop includes aggressive moves by competitors such as OpenAI, which has pursued large-scale infrastructure agreements across cloud providers and chipmakers. As companies race to build more powerful models, securing long-term access to compute resources is emerging as a key strategic priority, shaping partnerships across the AI ecosystem.

China Moves to Restrict US Investment in AI Firms

China plans to limit US investment in domestic AI companies without government approval following scrutiny of a major cross-border deal. The move signals tighter control over sensitive technologies.

By Samantha Reed. Edited by Maria Konash.
China tightens AI investment rules on U.S. capital, reshaping startup funding after Manus deal. Image: CARLOS DE SOUZA / Unsplash

Chinese regulators are moving to restrict domestic technology firms from accepting U.S. investment without prior government approval, according to a Bloomberg report. Agencies including the National Development and Reform Commission have advised several AI startups to avoid U.S.-origin funding unless explicitly cleared. Companies such as Moonshot AI and StepFun were among those reportedly given guidance. The policy shift follows scrutiny of Meta Platforms' acquisition of Manus, which raised concerns about foreign access to Chinese-developed AI technology.

The restrictions are part of a broader effort to safeguard sensitive sectors tied to national security. Regulators are also applying similar oversight to ByteDance, which now needs government clearance before approving secondary share sales to U.S. investors. The probe into the Manus transaction involves multiple agencies, including the Ministry of Commerce, reflecting the importance of the issue at a national level. The deal, valued at about $2 billion, had initially been seen as a model for global expansion before triggering backlash over potential technology transfer.

The move could significantly affect China’s technology funding ecosystem. For decades, U.S. investors such as pension funds and endowments have played a major role in financing Chinese startups. Recent measures, including restrictions on so-called red-chip companies listing in Hong Kong, suggest regulators are increasingly focused on preventing capital and technology from moving offshore. Startups like Moonshot AI, reportedly seeking up to $1 billion in funding, and StepFun, considering a $500 million listing, may need to restructure to comply with new rules.

Capital Controls Tighten

The new guidance signals a shift toward tighter control over foreign participation in China’s AI sector. By limiting access to U.S. capital, regulators aim to reduce the risk of sensitive technologies being transferred abroad. This could lead to greater reliance on domestic funding sources and state-backed investment, potentially slowing fundraising for some startups while strengthening government influence over strategic industries.

For companies, the restrictions introduce additional complexity in structuring funding rounds and planning international expansion. Deals involving offshore entities or foreign investors may face delays or require restructuring, increasing costs and uncertainty. For global investors, the move reduces access to one of the world’s largest and fastest-growing AI markets.

Policy Shift After Manus Deal

The crackdown follows the acquisition of Manus, a Singapore-incorporated AI startup founded by Chinese entrepreneurs, which relocated staff and operations ahead of its sale. The deal prompted investigations in both China and the United States, highlighting regulatory gaps in cross-border transactions involving advanced technology. Chinese authorities are now examining how similar transactions can be prevented or more tightly controlled.

The broader backdrop includes rising geopolitical tension over technology leadership. Washington has already imposed restrictions on U.S. investment in certain Chinese sectors, including semiconductors and AI, citing national security concerns. Beijing’s latest measures mirror that approach, marking a more defensive stance after years of encouraging foreign investment to build globally competitive companies.


Anthropic Brings Claude to 30,000 NEC Employees in Japan Push

NEC will deploy Anthropic’s Claude AI to 30,000 employees and co-develop industry-specific tools for Japan. The partnership targets regulated sectors and enterprise AI adoption.

By Samantha Reed. Edited by Maria Konash.
NEC expands Anthropic Claude rollout to 30,000 employees, building AI solutions for regulated sectors in Japan. Image: Anthropic

Anthropic has partnered with NEC Corporation to deploy its Claude AI models across approximately 30,000 employees worldwide. The collaboration marks Anthropic’s first Japan-based global partnership and reflects growing enterprise demand for AI tools tailored to local markets. NEC plans to use Claude to build one of Japan’s largest AI-native engineering organizations. The rollout is already underway, with employees gaining access to Claude for internal and customer-facing applications.

As part of the agreement, the two companies will jointly develop secure, industry-specific AI products for sectors including finance, manufacturing, and government. Claude will also be integrated into NEC’s cybersecurity offerings, including its Security Operations Center services, to help detect and respond to advanced threats. The partnership extends to NEC’s BluStellar platform, where Claude models such as Claude Opus 4.7 and developer tools like Claude Code will support enterprise services ranging from data-driven management to customer experience.

Internally, NEC plans to establish a Center of Excellence focused on training and enabling an AI-driven workforce. The initiative includes technical support from Anthropic and expanded use of tools such as Claude Code and Claude Cowork across internal operations. The effort builds on NEC’s “Client Zero” strategy, in which it tests its own technologies before offering them to customers. By embedding AI into both internal workflows and external products, NEC aims to accelerate adoption across its business lines.

Enterprise AI Push

The partnership highlights increasing demand for AI systems designed to meet strict enterprise and regulatory requirements. In Japan, companies and public institutions place strong emphasis on reliability, data security, and compliance. By focusing on domain-specific solutions, NEC and Anthropic are targeting organizations that require tailored AI rather than general-purpose tools.

For businesses, this approach could enable more practical deployment of AI in areas such as risk analysis, operational efficiency, and cybersecurity. For governments and regulated industries, it offers a path to adopt AI while maintaining control over data and infrastructure. The scale of deployment, covering tens of thousands of employees, also signals a shift from pilot programs to full organizational integration.

Japan’s AI Landscape

Japan has been accelerating efforts to expand its AI capabilities, with both domestic and international companies investing in the market. Partnerships between global AI developers and local firms are becoming a key strategy for navigating regulatory requirements and cultural expectations. NEC’s collaboration with Anthropic reflects this trend, combining local expertise with advanced AI models.

The focus on secure and controllable AI aligns with broader global concerns about governance and safety. As organizations seek to balance innovation with oversight, partnerships like this may shape how AI is deployed in highly regulated environments. NEC’s emphasis on internal adoption before external rollout also mirrors a wider industry pattern of validating AI systems at scale before commercial release.
