Meta Expands AWS Partnership With Massive Graviton Deployment

Meta will deploy tens of millions of AWS Graviton cores to power its next generation of AI systems. The deal highlights rising demand for CPU-driven infrastructure alongside GPUs.

By Olivia Grant, edited by Maria Konash

Meta has signed an agreement with Amazon Web Services to deploy tens of millions of Graviton processor cores, marking a major expansion of their long-standing partnership. The deployment will support Meta’s next generation of artificial intelligence systems and is expected to scale further over time. The move positions Meta as one of the largest customers of AWS’s custom-designed Graviton chips. It comes as demand for AI infrastructure grows rapidly, particularly for systems that require real-time processing and coordination.

The chips will power a range of workloads across Meta’s platforms, including AI systems that handle billions of user interactions. While graphics processing units remain central to training large AI models, the rise of agentic AI has increased demand for CPU-based computing. These workloads include real-time reasoning, code generation, and orchestrating multi-step processes. AWS’s Graviton5 is designed for such tasks, featuring 192 cores and significantly expanded cache to improve data flow and reduce latency.

Graviton processors are built on the AWS Nitro System, which combines dedicated hardware and software to deliver high performance and security. The infrastructure also supports features such as low-latency communication between compute instances, enabling distributed AI workloads to run efficiently. Meta has previously relied on AWS services, including large-scale use of its AI tools, and this agreement deepens that relationship. The deployment also aligns with Meta’s strategy to diversify its compute resources as it scales AI capabilities.

Infrastructure Shift

The deal reflects a broader shift in how AI infrastructure is designed. While GPUs dominate model training, many emerging AI applications require sustained, high-volume processing that is better suited to CPUs. Agentic AI systems, which can plan and execute multi-step tasks autonomously, rely heavily on this type of compute. By investing in purpose-built chips like Graviton, companies can optimize performance while managing costs more effectively.

For businesses, this trend signals a more complex infrastructure landscape, where different types of processors are used for specific workloads. It may also influence how cloud providers package and price AI services, as demand grows for specialized compute resources. For end users, improved infrastructure can enable faster and more responsive AI-driven features across platforms.

The Road Ahead

The expansion underscores the increasing importance of custom silicon in the AI race. AWS designs Graviton chips to be more energy efficient and cost-effective than traditional processors, with the latest generation delivering up to 25% performance gains. Built on advanced manufacturing processes, these chips help address both cost pressures and sustainability goals as AI workloads scale.

As AI adoption accelerates, infrastructure efficiency is becoming a key competitive factor. Companies like Meta are balancing performance, cost, and energy use while building systems capable of supporting billions of interactions. The partnership with AWS suggests that purpose-built processors will play a larger role in future AI deployments, shaping how large-scale systems are developed and operated.

China Moves to Restrict US Investment in AI Firms

China plans to limit US investment in domestic AI companies without government approval following scrutiny of a major cross-border deal. The move signals tighter control over sensitive technologies.

By Samantha Reed, edited by Maria Konash
China tightens AI investment rules on U.S. capital, reshaping startup funding after Manus deal. Image: CARLOS DE SOUZA / Unsplash

Chinese regulators are moving to restrict domestic technology firms from accepting U.S. investment without prior government approval, according to a Bloomberg report. Agencies including the National Development and Reform Commission have advised several AI startups to avoid U.S.-origin funding unless explicitly cleared. Companies such as Moonshot AI and StepFun were among those reportedly given guidance. The policy shift follows scrutiny of Meta Platforms' acquisition of Manus, which raised concerns about foreign access to Chinese-developed AI technology.

The restrictions are part of a broader effort to safeguard sensitive sectors tied to national security. Regulators are also applying similar oversight to ByteDance, requiring government sign-off before it can approve secondary share sales to U.S. investors. The probe into the Manus transaction involves multiple agencies, including the Ministry of Commerce, reflecting the importance of the issue at a national level. The deal, valued at about $2 billion, had initially been seen as a model for global expansion before triggering backlash over potential technology transfer.

The move could significantly affect China’s technology funding ecosystem. For decades, U.S. investors such as pension funds and endowments have played a major role in financing Chinese startups. Recent measures, including restrictions on so-called red-chip companies listing in Hong Kong, suggest regulators are increasingly focused on preventing capital and technology from moving offshore. Startups like Moonshot AI, reportedly seeking up to $1 billion in funding, and StepFun, considering a $500 million listing, may need to restructure to comply with new rules.

Capital Controls Tighten

The new guidance signals a shift toward tighter control over foreign participation in China’s AI sector. By limiting access to U.S. capital, regulators aim to reduce the risk of sensitive technologies being transferred abroad. This could lead to greater reliance on domestic funding sources and state-backed investment, potentially slowing fundraising for some startups while strengthening government influence over strategic industries.

For companies, the restrictions introduce additional complexity in structuring funding rounds and planning international expansion. Deals involving offshore entities or foreign investors may face delays or require restructuring, increasing costs and uncertainty. For global investors, the move reduces access to one of the world’s largest and fastest-growing AI markets.

Policy Shift After Manus Deal

The crackdown follows the acquisition of Manus, a Singapore-incorporated AI startup founded by Chinese entrepreneurs, which relocated staff and operations ahead of its sale. The deal prompted investigations in both China and the United States, highlighting regulatory gaps in cross-border transactions involving advanced technology. Chinese authorities are now examining how similar transactions can be prevented or more tightly controlled.

The broader backdrop includes rising geopolitical tension over technology leadership. Washington has already imposed restrictions on U.S. investment in certain Chinese sectors, including semiconductors and AI, citing national security concerns. Beijing’s latest measures mirror that approach, marking a more defensive stance after years of encouraging foreign investment to build globally competitive companies.


Anthropic Brings Claude to 30,000 NEC Employees in Japan Push

NEC will deploy Anthropic’s Claude AI to 30,000 employees and co-develop industry-specific tools for Japan. The partnership targets regulated sectors and enterprise AI adoption.

By Samantha Reed, edited by Maria Konash
NEC expands Anthropic Claude rollout to 30,000 employees, building AI solutions for regulated sectors in Japan. Image: Anthropic

Anthropic has partnered with NEC Corporation to deploy its Claude AI models across approximately 30,000 employees worldwide. The collaboration marks Anthropic’s first Japan-based global partnership and reflects growing enterprise demand for AI tools tailored to local markets. NEC plans to use Claude to build one of Japan’s largest AI-native engineering organizations. The rollout is already underway, with employees gaining access to Claude for internal and customer-facing applications.

As part of the agreement, the two companies will jointly develop secure, industry-specific AI products for sectors including finance, manufacturing, and government. Claude will also be integrated into NEC’s cybersecurity offerings, including its Security Operations Center services, to help detect and respond to advanced threats. The partnership extends to NEC’s BluStellar platform, where Claude models such as Claude Opus 4.7 and developer tools like Claude Code will support enterprise services ranging from data-driven management to customer experience.

Internally, NEC plans to establish a Center of Excellence focused on training and enabling an AI-driven workforce. The initiative includes technical support from Anthropic and expanded use of tools such as Claude Code and Claude Cowork across internal operations. The effort builds on NEC’s “Client Zero” strategy, in which it tests its own technologies before offering them to customers. By embedding AI into both internal workflows and external products, NEC aims to accelerate adoption across its business lines.

Enterprise AI Push

The partnership highlights increasing demand for AI systems designed to meet strict enterprise and regulatory requirements. In Japan, companies and public institutions place strong emphasis on reliability, data security, and compliance. By focusing on domain-specific solutions, NEC and Anthropic are targeting organizations that require tailored AI rather than general-purpose tools.

For businesses, this approach could enable more practical deployment of AI in areas such as risk analysis, operational efficiency, and cybersecurity. For governments and regulated industries, it offers a path to adopt AI while maintaining control over data and infrastructure. The scale of deployment, covering tens of thousands of employees, also signals a shift from pilot programs to full organizational integration.

Japan’s AI Landscape

Japan has been accelerating efforts to expand its AI capabilities, with both domestic and international companies investing in the market. Partnerships between global AI developers and local firms are becoming a key strategy for navigating regulatory requirements and cultural expectations. NEC’s collaboration with Anthropic reflects this trend, combining local expertise with advanced AI models.

The focus on secure and controllable AI aligns with broader global concerns about governance and safety. As organizations seek to balance innovation with oversight, partnerships like this may shape how AI is deployed in highly regulated environments. NEC’s emphasis on internal adoption before external rollout also mirrors a wider industry pattern of validating AI systems at scale before commercial release.


Cohere Plans Aleph Alpha Acquisition to Expand Europe Presence

Cohere plans to acquire Aleph Alpha and secure a $600 million investment to expand its footprint in Europe. The deal targets demand for sovereign AI in regulated industries.

By Samantha Reed, edited by Maria Konash
Cohere plans to acquire Aleph Alpha and raise $600M, expanding sovereign AI in Europe. Image: Cohere

Cohere said it plans to acquire Aleph Alpha as part of a push to expand its presence in Europe. The proposed deal, which has not yet closed and remains subject to regulatory approval, would also bring new funding: Schwarz Group, a key backer of Aleph Alpha, intends to invest $600 million into Cohere's upcoming Series E round. Cohere expects to complete that funding round in 2026, according to a source familiar with the matter.

Cohere, founded in 2019, has raised about $1.6 billion to date from investors including Nvidia and AMD, and was valued at $7 billion in 2025. The acquisition would strengthen its ability to deliver customized AI systems tailored for regulated sectors such as government, finance, defense, and healthcare. Aleph Alpha brings established relationships with German public sector clients, including work with federal and regional authorities, which could accelerate Cohere’s entry into Europe’s largest economy.

The deal also reflects a strategic shift toward so-called sovereign AI, where organizations retain greater control over data, infrastructure, and deployment. Aleph Alpha, founded in 2019, initially focused on building large language models before pivoting to enterprise applications. It has raised more than $600 million in funding, including grants, and built a presence in Europe’s public sector. Cohere said combining capabilities would enhance its ability to meet demand for secure and compliant AI systems.

Strategic Implications

The planned acquisition highlights growing demand for AI solutions that meet strict regulatory and data sovereignty requirements. Governments and enterprises are increasingly seeking alternatives to global cloud providers that allow greater control over sensitive data. By integrating Aleph Alpha’s regional expertise, Cohere could position itself as a provider of localized AI infrastructure tailored to European standards.

The additional $600 million investment from Schwarz Group also signals continued investor confidence in enterprise-focused AI companies. Access to capital will be critical as firms compete to build infrastructure, scale operations, and meet rising demand for customized AI deployments. For customers, the deal could expand access to AI systems designed specifically for compliance-heavy industries.

European Market Dynamics

Europe has become a focal point for AI development shaped by regulation, including strict data protection and emerging AI governance frameworks. This environment has encouraged the growth of providers that emphasize transparency, security, and local control. Aleph Alpha’s existing contracts with German government entities highlight the importance of trusted domestic partnerships in this market.

For Cohere, the acquisition offers a faster route into Europe compared with building operations from scratch. It also positions the company against both U.S. and regional competitors seeking to capture enterprise AI demand. As governments and corporations prioritize sovereignty and compliance, partnerships like this may become more common, reshaping how AI services are delivered across regions.


DeepSeek Unveils V4 Model With Lower AI Costs

DeepSeek has released a preview of its V4 language model, highlighting lower inference costs and strong performance. The launch intensifies competition in China’s fast-growing AI market.

By Daniel Mercer, edited by Maria Konash
DeepSeek V4 debuts with lower inference costs and strong performance, intensifying China’s AI race. Image: Solen Feyissa / Unsplash

DeepSeek has released a preview version of its V4 large language model, offering developers early access to its latest capabilities. The Hangzhou-based company said the model is available in both "pro" and "flash" versions, designed for different performance and size requirements. Like its earlier releases, V4 is open-source, allowing users to download, modify, and run the model locally. The launch comes more than a year after DeepSeek introduced its R1 reasoning model, which drew global attention for its performance and low development cost.

DeepSeek claims that V4 delivers strong results in agent-based tasks, knowledge processing, and inference, the process of generating outputs from a trained model. Analysts at Counterpoint Research said the model shows improved efficiency, with lower inference costs than earlier versions. The system has also been optimized to work with agent tools such as Claude Code from Anthropic and OpenClaw. These integrations point to a growing focus on AI agents, which automate multi-step tasks using language models.

The release comes as competition in China’s AI sector accelerates. Companies including Alibaba and ByteDance have launched new models this year, intensifying rivalry in both enterprise and open-source segments. Market reactions reflected the shifting landscape, with shares of firms such as MiniMax and Zhipu declining, while chip manufacturers like SMIC and Hua Hong Semiconductor rose following the announcement.

Cost and Capability Shift

DeepSeek’s V4 reinforces a key trend in AI development: improving performance while reducing costs. Lower inference costs make AI tools more accessible to businesses, particularly for applications that require continuous or large-scale usage. For developers, open-source availability allows customization and deployment without relying on centralized providers, potentially accelerating innovation.

The model’s reported ability to run on domestic chips could also shift the balance in the global AI supply chain. If widely adopted, this capability may reduce dependence on U.S.-based hardware providers such as Nvidia. That would support China’s push for greater technological self-sufficiency and reshape how AI infrastructure is built and deployed worldwide.

China’s AI Arms Race

DeepSeek first gained prominence with its V3 model in late 2024 and the R1 reasoning model in early 2025, which reportedly matched or exceeded leading systems at a fraction of the cost. The company said R1 was developed in about two months for under $6 million using lower-capacity chips, raising questions about the scale of spending by larger AI players.

Since then, the market has evolved, with investors and companies increasingly recognizing that Chinese AI developers can compete on both cost and capability. Analysts suggest V4 is unlikely to trigger the same market shock as R1, as expectations have adjusted. However, the model’s positioning against domestic rivals highlights how quickly competition within China has intensified, signaling a more crowded and mature AI ecosystem.


OpenAI Introduces GPT-5.5 as Its Most Capable Model for Real Work Yet

OpenAI has launched GPT-5.5, a new flagship model designed for coding, computer use, knowledge work, and scientific research, with stronger performance, lower token usage, and broader real-world autonomy than GPT-5.4.

By Maria Konash, edited by AIstify Team
OpenAI has introduced GPT-5.5 as a new class of intelligence for real work, combining stronger coding, reasoning, and computer-use abilities with faster, more efficient performance. Photo: OpenAI

OpenAI has just launched GPT-5.5, a major new model release that the company describes as its most capable system yet for real-world work. The model is designed to move beyond traditional chatbot interactions and into sustained execution of complex, multi-step tasks across software development, research, business operations, and data analysis.

The release reflects a broader shift inside OpenAI toward building systems that act less like assistants and more like collaborators. “GPT-5.5 is built for real work,” the company said, emphasizing its ability to plan, execute, and refine tasks across long time horizons while maintaining coherence and accuracy.

At its core, GPT-5.5 is optimized for coding, computer use, knowledge work, and scientific reasoning, areas where the company says previous models still required significant human supervision. The goal, according to OpenAI, is to close the gap between what frontier models can theoretically do and what they can reliably deliver in practice.

A Leap in Coding, Reasoning, and Execution

GPT-5.5 shows measurable gains across major industry benchmarks. On Terminal-Bench 2.0, which evaluates command-line workflows requiring tool use and planning, the model achieves 82.7 percent, up from 75.1 percent in GPT-5.4. On SWE-Bench Pro, a widely used benchmark for real-world software engineering, it reaches 58.6 percent, again improving on its predecessor.

These improvements translate into tangible gains for developers. OpenAI says GPT-5.5 is better at understanding the structure of large codebases, identifying root causes of failures, and implementing fixes that work across multiple files and systems. Early testers described the model as more reliable in “end-to-end engineering tasks,” where success depends on coordinating multiple steps rather than producing isolated snippets.

One tester noted that GPT-5.5 “feels like it understands the system, not just the code,” highlighting a shift toward deeper reasoning and contextual awareness.

The model also advances OpenAI’s broader push toward agentic workflows, where AI systems can independently complete tasks across tools. On OSWorld-Verified, a benchmark that measures real-world computer use, GPT-5.5 scores 78.7 percent, demonstrating its ability to operate software environments with minimal human intervention.

From Productivity Tool to Economic Engine

The company says the biggest impact of GPT-5.5 may be in knowledge work, where it can generate presentations, build spreadsheets, analyze data, and produce structured outputs at scale. On GDPval, OpenAI’s benchmark covering 44 occupations, GPT-5.5 reaches 84.9 percent, outperforming GPT-5.4 and approaching expert-level performance across a wide range of tasks.

“GPT-5.5 is better at producing real work products, not just answers,” OpenAI said. “It can generate deliverables that are closer to what a professional would produce.”

The model is also more efficient. OpenAI says GPT-5.5 achieves higher quality results with fewer tokens, reducing the number of iterations needed to complete a task. This efficiency lowers the cost of reaching a given level of output quality, even as the model itself becomes more advanced.

Inside OpenAI, the shift is already visible. The company reports that more than 85 percent of employees now use Codex weekly, applying AI to tasks across engineering, finance, marketing, and communications. In one example, teams used GPT-5.5 to analyze speaking request data and generate structured reports, saving several hours per week per employee.

“This is where AI becomes infrastructure,” OpenAI said, describing the model as a system that supports entire workflows rather than isolated tasks.

Advancing Scientific Discovery

Beyond enterprise use, GPT-5.5 is also pushing into scientific research. On GeneBench, a benchmark focused on genetics and quantitative biology, the model shows significant improvement over previous versions. OpenAI says it is better at exploring hypotheses, interpreting ambiguous results, and iterating across complex research workflows.

In one internal experiment, a customized version of GPT-5.5 contributed to discovering a new proof related to Ramsey numbers, a core concept in combinatorics. The result was later verified by researchers, illustrating how AI can assist in advancing mathematical knowledge under human supervision.

“We’re beginning to see AI meaningfully accelerate science,” the company said, while noting that human oversight remains essential.

Safety, Security, and Deployment

OpenAI also highlighted improvements in factual reliability and safety. GPT-5.5 reduces error rates compared to GPT-5.4 and includes stricter safeguards for high-risk domains, particularly cybersecurity. The company says it is expanding controlled access to cyber-related capabilities through its Trusted Access for Cyber program while maintaining tighter usage controls.

“Security and alignment are core to how we deploy these systems,” OpenAI said, adding that stronger guardrails are necessary as models gain more autonomy.

GPT-5.5 is now rolling out to ChatGPT Plus, Pro, Business, and Enterprise users, with GPT-5.5 Pro available for higher-complexity tasks. API access is expected to follow, with pricing set at $5 per million input tokens and $30 per million output tokens, while Pro usage carries higher rates.
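To make the reported pricing concrete, the per-request arithmetic can be sketched as below. The rates come from the article; the token counts and the function name are illustrative assumptions, not figures from OpenAI.

```python
# Cost sketch at the reported GPT-5.5 API rates:
# $5 per million input tokens, $30 per million output tokens.
# Token counts in the example are hypothetical, chosen for illustration.

INPUT_PRICE_PER_M = 5.00    # USD per 1M input tokens (reported rate)
OUTPUT_PRICE_PER_M = 30.00  # USD per 1M output tokens (reported rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call at the reported rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt that produces a 4,000-token answer.
cost = request_cost(20_000, 4_000)
print(f"${cost:.2f}")  # → $0.22
```

This also shows why the token-efficiency claim matters: since output tokens cost six times as much as input tokens at these rates, a model that reaches the same answer with fewer generated tokens cuts the bill more than proportionally.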

The model was trained using infrastructure developed in collaboration with Microsoft and NVIDIA, leveraging Azure data centers and GPU systems including H100, H200, and next-generation architectures.

Toward AI That Can Work End-to-End

For OpenAI, GPT-5.5 represents more than another incremental release. It signals a transition toward AI systems capable of carrying real work from start to finish.

“Everything is controlled by code,” the company said. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of work.”

With GPT-5.5, OpenAI is betting that the future of AI will be defined not just by intelligence, but by execution — systems that can plan, act, and deliver outcomes at scale.
