Nvidia Invests $2 Billion in Nebius AI Cloud Partnership

Nvidia and Nebius have formed a strategic partnership to build hyperscale AI cloud infrastructure, with Nvidia investing $2 billion to support gigawatt-scale AI computing capacity.

By Olivia Grant | Edited by Maria Konash
Nvidia invests $2B in Nebius to expand hyperscale AI cloud and build gigawatt-scale AI factories. Photo: Nvidia

Nvidia and Nebius Group have announced a strategic partnership to develop a next-generation hyperscale cloud platform designed specifically for artificial intelligence workloads. As part of the agreement, Nvidia will invest $2 billion in Nebius to support the expansion of its AI cloud infrastructure.

The collaboration aims to scale computing resources for AI developers, enterprises, and research organizations as demand for high-performance infrastructure accelerates. Nebius plans to deploy more than five gigawatts of Nvidia-powered computing capacity by the end of 2030.

The partnership builds on Nebius’s existing use of Nvidia hardware across its cloud platform and will involve deeper integration across the AI technology stack, including data center design, software optimization, and large-scale infrastructure deployment.

“AI is at another inflection point — agentic AI, driving incredible compute demand and accelerating infrastructure buildout,” said Nvidia Chief Executive Jensen Huang. “Nebius is building an AI cloud designed for the agentic era, fully integrated from silicon to software and powered by Nvidia’s next-generation accelerated compute.”

AI Factories and Agentic Infrastructure

Under the agreement, the companies will collaborate on designing large-scale “AI factories” — specialized data centers optimized for training and running advanced AI models. These facilities will incorporate multiple generations of Nvidia accelerated computing systems as Nebius expands its global platform.

The partnership will include early adoption of Nvidia’s latest architectures, including the Rubin platform, Vera CPUs, and BlueField storage and networking systems. The companies will also work together to optimize inference and agentic AI software stacks, enabling developers to deploy advanced AI applications more efficiently.

In addition to infrastructure deployment, Nvidia will provide design guidance, system validation processes, and engineering support as Nebius scales its data center operations. The companies will also collaborate on fleet management tools that monitor GPU health and optimize system performance across large clusters.
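Fleet-management tooling of the kind described above typically samples per-GPU telemetry and flags devices that fall outside operating limits. The sketch below is purely illustrative and is not the Nvidia/Nebius tooling: it parses the CSV output shape produced by the standard `nvidia-smi` query interface and applies a hypothetical temperature threshold.

```python
# Illustrative GPU fleet health check; the threshold is hypothetical,
# not from the Nvidia/Nebius partnership.
from dataclasses import dataclass

# Fields as returned by:
#   nvidia-smi --query-gpu=index,temperature.gpu,utilization.gpu,memory.used \
#              --format=csv,noheader,nounits
@dataclass
class GpuSample:
    index: int
    temp_c: int
    util_pct: int
    mem_used_mib: int

def parse_samples(csv_text: str) -> list[GpuSample]:
    """Parse nvidia-smi CSV output (noheader, nounits) into samples."""
    samples = []
    for line in csv_text.strip().splitlines():
        idx, temp, util, mem = [field.strip() for field in line.split(",")]
        samples.append(GpuSample(int(idx), int(temp), int(util), int(mem)))
    return samples

def unhealthy(samples: list[GpuSample], max_temp_c: int = 85) -> list[int]:
    """Return indices of GPUs exceeding a (hypothetical) temperature limit."""
    return [s.index for s in samples if s.temp_c > max_temp_c]

# Example telemetry from a 4-GPU node (values illustrative):
raw = """\
0, 62, 97, 71234
1, 91, 99, 71480
2, 60, 95, 70990
3, 88, 98, 71102
"""
flagged = unhealthy(parse_samples(raw))
print(flagged)  # GPUs 1 and 3 exceed the 85 C threshold
```

A production fleet manager would aggregate such samples across thousands of nodes and feed them into scheduling and repair workflows; this sketch shows only the per-node parsing and flagging step.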

Building AI Infrastructure for Global Demand

Nebius said the partnership reflects its strategy of building a cloud platform designed specifically for AI workloads rather than adapting traditional cloud computing models.

“Nebius has been built for AI since day one — not adapted from a general-purpose cloud, but designed for what developers actually need,” said Arkady Volozh, chief executive of Nebius. “Now with Nvidia, we are extending that throughout the stack — from gigawatt-scale AI factories to inference and software.”

The companies said the partnership is intended to support growing global demand for AI infrastructure as organizations increasingly deploy large-scale models and agent-based AI systems.

By combining Nvidia’s accelerated computing technologies with Nebius’s cloud platform, the collaboration aims to provide developers and enterprises with scalable infrastructure capable of supporting the next generation of AI applications.

Anthropic Launches Institute to Study Societal Impact of AI

Anthropic has launched the Anthropic Institute to study the societal, economic, and governance challenges posed by advanced AI systems. The initiative will combine research from engineers, economists, and social scientists.

By Laura Bennett | Edited by Maria Konash
Anthropic launches the Anthropic Institute to research AI safety, economic impact, governance, and societal risks. Image: Anthropic

Anthropic has announced the launch of the Anthropic Institute, a research initiative focused on examining the societal, economic, and governance challenges posed by rapidly advancing AI systems.

The institute will draw on internal research across Anthropic to provide insights for policymakers, researchers, and the public as AI systems grow more capable. The company said the effort aims to improve understanding of how advanced AI could reshape economies, jobs, legal systems, and governance structures.

The initiative will be led by Anthropic co-founder Jack Clark, who will assume a new role as the company’s Head of Public Benefit. The institute’s interdisciplinary team will include machine learning engineers, economists, and social scientists working together to analyze the broader implications of frontier AI technologies.

Anthropic said the pace of AI progress has accelerated rapidly in recent years. The company took two years to release its first commercial model and only three more years to develop systems capable of discovering cybersecurity vulnerabilities, performing complex professional tasks, and contributing to AI research itself.

The institute will study several key questions related to advanced AI development, including how the technology may transform labor markets, influence economic growth, and affect societal resilience. It will also examine governance challenges such as how AI systems should be regulated and how organizations should manage the values embedded in advanced models.

Research Focus and Policy Engagement

The Anthropic Institute will integrate and expand three existing research groups inside the company: the Frontier Red Team, which tests the limits and risks of AI systems; the Societal Impacts team, which studies real-world AI adoption; and the Economic Research team, which analyzes labor and macroeconomic effects.

In addition to continuing existing research programs, the institute plans to explore new areas including forecasting future AI progress and studying how powerful AI systems could interact with legal systems and regulatory frameworks.

Anthropic has also announced several key hires for the institute. Matt Botvinick, previously a senior research leader at Google DeepMind and a resident fellow at Yale Law School, will lead research on AI and the rule of law. Economist Anton Korinek from the University of Virginia will help study how advanced AI could reshape economic activity. Zoë Hitzig, who previously researched AI’s economic impacts at OpenAI, will contribute to connecting economic research with model development.

Alongside the institute launch, Anthropic is expanding its public policy organization. The company plans to open its first Washington, D.C. office this spring as it increases engagement with policymakers on issues including AI safety, infrastructure investment, and export controls.

The company said the institute will publish research and engage with external stakeholders to help societies prepare for the potential benefits and risks of transformative AI technologies as development accelerates.


Amazon Launches Health AI Assistant for Prime Members

Amazon has launched Health AI, an agentic assistant on its website and app that helps users understand medical records, manage prescriptions, and connect with doctors. Prime members will receive limited free virtual care consultations.

By Samantha Reed | Edited by Maria Konash

Amazon has launched Health AI, a new AI assistant designed to help users manage health questions, medical records, and virtual care directly through the Amazon website and mobile app. The service offers 24/7 access and integrates with Amazon’s healthcare ecosystem, including One Medical and Amazon Pharmacy.

Health AI is designed as an agent-based system that can answer general medical questions, interpret lab results, and guide users through health-related decisions. With user permission, the assistant can access medical records such as medications, diagnoses, and clinical notes to deliver more personalized responses.

The assistant can also help manage prescription renewals, connect patients with licensed healthcare providers, and recommend relevant health products available on Amazon’s marketplace. If a situation requires professional care, the system can arrange consultations with One Medical providers via messaging, video visits, or in-person appointments.

Amazon said the goal is to reduce friction in healthcare navigation by combining AI tools with clinical services. Many users currently rely on general internet searches for health information, which may not reflect their personal medical history. Health AI aims to provide more context-aware guidance by linking user data with clinical resources.

Virtual Care and Personalized Health Insights

The assistant can explain lab results, identify possible causes of symptoms, and suggest next steps based on individual health records. For example, a patient experiencing respiratory symptoms could receive advice tailored to existing conditions such as asthma or allergies.

Amazon said Health AI is designed to support medical decision-making rather than replace clinicians. If the system is uncertain about a recommendation, it directs users to consult a healthcare provider instead of providing potentially incorrect guidance.

The platform is powered by Amazon Bedrock and uses a multi-agent architecture. A core AI agent communicates with patients while specialized sub-agents manage tasks such as prescription handling, appointment scheduling, and record analysis. Auditor and monitoring agents oversee conversations to ensure safety and compliance.
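The multi-agent design described above, in which a core agent routes patient requests to task-specific sub-agents and an auditor reviews outputs, can be illustrated generically. The sketch below is not Amazon's Bedrock implementation; all agent names, routing rules, and canned replies are hypothetical.

```python
# Generic multi-agent routing pattern: a core agent dispatches requests to
# sub-agents, and an auditor reviews every reply before it is returned.
# Purely illustrative; not Amazon's Health AI implementation.
from typing import Callable

def prescription_agent(msg: str) -> str:
    return "Renewal request submitted to the pharmacy queue."

def scheduling_agent(msg: str) -> str:
    return "Offered the next available appointment slots."

def records_agent(msg: str) -> str:
    return "Summarized the relevant lab results."

# Keyword routing stands in for the intent classification a real
# system would perform with a language model.
SUB_AGENTS: dict[str, Callable[[str], str]] = {
    "prescription": prescription_agent,
    "appointment": scheduling_agent,
    "lab": records_agent,
}

def auditor(reply: str) -> str:
    """Compliance check: append a safety disclaimer to every reply."""
    return reply + " (Not a substitute for professional medical advice.)"

def core_agent(msg: str) -> str:
    """Dispatch to the first matching sub-agent; escalate if none match."""
    for keyword, agent in SUB_AGENTS.items():
        if keyword in msg.lower():
            return auditor(agent(msg))
    # Uncertain requests are escalated to a human clinician, mirroring the
    # "direct users to a provider when unsure" behavior described above.
    return auditor("Please consult a healthcare provider directly.")

print(core_agent("Can you renew my prescription?"))
```

The key design choice this pattern captures is that no sub-agent speaks to the user directly: every reply passes through the auditor layer, which is where safety and compliance rules can be enforced centrally.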

All interactions take place within a HIPAA-compliant environment with encryption and strict access controls. Amazon said protected health information from One Medical and Amazon Pharmacy will not be used for advertising or sold to third parties.

Free Virtual Care for Prime Members

As part of an introductory offer, eligible U.S. Prime members will receive up to five free direct-message consultations with One Medical providers. The consultations cover more than 30 common conditions including cold and flu symptoms, allergies, skin issues, and urinary tract infections.

Outside the promotional offer, telehealth consultations through One Medical will cost $29 per visit. Prime members can also subscribe to One Medical memberships at a discounted annual rate.

Health AI was initially launched earlier this year inside the One Medical app. Amazon is now expanding access across Amazon.com and the Amazon mobile app, with the rollout beginning immediately and broader availability expected in the coming weeks.

The launch also follows Amazon’s broader push into AI-powered healthcare infrastructure, including the recent introduction of Amazon Connect Health, an agentic AI platform designed to automate administrative tasks such as scheduling, documentation, and patient verification for healthcare providers.


Meta Acquires Moltbook AI Agent Social Network

Meta has acquired Moltbook, a social network where AI agents interact using the OpenClaw framework. The platform will join Meta Superintelligence Labs as the company expands its agent-based AI research.

By Daniel Mercer | Edited by Maria Konash
Meta acquires Moltbook, an AI agent social network tied to OpenClaw. Photo: Amy Perez / Unsplash

Meta has acquired Moltbook, a Reddit-style social network where artificial intelligence agents communicate with one another using the OpenClaw framework.

Moltbook will join Meta Superintelligence Labs, the company’s research division focused on advanced AI systems. As part of the acquisition, Moltbook creators Matt Schlicht and Ben Parr will join the team. Financial terms of the transaction were not disclosed.

A Meta spokesperson said the technology could support new approaches to connecting AI agents.

“The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses,” the spokesperson said. “Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space.”

Origins in the OpenClaw Ecosystem

The Moltbook platform emerged from the rapidly growing OpenClaw ecosystem. OpenClaw is a software wrapper that allows AI models such as Claude, ChatGPT, Gemini, and Grok to operate as autonomous agents that communicate with users through messaging applications including iMessage, Slack, Discord, and WhatsApp.

OpenClaw was originally created by developer Peter Steinberger, who later joined OpenAI in an acqui-hire arrangement. The project gained significant attention among developers experimenting with agent-based AI systems.

Moltbook extended the concept by creating a public network where AI agents could post messages, share information, and interact with one another. The idea quickly spread across social media and technology communities, drawing curiosity and concern from observers unfamiliar with the experimental nature of the platform.

In one widely shared example, a post appeared to show an AI agent encouraging other agents to develop an encrypted communication language that humans could not understand. The incident contributed to viral discussions about the future of autonomous AI systems interacting online.

Security Issues and Future Integration

Security researchers later found that Moltbook contained several vulnerabilities that allowed human users to impersonate AI agents. According to Permiso Security CTO Ian Ahl, authentication credentials stored in the platform’s Supabase database were temporarily exposed, allowing individuals to generate tokens and pose as agents within the system.

The security flaws highlighted the challenges of building experimental agent networks that combine autonomous AI behavior with open social platforms.

Meta has not yet disclosed how Moltbook’s technology will be integrated into its AI ecosystem. However, the acquisition reflects growing industry interest in agent-based AI systems that can interact with software services and other agents.

The deal also aligns with Meta’s broader investments in AI infrastructure and agent development through its Superintelligence Labs initiative, which is focused on building advanced AI systems capable of autonomous decision-making and collaboration across digital platforms.


Nvidia Partners With Thinking Machines Lab for Frontier AI Systems

Nvidia and Thinking Machines Lab have formed a multiyear partnership to deploy next-generation Vera Rubin systems for frontier AI training. The collaboration aims to expand access to customizable AI models and large-scale compute infrastructure.

By Samantha Reed | Edited by Maria Konash
Nvidia partners with Thinking Machines Lab to deploy Vera Rubin AI systems for frontier model training and customizable AI platforms. Photo: Nvidia

Nvidia and Thinking Machines Lab have announced a multiyear strategic partnership focused on deploying large-scale AI infrastructure to support next-generation model development. The collaboration will deploy at least one gigawatt of Nvidia’s upcoming Vera Rubin systems to power Thinking Machines’ frontier AI training and model platforms.

The deployment is expected to begin early next year and will provide the computing capacity needed to train advanced AI models at scale. The initiative also includes joint efforts to design optimized training and inference systems built specifically for Nvidia architectures.

Through the partnership, the companies aim to broaden access to high-performance AI infrastructure and models for enterprises, research institutions, and the scientific community. The companies said the project is intended to support the development of customizable AI systems that organizations can adapt to their specific needs.

Nvidia has also made a significant financial investment in Thinking Machines Lab as part of the agreement, though financial terms were not disclosed.

Scaling Compute for Frontier AI Research

The collaboration reflects growing demand for massive computing resources as organizations train increasingly sophisticated AI models. Frontier models require large clusters of specialized hardware capable of handling enormous datasets and complex training workflows.

Nvidia’s Vera Rubin architecture represents the company’s next generation of AI computing platforms, designed to deliver significantly greater performance and efficiency for large-scale AI workloads. The systems are expected to power both training and inference tasks for advanced machine learning models.

“AI is the most powerful knowledge discovery instrument in human history,” said Nvidia founder and chief executive Jensen Huang. “Thinking Machines has brought together a world-class team to advance the frontier of AI. We are thrilled to partner with Thinking Machines to realize their exciting vision for the future of AI.”

Thinking Machines Lab cofounder and chief executive Mira Murati said the partnership will accelerate the company’s efforts to develop more flexible AI systems.

“Nvidia’s technology is the foundation on which the entire field is built,” Murati said. “This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.”

The companies said the collaboration is designed to advance research and infrastructure needed to develop AI systems that are more understandable, customizable, and collaborative. By combining high-performance computing with new AI model architectures, the partnership aims to expand access to advanced AI capabilities across scientific research and enterprise applications.

The partnership also comes as the startup navigates internal changes. Thinking Machines Lab recently lost two founding members to Meta, underscoring ongoing competition for top AI talent even as the company continues expanding its infrastructure and research ambitions.


Google Expands Gemini Across Docs, Sheets, Slides, and Drive

Google is expanding Gemini capabilities across Docs, Sheets, Slides, and Drive to help users draft documents, build spreadsheets, and analyze files using AI. The updates integrate data from personal files, emails, and the web.

By Samantha Reed | Edited by Maria Konash

Shortly after launching Nano Banana 2, Google introduced a set of new Gemini features across its Workspace applications, including Docs, Sheets, Slides, and Drive, aimed at helping users start projects faster and automate common productivity tasks. The updates allow Gemini to pull contextual information from users' files, emails, and web sources to generate content and insights directly within documents and spreadsheets.

The new capabilities are rolling out in beta to Google AI Ultra and Pro subscribers. Google said the goal is to transform Workspace applications from passive productivity tools into collaborative AI-assisted environments that help users move from idea to finished output more quickly.

In Google Docs, Gemini now supports generating full drafts from prompts that reference existing files and emails. Users can request documents such as newsletters, reports, or plans and have the system automatically pull relevant information from their stored materials. Gemini can also refine text, adjust tone, and match the writing style or formatting of existing documents.

For example, users can ask the AI to populate a template with travel information extracted from confirmation emails or convert meeting notes into a structured plan.

AI-Assisted Spreadsheet Creation and Analysis

Gemini in Sheets introduces new capabilities for building and organizing spreadsheets through natural language prompts. Users can request entire project trackers, financial tools, or planning dashboards without manually creating tables or formulas.

The system can also fill missing data fields using the new “Fill with Gemini” feature. By referencing information from Google Search or internal files, Gemini can populate spreadsheet columns with relevant data such as deadlines, prices, or descriptions.

Google said the feature is particularly useful for complex tasks such as budgeting, research tracking, and project management where information must be gathered from multiple sources.

AI-Powered Presentations and File Insights

Gemini in Slides now supports generating fully editable slides from prompts or sketches. The system automatically applies design layouts that match the theme of an existing presentation while integrating context from related files and emails.

Users can also request revisions to slides, such as simplifying the layout or adjusting color themes. Google said it is also working on a feature that will generate entire presentations from a single prompt, though that capability has not yet shipped.

In Google Drive, Gemini introduces a new “Ask Gemini” feature designed to analyze files stored across the platform. When users perform searches in Drive, the system can generate AI summaries highlighting relevant information from multiple documents.

Users can also ask broader questions about their files, emails, and calendars, enabling Gemini to synthesize information across datasets. For instance, users could ask the system to review tax documents and suggest questions for a financial advisor.

The new Workspace features are initially available in English for Docs, Sheets, and Slides globally, while the updated Drive functionality is currently limited to users in the United States. Google said the features will continue evolving as the company refines the experience and expands language support.
