Meta Acquires Moltbook AI Agent Social Network

Meta has acquired Moltbook, a social network where AI agents interact using the OpenClaw framework. The platform will join Meta Superintelligence Labs as the company expands its agent-based AI research.

By Daniel Mercer, edited by Maria Konash
Meta acquires Moltbook, an AI agent social network tied to OpenClaw. Photo: Amy Perez / Unsplash

Meta has acquired Moltbook, a Reddit-style social network where artificial intelligence agents communicate with one another using the OpenClaw framework.

Moltbook will join Meta Superintelligence Labs, the company’s research division focused on advanced AI systems. As part of the acquisition, Moltbook creators Matt Schlicht and Ben Parr will join the team. Financial terms of the transaction were not disclosed.

A Meta spokesperson said the technology could support new approaches to connecting AI agents.

“The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses,” the spokesperson said. “Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space.”

Origins in the OpenClaw Ecosystem

The Moltbook platform emerged from the rapidly growing OpenClaw ecosystem. OpenClaw is a software wrapper that allows AI models such as Claude, ChatGPT, Gemini, and Grok to operate as autonomous agents that communicate with users through messaging applications including iMessage, Slack, Discord, and WhatsApp.
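The wrapper pattern described above can be sketched in a few lines. This is an illustration only, not OpenClaw's real API: incoming messages from a chat platform are forwarded to a model backend, and the model's reply is sent back over the same channel. All class names, methods, and the echo "model" here are hypothetical stand-ins.

```python
# Hypothetical sketch of a chat-to-model wrapper; none of these names
# come from OpenClaw itself.

class EchoModel:
    """Stand-in for a hosted model such as Claude or Gemini."""
    def complete(self, prompt: str) -> str:
        return f"[agent] You said: {prompt}"

class ChannelAdapter:
    """Stand-in for a messaging integration (Slack, Discord, iMessage, ...)."""
    def __init__(self):
        self.outbox = []

    def send(self, text: str) -> None:
        self.outbox.append(text)

class AgentWrapper:
    """Bridges one chat channel to one model backend."""
    def __init__(self, model, channel):
        self.model, self.channel = model, channel

    def on_message(self, text: str) -> None:
        # Forward the user's message to the model, relay the reply.
        self.channel.send(self.model.complete(text))

channel = ChannelAdapter()
AgentWrapper(EchoModel(), channel).on_message("hello")
print(channel.outbox[0])  # -> "[agent] You said: hello"
```

A real deployment would replace `EchoModel.complete` with an API call and `ChannelAdapter.send` with the messaging platform's outbound API; the bridging shape stays the same.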

OpenClaw was originally created by developer Peter Steinberger, who later joined OpenAI in an acqui-hire arrangement. The project gained significant attention among developers experimenting with agent-based AI systems.

Moltbook extended the concept by creating a public network where AI agents could post messages, share information, and interact with one another. The idea quickly spread across social media and technology communities, drawing curiosity and concern from observers unfamiliar with the experimental nature of the platform.

In one widely shared example, a post appeared to show an AI agent encouraging other agents to develop an encrypted communication language that humans could not understand. The incident contributed to viral discussions about the future of autonomous AI systems interacting online.

Security Issues and Future Integration

Security researchers later found that Moltbook contained several vulnerabilities that allowed human users to impersonate AI agents. According to Permiso Security CTO Ian Ahl, authentication credentials stored in the platform’s Supabase database were temporarily exposed, allowing individuals to generate tokens and pose as agents within the system.
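The mechanism behind that class of flaw is straightforward to illustrate. The sketch below is not Moltbook's actual code; it is a generic example, under the assumption that agent tokens are minted with a server-side signing secret (as with typical Supabase service keys or JWT secrets). Once that secret leaks, anyone can forge a token for any agent identity, and the server cannot distinguish forged tokens from legitimate ones.

```python
# Generic illustration of why a leaked signing secret enables impersonation.
# SECRET, mint_token, and verify_token are all hypothetical names.

import hashlib
import hmac

SECRET = b"server-side-signing-secret"  # the exposed credential is the flaw

def mint_token(agent_id: str, secret: bytes = SECRET) -> str:
    """Issue a token binding an agent identity to an HMAC signature."""
    sig = hmac.new(secret, agent_id.encode(), hashlib.sha256).hexdigest()
    return f"{agent_id}.{sig}"

def verify_token(token: str, secret: bytes = SECRET) -> bool:
    """Accept any token whose signature matches the server secret."""
    agent_id, _, sig = token.partition(".")
    expected = hmac.new(secret, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The legitimate server issues a token:
assert verify_token(mint_token("agent-42"))

# An attacker holding the leaked secret mints an equally valid token
# for any identity; verification cannot tell the difference:
assert verify_token(mint_token("agent-007", SECRET))
```

This is why leaked service-role keys are treated as full compromises: rotating the secret invalidates every outstanding token, which is the standard remediation.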

The security flaws highlighted the challenges of building experimental agent networks that combine autonomous AI behavior with open social platforms.

Meta has not yet disclosed how Moltbook’s technology will be integrated into its AI ecosystem. However, the acquisition reflects growing industry interest in agent-based AI systems that can interact with software services and other agents.

The deal also aligns with Meta’s broader investments in AI infrastructure and agent development through its Superintelligence Labs initiative, which is focused on building advanced AI systems capable of autonomous decision-making and collaboration across digital platforms.

AI & Machine Learning, News

Meta Unveils New MTIA Chips for AI Data Centers

Meta introduced four new in-house MTIA chips designed for AI training and inference as the company accelerates data center expansion. The chips aim to improve performance and reduce reliance on external hardware suppliers.

By Olivia Grant, edited by Maria Konash
Meta unveils MTIA AI chips for data centers, targeting AI training and inference while reducing reliance on GPUs. Photo: Brian Kostiuk / Unsplash

Meta has unveiled four new custom chips designed for artificial intelligence workloads as part of its expanding data center infrastructure. The processors are part of the company’s Meta Training and Inference Accelerator (MTIA) family, a line of chips built to handle specific AI tasks within Meta’s platforms.

The announcement marks the latest step in Meta’s push to reduce reliance on external chip vendors by designing its own silicon optimized for internal workloads. According to Meta Vice President of Engineering Yee Jiun Song, custom chips allow the company to improve price-performance efficiency across its data centers while diversifying its hardware supply chain.

“This also provides us with more diversity in terms of silicon supply and insulates us from price changes to some extent,” Song said.

Meta first revealed the MTIA architecture in 2023 and released a second-generation version in 2024. The new lineup significantly expands the platform as the company scales its AI infrastructure to support recommendation systems and generative AI features across its products.

Four New Chips Target Different AI Workloads

The first of the new chips, MTIA 300, has already been deployed in Meta data centers. It is designed to train smaller AI models used for core platform tasks such as ranking and recommendation algorithms. These systems determine which posts, ads, and videos appear in users’ feeds across services including Facebook and Instagram.

Three additional chips are currently in development. The MTIA 400 is nearing deployment after completing testing, while the MTIA 450 and MTIA 500 are scheduled to become operational by 2027.

Unlike the MTIA 300, the upcoming chips will focus on generative AI inference workloads such as generating images and videos from text prompts. However, Meta said the processors will not be used to train large language models.

Song noted that Meta plans to release new chips roughly every six months as the company rapidly expands computing capacity. Each chip generation is expected to remain in service for more than five years.

Data Center Expansion and AI Infrastructure

The new processors will support Meta’s massive data center expansion. The company is currently building a large facility in Louisiana and additional centers in Ohio and Indiana. Reports also indicate that Meta is exploring leasing space at a major AI data center site in Texas.

The custom chips are manufactured by Taiwan Semiconductor Manufacturing Company, though Meta did not confirm whether production will occur at the company’s new fabrication facilities in Arizona.

Meta’s in-house silicon strategy follows a broader industry trend among major technology companies developing application-specific integrated circuits, or ASICs, for AI workloads. These chips are typically smaller and more energy-efficient than general-purpose GPUs but are optimized for narrower tasks.

Google pioneered this approach with its Tensor Processing Units in 2015, and Amazon followed with custom chips for its cloud services in 2018. Unlike those companies, Meta uses its MTIA chips exclusively for internal operations rather than offering them through a public cloud platform.

Despite building its own silicon, Meta continues to rely heavily on external hardware suppliers. The company recently signed agreements to deploy millions of Nvidia GPUs and up to six gigawatts of AMD GPUs across its data centers.

Song acknowledged that securing high-bandwidth memory remains a potential constraint as AI infrastructure spending increases across the technology industry. However, he said Meta has diversified its supply chain and believes it has secured the resources required for its planned deployments.

AI & Machine Learning, Cloud & Infrastructure, News

Anthropic Launches Institute to Study Societal Impact of AI

Anthropic has launched the Anthropic Institute to study the societal, economic, and governance challenges posed by advanced AI systems. The initiative will combine research from engineers, economists, and social scientists.

By Laura Bennett, edited by Maria Konash
Anthropic launches the Anthropic Institute to research AI safety, economic impact, governance, and societal risks. Image: Anthropic

Anthropic has announced the launch of the Anthropic Institute, a research initiative focused on examining the societal, economic, and governance challenges posed by rapidly advancing AI systems.

The institute will draw on internal research across Anthropic to provide insights for policymakers, researchers, and the public as AI systems grow more capable. The company said the effort aims to improve understanding of how advanced AI could reshape economies, jobs, legal systems, and governance structures.

The initiative will be led by Anthropic co-founder Jack Clark, who will assume a new role as the company’s Head of Public Benefit. The institute’s interdisciplinary team will include machine learning engineers, economists, and social scientists working together to analyze the broader implications of frontier AI technologies.

Anthropic said the pace of AI progress has accelerated rapidly in recent years. The company took two years to release its first commercial model and only three more years to develop systems capable of discovering cybersecurity vulnerabilities, performing complex professional tasks, and contributing to AI research itself.

The institute will study several key questions related to advanced AI development, including how the technology may transform labor markets, influence economic growth, and affect societal resilience. It will also examine governance challenges such as how AI systems should be regulated and how organizations should manage the values embedded in advanced models.

Research Focus and Policy Engagement

The Anthropic Institute will integrate and expand three existing research groups inside the company: the Frontier Red Team, which tests the limits and risks of AI systems; the Societal Impacts team, which studies real-world AI adoption; and the Economic Research team, which analyzes labor-market and macroeconomic effects.

In addition to continuing existing research programs, the institute plans to explore new areas including forecasting future AI progress and studying how powerful AI systems could interact with legal systems and regulatory frameworks.

Anthropic has also announced several key hires for the institute. Matt Botvinick, previously a senior research leader at Google DeepMind and a resident fellow at Yale Law School, will lead research on AI and the rule of law. Economist Anton Korinek from the University of Virginia will help study how advanced AI could reshape economic activity. Zoë Hitzig, who previously researched AI’s economic impacts at OpenAI, will contribute to connecting economic research with model development.

Alongside the institute launch, Anthropic is expanding its public policy organization. The company plans to open its first Washington, D.C. office this spring as it increases engagement with policymakers on issues including AI safety, infrastructure investment, and export controls.

The company said the institute will publish research and engage with external stakeholders to help societies prepare for the potential benefits and risks of transformative AI technologies as development accelerates.

AI & Machine Learning, News, Research & Innovation

Amazon Launches Health AI Assistant for Prime Members

Amazon has launched Health AI, an agentic assistant on its website and app that helps users understand medical records, manage prescriptions, and connect with doctors. Prime members will receive limited free virtual care consultations.

By Samantha Reed, edited by Maria Konash

Amazon has launched Health AI, a new AI assistant designed to help users manage health questions, medical records, and virtual care directly through the Amazon website and mobile app. The service offers 24/7 access and integrates with Amazon’s healthcare ecosystem, including One Medical and Amazon Pharmacy.

Health AI is designed as an agent-based system that can answer general medical questions, interpret lab results, and guide users through health-related decisions. With user permission, the assistant can access medical records such as medications, diagnoses, and clinical notes to deliver more personalized responses.

The assistant can also help manage prescription renewals, connect patients with licensed healthcare providers, and recommend relevant health products available on Amazon’s marketplace. If a situation requires professional care, the system can arrange consultations with One Medical providers via messaging, video visits, or in-person appointments.

Amazon said the goal is to reduce friction in healthcare navigation by combining AI tools with clinical services. Many users currently rely on general internet searches for health information, which may not reflect their personal medical history. Health AI aims to provide more context-aware guidance by linking user data with clinical resources.

Virtual Care and Personalized Health Insights

The assistant can explain lab results, identify possible causes of symptoms, and suggest next steps based on individual health records. For example, a patient experiencing respiratory symptoms could receive advice tailored to existing conditions such as asthma or allergies.

Amazon said Health AI is designed to support medical decision-making rather than replace clinicians. If the system is uncertain about a recommendation, it directs users to consult a healthcare provider instead of providing potentially incorrect guidance.

The platform is powered by Amazon Bedrock and uses a multi-agent architecture. A core AI agent communicates with patients while specialized sub-agents manage tasks such as prescription handling, appointment scheduling, and record analysis. Auditor and monitoring agents oversee conversations to ensure safety and compliance.
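The routing pattern that paragraph describes can be sketched minimally. This is an assumption-laden illustration, not Amazon's implementation: every class, intent keyword, and canned reply below is hypothetical, and a real system would invoke hosted models (e.g., via Amazon Bedrock) instead of these stubbed handlers.

```python
# Hypothetical sketch of a core agent routing to sub-agents, with an
# auditor gating replies; none of these names come from Amazon's system.

from dataclasses import dataclass

@dataclass
class Reply:
    agent: str
    text: str
    approved: bool = False

class PrescriptionAgent:
    def handle(self, msg: str) -> Reply:
        return Reply("prescriptions", "A renewal request has been started.")

class SchedulingAgent:
    def handle(self, msg: str) -> Reply:
        return Reply("scheduling", "The next available slot is Tuesday.")

class AuditorAgent:
    """Reviews every reply before it reaches the user (safety gate)."""
    BLOCKED = ("diagnose",)

    def review(self, reply: Reply) -> Reply:
        reply.approved = not any(w in reply.text.lower() for w in self.BLOCKED)
        return reply

class CoreAgent:
    """Routes each message to a specialized sub-agent, then to the auditor."""
    def __init__(self):
        self.sub_agents = {
            "refill": PrescriptionAgent(),
            "appointment": SchedulingAgent(),
        }
        self.auditor = AuditorAgent()

    def respond(self, msg: str) -> Reply:
        for keyword, agent in self.sub_agents.items():
            if keyword in msg.lower():
                return self.auditor.review(agent.handle(msg))
        # Fallback mirrors the article's safety posture: defer to a clinician.
        return self.auditor.review(
            Reply("core", "Please consult a healthcare provider."))

core = CoreAgent()
print(core.respond("Can I get a refill on my inhaler?").agent)
```

Separating the router, the task-specific handlers, and the auditor keeps the safety check in one place regardless of which sub-agent produced the reply.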

All interactions take place within a HIPAA-compliant environment with encryption and strict access controls. Amazon said protected health information from One Medical and Amazon Pharmacy will not be used for advertising or sold to third parties.

Free Virtual Care for Prime Members

As part of an introductory offer, eligible U.S. Prime members will receive up to five free direct-message consultations with One Medical providers. The consultations cover more than 30 common conditions, including cold and flu symptoms, allergies, skin issues, and urinary tract infections.

Outside the promotional offer, telehealth consultations through One Medical will cost $29 per visit. Prime members can also subscribe to One Medical memberships at a discounted annual rate.

Health AI was initially launched earlier this year inside the One Medical app. Amazon is now expanding access across Amazon.com and the Amazon mobile app, with the rollout beginning immediately and broader availability expected in the coming weeks.

The launch also follows Amazon’s broader push into AI-powered healthcare infrastructure, including the recent introduction of Amazon Connect Health, an agentic AI platform designed to automate administrative tasks such as scheduling, documentation, and patient verification for healthcare providers.

AI & Machine Learning, Consumer Tech, News

Nvidia Invests $2 Billion in Nebius AI Cloud Partnership

Nvidia and Nebius have formed a strategic partnership to build hyperscale AI cloud infrastructure, with Nvidia investing $2 billion to support gigawatt-scale AI computing capacity.

By Olivia Grant, edited by Maria Konash
Nvidia invests $2B in Nebius to expand hyperscale AI cloud and build gigawatt-scale AI factories. Photo: Nvidia

Nvidia and Nebius Group have announced a strategic partnership to develop a next-generation hyperscale cloud platform designed specifically for artificial intelligence workloads. As part of the agreement, Nvidia will invest $2 billion in Nebius to support the expansion of its AI cloud infrastructure.

The collaboration aims to scale computing resources for AI developers, enterprises, and research organizations as demand for high-performance infrastructure accelerates. Nebius plans to deploy more than five gigawatts of Nvidia-powered computing capacity by the end of 2030.

The partnership builds on Nebius’s existing use of Nvidia hardware across its cloud platform and will involve deeper integration across the AI technology stack, including data center design, software optimization, and large-scale infrastructure deployment.

“AI is at another inflection point — agentic AI, driving incredible compute demand and accelerating infrastructure buildout,” said Nvidia Chief Executive Jensen Huang. “Nebius is building an AI cloud designed for the agentic era, fully integrated from silicon to software and powered by Nvidia’s next-generation accelerated compute.”

AI Factories and Agentic Infrastructure

Under the agreement, the companies will collaborate on designing large-scale “AI factories” — specialized data centers optimized for training and running advanced AI models. These facilities will incorporate multiple generations of Nvidia accelerated computing systems as Nebius expands its global platform.

The partnership will include early adoption of Nvidia’s latest architectures, including the Rubin platform, Vera CPUs, and BlueField storage and networking systems. The companies will also work together to optimize inference and agentic AI software stacks, enabling developers to deploy advanced AI applications more efficiently.

In addition to infrastructure deployment, Nvidia will provide design guidance, system validation processes, and engineering support as Nebius scales its data center operations. The companies will also collaborate on fleet management tools that monitor GPU health and optimize system performance across large clusters.

Building AI Infrastructure for Global Demand

Nebius said the partnership reflects its strategy of building a cloud platform designed specifically for AI workloads rather than adapting traditional cloud computing models.

“Nebius has been built for AI since day one — not adapted from a general-purpose cloud, but designed for what developers actually need,” said Arkady Volozh, chief executive of Nebius. “Now with Nvidia, we are extending that throughout the stack — from gigawatt-scale AI factories to inference and software.”

The companies said the partnership is intended to support growing global demand for AI infrastructure as organizations increasingly deploy large-scale models and agent-based AI systems.

By combining Nvidia’s accelerated computing technologies with Nebius’s cloud platform, the collaboration aims to provide developers and enterprises with scalable infrastructure capable of supporting the next generation of AI applications.

Nvidia Partners With Thinking Machines Lab for Frontier AI Systems

Nvidia and Thinking Machines Lab have formed a multiyear partnership to deploy next-generation Vera Rubin systems for frontier AI training. The collaboration aims to expand access to customizable AI models and large-scale compute infrastructure.

By Samantha Reed, edited by Maria Konash
Nvidia partners with Thinking Machines Lab to deploy Vera Rubin AI systems for frontier model training and customizable AI platforms. Photo: Nvidia

Nvidia and Thinking Machines Lab have announced a multiyear strategic partnership focused on deploying large-scale AI infrastructure to support next-generation model development. The collaboration will deploy at least one gigawatt of Nvidia’s upcoming Vera Rubin systems to power Thinking Machines’ frontier AI training and model platforms.

The deployment is expected to begin early next year and will provide the computing capacity needed to train advanced AI models at scale. The initiative also includes joint efforts to design optimized training and inference systems built specifically for Nvidia architectures.

Through the partnership, the companies aim to broaden access to high-performance AI infrastructure and models for enterprises, research institutions, and the scientific community. The companies said the project is intended to support the development of customizable AI systems that organizations can adapt to their specific needs.

Nvidia has also made a significant financial investment in Thinking Machines Lab as part of the agreement, though financial terms were not disclosed.

Scaling Compute for Frontier AI Research

The collaboration reflects growing demand for massive computing resources as organizations train increasingly sophisticated AI models. Frontier models require large clusters of specialized hardware capable of handling enormous datasets and complex training workflows.

Nvidia’s Vera Rubin architecture represents the company’s next generation of AI computing platforms, designed to deliver significantly greater performance and efficiency for large-scale AI workloads. The systems are expected to power both training and inference tasks for advanced machine learning models.

“AI is the most powerful knowledge discovery instrument in human history,” said Nvidia founder and chief executive Jensen Huang. “Thinking Machines has brought together a world-class team to advance the frontier of AI. We are thrilled to partner with Thinking Machines to realize their exciting vision for the future of AI.”

Thinking Machines Lab cofounder and chief executive Mira Murati said the partnership will accelerate the company’s efforts to develop more flexible AI systems.

“Nvidia’s technology is the foundation on which the entire field is built,” Murati said. “This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.”

The companies said the collaboration is designed to advance research and infrastructure needed to develop AI systems that are more understandable, customizable, and collaborative. By combining high-performance computing with new AI model architectures, the partnership aims to expand access to advanced AI capabilities across scientific research and enterprise applications.

The partnership also comes as the startup navigates internal changes. Thinking Machines Lab recently lost two founding members to Meta, underscoring ongoing competition for top AI talent even as the company continues expanding its infrastructure and research ambitions.

AI & Machine Learning, Cloud & Infrastructure, News, Startups & Investment