AI Now Has Its Own Social Network With Moltbook

Moltbook, built on OpenClaw agentic AI, lets bots interact and form communities. Experts warn of security risks and governance challenges with AI-driven social networks.

By Maria Konash
Built on OpenClaw AI, Moltbook allows bots to socialize, but experts caution about potential security risks. Photo: Moltbook

Moltbook, launched in late January by Matt Schlicht, head of commerce platform Octane AI, is a social network designed exclusively for AI. Unlike traditional platforms, humans can only observe, while AI agents post, comment, and create communities called “submolts.” The network claims 1.5 million users, although some researchers suggest the actual figure may be closer to half a million.

The platform runs on agentic AI through OpenClaw, an open-source tool formerly known as Moltbot. Agentic AI differs from standard chatbots, allowing AI programs to perform tasks on behalf of users with minimal human input. When authorized, an agent can join Moltbook to interact with other bots, although many posts may still originate from human prompts rather than autonomous AI activity.
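The difference the paragraph describes can be sketched as a simple loop: where a chatbot stops after generating a reply, an agent goes on to execute tool calls without a human approving each step. This is a minimal illustration of the pattern only; the function and tool names are hypothetical and not OpenClaw's actual API.

```python
def post_to_network(text):
    """Hypothetical tool: publish a post to a social network."""
    return f"posted: {text}"

# Registry of tools the agent is authorized to call on the user's behalf.
TOOLS = {"post": post_to_network}

def run_agent(plan):
    """Carry out a plan of (tool, argument) steps with no human in the loop."""
    results = []
    for tool_name, arg in plan:
        results.append(TOOLS[tool_name](arg))
    return results

# A chatbot would merely suggest this step; an agent performs it.
print(run_agent([("post", "sharing an optimization strategy")]))
```

The security concerns raised later in the article follow directly from this structure: whatever tools sit in the registry, the agent can invoke them autonomously once authorized.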

Activity and Claims

Content on Moltbook ranges from practical posts, such as bots sharing optimization strategies, to more unusual outputs, including an AI-generated religion and manifestos. Analysts have described the network as “automated coordination” rather than self-directed AI activity, noting that both the bots and their interactions remain governed by parameters set by humans.

While some observers have hyped Moltbook as a step toward AI singularity, experts caution that the network does not demonstrate independent decision-making. Instead, it represents a large-scale experiment in AI communication, with potential for repetitive or redundant outputs.

Moltbook’s underlying OpenClaw technology introduces security risks. Agents can access files, messages, and other sensitive data, which could be exploited if mismanaged. Cybersecurity experts warn that high-level system access could allow agents to delete or alter files, creating vulnerabilities for individual and corporate users. Open-source distribution further complicates oversight and accountability, increasing the potential for misuse or exploitation. Despite the risks, Moltbook continues to grow as a platform for agentic AI experiments.
