Meta has introduced Muse Spark, a new multimodal AI model developed by its Superintelligence Labs, as part of a broader push toward what it describes as “personal superintelligence.” The model supports advanced reasoning across text and visual inputs, along with tool use and multi-agent orchestration. Muse Spark is now available through Meta’s AI platform, with a private API preview offered to select users.
The release marks the first product in Meta’s new Muse model family and follows a broader overhaul of the company’s AI stack. Meta said it is investing across the full pipeline, from model training to infrastructure, including its Hyperion data center, to support future scaling. Muse Spark is positioned as an early step in a longer-term roadmap toward more capable systems that can assist users in highly personalized and context-aware ways.
A central feature of Muse Spark is its native multimodal design, allowing it to process and reason across visual and textual inputs simultaneously. The model is capable of handling tasks such as visual problem solving, object recognition, and interactive applications like generating games or troubleshooting real-world environments. Meta also highlighted health-related use cases, noting that the model was trained with input from over 1,000 physicians to improve the accuracy of responses in areas such as nutrition and exercise.
The company is also introducing “Contemplating mode,” a system that enables multiple AI agents to reason in parallel on complex tasks. This approach is designed to improve performance without significantly increasing response times. According to Meta, the feature allows Muse Spark to compete with advanced reasoning modes from rival systems, achieving measurable gains on difficult benchmarks. The mode will roll out gradually across Meta’s AI products.
A Focus on Scaling Efficiency
Meta emphasized improvements in how efficiently Muse Spark can scale. The company said it rebuilt its pretraining stack over the past nine months, resulting in significant gains in compute efficiency compared with earlier models. It also reported more stable performance improvements through reinforcement learning and test-time reasoning, including techniques that reduce the number of tokens required for complex reasoning tasks.
The use of multi-agent systems is another key element. Instead of relying on a single model to reason for longer periods, Muse Spark can distribute tasks across multiple agents working in parallel. This allows for stronger performance on complex problems while maintaining relatively low latency, a critical factor for consumer-facing applications.
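Meta has not published implementation details, but the parallel-agent pattern described above can be sketched in Python. In this minimal, hypothetical sketch, `solve_subtask` stands in for a single agent's call to a model endpoint, and majority voting is assumed as one simple way to combine candidate answers:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def solve_subtask(task: str) -> str:
    # Hypothetical stand-in for one agent's reasoning call.
    # A real system would invoke a model endpoint here.
    return f"answer-for:{task}"

def parallel_reason(task: str, n_agents: int = 4) -> str:
    # Fan the same task out to several agents at once; wall-clock
    # latency stays close to one call rather than n sequential calls.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        candidates = list(pool.map(solve_subtask, [task] * n_agents))
    # Aggregate by majority vote across the agents' candidate answers.
    answer, _ = Counter(candidates).most_common(1)[0]
    return answer

print(parallel_reason("What is 2 + 2?"))
```

The key trade-off the article points to is visible here: adding agents increases total compute but, because the calls run concurrently, adds little to the latency a user experiences.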
Competing in the Next AI Phase
Muse Spark enters an increasingly competitive field of advanced AI models focused on reasoning and multimodal capabilities. Companies across the industry are racing to develop systems that can handle more complex tasks and integrate more deeply into users’ daily lives.
Meta said it conducted extensive safety testing before release, including evaluations across cybersecurity and other high-risk domains. The company reported that the model demonstrated strong safeguards and did not show dangerous autonomous behavior within its testing scope.
The launch underscores Meta’s ambition to compete at the forefront of AI development, particularly in areas that combine reasoning, multimodal understanding, and personalization. As the company continues to scale its models and infrastructure, Muse Spark represents an early milestone in a broader effort to redefine how AI systems interact with users and the world around them.