Meta Shuts Down Horizon Worlds VR Platform

Meta will shut down the VR version of Horizon Worlds and transition the platform to mobile-only. The move reflects a broader shift away from metaverse investments toward AI.

By Samantha Reed. Edited by Maria Konash.

Meta shuts down Horizon Worlds on VR, pivots to mobile and doubles down on AI.

Meta is shutting down the virtual reality version of Horizon Worlds, marking a significant shift in its metaverse strategy as the company reallocates resources toward artificial intelligence.

The company said the Horizon Worlds app will be removed from the Quest VR store by the end of March and fully discontinued on VR devices by June 15. After that, the platform will continue to operate only as a mobile application.

Meta described the move as a strategic separation of platforms, allowing each to “grow with greater focus.” The decision effectively ends Horizon Worlds’ role as a flagship VR social experience, a position it held since Meta’s high-profile pivot to the metaverse in 2021.

Retreat From VR-Centric Metaverse Vision

Horizon Worlds was launched in late 2021 as a social platform where users could interact through avatars in virtual environments. It was designed to showcase the potential of immersive digital spaces powered by Meta’s Quest headsets.

However, adoption remained limited. The platform struggled to gain traction beyond a relatively small user base, with monthly active users reportedly staying in the hundreds of thousands. Broader consumer skepticism toward virtual reality, along with hardware limitations, slowed mainstream uptake.

Meta attempted to expand accessibility by launching a mobile version of Horizon Worlds in 2023. The app allowed users without VR headsets to participate in virtual environments, similar to platforms like Roblox. Despite this, engagement levels did not reach expectations.

The shutdown follows a series of cost-cutting measures within Reality Labs, Meta’s division responsible for virtual reality and metaverse development. The company recently laid off more than 1,000 employees in the unit, including teams working on first-party VR content.

Reality Labs has been a major financial burden for Meta, reporting multi-billion-dollar operating losses each quarter. In its most recent earnings report, the division posted a loss exceeding $6 billion for a single quarter.

Shift Toward AI and Platform Realignment

The decision to scale back Horizon Worlds underscores Meta’s broader pivot toward artificial intelligence, which has become a central focus across the technology sector.

While Meta is not abandoning VR entirely, it is restructuring its approach. The company said it will continue investing in the VR developer ecosystem while repositioning Horizon Worlds as a mobile-first experience.

Executives indicated that separating VR and mobile platforms will allow for more targeted development strategies. The mobile version is expected to serve as a more accessible entry point for users, potentially reaching a wider audience without requiring specialized hardware.

Meta’s evolving strategy reflects changing priorities within the company and across the industry. As generative AI gains momentum and attracts investment, companies are reassessing earlier bets on immersive virtual environments.

The shutdown of Horizon Worlds on VR highlights the challenges of building large-scale consumer platforms in emerging technologies. It also signals a shift in how Meta plans to balance its long-term ambitions between virtual reality and artificial intelligence.


OpenAI to Acquire Astral to Expand Codex

OpenAI plans to acquire developer tools startup Astral to strengthen its Codex platform. The move aims to integrate AI deeper into the full software development lifecycle.

By Daniel Mercer. Edited by Maria Konash.

OpenAI to acquire Astral, boosting Codex with Python tools for full-lifecycle development. Image: Mariia Shalabaieva / Unsplash

OpenAI announced plans to acquire Astral, a startup known for its widely used open-source Python developer tools, in a move aimed at expanding the capabilities of its Codex platform.

The acquisition, which remains subject to regulatory approval, will bring Astral’s tooling and engineering team into OpenAI’s ecosystem. The companies said Astral’s products will continue to be supported as open-source projects after the deal closes.

Astral has built a suite of tools that are widely adopted across the Python ecosystem. These include uv for dependency and environment management, Ruff for linting and formatting, and ty for static type checking. Together, these tools are used by millions of developers to streamline workflows, improve code quality, and reduce errors.
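To illustrate how these tools typically sit together in a project, here is a minimal pyproject.toml sketch. The project name and dependency under [project] are placeholders; the [tool.ruff] keys shown are Ruff's standard pyproject.toml configuration, and the commands in the comments reflect each tool's documented command-line usage:

```toml
[project]
name = "example-service"        # placeholder project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["httpx"]        # managed by uv (e.g. `uv add httpx`, then `uv sync`)

[tool.ruff]
line-length = 100               # Ruff reads its settings from pyproject.toml

[tool.ruff.lint]
select = ["E", "F", "I"]        # pycodestyle, Pyflakes, and import-sorting rules

# Typical workflow commands:
#   uv sync         -> resolve dependencies and install a locked environment
#   ruff check .    -> lint the project; `ruff format .` applies formatting
#   ty check        -> run Astral's static type checker
```

Because all three tools read from the same pyproject.toml, an AI agent that understands this one file can, in principle, resolve dependencies, lint, format, and type-check a codebase without any bespoke integration.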

OpenAI said integrating these tools into Codex will allow its AI systems to interact more directly with real-world development environments. The goal is to move beyond code generation and toward AI systems that can participate across the entire software lifecycle.

Expanding AI in Developer Workflows

Codex, OpenAI’s AI system for programming tasks, has seen rapid growth in recent months. The company reported a threefold increase in users and a fivefold rise in usage since the start of the year, with more than two million weekly active users.

The platform is being positioned as a broader development agent rather than a standalone coding assistant. OpenAI aims to enable Codex to plan code changes, modify existing codebases, run development tools, test outputs, and maintain software over time.

Astral’s tools are seen as a key component in that vision because they are already embedded in developer workflows. By integrating them, Codex could gain the ability to operate directly within established development pipelines rather than functioning as a separate interface.

This approach reflects a wider trend in AI development, where systems are increasingly designed to act as collaborators that can execute tasks across multiple stages of production.

Strengthening the Python Ecosystem

Python remains one of the most widely used programming languages, particularly in artificial intelligence, data science, and backend systems. OpenAI said the acquisition will reinforce its commitment to supporting the language and its developer community.

Astral’s tools play a central role in maintaining code quality and project consistency in Python environments. Their integration with AI systems could enable more automated and reliable development processes, especially in large or complex codebases.

The companies indicated that future updates will focus on deeper integration between Codex and Astral’s tooling. This could allow AI agents to manage dependencies, enforce coding standards, and validate outputs as part of a continuous workflow.

For now, OpenAI and Astral will continue operating independently until the transaction is finalized. After closing, Astral’s team is expected to join OpenAI’s Codex group, where it will contribute to building more advanced AI-driven development systems.

The acquisition highlights OpenAI’s broader strategy to embed AI more deeply into professional tools, positioning its models not just as assistants, but as active participants in software creation and maintenance.

Pentagon Faces Resistance Over Anthropic AI Ban

Pentagon staff and contractors are resisting orders to phase out Anthropic’s AI tools, citing performance concerns and operational disruption. The ban highlights tensions between policy decisions and AI adoption.

By Samantha Reed. Edited by Maria Konash.

Pentagon users push back on Anthropic AI ban, citing disruption and better performance. Image: Joel Rivera-Camacho / Unsplash

The U.S. Department of Defense is facing internal resistance as it moves to phase out artificial intelligence tools developed by Anthropic, following a decision to classify the company as a supply-chain risk.

Defense Secretary Pete Hegseth issued the designation on March 3 after a dispute with Anthropic over usage guardrails for its AI systems. The order bars the Pentagon and its contractors from using Anthropic’s technology, including its widely adopted Claude model, with a six-month transition period.

However, military personnel, IT staff, and contractors say the directive is proving difficult to implement. Many users have grown reliant on Anthropic’s tools and view them as more effective than competing systems. Some are delaying compliance, while others expect the ban may eventually be reversed.

Operational Dependence on AI Tools

Anthropic’s AI systems have become embedded in military workflows, supporting tasks ranging from data analysis to operational planning. Claude was the first AI model approved for use on classified Pentagon networks, and adoption expanded rapidly following a $200 million defense contract awarded in 2025.

Users say the tools significantly improved efficiency, particularly in handling large datasets and automating repetitive processes. In some cases, developers relied on Anthropic’s Claude Code system to generate software and build automated workflows.

With the phase-out underway, some of these processes are reverting to manual methods. One official said tasks previously handled by AI, such as querying large datasets, are now being performed using traditional tools like spreadsheets, resulting in slower workflows and reduced productivity.

Replacing Anthropic’s systems is also technically complex. Contractors note that recertifying alternative AI models for use on classified networks could take between 12 and 18 months. This process includes rigorous security and compliance checks, making a rapid transition unlikely.

Cost, Complexity, and Strategic Uncertainty

The removal of Anthropic’s tools is expected to carry both financial and operational costs. Systems built around Claude may require partial redesign, particularly in cases where workflows and prompts were tailored to its architecture.

For example, software platforms used for intelligence analysis and targeting operations rely on AI-driven workflows that would need to be rebuilt using alternative models. Contractors say this process could delay projects and reduce efficiency in the short term.

At the same time, Pentagon officials and contractors are weighing whether to fully transition to other providers, such as OpenAI, Google, or xAI, or to adopt a more gradual approach. Some agencies are reportedly slowing their phase-out efforts in anticipation of a potential resolution between the government and Anthropic.

The situation highlights a broader challenge in AI adoption within government systems: balancing security concerns with operational effectiveness. As AI tools become more deeply integrated into critical workflows, replacing them can create significant disruption.

The Pentagon’s experience underscores how quickly AI technologies have moved from experimental tools to essential infrastructure. It also illustrates the growing tension between policy decisions and the practical realities of deploying advanced AI systems at scale.

Google Expands Stitch Into AI Design Platform

Google has upgraded Stitch into a full AI-native design platform that converts natural language into interactive UI prototypes. The update introduces a design agent, voice input, and a new design system format.

By Samantha Reed. Edited by Maria Konash.

Google upgrades Stitch with voice, agents, and DESIGN.md; Figma shares fall. Image: Google

Google has expanded its experimental design tool Stitch into a full AI-native platform, signaling a deeper push into software creation workflows powered by generative AI.

The updated version introduces a redesigned interface centered around an “infinite canvas,” where users can generate, edit, and iterate on user interface designs using natural language, images, or code. The system is designed to move beyond traditional wireframing by allowing users to describe intent, such as business goals or user experience, and automatically generate high-fidelity designs.

The platform also adds a dedicated design agent capable of reasoning across an entire project. This agent can track iterations, suggest improvements, and generate new design directions based on prior work. A companion feature, called Agent Manager, enables users to explore multiple design paths simultaneously while maintaining organization across versions.

AI-Native Workflow and Rapid Prototyping

Stitch’s update reflects a broader shift toward AI-assisted development tools that compress the time between idea and execution. The platform can instantly convert static designs into interactive prototypes, allowing users to simulate user flows and test functionality without manual coding.

Users can generate entire application flows in seconds, with the system automatically creating follow-up screens based on interactions. This enables rapid iteration and continuous refinement, which are critical in early-stage product design.

The addition of voice input further expands accessibility. Users can speak commands directly to the system to modify layouts, generate alternatives, or request design critiques in real time. This approach positions AI as an active collaborator rather than a passive tool.

A key component of the update is a new format called DESIGN.md, a structured file that defines design rules and systems. It allows users to import design frameworks from external sources or reuse them across projects, reducing duplication and standardizing workflows. The format is also designed to integrate with other development tools, enabling smoother transitions from design to production.
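Google has not published the DESIGN.md schema in detail, so the fragment below is purely hypothetical: a sketch of the kind of reusable design rules such a structured file might encode. Every value shown is invented for illustration:

```markdown
# Design System (hypothetical DESIGN.md sketch)

## Colors
- primary: #1A73E8
- surface: #FFFFFF

## Typography
- headings: 600 weight, tight line height
- body: 16px base size

## Components
- Buttons use an 8px corner radius and 44px minimum touch targets.
- Forms follow a single-column layout with inline validation.
```

The appeal of a plain-text format like this is that it can be versioned, diffed, and shared between projects the same way code is, which is what makes it suitable as a handoff point between design tools and development pipelines.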

Competitive Pressure on Design Tools

The release comes as competition intensifies in the design software market, where AI capabilities are becoming a central differentiator. Stitch’s ability to combine design generation, prototyping, and workflow integration positions it as a potential alternative to established tools.

Following the announcement, shares of Figma declined by approximately 8%, reflecting investor concerns about increased competition. Figma has long been a dominant browser-based interface design platform used widely by designers and developers.

The development also follows Adobe’s attempted $20 billion acquisition of Figma in 2022, which was ultimately blocked by regulators over antitrust concerns. The decision preserved Figma’s independence but left the company facing growing competition from AI-native entrants.

Google’s expansion of Stitch highlights a broader industry trend toward integrating AI directly into creative and development environments. By enabling users to generate functional software from high-level descriptions, these tools aim to reduce reliance on traditional design processes.

As AI continues to reshape software development, platforms like Stitch are positioning themselves at the intersection of design, engineering, and automation, where the boundaries between these roles are becoming increasingly blurred.


AI Data Center Boom Drives Skilled Labor Shortage

Surging investment in AI data centers is fueling demand for skilled trade workers, creating labor shortages and rising wages. The trend highlights the physical infrastructure behind AI growth.

By Olivia Grant. Edited by Maria Konash.

AI data centers fuel demand for skilled workers, pushing wages up as labor shortages strain growth. Image: Scott Blake / Unsplash

The rapid expansion of artificial intelligence infrastructure is creating a surge in demand for skilled labor, as technology companies invest heavily in building data centers that power next-generation AI systems.

Major technology firms including Alphabet, Microsoft, Meta, and Amazon are collectively committing nearly $700 billion in capital expenditures this year to expand data center capacity. These facilities are essential for training and operating large AI models, which require significant computing power and energy resources.

Amazon recently announced a $12 billion investment to build a new AI data center in Louisiana, expected to create hundreds of permanent jobs and thousands of additional roles in construction and technical services. Meta has also committed substantial funding, including a $27 billion joint venture with Blue Owl Capital to develop a large-scale data center in the same state.

Skilled Trades in High Demand

While much of the public discussion around AI has focused on its potential to disrupt white-collar employment, the buildout of physical infrastructure is driving demand for skilled trade workers. Roles such as electricians, HVAC engineers, robotic technicians, and industrial automation specialists are seeing sharp increases in job postings.

A global analysis by recruitment firm Randstad found that demand for robotic technicians rose by over 100% between 2022 and 2026. HVAC engineering roles increased by 67%, while industrial automation positions grew by more than 50%. Traditional construction and electrical roles also saw steady growth.

These positions are critical to constructing and maintaining data centers, which require advanced cooling systems, power distribution networks, and frequent upgrades to mechanical and electrical infrastructure. Facilities must often be retrofitted every four to six years to keep pace with evolving hardware requirements.

Industry leaders describe these roles as part of a growing category of “new-collar” jobs, blending technical expertise with hands-on work. Workers in these fields are increasingly collaborating directly with software engineers and data specialists inside data centers.

Rising Wages and Talent Constraints

The growing demand for skilled labor is driving up wages. According to Randstad, salaries for HVAC engineers have increased by 10% to 15% over the past four years. In some cases, workers transitioning into specialized data center roles are seeing pay increases of up to 30%.

Nvidia CEO Jensen Huang has also indicated that six-figure salaries are becoming more common for workers involved in building AI infrastructure. This reflects a broader trend in which labor shortages are creating a premium for technical trade skills.

The shortage is expected to intensify. Industry estimates suggest the United States could face a deficit of nearly 2 million manufacturing workers by 2033. Construction groups also project the need for hundreds of thousands of additional workers in the coming years to meet infrastructure demand.

Several factors are contributing to the gap, including an aging workforce and limited geographic mobility. Unlike software roles, many of these jobs require on-site presence, making it difficult to quickly scale labor in regions where new data centers are being built.

Companies and governments are beginning to respond with training programs, apprenticeships, and partnerships with educational institutions. Investment firms have also launched initiatives to support workforce development, recognizing that capital alone is insufficient to meet infrastructure needs.

As AI adoption accelerates, the ability to build and maintain data centers is emerging as a critical bottleneck. The sector’s growth is increasingly tied not just to advances in software and chips, but to the availability of skilled workers capable of supporting the physical backbone of the AI economy.


Meta Launches Manus Desktop AI Agent App

Meta has introduced a desktop version of Manus, enabling its AI agent to operate directly on users’ devices. The move intensifies competition in the fast-growing AI agent market.

By Daniel Mercer. Edited by Maria Konash.

Meta launches Manus Desktop, bringing its AI agent to local devices amid rising competition. Image: Manus

Meta has rolled out a desktop application for its recently acquired AI startup Manus, expanding the reach of its autonomous agent technology beyond the cloud and onto users’ personal computers.

The new Manus Desktop app introduces a feature called “My Computer,” which allows the AI agent to interact directly with local files, applications, and system tools. Previously, Manus operated primarily through a web-based interface, where its general-purpose agent executed multi-step tasks remotely.

With the desktop release, Meta is positioning Manus as a more integrated productivity tool, capable of performing actions directly on a user’s machine. According to the company, the agent can read, organize, and edit files, as well as launch and control applications. It can also assist with software development tasks, including generating simple applications within minutes.

Expanding Competition in AI Agents

The launch comes as competition intensifies in the emerging AI agent category, where systems are designed to complete complex workflows with minimal human input. Meta’s move brings Manus closer in functionality to OpenClaw, an open-source AI agent that runs locally on users’ devices.

OpenClaw, created by Austrian developer Peter Steinberger, has gained traction among developers and technology enthusiasts since its release last year. Its open-source model and local deployment have contributed to growing interest in decentralized AI tools. Nvidia CEO Jensen Huang recently described OpenClaw as the “next ChatGPT,” highlighting its perceived potential in the space.

Unlike OpenClaw, which is distributed freely under an MIT license, Manus operates primarily as a subscription-based service. However, both platforms reflect a broader shift toward giving AI systems more direct access to user environments.

Manus also retains its existing integrations with services such as Google Calendar and Gmail, allowing it to coordinate tasks across both local and cloud-based platforms.

Security and Regulatory Considerations

The expansion of AI agents onto personal devices has raised concerns among experts about security and privacy. Granting software autonomous access to local files and applications introduces potential risks, particularly if safeguards are insufficient.

Meta said the Manus Desktop app includes user control mechanisms to address these concerns. Actions performed by the agent require explicit approval, with options such as “Allow Once” for individual tasks or “Always Allow” for repeated operations. These controls are intended to ensure that users maintain oversight of the agent’s behavior.
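The approval pattern described above can be sketched generically. The class and function names below are illustrative, not Manus's actual API; the sketch shows how an "Allow Once" / "Always Allow" gate might mediate an agent's actions:

```python
from enum import Enum


class Decision(Enum):
    DENY = "deny"
    ALLOW_ONCE = "allow_once"      # approve this single action only
    ALWAYS_ALLOW = "always_allow"  # remember approval for this action type


class ApprovalGate:
    """Mediates agent actions behind explicit user approval (illustrative sketch)."""

    def __init__(self, prompt_user):
        self._prompt_user = prompt_user          # callback returning a Decision
        self._always_allowed: set[str] = set()   # action types granted "Always Allow"

    def authorize(self, action_type: str) -> bool:
        if action_type in self._always_allowed:
            return True                          # previously granted, no prompt needed
        decision = self._prompt_user(action_type)
        if decision is Decision.ALWAYS_ALLOW:
            self._always_allowed.add(action_type)
            return True
        return decision is Decision.ALLOW_ONCE


# Example: simulate a user who always-allows file reads but denies deletes.
def fake_prompt(action_type: str) -> Decision:
    return Decision.ALWAYS_ALLOW if action_type == "read_file" else Decision.DENY


gate = ApprovalGate(fake_prompt)
print(gate.authorize("read_file"))    # approved and remembered
print(gate.authorize("read_file"))    # approved without prompting again
print(gate.authorize("delete_file"))  # denied
```

The design choice worth noting is that "Always Allow" is scoped per action type rather than granted globally, which is what lets a user keep oversight of risky operations while avoiding repeated prompts for routine ones.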

Meta acquired Manus in late December 2025 as part of a broader strategy to strengthen its artificial intelligence capabilities. The company has been working to integrate Manus’s agent technology into its ecosystem, including its Meta AI assistant.

The acquisition, reportedly valued at around $2 billion, has drawn scrutiny from Chinese regulators. Manus was originally founded in China before relocating its headquarters to Singapore, and authorities are reviewing the deal for potential violations of technology transfer rules.

Meta has stated that the transaction complied with applicable laws and expressed confidence that the review will be resolved. The company added that the Manus team is now fully integrated and continues to develop and expand the service.

The desktop launch marks a significant step in Meta’s effort to compete in the next phase of AI development, where autonomous agents are expected to play a central role in how users interact with software and digital systems.
