Amazon to Cut 16,000 Corporate Jobs Amid AI Investments

Amazon announced plans to cut roughly 16,000 corporate roles, the second major reduction since October, as it invests heavily in artificial intelligence and organizational efficiency.

By Samantha Reed | Edited by Maria Konash
Amazon to cut 16,000 corporate positions, prioritizing AI and operational efficiency, but continues targeted hiring. Photo: BoliviaInteligente / Unsplash

Amazon on Wednesday said it will reduce its corporate workforce by approximately 16,000 jobs, marking the company’s second major round of layoffs since October. The cuts are part of a broader effort to streamline operations, reduce management layers, and remove bureaucracy while accelerating investments in artificial intelligence.

The company’s senior vice president of people experience and technology, Beth Galetti, said in a blog post that the layoffs aim to strengthen ownership, speed, and capacity across teams. Affected U.S. employees will generally have 90 days to apply for other internal positions; those unable or unwilling to transition will receive severance, outplacement support, and applicable benefits.

“This is not the start of a new rhythm of layoffs,” Galetti said, adding that every team will continue to evaluate its structure and adjust as needed.

Continued Workforce Adjustments

The new reduction follows 14,000 corporate layoffs in October and comes as Amazon seeks additional efficiency gains across its roughly 350,000 corporate and tech employees. Combined with prior cuts, the company has eliminated about 30,000 corporate roles since last year, roughly 10% of its corporate and tech workforce. Overall, Amazon employs about 1.58 million people, the majority in warehouses and logistics operations.

CEO Andy Jassy has emphasized transforming Amazon’s corporate culture to operate like a startup, reducing bureaucracy, and accelerating decision-making. This includes internal initiatives such as a “no bureaucracy email alias” to identify inefficiencies and cut management layers.

Amazon has also been cutting costs across its business to increase AI investments and expand data center infrastructure. The company recently closed its Fresh and Go grocery chains after years of experimentation. Capital expenditures are forecast to reach $125 billion in 2026, the highest among major U.S. technology companies.

Jassy previously indicated that efficiency gains from AI would likely reduce the need for some corporate roles while creating demand for new skill sets. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” he said last June.

Strategic Focus

Despite workforce reductions, Amazon said it continues to hire in strategic areas critical to long-term growth, including AI, cloud computing, and other high-priority initiatives. Galetti highlighted that many teams are still in early stages of building their businesses, presenting significant opportunities for the company and its employees.

Amazon’s latest cuts also reflect a wider trend in the industry as companies reshape their workforces around artificial intelligence. Pinterest recently announced plans to cut around 780 jobs, or roughly 15% of its staff, to focus on AI-driven products and strategy. In Europe, banks are planning deep workforce reductions, with more than 200,000 jobs potentially eliminated by 2030 as AI transforms operations and accelerates branch closures, particularly in back-office, risk, and compliance roles.

The company’s approach reflects a dual strategy of trimming operational complexity while investing in technology-driven growth to remain competitive in e-commerce, cloud services, and AI innovation.

AI & Machine Learning, Enterprise Tech, News

AI Data Center Boom Drives Skilled Labor Shortage

Surging investment in AI data centers is fueling demand for skilled trade workers, creating labor shortages and rising wages. The trend highlights the physical infrastructure behind AI growth.

By Olivia Grant | Edited by Maria Konash
AI data centers fuel demand for skilled workers, pushing wages up as labor shortages strain growth. Image: Scott Blake / Unsplash

The rapid expansion of artificial intelligence infrastructure is creating a surge in demand for skilled labor, as technology companies invest heavily in building data centers that power next-generation AI systems.

Major technology firms including Alphabet, Microsoft, Meta, and Amazon are collectively committing nearly $700 billion in capital expenditures this year to expand data center capacity. These facilities are essential for training and operating large AI models, which require significant computing power and energy resources.

Amazon recently announced a $12 billion investment to build a new AI data center in Louisiana, expected to create hundreds of permanent jobs and thousands of additional roles in construction and technical services. Meta has also committed substantial funding, including a $27 billion joint venture with Blue Owl Capital to develop a large-scale data center in the same state.

Skilled Trades in High Demand

While much of the public discussion around AI has focused on its potential to disrupt white-collar employment, the buildout of physical infrastructure is driving demand for skilled trade workers. Roles such as electricians, HVAC engineers, robotic technicians, and industrial automation specialists are seeing sharp increases in job postings.

A global analysis by recruitment firm Randstad found that demand for robotic technicians rose by over 100% between 2022 and 2026. HVAC engineering roles increased by 67%, while industrial automation positions grew by more than 50%. Traditional construction and electrical roles also saw steady growth.

These positions are critical to constructing and maintaining data centers, which require advanced cooling systems, power distribution networks, and frequent upgrades to mechanical and electrical infrastructure. Facilities must often be retrofitted every four to six years to keep pace with evolving hardware requirements.

Industry leaders describe these roles as part of a growing category of “new-collar” jobs, blending technical expertise with hands-on work. Workers in these fields are increasingly collaborating directly with software engineers and data specialists inside data centers.

Rising Wages and Talent Constraints

The growing demand for skilled labor is driving up wages. According to Randstad, salaries for HVAC engineers have increased by 10% to 15% over the past four years. In some cases, workers transitioning into specialized data center roles are seeing pay increases of up to 30%.

Nvidia CEO Jensen Huang has also indicated that six-figure salaries are becoming more common for workers involved in building AI infrastructure. This reflects a broader trend in which labor shortages are creating a premium for technical trade skills.

The shortage is expected to intensify. Industry estimates suggest the United States could face a deficit of nearly 2 million manufacturing workers by 2033. Construction groups also project the need for hundreds of thousands of additional workers in the coming years to meet infrastructure demand.

Several factors are contributing to the gap, including an aging workforce and limited geographic mobility. Unlike software roles, many of these jobs require on-site presence, making it difficult to quickly scale labor in regions where new data centers are being built.

Companies and governments are beginning to respond with training programs, apprenticeships, and partnerships with educational institutions. Investment firms have also launched initiatives to support workforce development, recognizing that capital alone is insufficient to meet infrastructure needs.

As AI adoption accelerates, the ability to build and maintain data centers is emerging as a critical bottleneck. The sector’s growth is increasingly tied not just to advances in software and chips, but to the availability of skilled workers capable of supporting the physical backbone of the AI economy.

AI & Machine Learning, Cloud & Infrastructure, News

Meta Launches Manus Desktop AI Agent App

Meta has introduced a desktop version of Manus, enabling its AI agent to operate directly on users’ devices. The move intensifies competition in the fast-growing AI agent market.

By Daniel Mercer | Edited by Maria Konash
Meta launches Manus Desktop, bringing its AI agent to local devices amid rising competition. Image: Manus

Meta has rolled out a desktop application for its recently acquired AI startup Manus, expanding the reach of its autonomous agent technology beyond the cloud and onto users’ personal computers.

The new Manus Desktop app introduces a feature called “My Computer,” which allows the AI agent to interact directly with local files, applications, and system tools. Previously, Manus operated primarily through a web-based interface, where its general-purpose agent executed multi-step tasks remotely.

With the desktop release, Meta is positioning Manus as a more integrated productivity tool, capable of performing actions directly on a user’s machine. According to the company, the agent can read, organize, and edit files, as well as launch and control applications. It can also assist with software development tasks, including generating simple applications within minutes.

Expanding Competition in AI Agents

The launch comes as competition intensifies in the emerging AI agent category, where systems are designed to complete complex workflows with minimal human input. Meta’s move brings Manus closer in functionality to OpenClaw, an open-source AI agent that runs locally on users’ devices.

OpenClaw, created by Austrian developer Peter Steinberger, has gained traction among developers and technology enthusiasts since its release last year. Its open-source model and local deployment have contributed to growing interest in decentralized AI tools. Nvidia CEO Jensen Huang recently described OpenClaw as the “next ChatGPT,” highlighting its perceived potential in the space.

Unlike OpenClaw, which is distributed freely under an MIT license, Manus operates primarily as a subscription-based service. However, both platforms reflect a broader shift toward giving AI systems more direct access to user environments.

Manus also retains its existing integrations with services such as Google Calendar and Gmail, allowing it to coordinate tasks across both local and cloud-based platforms.

Security and Regulatory Considerations

The expansion of AI agents onto personal devices has raised concerns among experts about security and privacy. Granting software autonomous access to local files and applications introduces potential risks, particularly if safeguards are insufficient.

Meta said the Manus Desktop app includes user control mechanisms to address these concerns. Actions performed by the agent require explicit approval, with options such as “Allow Once” for individual tasks or “Always Allow” for repeated operations. These controls are intended to ensure that users maintain oversight of the agent’s behavior.
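The "Allow Once" / "Always Allow" flow Meta describes is a common pattern for gating autonomous actions behind user consent. The sketch below is purely illustrative; the class and decision names are hypothetical and do not reflect Manus's actual implementation.

```python
from enum import Enum

class Decision(Enum):
    ALLOW_ONCE = "allow_once"      # approve this one action
    ALWAYS_ALLOW = "always_allow"  # approve this action type from now on
    DENY = "deny"

class ApprovalGate:
    """Illustrative gate that requires user approval before an agent acts.

    Hypothetical sketch: names and the prompt flow are assumptions,
    not Manus's real API.
    """

    def __init__(self):
        self._always_allowed = set()  # action types pre-approved by the user

    def request(self, action: str, ask_user) -> bool:
        # "Always Allow" granted earlier skips the prompt entirely.
        if action in self._always_allowed:
            return True
        decision = ask_user(action)  # e.g. show a dialog, get a Decision back
        if decision is Decision.ALWAYS_ALLOW:
            self._always_allowed.add(action)
            return True
        return decision is Decision.ALLOW_ONCE
```

Under this design, a one-off file edit prompts the user each time, while a routine operation the user has marked "Always Allow" proceeds silently, which matches the oversight model the article describes.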

Meta acquired Manus in late December 2025 as part of a broader strategy to strengthen its artificial intelligence capabilities. The company has been working to integrate Manus’s agent technology into its ecosystem, including its Meta AI assistant.

The acquisition, reportedly valued at around $2 billion, has drawn scrutiny from Chinese regulators. Manus was originally founded in China before relocating its headquarters to Singapore, and authorities are reviewing the deal for potential violations of technology transfer rules.

Meta has stated that the transaction complied with applicable laws and expressed confidence that the review will be resolved. The company added that the Manus team is now fully integrated and continues to develop and expand the service.

The desktop launch marks a significant step in Meta’s effort to compete in the next phase of AI development, where autonomous agents are expected to play a central role in how users interact with software and digital systems.

AI & Machine Learning, News

A Mysterious AI Just Appeared And It Might Be DeepSeek’s Next Big Move

An anonymous AI model called Hunter Alpha has surfaced online, prompting speculation that it may be an early test of DeepSeek’s upcoming system. Its advanced capabilities and similarities to rumored specifications have drawn attention from developers.

By Daniel Mercer | Edited by Maria Konash
Hunter Alpha emerges, rumored DeepSeek AI with advanced reasoning. Image: Solen Feyissa / Unsplash

A high-capacity artificial intelligence model released anonymously on a developer platform is drawing scrutiny from engineers and researchers, with some suggesting it could be an early test version of a forthcoming system from Chinese startup DeepSeek.

The model, named Hunter Alpha, appeared on AI gateway platform OpenRouter on March 11 without attribution. It was later labeled a “stealth model” by the platform. Neither OpenRouter nor DeepSeek has confirmed its origin, and neither has responded to requests for comment.

During independent testing by Reuters, the chatbot identified itself as “a Chinese AI model primarily trained in Chinese,” with a knowledge cutoff extending to May 2025. This detail aligns with the reported cutoff of DeepSeek’s current systems, though the model declined to disclose its developer when prompted.

Hunter Alpha’s technical profile has contributed to the speculation. It is described as a one-trillion-parameter model, placing it among the largest known language models. Parameter count is a key measure of model scale and computational complexity. Larger models typically require substantial infrastructure and are associated with advanced reasoning capabilities.

The system also advertises a context window of up to one million tokens, significantly exceeding most publicly available models. A larger context window allows the system to process and retain more information within a single interaction, which is useful for tasks such as long document analysis or multi-step reasoning.
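To put those two figures in perspective, a back-of-envelope calculation helps: the function names and the rough 4-characters-per-token heuristic below are illustrative assumptions, not measurements of Hunter Alpha itself.

```python
def weights_gib(params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to store model weights at 16-bit precision."""
    return params * bytes_per_param / 2**30

def fits_in_context(text: str, window_tokens: int = 1_000_000,
                    chars_per_token: float = 4.0) -> bool:
    """Crude check using ~4 characters per token for English text."""
    return len(text) / chars_per_token <= window_tokens

# A one-trillion-parameter model at 16-bit precision needs roughly
# 1.8 TiB for weights alone, before activations or caches.
print(round(weights_gib(1e12)))        # ~1863 GiB
# A long book (~600k characters) fits comfortably in a 1M-token window.
print(fits_in_context("x" * 600_000))
```

The weight figure alone illustrates why models of this scale require substantial serving infrastructure, and why free access to one is considered unusual.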

Advanced Capabilities Raise Questions

Developers testing the system have highlighted its combination of scale, reasoning ability, and free access as unusual. Comparable models with similar specifications are typically restricted or priced at a premium due to their operational cost.

Some engineers have pointed to the model’s reasoning patterns as a potential indicator of its origin. Observers noted similarities in how Hunter Alpha structures multi-step responses, a feature often shaped by training methods and data design.

The model’s capabilities also align with reports from Chinese media about DeepSeek’s anticipated next-generation system, commonly referred to as V4. Those reports suggest the upcoming release could feature enhanced reasoning and expanded memory capacity, with a potential launch timeline as early as April.

Uncertainty Remains Over Origin

Despite the overlap in specifications, not all analysts are convinced of a direct connection. Independent benchmarking efforts have identified differences in token handling and architectural behavior compared to DeepSeek’s known models.

These discrepancies have led some researchers to conclude that Hunter Alpha may originate from a different developer or represent an experimental system with distinct design choices.

DeepSeek has gained attention in the AI sector for its rapid development of large-scale models and its unconventional structure as a subsidiary of a quantitative hedge fund. The company has positioned itself among a growing group of Chinese firms competing in the global AI race.

For now, Hunter Alpha remains an unidentified entrant in the increasingly competitive landscape of large language models. Its emergence highlights both the pace of development and the limited transparency that can accompany new AI releases.

AI & Machine Learning, News

Nvidia CEO Calls OpenClaw “Next ChatGPT,” Sending Chinese AI Stocks Higher

Chinese AI stocks jumped after Nvidia CEO Jensen Huang praised OpenClaw as “the next ChatGPT,” boosting companies building agent-based AI systems.

By Samantha Reed | Edited by Maria Konash
Chinese AI stocks surge after Jensen Huang calls OpenClaw the “next ChatGPT”. Image: Arturo Añez / Unsplash

Chinese AI stocks surged after Nvidia CEO Jensen Huang praised OpenClaw, calling it “definitely the next ChatGPT” and highlighting its potential to transform how users interact with AI.

The comments fueled investor enthusiasm around agent-based AI systems, which are gaining traction as the next phase of AI development beyond chatbots.

MiniMax and Zhipu Lead Gains

Shares of MiniMax jumped 22%, while Zhipu (Knowledge Atlas Technology) rose 14% in Hong Kong trading. Both companies have been expanding their agentic AI offerings and recently introduced tools built on OpenClaw.

The firms are part of China’s emerging group of “AI tigers,” startups developing large language models to compete with global players like OpenAI and Anthropic.

Zhipu recently launched GLM-5, an open-source model designed for coding and agent-based workflows. The company claims performance close to Anthropic’s Claude Opus 4.5 and, in some cases, stronger results than Google’s Gemini 3 Pro, though these benchmarks have not been independently verified.

Broader Market Gains

Other AI-related stocks also advanced. SenseTime, which has shifted from facial recognition to AI platforms and integrated OpenClaw into its assistant products, rose 2.43%, while cloud provider UCloud Technology gained 13%.

The rally extended beyond China. South Korean chipmakers SK Hynix and Samsung Electronics rose sharply after Huang reiterated expectations of $1 trillion in demand for Nvidia’s AI systems by 2027.

China’s Growing Role in AI

According to Moody’s, China’s rapid adoption of AI reinforces its position as a leading global market. However, uptake remains uneven across industries, with large technology firms driving the most advanced deployments.

The surge in AI stocks underscores how quickly agentic AI platforms like OpenClaw are reshaping investor sentiment and signaling a broader shift toward autonomous systems that can act, not just respond.

Google Expands Personal Intelligence Across Search and Gemini to All US Users

Google is expanding Personal Intelligence in the U.S., enabling Gemini and AI Search to deliver personalized answers by connecting data across apps like Gmail and Photos.

By Samantha Reed | Edited by Maria Konash
Google expands Personal Intelligence across Search and Gemini. Image: Google

Google is expanding its Personal Intelligence capabilities across AI Mode in Search, the Gemini app, and Gemini in Chrome in the United States, aiming to deliver more personalized and context-aware AI experiences.

The feature allows users to connect data across Google services such as Gmail and Google Photos, enabling AI systems to generate responses tailored to individual preferences, history, and behavior—without requiring users to manually provide context.

More Personalized, Context-Aware Responses

With Personal Intelligence enabled, users can receive highly customized recommendations and assistance. For example, the system can suggest products based on past purchases, troubleshoot devices using purchase history, or generate travel plans based on previous trips and bookings.

Google says the goal is to move beyond generic responses and provide answers that reflect each user’s unique context, such as preferred brands, habits, and schedules.

The feature also enables more dynamic use cases, including:

  • Personalized shopping recommendations aligned with style and past purchases
  • Context-aware tech support based on owned devices
  • Travel suggestions tailored to itineraries and preferences
  • Activity recommendations based on interests and behavior

Privacy and User Control

Google emphasized that Personal Intelligence is built with privacy, transparency, and user control at its core. Users must explicitly choose to connect their data sources and can disable access at any time.

The company also stated that its AI models do not directly train on personal data from Gmail or Photos. Instead, limited information, such as prompts and responses, is used to improve system performance.

Rolling Out in the U.S.

Personal Intelligence is now available in the U.S. within AI Mode in Search and is beginning to roll out in the Gemini app and Chrome integration for free-tier users.

The feature is currently limited to personal Google accounts and is not yet available for Workspace business or enterprise users.

The expansion reflects Google’s broader push to make AI more personal, proactive, and seamlessly integrated into everyday digital workflows.

AI & Machine Learning, Consumer Tech, News