US Military Uses Anthropic’s Claude AI in Iran Strike After Trump Ban

Despite President Trump’s directive to cease federal use of Anthropic’s Claude AI, U.S. military forces reportedly employed the model for intelligence, target selection, and battlefield simulations in airstrikes on Iran.

By Maria Konash
U.S. military used Anthropic’s Claude AI in Iran operations hours after Trump banned its federal use. Photo: Diego González / Unsplash

U.S. military forces reportedly employed Anthropic’s Claude AI language model during a major joint operation against Iran, just hours after President Donald Trump’s administration directed all federal agencies to cease using Anthropic’s technology. Sources familiar with the matter told media outlets that Claude was used by U.S. Central Command for intelligence analysis, target identification, and battlefield scenario simulations tied to the March 1 strikes.

The air operation, conducted in coordination with Israeli forces, marked one of the most significant U.S. military actions in the Middle East in years. Claude’s involvement in the mission highlights its integration into military planning processes and classified defense systems, making a rapid removal difficult.

Political and Tech Sector Fallout

On February 27, the Trump administration directed all federal agencies to discontinue using AI tools developed by Anthropic, including its Claude models, citing national security concerns. President Trump described Anthropic’s leadership in sharply critical terms in a social media post, framing the move as necessary to prevent what he called undue influence over military operations. The directive stipulated a six-month phase-out period for agencies, including the Department of Defense, to transition away from the technology. 

Defense Secretary Pete Hegseth also designated Anthropic a “supply chain risk,” a label typically applied to firms considered threats to national security, and warned that any continued use could jeopardize future government contracts. Anthropic’s refusal to grant the Pentagon unrestricted access to its models, particularly for tasks without stringent safeguards, underpinned the dispute. 

Industry analysts note that the military’s continued reliance on Claude, even amid a government ban, reflects how advanced AI tools can become deeply embedded in mission-critical workflows. Claude had been integrated into classified networks and defense analytics through partnerships with third-party platforms, making an abrupt disconnect operationally challenging.

Shift to Alternative AI Providers

As the standoff with Anthropic has escalated, other AI firms have moved to fill the anticipated gap. In the wake of the breakdown in relations, OpenAI announced an agreement with the Pentagon to deploy its AI models, including those underpinning ChatGPT, across classified defense infrastructure. Elon Musk’s xAI has also secured terms to make its Grok model available for secure military environments, offering additional alternatives for defense AI workloads.

The clashes between the U.S. government and Anthropic highlight broader tensions at the intersection of AI ethics, national security, and the pace at which advanced technologies are adopted in defense contexts.

AI & Machine Learning, News

OpenAI Secures Pentagon AI Deal Amid Anthropic Dispute

OpenAI reached a rapid agreement with the Department of Defense to deploy its AI models in classified environments, following the breakdown of Anthropic’s negotiations. The move sparked debate over safeguards, deployment, and AI ethics in national security operations.

By Daniel Mercer Edited by Maria Konash
OpenAI finalizes Pentagon AI deal after Anthropic talks collapse. Photo: Clem Onojeghuo / Unsplash

OpenAI has finalized a deal with the Department of Defense to deploy its AI models, including those powering ChatGPT, in classified U.S. military environments. The announcement followed the collapse of negotiations between Anthropic and the Pentagon, after which President Donald Trump directed federal agencies to cease using Anthropic technology following a six-month transition period. Secretary of Defense Pete Hegseth also designated Anthropic as a supply-chain risk, citing limitations on unrestricted military use.

Chief Executive Sam Altman acknowledged the speed of the negotiations, describing the agreement as “definitely rushed” and admitting that “the optics don’t look good.” The move quickly drew scrutiny from media and industry observers, with some questioning how OpenAI could secure a deal while Anthropic did not.

Safety and Deployment Measures

In response, OpenAI outlined its safeguards through a blog post and executive commentary. The company emphasized three areas where its models will not be used: mass domestic surveillance, autonomous weapons, and high-stakes automated decisions, such as social credit systems. OpenAI framed its approach as multi-layered, contrasting it with other AI companies that rely primarily on usage policies.

The deployment will occur via cloud infrastructure, with cleared OpenAI personnel overseeing operations. Contractual protections further enforce the safety red lines, the company said. Katrina Mulligan, OpenAI’s head of national security partnerships, highlighted that deployment architecture, rather than contract language alone, prevents models from being integrated directly into weapons systems or operational sensors.

Despite these assurances, analysts have questioned whether compliance with U.S. Executive Order 12333 could allow some domestic surveillance indirectly, as the order governs the collection of communications outside the U.S. that may include information about U.S. persons. OpenAI has stated it does not fully understand why Anthropic could not reach a similar agreement, expressing hope that other labs will consider comparable arrangements in the future.

Industry and Operational Implications

Altman acknowledged backlash over the deal, noting that Anthropic’s Claude briefly surpassed ChatGPT in the Apple App Store rankings following the announcement. He described the agreement as an attempt to de-escalate tensions between the Defense Department and AI companies, while protecting safety and ethical boundaries.

The development highlights the operational and political challenges of integrating advanced AI into military workflows. As U.S. defense agencies increasingly rely on AI for intelligence analysis, operational planning, and simulation, companies face scrutiny over ethical use, contractual safeguards, and alignment with government standards.


Honor Robot Phone Set for Second-Half Launch

Honor confirmed its Robot Phone will launch in the second half of the year, featuring a 200MP camera with a built-in three-axis mechanical gimbal system.

By Daniel Mercer Edited by Maria Konash
Honor plans to release the Robot Phone with a 200MP sensor and three-axis gimbal stabilization. Photo: Honor

Honor has confirmed that its Robot Phone will launch in the second half of the year, following its showcase at Mobile World Congress 2026 in Barcelona. The device was first teased in October and is positioned as a new category of smartphone built around advanced stabilization and robotics-inspired mechanics.

The Robot Phone builds on technologies seen in Honor’s Magic V6 foldable but introduces more complex mechanical systems designed to support motion tracking and stabilized imaging. The company describes it as “a new species of smartphone,” emphasizing its hardware-driven approach to mobile photography.

200MP Camera With Integrated Mechanical Gimbal

The standout feature is a 200-megapixel camera integrated with what Honor calls an industry-first three-axis mechanical gimbal system inside a smartphone body. Unlike conventional optical image stabilization, the system physically rotates and stabilizes the camera module across multiple axes, similar in concept to dedicated handheld stabilizers used in professional videography.

Honor says the integrated system enables smoother video capture across varied shooting scenarios. The camera is supported by AI-driven features, including object tracking that allows the lens module to follow subjects as they move within the frame. This blend of mechanical stabilization and AI tracking aims to enhance both video and still photography performance.

The company is collaborating with ARRI, a German motion picture equipment specialist, to refine the imaging experience. The partnership signals Honor’s attempt to position the Robot Phone closer to professional-grade video tools rather than standard consumer smartphones.

Broader Innovation Showcase at MWC

In addition to the Robot Phone, Honor used the MWC 2026 platform to present a humanoid robot prototype and unveil new silicon-carbon battery technology designed for foldable devices. The battery innovation is intended to improve energy density while maintaining slim device profiles, a critical factor for foldables.

While full hardware specifications for the Robot Phone have not yet been disclosed, further details are expected closer to launch. The initial release is scheduled for China, with a phased rollout to additional markets thereafter.

Pricing has not been announced, though the mechanical complexity and lack of comparable devices suggest a premium positioning. By integrating robotics-style hardware directly into a smartphone chassis, Honor is testing whether mechanical innovation, alongside AI enhancements, can differentiate devices in a saturated flagship market.

OpenAI Raises $110 Billion at $730 Billion Valuation

OpenAI secured $110 billion in new funding at a $730 billion pre-money valuation, backed by SoftBank, NVIDIA, and Amazon to expand AI infrastructure and global reach.

By Maria Konash
OpenAI raises $110B from SoftBank, NVIDIA, and Amazon. Photo: Zac Wolff / Unsplash

OpenAI announced $110 billion in new investment at a $730 billion pre-money valuation, marking one of the largest private funding rounds in technology history. The round includes $30 billion each from SoftBank Group Corp and NVIDIA, and $50 billion from Amazon. Additional financial investors are expected to join as the round progresses.

The company said the funding will support rising global demand for artificial intelligence products across consumers, developers, and enterprises. OpenAI identified compute, distribution, and capital as the core requirements to scale access to its AI systems worldwide.

As part of the announcement, OpenAI signed a multi-year strategic partnership with Amazon and expanded its collaboration with NVIDIA to secure next-generation inference and training infrastructure.

Infrastructure Expansion and Strategic Partnerships

Under the NVIDIA agreement, OpenAI will utilize 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Vera Rubin systems. This builds on Hopper and Blackwell systems already deployed across partners including Microsoft, Oracle Cloud Infrastructure, and CoreWeave. The expanded compute footprint is designed to accelerate both model training and real-time deployment at global scale.

The Amazon partnership focuses on accelerating AI adoption among enterprises, startups, and consumers. OpenAI said the collaboration strengthens its distribution channels and infrastructure capabilities while deepening integration across cloud environments.

Product Growth Across Consumer and Enterprise

The funding follows significant growth across OpenAI’s product portfolio. Codex, the company’s AI coding system, has seen weekly users more than triple since the start of the year to 1.6 million. The tool enables individuals to build and deploy software workflows that previously required larger engineering teams.

ChatGPT remains the company’s largest consumer-facing product, with more than 900 million weekly active users and over 50 million subscribers. OpenAI reported that January and February are on track to be the strongest months for new subscriber additions in its history. The company said product performance continues to improve with faster responses, greater reliability, and stronger safety systems as usage scales.

In the enterprise segment, more than nine million paying business users rely on ChatGPT for workplace applications. Organizations across sectors are deploying AI systems across engineering, support, finance, sales, and operations. OpenAI’s Frontier platform supports enterprise customers in building and managing AI-powered workflows.

Foundation Impact

The new valuation increases the value of the OpenAI Foundation’s stake in OpenAI Group to more than $180 billion. The company said the strengthened balance sheet will expand philanthropic capacity in areas including health research and AI resilience.

Chief Executive Sam Altman said the partnerships reflect a shared ambition to scale reliable and broadly useful AI systems globally. The funding positions OpenAI to expand infrastructure capacity and accelerate deployment as frontier AI moves into daily use.

Dog Vibe-Codes Video Games with a Little Help From Claude AI

A YouTuber built a system that lets his cavapoo trigger AI-generated video games by mashing a keyboard, using Anthropic’s Claude with custom guardrails and automated feedback tools.

By Samantha Reed Edited by AIstify Team

A YouTuber has created an unusual AI experiment that turns random dog keystrokes into playable video games. Caleb Leak documented the project, which connects his nine-pound cavapoo, Momo, to an AI coding workflow powered by Claude Code from Anthropic.

The system combines consumer hardware, custom software, and structured AI prompting. Momo types on a Bluetooth keyboard connected to a Raspberry Pi 5. The keystrokes travel over a network to DogKeyboard, a lightweight Rust application that filters special keys and forwards usable input to Claude Code. After a preset amount of typing, a smart pet feeder dispenses a treat. A chime signals when the AI is ready for more input.
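DogKeyboard’s source has not been published, so the following is only a rough illustration of the filtering step described above. The function names, treat threshold, and callback shapes are all assumptions for the sketch, not details from the project:

```python
# Hypothetical sketch of DogKeyboard-style input handling: drop special
# keys, forward usable characters to the coding agent, and trigger the
# treat feeder once enough typing has accumulated.

PRINTABLE = set("abcdefghijklmnopqrstuvwxyz0123456789 ")
TREAT_THRESHOLD = 20  # assumed value: keystrokes needed per treat


def filter_keystrokes(raw_keys):
    """Keep printable single characters; drop keys like 'Escape' or 'F1'."""
    return [k for k in raw_keys if k.lower() in PRINTABLE]


def process_batch(raw_keys, dispense_treat, forward_to_agent):
    """Filter one batch of keystrokes, forward it, and maybe reward."""
    usable = filter_keystrokes(raw_keys)
    if usable:
        forward_to_agent("".join(usable))
    if len(usable) >= TREAT_THRESHOLD:
        dispense_treat()
    return usable
```

In the real pipeline the forwarding callback would hand the string to Claude Code over the network, and the treat callback would talk to the smart feeder.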

Leak says the technical challenge was not the keyboard interface but ensuring the AI interpreted nonsensical input as intentional creative direction. Claude, like many large language models, is designed to disregard accidental or meaningless strings. To address this, Leak engineered prompt instructions that framed the keyboard mashing as cryptic but meaningful design language.

In the system prompt, Claude is told that it is collaborating with an eccentric game designer who communicates through riddles and random-looking characters. The AI is instructed to interpret every input as a valid creative signal and update the game accordingly.
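The actual prompt text has not been published; the snippet below is a hypothetical reconstruction of the framing described above, with the message structure modeled on a generic chat-style API:

```python
# Hypothetical reconstruction of the "eccentric designer" framing; the
# real system prompt used in the project is not public.
SYSTEM_PROMPT = """\
You are collaborating with an eccentric game designer who communicates
only through riddles and seemingly random characters. Every message you
receive is intentional creative direction, never noise or an accident.
Interpret each input as a valid design signal and update the game
accordingly: adjust mechanics, assets, or scenes to reflect it.
"""


def build_messages(dog_input: str) -> list[dict]:
    """Wrap raw keyboard input in the designer framing for the agent."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Design directive: {dog_input!r}"},
    ]
```

The key design choice is in the system message: by declaring up front that no input is accidental, the prompt overrides the model’s default tendency to dismiss gibberish.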

Strong guardrails and automated feedback tools, including screenshot capture, play-testing routines, scene linting, and shader validation, ensure that the output remains functional.

From Random Input to Playable Build

Leak reports that a typical game takes one to two hours from first keystroke to playable build. All projects are developed in Godot 4.6, with game logic written entirely in C#. The automation layer translates keyboard noise into structured development steps, allowing Claude to generate assets, mechanics, and scene updates iteratively.

One resulting title, described as an arcade-style action game, features retro visuals reminiscent of 1980s console design. The experiment demonstrates how AI coding agents can transform loosely defined input into structured software output when supported by well-designed system prompts and validation pipelines.

The project highlights a broader shift in how developers interact with generative AI tools. Rather than precise commands, structured context and feedback loops increasingly shape outcomes. By reframing meaningless input as creative intent, the workflow tests the boundaries of AI-assisted development.

While the experiment is largely playful, it underscores how modern AI coding systems can generate functional applications rapidly when provided with constraints, interpretation rules, and automated testing. Whether used for novelty or serious prototyping, the setup illustrates the expanding flexibility of AI development tools in unconventional environments.


Burger King Tests AI Headset System for Employees

Burger King is testing an AI-powered headset system called Patty, designed to monitor operations, assist employees, and track service patterns in real time.

By Samantha Reed Edited by Maria Konash
Burger King trials ‘Patty’ in 500 U.S. outlets, using AI to optimize inventory, resolve issues, and boost customer service. Photo: Musmuliady Jahi / Unsplash

Burger King is testing an AI-powered headset system, Patty, across 500 U.S. restaurants. Developed by parent company Restaurant Brands International using OpenAI technology, the system provides real-time guidance to staff while monitoring operational and service metrics. The rollout represents one of the most ambitious AI experiments in fast food this year.

Operational Assistance Through AI

Patty connects to restaurant systems and communicates directly with employees through headsets. The AI flags low inventory, such as drink dispensers running out, and alerts managers when operational issues arise, including customer-reported incidents like messy restrooms.

Employees can interact with Patty to ask operational questions, including food preparation instructions, cleaning procedures, and digital menu management when ingredients are unavailable. The system integrates with Burger King’s broader BK Assistant platform and aims to reduce friction during busy shifts, providing managers with real-time insights rather than reactive reporting.

Hospitality Monitoring and Coaching

Beyond operational support, Patty tracks service behaviors. The AI recognizes key phrases like “welcome,” “please,” and “thank you,” allowing managers to monitor service patterns. Burger King emphasized the system is not intended to score employees or enforce scripts but to reinforce hospitality and provide actionable insights.
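Burger King has not described how Patty implements this tracking; as an illustration only, phrase counting over a speech transcript could be as simple as the sketch below (the phrase list and function are assumptions, not the production system):

```python
# Illustrative sketch of key-phrase tracking over a transcript.
# The tracked phrases and approach are assumptions for this example.
HOSPITALITY_PHRASES = ("welcome", "please", "thank you")


def count_phrases(transcript: str) -> dict[str, int]:
    """Count occurrences of each tracked hospitality phrase."""
    text = transcript.lower()
    return {phrase: text.count(phrase) for phrase in HOSPITALITY_PHRASES}
```

A production system would work from live speech-to-text output and aggregate counts per shift rather than per utterance, but the underlying signal is the same.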

The company also stressed that technology will not replace human interaction. “Hospitality is fundamentally human,” a Burger King spokesperson said. “The role of this technology is to support our teams so they can stay present with guests.” Patty’s monitoring features, including early tone-detection capabilities, remain under refinement.

AI in Fast Food

Burger King joins other chains experimenting with AI to reduce labor pressures and improve operational efficiency. Yum Brands has partnered with NVIDIA for AI tools across KFC, Taco Bell, and Pizza Hut, while McDonald’s has explored AI in drive-thru operations with IBM and now works with Google on new systems.

Patty combines digital oversight with hands-on operational support, assisting staff while tracking service quality. How employees respond to the headset, whether as a helpful assistant or perceived supervisor, could shape the broader adoption of AI in fast-food operations.

By integrating AI directly into staff workflows, Burger King is testing the limits of real-time assistance and monitoring, balancing operational efficiency with the human touch in hospitality.
