Anthropic’s Claude Tops US App Store Amid OpenAI Pentagon Backlash

Anthropic’s Claude rose to No. 1 on the U.S. App Store as backlash grew over OpenAI’s Pentagon deal. Market data shows surging downloads for Claude and a sharp spike in ChatGPT uninstalls.

By Daniel Mercer. Edited by Maria Konash.
Claude tops the U.S. App Store amid backlash over OpenAI’s Pentagon deal. Photo: Aerps.com / Unsplash

Anthropic’s Claude mobile app climbed to the top position in the U.S. Apple App Store over the weekend, overtaking OpenAI’s ChatGPT amid controversy surrounding OpenAI’s agreement with the Department of Defense, which has been rebranded under the Trump administration as the Department of War.

Data from market intelligence firm Sensor Tower indicates that U.S. uninstalls of ChatGPT surged 295% day over day on Saturday, February 28. That compares with a typical 9% daily uninstall rate over the past 30 days. At the same time, downloads of Claude in the U.S. rose 37% on Friday and 51% on Saturday after Anthropic announced it would not move forward with a defense partnership.

Anthropic cited concerns that its AI systems could be used for domestic surveillance or fully autonomous weapons, applications it has said exceed safe deployment boundaries. A portion of consumers appeared to support that stance.

ChatGPT’s U.S. download growth reversed sharply after news of its defense agreement became public. Downloads fell 13% day over day on Saturday and declined an additional 5% on Sunday. The day before the announcement, downloads had risen 14%.

Claude’s ranking improvement was rapid. The app moved more than 20 positions within roughly a week, reaching No. 1 in the U.S. by Saturday, March 2. Two months earlier, the app had ranked outside the top 40.

User Sentiment and Global Momentum

Sensor Tower reported that one-star reviews for ChatGPT increased 775% on Saturday, followed by another 100% rise on Sunday. Five-star reviews fell by 50% during the same period. The ratings shift reflected growing user reaction to OpenAI’s national security partnership.

Other analytics firms reported similar trends. Appfigures said Claude’s total daily U.S. downloads surpassed ChatGPT’s for the first time on Saturday. Its estimates showed Claude downloads increasing 88% day over day, a steeper jump than Sensor Tower’s figures.

Claude also reached the No. 1 free iPhone app position in six additional countries: Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.

Similarweb reported that Claude’s U.S. downloads over the past week were approximately 20 times higher than in January, though it cautioned that not all growth may be tied directly to political developments.

Celebrity Endorsement and Social Momentum

On X, pop star Katy Perry shared a screenshot of her subscription to Claude Pro, circling the $214.99 annual plan and captioning the post “done” — in what many users interpreted as a show of support for Anthropic’s stance against certain military use cases. Her post quickly gained traction online, reinforcing broader public engagement with Claude amid the dispute. 

Consumer discussion on social media showed a wave of users canceling ChatGPT subscriptions and migrating to Claude, supported by user-posted screenshots of cancellation confirmations. These signals coincided with discussion on platforms such as Reddit suggesting that concerns over AI ethics and national security were shaping individual choices.

The episode underscores how AI policy decisions can rapidly influence consumer behavior. While OpenAI has strengthened its standing in Washington through classified deployments, Anthropic appears to be gaining traction among retail users concerned about military applications of artificial intelligence.

Major US Agencies Shift From Anthropic to OpenAI Over Security Concerns

Multiple U.S. federal agencies, including State, Treasury, and HHS, have ceased using Anthropic’s Claude following a White House directive. Agencies are transitioning to alternatives such as OpenAI amid national security and ethical concerns.

By Samantha Reed. Edited by Maria Konash.
U.S. agencies halt Anthropic AI, including Claude, after Trump directive and Pentagon concerns, moving to OpenAI. Photo: Miguel M. / Unsplash

Following a White House directive, the U.S. Departments of State, Treasury, and Health and Human Services have moved to cease using Anthropic’s AI products, including its Claude chatbot platform. The Pentagon had already begun transitioning to alternative providers such as OpenAI.

Treasury Secretary Scott Bessent confirmed on X that the department was terminating all use of Anthropic technology, while HHS notified employees to adopt platforms such as OpenAI’s ChatGPT and Google’s Gemini. The State Department similarly announced it would switch its in-house chatbot, StateChat, to OpenAI’s GPT-4.1. A State Department spokesperson emphasized that these steps align with President Donald Trump’s directive to cancel Anthropic contracts and bring programs into full compliance.

William Pulte, director of the Federal Housing Finance Agency, said his bureau and affiliated agencies, including Fannie Mae and Freddie Mac, were also ending all use of Anthropic products.

National Security and Industry Implications

President Trump labeled Anthropic a supply-chain risk, a designation typically reserved for foreign suppliers deemed a potential threat. The move follows a standoff between Anthropic and the Pentagon over AI deployment safeguards. Sources indicate the dispute centered on preventing the U.S. military and intelligence agencies from using Anthropic’s AI for autonomous weapons targeting or domestic surveillance.

OpenAI, backed by Microsoft and Amazon, quickly moved to fill the gap. The company announced a deal to deploy AI models in the Defense Department’s classified networks. CEO Sam Altman later posted on X that OpenAI would amend the agreement to clarify that its technology would not be used to deliberately track or surveil U.S. persons or nationals, including through the acquisition of commercial data.

Transition Challenges and Broader Impact

The rapid agency transitions underscore the operational complexity of replacing AI tools deeply integrated into federal workflows. Claude’s prior use in sensitive military and intelligence tasks highlights the difficulty of enforcing swift cutoffs. Analysts note that the shifts also reflect broader tensions over how AI safety, ethics, and governance intersect with national security priorities.

Meanwhile, Anthropic’s Claude has seen a surge in consumer adoption, rising to No. 1 on the U.S. App Store as public backlash grew against OpenAI’s Pentagon deal, highlighting a growing divergence between federal and retail users.


OpenAI Secures Pentagon AI Deal Amid Anthropic Dispute

OpenAI reached a rapid agreement with the Department of Defense to deploy its AI models in classified environments, following the breakdown of Anthropic’s negotiations. The move sparked debate over safeguards, deployment, and AI ethics in national security operations.

By Daniel Mercer. Edited by Maria Konash.
OpenAI finalizes Pentagon AI deal after Anthropic talks collapse. Photo: Clem Onojeghuo / Unsplash

OpenAI has finalized a deal with the Department of Defense to deploy its AI models, including those powering ChatGPT, in classified U.S. military environments. The announcement followed the collapse of negotiations between Anthropic and the Pentagon, after which President Donald Trump directed federal agencies to cease using Anthropic technology following a six-month transition period. Secretary of Defense Pete Hegseth also designated Anthropic as a supply-chain risk, citing limitations on unrestricted military use.

Chief Executive Sam Altman acknowledged the speed of the negotiations, describing the agreement as “definitely rushed” and admitting that “the optics don’t look good.” The move quickly drew scrutiny from media and industry observers, with some questioning how OpenAI could secure a deal while Anthropic did not.

Safety and Deployment Measures

In response, OpenAI outlined its safeguards through a blog post and executive commentary. The company emphasized three areas where its models will not be used: mass domestic surveillance, autonomous weapons, and high-stakes automated decisions, such as social credit systems. OpenAI framed its approach as multi-layered, contrasting it with other AI companies that rely primarily on usage policies.

The deployment will occur via cloud infrastructure, with cleared OpenAI personnel overseeing operations. Contractual protections further enforce the safety red lines, the company said. Katrina Mulligan, OpenAI’s head of national security partnerships, highlighted that deployment architecture, rather than contract language alone, prevents models from being integrated directly into weapons systems or operational sensors.

Despite these assurances, analysts have questioned whether compliance with U.S. Executive Order 12333 could allow some domestic surveillance indirectly, as the order governs the collection of communications outside the U.S. that may include information about U.S. persons. OpenAI has stated it does not fully understand why Anthropic could not reach a similar agreement, expressing hope that other labs will consider comparable arrangements in the future.

Industry and Operational Implications

Altman acknowledged backlash over the deal, noting that Anthropic’s Claude briefly surpassed ChatGPT in the Apple App Store rankings following the announcement. He described the agreement as an attempt to de-escalate tensions between the Defense Department and AI companies, while protecting safety and ethical boundaries.

The development highlights the operational and political challenges of integrating advanced AI into military workflows. As U.S. defense agencies increasingly rely on AI for intelligence analysis, operational planning, and simulation, companies face scrutiny over ethical use, contractual safeguards, and alignment with government standards.


US Military Uses Anthropic’s Claude AI in Iran Strike After Trump Ban

Despite President Trump’s directive to cease federal use of Anthropic’s Claude AI, U.S. military forces reportedly employed the model for intelligence, target selection, and battlefield simulations in airstrikes on Iran.

By Maria Konash.
U.S. military used Anthropic’s Claude AI in Iran operations hours after Trump banned its federal use. Photo: Diego González / Unsplash

U.S. military forces reportedly employed Anthropic’s Claude AI language model during a major joint operation against Iran, just hours after President Donald Trump’s administration ordered all federal agencies to cease use of Anthropic’s technology. Sources familiar with the matter told media outlets that Claude was used by U.S. Central Command for intelligence analysis, target identification, and battlefield scenario simulations tied to the March 1 strikes.

The air operation, conducted in coordination with Israeli forces, marked one of the most significant U.S. military actions in the Middle East in years. Claude’s involvement in the mission highlights its integration into military planning processes and classified defense systems, making rapid removal difficult.

Political and Tech Sector Fallout

On February 27, the Trump administration directed all federal agencies to discontinue using AI tools developed by Anthropic, including its Claude models, citing national security concerns. President Trump described Anthropic’s leadership in sharply critical terms in a social media post, framing the move as necessary to prevent what he called undue influence over military operations. The directive stipulated a six-month phase-out period for agencies, including the Department of Defense, to transition away from the technology. 

Defense Secretary Pete Hegseth also designated Anthropic a “supply chain risk,” a label typically applied to firms considered threats to national security, and warned that any continued use could jeopardize future government contracts. Anthropic’s refusal to grant the Pentagon unrestricted access to its models, particularly for tasks without stringent safeguards, underpinned the dispute. 

Industry analysts note that the military’s continued reliance on Claude, even amid a government ban, reflects how advanced AI tools can become deeply embedded in mission-critical workflows. Claude had been integrated into classified networks and defense analytics through partnerships with third-party platforms, making an abrupt disconnect operationally challenging.

Shift to Alternative AI Providers

As the standoff with Anthropic has escalated, other AI firms have moved to fill the anticipated gap. In the wake of the breakdown in relations, OpenAI announced an agreement with the Pentagon to deploy its AI models, including those underpinning ChatGPT, across classified defense infrastructure. Elon Musk’s xAI has also secured terms to make its Grok model available for secure military environments, offering additional alternatives for defense AI workloads.

The clashes between the U.S. government and Anthropic highlight broader tensions at the intersection of AI ethics, national security, and the pace at which advanced technologies are adopted in defense contexts.


Honor Robot Phone Set for Second-Half Launch

Honor confirmed its Robot Phone will launch in the second half of the year, featuring a 200MP camera with a built-in three-axis mechanical gimbal system.

By Daniel Mercer. Edited by Maria Konash.
Honor plans to release the Robot Phone with a 200MP sensor and three-axis gimbal stabilization. Photo: Honor

Honor has confirmed that its Robot Phone will launch in the second half of the year, following its showcase at Mobile World Congress 2026 in Barcelona. The device was first teased in October and is positioned as a new category of smartphone built around advanced stabilization and robotics-inspired mechanics.

The Robot Phone builds on technologies seen in Honor’s Magic V6 foldable but introduces more complex mechanical systems designed to support motion tracking and stabilized imaging. The company describes it as “a new species of smartphone,” emphasizing its hardware-driven approach to mobile photography.

200MP Camera With Integrated Mechanical Gimbal

The standout feature is a 200-megapixel camera integrated with what Honor calls an industry-first three-axis mechanical gimbal system inside a smartphone body. Unlike conventional optical image stabilization, the system physically rotates and stabilizes the camera module across multiple axes, similar in concept to dedicated handheld stabilizers used in professional videography.

Honor says the integrated system enables smoother video capture across varied shooting scenarios. The camera is supported by AI-driven features, including object tracking that allows the lens module to follow subjects as they move within the frame. This blend of mechanical stabilization and AI tracking aims to enhance both video and still photography performance.

The company is collaborating with ARRI, a German motion picture equipment specialist, to refine the imaging experience. The partnership signals Honor’s attempt to position the Robot Phone closer to professional-grade video tools rather than standard consumer smartphones.

Broader Innovation Showcase at MWC

In addition to the Robot Phone, Honor used the MWC 2026 platform to present a humanoid robot prototype and unveil new silicon-carbon battery technology designed for foldable devices. The battery innovation is intended to improve energy density while maintaining slim device profiles, a critical factor for foldables.

While full hardware specifications for the Robot Phone have not yet been disclosed, further details are expected closer to launch. The initial release is scheduled for China, with a phased rollout to additional markets thereafter.

Pricing has not been announced, though the mechanical complexity and lack of comparable devices suggest a premium positioning. By integrating robotics-style hardware directly into a smartphone chassis, Honor is testing whether mechanical innovation, alongside AI enhancements, can differentiate devices in a saturated flagship market.

OpenAI Raises $110 Billion at $730 Billion Valuation

OpenAI secured $110 billion in new funding at a $730 billion pre-money valuation, backed by SoftBank, NVIDIA, and Amazon to expand AI infrastructure and global reach.

By Maria Konash.
OpenAI raises $110B from SoftBank, NVIDIA, and Amazon. Photo: Zac Wolff / Unsplash

OpenAI announced $110 billion in new investment at a $730 billion pre-money valuation, marking one of the largest private funding rounds in technology history. The round includes $30 billion each from SoftBank Group Corp and NVIDIA, and $50 billion from Amazon. Additional financial investors are expected to join as the round progresses.

The company said the funding will support rising global demand for artificial intelligence products across consumers, developers, and enterprises. OpenAI identified compute, distribution, and capital as the core requirements to scale access to its AI systems worldwide.

As part of the announcement, OpenAI signed a multi-year strategic partnership with Amazon and expanded its collaboration with NVIDIA to secure next-generation inference and training infrastructure.

Infrastructure Expansion and Strategic Partnerships

Under the NVIDIA agreement, OpenAI will utilize 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Vera Rubin systems. This builds on Hopper and Blackwell systems already deployed across partners including Microsoft, Oracle Cloud Infrastructure, and CoreWeave. The expanded compute footprint is designed to accelerate both model training and real-time deployment at global scale.

The Amazon partnership focuses on accelerating AI adoption among enterprises, startups, and consumers. OpenAI said the collaboration strengthens its distribution channels and infrastructure capabilities while deepening integration across cloud environments.

Product Growth Across Consumer and Enterprise

The funding follows significant growth across OpenAI’s product portfolio. Codex, the company’s AI coding system, has seen weekly users more than triple since the start of the year to 1.6 million. The tool enables individuals to build and deploy software workflows that previously required larger engineering teams.

ChatGPT remains the company’s largest consumer-facing product, with more than 900 million weekly active users and over 50 million subscribers. OpenAI reported that January and February are on track to be the strongest months for new subscriber additions in its history. The company said product performance continues to improve with faster responses, greater reliability, and stronger safety systems as usage scales.

In the enterprise segment, more than nine million paying business users rely on ChatGPT for workplace applications. Organizations across sectors are deploying AI systems in engineering, support, finance, sales, and operations. OpenAI’s Frontier platform supports enterprise customers in building and managing AI-powered workflows.

Foundation Impact

The new valuation increases the value of the OpenAI Foundation’s stake in OpenAI Group to more than $180 billion. The company said the strengthened balance sheet will expand philanthropic capacity in areas including health research and AI resilience.

Chief Executive Sam Altman said the partnerships reflect a shared ambition to scale reliable and broadly useful AI systems globally. The funding positions OpenAI to expand infrastructure capacity and accelerate deployment as frontier AI moves into daily use.
