Anthropic Nears $20B Revenue Even as US Flags Supply Chain Risks

Anthropic’s annualized revenue has surged to nearly $20 billion even as the U.S. government classifies the company as a supply chain risk. The AI firm plans to challenge the designation while demand for its Claude models continues to grow.

By Daniel Mercer. Edited by Maria Konash.
Anthropic’s revenue nears a $20B run rate as Claude adoption surges, despite a U.S. supply-chain risk designation. Photo: Anthropic

Anthropic is experiencing rapid revenue expansion even as it faces a political and regulatory dispute with the U.S. government. The artificial intelligence company has increased its annualized revenue run rate to more than $19 billion, more than doubling from roughly $9 billion at the end of 2025.

The growth marks a sharp rise from around $14 billion reported only weeks earlier. The surge has been driven by strong adoption of Anthropic’s AI models and developer tools, particularly the programming-focused product Claude Code.

Anthropic, which was recently valued at about $380 billion, has also gained traction among individual users. Its Claude mobile app recently climbed to the top of Apple’s U.S. App Store rankings as debate intensified over AI partnerships with the Pentagon. The ranking shift came as some users reacted to rival OpenAI’s defense agreement by uninstalling ChatGPT and switching platforms.

The company has also expanded its product portfolio with tools such as Claude Cowork, which integrates AI into collaborative software workflows. The launch of these products has disrupted segments of the software-as-a-service market, contributing to volatility among some SaaS company stocks.

Pentagon Conflict and Supply Chain Classification

Despite strong commercial momentum, Anthropic is facing growing pressure from the U.S. Department of Defense. Defense Secretary Pete Hegseth recently designated the company as a supply chain risk, a classification typically reserved for firms linked to geopolitical adversaries.

The designation followed months of negotiations between Anthropic and the Pentagon over how its AI systems could be used by military and intelligence agencies.

Anthropic insisted on maintaining safeguards preventing two specific applications: mass domestic surveillance of U.S. citizens and the use of AI systems in fully autonomous weapons. The company has argued that current AI models are not reliable enough to safely operate without human oversight and that large-scale surveillance of Americans would violate fundamental rights.

U.S. defense officials have previously said the military has no intention of deploying AI for mass surveillance or autonomous weapons but have argued that lawful uses of AI should remain unrestricted.

Legal Challenge and Industry Implications

Anthropic has described the supply chain risk designation as legally unsound and signaled it will challenge the move in court if necessary. The company argues the authority cited by the Defense Department applies only to contracts directly involving the Pentagon and should not restrict broader commercial use of its technology.

Under the company’s interpretation, the designation would not affect private customers or contractors using Claude outside Department of Defense agreements, nor would it restrict how defense contractors deploy the model in non-Pentagon projects.

Industry observers say the dispute highlights tensions between AI safety policies and national security priorities as governments accelerate adoption of generative AI tools.

Despite the political standoff, Anthropic’s commercial business continues to expand rapidly. Strong enterprise demand for coding and productivity tools built on Claude has driven revenue growth, while consumer adoption has surged, reflected in the app’s recent rise to the No. 1 position on the U.S. App Store amid backlash over competing defense AI partnerships. Meanwhile, OpenAI is considering a deal to deploy its AI models across NATO’s unclassified networks following its Pentagon agreement, underscoring how quickly defense alliances are integrating generative AI even as debates over safeguards and governance continue.

AI & Machine Learning, News

SpaceX May Acquire Cursor for $60B Later This Year

SpaceX has secured rights to acquire AI coding startup Cursor for up to $60 billion, deepening its push into AI alongside xAI ahead of a potential IPO.

By Samantha Reed. Edited by Maria Konash.
SpaceX secures option to buy Cursor for $60B, signaling major AI push into coding tools. Image: SpaceX

SpaceX has struck a deal with AI coding startup Cursor that gives it the option to acquire the company for up to $60 billion later this year. Alternatively, SpaceX can pay $10 billion tied to ongoing collaboration between the two firms, according to a statement posted on X.

The agreement highlights SpaceX’s growing ambitions in artificial intelligence, following Elon Musk’s earlier move to merge the company with his AI venture xAI in a deal valued at $1.25 trillion. The combined entity is expected to pursue a public listing, potentially becoming one of the largest IPOs in technology history.

Cursor CEO Michael Truell said the partnership will focus on scaling the company’s AI systems, including its “Composer” model, as part of a broader effort to build advanced coding and knowledge work tools.

Strategic Push Into AI Development Tools

Cursor develops AI tools designed to assist software engineers with tasks such as testing code, tracking changes, and documenting workflows through logs, screenshots, and video. The company has gained traction as part of a growing wave of startups building AI-powered coding agents.

The partnership with SpaceX signals an effort to compete more directly with offerings from OpenAI and Anthropic, which provide similar tools through products like Codex and Claude.

SpaceX said the collaboration aims to create “the world’s best coding and knowledge work AI,” suggesting a broader ambition beyond software development into general productivity applications.

Deal Comes Amid Fundraising and Industry Competition

The announcement comes as Cursor is reportedly in talks to raise $2 billion at a valuation exceeding $50 billion. Investors expected to participate include Andreessen Horowitz, Nvidia, and Thrive Capital, all of which have backed AI companies across the sector.

The structure of the SpaceX deal gives the company flexibility, allowing it to deepen collaboration before committing to a full acquisition. It also positions SpaceX to secure a strategic asset in a rapidly evolving market where AI coding tools are becoming central to software development.

Broader Implications for Musk’s AI Strategy

The move reflects Musk’s broader effort to build a vertically integrated AI ecosystem spanning infrastructure, models, and applications. His previous acquisition of X (formerly Twitter) through xAI and ongoing hiring from Cursor indicate a strategy focused on consolidating talent and capabilities.

The timing is notable, coming just days before a high-profile legal case involving Musk and Sam Altman, further underscoring tensions between leading players in the AI industry.

If completed, the Cursor deal would rank among the largest acquisitions in the AI sector, reinforcing the growing importance of coding agents and developer tools as a battleground for next-generation software platforms.

AI & Machine Learning, News, Startups & Investment

Recursive Superintelligence Raises $500M to Build Self-Improving AI

AI startup Recursive Superintelligence has raised $500 million from Nvidia and GV to pursue self-improving AI systems, despite having no public product.

By Laura Bennett. Edited by Maria Konash.

A new artificial intelligence startup, Recursive Superintelligence, has raised $500 million in fresh funding, reaching a $4 billion valuation despite not yet releasing a public product. The round was backed by Nvidia and GV, underscoring continued investor appetite for next-generation AI systems.

The company was founded by former researchers from Google DeepMind and OpenAI, and is focused on developing AI models capable of recursive self-improvement. The concept aims to move beyond current approaches that rely heavily on human-labeled data and manual fine-tuning.

Instead, Recursive Superintelligence is building systems that can design, evaluate, and refine their own architectures with minimal human input, potentially accelerating the pace of AI development.

Toward Self-Teaching AI Systems

At the core of the company’s strategy is the idea that human involvement has become a bottleneck in AI progress. As models grow more complex, the need for human supervision slows iteration cycles.

Recursive’s approach seeks to create a “closed-loop” system where AI models continuously improve themselves. This includes generating hypotheses, testing them, and integrating successful changes into future versions without external intervention.

If successful, this could significantly reduce development timelines. Instead of requiring months or years between major model upgrades, new iterations could emerge in hours or days.
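Recursive Superintelligence has not published technical details of its system, but the hypothesize-test-integrate loop described above can be illustrated in miniature as a simple hill-climbing routine. This is a purely illustrative sketch: the fitness function and the parameter "modifications" below are toy stand-ins for real model evaluations and architecture changes.

```python
import random

def evaluate(params):
    """Toy fitness function standing in for a real benchmark score
    (illustrative only -- a real system would run model evaluations)."""
    return -sum((p - 0.5) ** 2 for p in params)

def propose(params, step=0.1):
    """Generate a candidate 'self-modification' by perturbing parameters."""
    return [p + random.uniform(-step, step) for p in params]

def closed_loop(params, iterations=200, seed=0):
    """Hypothesize -> test -> integrate: keep a change only if the
    candidate scores better than the current version."""
    random.seed(seed)
    best, best_score = params, evaluate(params)
    for _ in range(iterations):
        candidate = propose(best)
        score = evaluate(candidate)
        if score > best_score:  # integrate only successful changes
            best, best_score = candidate, score
    return best, best_score

improved, score = closed_loop([0.0, 1.0])
```

The key property the company is betting on is that each accepted change feeds the next round of proposals, so improvement compounds without a human in the loop.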

The company is also exploring deeper integration between software and hardware, working closely with Nvidia to optimize AI systems alongside the chips they run on. This could enable more efficient training and faster experimentation cycles.

A High-Risk, High-Reward Bet

The funding comes at a time of intense competition and consolidation in the AI sector. While some startups face pressure to demonstrate clear revenue models, companies focused on foundational AI technologies continue to attract large investments.

Recent funding activity across the industry, including major rounds for infrastructure and model developers, suggests that investors are prioritizing long-term breakthroughs over short-term returns.

However, the valuation has raised eyebrows. Critics warn that a $4 billion price tag for a company without a commercial product is a sign of a potential AI investment bubble.

Building Toward First Autonomous Training Run

Recursive Superintelligence plans to use the funding to recruit top AI talent and build the large-scale compute infrastructure required for its first autonomous training cycle, referred to internally as a “Level 1” run. This milestone is expected later this year.

The outcome of that effort will be closely watched. Demonstrating meaningful self-improvement without human intervention would represent a major shift in how AI systems are developed.

For now, the company embodies a growing trend in the industry: betting that the next leap in AI will come not just from bigger models, but from systems that can redesign themselves.

AI & Machine Learning, News, Research & Innovation

OpenAI Launches ‘Chronicle’ Screen Memory Feature in Codex

OpenAI has introduced Chronicle, a new Codex feature that tracks screen activity and builds context automatically. The tool raises privacy and security concerns.

By Daniel Mercer. Edited by Maria Konash.
OpenAI launches Chronicle for Codex, adding screen-aware memory and automation while raising privacy concerns. Image: OpenAI

OpenAI has introduced Chronicle, a new experimental feature for its Codex app that allows the AI to observe a user’s screen activity and build contextual memory automatically. The feature, now available in preview for ChatGPT Pro users on macOS, represents a significant step toward more autonomous and context-aware AI assistants.

Chronicle operates in the background by periodically capturing screenshots, analyzing them, and converting them into structured text summaries. These summaries are stored locally and used to provide context for future interactions, allowing Codex to understand ongoing tasks without requiring users to repeatedly explain their work.
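OpenAI has not published Chronicle's implementation, but the pipeline described above (periodic capture, model summarization, local Markdown storage) can be sketched with hypothetical stubs for the capture and summarization steps. Everything below is an assumption-laden illustration, not OpenAI's code.

```python
import tempfile
import time
from pathlib import Path

def capture_screen() -> bytes:
    """Hypothetical stub for a platform screenshot call; a real
    implementation would use the OS capture APIs."""
    return b"\x89PNG...placeholder screenshot bytes"

def summarize(screenshot: bytes, idx: int) -> str:
    """Hypothetical stub for the model call that turns pixels into a
    structured text summary."""
    return f"## Snapshot {idx}\n\n- User was editing a file (placeholder)\n"

def chronicle_loop(out_dir: Path, snapshots: int = 3, interval: float = 0.0) -> Path:
    """Periodically capture the screen, summarize it, and append the
    summaries to a local Markdown memory file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    memory = out_dir / "memory.md"
    with memory.open("a", encoding="utf-8") as f:
        for i in range(snapshots):
            f.write(summarize(capture_screen(), i))
            time.sleep(interval)
    return memory

memory_file = chronicle_loop(Path(tempfile.mkdtemp()))
```

Note that the summaries land in a plain local file, which is exactly the design choice behind the privacy concerns discussed below: anything with filesystem access can read them.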

OpenAI president Greg Brockman described the feature as giving the assistant the ability to “see and remember” recent activity, enabling a more seamless and responsive workflow.

Turning Activity Into Context

The core goal of Chronicle is to reduce friction in AI-assisted work. By tracking what users are doing across applications, Codex can infer project context, tools in use, and recent actions, making interactions more efficient.

This approach aligns with a broader trend in AI development toward persistent memory and agent-like behavior, where systems can operate continuously and build knowledge over time. Instead of responding to isolated prompts, Codex can maintain continuity across sessions and tasks.

However, this deeper integration also introduces technical and operational trade-offs, particularly around data handling and system performance.

Privacy and Security Concerns

Chronicle’s architecture has raised concerns about user privacy and security. Screenshots captured by the system are sent to OpenAI servers for processing and are deleted within six hours. However, the generated summaries are stored locally as unencrypted Markdown files, potentially accessible to other applications.

OpenAI has acknowledged the risks, noting that the feature could increase exposure to prompt injection attacks and accidental leakage of sensitive information visible on screen. The company advises users to disable Chronicle when working with confidential data.

The feature may also increase usage costs, as continuous background processing consumes more request capacity within subscription limits.

Echoes of Industry Challenges

The launch draws comparisons to Microsoft’s earlier attempt to introduce a similar feature, Recall, in Windows. That tool also captured user activity for AI processing but faced strong backlash over privacy concerns, leading Microsoft to delay its rollout and make it optional.

Chronicle reflects the same tension facing the industry: balancing the benefits of highly contextual AI systems with the risks of continuous data capture. As AI tools become more integrated into daily workflows, managing that balance will be critical for user trust and adoption.

The feature signals OpenAI’s push toward more proactive, agent-like assistants, but its long-term success may depend on how effectively the company addresses privacy and security challenges.

AI & Machine Learning, News

GPT-Image-2 Dominates Image Arena Rankings with Record Lead

OpenAI’s GPT-Image-2 has taken the top spot across all Image Arena benchmarks, outperforming rivals by record margins in human preference rankings.

By Samantha Reed. Edited by Maria Konash.

OpenAI’s latest image model, GPT-Image-2, has secured the top position across all major categories on Image Arena, a widely followed benchmark based on blind human evaluations. The model, which powers ChatGPT Images 2.0, ranked first in Text-to-Image, Single-Image Edit, and Multi-Image Edit tasks, marking one of the most dominant performances recorded on the platform.

Image Arena evaluates models using anonymous user voting, where participants compare outputs without knowing which system generated them. This method is considered one of the more reliable ways to measure real-world performance and user preference in generative AI.

GPT-Image-2 achieved a score of 1,512 in Text-to-Image, outperforming Nano Banana 2 from Google by 242 points. According to Arena, this represents the largest gap ever recorded between first and second place.
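Image Arena's exact scoring method isn't detailed here, but leaderboards of this kind are typically built on Elo-style (or Bradley-Terry) ratings fit to pairwise votes. A minimal Elo update, assuming Arena uses something similar, shows how blind head-to-head votes become scores like those above; under this model, a 242-point lead implies the leader is preferred in roughly 80% of matchups.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of model A over model B under Elo."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Adjust both ratings after a single blind pairwise vote.
    The update is zero-sum: what A gains, B loses."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# A 242-point gap (e.g. 1512 vs 1270) implies ~80% preference for the leader.
p = expected_score(1512, 1270)
```

Because the update is zero-sum and scaled by the surprise of the result, an upset win against a much higher-rated model moves both ratings far more than an expected win does.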

Dominance Across Categories

The model’s performance extended beyond a single leaderboard. GPT-Image-2 scored 1,513 in Single-Image Edit and 1,464 in Multi-Image Edit, maintaining significant leads over competing models in each category.

It also ranked first across all seven Text-to-Image subcategories, including art, photorealism, product design, and 3D imagery. Improvements over its predecessor were substantial, with gains ranging from nearly 200 to over 300 points depending on the category.

One of the most notable advances is in text rendering within images, an area where AI models have historically struggled. GPT-Image-2 showed strong improvements in accurately generating text, including non-Latin scripts such as Japanese, Korean, and Hindi, suggesting deeper underlying improvements rather than incremental tuning.

From Anonymous Testing to Public Release

Before its official launch, versions of the model appeared anonymously on Arena under codenames such as “maskingtape” and “gaffertape.” These early tests generated attention for their strong performance, a strategy similar to how Google tested its own models before release.

The approach allows companies to validate models in real-world conditions before announcing them publicly, using Arena’s ranking system to build credibility.

Intensifying Competition in Image AI

The results highlight intensifying competition in AI image generation. Google’s Nano Banana models previously led the leaderboard and drove significant user growth for its Gemini platform, demonstrating how performance improvements can translate into broader adoption.

GPT-Image-2’s lead suggests OpenAI has regained momentum in this segment, at least in terms of benchmark performance. However, whether these gains will translate into widespread user adoption remains uncertain.

For developers and enterprises choosing AI tools, the results provide a strong signal of capability. A large margin in blind human evaluations indicates that GPT-Image-2 may offer more consistent and preferred outputs in practical use cases, from design and marketing to content creation.

Anthropic Probes Unauthorized Access to Mythos AI Model

Anthropic is investigating reports that unauthorized users accessed its powerful Mythos AI model. The incident raises concerns about security and misuse risks.

By Marcus Lee. Edited by Maria Konash.
Anthropic probes unauthorized Mythos access, underscoring risks of advanced cybersecurity AI misuse. Image: Anthropic

Anthropic is investigating reports that unauthorized users gained access to its unreleased Claude Mythos Preview, a highly advanced system designed for cybersecurity applications. According to a report by Bloomberg, a small group accessed the model through a third-party vendor environment on the same day Anthropic began limited testing with approved organizations.

The company confirmed it is reviewing the incident, stating that it is examining claims of unauthorized access through external infrastructure. The model, part of Anthropic’s Project Glasswing initiative, is being deployed under strict controls to a limited number of partners for defensive cybersecurity purposes.

The reported breach raises concerns given the model’s capabilities. Mythos is designed to identify software vulnerabilities and simulate cyberattacks, functions that are typically restricted due to potential misuse.

High-Stakes Capabilities and Risks

Anthropic has positioned Mythos as a tool for strengthening cybersecurity by helping organizations detect weaknesses before attackers do. However, its capabilities also highlight the dual-use nature of advanced AI systems.

Reports suggest the model can identify complex and previously unknown vulnerabilities, construct multi-step attack scenarios, and generate functional exploit code. Such capabilities could significantly lower the barrier to entry for cyberattacks if misused.

Regulators and industry observers have already expressed concern about systems like Mythos, which blur the line between defensive and offensive cybersecurity tools. Anthropic has limited access to the model and avoided a full public release, citing safety considerations.

Security Controls Under Scrutiny

The incident underscores the challenges of securing highly capable AI systems, particularly when they are deployed through third-party infrastructure. Even controlled rollouts can introduce vulnerabilities if external systems are involved.

Anthropic said it is continuing discussions with government and industry partners about safe deployment of the technology. The company has emphasized that Mythos is intended for defensive use, such as vulnerability research and red-teaming.

The situation also highlights broader questions about how AI developers should manage access to powerful models. As capabilities increase, ensuring that systems are used responsibly becomes more complex, especially when demand for such tools is high.

Growing Pressure Around AI Governance

The reported access comes amid heightened scrutiny of advanced AI models and their potential impact on cybersecurity. Governments and organizations are increasingly focused on balancing innovation with risk mitigation.

Anthropic has already faced regulatory attention and internal debate over how widely to release Mythos. The company has framed the model as a way to stay ahead of attackers, but the incident illustrates the difficulty of maintaining strict control over such systems.

As AI models become more capable of executing complex, real-world tasks, incidents like this may shape future policies on access, oversight, and deployment of high-risk technologies.

AI & Machine Learning, Cybersecurity & Privacy, News