Anthropic has inadvertently exposed the full source code of its Claude Code tool for the second time in a year, following a packaging error that left sensitive development files publicly accessible.
The issue was discovered on March 31 by security researcher Chaofan Shou, who found that the latest version of Claude Code included a source map file in its npm package. This file allowed reconstruction of the tool’s underlying TypeScript codebase, effectively revealing the entire internal implementation.
Claude code source code has been leaked via a map file in their npm registry!
Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8
— Chaofan Shou (@Fried_rice) March 31, 2026
The exposure was not the result of a cyberattack but a configuration oversight, highlighting potential gaps in software release processes at a time when AI tools are increasingly used in enterprise environments.
Source Map Error Reveals Full Codebase
The leak stemmed from a file known as a source map, which is generated during development to map compiled code back to its original, human-readable form. While useful for debugging, such files are usually excluded from published packages.
In this case, the source map enabled access to approximately 1,900 internal source files, including components related to API design, telemetry systems, encryption mechanisms, and inter-process communication.
Because the file was included in a public npm package, the code was easily accessible and quickly archived in external repositories. Within hours, copies of the codebase had spread across developer platforms.
Importantly, the leak did not include model weights or user data, and there is no indication that customer information was compromised.
Repeat Incident Raises Concerns
This is the second time Anthropic has faced a similar issue. An earlier version of Claude Code was exposed in 2025 under comparable circumstances, prompting the company to remove the affected files.
The recurrence of the same type of error has raised questions about internal controls and release validation processes, particularly for tools aimed at professional developers.
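One common release-validation safeguard, sketched here against a hypothetical package layout, is to publish only an explicit allow-list of files via the `files` field in package.json, which npm supports with `!` exclusion patterns:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

Running `npm pack --dry-run` before publishing then lists every file that would land in the tarball, making a stray `.map` file visible before it reaches the registry.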
While the exposure poses no immediate risk to users, it offers a detailed view of the system's architecture and internal logic. That visibility could make it easier for outside parties to analyze, replicate, or potentially exploit aspects of the tool.
Growing Scrutiny on AI Tooling
The incident comes as AI development platforms are becoming central to software engineering workflows, increasing the importance of reliability and security in their deployment.
Anthropic has not issued a public statement on the latest leak. However, the situation is likely to draw attention from both developers and enterprise customers who rely on such tools for critical operations.
The episode also unfolds alongside broader developments at the company, including a separate data leak that revealed details of its upcoming Claude Mythos model, described internally as a major leap in AI capabilities. Together, these incidents highlight the operational risks facing fast-moving AI firms as they scale both their technology and product ecosystems.