Claude Code Source Leak: When Your AI Vendor Becomes the Vulnerability
By Dritan Saliovski
On March 31, 2026, Anthropic accidentally shipped the complete source code of its AI coding assistant, Claude Code, inside a routine npm package update. A debugging source map file, left in the build by human error, pointed to a zip archive on Anthropic's cloud storage containing nearly 2,000 files and 500,000 lines of TypeScript. Within hours, the codebase was mirrored across GitHub and forked more than 41,500 times. Anthropic has since issued copyright takedown requests for over 8,000 copies.
Key Takeaways
- Anthropic confirmed the leak was a release packaging error, not a breach; no customer data or credentials were exposed
- The exposed source revealed 44 feature flags for unshipped capabilities, internal system prompts, and the full orchestration architecture for hooks, MCP servers, and autonomous daemon modes
- The leak coincided with a separate malicious supply chain attack on the axios npm package, which deployed a Remote Access Trojan between 00:21 and 03:29 UTC on the same day
- Within days, security firm Adversa AI disclosed a critical vulnerability in Claude Code, a discovery accelerated by full source visibility
- Anthropic's Claude Code has reached an annualized revenue run-rate exceeding $2.5 billion, with enterprise adoption accounting for approximately 80% of revenue
What Happened
The source map file shipped inside @anthropic-ai/claude-code version 2.1.88 on npm. Source maps are development artifacts used to connect bundled, minified code back to its original source. They are never intended for production distribution. Security researcher Chaofan Shou identified the exposure and published the finding. The map file referenced a complete zip archive of the original TypeScript sources hosted on Anthropic's own Cloudflare R2 storage bucket.
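To make the mechanism concrete, here is a minimal sketch of what a source map contains, following the standard Source Map v3 format. The file paths and contents below are hypothetical placeholders, not material from the leak: the point is that the `sources` array names every original file, and `sourcesContent`, when present, embeds the original code verbatim.

```python
import json

# A minimal Source Map v3 document. "sources" lists the original file paths;
# "sourcesContent", when present, embeds the complete original code verbatim.
# The paths below are hypothetical placeholders, not files from the leak.
sample_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/example/module-a.ts", "src/example/module-b.ts"],
    "sourcesContent": ["/* full original TypeScript of module-a */", None],
})

m = json.loads(sample_map)
exposed_paths = m["sources"]                              # every original file name
embedded = [s for s in m.get("sourcesContent", []) if s]  # files shipped inline
print(exposed_paths)
print(f"{len(embedded)} file(s) ship their original source inline")
```

Even when `sourcesContent` is absent, as in Anthropic's case, the `sources` paths can point directly at a retrievable archive of the originals.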
Anthropic's official statement confirmed the incident: the cause was a release packaging error where the build tool (Bun) generated a full source map by default, and the .npmignore configuration failed to exclude it. The company characterized it as human error, not a security breach.
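A guardrail for this failure mode is straightforward to automate in CI: run `npm pack --dry-run --json` before publishing and fail the build if any debug artifact appears in the packed file list. The sketch below only parses the report against a simulated payload; wiring it into an actual release pipeline is left as an assumption.

```python
import json

def find_debug_artifacts(pack_report_json: str) -> list[str]:
    """Given the JSON output of `npm pack --dry-run --json`, return any
    files that look like debug artifacts (source maps) and should not ship."""
    report = json.loads(pack_report_json)
    flagged = []
    for package in report:  # one entry per packed package
        for f in package.get("files", []):
            path = f.get("path", "")
            if path.endswith(".map"):
                flagged.append(path)
    return flagged

# Simulated report for a build where .npmignore failed to exclude the map.
simulated_report = json.dumps([
    {"files": [{"path": "cli.js"},
               {"path": "cli.js.map"},   # the artifact that should not ship
               {"path": "package.json"}]}
])

print(find_debug_artifacts(simulated_report))  # non-empty list should fail CI
```

An allowlist via the `files` field in package.json is a stronger default than a `.npmignore` denylist, since a new build artifact is excluded unless explicitly added.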
This is the second time Claude Code's internals have been publicly exposed in just over a year. The tool had already been partially reverse-engineered through prior community efforts, but this leak provided the complete, current production source with unreleased features and internal tooling.
Why This Matters Beyond the Headlines
The surface-level narrative is straightforward: a safety-focused AI company made an operational security mistake. The deeper issue is structural.
Claude Code is not a web application that runs on Anthropic's servers. It is a CLI tool that runs locally on developer workstations with shell access, file system permissions, and the ability to execute arbitrary commands through its hooks system. When the source code of a tool with that level of system access is fully exposed, the risk calculus shifts. Attackers can now study the exact permission logic, hook execution flow, and MCP server integration points to craft targeted exploits.
Zscaler's ThreatLabz analysis identified the practical consequences: pre-existing vulnerabilities in Claude Code's configuration handling are now significantly easier to weaponize. Threat actors with full source visibility can design malicious repositories or project files that trigger arbitrary shell execution or credential theft when a developer clones or opens an untrusted repo. The exposed hook and permission logic makes silent workstation compromise more reliable.
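One defensive response is to inspect project-level AI-tool configuration before trusting a freshly cloned repository. The sketch below assumes hooks are declared in a `.claude/settings.json` file under a top-level `hooks` key, which matches Claude Code's documented project settings; treat the path and key as assumptions and adjust for other tools or versions.

```python
import json
from pathlib import Path

def list_declared_hooks(repo_root: str) -> list[str]:
    """Report any hook events declared in a repo's project-level settings,
    so a human can review them before an AI coding tool runs in that repo.
    Assumes the `.claude/settings.json` location and `hooks` key."""
    findings = []
    cfg = Path(repo_root) / ".claude" / "settings.json"
    if not cfg.is_file():
        return findings
    try:
        data = json.loads(cfg.read_text())
    except json.JSONDecodeError:
        return [f"{cfg}: settings file present but unparseable, review manually"]
    for event in data.get("hooks", {}):
        findings.append(f"{cfg}: declares '{event}' hook(s)")
    return findings
```

Run as a pre-trust check after cloning: an empty result means no project-level hooks were found; any finding warrants manual review before the assistant is allowed to execute in that checkout.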
This is the scenario we outlined in our analysis of AI assistant attack surfaces: embedded AI tools with local execution capabilities represent high-value targets precisely because they bridge the gap between user intent and system-level action. The Claude Code leak provides the specific technical roadmap that makes exploitation more efficient.
The Supply Chain Collision
The timing compounds the risk. On the same day the source code leaked, a separate, unrelated supply chain attack targeted the axios npm package, a widely used HTTP client that Claude Code lists as a dependency. Malicious versions (1.14.1 and 0.30.4) were published to npm between 00:21 and 03:29 UTC on March 31, containing a Remote Access Trojan. Any developer who installed or updated Claude Code via npm during that window may have pulled in the compromised axios package.
This is not a theoretical scenario. It is a documented overlap between a vendor's accidental exposure and a third party's deliberate attack on the same dependency chain within the same hours. SentinelOne's EDR detection of a similar trojanized AI-adjacent package in 44 seconds, reported the same week, illustrates the gap between the two outcomes: when detection works, the window closes in seconds; when it does not, a three-hour publication window is ample time to compromise workstations. For a deeper look at how AI development tooling creates bidirectional supply chain risk, see our companion analysis.
For organizations evaluating AI vendor risk, this incident crystallizes a point we raised in our Trust Shockwaves analysis: vendor evaluation for AI tools must extend beyond traditional SOC 2 and penetration testing assessments. Build pipeline hygiene, dependency management practices, and incident response for accidental exposure are now material risk factors.
What the Exposed Code Reveals
The leaked source contained 44 feature flags for capabilities that are fully built but not yet released. These are not conceptual; they are compiled code behind boolean flags. Key unreleased features include a persistent daemon mode (internally referenced as "KAIROS") that allows Claude Code to operate autonomously in the background even when the user is idle, performing memory consolidation and cross-session learning. Remote control capabilities allowing users to operate Claude Code from a phone or secondary browser were also flagged.
The exposed system prompts reveal how Claude Code reasons about tasks, manages permissions, and handles its own memory, treating stored context as hints that require verification against the actual codebase. For competitors, this provides a detailed engineering blueprint for building production-grade AI coding agents. For security teams, it provides a map of every trust boundary and permission gate in the tool.
Implications for Enterprise AI Governance
The incident exposes three governance gaps that most organizations have not addressed:
First, AI coding tools are not evaluated as critical supply chain components. Most enterprises assess AI tools through IT procurement workflows designed for SaaS applications. Claude Code, Cursor, GitHub Copilot, and similar tools operate with fundamentally different system access than a typical SaaS product. They read and write files, execute shell commands, and install packages. The vendor's build pipeline security directly affects the security of every developer workstation running the tool.
Second, dependency chain risk multiplies at the intersection of AI tools and package managers. When an AI coding assistant both depends on npm packages and can autonomously install npm packages for the user, a single supply chain compromise can propagate in two directions simultaneously: through the tool's own dependencies and through the packages the tool recommends or installs.
Third, incident response playbooks do not account for AI tool vendor exposures. When your AI coding tool's source code leaks, the immediate question is not whether customer data was exposed. It is whether the architecture and permission model of a tool running on your developers' machines with elevated access is now available to anyone building targeted exploits. That requires a different response workflow than a traditional vendor breach notification.
Organizations already working through the four-framework regulatory alignment should note that NIS2's supply chain security requirements (Article 21(2)(d)) and DORA's ICT third-party risk management obligations both extend to AI development tooling. If your developers use AI coding assistants, those tools are in scope for vendor risk assessment under both frameworks.
What to Do Now
For organizations using Claude Code or similar AI coding assistants: audit the specific version installed across your development environment, review npm lockfiles for compromised axios versions, and verify that no developer installed or updated during the March 31 exposure window. Beyond the immediate response, establish a vendor risk assessment process specifically for AI development tools that evaluates build pipeline practices, dependency management, and the tool's local system access model. For governance frameworks that map AI agent permissions and controls, see the security-first deployment framework.
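The lockfile review in the first step can be scripted. The sketch below targets npm lockfileVersion 2/3 (the `packages` map); the compromised version list comes from the incident described above, and anything it flags should be removed and reinstalled from a known-good version.

```python
import json

# Malicious axios versions published during the March 31 attack window.
COMPROMISED_AXIOS = {"1.14.1", "0.30.4"}

def audit_lockfile(lockfile_text: str) -> list[str]:
    """Scan a package-lock.json (lockfileVersion 2 or 3) and return the
    install paths of any axios entry pinned to a compromised version."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/axios") and meta.get("version") in COMPROMISED_AXIOS:
            hits.append(f"{path} @ {meta['version']}")
    return hits

# Simulated lockfile with one clean and one compromised axios install.
simulated_lock = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "node_modules/axios": {"version": "1.7.9"},
        "node_modules/some-dep/node_modules/axios": {"version": "1.14.1"},
    },
})

print(audit_lockfile(simulated_lock))
```

Note that nested installs matter: a transitive dependency can pin its own axios copy, which is why the scan matches any path ending in `node_modules/axios` rather than only the top-level entry.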
The full Intelligence Brief covers the complete AI coding tool vendor risk assessment framework, a dependency chain risk matrix, incident timeline reconstruction, and a comparison of major AI coding tools' default security postures and permission models.
Assess Your AI Development Tool Risk
Innovaiden works with leadership teams deploying AI agents across their organizations - from initial setup and training to security framework alignment and governance readiness. Reach out to discuss how we can help your team.
Frequently Asked Questions
What happened with the Claude Code source code leak?
On March 31, 2026, Anthropic accidentally shipped the complete source code of Claude Code inside a routine npm package update. A debugging source map file left in the build pointed to a zip archive containing nearly 2,000 files and 500,000 lines of TypeScript. Within hours, the codebase was mirrored across GitHub and forked more than 41,500 times.
What was exposed in the Claude Code source leak?
The leaked source contained 44 feature flags for unshipped capabilities including a persistent daemon mode (internally referenced as KAIROS) for autonomous background operation, remote control capabilities, internal system prompts, and the full orchestration architecture for hooks, MCP servers, and permission gates.
How does the Claude Code leak overlap with the axios supply chain attack?
On the same day the source code leaked, malicious versions of the axios npm package (1.14.1 and 0.30.4) were published containing a Remote Access Trojan. The attack window was between 00:21 and 03:29 UTC on March 31. Any developer who installed or updated Claude Code via npm during that window may have pulled in the compromised axios package.
Why is this a vendor risk issue and not just a security incident?
Claude Code runs locally on developer workstations with shell access, file system permissions, and the ability to execute arbitrary commands. When the source code of a tool with that level of system access is exposed, attackers can study the exact permission logic, hook execution flow, and integration points to craft targeted exploits. This shifts AI coding tools from productivity tools to critical supply chain components.
What should organizations do in response?
Audit the specific Claude Code version installed across your development environment, review npm lockfiles for compromised axios versions, verify no developer installed or updated during the March 31 exposure window, and establish a vendor risk assessment process specifically for AI development tools that evaluates build pipeline practices, dependency management, and local system access models.
Sources
- Anthropic. Official statement on Claude Code source map exposure. anthropic.com. 2026.
- Chaofan Shou. Claude Code source map discovery and disclosure. Published via social media. 2026.
- Zscaler ThreatLabz. Analysis of Claude Code exposure implications. zscaler.com. 2026.
- Adversa AI. Claude Code vulnerability disclosure post-leak. adversa.ai. 2026.
- SentinelOne. Trojanized AI-adjacent package detection report. sentinelone.com. 2026.
- npm Registry. axios versions 1.14.1 and 0.30.4 incident report. npmjs.com. 2026.
- Fortune. Anthropic Claude Code revenue and enterprise adoption data. fortune.com. 2026.