A packaging oversight has unexpectedly put Anthropic in the spotlight after parts of one of its internal tools were exposed online.
The incident revolves around Claude Code, a terminal-based AI coding assistant, whose source code became publicly accessible through a source map file on the npm registry. The issue was first noticed by Chaofan Shou, who shared the finding on X.
From early observations, this doesn't appear to be a security breach. Instead, it looks like a simple packaging mistake: a file that should have remained private was accidentally included in a public release.
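Mistakes like this often come down to npm's publish configuration. As a rough illustration (not Anthropic's actual setup), the `files` field in `package.json` is an allowlist for the published tarball; if build artifacts such as `.map` source maps land inside an allowlisted directory, they ship along with everything else:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist"
  ]
}
```

With this configuration, a `dist/cli.js.map` generated during the build would be published too. Running `npm pack --dry-run` before publishing lists every file that will go into the tarball, which makes that kind of accidental inclusion easy to spot.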
What surfaced is a fairly large chunk of the codebase, running into tens of megabytes and spread across thousands of files. It gives a behind-the-scenes look at how the tool works, including its internal structure, workflows, and even some features that haven’t been officially announced yet. References to experimental modes, system prompts, and upcoming models were also spotted.
There’s some relief in the fact that no user data or model weights were exposed. However, the leak still reveals internal design details and engineering decisions that companies usually keep under wraps.
The code didn’t stay hidden for long. It was quickly picked up, shared, and even uploaded to a public repository, drawing a lot of attention from developers within hours.
Justin Schroeder, a developer at FormKit, pointed out that several system prompts seem to be embedded directly into the client-side code. This has raised eyebrows, as such elements are generally expected to be handled on the server side for better control and security.
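The concern is easier to see with a minimal sketch of the server-side convention. In the hypothetical handler below, the system prompt lives only on the server: the client submits just the user's message, and the server composes the full request before calling the model API, so the prompt never appears in shipped client code or a leaked source map. All names here are illustrative, not taken from Claude Code:

```javascript
// Hypothetical server-side handler: the system prompt never leaves the server.
const SYSTEM_PROMPT = "You are a careful coding assistant."; // stored server-side only

// The client sends only the user's message; the server prepends the
// system prompt before forwarding the request to the model API.
function buildModelRequest(userMessage) {
  return {
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userMessage },
    ],
  };
}

// The client-visible payload contains only the user text; the composed
// request, with the system prompt attached, exists only on the server.
const request = buildModelRequest("Explain source maps.");
console.log(request.messages.map((m) => m.role)); // [ 'system', 'user' ]
```

Embedding prompts in client-side code instead makes them trivially readable by anyone who inspects the bundle, which is why the practice raised eyebrows here.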
The tool itself is built with TypeScript and React and bundled using Bun. It also depends on axios, a widely used HTTP library that has faced security concerns recently, highlighting the ongoing risks tied to software dependencies.
Another detail that stood out is how the code is written. Reviewers noted a large number of inline comments, possibly designed to help AI systems interpret the code, not just human developers.
Despite all this, the code is not open source. It is still protected under licensing terms, which means using or redistributing it without permission could lead to legal trouble.
As of now, Anthropic has not publicly responded to the situation.