📅 Wednesday, April 2, 2026

Anthropic Fires 8,100 DMCA Takedowns After Claude AI Source Code Leak Exposes Secret ‘Kairos’ Agent

A debugging file accidentally bundled into a Claude Code update exposed nearly 500,000 lines of Anthropic's source code — revealing plans for a secret always-on AI agent called Kairos.

A routine software update at one of Silicon Valley's most closely watched AI companies has inadvertently exposed plans for an AI agent that was never meant to be public, and triggered a legal firestorm in the process.

On March 31, software engineer Chaofan Shou discovered that Anthropic — the company behind the Claude family of AI models — had accidentally published the complete source code for its Claude Code tool to the public npm registry. A file intended for internal debugging had been bundled into a standard update, and that file pointed to a zip archive on Anthropic’s own cloud storage. Inside: nearly 2,000 files and approximately 500,000 lines of proprietary code.
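npm's packaging behavior makes this kind of slip easy: by default, publishing ships everything in the project directory that is not explicitly excluded. A common guard (a general npm convention, not something drawn from Anthropic's setup; the package name below is invented) is an explicit "files" allowlist in package.json, so internal debugging files can never enter the published tarball:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ]
}
```

With such an allowlist in place, running `npm pack --dry-run` before a release prints exactly the file list that would be published, making an accidental inclusion easy to catch.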

Within hours, the code had spread across GitHub repositories worldwide.

The Takedown Spiral

Anthropic’s response was swift — and initially counterproductive. The company filed a Digital Millennium Copyright Act (DMCA) takedown notice targeting an estimated 8,100 GitHub repositories. The problem: the notice swept up legitimate forks of Anthropic’s own publicly released Claude Code repository — repositories that contained no stolen material.

Facing immediate developer backlash, Anthropic retracted the bulk of its takedowns. The final scope was narrowed to just one repository — the original source of the leak — and its 96 direct forks.

But the damage had already been done. A programmer independently rewrote Claude Code's core functionality in other programming languages using separate AI tools, keeping the technical knowledge in wide circulation. Boris Cherny, Anthropic's head of Claude Code, confirmed the leak was entirely unintentional.

An Anthropic spokesperson said: “The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended. We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks.”

What the Code Revealed: Project Kairos

For developers who examined the source before the takedowns landed, the contents were revealing. Beyond the technical architecture of how Claude Code processes API requests, the leak disclosed two unannounced features Anthropic had not publicly acknowledged.

The first was relatively innocuous: a Tamagotchi-style virtual pet embedded in the development build.

The second was far more significant: a planned persistent AI agent called Kairos.

According to the leaked code, Kairos would operate continuously in the background — 24 hours a day, seven days a week — autonomously acting on behalf of users without requiring direct prompting. Among its described features was autoDream: a process that would run overnight, consolidating and updating the agent’s internal memories to improve future performance. The vision closely echoes open-source autonomous agent projects already in circulation.
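The leak describes Kairos only at a high level, but the general pattern of a persistent agent with an overnight consolidation pass can be sketched. Everything below is a hypothetical illustration: the class, method names, and logic are this article's invention, not Anthropic's code. Only the broad idea of folding a day's raw observations into a smaller long-term memory comes from the autoDream description.

```python
import datetime


class PersistentAgent:
    """Hypothetical always-on agent sketch. Names and logic are
    illustrative only; nothing here is taken from the leaked code."""

    def __init__(self) -> None:
        self.memories: list[str] = []      # raw events observed during the day
        self.consolidated: list[str] = []  # distilled long-term memory

    def observe(self, event: str) -> None:
        # Runs continuously in the background; no user prompt required.
        self.memories.append(event)

    def auto_dream(self) -> int:
        # Overnight pass: fold the day's raw events into long-term
        # memory, deduplicating so future sessions start from less state.
        seen = set(self.consolidated)
        fresh = []
        for event in self.memories:
            if event not in seen:
                seen.add(event)
                fresh.append(event)
        self.consolidated.extend(fresh)
        self.memories.clear()
        return len(fresh)

    def tick(self, now: datetime.time) -> None:
        # A scheduler would call this in a loop; consolidating in the
        # small hours mirrors the "runs overnight" description.
        if now.hour == 2:
            self.auto_dream()


agent = PersistentAgent()
agent.observe("user edited config.py")
agent.observe("tests failed on module X")
agent.observe("user edited config.py")
agent.tick(datetime.time(hour=2))
print(len(agent.consolidated))  # 2 distinct memories after consolidation
```

The deduplication step is the interesting design choice: it is what lets an agent run indefinitely without its memory growing without bound, which is presumably why a nightly consolidation pass would be needed at all.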

A Pattern of Accidental Disclosure

The Claude Code leak was not an isolated incident. The previous month, internal documentation for a new Anthropic model codenamed Mythos was found in a publicly accessible database — the second significant unintentional disclosure in just over a year.

The timing is sensitive: Anthropic is widely reported to be eyeing an initial public offering later in 2026. While some speculated the leaks could be calculated investor signalling, the company’s rapid legal response suggested otherwise.

One important caveat: the leaked files contained only source code — not model weights or training data, which represent the true core of Anthropic’s systems. The secrets behind what makes Claude intelligent remain, for now, intact.

But the episode has exposed a gap between Anthropic’s public positioning as a safety-focused AI company and its ability to secure its own infrastructure — a vulnerability it can ill afford in the run-up to a public offering.


Tarun Mishra

Managing Editor & CEO, Core Machine. Covering AI, Space, Defence and Technology.
