Did Anthropic Just DMCA 8,100 GitHub Repos by Accident? What the Claude Code Leak Reveals
Anthropic accidentally caused thousands of code repositories on GitHub to be taken down while trying to pull copies of its most popular product’s source code off the internet. It’s the kind of mistake that makes you wonder: if a $60 billion AI company can’t execute a basic DMCA takedown without collateral damage, what does that say about AI companies managing their most sensitive assets?
The incident is now one of the most discussed events in the open-source developer community this week, and it raises serious questions about how AI companies handle intellectual property, developer relations, and their upcoming IPOs.
How the Claude Code Source Code Leak Happened
On Tuesday, a software engineer discovered something alarming in a recent Anthropic release. The company had, apparently by accident, exposed the source code for Claude Code, its category-leading command-line application. AI enthusiasts immediately pored over the leaked code for clues about how Anthropic harnesses the large language model that underlies the application.
The leaked code spread rapidly. Developers shared it on GitHub, discussed it on social media, and dissected Anthropic’s engineering decisions in real time. For a company that positions itself as the responsible AI leader, the exposure was uncomfortable.
What happened next made things significantly worse.

The DMCA Takedown That Collapsed 8,100 Repositories
Anthropic’s legal team issued a takedown notice under the U.S. Digital Millennium Copyright Act (DMCA), asking GitHub to remove repositories containing the leaked code. Standard procedure — until you look at the numbers.
According to GitHub’s public DMCA records, the notice was executed against roughly 8,100 repositories. That’s not a typo. Eight thousand one hundred repos went dark.
The problem? Many of those repositories were legitimate forks of Anthropic’s own publicly released Claude Code repository. Developers who had forked Anthropic’s public code — with full permission — suddenly found their projects offline. Their work, gone. Their CI/CD pipelines, broken. Their contributions to the ecosystem, suspended.
The backlash was immediate and furious. Developers took to social media, demanding answers. Robert McLaws, a well-known developer advocate, posted an angry thread about losing access to his legitimate fork. Theo Browne, another prominent tech creator, amplified the criticism to his massive audience.
Anthropic’s Damage Control: “It Was an Accident”
Boris Cherny, Anthropic’s head of Claude Code, stepped in to contain the damage. He acknowledged the mistake on social media and retracted the bulk of the takedown notices. The company limited its action to one repository and 96 forks that actually contained the accidentally released source code.
An Anthropic spokesperson explained to TechCrunch: “The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended. We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks.”
Plausible? Sure. But the incident exposes a deeper problem.

Why This Matters for AI Companies and Open Source
The Anthropic DMCA debacle sits at the intersection of three tensions that will define the AI industry for the next decade:
1. Open Source vs. Corporate IP Protection
AI companies benefit enormously from open-source ecosystems. They publish research, release models, and encourage developers to build on their platforms. But when something goes wrong — a leak, a security breach — they reach for the legal hammer. The collateral damage to the developer community is treated as an acceptable cost.
This creates a trust deficit. Why fork a company’s public repository if a single mis-scoped DMCA notice from that company can take your fork offline?
2. IPO Readiness and Operational Competence
Anthropic is reportedly planning an initial public offering. The DMCA mess is exactly the kind of operational failure that raises eyebrows at the SEC and among potential investors. Leaking your source code is bad. Issuing a botched takedown notice that wipes out thousands of innocent repositories is worse. Doing both in the same week — while your lawyers should be focused on IPO compliance — is a red flag.
As startup funding breaks records in Q1 2026, investor scrutiny on AI company governance has never been higher.
3. The DMCA Is Not Built for This
The DMCA was written in 1998. It was designed for music piracy and bootleg DVDs. Using it to manage source code leaks in a fork network of 8,100 interconnected repositories is like using a sledgehammer to perform surgery. The tool simply doesn’t have the precision required.
GitHub’s fork network architecture means a single takedown can cascade across thousands of unrelated projects. Anthropic’s legal team either didn’t understand this or didn’t care. Neither option inspires confidence.
The Bigger Picture: AI Companies and Developer Trust
This isn’t just an Anthropic problem. It’s a pattern. AI companies court developers, then treat them as collateral damage when something goes sideways. Remember when OpenAI’s API changes broke thousands of applications? Same energy.
The difference is that Anthropic has positioned itself as the ethical AI company. The one that cares about safety, about alignment, about doing things right. An accidental mass DMCA takedown doesn’t exactly reinforce that brand.
Developers are watching. They’re taking notes. And in a market where every AI company is competing for developer mindshare, incidents like this have compound consequences.
What Should Developers Do About It?
If you’re building on any AI company’s platform or codebase, protect yourself:
- Mirror your repositories. Don’t rely solely on GitHub. Push to GitLab, Codeberg, or a self-hosted instance.
- Document everything. If you have a legitimate fork, keep records of when and why you forked it.
- Understand the licensing. Read the terms carefully. Know what rights you have — and what you don’t.
- Have a backup plan. If your repo goes dark tomorrow, what breaks? Build resilience into your workflow.
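The mirroring advice above can be sketched in a few git commands. The snippet below demonstrates the workflow end to end using local bare repositories as stand-ins for the two hosting providers (the paths are placeholders; in practice you would swap them for your GitHub fork’s URL and a repo you created on GitLab, Codeberg, or a self-hosted instance):

```shell
set -e
work=$(mktemp -d)

# "github.git" and "backup.git" stand in for the two hosting providers.
git init -q --bare "$work/github.git"
git init -q --bare "$work/backup.git"

# Seed the primary with one commit, as if it were your existing fork.
git clone -q "$work/github.git" "$work/fork" 2>/dev/null
( cd "$work/fork" \
  && echo "hello" > README.md \
  && git add README.md \
  && git -c user.name=dev -c user.email=dev@example.com commit -qm "init" \
  && git push -q origin HEAD:main )

# The actual mirroring recipe: a --mirror clone grabs every branch and tag,
# an extra push URL points at the backup host, and --mirror push syncs it all.
git clone -q --mirror "$work/github.git" "$work/mirror.git"
git -C "$work/mirror.git" remote set-url --add --push origin "$work/backup.git"
git -C "$work/mirror.git" push -q --mirror

# The backup host now holds the full history, independent of the primary.
git --git-dir="$work/backup.git" log --oneline main
```

Re-running the final `push --mirror` on a schedule (cron, CI) keeps the backup current; if the primary repo goes dark, the mirror retains every ref.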
The Verdict
Anthropic’s DMCA incident is a cautionary tale. The company leaked its own source code, then overreacted and caught thousands of developers in the blast radius. It’s the kind of compounding error that erodes trust faster than any marketing campaign can rebuild it.
For a company heading toward an IPO, this should be a wake-up call. Operational competence isn’t optional. Developer trust isn’t a luxury. And the DMCA is a terrible tool for managing modern software ecosystems.
The AI industry is built on developer goodwill. Squander it, and no amount of safety research will save you.
Sources: TechCrunch | GitHub DMCA Records
