security · IP Protection · python · obfuscation

Claude Code Was Never Hidden, Just Obfuscated

CLI tools can't live on a server. The code has to be on your machine. So how do companies protect it?

6 min read

TL;DR: Claude Code's source was leaked via a .map file, but the code was always on your machine, just obfuscated. CLI tools can't live on a server, so companies use obfuscation (Node.js) and compilation (Python/Nuitka) to protect IP. After the leak, Claude Code moved from obfuscated JS to a compiled binary.

A few weeks ago, a source map file (.map) was accidentally shipped with Claude Code, making the entire source readable. The internet reacted like it was a leak. But that code had always been on your computer. Every line of it. You just couldn't read it.

This is the fundamental problem with CLI tools. Unlike a SaaS product, where your code stays safely on your servers, a CLI has to run on the user's machine. The code lands in their node_modules/ folder and Node.js reads it locally. There is no server in the middle. So if you want to protect your source, you can't encrypt it, because then the runtime couldn't execute it either. What you can do is obfuscate it.

Obfuscation is not encryption

Encrypted code cannot be executed by anyone or anything without the decryption key. The bytes are scrambled: the machine can't run it, you can't read it, nobody can do anything with it until it's decrypted back.

Obfuscated code is different. The machine runs it perfectly. Every function call, every conditional, every loop works exactly as the original. But a human looking at the source sees variable names like _0x3c4f, control flow that jumps around nonsensically, and strings encoded into hex arrays. The logic is preserved, the readability is destroyed.
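The contrast is easy to demonstrate in a few lines of Python. This is a sketch, not a real pipeline: a toy XOR scramble stands in for actual encryption, and `_0x7f2` is just an illustrative obfuscated name. The point is that the runtime rejects scrambled bytes outright but runs renamed code perfectly.

```python
# Toy demonstration: XOR "encryption" is a stand-in for a real cipher.
original = "def calc(price): return price * 0.8"

key = 0x5A
encrypted = bytes(b ^ key for b in original.encode())

# The runtime cannot execute encrypted bytes: they no longer parse as code.
encrypted_runs = True
try:
    exec(encrypted, {})
except SyntaxError:
    encrypted_runs = False
print("encrypted executes:", encrypted_runs)  # False

# Obfuscated code is the same logic under unreadable names. It runs fine.
obfuscated = "def _0x7f2(_0xe): return _0xe * 0.8"
ns = {}
exec(obfuscated, ns)
print("obfuscated result:", ns["_0x7f2"](100))  # 80.0
```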

A simple example. This is a function before obfuscation:

```javascript
function calculateDiscount(price, userTier) {
  if (userTier === "premium") {
    return price * 0.8;
  }
  return price * 0.95;
}

console.log(calculateDiscount(100, "premium"));
console.log(calculateDiscount(100, "basic"));
```

And this is the same function after running it through an obfuscator:

```javascript
var _0xa3=['\x70\x72\x65\x6d\x69\x75\x6d'];
(function(_0x3c,_0x1a){var _0x2d=function(_0x4e){
while(--_0x4e){_0x3c['push'](_0x3c['shift']());}};
_0x2d(++_0x1a);}(_0xa3,0x1));var _0x4b=function(_0xc,_0xd){
return _0xa3[_0xc-0x0];};function _0x7f2(_0xe,_0xf){
if(_0xf===_0x4b(0x0)){return _0xe*0.8;}return _0xe*0.95;}

console.log(_0x7f2(100, "premium"));
console.log(_0x7f2(100, "basic"));
```

How Node.js packages handle this

In the JavaScript world there are two layers. Minification (Terser, Webpack) strips whitespace, shortens variable names, and collapses code into a compact form. It's primarily for performance, but it makes the code harder to read as a side effect. Actual obfuscation goes much further: tools like javascript-obfuscator encode string literals into arrays, flatten control flow so if/else blocks become switch statements jumping between random labels, inject dead code that looks real but never runs, and wrap everything in self-invoking functions. Functionally identical. Extremely painful to reverse engineer.

```bash
# Minification (Terser) - smaller, less readable
npx terser src/index.js -o dist/index.min.js -c -m

# Obfuscation (javascript-obfuscator) - intentionally unreadable
npx javascript-obfuscator src/index.js \
  --output dist/index.obfuscated.js \
  --string-array true \
  --string-array-encoding rc4 \
  --control-flow-flattening true \
  --dead-code-injection true
```

Then there are source maps. A .map file is a dictionary that maps obfuscated code back to the original source, line by line, so that when an error hits in production the stack trace still points somewhere useful. Companies generate them for internal debugging. They are not supposed to ship them. That's what happened with Claude Code: the .map file went out by mistake, and suddenly anyone could reconstruct readable source from the obfuscated bundle.
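What recovery looks like in practice can be sketched in a few lines. Source maps (the v3 format) often embed the original files verbatim in a `sourcesContent` field, so "reconstruction" is little more than parsing JSON. The map below is a hand-made stand-in for illustration, not Claude Code's actual map:

```python
import json

# A hand-made stand-in for a shipped .map file. Real v3 maps also carry a
# "mappings" field of VLQ-encoded offsets for line-by-line lookup; when
# "sourcesContent" is embedded, the original files can simply be read out.
shipped_map = json.dumps({
    "version": 3,
    "file": "cli.obfuscated.js",
    "sources": ["src/discount.js"],
    "sourcesContent": [
        "function calculateDiscount(price, userTier) {\n"
        "  if (userTier === \"premium\") {\n"
        "    return price * 0.8;\n"
        "  }\n"
        "  return price * 0.95;\n"
        "}\n"
    ],
    "mappings": "AAAA",
})

# "Recovering" the source is just reading the JSON back out.
recovered = json.loads(shipped_map)
for path, content in zip(recovered["sources"], recovered["sourcesContent"]):
    print(f"--- recovered {path} ---")
    print(content)
```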

How to do this in Python

Python is a different beast. It's traditionally interpreted: you ship .py files, and everyone can read them. There are lighter options, like shipping .pyc bytecode or using Cython to compile to C, but both have limitations: bytecode is trivially decompilable, and Cython is built for speedups rather than distribution, paying off mainly when you annotate your code with C types.
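How exposed bytecode is can be shown without any third-party decompiler: the standard library's `dis` module alone recovers every name, constant, and branch (a minimal sketch; dedicated tools go further and emit actual Python source):

```python
import dis
import io

# Compile the discount logic to a code object -- the thing a .pyc stores.
src = (
    "def calculate_discount(price, user_tier):\n"
    "    if user_tier == 'premium':\n"
    "        return price * 0.8\n"
    "    return price * 0.95\n"
)
ns = {}
exec(compile(src, "discount.py", "exec"), ns)
fn_code = ns["calculate_discount"].__code__

# Every argument name and constant survives compilation in plain sight.
print(fn_code.co_varnames)  # ('price', 'user_tier')

# dis turns the bytecode back into a readable instruction listing.
buf = io.StringIO()
dis.dis(fn_code, file=buf)
listing = buf.getvalue()
print(listing)
```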

The real solution is Nuitka. It compiles your entire Python program into native machine code, follows all your imports, resolves dependencies, and produces a binary you can package as a wheel. The user runs pip install your-package and gets a working module. No .py files to open. No bytecode to decompile. Just a compiled binary that Python imports like any other module.

```bash
# Compile a Python package with Nuitka
nuitka --module my_package \
  --include-package=my_package \
  --output-dir=dist/
```

```bash
# What a normal wheel looks like inside
unzip your_package-1.0.0-py3-none-any.whl -d unpacked/
ls unpacked/your_package/
```

```bash
# What a Nuitka-compiled wheel looks like inside
unzip your_package-1.0.0-cp311-cp311-linux_x86_64.whl -d unpacked/
ls unpacked/your_package/
```

What Claude Code did after the leak

Here's where it gets interesting. I was curious about what Claude Code's code actually looks like now, so I went to check. When it was an npm package, you could navigate to node_modules/@anthropic-ai/claude-code/ and find obfuscated .js files. That's the code the .map file decoded.

But when I looked at the current installation, there were no .js files. No node_modules. The entire thing is now a compiled Mach-O binary: a single ~200MB arm64 executable.

```bash
file ~/.local/share/claude/versions/2.1.104
ls -lh ~/.local/share/claude/versions/
```

They moved from obfuscated JavaScript to a fully compiled native binary. No more source to obfuscate. No more .map files that could accidentally leak. No more readable code at all. Just machine instructions.

This is the same progression we described for Python: first you ship something the machine can run but humans struggle to read (bytecode, obfuscated JS), then you compile to native code (Cython/Nuitka). Claude Code went through that evolution in real time, just on the JavaScript side. The .map incident was the trigger that pushed them from "code you can't read" to "there is no code, only a binary."

The code was always there

The source map incident didn't expose something hidden. It made something readable. The obfuscated JavaScript was already sitting in the node_modules/ folder of every developer who installed it. Their computers were reading it, parsing it, executing it every day. The .map file just gave humans the same ability.

And this is the fundamental reality of CLI tools: when your product has to run on someone else's machine, your code is on their machine. Obfuscation makes it impractical to read, not impossible. A veil, not a wall. With enough time and skill, someone can always reverse engineer what the machine is executing. Even a compiled binary is not truly safe from a determined reverse engineer with a disassembler.
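Even a binary leaks information. The classic `strings(1)` heuristic, scanning for runs of printable bytes, pulls literals out of any compiled artifact. Here is a minimal sketch of it in Python, run against marshaled bytecode as a stand-in for a native binary (the literals are fictional, for illustration only):

```python
import marshal
import re

def strings(blob: bytes, min_len: int = 4):
    # The strings(1) heuristic: runs of >= min_len printable ASCII bytes.
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# Stand-in for a compiled artifact: marshal some compiled code to raw bytes,
# the same serialization a .pyc file stores after its header.
code = compile(
    "TIER = 'premium'\nENDPOINT = 'https://api.example.com/v1'",
    "config.py",
    "exec",
)
blob = marshal.dumps(code)

# The string literals survive compilation, sitting in the bytes in the clear.
found = strings(blob)
print(found)
```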

The tools exist. They work. And the next time a source map leaks, remember: the code was always there. You just couldn't read it.