Security tools for the age of AI coding. Keep your system safe while AI builds your projects.
Millions of people who have never written code are now building software with AI. Tools like Claude Code, Gemini CLI, and Codex let you describe what you want in plain English, and AI writes the code for you. It is incredible.
But here is the catch: these AI tools need deep access to your computer. They read files, write files, install packages, and run commands. Until now, the people using developer environments were security-aware professionals who understood the risks.
New vibe coders often don't know what's risky. They run commands without checking them, install unknown packages, and give AI agents full filesystem access without a second thought.
Malicious actors know this. They are already targeting new developers with fake packages, poisoned dependencies, and social engineering tricks designed to exploit people who don't know what to look for.
We built AirLock to close that gap.
Think of AirLock as a locked room for your AI coding tool. You put the AI inside, give it a copy of your project, and let it work freely. But it can only see your project and talk to its own API. Nothing else.
When the AI is done working, AirLock scans everything it did and shows you exactly what changed. You review the changes, decide what to keep, and only then do those changes reach your real files.
You sign off on the result with a digital signature proving you approved every change. Full control, full transparency, zero surprises.
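To make the idea concrete, here is a minimal sketch of what binding an approval to the exact changes you reviewed can look like. This is illustrative only, not AirLock's actual signing scheme; the key handling and function names are hypothetical, using Python's standard `hmac` and `hashlib` modules:

```python
import hashlib
import hmac

def approve_changes(diff_text: str, approval_key: bytes) -> str:
    """Hypothetical sketch: sign the hash of the reviewed diff so the
    approval is tied to those exact bytes and nothing else."""
    digest = hashlib.sha256(diff_text.encode()).hexdigest()
    return hmac.new(approval_key, digest.encode(), hashlib.sha256).hexdigest()

def verify_approval(diff_text: str, approval_key: bytes, signature: str) -> bool:
    """Check a signature; fails if the diff was altered after sign-off."""
    return hmac.compare_digest(approve_changes(diff_text, approval_key), signature)
```

If even one byte of the diff changes after you sign, verification fails, which is what makes the approval meaningful.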
Six steps from start to finish. The setup takes about two minutes.
One-time setup: AirLock installs everything it needs and checks that your system is ready.
Tell AirLock which folder contains your project. It makes a sealed copy so the original stays untouched.
Your project gets copied into an isolated container. The AI is dropped inside with no way to reach anything else.
The AI coding tool runs with full power inside the sealed environment. It can create files, install packages, and run code, all safely contained.
When you're done, AirLock checks every file the AI created or changed. It flags anything suspicious and shows you a clear summary.
You see exactly what changed. Accept what you want, reject what you don't. Only approved changes reach your real project files.
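The review step above comes down to a per-file diff of what the AI touched. As a rough sketch of the idea (file contents and the `app.py` name are made up; AirLock's real review UI is richer than this), Python's standard `difflib` shows the shape of it:

```python
import difflib

def review_diff(original: str, modified: str, filename: str) -> str:
    """Render what the AI changed in one file as a unified diff."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        modified.splitlines(keepends=True),
        fromfile=f"a/{filename}",
        tofile=f"b/{filename}",
    ))

# Lines the AI removed show up with "-", lines it added with "+".
print(review_diff("print('hello')\n", "print('hello, world')\n", "app.py"))
```

Only after you accept a diff like this do the changes reach your real project files.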
AirLock doesn't rely on a single wall. It stacks eight independent layers so that even if one is bypassed, the others still hold.
Only the AI's own API is reachable. All other internet traffic is blocked.
System files are locked. The AI can only write to your project folder and temporary space.
Every admin capability is dropped. No root access, no special permissions.
Only safe system operations are allowed. Dangerous low-level calls are blocked.
A mandatory access control policy restricts what the container can touch.
Memory, CPU, and process count are all capped so nothing can run away.
A background process watches for suspicious behavior and can kill the session instantly.
The AI cannot look up any website addresses. Only pre-approved connections work.
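Several of these layers map onto standard Docker hardening options. The sketch below approximates them with stock `docker run` flags; the image name, mount paths, and resource limits are placeholder assumptions, and AirLock's actual configuration is not shown (in particular, an API-only network allowlist needs a filtering proxy, which `--network none` alone does not provide):

```python
def build_sandbox_command(project_dir: str, image: str = "airlock-sandbox") -> list[str]:
    """Illustrative docker run invocation stacking several lockdown layers.
    Flags are real Docker options; values are example assumptions."""
    return [
        "docker", "run", "--rm",
        "--network", "none",             # network lockdown: all traffic blocked
        "--read-only",                   # system files locked
        "--cap-drop", "ALL",             # every admin capability dropped
        "--security-opt", "no-new-privileges",
        "--memory", "2g",                # memory cap
        "--cpus", "2",                   # CPU cap
        "--pids-limit", "256",           # process count cap
        "--mount", f"type=bind,source={project_dir},target=/workspace",
        "--tmpfs", "/tmp",               # writable temporary space only
        image,
    ]

cmd = build_sandbox_command("/home/user/myproject")
```

Stacking flags like these is what "independent layers" means in practice: removing any one of them still leaves the others enforcing their own boundary.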
Using Claude Code, Gemini CLI, or any AI coding agent to build projects.
People running AI with auto-approve modes who want a safety net underneath.
Working on client code and need to prove that changes were reviewed and approved.
Anyone who wants to use AI coding tools without worrying about what's happening behind the scenes.
No security tool is perfect. Here's what AirLock does not cover yet, and what you can do about it.
If a malicious package is downloaded, compiled, and executed entirely within the container during the AI session, AirLock's extraction scan won't catch it because the harmful action already happened inside the sealed environment.
What you can do: Keep network lockdown enabled (the default). With only the AI's API reachable, a malicious package has nowhere to send stolen data and can't download additional payloads.
If the AI installs a package that has a malicious install script, that script runs during installation inside the container. AirLock contains the damage but cannot prevent the script from running in the first place.
What you can do: Review the packages your AI installs. Ask it to explain why it needs each dependency. Stick to well-known packages with high download counts and active maintainers.
The extraction scan checks files for known dangerous patterns. But if the AI writes obfuscated or encoded malicious code into output files, the scan may not flag it because it doesn't look suspicious at first glance.
What you can do: Always review the diff of changes before approving extraction. If you see base64 strings, encoded blobs, or code you don't understand, ask your AI tool to explain it before accepting.
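To see why encoded blobs are worth a second look, here is a toy version of that kind of check: a regex that flags long runs of base64-style characters. The pattern and the 40-character threshold are illustrative assumptions, not AirLock's real scanning rules:

```python
import re

# Long unbroken runs of base64 alphabet characters are a common way
# to hide a payload in plain sight. Threshold of 40 is arbitrary here.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def flag_encoded_blobs(source: str) -> list[str]:
    """Return suspicious base64-like runs found in a source file."""
    return BASE64_RUN.findall(source)

sample = 'token = "' + "QUJD" * 12 + '"'   # 48 base64-style characters
print(flag_encoded_blobs(sample))          # flags the 48-character run
```

A check this simple is easy to evade, which is exactly the limitation described above: the human review of the diff, not the pattern scan, is the backstop.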
Docker containers are strong but not unbreakable. Kernel vulnerabilities could theoretically allow a container escape. AirLock's eight layers make this extremely difficult, but no sandbox is 100% escape-proof.
What you can do: Keep your host system and Docker updated. Run AirLock on a VPS rather than your personal machine if you want an extra layer of separation. The watchdog will detect unusual behavior and kill the session.