Diffract sandboxes AI agents with kernel-level isolation — Landlock, seccomp, network namespaces. Route inference across any provider. YAML-based policy control. Built for enterprises that can't afford to trust blindly.
Powered by battle-tested infrastructure
Diffract doesn't just run agents — it contains them. Every process, network call, and file access is audited and enforced at the kernel level.
Landlock filesystem ACLs, seccomp BPF syscall filters, and Linux network namespaces wrap every agent in an impenetrable sandbox. The agent cannot escape. Period.
Route to NVIDIA, OpenAI, Anthropic, or Ollama. Switch models at runtime without restarting anything.
Define exactly which hosts, ports, and HTTP methods agents can reach. Presets for Slack, Jira, Telegram, npm, and 5+ more. Apply instantly.
Telegram, Discord, Slack, WhatsApp, and more. Built-in rate limiting and allowlist controls per channel.
Auto-restart watchdog, preflight checks, runtime recovery diagnostics, and session-resumable onboarding.
Install agent capabilities from the hub. Reviewed, sandboxed, and deployed in seconds with diffract hub install.
From CLI command to isolated agent — every layer enforces the security model below it.
Diffract's privacy router strips credentials at the network layer. The agent calls inference.local — the host injects the real key, forwards to the provider, and returns the response. The sandbox remains blind.
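As a toy illustration of that flow (the inference.local hostname comes from the description above; the key path, header format, and helper name are assumptions for illustration, not Diffract's documented behavior):

```shell
#!/bin/sh
# Toy sketch of host-side key injection. The agent's request carries no
# credentials; the host proxy attaches the real key before forwarding.
KEY_FILE=/etc/diffract/nvidia.key   # hypothetical host-only key store

forward_request() {
  # Inside the sandbox the agent only ever sees this placeholder target.
  agent_target="http://inference.local/v1/chat"
  # On the host, the proxy reads the real key (falling back to a demo
  # value here so the sketch runs anywhere) and injects the header.
  key=$(cat "$KEY_FILE" 2>/dev/null || echo "nvapi-demo")
  echo "POST $agent_target"
  echo "Authorization: Bearer $key"
}

forward_request
```

The point of the design is that the sandbox never holds a secret worth stealing; the injection step exists only outside the isolation boundary.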
All outbound network traffic is blocked until you explicitly allow it via YAML policy. Apply presets for known services or write custom rules.
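A sketch of what such a default-deny rule might look like. The YAML field names (network, default, allow, host, ports, methods) are assumptions for illustration, not Diffract's documented schema:

```shell
#!/bin/sh
# Stage a hypothetical default-deny policy with one explicit allow rule.
cat > /tmp/agent-policy.yaml <<'EOF'
# Everything outbound is blocked unless listed below.
network:
  default: deny
  allow:
    - host: api.slack.com      # preset-style rule for a known service
      ports: [443]
      methods: [GET, POST]
EOF

grep -q "default: deny" /tmp/agent-policy.yaml && echo "policy staged"
```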
The proxy terminates TLS inside the sandbox, inspects every HTTP request — method, path, headers — and enforces per-binary restrictions.
API keys are stored on the host, root-owned, mode 600. The L7 proxy injects them on forwarded requests, so keys never enter the sandbox even if it is compromised.
When an agent tries to reach an unlisted host, OpenShell blocks it and surfaces the request in the TUI for explicit operator approval.
Process limits enforced at sandbox entry. PATH is locked down. Symlinks verified. Privilege separation between gateway and sandbox users.
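The same ideas can be sketched in a few lines of shell. The specific limits, paths, and checks below are illustrative, not what Diffract actually enforces:

```shell
#!/bin/sh
# Illustrative sandbox-entry hardening, not Diffract's real entrypoint.
ulimit -u 256 2>/dev/null || true   # cap process count (where the shell supports -u)
PATH=/usr/bin:/bin                  # pin PATH to root-owned directories only
export PATH

# Toy version of symlink verification: refuse a symlinked working directory.
workdir=/tmp/sandbox-demo
mkdir -p "$workdir"
if [ -L "$workdir" ]; then
  echo "refusing symlinked workdir" >&2
  exit 1
fi
echo "entry checks passed"
```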
Root-owned immutable config is hashed on first write. Integrity is verified on each start. Tampered configs prevent sandbox startup.
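The check can be approximated with standard tools. This sketch uses sha256sum and an illustrative path, not Diffract's actual hashing mechanism:

```shell
#!/bin/sh
# First write: record a hash of the config alongside it.
conf=/tmp/diffract-demo.yaml
printf 'network:\n  default: deny\n' > "$conf"
sha256sum "$conf" | cut -d' ' -f1 > "$conf.sha256"

# Every start: refuse to boot if the config no longer matches the hash.
current=$(sha256sum "$conf" | cut -d' ' -f1)
recorded=$(cat "$conf.sha256")
if [ "$current" = "$recorded" ]; then
  echo "config OK: starting sandbox"
else
  echo "config tampered: refusing to start" >&2
  exit 1
fi
```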
Switch providers and models at runtime with a single command. No restarts. No downtime. Add custom models without code changes.
Available models include claude-sonnet-4-6, gpt-4.1, gpt-4o, nemotron-super-120b, and llama-4-scout. Add your own with: diffract model add <id> [provider]
Prerequisites: Ubuntu 24.04, Docker, 4+ cores, 8GB+ RAM.
# Install Docker
curl -fsSL https://get.docker.com | sh
# Install OpenShell runtime
curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh
git clone https://github.com/hrubee/Diffraction.git ~/diffract
cd ~/diffract/cli && npm install --omit=dev --ignore-scripts
ln -sf ~/diffract/diffract.sh /usr/local/bin/diffract
export NVIDIA_API_KEY="nvapi-..."
diffract onboard  # Interactive wizard: name, keys, policies → done
diffract onboard - Interactive setup wizard
diffract list - List all sandboxes
diffract <name> connect - Shell into sandbox
diffract <name> status - Show sandbox health
diffract <name> logs --follow - Stream live logs
diffract <name> policy-add - Add network policy preset
diffract model list - List available models
diffract hub install <src> - Install skill from GitHub/local
diffract start - Start Telegram bridge + watchdog
diffract status - Show system status

One command. Kernel-level isolation. Any LLM provider. Enterprise-grade security out of the box.