Enterprise AI Agent Runtime — v1.0

Deploy safe, autonomous AI agents with one command.

Diffract sandboxes AI agents with kernel-level isolation — Landlock, seccomp, network namespaces. Route inference across any provider. YAML-based policy control. Built for enterprises that can't afford to trust blindly.

```bash
$ diffract onboard
```
11+ AI Models
1 Command Setup
Zero Key Leakage
20+ Platforms

Powered by battle-tested infrastructure

🛡️ Landlock LSM
🔒 seccomp BPF
🌐 Network Namespaces
🤖 NVIDIA NIM
🧠 Anthropic Claude
OpenAI GPT
☸️ k3s / Kubernetes
🌊 Caddy HTTPS

Everything you need to run AI agents safely

Diffract doesn't just run agents — it contains them. Every process, network call, and file access is audited and enforced at the kernel level.

Kernel-Level Isolation

Landlock filesystem ACLs, seccomp BPF syscall filters, and Linux network namespaces wrap every agent in an impenetrable sandbox. The agent cannot escape. Period.

  • Deny-by-default network egress
  • API keys never enter the sandbox
  • L7 TLS MITM proxy inspection
  • Capability dropping + privilege separation

Multi-Provider Inference

Route to NVIDIA, OpenAI, Anthropic, or Ollama. Switch models at runtime without restarting anything.

YAML Policy Control

Define exactly which hosts, ports, and HTTP methods agents can reach. Presets for Slack, Jira, Telegram, npm, and 5+ more. Apply instantly.
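A policy file might look like the sketch below. The keys and preset names here are illustrative, not Diffract's documented schema:

```yaml
# Hypothetical policy sketch — field names are illustrative, not the real schema
presets:
  - slack                          # expands to Slack's API hosts over HTTPS
rules:
  - host: api.internal.example.com
    ports: [443]
    methods: [GET, POST]           # any other method is denied
```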

20+ Messaging Channels

Telegram, Discord, Slack, WhatsApp, and more. Built-in rate limiting and allowlist controls per channel.
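Per-channel rate limits and allowlists might be expressed like this sketch (keys are illustrative, not the actual config format):

```yaml
# Hypothetical channel config — keys are illustrative
channels:
  telegram:
    rate_limit: 20/min            # throttle inbound messages
    allowlist: ["@ops-team"]      # only these senders reach the agent
  discord:
    rate_limit: 60/min
```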

Production Operations

Auto-restart watchdog, preflight checks, runtime recovery diagnostics, and session-resumable onboarding.

Skills Marketplace

Install agent capabilities from the hub. Reviewed, sandboxed, and deployed in seconds with diffract hub install.

Six layers, zero trust

From CLI command to isolated agent — every layer enforces the security model below it.

HOST MACHINE
⌨️ diffract CLI (Node.js): Onboarding, sandbox CRUD, model management, skills hub, Telegram bridge
🔐 OpenShell Runtime (Rust): Sandbox creation · L7 proxy · TLS MITM · Network policy enforcement
🌊 Caddy Reverse Proxy: Auto-HTTPS · Let's Encrypt · Port 443 → Gateway

ISOLATED SANDBOX (k3s pod)
🛡️ Sandbox entrypoint: Capability drop · Fork bomb prevention · PATH lockdown · SHA256 integrity
🤖 Agent (OpenClaw Gateway): Chat interface · Skill runtime · Port 18789
🔒 Network Namespace: Deny-by-default egress · L7 proxy chains all outbound
inference.local → Provider API: Host injects API key · Sandbox never sees credentials

Your API keys never enter the sandbox. Ever.

Diffract's privacy router strips credentials at the network layer. The agent calls inference.local — the host injects the real key, forwards to the provider, and returns the response. The sandbox remains blind.

01

Deny-by-Default Network

All outbound network traffic is blocked until you explicitly allow it via YAML policy. Apply presets for known services or write custom rules.

02

L7 TLS Inspection

The proxy terminates TLS inside the sandbox, inspects every HTTP request — method, path, headers — and enforces per-binary restrictions.

03

Key Injection at Network Layer

API keys are stored on the host, root-owned, mode 600. The L7 proxy injects them into forwarded requests, so even a fully compromised sandbox never sees a credential.
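The host-side storage model above can be reproduced with standard tools. The path below is illustrative:

```shell
# Sketch: host-side key storage with owner-only permissions (path is illustrative)
umask 177                                      # newly created files get mode 600
printf 'NVIDIA_API_KEY=nvapi-...\n' > /tmp/demo-keys.env
stat -c '%a' /tmp/demo-keys.env                # prints 600: read/write for owner only
```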

04

Operator Approval Flow

When an agent tries to reach an unlisted host, OpenShell blocks it and surfaces the request in the TUI for explicit operator approval.

05

Fork Bomb Prevention

Process limits enforced at sandbox entry. PATH is locked down. Symlinks verified. Privilege separation between gateway and sandbox users.
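The process cap is the same mechanism that `ulimit` exposes; a minimal sketch of the idea, run in a throwaway subshell:

```shell
# Sketch: cap how many processes a shell and its children may spawn
(
  ulimit -u 128   # lower the per-user process limit inside this subshell
  ulimit -u       # prints the effective cap; a fork bomb dies here, not the host
)
```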

06

SHA256 Config Integrity

Root-owned immutable config is hashed on first write. Integrity is verified on each start. Tampered configs prevent sandbox startup.
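The hash-on-first-write, verify-on-start cycle can be sketched with coreutils. Paths here are illustrative:

```shell
# Sketch: record a config hash once, then verify it before every start
CONF=/tmp/demo-diffract.yaml
echo 'policy: deny-by-default' > "$CONF"
sha256sum "$CONF" > "$CONF.sha256"                                 # recorded on first write
sha256sum --check --status "$CONF.sha256" && echo "config intact"  # verified on start
echo 'tampered: true' >> "$CONF"
sha256sum --check --status "$CONF.sha256" || echo "refusing to start"
```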

11 built-in models. Add any.

Switch providers and models at runtime with a single command. No restarts. No downtime. Add custom models without code changes.

Anthropic · Claude Sonnet 4.6 (claude-sonnet-4-6) · Recommended
OpenAI · GPT-4.1 (gpt-4.1) · Default
OpenAI · GPT-4o (gpt-4o)
NVIDIA · Nemotron-Super-120B (nemotron-super-120b)
NVIDIA / Meta · Llama 4 Scout (llama-4-scout)
Custom · Add Your Own · Extend without code changes via the model registry: diffract model add <id> [provider]
Switch models live, with no restart needed:

```bash
$ openshell inference set --provider anthropic-prod --model claude-sonnet-4-6
Provider switched. Takes effect in seconds.
$ openshell inference get
provider: anthropic-prod   model: claude-sonnet-4-6
```

From zero to running agent in minutes

Prerequisites: Ubuntu 24.04, Docker, 4+ cores, 8GB+ RAM.

01

Install Docker & OpenShell

```bash
# Install Docker
curl -fsSL https://get.docker.com | sh

# Install OpenShell runtime
curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh
```
02

Clone & Link Diffract

```bash
git clone https://github.com/hrubee/Diffraction.git ~/diffract
cd ~/diffract/cli && npm install --omit=dev --ignore-scripts
sudo ln -sf ~/diffract/diffract.sh /usr/local/bin/diffract   # writing to /usr/local/bin needs root
```
03

Deploy

```bash
export NVIDIA_API_KEY="nvapi-..."
diffract onboard
# Interactive wizard: name, keys, policies → done
```

Command Reference

diffract onboard                 Interactive setup wizard
diffract list                    List all sandboxes
diffract <name> connect          Shell into sandbox
diffract <name> status           Show sandbox health
diffract <name> logs --follow    Stream live logs
diffract <name> policy-add       Add network policy preset
diffract model list              List available models
diffract hub install <src>       Install skill from GitHub/local
diffract start                   Start Telegram bridge + watchdog
diffract status                  Show system status

Ready to deploy your first agent?

One command. Kernel-level isolation. Any LLM provider. Enterprise-grade security out of the box.