If you are running an always-on AI assistant like OpenClaw, the question is not if it will try to access something it should not — it is when. NVIDIA NemoClaw exists to answer that question with a policy-enforced “no.”
NemoClaw is an open-source reference stack that wraps OpenClaw inside the NVIDIA OpenShell runtime — a sandboxed environment where every file access, network request, and inference call is governed by declarative policy. This guide walks through installing NemoClaw securely, understanding its architecture, and configuring it for safe operation.
What Is NemoClaw?
NVIDIA NemoClaw is an open-source reference stack that makes it simpler and safer to run OpenClaw always-on assistants. It installs the NVIDIA OpenShell runtime and creates a sandboxed environment where the agent operates under strict security policies.
GitHub: https://github.com/NVIDIA/NemoClaw
Docs: https://docs.nvidia.com/nemoclaw/
License: Apache 2.0
⚠️ Alpha software. NemoClaw is available in early preview starting March 16, 2026. Interfaces, APIs, and behavior may change without notice. This software is not production-ready. It is shared to gather feedback and enable early experimentation.
Architecture Overview
NemoClaw orchestrates four components through a single CLI:
┌──────────────────────────────────────────────────────┐
│ Host Machine │
│ │
│ ┌─────────────┐ ┌─────────────────────────────┐ │
│ │ nemoclaw │───▶│ OpenShell Gateway (K3s) │ │
│ │ CLI │ │ │ │
│ └─────────────┘ │ ┌───────────────────────┐ │ │
│ │ │ Sandbox Container │ │ │
│ ~/.nemoclaw/ │ │ │ │ │
│ ├── credentials │ │ ┌─────────────────┐ │ │ │
│ └── sandboxes │ │ │ OpenClaw │ │ │ │
│ │ │ │ Agent │ │ │ │
│ │ │ └────────┬────────┘ │ │ │
│ │ │ │ │ │ │
│ │ │ ┌────────▼────────┐ │ │ │
│ │ │ │ Policy Engine │ │ │ │
│ │ │ │ (Landlock + │ │ │ │
│ │ │ │ seccomp + netns)│ │ │ │
│ │ │ └────────┬────────┘ │ │ │
│ │ │ │ │ │ │
│ │ └───────────┼────────────┘ │ │
│ │ │ │ │
│ │ ┌───────────▼────────────┐ │ │
│ │ │ Privacy Router / │ │ │
│ │ │ Inference Gateway │ │ │
│ │ │ (inference.local) │ │ │
│ │ └───────────┬────────────┘ │ │
│ └──────────────┼───────────────┘ │
│ │ │
└─────────────────────────────────────┼───────────────────┘
│
┌───────────▼───────────┐
│ Inference Provider │
│ (NVIDIA Endpoints / │
│ OpenAI / Anthropic / │
│ Local Ollama) │
└───────────────────────┘
Component Breakdown
| Component | Role |
|---|---|
| nemoclaw CLI | TypeScript CLI that orchestrates the full stack: gateway, sandbox, inference, and policy |
| Blueprint | Versioned Python artifact that handles sandbox creation, digest verification, and reproducible setup |
| OpenShell Gateway | Control-plane running as a K3s cluster inside a Docker container — manages sandbox lifecycle |
| Sandbox | Isolated container running OpenClaw with policy-enforced egress and filesystem restrictions |
| Policy Engine | Enforces filesystem, network, and process constraints from application layer down to kernel |
| Privacy Router | Intercepts inference calls, strips caller credentials, injects backend credentials, and routes to the configured provider |
The Relationship: OpenClaw → NemoClaw → OpenShell
OpenClaw — The AI assistant (agent + gateway + channels)
│
▼
NemoClaw — Reference stack: installs OpenShell, configures
│ sandbox, inference routing, and network policy
▼
OpenShell — NVIDIA's sandboxed runtime (Landlock + seccomp +
network namespaces + privacy router)
- OpenClaw is the agent — it does the work.
- NemoClaw is the installer and orchestrator — it sets up the secure environment.
- OpenShell is the runtime — it enforces the security policies at the kernel level.
Why NemoClaw Exists
OpenClaw is a powerful autonomous agent that can make arbitrary network requests, access the host filesystem, and call any inference endpoint. Without guardrails, this creates three categories of risk:
1. Security Risk
An agent that can execute code and access the network can be exploited through prompt injection — a malicious message that causes the agent to execute unintended commands. Without sandboxing, a compromised agent has the same access as the user running it.
2. Cost Risk
An uncontrolled agent can make unlimited inference API calls. A runaway loop or prompt injection attack could generate thousands of dollars in API charges before anyone notices.
3. Compliance Risk
In regulated environments, you must demonstrate that AI agents cannot access unauthorized data, exfiltrate information, or make uncontrolled network connections. NemoClaw provides the auditable policy layer needed for compliance.
Prerequisites
Hardware Requirements
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
⚠️ Memory warning. The sandbox image is approximately 2.4 GB compressed. During image push, Docker, K3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this can trigger the Linux OOM killer. If you cannot add memory, configure at least 8 GB of swap.
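If you do need swap, here is a minimal sketch for provisioning an 8 GB swap file on Ubuntu. This is standard Linux tooling, nothing NemoClaw-specific:
# Create and enable an 8 GB swap file
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify
free -h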
Software Requirements
| Dependency | Version | Notes |
|---|---|---|
| Linux | Ubuntu 22.04 LTS or later | Primary supported platform |
| Node.js | 22.16 or later | Used by the nemoclaw CLI |
| npm | 10 or later | Comes with Node.js |
| Docker | Docker Engine (latest) | Must be installed and running |
Verify Prerequisites
# Ubuntu version
lsb_release -a
# Node.js version (22.16+ required)
node --version
# npm version (10+ required)
npm --version
# Docker is running
docker info
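If you want these checks in one place, here is a small preflight script. It is a convenience sketch; the thresholds mirror the table above, and it only checks major versions:
#!/usr/bin/env bash
# preflight.sh: fail fast if any prerequisite is missing
set -euo pipefail

# Node.js major version must be 22 or newer
node_major="$(node --version | sed 's/^v//' | cut -d. -f1)"
[ "$node_major" -ge 22 ] || { echo "Node.js 22+ required, found v$node_major"; exit 1; }

# npm major version must be 10 or newer
npm_major="$(npm --version | cut -d. -f1)"
[ "$npm_major" -ge 10 ] || { echo "npm 10+ required"; exit 1; }

# Docker daemon must be reachable
docker info >/dev/null 2>&1 || { echo "Docker is not installed or not running"; exit 1; }

echo "All prerequisites satisfied"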
Install Missing Dependencies
If Node.js is not installed or is too old:
# Install nvm (download and inspect first)
curl -o /tmp/nvm-install.sh https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh
less /tmp/nvm-install.sh
bash /tmp/nvm-install.sh
source ~/.bashrc
# Install Node.js 22
nvm install 22
nvm use 22
If Docker is not installed:
# Download and inspect the install script
curl -fsSL https://get.docker.com -o /tmp/get-docker.sh
less /tmp/get-docker.sh
sudo sh /tmp/get-docker.sh
# Add your user to the docker group
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker info
Secure Installation
What the Installer Does Internally
Before running the installer, you should understand what it does:
- Checks for Node.js — installs it if not present
- Installs the `nemoclaw` npm package globally
- Installs OpenShell — the sandbox runtime
- Runs the onboarding wizard, which:
  - Creates an OpenShell gateway (K3s cluster in a Docker container)
  - Prompts you to select an inference provider
  - Validates the provider connection
  - Creates a sandboxed OpenClaw environment
  - Applies default network and filesystem policies
- Stores credentials in `~/.nemoclaw/credentials.json`
Method 1: Official Installer (Inspect First)
The official install method uses a remote script. Always download and inspect before executing.
# Step 1 — Download the installer
curl -fsSL https://www.nvidia.com/nemoclaw.sh -o /tmp/nemoclaw-install.sh
# Step 2 — Inspect the script
less /tmp/nemoclaw-install.sh
# Step 3 — Run the installer
bash /tmp/nemoclaw-install.sh
⚠️ The official docs show `curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash`. We recommend downloading first. While NVIDIA is a trusted vendor, piping remote scripts directly to bash is a risky practice — you execute whatever the server sends, with no opportunity to review it.
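As an extra audit step, you can record a checksum of the exact script you reviewed, so the file you execute is provably the file you read. This is a local audit trail using plain sha256sum, not vendor-published verification:
# Record the hash of the script you just inspected
sha256sum /tmp/nemoclaw-install.sh | tee ~/nemoclaw-install.sha256
# Re-check immediately before running
sha256sum -c ~/nemoclaw-install.sha256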
The onboarding wizard will start automatically. Follow the prompts:
? Select an inference provider:
❯ NVIDIA Endpoints
OpenAI
Anthropic
Google Gemini
Other OpenAI-compatible endpoint
Other Anthropic-compatible endpoint
Local Ollama
When installation completes, you will see:
──────────────────────────────────────────────────
Sandbox my-assistant (Landlock + seccomp + netns)
Model nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoints)
──────────────────────────────────────────────────
Run: nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs: nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
[INFO] === Installation complete ===
Method 2: Manual Step-by-Step Install
For environments where you need full control over each step:
# Step 1 — Install the nemoclaw CLI
npm install -g nemoclaw@latest
# Step 2 — Verify the installation
nemoclaw --version
# Step 3 — Run onboarding
nemoclaw onboard
The nemoclaw onboard command performs the same steps as the installer script but gives you more control over the process. It will:
- Check for OpenShell and install it if needed
- Create the OpenShell gateway
- Guide you through provider selection and sandbox creation
Method 3: From Source (Full Audit)
# Clone the repository
git clone https://github.com/NVIDIA/NemoClaw.git
cd NemoClaw
# Verify the latest commit (if signed)
git log --show-signature -1
# Review the code, especially:
# - nemoclaw-blueprint/policies/openclaw-sandbox.yaml (default policy)
# - install scripts
# - any network calls
# Follow the build instructions in the README
Security Deep Dive
Protection Layers
NemoClaw applies defense in depth across four domains:
| Layer | What It Protects | When It Applies | Hot-Reloadable? |
|---|---|---|---|
| Filesystem | Prevents reads/writes outside /sandbox and /tmp | Locked at sandbox creation | No |
| Network | Blocks unauthorized outbound connections | Enforced at runtime | Yes |
| Process | Blocks privilege escalation and dangerous syscalls via Landlock + seccomp | Locked at sandbox creation | No |
| Inference | Reroutes model API calls to controlled backends | Enforced at runtime | Yes |
Sandboxing (OpenShell)
OpenShell uses three Linux kernel security mechanisms:
Landlock — A Linux security module that restricts filesystem access. The sandbox can only read and write within /sandbox and /tmp. Attempts to access other paths are denied at the kernel level.
seccomp — Secure computing mode that filters system calls. Dangerous syscalls (e.g., those that could be used for privilege escalation) are blocked.
Network namespaces — The sandbox runs in its own network namespace. All outbound traffic is routed through the OpenShell proxy, which enforces the network policy.
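You can observe these layers from inside a running sandbox. These probes are illustrative and assume the default policy described above; exact error messages and which paths are readable depend on the Landlock ruleset OpenShell ships:
# Inside the sandbox: filesystem layer (Landlock)
echo ok > /tmp/probe            # expect: succeeds (allowed path)
echo ok > /usr/local/probe      # expect: denied (outside /sandbox and /tmp)

# Network namespace: only the proxy route should exist
ip addr                         # expect: loopback plus a single sandbox interface

# seccomp is harder to probe safely: blocked syscalls
# simply fail with EPERM when a process attempts them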
Network Policies
Network policies are defined in declarative YAML and enforced by the OpenShell proxy:
# Example: nemoclaw-blueprint/policies/openclaw-sandbox.yaml
network:
egress:
# Allow inference endpoint (routed through OpenShell)
- host: inference.local
ports: [443]
methods: ["POST"]
# Allow npm registry (for skill installation)
- host: registry.npmjs.org
ports: [443]
methods: ["GET"]
# Everything else is DENIED by default
Key behaviors:
- Default deny — all outbound traffic is blocked unless explicitly allowed
- Operator approval — when the agent tries to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for real-time approval (see the walkthrough after this list)
- Approved endpoints persist for the current session only — they do not carry over across restarts
- Presets available — NemoClaw ships preset policies for common integrations (PyPI, Docker Hub, Slack, Jira) in `nemoclaw-blueprint/policies/presets/`
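A concrete way to watch the approval flow, using only the commands documented in this guide (the target host is an arbitrary example, and whether the blocked request fails immediately or waits may vary by version):
# Terminal 1 (host): open the OpenShell TUI to catch egress requests
openshell term

# Terminal 2 (inside sandbox): request a host that is not in the policy
curl -sS https://api.github.com
# Expect a denial (connection refused or policy_denied) plus an
# approval prompt in the TUI; after approving, re-run the curl.
# The approval lasts for the current session only.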
Customize Network Policy
# View the current policy
openshell policy get my-assistant
# Apply a custom policy
openshell policy set my-assistant --policy /path/to/custom-policy.yaml --wait
# Apply a preset (e.g., allow PyPI access)
openshell policy set my-assistant \
--policy nemoclaw-blueprint/policies/presets/pypi.yaml --wait
Inference Routing
Inference requests from the agent never leave the sandbox directly. OpenShell intercepts every call and routes it through the privacy router:
Agent (inside sandbox)
│
│ POST https://inference.local/v1/chat/completions
│
▼
OpenShell Privacy Router
│
│ • Strips caller credentials
│ • Injects backend credentials from host
│ • Routes to configured provider
│
▼
Inference Provider (NVIDIA Endpoints / OpenAI / Anthropic / Ollama)
Why this matters:
- The agent never sees your raw API keys — they are injected by the router on the host side
- Credentials are stored in `~/.nemoclaw/credentials.json` on the host, not inside the sandbox
- You can switch providers without reconfiguring the agent
- Provider-specific rate limits and usage tracking apply at the router level
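To see the credential handling for yourself, you can call the router directly from inside the sandbox. This sketch assumes the router exposes an OpenAI-compatible path and injects credentials for any sandbox request to inference.local, not just the agent's own calls:
# Inside the sandbox: note there is no API key anywhere in this request
curl -sS https://inference.local/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "nvidia/nemotron-3-super-120b-a12b", "messages": [{"role": "user", "content": "ping"}]}'
# The privacy router adds backend credentials on the host side,
# so a compromised agent has no key to exfiltrate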
Supported Inference Providers
| Provider | Notes |
|---|---|
| NVIDIA Endpoints | Curated hosted models on integrate.api.nvidia.com — includes Nemotron 3 Super 120B |
| OpenAI | GPT models plus manual model entry |
| Anthropic | Claude models plus manual model entry |
| Google Gemini | Google’s OpenAI-compatible endpoint |
| Other OpenAI-compatible | For proxies and compatible gateways |
| Other Anthropic-compatible | For Claude proxies and compatible gateways |
| Local Ollama | Local inference — no API calls leave the machine |
Running NemoClaw Locally
Connect to the Sandbox
# Connect to the sandbox shell
nemoclaw my-assistant connect
# You are now inside: sandbox@my-assistant:~$
Use the OpenClaw TUI
Inside the sandbox:
openclaw tui
This opens an interactive chat interface. Send a test message and verify you receive a response.
Use the OpenClaw CLI
Inside the sandbox:
openclaw agent --agent main --local -m "hello" --session-id test
This prints the response directly in the terminal.
Monitor from the Host
# Check sandbox status
nemoclaw my-assistant status
# Follow sandbox logs
nemoclaw my-assistant logs --follow
# Check OpenShell sandbox state
openshell sandbox list
# Launch the OpenShell TUI for monitoring and egress approvals
openshell term
Verifying the Installation
Run through this verification checklist:
# 1. NemoClaw CLI is installed
nemoclaw --version
# 2. Sandbox is running
nemoclaw my-assistant status
# 3. OpenShell gateway is healthy
openshell sandbox list
# 4. Connect and test the agent
nemoclaw my-assistant connect
# Inside sandbox:
openclaw agent --agent main --local -m "What is 2+2?" --session-id verify
# 5. Verify network policy is enforced
# Inside sandbox — this should be BLOCKED:
curl -sS https://example.com
# Expected: connection refused or policy_denied
# 6. Verify inference routing works
# Inside sandbox — this should succeed (routed through OpenShell):
openclaw agent --agent main --local -m "Say hello" --session-id verify
# 7. Check host-side credentials are protected
ls -la ~/.nemoclaw/credentials.json
# Should be readable only by your user (600 permissions)
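If you want the host-side checks scripted, here is a sketch that wraps the commands above. It assumes the nemoclaw and openshell commands exit nonzero on failure, which is typical CLI behavior but not confirmed in the docs:
#!/usr/bin/env bash
# verify-nemoclaw.sh: host-side checks from the list above
set -euo pipefail
nemoclaw --version
nemoclaw my-assistant status
openshell sandbox list
perms="$(stat -c '%a' ~/.nemoclaw/credentials.json)"
[ "$perms" = "600" ] || { echo "credentials.json perms are $perms, expected 600"; exit 1; }
echo "Host-side verification passed"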
Limitations
⚠️ NemoClaw is alpha software. Keep the following limitations in mind:
- Not production-ready — Interfaces, APIs, and behavior may change without notice
- Single-player mode — Designed for one developer, one environment, one gateway. Multi-tenant enterprise deployments are not yet supported
- Linux-primary — macOS is supported with Colima or Docker Desktop, but Linux is the primary target
- Podman not supported on macOS — NemoClaw depends on OpenShell support for Podman, which is not available on macOS yet
- Local vLLM is experimental — Local host-routed inference on macOS has additional requirements
- No Windows native support — Windows requires WSL2 with Docker Desktop
- GPU passthrough is experimental — Requires NVIDIA Container Toolkit and a GPU-enabled sandbox image
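If you plan to experiment with GPU passthrough anyway, first confirm that the NVIDIA Container Toolkit works at the plain Docker level. This is a standard toolkit smoke test, independent of NemoClaw; the CUDA image tag is one example:
# Verify the toolkit can expose the GPU to containers
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
# Expect the nvidia-smi table listing your GPU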
Best Practices for Production-Like Setups
1. Use Dedicated Infrastructure
Run NemoClaw on a dedicated VM or server — not on your development workstation. This limits the blast radius if something goes wrong.
# Recommended: 4 vCPU, 16 GB RAM, 40 GB disk
# Ubuntu 22.04 LTS or 24.04 LTS
# Docker installed and running
2. Restrict Host Credentials
# Lock down the credentials file
chmod 600 ~/.nemoclaw/credentials.json
chmod 700 ~/.nemoclaw/
# Use a dedicated API key for NemoClaw — not your personal key
# Set spending limits on your inference provider dashboard
3. Customize the Network Policy
Do not rely on the default policy alone. Create a restrictive policy tailored to your use case:
# custom-policy.yaml — example for a coding assistant
network:
egress:
- host: inference.local
ports: [443]
methods: ["POST"]
- host: registry.npmjs.org
ports: [443]
methods: ["GET"]
- host: pypi.org
ports: [443]
methods: ["GET"]
# Deny everything else
Apply it:
openshell policy set my-assistant --policy custom-policy.yaml --wait
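After applying, confirm the policy took effect before trusting it. This re-uses the commands documented above; the in-sandbox probes assume the example policy shown:
# On the host: the active policy should now list pypi.org
openshell policy get my-assistant

# Inside the sandbox: the allowed host succeeds, everything else stays blocked
curl -sS https://pypi.org/simple/ -o /dev/null && echo "pypi.org: allowed"
curl -sS https://example.com >/dev/null 2>&1 \
  && echo "UNEXPECTED: example.com reachable" \
  || echo "example.com blocked, as expected"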
4. Monitor Egress Requests
Use the OpenShell TUI to monitor and approve/deny egress requests in real time:
openshell term
Any request to an unlisted host will appear for your approval. Only approve hosts you expect the agent to contact.
5. Use Local Inference When Possible
For maximum privacy, use a local model via Ollama so no inference traffic leaves your machine:
# During onboarding, select "Local Ollama"
nemoclaw onboard
# Or reconfigure later
nemoclaw onboard # Re-run and select a different provider
6. Automate Backups
Back up your NemoClaw configuration regularly:
# Backup script
tar -czf nemoclaw-backup-$(date +%Y%m%d).tar.gz \
~/.nemoclaw/ \
~/.openclaw/
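To actually automate it, schedule the backup with cron. This is standard crontab usage; the destination directory is an example, and note that % must be escaped as \% inside a crontab entry:
# Create a destination and schedule a nightly backup at 02:00
mkdir -p ~/backups
( crontab -l 2>/dev/null; echo '0 2 * * * tar -czf $HOME/backups/nemoclaw-$(date +\%Y\%m\%d).tar.gz $HOME/.nemoclaw $HOME/.openclaw' ) | crontab -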
7. Keep Up to Date
NemoClaw is evolving rapidly. Stay current:
# Check for updates
npm outdated -g nemoclaw
# Update
npm install -g nemoclaw@latest
# Follow the release notes
# https://github.com/NVIDIA/NemoClaw/blob/main/docs/about/release-notes.md
8. Uninstall Cleanly
If you need to remove NemoClaw:
# Download and inspect the uninstall script
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/main/uninstall.sh \
-o /tmp/nemoclaw-uninstall.sh
less /tmp/nemoclaw-uninstall.sh
# Run with confirmation prompt
bash /tmp/nemoclaw-uninstall.sh
# Or skip the prompt
bash /tmp/nemoclaw-uninstall.sh --yes
The uninstall script removes sandboxes, the NemoClaw gateway, related Docker images and containers, local state directories, and the nemoclaw npm package. It does not remove Docker, Node.js, npm, or Ollama.
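You can confirm the removal afterward with a few spot checks. The names of leftover images and containers may differ by version, so the grep pattern here is a guess:
# The CLI should be gone
command -v nemoclaw || echo "nemoclaw removed"
# State directories should be gone
ls ~/.nemoclaw 2>/dev/null || echo "~/.nemoclaw removed"
# No NemoClaw/OpenShell containers should remain
docker ps -a | grep -iE 'nemoclaw|openshell' || echo "no containers left"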
OpenClaw vs NemoClaw — Security Comparison
| Aspect | OpenClaw (Standalone) | OpenClaw via NemoClaw |
|---|---|---|
| Filesystem | Full access to user home directory | Restricted to /sandbox and /tmp |
| Network | Unrestricted outbound | Default deny — policy-controlled egress |
| Inference | Direct API calls with your keys exposed to the agent | Privacy-routed — keys never enter the sandbox |
| Process isolation | OS user-level only | Landlock + seccomp + network namespace |
| Credential storage | Keys in ~/.openclaw/ accessible to the agent | Keys in ~/.nemoclaw/ on host only |
| Egress monitoring | Not available | Real-time approval via OpenShell TUI |
| Syscall filtering | None | seccomp blocks dangerous syscalls |
| Setup complexity | Simple — npm install -g openclaw | Moderate — requires Docker + more resources |
| Resource overhead | Low (~200 MB RAM) | Higher (~2-4 GB RAM for gateway + sandbox) |
| Maturity | Stable releases | Alpha — expect breaking changes |
Recommendation: If you are running OpenClaw for personal use on a trusted machine, the standalone install with the hardening steps from the OpenClaw installation guide may be sufficient. If you are running it on shared infrastructure, with sensitive data, or in any environment where prompt injection is a concern, use NemoClaw.
Follow my blog for more guides on AI agent security and infrastructure.