
Installing NVIDIA NemoClaw Securely (Official Guide + Best Practices)

If you are running an always-on AI assistant like OpenClaw, the question is not if it will try to access something it should not — it is when. NVIDIA NemoClaw exists to answer that question with a policy-enforced “no.”

NemoClaw is an open-source reference stack that wraps OpenClaw inside the NVIDIA OpenShell runtime — a sandboxed environment where every file access, network request, and inference call is governed by declarative policy. This guide walks through installing NemoClaw securely, understanding its architecture, and configuring it for safe operation.

What Is NemoClaw?

NVIDIA NemoClaw is an open-source reference stack that makes running OpenClaw always-on assistants simpler and safer. It installs the NVIDIA OpenShell runtime and creates a sandboxed environment where the agent operates under strict security policies.

GitHub: https://github.com/NVIDIA/NemoClaw

Docs: https://docs.nvidia.com/nemoclaw/

License: Apache 2.0

⚠️ Alpha software. NemoClaw is available in early preview starting March 16, 2026. Interfaces, APIs, and behavior may change without notice. This software is not production-ready. It is shared to gather feedback and enable early experimentation.

Architecture Overview

NemoClaw orchestrates four components through a single CLI:

┌──────────────────────────────────────────────────────┐
│                    Host Machine                       │
│                                                       │
│  ┌─────────────┐    ┌─────────────────────────────┐  │
│  │  nemoclaw    │───▶│  OpenShell Gateway (K3s)    │  │
│  │  CLI         │    │                             │  │
│  └─────────────┘    │  ┌───────────────────────┐  │  │
│                      │  │   Sandbox Container    │  │  │
│  ~/.nemoclaw/        │  │                       │  │  │
│  ├── credentials     │  │  ┌─────────────────┐  │  │  │
│  └── sandboxes       │  │  │   OpenClaw       │  │  │  │
│                      │  │  │   Agent          │  │  │  │
│                      │  │  └────────┬────────┘  │  │  │
│                      │  │           │            │  │  │
│                      │  │  ┌────────▼────────┐  │  │  │
│                      │  │  │ Policy Engine    │  │  │  │
│                      │  │  │ (Landlock +      │  │  │  │
│                      │  │  │  seccomp + netns)│  │  │  │
│                      │  │  └────────┬────────┘  │  │  │
│                      │  │           │            │  │  │
│                      │  └───────────┼────────────┘  │  │
│                      │              │               │  │
│                      │  ┌───────────▼────────────┐  │  │
│                      │  │  Privacy Router /       │  │  │
│                      │  │  Inference Gateway      │  │  │
│                      │  │  (inference.local)      │  │  │
│                      │  └───────────┬────────────┘  │  │
│                      └──────────────┼───────────────┘  │
│                                     │                   │
└─────────────────────────────────────┼───────────────────┘

                          ┌───────────▼───────────┐
                          │  Inference Provider    │
                          │  (NVIDIA Endpoints /   │
                          │   OpenAI / Anthropic / │
                          │   Local Ollama)        │
                          └───────────────────────┘

Component Breakdown

| Component | Role |
| --- | --- |
| nemoclaw CLI | TypeScript CLI that orchestrates the full stack: gateway, sandbox, inference, and policy |
| Blueprint | Versioned Python artifact that handles sandbox creation, digest verification, and reproducible setup |
| OpenShell Gateway | Control plane running as a K3s cluster inside a Docker container — manages sandbox lifecycle |
| Sandbox | Isolated container running OpenClaw with policy-enforced egress and filesystem restrictions |
| Policy Engine | Enforces filesystem, network, and process constraints from the application layer down to the kernel |
| Privacy Router | Intercepts inference calls, strips caller credentials, injects backend credentials, and routes to the configured provider |

The Relationship: OpenClaw → NemoClaw → OpenShell

OpenClaw          — The AI assistant (agent + gateway + channels)
    │
    ▼
NemoClaw          — Reference stack: installs OpenShell, configures
    │                sandbox, inference routing, and network policy
    ▼
OpenShell         — NVIDIA's sandboxed runtime (Landlock + seccomp +
                     network namespaces + privacy router)

Why NemoClaw Exists

OpenClaw is a powerful autonomous agent that can make arbitrary network requests, access the host filesystem, and call any inference endpoint. Without guardrails, this creates three categories of risk:

1. Security Risk

An agent that can execute code and access the network can be exploited through prompt injection — a malicious message that causes the agent to execute unintended commands. Without sandboxing, a compromised agent has the same access as the user running it.

2. Cost Risk

An uncontrolled agent can make unlimited inference API calls. A runaway loop or prompt injection attack could generate thousands of dollars in API charges before anyone notices.

3. Compliance Risk

In regulated environments, you must demonstrate that AI agents cannot access unauthorized data, exfiltrate information, or make uncontrolled network connections. NemoClaw provides the auditable policy layer needed for compliance.

Prerequisites

Hardware Requirements

| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |

⚠️ Memory warning. The sandbox image is approximately 2.4 GB compressed. During image push, Docker, K3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this can trigger the Linux OOM killer. If you cannot add memory, configure at least 8 GB of swap.

Software Requirements

| Dependency | Version | Notes |
| --- | --- | --- |
| Linux | Ubuntu 22.04 LTS or later | Primary supported platform |
| Node.js | 22.16 or later | Used by the nemoclaw CLI |
| npm | 10 or later | Comes with Node.js |
| Docker | Docker Engine (latest) | Must be installed and running |

Verify Prerequisites

# Ubuntu version
lsb_release -a

# Node.js version (22.16+ required)
node --version

# npm version (10+ required)
npm --version

# Docker is running
docker info

Install Missing Dependencies

If Node.js is not installed or is too old:

# Install nvm (download and inspect first)
curl -o /tmp/nvm-install.sh https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh
less /tmp/nvm-install.sh
bash /tmp/nvm-install.sh
source ~/.bashrc

# Install Node.js 22
nvm install 22
nvm use 22

If Docker is not installed:

# Download and inspect the install script
curl -fsSL https://get.docker.com -o /tmp/get-docker.sh
less /tmp/get-docker.sh
sudo sh /tmp/get-docker.sh

# Add your user to the docker group
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker info

Secure Installation

What the Installer Does Internally

Before running the installer, you should understand what it does:

  1. Checks for Node.js — installs it if not present
  2. Installs the nemoclaw npm package globally
  3. Installs OpenShell — the sandbox runtime
  4. Runs the onboarding wizard which:
    • Creates an OpenShell gateway (K3s cluster in a Docker container)
    • Prompts you to select an inference provider
    • Validates the provider connection
    • Creates a sandboxed OpenClaw environment
    • Applies default network and filesystem policies
    • Stores credentials in ~/.nemoclaw/credentials.json

Method 1: Official Installer (Inspect First)

The official install method uses a remote script. Always download and inspect before executing.

# Step 1 — Download the installer
curl -fsSL https://www.nvidia.com/nemoclaw.sh -o /tmp/nemoclaw-install.sh

# Step 2 — Inspect the script
less /tmp/nemoclaw-install.sh

# Step 3 — Run the installer
bash /tmp/nemoclaw-install.sh

⚠️ The official docs show curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash. We recommend downloading first. While NVIDIA is a trusted vendor, piping remote scripts directly to bash is a risky practice — you execute whatever the server sends, with no opportunity to review it.

The onboarding wizard will start automatically. Follow the prompts:

? Select an inference provider:
  ❯ NVIDIA Endpoints
    OpenAI
    Anthropic
    Google Gemini
    Other OpenAI-compatible endpoint
    Other Anthropic-compatible endpoint
    Local Ollama

When installation completes, you will see:

──────────────────────────────────────────────────
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoints)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────

[INFO]  === Installation complete ===

Method 2: Manual Step-by-Step Install

For environments where you need full control over each step:

# Step 1 — Install the nemoclaw CLI
npm install -g nemoclaw@latest

# Step 2 — Verify the installation
nemoclaw --version

# Step 3 — Run onboarding
nemoclaw onboard

The nemoclaw onboard command performs the same steps as the installer script but gives you more control over the process. It will:

  1. Check for OpenShell and install it if needed
  2. Create the OpenShell gateway
  3. Guide you through provider selection and sandbox creation

Method 3: From Source (Full Audit)

# Clone the repository
git clone https://github.com/NVIDIA/NemoClaw.git
cd NemoClaw

# Verify the latest commit (if signed)
git log --show-signature -1

# Review the code, especially:
# - nemoclaw-blueprint/policies/openclaw-sandbox.yaml  (default policy)
# - install scripts
# - any network calls

# Follow the build instructions in the README

Security Deep Dive

Protection Layers

NemoClaw applies defense in depth across four domains:

| Layer | What It Protects | When It Applies | Hot-Reloadable? |
| --- | --- | --- | --- |
| Filesystem | Prevents reads/writes outside /sandbox and /tmp | Locked at sandbox creation | No |
| Network | Blocks unauthorized outbound connections | Enforced at runtime | Yes |
| Process | Blocks privilege escalation and dangerous syscalls via Landlock + seccomp | Locked at sandbox creation | No |
| Inference | Reroutes model API calls to controlled backends | Enforced at runtime | Yes |

Sandboxing (OpenShell)

OpenShell uses three Linux kernel security mechanisms:

Landlock — A Linux security module that restricts filesystem access. The sandbox can only read and write within /sandbox and /tmp. Attempts to access other paths are denied at the kernel level.

seccomp — Secure computing mode that filters system calls. Dangerous syscalls (e.g., those that could be used for privilege escalation) are blocked.

Network namespaces — The sandbox runs in its own network namespace. All outbound traffic is routed through the OpenShell proxy, which enforces the network policy.

Network Policies

Network policies are defined in declarative YAML and enforced by the OpenShell proxy:

# Example: nemoclaw-blueprint/policies/openclaw-sandbox.yaml
network:
  egress:
    # Allow inference endpoint (routed through OpenShell)
    - host: inference.local
      ports: [443]
      methods: ["POST"]

    # Allow npm registry (for skill installation)
    - host: registry.npmjs.org
      ports: [443]
      methods: ["GET"]

    # Everything else is DENIED by default

Key behaviors:

  • Egress is deny-by-default: only hosts listed in the policy are reachable
  • Each rule scopes access by host, port, and HTTP method
  • Inference traffic targets inference.local, where the OpenShell privacy router takes over

Customize Network Policy

# View the current policy
openshell policy get my-assistant

# Apply a custom policy
openshell policy set my-assistant --policy /path/to/custom-policy.yaml --wait

# Apply a preset (e.g., allow PyPI access)
openshell policy set my-assistant \
  --policy nemoclaw-blueprint/policies/presets/pypi.yaml --wait

Inference Routing

Inference requests from the agent never leave the sandbox directly. OpenShell intercepts every call and routes it through the privacy router:

Agent (inside sandbox)

    │  POST https://inference.local/v1/chat/completions


OpenShell Privacy Router

    │  • Strips caller credentials
    │  • Injects backend credentials from host
    │  • Routes to configured provider


Inference Provider (NVIDIA Endpoints / OpenAI / Anthropic / Ollama)

Why this matters:

  • Provider API keys never enter the sandbox, so a compromised agent cannot exfiltrate them
  • The agent always calls the same endpoint (inference.local), so you can switch providers without touching the sandbox
  • Every inference call passes through a single, auditable chokepoint

Supported Inference Providers

| Provider | Notes |
| --- | --- |
| NVIDIA Endpoints | Curated hosted models on integrate.api.nvidia.com — includes Nemotron 3 Super 120B |
| OpenAI | GPT models plus manual model entry |
| Anthropic | Claude models plus manual model entry |
| Google Gemini | Google’s OpenAI-compatible endpoint |
| Other OpenAI-compatible | For proxies and compatible gateways |
| Other Anthropic-compatible | For Claude proxies and compatible gateways |
| Local Ollama | Local inference — no API calls leave the machine |

Running NemoClaw Locally

Connect to the Sandbox

# Connect to the sandbox shell
nemoclaw my-assistant connect

# You are now inside: sandbox@my-assistant:~$

Use the OpenClaw TUI

Inside the sandbox:

openclaw tui

This opens an interactive chat interface. Send a test message and verify you receive a response.

Use the OpenClaw CLI

Inside the sandbox:

openclaw agent --agent main --local -m "hello" --session-id test

This prints the response directly in the terminal.

Monitor from the Host

# Check sandbox status
nemoclaw my-assistant status

# Follow sandbox logs
nemoclaw my-assistant logs --follow

# Check OpenShell sandbox state
openshell sandbox list

# Launch the OpenShell TUI for monitoring and egress approvals
openshell term

Verifying the Installation

Run through this verification checklist:

# 1. NemoClaw CLI is installed
nemoclaw --version

# 2. Sandbox is running
nemoclaw my-assistant status

# 3. OpenShell gateway is healthy
openshell sandbox list

# 4. Connect and test the agent
nemoclaw my-assistant connect
# Inside sandbox:
openclaw agent --agent main --local -m "What is 2+2?" --session-id verify

# 5. Verify network policy is enforced
# Inside sandbox — this should be BLOCKED:
curl -sS https://example.com
# Expected: connection refused or policy_denied

# 6. Verify inference routing works
# Inside sandbox — this should succeed (routed through OpenShell):
openclaw agent --agent main --local -m "Say hello" --session-id verify

# 7. Check host-side credentials are protected
ls -la ~/.nemoclaw/credentials.json
# Should be readable only by your user (600 permissions)

Limitations

⚠️ NemoClaw is alpha software. Keep the following limitations in mind:

  1. Not production-ready — Interfaces, APIs, and behavior may change without notice
  2. Single-player mode — Designed for one developer, one environment, one gateway. Multi-tenant enterprise deployments are not yet supported
  3. Linux-primary — macOS is supported with Colima or Docker Desktop, but Linux is the primary target
  4. Podman not supported on macOS — NemoClaw depends on OpenShell support for Podman, which is not available on macOS yet
  5. Local vLLM is experimental — Local host-routed inference on macOS has additional requirements
  6. No Windows native support — Windows requires WSL2 with Docker Desktop
  7. GPU passthrough is experimental — Requires NVIDIA Container Toolkit and a GPU-enabled sandbox image

Best Practices for Production-Like Setups

1. Use Dedicated Infrastructure

Run NemoClaw on a dedicated VM or server — not on your development workstation. This limits the blast radius if something goes wrong.

# Recommended: 4 vCPU, 16 GB RAM, 40 GB disk
# Ubuntu 22.04 LTS or 24.04 LTS
# Docker installed and running

2. Restrict Host Credentials

# Lock down the credentials file
chmod 600 ~/.nemoclaw/credentials.json
chmod 700 ~/.nemoclaw/

# Use a dedicated API key for NemoClaw — not your personal key
# Set spending limits on your inference provider dashboard

3. Customize the Network Policy

Do not rely on the default policy alone. Create a restrictive policy tailored to your use case:

# custom-policy.yaml — example for a coding assistant
network:
  egress:
    - host: inference.local
      ports: [443]
      methods: ["POST"]
    - host: registry.npmjs.org
      ports: [443]
      methods: ["GET"]
    - host: pypi.org
      ports: [443]
      methods: ["GET"]
    # Deny everything else

Apply it:

openshell policy set my-assistant --policy custom-policy.yaml --wait

4. Monitor Egress Requests

Use the OpenShell TUI to monitor and approve/deny egress requests in real time:

openshell term

Any request to an unlisted host will appear for your approval. Only approve hosts you expect the agent to contact.

5. Use Local Inference When Possible

For maximum privacy, use a local model via Ollama so no inference traffic leaves your machine:

# During onboarding, select "Local Ollama"
nemoclaw onboard

# Or reconfigure later
nemoclaw onboard  # Re-run and select a different provider

6. Automate Backups

Back up your NemoClaw configuration regularly:

# Backup script
tar -czf nemoclaw-backup-$(date +%Y%m%d).tar.gz \
  ~/.nemoclaw/ \
  ~/.openclaw/

7. Keep Up to Date

NemoClaw is evolving rapidly. Stay current:

# Check for updates
npm outdated -g nemoclaw

# Update to the latest version
npm install -g nemoclaw@latest

# Follow the release notes
# https://github.com/NVIDIA/NemoClaw/blob/main/docs/about/release-notes.md

8. Uninstall Cleanly

If you need to remove NemoClaw:

# Download and inspect the uninstall script
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/main/uninstall.sh \
  -o /tmp/nemoclaw-uninstall.sh
less /tmp/nemoclaw-uninstall.sh

# Run with confirmation prompt
bash /tmp/nemoclaw-uninstall.sh

# Or skip the prompt
bash /tmp/nemoclaw-uninstall.sh --yes

The uninstall script removes sandboxes, the NemoClaw gateway, related Docker images and containers, local state directories, and the nemoclaw npm package. It does not remove Docker, Node.js, npm, or Ollama.

OpenClaw vs NemoClaw — Security Comparison

| Aspect | OpenClaw (Standalone) | OpenClaw via NemoClaw |
| --- | --- | --- |
| Filesystem | Full access to user home directory | Restricted to /sandbox and /tmp |
| Network | Unrestricted outbound | Default deny — policy-controlled egress |
| Inference | Direct API calls with your keys exposed to the agent | Privacy-routed — keys never enter the sandbox |
| Process isolation | OS user-level only | Landlock + seccomp + network namespace |
| Credential storage | Keys in ~/.openclaw/ accessible to the agent | Keys in ~/.nemoclaw/ on host only |
| Egress monitoring | Not available | Real-time approval via OpenShell TUI |
| Syscall filtering | None | seccomp blocks dangerous syscalls |
| Setup complexity | Simple — npm install -g openclaw | Moderate — requires Docker + more resources |
| Resource overhead | Low (~200 MB RAM) | Higher (~2-4 GB RAM for gateway + sandbox) |
| Maturity | Stable releases | Alpha — expect breaking changes |

Recommendation: If you are running OpenClaw for personal use on a trusted machine, the standalone install with the hardening steps from the OpenClaw installation guide may be sufficient. If you are running it on shared infrastructure, with sensitive data, or in any environment where prompt injection is a concern, use NemoClaw.


Follow my blog for more guides on AI agent security and infrastructure.

