Following my earlier post on running OpenClaw in Docker on macOS, I wanted to explore the next level: NemoClaw, NVIDIA's open source reference stack that wraps OpenClaw in a proper security sandbox powered by NVIDIA OpenShell.
NemoClaw was demoed at the GTC keynote and is fresh out of the oven (alpha, released March 16, 2026), so this is an early look. Let's get into it.
NemoClaw adds three layers of protection around your OpenClaw agent:
| Layer | What it does | Provided by |
|---|---|---|
| Network | Blocks unauthorized outbound connections (hot-reloadable) | OpenShell network policy |
| Filesystem | Prevents reads/writes outside /sandbox and /tmp | Linux Landlock |
| Process | Blocks privilege escalation and dangerous syscalls | seccomp |
Instead of a bare OpenClaw instance, your agent now runs inside an OpenShell sandbox: isolated, policy-controlled, and observable.
- macOS with Apple Silicon (M-series); I'm on an M4 Pro
- Docker Desktop running
- Node.js 20+ (`node --version`: v24.5.0 ✓)
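If you want to gate on the Node.js requirement before running the installer, a minimal version check looks like this (the version string is hard-coded here so the sketch runs anywhere; in practice you'd capture `node --version`):

```shell
# Minimal Node.js version gate, assuming the "vMAJOR.MINOR.PATCH" output
# format of `node --version`.
ver="v24.5.0"                       # in practice: ver="$(node --version)"
major="${ver#v}"; major="${major%%.*}"
if [ "$major" -ge 20 ]; then
  echo "Node.js OK ($ver)"
else
  echo "Node.js 20+ required, found $ver"
fi
```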
One command installs the NemoClaw CLI and kicks off the full onboarding wizard:
`curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash`

The installer runs three stages:
[1/3] Node.js
──────────────────────────────────────────────────
[INFO] Node.js found: v24.5.0
[INFO] Runtime OK: Node.js v24.5.0, npm 11.5.1
[2/3] NemoClaw CLI
──────────────────────────────────────────────────
[INFO] Installing NemoClaw from GitHub…
✓ Cloning NemoClaw source
✓ Preparing OpenClaw package
✓ Installing NemoClaw dependencies
✓ Building NemoClaw plugin
✓ Linking NemoClaw CLI
[INFO] Verified: nemoclaw is available at /opt/homebrew/bin/nemoclaw
[3/3] Onboarding
──────────────────────────────────────────────────
[INFO] Running nemoclaw onboard…
[INFO] Installer stdin is piped; attaching onboarding to /dev/tty…
The installer detects it's running via curl | bash and automatically attaches the interactive onboarding wizard to your terminal's TTY, so no need to run a separate command.
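The detection trick is standard shell: when a script arrives via `curl | bash`, its stdin is a pipe rather than a terminal, which `[ -t 0 ]` can test. A sketch of the idea (the real installer then re-opens /dev/tty for the wizard; this function is illustrative, not NemoClaw's code):

```shell
# A curl|bash script can tell it is being piped because file descriptor 0
# (stdin) is not a terminal.
stdin_mode() {
  if [ -t 0 ]; then echo "interactive"; else echo "piped"; fi
}
stdin_mode </dev/null    # stdin is a file, not a terminal, so: piped
```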
✓ Docker is running
✓ Container runtime: docker-desktop
openshell CLI not found. Installing...
✓ openshell CLI: openshell 0.0.14
✓ Port 8080 available (OpenShell gateway)
✓ Port 18789 available (NemoClaw dashboard)
✓ Apple GPU detected: Apple M4 Pro (20 cores), 49152 MB unified memory
ⓘ NIM requires NVIDIA GPU — will use cloud inference
The M4 Pro GPU is detected, but NIM (NVIDIA Inference Microservices) requires a discrete NVIDIA GPU. On Apple Silicon, NemoClaw falls back to cloud inference via the NVIDIA Endpoint API (more on that in step 4). The installer also automatically downloads the OpenShell CLI if it's not already present.
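The fallback decision reduces to a check on the host architecture: Apple Silicon reports `arm64`, and there is no local NVIDIA GPU for NIM to use. A toy version of that branch (the return values are my own labels, not NemoClaw's actual output):

```shell
# Sketch of the installer's inference-mode decision, keyed on `uname -m`.
inference_mode() {
  case "$1" in
    arm64|aarch64) echo "cloud" ;;              # Apple Silicon: no local NIM
    x86_64)        echo "local-if-nvidia-gpu" ;; # check nvidia-smi next
    *)             echo "unknown" ;;
  esac
}
inference_mode "$(uname -m)"
```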
Using pinned OpenShell gateway image: ghcr.io/nvidia/openshell/cluster:0.0.14
✓ Checking Docker
✓ Downloading gateway
✓ Initializing environment
✓ Starting gateway
✓ Gateway ready
Name: nemoclaw
Endpoint: https://127.0.0.1:8080
✓ Active gateway set to 'nemoclaw'
✓ Gateway is healthy
OpenShell spins up a local gateway that mediates all network and inference calls from the sandbox.
Sandbox name (lowercase, numbers, hyphens) [my-assistant]: my-nemoclaw
Creating sandbox 'my-nemoclaw' (this takes a few minutes on first run)...
Building image openshell/sandbox-from:1774289339 from Dockerfile
Built image openshell/sandbox-from:1774289339
Pushing image openshell/sandbox-from:1774289339 into gateway "nemoclaw"
[progress] Exported 498 MiB
[progress] Uploaded to gateway
Image openshell/sandbox-from:1774289339 is available in the gateway.
Waiting for sandbox to become ready...
✓ Forwarding port 18789 to sandbox my-nemoclaw in the background
Access at: http://127.0.0.1:18789/
Stop with: openshell forward stop 18789 my-nemoclaw
✓ Sandbox 'my-nemoclaw' created
The wizard prompts for a sandbox name: it must be lowercase letters, numbers, and hyphens. The default is my-assistant, but I used my-nemoclaw to make it easy to identify. Pick something meaningful if you plan to run multiple sandboxes side by side.
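The naming rule is easy to check yourself before the wizard rejects an input. The pattern below is my reading of the prompt's constraint (lowercase letters, digits, hyphens), not NemoClaw's actual validation code:

```shell
# Validate a sandbox name against the wizard's stated rule.
valid_name() { printf '%s' "$1" | grep -Eq '^[a-z0-9-]+$'; }
valid_name my-nemoclaw && echo "ok: my-nemoclaw"
valid_name "My_Sandbox" || echo "rejected: My_Sandbox"
```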
The sandbox image is ~498 MiB. First run takes a few minutes while it builds and pushes into the gateway.
Inference options:
1) NVIDIA Endpoint API (build.nvidia.com) (recommended)
2) Local Ollama (localhost:11434)
Choose [1]:
I chose option 1 (NVIDIA Endpoint API). You'll need an API key from build.nvidia.com:
┌─────────────────────────────────────────────────────────────────┐
│ NVIDIA API Key required │
│ │
│ 1. Go to https://build.nvidia.com/settings/api-keys │
│ 2. Sign in with your NVIDIA account │
│ 3. Click 'Generate API Key' button │
│ 4. Paste the key below (starts with nvapi-) │
└─────────────────────────────────────────────────────────────────┘
NVIDIA API Key: nvapi-****
Key saved to ~/.nemoclaw/credentials.json (mode 600)
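Mode 600 (owner read/write only) is the right call for a credentials file. If you're curious how a file ends up with those bits, the usual approach is to create it under `umask 077` so group/other permissions are masked off at creation time; the path and JSON shape below are illustrative, not NemoClaw's actual code:

```shell
# Create a key file that is readable and writable only by the owner.
dir="$(mktemp -d)"
( umask 077; printf '{"nvidia_api_key":"nvapi-****"}\n' > "$dir/credentials.json" )
ls -l "$dir/credentials.json" | cut -c1-10   # mode bits: -rw-------
```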
Available cloud models:
1) Nemotron 3 Super 120B (nvidia/nemotron-3-super-120b-a12b)
2) Kimi K2.5 (moonshotai/kimi-k2.5)
3) GLM-5 (z-ai/glm5)
4) MiniMax M2.5 (minimaxai/minimax-m2.5)
5) Qwen3.5 397B A17B (qwen/qwen3.5-397b-a17b)
6) GPT-OSS 120B (openai/gpt-oss-120b)
Choose model [1]:
Using NVIDIA Endpoint API with model: nvidia/nemotron-3-super-120b-a12b
I went with Nemotron 3 Super 120B, the flagship NVIDIA model and the default recommendation.
✓ Created provider nvidia-nim
Route: inference.local
Provider: nvidia-nim
Model: nvidia/nemotron-3-super-120b-a12b
Version: 1
✓ Inference route set: nvidia-nim / nvidia/nemotron-3-super-120b-a12b
All inference calls from inside the sandbox are routed through inference.local; the agent never directly reaches the internet.
✓ OpenClaw gateway launched inside sandbox
Available policy presets:
○ discord — Discord API, gateway, and CDN access
○ docker — Docker Hub and NVIDIA container registry access
○ huggingface — Hugging Face Hub, LFS, and Inference API access
○ jira — Jira and Atlassian Cloud access
○ npm — npm and Yarn registry access (suggested)
○ outlook — Microsoft Outlook and Graph API access
○ pypi — Python Package Index (PyPI) access (suggested)
○ slack — Slack API and webhooks access
○ telegram — Telegram Bot API access
Apply suggested presets (pypi, npm)? [Y/n/list]: list
Enter preset names (comma-separated): pypi,npm,telegram
✓ Policy version 2 loaded — Applied preset: pypi
✓ Policy version 3 loaded — Applied preset: npm
✓ Policy version 4 loaded — Applied preset: telegram
✓ Policies applied
Presets whitelist specific external services in the network egress policy. The wizard suggests pypi and npm for a typical development agent. The prompt has three modes:
| Response | Behavior |
|---|---|
| `Y` (or Enter) | Applies only the suggested presets |
| `n` | Skips all presets |
| `list` | Prompts for a comma-separated list of preset names to apply |
To apply the suggested presets plus additional ones, type list and enter all the presets you want: pypi,npm,telegram.
I added telegram here intentionally. A future goal is to connect the OpenClaw agent to a Telegram bot, so the sandbox will need outbound access to the Telegram Bot API. Adding the preset now means the network policy is already in place when that integration is ready.
──────────────────────────────────────────────────
Sandbox my-nemoclaw (Landlock + seccomp + netns)
Model nvidia/nemotron-3-super-120b-a12b (NVIDIA Endpoint API)
NIM not running
──────────────────────────────────────────────────
Next:
Run: nemoclaw my-nemoclaw connect
Status: nemoclaw my-nemoclaw status
Logs: nemoclaw my-nemoclaw logs --follow
──────────────────────────────────────────────────
[INFO] === Installation complete ===
NemoClaw (699s)
Your OpenClaw Sandbox is live.
Sandbox in, break things, and tell us what you find.
Next:
$ nemoclaw my-nemoclaw connect
sandbox@my-nemoclaw$ openclaw tui
The sandbox is running with Landlock + seccomp + netns, providing Linux kernel-level isolation for the filesystem, syscalls, and the network, respectively.
`nemoclaw my-nemoclaw connect`

This drops you into the sandbox shell: `sandbox@my-nemoclaw:~$`
From there, launch the OpenClaw TUI:
`openclaw tui`

The TUI opens an interactive chat interface. Send a message to verify the full inference route is working end-to-end through the sandbox:
The status bar confirms everything is wired up correctly: connected | idle, the agent session, and inference/nvidia/nemotron-3-super-120b-a12b showing the NVIDIA Endpoint API is being used.
When you type hello and hit Enter, here's what happens across the stack:
- TUI → OpenClaw agent: the message is sent over WebSocket to the OpenClaw gateway running inside the sandbox (`ws://127.0.0.1:18789`)
- Agent → OpenShell proxy: the agent makes an inference call to `inference.local`, which is intercepted by the OpenShell gateway rather than going directly to the internet
- OpenShell → NVIDIA Endpoint API: the gateway routes the call to the configured `nvidia-nim` provider at build.nvidia.com, forwarding the prompt to `nvidia/nemotron-3-super-120b-a12b`
- NVIDIA Endpoint API → OpenShell → Agent: the model response travels back through the gateway to the agent
- Agent → TUI: the agent streams the reply back over WebSocket and the TUI renders it
The agent never has a direct connection to the outside world; every inference call is mediated by the OpenShell gateway, which is what makes the policy enforcement in the next section possible.
Note: Latency was noticeably variable. The first (cold) call ranged from 20 to 90 seconds across sessions, and even the second call varied between 6 and 40 seconds. After a few exchanges things settled down and responses became more consistent. This is likely a combination of cold-starting the connection through the OpenShell gateway proxy, network round-trip latency to build.nvidia.com, queue depth on the NVIDIA Endpoint API, and the sheer size of Nemotron 3 Super 120B (120 billion parameters means higher time-to-first-token). Worth keeping in mind if you're evaluating NemoClaw for interactive use.
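If you want to put numbers on the latency yourself, a crude timestamp wrapper is enough for eyeballing trends across calls (`sleep 1` stands in here for an actual `openclaw` invocation so the sketch runs anywhere):

```shell
# Log wall-clock latency around a single call.
t0=$(date +%s)
sleep 1            # stand-in for: openclaw agent --agent main --local -m "hello" ...
t1=$(date +%s)
elapsed=$((t1 - t0))
echo "elapsed: ${elapsed}s"
```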
The TUI is best for interactive back-and-forth. For long outputs like large code generation, use the CLI instead:
`openclaw agent --agent main --local -m "hello" --session-id test`

I wanted to try an alternative hosted model to compare latency and response quality, but ran into a blocker. The docs recommend using `openshell inference set` to switch models without re-running the full onboarding wizard:
`openshell inference set --no-verify --provider nvidia-nim --model moonshotai/kimi-k2.5`

This does not work at this time — tracked in NemoClaw issue #733. There is also a related issue (#714) where selecting Kimi during the onboarding wizard still results in Nemotron being used. Par for the course with alpha software — worth keeping an eye on both issues for fixes.
To verify the network policy is actually enforced, ask the agent to curl a domain that isn't in the policy, like google.com:
The agent attempted the request and got back:
HTTP/1.1 403 Forbidden
from the OpenShell proxy at http://10.200.0.1:3128. The agent can't reach google.com because it isn't listed in the pypi, npm, or telegram presets. Crucially, the agent itself surfaces this clearly; it recognizes the proxy block, explains what happened, and suggests alternatives like OpenClaw's built-in web_fetch tool.
This is the policy working exactly as intended: all outbound traffic is intercepted by the OpenShell gateway, and anything not explicitly allowed is denied.
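The deny-by-default behavior can be modeled in a few lines: anything not on the allowlist assembled from the presets gets the 403 treatment. The domains and output strings below are illustrative, not what OpenShell's proxy actually evaluates:

```shell
# Toy model of deny-by-default egress against a preset allowlist.
ALLOW="pypi.org registry.npmjs.org api.telegram.org"
check_egress() {
  for d in $ALLOW; do
    [ "$1" = "$d" ] && { echo "allow $1"; return 0; }
  done
  echo "deny $1 (403)"; return 1
}
check_egress pypi.org
check_egress google.com || true   # not in any preset: denied
```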
Since we added the telegram policy preset during onboarding, the sandbox is already allowed to reach the Telegram Bot API. To wire up the Telegram bridge, exit the sandbox shell first and work from the host terminal.
1. Exit the TUI and sandbox
If you're in the OpenClaw TUI, press Ctrl+C to exit. Then exit the sandbox shell:
`exit`

2. Set your credentials
`export NVIDIA_API_KEY=<your-nvidia-api-key>`
`export TELEGRAM_BOT_TOKEN=<your-bot-token>`

To get a bot token, message @BotFather on Telegram and send /newbot. To persist across sessions:
`echo 'export NVIDIA_API_KEY=<your-key>' >> ~/.zshrc`
`echo 'export TELEGRAM_BOT_TOKEN=<your-token>' >> ~/.zshrc`

3. Start the Telegram bridge
`nemoclaw start`

[services] telegram-bridge started (PID 92796)
[services] cloudflared not found — no public URL. Install: brev-setup.sh or manually.
┌─────────────────────────────────────────────────────┐
│ NemoClaw Services │
│ │
│ Telegram: bridge running │
│ │
│ Run 'openshell term' to monitor egress approvals │
└─────────────────────────────────────────────────────┘
The Telegram bridge runs on the host and communicates with the OpenClaw agent through the OpenShell gateway. You do not need to be connected to the sandbox shell for it to work.
4. Send a message
Open Telegram, find your bot, and send a message. The bridge forwards it to the agent and returns the response.
Known issues:
- #831 — The first message may fail with `Agent exited with code 255` due to a session file lock error. The bridge fires a new SSH call before the prior one has released the lock.
- #833 — A lock failure can leave the session in a broken state from accumulated failed tool calls, causing the agent to respond erratically (e.g. replying to "Hello" with "Let me open a new shell").
If this happens, reset the session from inside the sandbox:
nemoclaw my-nemoclaw connect
rm /sandbox/.openclaw-data/agents/main/sessions/*.jsonl
rm /sandbox/.openclaw-data/agents/main/sessions/sessions.json
`exit`

An `openclaw sessions reset <session-id>` command has been requested in #834 to avoid this manual file deletion step.
Then send the message again. A clean session should respond with:
Hey. I just came online.
This looks like a fresh workspace — no memories yet, no identity established. Let's fix that.
Who am I? Well, that's kinda up to us to figure out. And who are you?
5. Stop the bridge
`nemoclaw stop`

To leave the TUI, press Ctrl+C. Then exit the sandbox shell:
`exit`

Back on the host, stop all NemoClaw services:
`nemoclaw stop`

Or to fully tear down the sandbox:
`nemoclaw my-nemoclaw destroy`

To remove NemoClaw and everything it created (sandboxes, gateway, Docker images, local state, and the CLI), run:
`curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash`

Useful flags:
| Flag | Effect |
|---|---|
| `--yes` | Skip the confirmation prompt |
| `--keep-openshell` | Leave the openshell binary installed |
| `--delete-models` | Also remove NemoClaw-pulled Ollama models |
This does not remove shared tooling like Docker, Node.js, or npm.
NemoClaw is alpha software, but it's a compelling foundation. The curl | bash installer handles everything end-to-end — sandbox creation, inference routing, policy setup, and Telegram bridge — in a single guided flow. The security model is exactly right for autonomous agents: network egress is deny-by-default, filesystem access is scoped, and all inference calls are mediated by the OpenShell gateway rather than going directly to the internet.
On Apple Silicon, cloud inference via the NVIDIA Endpoint API works, though inference latency was variable — ranging from 20 to 90 seconds on cold starts. The Telegram integration worked end-to-end once session issues were cleared.
That said, being an early alpha, we ran into a few rough edges — all filed upstream:
- #714 — Selecting Kimi during onboarding still results in Nemotron being used on macOS
- #733 — `openshell inference set` does not override the model in proxied inference requests
- #831 — Telegram bridge fails with session file lock error (`Agent exited with code 255`)
- #833 — Session accumulates broken context after lock failure, causing corrupt agent state
- #834 — RFE: add `openclaw sessions reset <session-id>` subcommand
