This guide covers the minimum and recommended hardware specifications for hosting OpenClaw across different deployment scenarios. OpenClaw is designed to be lightweight—the Gateway itself is a Node.js process that proxies messages to cloud-hosted LLMs—but resource needs scale with enabled features, concurrent sessions, and sandboxing configuration.
| Scenario | CPU | RAM | Storage | Network |
|---|---|---|---|---|
| Minimal (headless gateway) | 1 vCPU | 512MB | 1GB | 1 Mbps |
| Standard (gateway + channels) | 1-2 vCPU | 1-2GB | 5GB | 5 Mbps |
| Production (sandboxing + browser) | 2-4 vCPU | 4-8GB | 20GB+ | 10+ Mbps |
| Heavy (multi-agent + media) | 4+ vCPU | 8-16GB | 50GB+ | 25+ Mbps |
OpenClaw runs on Node.js and supports multiple CPU architectures. The Gateway is pure JavaScript/TypeScript, so it runs anywhere Node.js runs. Architecture considerations primarily affect native dependencies (like sharp for image processing) and Docker sandbox images.
| Architecture | Status | Common Platforms | Notes |
|---|---|---|---|
| x86_64 (amd64) | ✅ Full support | Intel/AMD servers, most VPS, Windows, macOS Intel | Primary development target |
| ARM64 (aarch64) | ✅ Full support | Apple Silicon, Raspberry Pi 4/5, AWS Graviton, Oracle Ampere, Azure Cobalt | Excellent performance/watt |
| ARMv7 (32-bit) | ⚠️ Limited | Older Raspberry Pi, embedded | Node 22 requires 64-bit; use older Node or upgrade hardware |
The default and most widely tested architecture:
- Intel Xeon / Core: Full support, all features work
- AMD EPYC / Ryzen: Full support, all features work
- Virtualized (KVM, VMware, Hyper-V): Full support
- Cloud instances: AWS EC2, Azure VMs, GCP Compute, DigitalOcean, Hetzner, Linode, Vultr
No special configuration required. All prebuilt binaries and Docker images target x86_64 by default.
Fully supported with excellent performance characteristics:
- Apple Silicon (M1/M2/M3/M4): Native support via macOS app or CLI
- AWS Graviton (2/3/4): Full support, often better price/performance than x86
- Oracle Ampere A1: Full support, included in Always Free tier
- Azure Cobalt 100: Full support on ARM-based Azure VMs
- Raspberry Pi 4/5: Full support with 64-bit OS
ARM64 considerations:
- Native npm packages (sharp, better-sqlite3) have ARM64 prebuilds
- Docker sandbox images auto-detect architecture
- Some third-party skill binaries may lack ARM builds (check per-skill)
- Use `build-essential` if native compilation is needed
OpenClaw auto-detects your architecture. Verify with:
```shell
node -p "process.arch"   # Expected: x64 or arm64
uname -m                 # Expected: x86_64 or aarch64
```

| Component | CPU Impact | Memory Impact | Disk Impact | Notes |
|---|---|---|---|---|
| Gateway process | Low | ~150-300MB | Minimal | Core Node.js event loop |
| WebSocket connections | Low | ~5-10MB/conn | Minimal | Operators, nodes, Control UI |
| WhatsApp (Baileys) | Low-Medium | ~100-200MB | ~50MB sessions | QR auth, message polling |
| Telegram (grammY) | Low | ~50-100MB | Minimal | Bot polling |
| Discord | Low | ~50-100MB | Minimal | Gateway connection |
| Sandbox containers | Medium-High | 256MB-2GB/container | ~500MB-2GB/image | Per-session or shared |
| Sandbox browser | High | 500MB-2GB | ~1GB | Chromium + Xvfb + VNC |
| Media processing | High (burst) | ~200-500MB | Varies | Image resizing via sharp |
| TTS generation | Low (API) | Minimal | ~10MB/audio | ElevenLabs/OpenAI/Edge |
| Skills execution | Varies | Varies | Varies | Depends on skill |
Estimate your memory needs:

```
Base Gateway:                     ~300MB
+ Per active channel:             ~100MB each
+ Per WebSocket client:           ~10MB each
+ Per sandbox container:          ~256MB-1GB each
+ Sandbox browser (if enabled):   ~500MB-2GB
+ Buffer for spikes:              ~20% overhead
─────────────────────────────────────────────
Total = Sum of above
```
Example (standard setup):
- Gateway: 300MB
- WhatsApp + Telegram: 200MB
- 3 WebSocket clients: 30MB
- No sandboxing: 0MB
- Buffer: ~100MB
- Total: ~630MB → recommend 1GB
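The worked example can be reproduced with plain shell arithmetic, using the worksheet figures above (a sketch; the component values are the same estimates, not measurements):

```shell
#!/bin/sh
# Rough RAM estimate for the standard setup above (all values in MB)
GATEWAY=300        # base gateway process
CHANNELS=200       # WhatsApp + Telegram
WS_CLIENTS=30      # 3 WebSocket clients at ~10MB each
SANDBOX=0          # sandboxing disabled
subtotal=$((GATEWAY + CHANNELS + WS_CLIENTS + SANDBOX))
buffer=$((subtotal / 5))   # ~20% headroom for spikes
echo "Estimated RAM: $((subtotal + buffer)) MB"
```

The result (~636MB here) rounds up to the 1GB recommendation.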
The lightest deployment: Gateway-only, no sandboxing, single channel, accessed via SSH tunnel.
Use case: Personal bot, budget VPS, Raspberry Pi, testing.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU (shared OK) | 1 vCPU |
| RAM | 512MB | 1GB |
| Storage | 1GB | 5GB |
| Swap | 1GB (if RAM < 1GB) | Optional |
| Network | 1 Mbps | 5 Mbps |
Configuration tips:
- Disable sandboxing: `agents.defaults.sandbox.mode = "off"`
- Use API-based TTS (no local processing)
- Single messaging channel
- Access via SSH tunnel (no public binding)
```json5
{
  agents: {
    defaults: {
      sandbox: { mode: "off" },
    },
  },
  gateway: {
    bind: "loopback",
  },
}
```

Typical personal deployment with multiple channels, no sandboxing.
Use case: Personal assistant, home server, small VPS.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2 vCPU |
| RAM | 1GB | 2GB |
| Storage | 5GB | 10GB |
| Network | 5 Mbps | 10 Mbps |
Includes:
- Gateway + Control UI
- 2-3 messaging channels (WhatsApp, Telegram, Discord)
- Workspace storage for agent files
- Tailscale/VPN access
Enterprise-grade deployment with Docker sandboxing for tool execution.
Use case: Team deployment, security-conscious setups, multi-agent workflows.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 2 vCPU | 4 vCPU |
| RAM | 4GB | 8GB |
| Storage | 20GB | 50GB |
| Network | 10 Mbps | 25 Mbps |
Docker requirements:
- Docker Engine 20.10+ or Docker Desktop
- Sufficient disk for sandbox images (~500MB base, ~1GB with browser)
- Container memory limits configurable per-sandbox
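A quick preflight check of the Engine version requirement might look like this (a sketch; it assumes `docker` is on PATH and reports a server version such as `24.0.7`):

```shell
#!/bin/sh
# Verify Docker Engine meets the 20.10+ requirement before enabling sandboxing
ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0.0)
major=${ver%%.*}
rest=${ver#*.}
minor=${rest%%.*}
if [ "$major" -gt 20 ] || { [ "$major" -eq 20 ] && [ "$minor" -ge 10 ]; }; then
  echo "Docker $ver: OK for sandboxing"
else
  echo "Docker $ver: upgrade to 20.10+ before enabling sandboxing"
fi
```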
Sandbox resource configuration:
```json5
{
  agents: {
    defaults: {
      sandbox: {
        mode: "all",
        scope: "session",
        docker: {
          memory: "512m",     // Per-container memory limit
          memorySwap: "1g",   // Memory + swap limit
          cpus: 1,            // CPU cores allocated
          pidsLimit: 100,     // Process limit
        },
      },
    },
  },
}
```

High-throughput deployment with multiple concurrent agents, media processing, and browser automation.
Use case: Team/organization, broadcast workflows, automation pipelines.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 8+ vCPU |
| RAM | 8GB | 16GB+ |
| Storage | 50GB | 100GB+ SSD |
| Network | 25 Mbps | 100+ Mbps |
Considerations:
- SSD storage strongly recommended for I/O-heavy workloads
- Multiple sandbox containers running concurrently
- Browser sandboxes for web automation
- Media processing (image/video) spikes CPU usage
OpenClaw on Windows runs inside WSL2 (Ubuntu recommended).
| Resource | Minimum | Recommended |
|---|---|---|
| Host RAM | 8GB | 16GB |
| WSL2 RAM | 4GB allocated | 8GB allocated |
| Disk | 20GB (WSL vhdx) | 50GB |
| Windows Version | Windows 10 2004+ | Windows 11 |
WSL2 memory configuration (`%USERPROFILE%\.wslconfig`):

```ini
[wsl2]
memory=8GB
processors=4
swap=4GB
```

Additional considerations:
- WSL2 has its own virtual network; use portproxy for LAN access
- Docker Desktop integrates with WSL2 for sandboxing
- Systemd must be enabled for `openclaw gateway install`
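To expose the gateway on the LAN despite WSL2's virtual network, a portproxy rule on the Windows host can forward the port. A sketch, run from an elevated PowerShell or cmd prompt (replace `<WSL2-IP>` with the address printed by `wsl hostname -I`; the port is the gateway default from this guide):

```shell
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=18789 connectaddress=<WSL2-IP> connectport=18789
```

Remove the rule with `netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=18789` when no longer needed.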
Native support via the OpenClaw menubar app or CLI.
| Resource | Minimum | Recommended |
|---|---|---|
| macOS Version | 13 (Ventura) | 14+ (Sonoma) |
| Chip | Intel or Apple Silicon | Apple Silicon |
| RAM | 4GB available | 8GB available |
| Disk | 5GB | 20GB |
Notes:
- Apple Silicon (M1/M2/M3) offers excellent performance/watt
- Docker Desktop required for sandboxing (or OrbStack)
- Menubar app manages gateway lifecycle automatically
Best performance for server deployments.
| Resource | Minimum | Recommended |
|---|---|---|
| Kernel | 5.4+ | 5.15+ |
| Distro | Ubuntu 22.04, Debian 12 | Ubuntu 24.04, Debian 13 |
| RAM | 1GB | 2-4GB |
| Disk | 5GB | 20GB |
Systemd user service (recommended):
```shell
openclaw gateway install
systemctl --user enable --now openclaw-gateway
```

Budget self-hosted option for always-on deployments.
| Pi Model | RAM | Verdict | Notes |
|---|---|---|---|
| Pi 5 | 4-8GB | ✅ Best | Fast, recommended |
| Pi 4 | 4GB | ✅ Good | Sweet spot |
| Pi 4 | 2GB | ✅ OK | Add 2GB swap |
| Pi 4 | 1GB | ⚠️ Tight | Minimal config only |
| Pi 3B+ | 1GB | ⚠️ Slow | Functional but sluggish |
| Pi Zero 2 | 512MB | ❌ No | Not recommended |
Optimization tips:
- Use USB SSD instead of SD card (major performance boost)
- Set `gpu_mem=16` in `/boot/config.txt` (headless)
- Add 2GB swap file for RAM < 4GB
- Disable sandboxing unless you have 4GB+ RAM
```shell
# Create swap on low-RAM Pi
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

ARM64 architecture is fully supported.
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 OCPU/vCPU | 2-4 OCPU/vCPU |
| RAM | 2GB | 4-8GB |
| Disk | 10GB | 50GB |
Oracle Cloud Always Free tier:
- Up to 4 OCPUs, 24GB RAM (ARM Ampere A1)
- Perfect for OpenClaw gateway
- Note: capacity and signup can be finicky
ARM considerations:
- Most OpenClaw features work on ARM64
- Some external skill binaries may need ARM builds
- Use `build-essential` for native compilation
| Plan | CPU | RAM | Disk | Price/mo | Use Case |
|---|---|---|---|---|---|
| Free | Shared | 512MB | None | $0 | Testing only |
| Starter | Shared | 512MB-2GB | 1GB+ | ~$7+ | Personal |
| Standard | 0.5-1 CPU | 2GB+ | 10GB+ | ~$25+ | Production |
Blueprint default: Starter plan with 1GB persistent disk.
| Plan | CPU | RAM | Price/mo | Notes |
|---|---|---|---|---|
| Trial | Shared | 512MB | $0 (limited) | 500 hours/month |
| Hobby | Shared | 8GB max | $5+ usage | Per-resource billing |
| Pro | Dedicated | 32GB max | $20+ | Teams |
Recommended: Hobby plan with volume mounted at /data.
| Size | CPU | RAM | Price/mo | Notes |
|---|---|---|---|---|
| shared-cpu-1x | Shared | 256MB | ~$2 | Too small |
| shared-cpu-2x | Shared | 2GB | ~$15 | Recommended |
| performance-1x | 1 CPU | 2GB | ~$30 | Production |
Default config (`fly.toml`):

```toml
[[vm]]
size = "shared-cpu-2x"
memory = "2048mb"

[env]
NODE_OPTIONS = "--max-old-space-size=1536"
```

| Plan | CPU | RAM | Disk | Price/mo | Notes |
|---|---|---|---|---|---|
| CX22 | 2 vCPU | 4GB | 40GB | ~$4 | Best budget |
| CX32 | 4 vCPU | 8GB | 80GB | ~$8 | With sandboxing |
| CAX11 (ARM) | 2 vCPU | 4GB | 40GB | ~$4 | ARM option |
| Plan | CPU | RAM | Disk | Price/mo |
|---|---|---|---|---|
| Basic (s-1vcpu-1gb) | 1 vCPU | 1GB | 25GB | $6 |
| Basic (s-2vcpu-2gb) | 2 vCPU | 2GB | 50GB | $12 |
| Basic (s-2vcpu-4gb) | 2 vCPU | 4GB | 80GB | $24 |
Azure offers multiple compute options for hosting OpenClaw, from IaaS VMs to PaaS container services.
Traditional VM hosting with full control:
| Series | Size | vCPU | RAM | Price/mo (est.) | Use Case |
|---|---|---|---|---|---|
| B-series (burstable) | B1s | 1 | 1GB | ~$8 | Testing, minimal |
| B-series | B2s | 2 | 4GB | ~$30 | Personal, standard |
| B-series | B4ms | 4 | 16GB | ~$120 | Production w/ sandboxing |
| D-series (general) | D2s_v5 | 2 | 8GB | ~$70 | Production |
| D-series ARM | D2ps_v5 | 2 | 8GB | ~$60 | ARM64, cost-optimized |
Azure VM setup:
- Create VM (Ubuntu 24.04 LTS recommended)
- Open port 22 (SSH) in Network Security Group
- SSH in and install OpenClaw:

  ```shell
  curl -fsSL https://openclaw.ai/install.sh | bash
  openclaw onboard --install-daemon
  ```

- Access via SSH tunnel:

  ```shell
  ssh -L 18789:127.0.0.1:18789 user@vm-ip
  ```
Tips:
- Use Azure Spot VMs for up to 90% cost savings (with eviction risk)
- B-series burstable VMs are ideal for OpenClaw's bursty workload pattern
- Attach Azure Managed Disk for persistent workspace storage
- Consider Azure Bastion for secure access without public SSH
Serverless container hosting with automatic scaling:
| Tier | vCPU | RAM | Price | Notes |
|---|---|---|---|---|
| Consumption | 0.25-4 | 0.5-8GB | Pay-per-use | Scale to zero |
| Dedicated | 1-4+ | 2-16GB+ | Reserved | Consistent performance |
Container Apps deployment:
- Create Container Apps Environment
- Deploy from Docker Hub or Azure Container Registry:

  ```shell
  az containerapp create \
    --name openclaw-gateway \
    --resource-group myResourceGroup \
    --environment myContainerAppEnv \
    --image ghcr.io/openclaw/openclaw:latest \
    --target-port 8080 \
    --ingress external \
    --cpu 1 --memory 2Gi \
    --env-vars \
      PORT=8080 \
      SETUP_PASSWORD=<secret> \
      OPENCLAW_STATE_DIR=/data/.openclaw
  ```

- Mount Azure Files for persistent storage
- Access via the generated FQDN
Container Apps considerations:
- Supports persistent volumes via Azure Files
- Built-in HTTPS with managed certificates
- Auto-scaling based on HTTP traffic or custom metrics
- Consumption tier can scale to zero (cold start ~10-30s)
- Use Dedicated workload profile for consistent latency
Managed web app hosting with container support:
| Plan | vCPU | RAM | Price/mo (est.) | Notes |
|---|---|---|---|---|
| B1 | 1 | 1.75GB | ~$13 | Basic, no auto-scale |
| P1v3 | 2 | 8GB | ~$100 | Production, auto-scale |
| P2v3 | 4 | 16GB | ~$200 | High performance |
App Service deployment:
- Create App Service Plan (Linux, Docker)
- Create Web App with container settings:
  - Image: `ghcr.io/openclaw/openclaw:latest`
  - Port: 8080
- Configure Application Settings:

  ```
  PORT=8080
  SETUP_PASSWORD=<secret>
  OPENCLAW_STATE_DIR=/home/data/.openclaw
  OPENCLAW_WORKSPACE_DIR=/home/data/workspace
  ```

- Mount Azure Storage for `/home/data` persistence
- Access via `https://<app-name>.azurewebsites.net`
App Service considerations:
- WebSocket support requires configuration (enable in portal)
- `/home` directory is persistent by default
- Use deployment slots for zero-downtime updates
- Always-On setting prevents cold starts (not available on Basic tier)
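The WebSocket setting can also be enabled from the CLI instead of the portal (a sketch; the app and resource-group names are placeholders):

```shell
az webapp config set \
  --name <app-name> \
  --resource-group <resource-group> \
  --web-sockets-enabled true
```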
| Scenario | Recommended Service | Configuration |
|---|---|---|
| Budget/testing | B1s VM | 1 vCPU, 1GB, SSH tunnel |
| Personal use | B2s VM or Container Apps | 2 vCPU, 4GB |
| Production | D2s_v5 VM or App Service P1v3 | 2 vCPU, 8GB, persistent storage |
| Sandboxing | D4s_v5 VM | 4 vCPU, 16GB, Docker installed |
| Cost-optimized | D2ps_v5 (ARM) or Spot VM | ARM64 or burstable |
| Component | Size | Notes |
|---|---|---|
| OpenClaw installation | ~200MB | node_modules + built assets |
| Configuration | <1MB | ~/.openclaw/openclaw.json |
| WhatsApp sessions | ~50MB | ~/.openclaw/sessions/ |
| Logs (rolling) | ~100MB | Depends on retention |
| Control UI | ~10MB | Bundled static assets |
The workspace (~/.openclaw/workspace) grows based on usage:
| Content | Typical Size | Notes |
|---|---|---|
| Agent files | 10MB-1GB | Code, documents, project files |
| Inbound media | 10MB-10GB | Images, audio, video received |
| Generated media | 10MB-10GB | TTS audio, processed images |
| Memory/embeddings | 50MB-500MB | If using memory extensions |
| Skills cache | 10MB-100MB | Downloaded skill assets |
Storage recommendations:
- Personal use: 5-10GB
- Team/production: 20-50GB
- Media-heavy workflows: 50-100GB+
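A small script can watch workspace growth against these budgets (a sketch; the default path comes from this guide, and `OPENCLAW_WORKSPACE_DIR` is honored if set):

```shell
#!/bin/sh
# Warn when the workspace exceeds a size budget (values in MB)
WORKSPACE="${OPENCLAW_WORKSPACE_DIR:-$HOME/.openclaw/workspace}"
LIMIT_MB=10240   # 10GB: the upper end of the personal-use recommendation
used_mb=$(du -sm "$WORKSPACE" 2>/dev/null | cut -f1)
used_mb=${used_mb:-0}
if [ "$used_mb" -gt "$LIMIT_MB" ]; then
  echo "workspace at ${used_mb} MB: consider pruning media"
else
  echo "workspace at ${used_mb} MB: within budget"
fi
```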
| Image | Size | Notes |
|---|---|---|
| `openclaw-sandbox:bookworm-slim` | ~150MB | Base sandbox image |
| `openclaw-sandbox-browser:bookworm-slim` | ~800MB | Browser + Xvfb + VNC |
| Per-container overlay | 50-500MB | Ephemeral writes |
| Sandbox workspace | 100MB-1GB/session | Tool execution artifacts |
Build the sandbox images once:
```shell
scripts/sandbox-setup.sh          # Base sandbox
scripts/sandbox-browser-setup.sh  # Browser sandbox
```

| Activity | Bandwidth | Notes |
|---|---|---|
| Idle gateway | <100 Kbps | Heartbeats, polling |
| Text messaging | <1 Mbps | Small payloads |
| Media (images) | 1-10 Mbps | Per transfer |
| Media (video/audio) | 5-50 Mbps | Per transfer |
| LLM API calls | 1-5 Mbps | Depends on response length |
| Control UI | <1 Mbps | WebSocket + static assets |
Minimum: 1 Mbps (text-only). Recommended: 10+ Mbps (media support).
| Port | Service | Required |
|---|---|---|
| 18789 | Gateway (WS + HTTP) | Yes |
| 18790 | Bridge (legacy nodes) | Optional |
| 18793 | Canvas host | Optional |
| 9222 | Sandbox browser CDP | If sandboxing |
| 5900 | Sandbox VNC | If sandboxing |
| 6080 | Sandbox noVNC | If sandboxing |
Firewall rules (minimal):
- Allow outbound HTTPS (443) for LLM APIs
- Allow outbound HTTPS for messaging services
- Inbound only if exposing gateway publicly (not recommended)
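To spot-check which address the gateway port is bound to (loopback vs. public), filter `ss` output for the port from the table above (a sketch; anything other than `127.0.0.1` or `[::1]` means the gateway is reachable beyond the host):

```shell
#!/bin/sh
# Show the local address the gateway port is listening on (ss is in iproute2)
PORT=18789
ss -ltn 2>/dev/null | awk -v p=":$PORT" '$4 ~ p {print $4}'
```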
| Component | Acceptable | Optimal |
|---|---|---|
| LLM API | <500ms RTT | <100ms RTT |
| Messaging APIs | <200ms RTT | <50ms RTT |
| Control UI | <100ms RTT | <30ms RTT |
Tip: Choose a VPS region close to your LLM provider's API endpoints for best response times.
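Connection timing can be measured with `curl` before committing to a region (the endpoint URL is illustrative; substitute your provider's API host):

```shell
# Print connect and total time to an API endpoint
curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
  https://api.openai.com/v1/models
```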
OpenClaw is primarily single-process; vertical scaling is most effective:
- CPU: Add cores for parallel sandbox execution
- RAM: Increase for more concurrent sessions/sandboxes
- Disk: Upgrade to SSD for better I/O
- Network: Higher bandwidth for media-heavy workflows
Multiple Gateway instances are supported but require isolation:
- Separate state directories
- Unique ports
- Independent messaging sessions
See Multiple Gateways for the full guide.
Node.js memory limit:
```shell
export NODE_OPTIONS="--max-old-space-size=2048"  # 2GB heap
```

Sandbox resource limits:
```json5
{
  agents: {
    defaults: {
      sandbox: {
        docker: {
          memory: "1g",
          cpus: 2,
          pidsLimit: 200,
        },
      },
    },
  },
}
```

Reduce memory pressure:
- Limit concurrent sandbox containers via `scope: "shared"`
- Prune idle containers: `agents.defaults.sandbox.prune.idleHours`
- Disable unused channels
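Before tuning, it helps to see what the Node.js processes actually use; this sketch prints resident memory in MB (assumes GNU `ps`/`awk` and that the gateway runs as a `node` process):

```shell
# Print RSS of running node processes in MB
ps -C node -o rss=,args= | awk '{mb=$1/1024; $1=""; printf "%.0f MB%s\n", mb, $0}'
```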
Symptoms: Gateway crashes, containers killed, slow performance.
Solutions:
- Add swap space (Linux/Pi)
- Increase VM/container memory
- Limit sandbox memory: `sandbox.docker.memory`
- Reduce concurrent sandboxes: `sandbox.scope = "shared"`
- Check for memory leaks: `openclaw health --verbose`
Symptoms: Slow response times, high load average.
Solutions:
- Limit sandbox CPU: `sandbox.docker.cpus`
- Check for runaway processes in sandboxes
- Profile with `openclaw health --verbose`
- Consider ARM for better perf/watt (Pi, Graviton, Ampere)
Symptoms: Write failures, container creation fails.
Solutions:
- Prune old sandbox containers: `openclaw sandbox prune`
- Clean workspace: remove old media from `~/.openclaw/workspace`
- Rotate logs
- Increase disk allocation
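The workspace-cleanup step can be scripted. This sketch only lists media older than 30 days so the result can be reviewed before deleting (the extensions are examples; append `-delete` once satisfied):

```shell
# List workspace media older than 30 days (path from this guide)
find "$HOME/.openclaw/workspace" -type f \
  \( -name '*.jpg' -o -name '*.png' -o -name '*.mp4' -o -name '*.ogg' \) \
  -mtime +30 -print
```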
Symptoms: Slow API responses, message delivery delays.
Solutions:
- Check connectivity to LLM providers
- Verify DNS resolution
- Use a closer VPS region
- Check bandwidth utilization
| Deployment | CPU | RAM | Storage | Best For |
|---|---|---|---|---|
| Raspberry Pi | 1-4 cores | 1-4GB | 16GB+ SD/SSD | Budget, always-on |
| Budget VPS | 1-2 vCPU | 1-2GB | 10-20GB | Personal use |
| Standard VPS | 2-4 vCPU | 4-8GB | 20-50GB | Production |
| Heavy workload | 4-8+ vCPU | 8-16GB+ | 50-100GB+ SSD | Teams, automation |
Key takeaways:
- The Gateway itself is lightweight (~300MB RAM baseline)
- Sandboxing adds significant resource overhead
- Storage needs scale with media and workspace usage
- SSD storage dramatically improves responsiveness
- Vertical scaling is more effective than horizontal
Related docs: