The fastest way I know to get an OpenClaw agent from zero to prod without polluting your host is to drop it in a Docker container. This guide covers the entire OpenClaw Docker installation and containerized setup process, from writing a sane Dockerfile to wiring the gateway and daemon with docker-compose. I’ll show the exact volume mounts I use to persist memory, logs, and config, plus the networking tweaks that keep Slack, WhatsApp, and friends reachable while still sandboxing the agent.

Why containerize OpenClaw?

Running Node 22+ on bare metal works, but containers buy you:

  • Isolation – the agent’s shell access stays inside the container; host remains untouched.
  • Repeatability – the same image deploys on a laptop, a CI runner, or ClawCloud’s Bring-Your-Own-Container plan.
  • Rollbacks – tag images, jump between versions in seconds.
  • Security knobs – drop Linux capabilities, set a read-only root FS, add seccomp/apparmor.

Trade-offs:

  • Small performance tax (≈3-5 %) versus bare metal.
  • Clunky UX if you need systemd or desktop UI helpers; we don’t, so life is good.
  • File I/O is slower on mounted volumes. Memory store is fine; persistent KV stores are better off in a real database.

Prerequisites and folder layout

I’m assuming:

  • Docker 24+ and Docker Compose v2.22+ installed (run docker compose version to confirm).
  • An OpenClaw API token if you plan to hook into ClawCloud, otherwise local mode is fine.
  • A host port range free (defaults: 3000 for the gateway UI, 3001 for the daemon’s WebSocket).

I keep everything under openclaw-docker/:

  • Dockerfile
  • docker-compose.yml
  • .env (secrets stay out of git)
  • data/ (persistent memory, user uploads)
  • config/ (agent.yaml, tool-chains.json)
  • logs/

Git-ignore data/ and logs/. Commit config/agent.example.yaml so newcomers have a template.
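The layout above can be scaffolded in one go. Directory names match the list; the exact commands are just one way to do it:

```shell
# Scaffold the openclaw-docker/ layout described above.
mkdir -p openclaw-docker/{data,config,logs}
touch openclaw-docker/Dockerfile openclaw-docker/docker-compose.yml

# Keep runtime state and secrets out of git.
cat > openclaw-docker/.gitignore <<'EOF'
data/
logs/
.env
EOF

# Commit an example config so newcomers have a template.
touch openclaw-docker/config/agent.example.yaml
```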

Authoring a minimal Dockerfile for OpenClaw

OpenClaw is Node, so the obvious base image is node:22-slim. The community warned about alpine’s finicky musl build; stick to Debian slim unless you love chasing segfaults.

    # openclaw-docker/Dockerfile
    FROM node:22-slim AS base

    # 1. Create a non-root user for defense in depth
    RUN useradd -m -u 10001 claw

    # 2. Set workdir and copy package files first for better cache hits
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci --omit=dev && npm cache clean --force

    # 3. Copy source *after* deps so rebuilds don’t reinstall everything
    COPY . .

    # 4. Switch to non-root
    USER claw

    # 5. Expose ports (overridden by compose anyway)
    EXPOSE 3000 3001

    # 6. Default command launches both gateway and daemon
    CMD ["npm", "run", "start:container"]

That start:container script can be as simple as:

    {
      "scripts": {
        "start:container": "openclaw gateway --port 3000 & openclaw daemon --port 3001 & wait -n"
      }
    }

Note that both processes must be backgrounded with & (not chained with &&) so that wait -n can fire as soon as either one dies.

Feel free to bake both gateway and daemon into a single container when testing. For production I split them; easier to scale the stateless gateway separately.
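The one-liner leans on bash’s wait -n, which returns as soon as the first background job exits — that’s what makes the container die (and get restarted by Docker) when either process crashes. A minimal demo of the pattern, with sleep standing in for the real openclaw binaries:

```shell
#!/usr/bin/env bash
# Demo of the wait -n supervisor pattern behind start:container.
# Requires bash 4.3+; 'sleep' stands in for the openclaw processes.
sleep 1 & gw=$!     # "gateway" — exits after 1 s
sleep 3 & dm=$!     # "daemon"  — would run for 3 s
wait -n             # unblocks as soon as the FIRST job exits
kill "$dm" 2>/dev/null   # tear down the survivor, as Docker would
echo "one process died; container exits and Docker restarts it" > supervisor.log
```

One caveat: npm runs scripts through sh by default, which lacks wait -n; if that bites you, setting script-shell=bash in .npmrc (or using a proper init like tini) sidesteps it.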

Persisting memory, logs, and config with Docker volumes

OpenClaw writes three categories of state:

  1. Persistent memory – the vector store/JSON DB the agent uses to remember conversations.
  2. Agent configuration – YAML/JSON manifest that declares tools, model, schedule.
  3. Runtime logs – rotated text files useful when the agent deletes half your repo.

We’ll mount host directories so container upgrades don’t nuke them. In docker-compose.yml:

    services:
      gateway:
        image: openclaw:local
        volumes:
          - ./data:/var/lib/openclaw/data
          - ./config:/etc/openclaw
          - ./logs:/var/log/openclaw

The app uses environment vars to override the default paths:

    environment:
      - MEMORY_DIR=/var/lib/openclaw/data
      - CONFIG_DIR=/etc/openclaw
      - LOG_DIR=/var/log/openclaw

Alternative: use named volumes (docker volume create) instead of bind mounts if you don’t need to poke inside.
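For the named-volume route, the bind mounts swap to volume names and Docker picks the storage location. A sketch — the volume names here are my own, nothing OpenClaw mandates:

```yaml
services:
  daemon:
    image: openclaw:local
    volumes:
      # Named volumes instead of bind mounts; find the on-disk path
      # later with `docker volume inspect openclaw-data` if needed.
      - openclaw-data:/var/lib/openclaw/data
      - openclaw-logs:/var/log/openclaw

volumes:
  openclaw-data:
  openclaw-logs:
```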

Exposing ports and networking for channel connectors

The gateway’s web UI lives on 3000. The daemon’s event socket defaults to 3001. Some connectors (Slack Events API, Telegram webhooks) must reach your agent from the public internet.

Option A: local dev with ngrok or Cloudflare Tunnel

    # local tunnel for Slack events
    ngrok http 3001

Point Slack at the ngrok URL. Easy, but tunnels expire.

Option B: production reverse proxy

I throw Traefik in the same compose file. It terminates TLS and routes to the internal network.

    services:
      traefik:
        image: traefik:v3.0
        command:
          - "--providers.docker=true"
          - "--entrypoints.websecure.address=:443"
          - "--certificatesresolvers.le.acme.email=me@example.com"
          - "--certificatesresolvers.le.acme.tlschallenge=true"
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro

      gateway:
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.oc-gw.rule=Host(`ai.example.com`)"
          - "traefik.http.routers.oc-gw.entrypoints=websecure"
          - "traefik.http.routers.oc-gw.tls.certresolver=le"

Slack, Discord, etc. now talk to https://ai.example.com and end up inside the container network.

docker-compose.yml: wiring gateway, daemon, and Postgres together

Below is the compose file I use in prod. It builds from the Dockerfile above, splits gateway/daemon, and adds an optional Postgres for large memories (vector stores >50 MB choke the JSON backend).

    services:
      gateway:
        build: .
        image: openclaw:0.14.2
        depends_on:
          - daemon
        environment:
          - MODE=gateway
          - PORT=3000
          - CONFIG_DIR=/etc/openclaw
          - DATABASE_URL=postgres://oc:oc@db:5432/oc
        ports:
          - "3000:3000"
        volumes:
          - ./config:/etc/openclaw:ro
          - ./logs:/var/log/openclaw
        restart: unless-stopped

      daemon:
        build: .
        image: openclaw:0.14.2
        environment:
          - MODE=daemon
          - PORT=3001
          - MEMORY_DIR=/var/lib/openclaw/data
          - DATABASE_URL=postgres://oc:oc@db:5432/oc
        ports:
          - "3001:3001"
        volumes:
          - ./data:/var/lib/openclaw/data
          - ./logs:/var/log/openclaw
        restart: unless-stopped

      db:
        image: postgres:16-alpine
        environment:
          POSTGRES_USER: oc
          POSTGRES_PASSWORD: oc
          POSTGRES_DB: oc
        volumes:
          - pgdata:/var/lib/postgresql/data
        restart: unless-stopped

    volumes:
      pgdata:

(The top-level version key is gone on purpose: Compose v2 ignores it and warns that it’s obsolete.)

Build and boot:

docker compose up -d --build

Tail logs:

docker compose logs -f gateway daemon

Running, updating, and debugging your containers

First start

Initial boot seeds the memory store and creates agent.yaml if it doesn’t exist. Confirm with:

curl -s http://localhost:3000/healthz | jq
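If jq isn’t on the box, plain POSIX tools can pull the status field out. The payload shape below is an assumption — check your actual /healthz response:

```shell
# Hypothetical /healthz payload; real field names may differ.
payload='{"status":"ok","version":"0.14.2","uptime_s":42}'

# Extract the "status" value with sed instead of jq.
status=$(printf '%s' "$payload" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$status"   # → ok
```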

Hot-reloading config

OpenClaw watches CONFIG_DIR; change agent.yaml and the gateway reloads within ~2 s. No container restart needed.

Bumping OpenClaw versions

  1. Edit image: openclaw:0.14.2 to the tag you want (check GitHub releases).
  2. docker compose pull && docker compose up -d --no-deps gateway daemon
  3. If something explodes, pin the previous tag back in docker-compose.yml and docker compose up -d again — Compose has no built-in rollback command, so version-pinned image tags are your rollback mechanism.
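Step 1 is easily scriptable. A sketch that rewrites the image tag with GNU sed — tags and the file path here are illustrative:

```shell
# Write a miniature compose file, then bump its OpenClaw tag in place.
cat > /tmp/compose-demo.yml <<'EOF'
services:
  gateway:
    image: openclaw:0.14.2
EOF

old=0.14.2 new=0.15.0   # example tags
sed -i "s|openclaw:${old}|openclaw:${new}|g" /tmp/compose-demo.yml
grep 'image:' /tmp/compose-demo.yml   # → image: openclaw:0.15.0
```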

Interactive debugging shell

docker compose exec daemon bash

From there you can run openclaw cli history --user=@alice or poke the vector DB.

Bare metal vs Docker: performance, security, DX

The inevitable HN question: “Why not just npm i -g openclaw and call it a day?” I actually do this on my dev laptop, but for servers I prefer containers. Here are real numbers from a t4g.medium (2 vCPU, 8 GB RAM) running OpenClaw 0.14.2:

  • Cold start to UI ready – bare metal: 1.8 s; Docker: 2.1 s.
  • Peak RSS after 100 Slack messages – both ~410 MB (Linux page cache hides disk diff).
  • CPU utilization during heavy browser automation – Docker 3 % higher, negligible.

Security wise, containers win:

  • Drop all caps except NET_BIND_SERVICE.
  • Read-only root FS (compose: read_only: true).
  • seccomp/AppArmor profiles (DockerSlim can generate them from observed behavior) make lateral movement painful.
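In compose terms, those knobs look roughly like this — the capability and tmpfs choices are a starting point, not gospel; verify your connectors still work after dropping caps:

```yaml
services:
  daemon:
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE   # only needed if you bind ports below 1024
    read_only: true        # immutable root FS...
    tmpfs:
      - /tmp               # ...so give the app a scratch space
    security_opt:
      - no-new-privileges:true
```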

Developer experience is mixed:

  • Logs via docker compose logs are great.
  • Interactive Node inspector requires an extra port mapping.
  • File watching inside bind mounts can be slower on macOS; try the :delegated mount flag there, or develop under WSL2 on Windows.

For most teams, containers are the sane default unless you need raw GPU pass-through. (ClawCloud’s hosted GPUs are containerized anyway.)

Next steps

Your agent now lives in Docker, persists its memories, and speaks TLS. The logical next moves:

  • Wire in Composio tools: add EMAIL_TOKEN, GITHUB_PAT to .env.
  • Schedule nightly retraining with openclaw cron plus docker compose kill --signal SIGHUP daemon (restart has no --signal flag; kill is how you send one).
  • Push the image to GHCR and deploy to ClawCloud’s private registry for auto-scaling.
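That .env file might look like the sketch below — OPENCLAW_API_TOKEN is my guess at the variable name; EMAIL_TOKEN and GITHUB_PAT are the Composio ones mentioned above:

```shell
# Sketch of .env; values are placeholders — never commit the real thing.
cat > .env <<'EOF'
OPENCLAW_API_TOKEN=replace-me
EMAIL_TOKEN=replace-me
GITHUB_PAT=replace-me
DATABASE_URL=postgres://oc:oc@db:5432/oc
EOF
chmod 600 .env   # readable by you alone
```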

Questions or improvements? PRs welcome—this repo lives at github.com/yourname/openclaw-docker. The community hangs out in #containers on the OpenClaw Discord if you get stuck.