If OpenClaw isn’t answering messages, nine times out of ten the problem lives in the gateway or the daemon that keeps it alive. This guide shows exactly how to configure both. I’m assuming you already have Node 22+ and npm installed and that you’ve run npm install -g openclaw@latest. Everything else happens in the gateway folder and the daemon definition files we’re about to write.

Gateway architecture in one minute

The gateway is a tiny HTTP/WS server that fronts all the transports (Slack, Telegram, etc.) and brokers messages to the agent runtime. Picture three layers:

  1. Transports – WhatsApp, Discord, custom web widget.
  2. Gateway API – Express-based router running on 0.0.0.0:3210 by default.
  3. Runtime – The actual agent logic, tool runners, persistent memory (SQLite by default).

When a message lands in Slack it hits the Gateway API, the router decides which agent instance owns the channel, the runtime processes the prompt, and the reply goes back out the same socket. If the gateway dies, every integration dies with it. Hence the daemon.
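The routing step in the middle layer amounts to a channel-to-agent lookup. A minimal sketch of that decision, with illustrative names rather than OpenClaw's actual internals:

```python
# Illustrative sketch of the gateway's routing layer -- not OpenClaw's real code.
# Each transport tags an incoming message with a channel id; the router maps
# that channel to the agent instance that owns it, then hands off to the runtime.

def route(channel_id: str, routes: dict) -> str:
    """Return the agent that owns a channel, falling back to a default agent."""
    return routes.get(channel_id, "default-agent")

routes = {
    "slack:C024BE91L": "support-bot",   # hypothetical channel/agent names
    "telegram:-100123": "ops-bot",
}

print(route("slack:C024BE91L", routes))  # the support agent owns this channel
print(route("discord:999", routes))      # unmapped channels hit the fallback
```

The reply then travels back out over whichever transport socket delivered the message.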

Directory layout & config files

By default, openclaw gateway init creates the following in ~/.openclaw:

  • config.yaml – high-level settings (ports, DB path, auth tokens).
  • gateway.env – per-instance secrets (Slack tokens, JWT keys).
  • agents/<name>/ – each agent gets its own sub-folder with memory and scheduled tasks.
  • logs/ – rotating gateway-YYYY-MM-DD.log files.

Power users move the folder to /etc/openclaw on servers to keep homedirs clean. If you do, pass the path with --config-dir /etc/openclaw every time or set OPENCLAW_CONFIG_DIR in the environment so you don’t forget.

Sample config.yaml

```yaml
# ~/.openclaw/config.yaml
port: 3210                          # HTTP + WebSocket
external_url: https://bot.acme.com
log_level: info                     # debug | info | warn | error
sqlite: ~/.openclaw/db.sqlite
workers: 4                          # node.js worker_threads for parallel tools
rate_limit_per_min: 60
```

Nothing fancy. One gotcha: external_url must be reachable by Telegram/Slack webhooks or they’ll silently drop events.
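Because dropped webhook events are silent, it pays to sanity-check the URL before pointing providers at it. A standalone sketch that catches the usual mistakes (this is not part of the openclaw CLI):

```python
from urllib.parse import urlparse

# Standalone sketch: catch common external_url mistakes before Slack/Telegram
# silently drop webhook events. Not part of the openclaw CLI.
def check_external_url(url: str) -> list:
    problems = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("webhook providers require https")
    if not parsed.hostname or "." not in parsed.hostname:
        problems.append("hostname must be a public DNS name, not localhost")
    if url.endswith("/"):
        problems.append("drop the trailing slash so generated paths don't double up")
    return problems

print(check_external_url("https://bot.acme.com"))    # []
print(check_external_url("http://localhost:3210/"))  # three problems
```

An empty list means the URL is at least well-formed; reachability and the TLS chain still need a real request from outside your network.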

Port and network settings

The gateway listens on a single port for both HTTP and WebSocket. Default 3210 is fine for local testing, but production wants 80/443 behind Nginx or Caddy. Two common patterns:

1. Reverse proxy TLS termination

```nginx
# /etc/nginx/sites-enabled/openclaw
server {
    listen 443 ssl;
    server_name bot.acme.com;

    ssl_certificate     /etc/letsencrypt/live/bot.acme.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bot.acme.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3210;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Set external_url: https://bot.acme.com so OpenClaw generates correct webhook URLs.

2. Direct bind on privileged ports (not recommended)

```bash
# grant CAP_NET_BIND_SERVICE so node can grab 443 directly
sudo setcap 'cap_net_bind_service=+ep' "$(which node)"
openclaw gateway start --port 443
```

I only do this on throwaway VMs because managing caps on node upgrades is a pain.

Running the daemon on Linux with systemd

Systemd is the path of least surprise on anything Debian, Ubuntu, Arch, or RHEL-ish. The gateway process is CPU-light but spawns workers, so we want automatic restarts and log hand-off to journald.

Create a dedicated user

```bash
sudo useradd -r -s /usr/sbin/nologin openclaw
sudo mkdir -p /etc/openclaw
sudo chown -R openclaw:openclaw /etc/openclaw
```

Copy your config.yaml and gateway.env into /etc/openclaw.

Systemd unit file

```ini
# /etc/systemd/system/openclaw.service
[Unit]
Description=OpenClaw Gateway
After=network.target

[Service]
User=openclaw
Environment=OPENCLAW_CONFIG_DIR=/etc/openclaw
ExecStart=/usr/bin/env openclaw gateway start --pretty-logs
Restart=on-failure
RestartSec=3
# harden
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=full

[Install]
WantedBy=multi-user.target
```

Enable and start

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now openclaw
```

Check status:

systemctl status openclaw -n 50

Logs stream directly from the journal:

journalctl -u openclaw -f

Tip: Many users forget to update the unit after upgrading to node 22. If the binary path changes, systemd will happily keep restarting into an old Node and crash. Run which node and verify.

Running the daemon on macOS with launchd

macOS has launchd instead of systemd. The idea is the same: a plist declares when and how the process starts, and the plist below points log output at ~/Library/Logs.

Create a plist

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- ~/Library/LaunchAgents/com.openclaw.gateway.plist -->
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.openclaw.gateway</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/openclaw</string>
        <string>gateway</string>
        <string>start</string>
        <string>--pretty-logs</string>
    </array>
    <key>EnvironmentVariables</key>
    <dict>
        <key>OPENCLAW_CONFIG_DIR</key>
        <string>/Users/alice/.openclaw</string>
    </dict>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <dict>
        <key>SuccessfulExit</key>
        <false/>
    </dict>
    <key>StandardOutPath</key>
    <string>/Users/alice/Library/Logs/openclaw.out.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/alice/Library/Logs/openclaw.err.log</string>
</dict>
</plist>
```

Load it:

launchctl load ~/Library/LaunchAgents/com.openclaw.gateway.plist

Unload and reload after edits:

```bash
launchctl unload ~/Library/LaunchAgents/com.openclaw.gateway.plist
launchctl load ~/Library/LaunchAgents/com.openclaw.gateway.plist
```

The biggest difference from systemd: launchd doesn’t give a fancy status view. You read the log files or use log stream --predicate 'process == "openclaw"'.

Debugging, logs, and healthy restarts

Quick commands I keep in muscle memory

  • openclaw gateway doctor – prints YAML sanity checks (port availability, DB permissions).
  • openclaw gateway reload – zero-downtime reload of config.yaml; works via SIGHUP.
  • openclaw gateway tail – follow logs/gateway-*.log with colorized output.
  • openclaw cache clear – wipes tooling caches when odd state leaks.
  • pkill -USR1 -f "openclaw gateway" – dumps heap profile to /tmp/heap.heapsnapshot.
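The zero-downtime reload relies on the gateway process handling SIGHUP in place of a restart. A minimal sketch of that pattern (assumed behavior, not OpenClaw's actual handler):

```python
import os
import signal

# Minimal sketch of a SIGHUP-driven config reload (assumed pattern, not
# OpenClaw's real code): the daemon swaps its config in place instead of
# restarting, so open sockets and in-flight requests survive.
class ReloadableConfig:
    def __init__(self, loader):
        self._loader = loader          # callable returning a fresh config dict
        self.current = loader()
        self.reload_count = 0
        signal.signal(signal.SIGHUP, self._on_hup)

    def _on_hup(self, signum, frame):
        self.current = self._loader()  # re-read config, keep the process alive
        self.reload_count += 1

cfg = ReloadableConfig(lambda: {"port": 3210, "log_level": "info"})
os.kill(os.getpid(), signal.SIGHUP)    # what `openclaw gateway reload` would trigger
print(cfg.reload_count)                # 1
```

Anything that can't be swapped live (the listen port, for instance) still needs a full restart.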

Log file anatomy

```
2024-07-22T17:04:12.433Z [INFO ] (router.ts:45) ↪ POST /slack/interactive 200 142ms
2024-07-22T17:04:13.011Z [ERROR] (tools.ts:87) ✖ GitHub API 403 "rate limit exceeded"
2024-07-22T17:04:13.012Z [DEBUG] (retry.ts:31) waiting 30s before retry #2
```
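Timestamp, level, and source location are regular enough to parse mechanically. A throwaway parser, assuming the format shown above stays stable:

```python
import re

# Throwaway parser for the gateway log format shown above (assumed stable).
LOG_LINE = re.compile(
    r"^(?P<ts>\S+) \[(?P<level>\w+)\s*\] \((?P<src>[^)]+)\) (?P<msg>.*)$"
)

def parse(line: str) -> dict:
    m = LOG_LINE.match(line)
    return m.groupdict() if m else {}

rec = parse('2024-07-22T17:04:13.011Z [ERROR] (tools.ts:87) ✖ GitHub API 403 "rate limit exceeded"')
print(rec["level"], rec["src"])  # ERROR tools.ts:87
```

Handy for counting errors per file with a few lines of scripting instead of eyeballing the tail.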

Because the logs rotate daily and keep 14 files by default, disk usage stays bounded. Tweak retention in config.yaml with log_retention_days.
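For example, to keep a month of logs instead (key name as described above; adjust if your version differs):

```yaml
# ~/.openclaw/config.yaml
log_retention_days: 30
```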

Common failure cases I hit

  • Port already in use – Usually Docker Desktop still running an old process. lsof -i :3210 tells you which one.
  • Webhook 403s – external_url mis-configured, or a bad DNS/SSL chain.
  • OOM kills on small VPS – Worker threads default to 4. Drop to 2 in config.yaml if you only have 1 GB RAM.
  • System clock drift – JWT expiry on Slack/Telegram is strict. Run chrony or ntpd.
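For the OOM case, the fix is a one-line change in config.yaml:

```yaml
# ~/.openclaw/config.yaml -- small-VPS profile
workers: 2   # halve the worker_threads pool to fit in 1 GB RAM
```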

Next steps

You now have the gateway bound to a stable port, config files under version control, and a daemon that restarts on failure whether you run Linux or macOS. The next logical thing is to connect an integration. Try openclaw connect slack --workspace myteam and watch the logs light up. If something misbehaves, grab the exact error from journalctl or log stream and drop it in the GitHub Discussions board — odds are somebody else has hit it already.