If you just ran npm create openclaw or launched a fresh container on ClawCloud, the first thing you’ll see is the onboarding wizard: openclaw onboard. This guide unpacks every prompt, flag, and file the wizard touches so you know exactly what’s happening under the hood. The whole flow takes about ten minutes, but a little context saves hours of guesswork later.
Why the “openclaw onboard” wizard exists
OpenClaw ships batteries-included, but batteries still need wiring. We learned the hard way (GitHub issue #1438) that dumping a dozen JSON files on new users resulted in misconfigured OAuth apps, duplicate gateways, and bots stuck in limbo. The onboard wizard solves that: one opinionated path that spits out a runnable agent with sane defaults and a repeatable config.
Prerequisites and what the wizard checks
- Node.js 22+ (the daemon relies on fetch() streams only present after 21.6).
- Git in $PATH—the wizard tags the repo at the end so you can diff later.
- A terminal with paste support (you’ll drop in OAuth secrets).
- Optional: a ClawCloud account if you plan to offload hosting.
When you run openclaw onboard, the first thing it does is a quick health check: Node version, write permission to ./.openclaw, and whether a prior onboard.json exists. If any check fails, the wizard exits; nothing half-writes.
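That preflight is roughly this shape (an illustrative sketch based on the description above, not the wizard's actual source):

```javascript
// Sketch of the onboard preflight: Node version, write permission,
// and prior-run detection. Illustrative only, not OpenClaw's real code.
import { accessSync, constants, existsSync } from "node:fs";

export function preflight(dir = "./.openclaw") {
  const problems = [];

  // 1. Node version: fetch() streams need 21.6+, we ask for 22+.
  const [major, minor] = process.versions.node.split(".").map(Number);
  if (major < 22 && !(major === 21 && minor >= 6)) {
    problems.push(`Node ${process.versions.node} is too old; need 22+`);
  }

  // 2. Write permission to the state dir (or its parent, pre-creation).
  try {
    accessSync(existsSync(dir) ? dir : ".", constants.W_OK);
  } catch {
    problems.push(`No write permission for ${dir}`);
  }

  // 3. A prior onboard.json means this is a rerun, not a fresh install.
  const rerun = existsSync(`${dir}/onboard.json`);

  return { ok: problems.length === 0, rerun, problems };
}
```

The all-or-nothing behavior falls out of this shape: if `ok` is false, the wizard exits before writing anything.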
Step 1 – Auth configuration (choosing an identity backend)
The wizard starts by asking “Where should the agent store identity and session state?” You have two paths:
- Local file store (default). Good for hobby projects or anything you’ll run on a single node. Stores tokens in .openclaw/auth.json.
- Redis. Choose this when deploying behind multiple gateways or using ClawCloud’s auto-scaler. You’ll be prompted for a connection string like redis://user:pass@clawcache.use1.cache.amazonaws.com:6379/0.
The choice impacts horizontal scaling later. Switching is possible but noisy—expect to re-auth users—so pick carefully now.
What the wizard writes
Regardless of backend, the wizard appends a block to openclaw.config.mjs:
export const auth = {
  backend: "redis", // or "file"
  url: "redis://user:pass@clawcache...", // only if redis
};
Nothing secret lives in code; credentials go to a .env file that the wizard creates if missing.
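The block above shows the URL inline for clarity; to honor the “nothing secret in code” rule, the config can resolve it from the environment instead. A minimal sketch, assuming the .env has been loaded into process.env (the OPENCLAW_* variable names are hypothetical, not the wizard's documented ones):

```javascript
// Sketch: resolving auth settings from the environment so no secret
// ever lives in openclaw.config.mjs. Variable names are hypothetical.
export function authFromEnv(env = process.env) {
  return {
    backend: env.OPENCLAW_AUTH_BACKEND ?? "file", // "file" is the wizard default
    // Only meaningful when backend is "redis".
    url: env.OPENCLAW_REDIS_URL,
  };
}
```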
Step 2 – Dropping in your OpenAI / Anthropic keys
Even though OpenClaw can run fully local (Mistral, Llama-3-70B-Instruct, etc.), most users start with hosted LLMs for latency. The wizard asks:
→ Which model provider will this agent use by default?
1) OpenAI (gpt-4o, gpt-4-turbo)
2) Anthropic (claude-3-opus)
3) Local (no key required)
Pick one and paste the key when prompted. The key lands in .env as OPENAI_API_KEY or ANTHROPIC_API_KEY. The wizard also writes a matcher block:
export const llm = {
  provider: "openai",
  model: "gpt-4o",
};
Why not ask for multiple providers up front? The debate raged for months (#872, #911), and we learned people forget which token goes where. You can always extend the llm config to multiple providers later.
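If you do branch out later, the extended form might look like this. The array shape below is a hypothetical illustration, not documented behavior; the wizard itself only ever writes the single-provider object shown above:

```javascript
// Hypothetical multi-provider llm config (illustration only; the wizard
// writes a single-provider object, and this array form is an assumption).
export const llm = [
  { provider: "openai", model: "gpt-4o" },           // default, tried first
  { provider: "anthropic", model: "claude-3-opus" }, // hypothetical fallback
];
```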
Step 3 – Gateway settings (hostname, port, SSL)
The gateway is the thin Node.js web UI that proxies chat events to the daemon worker pool. The wizard needs three things:
- Public URL (defaults to http://localhost:5123).
- Port (clashes with React dev servers a lot; change if needed).
- SSL cert paths (optional). If you leave these blank, the wizard sets tls = false. For prod you’ll point to Let’s Encrypt files.
You can also pass these non-interactively:
openclaw onboard \
--gateway.url=https://bot.mydomain.com \
--gateway.port=443 \
--gateway.cert=/etc/letsencrypt/live/bot/fullchain.pem \
--gateway.key=/etc/letsencrypt/live/bot/privkey.pem
The wizard validates the cert pair with openssl x509 -noout -subject. If it fails, you still finish onboard but TLS is disabled and you get a TODO comment in config.
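The fail-open behavior can be sketched like this (illustrative, not the wizard's actual source; it shells out to openssl exactly as described and falls back to disabling TLS instead of aborting):

```javascript
// Sketch of the cert check and its fallback: run openssl against the
// cert; on any failure, finish onboarding with tls disabled.
// Illustrative only, not OpenClaw's real implementation.
import { execFileSync } from "node:child_process";

export function tlsConfig(certPath, keyPath) {
  try {
    execFileSync("openssl", ["x509", "-noout", "-subject", "-in", certPath], {
      stdio: "ignore", // we only care about the exit code
    });
    return { tls: true, cert: certPath, key: keyPath };
  } catch {
    // Mirrors the wizard's behavior: TLS off plus a TODO in config.
    return { tls: false };
  }
}
```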
Step 4 – Channel selection (Slack, Discord, WhatsApp, etc.)
Next the wizard lists the first-party channel adapters baked into the current release (v0.42.3 as I write):
- Slack RTM + Events
- Discord Gateway
- Telegram Bot API
- WhatsApp Cloud API
- Signal (community plug-in)
- Web chat iframe
You can select multiple with spacebar. Most folks start with Slack because it’s OAuth 2.0 instead of webhooks. The wizard then runs a mini-flow for each chosen channel.
Example: Slack token flow
- The wizard opens a browser tab to https://api.slack.com/apps?new.
- You create an app, add the chat:write and commands scopes, and paste the Bot Token (xoxb-...) back.
- The wizard writes it to .env as SLACK_BOT_TOKEN and stamps channels/slack.mjs with the scopes array.
Every channel module follows the same convention—env var for secrets, small file in channels/ for behavioral tweaks. If you skip a field, wizard drops a // TODO comment so linting blocks prod deploys.
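Concretely, a generated channel file might look like this. It is a sketch of the convention just described, not the stamped file verbatim, and the field names are assumptions:

```javascript
// Sketch of channels/slack.mjs: the secret stays in .env, only scopes
// and behavioral tweaks live here. Field names are illustrative.
export const channel = {
  adapter: "slack",
  tokenEnv: "SLACK_BOT_TOKEN", // the adapter reads the secret from process.env
  scopes: ["chat:write", "commands"],
  // Behavioral tweaks would go here, e.g.:
  // threadReplies: true,
};
```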
Step 5 – Skill installation (tooling equals power)
Skills are to OpenClaw what plugins are to ChatGPT. They’re just Node modules describing an input schema, an output schema, and an exec(). During onboarding the wizard asks:
✓ Install recommended starter skills? (y/N)
Hit “y” and it adds the following to package.json plus pins versions:
- @openclaw/skill-weather@3.4.1
- @openclaw/skill-remind@1.2.0
- @composio/gmail@0.9.8 (needs Gmail OAuth later)
The wizard then stubs comment blocks in skills.config.mjs so you can tune rate limits. If you choose “N”, you’ll get a lone @openclaw/skill-shell because the agent can’t function without something to call.
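A minimal skill, matching the shape described above (input schema, output schema, and an exec()), could look like this. The field names are hypothetical; check a real @openclaw/skill-* package for the exact contract:

```javascript
// Minimal skill module sketch: input schema, output schema, exec().
// Field names are assumptions, not the verified OpenClaw skill contract.
export const echoSkill = {
  name: "echo",
  input: { type: "object", properties: { text: { type: "string" } } },
  output: { type: "object", properties: { text: { type: "string" } } },
  async exec({ text }) {
    return { text }; // a real skill would call an API or tool here
  },
};
```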
Custom skills repository
If you have your own monorepo, pass --skills.local=./packages/skills. The wizard will symlink instead of adding NPM deps so npm install stays clean.
Step 6 – Daemon setup and first run
Last piece: the background worker. The wizard asks two things:
- Process manager: none, pm2, or systemd. On ClawCloud we default to supervisord, but locally I pick pm2 for log rotation.
- Max concurrency: how many tasks to run in parallel. Default is Math.max(2, os.cpus().length - 1).
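That default in isolation, at least two workers, otherwise one fewer than the CPU count (so an 8-core box gets 7):

```javascript
// The default concurrency formula quoted above, verbatim.
import os from "node:os";

export const defaultConcurrency = Math.max(2, os.cpus().length - 1);
```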
Choosing pm2 writes a file ecosystem.config.js:
module.exports = {
  apps: [{
    name: "openclaw-daemon",
    script: "node",
    args: "./node_modules/.bin/openclaw daemon",
    env: {
      NODE_ENV: "production"
    },
    max_memory_restart: "1G",
    instances: 1,
  }]
};
systemd gets a /etc/systemd/system/openclaw.service snippet you can sudo tee. The wizard prints the path but won’t enable it—that’s your call.
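For orientation, the unit probably has roughly this shape. This is a guess at the generated file, not its actual contents; inspect the path the wizard prints before you enable anything:

```ini
# Hypothetical shape of the generated openclaw.service (illustrative only;
# read the file the wizard actually writes before enabling it).
[Unit]
Description=OpenClaw daemon
After=network.target

[Service]
ExecStart=/usr/bin/env openclaw daemon
Restart=on-failure
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```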
Smoke test
Right before quitting, the wizard runs openclaw doctor. Expect output like:
✔ Gateway reachable at http://localhost:5123
✔ Daemon responds to healthcheck
✔ 3 skills loaded (weather, remind, shell)
✔ Slack channel subscribed (/openclaw-test)
If any line shows a red ✗, the wizard exits non-zero, which CI can flag.
Undoing or re-running the wizard
Say you fat-fingered the Anthropic key. Rerun openclaw onboard --reset. That wipes .openclaw/onboard.json but leaves your .env untouched. If you need a full nuke use --purge, which deletes channel stubs and skills too.
Non-interactive mode for CI pipelines
The wizard’s prompts are nice once, annoying always. You can script the whole thing:
openclaw onboard \
--auth.backend=redis \
--auth.url=$REDIS_URL \
--llm.provider=openai \
--llm.model=gpt-4o \
--openai.key=$OPENAI_API_KEY \
--gateway.url=https://bot.example.com \
--channels=slack,web \
--skills=weather,remind,shell \
--process=pm2 \
--concurrency=4 \
--yes
Add --commit and the wizard will run git add . && git commit -m "chore: onboard" at the end so your infra engineers can review diffs.
Troubleshooting common onboarding failures
- Port 5123 already in use: usually a React dev server. Rerun with --gateway.port 6006.
- ETIMEDOUT when validating Redis: AWS caches need the tls=true query param. Append it.
- Slack 400 “invalid_auth”: you pasted the Client Secret, not the Bot Token; grab the xoxb- one.
- Composio skill install hangs: npm v10 bug (#1891). Upgrade to npm 10.2.4 or pass --skills.local.
What lands on disk after onboarding
.
├── .env
├── .openclaw/
│   └── onboard.json
├── openclaw.config.mjs
├── skills.config.mjs
├── channels/
│   └── slack.mjs
├── package.json
└── ecosystem.config.js
The onboard.json file is intentionally minimal—one flat object with checksums so the wizard can skip prompts when rerun. It’s safe to commit (no secrets).
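Based on that description, the file might look something like the following. The field names and checksum format are guesses, not the actual schema; open your own onboard.json to see the real keys:

```json
{
  "version": "0.42.3",
  "checksums": {
    "openclaw.config.mjs": "sha256-…",
    "skills.config.mjs": "sha256-…"
  }
}
```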
Next steps: push the agent live
At this point npm run dev spins up both gateway and daemon locally. If you want production uptime, two obvious paths:
- Docker Compose: openclaw onboard --docker > docker-compose.yml drops a ready stack.
- ClawCloud one-click: clawcloud deploy reads your onboard.json and mirrors the setup—auth store, secrets, skills—without more questions.
From here, hook up more skills, invite your team to the Slack channel, and watch the agent hold its own in real conversations. The heavy lifting—auth, keys, gateway, channels, skills, daemon—is already wired thanks to the onboarding wizard.