If you need one OpenClaw agent that can write code, triage email, and crawl the web simultaneously, you actually need three agents. For real-world deployments we rarely run a single Swiss-army agent; instead we spin up multiple personas with isolated workspaces and let them collaborate over well-defined boundaries. This post walks through the exact steps, directory layout, and runtime knobs I use to keep agents from clobbering each other.
Why you want isolated OpenClaw workspaces
OpenClaw ships with a global default workspace at ~/.claw. That is fine for testing, but once you put agents in production you run into three pain points:
- Conflicting dependencies – the coding agent needs `ollama llama-3-70b` while the email agent can live on a cheap OpenAI `gpt-3.5-turbo`. One `claw.yaml` can't express both cleanly.
- Memory bleed – a single vector store means your research agent's web scraps leak into the email agent's retrieval step. Weird recommendations ensue.
- Channel chaos – Telegram messages land in the Discord bot because both agents share the same webhook secret. Debugging that at 2 AM is not fun.
Workspaces fix all three. Each workspace is just a directory with its own claw.yaml, memory/, and plugins/. The daemon binds to that dir at startup and never touches siblings. No hidden magic.
Directory layout and workspace creation
The convention I follow (borrowed from the community Slack) is one top-level /opt/openclaw and N sub-dirs:
/opt/openclaw/
├─ code-agent/
│ ├─ claw.yaml
│ ├─ memory/
│ └─ plugins/
├─ mail-agent/
│ ├─ claw.yaml
│ └─ ...
└─ research-agent/
└─ ...
Creating a workspace is just:
mkdir /opt/openclaw/code-agent && cd $_
openclaw init --workspace # OpenClaw v0.31+
That drops a minimal claw.yaml pointing at SQLite memory. Feel free to swap in Postgres, Supabase, or an external Qdrant URL later.
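For reference, a minimal workspace config might look roughly like this; the field names mirror the mail-agent example later in this post, and the SQLite path is an assumption, so check what `init` actually generated:

```yaml
# Hypothetical minimal claw.yaml -- treat field names as illustrative
agent:
  name: "CodeBot"
  model: "gpt-4o"
memory:
  stores:
    default:
      provider: sqlite
      path: ./memory/claw.db   # assumed default location inside the workspace
```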
Version gotchas
openclaw init --workspace landed in 0.31.0. On 0.30 you need to clone examples/multi-agent manually. Also, make sure you’re on Node 22+ globally or via nvm:
nvm install 22
nvm use 22
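A small guard in your provisioning script catches the wrong-Node case early; `node_major` here is just a helper for this sketch:

```shell
# Extract the major version from `node --version` output (e.g. "v22.4.1")
node_major() {
  v="$1"
  v="${v#v}"                 # strip the leading "v"
  printf '%s\n' "${v%%.*}"   # keep everything before the first dot
}

# Warn when the daemon would run on Node < 22
version="$(node --version 2>/dev/null || echo v0.0.0)"
if [ "$(node_major "$version")" -lt 22 ]; then
  echo "OpenClaw needs Node 22+; found $version" >&2
fi
```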
Spinning up multiple daemons (local and on ClawCloud)
You have two runtime paths:
- Self-host – run one daemon per workspace on different ports.
- ClawCloud – create one “Project” per agent; isolation is automatic.
Self-hosting with pm2
I like pm2 because it keeps logs separate and restarts on crash:
# code-agent
pm2 start "openclaw daemon --port 3101 --workspace /opt/openclaw/code-agent" \
--name code-agent
# mail-agent
pm2 start "openclaw daemon --port 3102 --workspace /opt/openclaw/mail-agent" \
--name mail-agent
Persist the config:
pm2 save && pm2 startup # generates a systemd unit
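If you add more personas, a loop keeps names and ports consistent; `emit_pm2_commands` is a throwaway helper for this sketch, and the dry-run output can be piped to `sh` once it looks right:

```shell
# Emit one pm2 start command per workspace so port numbers stay consistent.
emit_pm2_commands() {
  base_port=3100
  i=1
  for ws in code-agent mail-agent research-agent; do
    echo "pm2 start \"openclaw daemon --port $((base_port + i)) --workspace /opt/openclaw/$ws\" --name $ws"
    i=$((i + 1))
  done
}

emit_pm2_commands   # dry run: prints the commands without executing them
```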
Docker-compose alternative
If you prefer containers, use one service per agent:
version: "3.9"
services:
  code-agent:
    image: ghcr.io/openclaw/openclaw:0.31.2
    volumes:
      - ./code-agent:/opt/workspace
    ports:
      - "3101:3000"
    command: ["daemon", "--workspace", "/opt/workspace"]
  mail-agent:
    image: ghcr.io/openclaw/openclaw:0.31.2
    volumes:
      - ./mail-agent:/opt/workspace
    ports:
      - "3102:3000"
    command: ["daemon", "--workspace", "/opt/workspace"]
The container's internal port is always 3000; the compose file maps external ports 3101 and 3102 onto it.
On ClawCloud
The hosted flow is simpler. From the dashboard hit “New Agent”, pick a name, choose a model, done. Under the hood ClawCloud spins a workspace volume per agent, so you never share memory unless you mount an external vector store yourself.
Routing channels per agent
Every agent needs its own endpoint secrets. Example claw.yaml for the mail agent:
agent:
  name: "MailBot"
  model: "gpt-4o"
channels:
  telegram:
    botToken: "${TELEGRAM_TOKEN_MAILBOT}"
  gmail:
    clientId: "${GMAIL_CLIENT_ID}"
    clientSecret: "${GMAIL_CLIENT_SECRET}"
    label: "INBOX"
Code agent points at GitHub, Slack, and the shell integration instead. Keep tokens in per-agent .env files and load them via dotenv or systemd EnvironmentFile. Never reuse the same Slack bot token across agents unless you really want overlapping message history.
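A minimal loader for those per-agent files could look like this; `load_agent_env` is a name made up for the sketch, and dotenv or a systemd `EnvironmentFile=` line does the same job:

```shell
# Export every KEY=VALUE pair from a per-agent .env file before launch
load_agent_env() {
  set -a      # auto-export every variable the sourced file defines
  . "$1"
  set +a
}

# Usage before starting a daemon (path follows the layout in this post):
#   load_agent_env /opt/openclaw/mail-agent/.env
#   openclaw daemon --port 3102 --workspace /opt/openclaw/mail-agent
```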
Reverse proxy tips
If you front everything with Traefik or nginx, route by sub-domain:
code.yourdomain.com -> localhost:3101
mail.yourdomain.com -> localhost:3102
That keeps the Web UI URLs stable even if you redeploy containers.
Memory: private, shared, or hybrid
Memory isolation is where most teams trip. There are three patterns floating around in GitHub Issues (#214, #398, #412):
- Fully private (default) – every workspace has its own SQLite file or Qdrant collection. Easiest, zero leakage, but duplicated data if agents need the same context.
- Central store with namespaces – point all agents at the same Postgres or Weaviate instance but set `collection` to the agent name. Saves disk, enables cross-agent queries later.
- Hybrid – private vector store for short-term scratch, plus a shared "knowledge-base" collection for long-lived docs. This is what we run in production.
Example hybrid config for the research agent:
memory:
  stores:
    default:
      provider: qdrant
      url: "http://qdrant:6333"
      collection: "research-agent-private"
    kb:
      provider: qdrant
      url: "http://qdrant:6333"
      collection: "shared-kb"
The agent code can choose which store at runtime: memory.use('kb').
Cleaning up orphaned vectors
With many agents you end up with abandoned collections. A quick bash one-liner I run weekly:
curl -s http://qdrant:6333/collections | jq -r '.result.collections[].name' \
  | grep -vE 'code-agent|mail-agent|research-agent|shared-kb' \
  | xargs -I{} curl -X DELETE http://qdrant:6333/collections/{}
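Before pointing a destructive pipeline like that at a live instance, the keep-list regex is worth sanity-checking on canned collection names:

```shell
# Dry run of the keep-list: anything it prints would be deleted
keep='code-agent|mail-agent|research-agent|shared-kb'
printf '%s\n' code-agent-private old-experiment shared-kb scratch-2023 \
  | grep -vE "$keep"
# prints old-experiment and scratch-2023; the other two match the keep-list
```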
Patterns for specialized agents in production
Now the fun part—how teams wire these agents together.
1. Coding agent
- Runs `ollama/llama3:70b` if the GPU budget allows; otherwise GPT-4o.
- Integrations: GitHub, Slack, shell, browser.
- Memory: private codebase vectors + shared KB for company docs.
- Schedules: nightly `npm audit` and PR summary at 08:00 UTC.
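The schedule bullet could translate into config along these lines; this is a hypothetical sketch, since the exact schema depends on your OpenClaw version:

```yaml
# Hypothetical schedules block -- key names are assumptions
schedules:
  - name: nightly-audit
    cron: "0 2 * * *"   # assuming the daemon runs in UTC
    task: "Run npm audit in every tracked repo and flag criticals"
  - name: pr-summary
    cron: "0 8 * * *"   # the 08:00 UTC digest mentioned above
    task: "Summarize open PRs and post to Slack"
```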
2. Email agent
- Cheap model (GPT-3.5, Claude 3 Haiku) because volume is high.
- Integrations: Gmail, Outlook Graph API, Telegram for approvals.
- Memory: mostly ephemeral; we purge after 30 days for compliance.
- Trick: use the `regex` tool to auto-file receipts, then push stats to Notion via Composio.
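The receipt trick boils down to a pattern like the following; both the regex and the sample subjects are illustrative:

```shell
# Illustrative receipt filter -- the agent's regex tool would carry
# something similar; pattern and samples are made up for this sketch
pattern='(receipt|invoice|order #[0-9]+|payment confirmation)'
printf '%s\n' \
  "Your receipt from Acme Cloud" \
  "Invoice INV-2041 is ready" \
  "Order #58213 has shipped" \
  "Lunch on Friday?" \
  | grep -iE "$pattern"
# prints the first three subjects; "Lunch on Friday?" falls through
```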
3. Research agent
- Model: GPT-4o or Anthropic Opus for reasoning.
- Integrations: browser, PDF parser, Slack broadcast channel.
- Memory: heavy use of shared KB; seeds articles into it for others.
- Long-running tasks: monthly “market landscape” scrape of 200+ URLs.
We let agents communicate via Slack threads. The coding agent can @research-bot to fetch a source; the mail agent can @code-bot to draft a reply referencing a repo line. Loose coupling, no RPC.
Observability and resource quotas
More agents == more ways to DDoS your wallet. My checks:
- Per-agent OpenAI key with budget alerts.
- Prometheus scrape of `/metrics` (added in 0.30.5) to graph token usage.
- Systemd slice per agent: `MemoryMax=4G CPUQuota=50%` keeps misbehaving scrapers in line.
- ClawCloud users: switch the plan slider at project level, not global account.
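The slice approach could look like this (the unit name is hypothetical; `MemoryMax=` and `CPUQuota=` are standard systemd resource-control directives):

```ini
# /etc/systemd/system/openclaw-code.slice
[Slice]
MemoryMax=4G
CPUQuota=50%
```

Point the daemon's service unit at it with `Slice=openclaw-code.slice` under `[Service]`.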
Log aggregation is equally important. We ship each daemon's JSON logs to Loki with a label like `agent=code`, which makes grepping for stack traces trivial.
Practical takeaway
You can run ten OpenClaw agents on a single VPS or on ClawCloud without them stomping over each other as long as you:
- Create one workspace dir per agent
- Start a separate daemon pointing at that dir and port
- Use distinct channel secrets and optional sub-domains
- Decide upfront on memory isolation vs sharing
From there you can iterate—add a vector store, wire up Composio tools, or split out yet another persona. Most teams I talk to end up with 3-5 specialized agents. Try the pattern, measure the friction, and refine. The CLI is cheap; the architecture scales.