If you want an OpenClaw instance that mirrors a real software team—frontend engineer, backend engineer, test automation, and tech writer—this post walks through the exact setup. The community calls it “multi-agent coding.” We’ll spin up four specialized agents, wire them together with a minimal event bus, keep their workspaces sandboxed, and still let them share project context so they don’t step on each other.

Why run multiple specialized OpenClaw agents?

One monolithic agent can handle code generation, but users in GitHub Discussion #231 noticed quality drops when the prompt switched domains—UI code mixed TypeScript and JSX, then the same context produced flaky Jest specs. Splitting responsibilities lets each agent keep a tighter system prompt and a smaller chunk of memory. That translates to:

  • Cleaner diffs (the frontend agent never writes Go, the backend agent never touches CSS)
  • Fewer hallucinations because token budgets stay domain-specific
  • Parallel execution: agents can work at the same time instead of serially

Prerequisites and versions

Everything below was tested on:

  • Node 22.2.0 (OpenClaw requires >=22)
  • OpenClaw 0.38.4 (npm release from last week)
  • Redis 7.2 acting as the lightweight event bus
  • Git 2.45 with a monorepo layout (/apps for code, /docs for Markdown)
  • macOS 14.5 and Ubuntu 22.04 containers; both behaved the same

Folder layout and workspace isolation

We’ll create a top-level openclaw-agents directory and mount subfolders into each agent container. That hard boundary prevents one agent from accidentally writing files into another's workspace.

```
openclaw-agents/
├── frontend/
│   └── workspace/     # React code lives here
├── backend/
│   └── workspace/     # Go API lives here
├── tests/
│   └── workspace/     # Cypress + Jest specs
├── docs/
│   └── workspace/     # Markdown + Docusaurus site
└── shared/
    └── event-bus/     # Redis socket volume
```

Each folder will host its own .clawrc.json (system prompt + tools) plus a persistent memory.jsonl file. If you run everything on ClawCloud, you can replicate the same structure by creating four separate agents and attaching the same Redis URL.

Bootstrapping the four agents

Install OpenClaw globally once, then run it with different --config flags:

```shell
npm i -g @openclaw/cli@0.38.4

# frontend
openclaw daemon --config frontend/agent.json &
# backend
openclaw daemon --config backend/agent.json &
# tests
openclaw daemon --config tests/agent.json &
# docs
openclaw daemon --config docs/agent.json &
```

The agent.json files differ only in their name, system prompt, mounted tools, and workspace path. Example for the backend agent:

```json
{
  "name": "backend-gopher",
  "workspace": "./backend/workspace",
  "systemPrompt": "You are a strict Go backend engineer. Do not write UI. Use chi router. Keep code idiomatic.",
  "tools": [
    "shell",
    "browser",
    "memory",
    "git|commit",
    "github|pr:create"
  ],
  "eventBus": {
    "type": "redis",
    "url": "redis://127.0.0.1:6379/0"
  }
}
```

Repeat for the other agents, swapping out the instruction set and tools. The test agent, for instance, adds cypress|run and jest|run integrations from the Composio catalog.

Event bus for inter-agent communication

The simplest pattern: publish/subscribe via Redis channels. We don’t need NATS or Kafka until load gets ridiculous. Each agent subscribes to a channel matching its specialization and also listens to broadcast for global announcements.

```js
// example snippet in agent bootstrap script
import { createClient } from 'redis';

const bus = createClient({ url: process.env.EVENT_BUS_URL });
// node-redis puts a subscribing connection into subscriber mode, where it
// can no longer publish, so use a duplicate connection for subscriptions
const sub = bus.duplicate();
await bus.connect();
await sub.connect();

// route messages
await sub.subscribe('frontend', (msg) => handleMessage(JSON.parse(msg)));
await sub.subscribe('broadcast', (msg) => handleBroadcast(JSON.parse(msg)));

function publish(channel, payload) {
  return bus.publish(channel, JSON.stringify(payload));
}
```

Use the onFinish hook inside OpenClaw to broadcast completed tasks:

```js
module.exports.onFinish = async function (task, result, claw) {
  const channel = `${task.labels.target}-done`; // e.g., frontend-done
  await claw.eventBus.publish(channel, { taskId: task.id, diff: result.diff });
};
```

Shared context without leaking entire memory files

Blindly sharing memory leads to bloated prompts. Two options emerged in the community:

  1. Pointer strategy: only share commit SHAs or doc URLs. Each agent fetches the actual content on demand via the browser tool. Lightweight, but slower.
  2. Chunked memory strategy: write significant findings to Redis with a short TTL and tag them (type:api-endpoint, type:ui-component). Other agents pull by tag.

I went with option 2. The TTL (24 h) prevents memory rot, and tags allow the docs agent to retrieve type:godoc entries only.
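To make the retrieval side of option 2 concrete, here is a minimal sketch of the tag filter an agent can run over entries pulled from the shared store. It assumes entries arrive as JSON strings (as they are written above); `pullByTag` is a hypothetical helper name, not part of the OpenClaw API.

```javascript
// Filter shared-memory entries down to one tag (option 2 above).
// `entries` is an array of JSON strings already fetched from Redis.
function pullByTag(entries, tag) {
  return entries
    .map((raw) => JSON.parse(raw))
    .filter((entry) => entry.type === tag);
}
```

The docs agent, for example, would call `pullByTag(entries, 'godoc')` and ignore everything else, keeping its prompt small.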

```js
// store snippet after backend agent exposes a new route
await bus.hSet('memory', `endpoint:${sha}`, JSON.stringify({
  type: 'api-endpoint',
  method: 'POST',
  path: '/v1/users/login',
  description: 'User authentication endpoint',
  sha
}));
```

Orchestration patterns in the wild

Three patterns pop up on Discord almost daily:

1. Producer → Consumer

Frontend agent produces lightweight interface stubs, backend agent consumes them to implement APIs. Works fine until naming mismatches creep in ("LoginCard" vs "SignInCard"). Add a strict JSON contract to the event payload to reduce drift.
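One way to enforce that contract is to validate every payload before the consumer agent sees it. The sketch below is an assumption about what such a contract could look like; the field names (`component`, `props`, `endpoint`) and the PascalCase rule are illustrative, not an OpenClaw convention.

```javascript
// Strict contract check for producer → consumer payloads.
// Field names and the naming rule here are illustrative assumptions.
const CONTRACT_FIELDS = ['component', 'props', 'endpoint'];

function validateContract(payload) {
  const errors = [];
  for (const field of CONTRACT_FIELDS) {
    if (!(field in payload)) errors.push(`missing field: ${field}`);
  }
  // One canonical naming style catches "LoginCard" vs "SignInCard" drift
  // before the consumer agent starts implementing against the wrong name.
  if (payload.component && !/^[A-Z][A-Za-z0-9]*$/.test(payload.component)) {
    errors.push(`component must be PascalCase: ${payload.component}`);
  }
  return errors;
}
```

Rejecting the event (and publishing the error list back to the producer) is cheaper than letting two agents build against different names.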

2. Observer

Test and docs agents watch Git diffs and react. No publishing from others required. Low coupling, easiest to scale horizontally.

3. Round-robin refinement

A task cycles through agents in a fixed order: backend generates endpoint skeleton, frontend renders UI, tests write failing specs, docs write prose. Attractive demo, but long feedback loops. You’ll want to set a global TTL so loops don’t run forever.

Putting it all together: a docker-compose.yml you can copy

```yaml
version: '3.9'
services:
  redis:
    image: redis:7.2-alpine
    volumes:
      - ./shared/event-bus:/data
    ports:
      - '6379:6379'
  frontend:
    image: node:22-alpine
    working_dir: /app
    volumes:
      - ./frontend:/app
    environment:
      - EVENT_BUS_URL=redis://redis:6379/0
    command: sh -c "npm i -g @openclaw/cli@0.38.4 && openclaw daemon --config agent.json"
    depends_on:
      - redis
  backend:
    image: node:22-alpine
    working_dir: /app
    volumes:
      - ./backend:/app
    environment:
      - EVENT_BUS_URL=redis://redis:6379/0
    command: sh -c "npm i -g @openclaw/cli@0.38.4 && openclaw daemon --config agent.json"
    depends_on:
      - redis
  tests:
    image: node:22-alpine
    working_dir: /app
    volumes:
      - ./tests:/app
    environment:
      - EVENT_BUS_URL=redis://redis:6379/0
    command: sh -c "npm i -g @openclaw/cli@0.38.4 && openclaw daemon --config agent.json"
    depends_on:
      - redis
  docs:
    image: node:22-alpine
    working_dir: /app
    volumes:
      - ./docs:/app
    environment:
      - EVENT_BUS_URL=redis://redis:6379/0
    command: sh -c "npm i -g @openclaw/cli@0.38.4 && openclaw daemon --config agent.json"
    depends_on:
      - redis
```

Run docker compose up -d and watch the logs. Each agent exposes its gateway on a random high port; map fixed ports in the compose file if you prefer stable URLs.

Security footnotes

Isolating workspaces helps, but you also need to lock down tools:

  • Disable shell for docs agent—no reason it needs rm -rf
  • Provide read-only GitHub tokens to observer agents
  • Set FS_SANDBOX=true (added in 0.38.0) to prevent symlink escapes

Several users on Slack reported token leaks when using shared AWS creds across agents. The fix: mount different ~/.aws/credentials files or inject them via secrets manager.
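One way to do the per-agent mount is to extend each service in the compose file above with a read-only credentials volume. The secrets/ paths below are illustrative, assuming you keep one credentials file per agent:

```
  backend:
    volumes:
      - ./backend:/app
      - ./secrets/backend-aws-credentials:/root/.aws/credentials:ro
```

Repeat per service with its own file, so a leaked token from one agent never grants another agent's permissions.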

Operational monitoring

Use the built-in Prometheus exporter (--metrics-port 9464) introduced in 0.37.7. Grafana dashboards are already in examples/grafana/ on GitHub. Key metrics:

  • agent_tasks_active – spikes mean you may need another GPU/CPU worker
  • token_budget_exhausted_total – if >0, raise memory limits or trim system prompts
  • event_bus_lag_seconds – anything >1 s degrades hand-offs

What breaks first and how to fix it

Based on two weeks of dogfooding:

  1. Namespace collisions. Two agents create Login.tsx in different folders. Mitigation: enforce PascalCase filenames for the frontend agent, snake_case for the backend agent.
  2. Memory bloat. Shared Redis keys balloon past 200 MB. Set maxmemory-policy allkeys-lru.
  3. Deadlocks. Round-robin loops stall when an agent crashes. Add a watchdog container that deletes Redis keys older than 30 min and pings Slack.
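The watchdog's sweep is easy to factor as a pure function, with the actual Redis deletes and Slack ping attached around it. This is a sketch; the `{ name, createdAt }` record shape is an assumption, and the 30-minute cutoff follows the text above.

```javascript
// Pick out keys older than the cutoff so the watchdog can delete them
// and report. `keys` is an array of { name, createdAt } records built
// from whatever metadata you store alongside each Redis key.
const STALE_AFTER_MS = 30 * 60 * 1000; // 30 minutes

function staleKeys(keys, now = Date.now()) {
  return keys
    .filter((k) => now - k.createdAt > STALE_AFTER_MS)
    .map((k) => k.name);
}
```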

Next step: introduce a coordinator or let agents self-organize?

If you stay under four agents, the pub/sub approach holds up fine. At six or more, consider a lightweight coordinator written in Go (community sample: openclaw-coordinator on GitHub). It assigns tasks, tracks dependencies, and surfaces a single progress UI. Personally, I keep the DIY Redis setup for side projects—fewer moving parts, easier to explain to new contributors.

That’s it. You now have a functioning OpenClaw multi-agent coding setup where each agent sticks to its lane yet collaborates through a tiny event bus. Fork the repo template, tweak prompts, and see how far the agents can carry your next release cycle without human intervention.