Need to turn your support@company.com inbox into a sane, trackable workflow? This guide shows exactly how we wired OpenClaw 0.32.1 and Composio Gmail to handle 9,000+ messages a week, assign them to six agents, and hit a 2-hour first-response SLA—without humans fighting over the same thread.

Why extend OpenClaw from personal to team email?

Single-user OpenClaw setups are straightforward: one daemon, one memory store, direct notifications. The moment you plug in a shared mailbox, three things explode:

  • Concurrency: multiple humans + multiple OpenClaw instances can pick the same email.
  • State: you need persistent ticket IDs, statuses, assignees, deadlines.
  • Reliability: somebody will always reply to the wrong thread at 3 a.m.

The stock agent loop isn’t opinionated about any of this. That’s fine for tinkering, terrible for teams. Below is the architecture we landed on after two weeks of yak-shaving.

Architecture: Multi-Agent OpenClaw for Shared Inbox Monitoring

Components in the final stack:

  • Gateway (web UI) — one per environment.
  • Daemon — scales horizontally; we run three replicas on ClawCloud.
  • Composio Gmail connector — OAuth service account tied to support@company.com.
  • Redis 7.2 — global lock + rate limiter.
  • PostgreSQL 16 — ticket store, SLA timers.
  • Six human agents on Slack, each with their own OpenClaw persona.

Every incoming message goes through the same funnel:

  1. Gmail webhook → Gateway.
  2. Gateway creates email.received event in Redis.
  3. One daemon grabs a distributed lock (SETNX email:<msgId>).
  4. Daemon creates/updates a ticket row, emits ticket.created.
  5. Routing policy assigns owner_id; agent is pinged on Slack.
  6. Clock jobs check ticket.deadline; escalate if overdue.

Nothing fancy, just explicit locks and database state so that two daemons never process the same Gmail ID.
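The step-3 lock is the whole concurrency story, so it's worth seeing its semantics in miniature. A real daemon would issue a Redis SETNX (or SET ... NX PX) call via a client like ioredis; the in-memory `Locks` class below only mimics that behavior so the win/lose flow is visible. All names here are illustrative, not OpenClaw API.

```typescript
// Sketch of the step-3 dedup guard. A real daemon would call Redis;
// this in-memory class just illustrates the semantics: first caller
// wins, and the lock expires after a TTL so a crashed daemon can't
// wedge a message forever.
class Locks {
  private held = new Map<string, number>(); // key -> expiry (ms epoch)

  // Returns true iff the lock was acquired (SET key NX PX ttl).
  acquire(key: string, ttlMs: number, now = Date.now()): boolean {
    const exp = this.held.get(key);
    if (exp !== undefined && exp > now) return false; // another holder, still live
    this.held.set(key, now + ttlMs);
    return true;
  }
}

const locks = new Locks();

function handleEmailReceived(msgId: string): boolean {
  // Only the daemon replica that wins the lock processes this Gmail ID.
  if (!locks.acquire(`email:${msgId}`, 60_000)) return false; // another replica won
  // ... step 4: create/update the ticket row, emit ticket.created ...
  return true;
}
```

The TTL matters: if a daemon crashes mid-processing, the lock lapses and another replica can retry, which is why ticket creation (next section) has to be idempotent.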

Prerequisites and Environment Setup

Versions and tooling

  • Node.js 22.2+
  • OpenClaw 0.32.1 (npm i -g openclaw@0.32.1)
  • ClawCloud CLI 0.7.5 (npm i -g clawcloud)
  • Redis 7.2 (ElastiCache or local)
  • PostgreSQL 16

Spin up the core services

# Redis
docker run -d --name redis -p 6379:6379 redis:7.2

# Postgres
docker run -d --name pg -e POSTGRES_PASSWORD=claw -p 5432:5432 postgres:16

# Gateway (local)
openclaw gateway --port 3000

Create a .env file with connection strings so every daemon/container reads the same settings:

REDIS_URL=redis://localhost:6379
DATABASE_URL=postgres://postgres:claw@localhost:5432/postgres
EMAIL_LOCK_TTL=60000   # 60s

Connecting the Shared Inbox to OpenClaw

We use Gmail because Google Workspace has sane service accounts. Steps are identical for Microsoft 365 if you swap scopes.

1. Create the Composio integration

clawcloud connect gmail --label support-inbox

The command opens OAuth; log in as the shared mailbox. ClawCloud stores the refresh token encrypted in Vault.

2. Configure a webhook in Gmail

Gmail pushes to a URL when mail arrives. Point it at the Gateway:

POST https://your-gw.claw.cloud/webhooks/gmail/support-inbox

The gateway now emits email.received with the Gmail messageId, subject, body, sender.
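One caveat worth knowing: a raw Gmail push notification is a Pub/Sub envelope whose `data` field is base64 JSON carrying only the mailbox address and a `historyId`; the connector still has to call the Gmail API (`users.history.list` / `users.messages.get`) to fetch subject, body, and sender before email.received can carry them. A hedged sketch of the envelope decode, with illustrative type names:

```typescript
// Gmail push notifications arrive as a Pub/Sub envelope; `data` is
// base64-encoded JSON holding only the mailbox address and a historyId.
// The gateway must still call the Gmail API to fetch the new messages
// before it can emit email.received with subject/body/sender.
interface PubSubEnvelope {
  message: { data: string; messageId: string };
  subscription: string;
}

interface GmailNotification {
  emailAddress: string;
  historyId: string;
}

function decodeGmailPush(body: PubSubEnvelope): GmailNotification {
  const json = Buffer.from(body.message.data, 'base64').toString('utf8');
  return JSON.parse(json) as GmailNotification;
}
```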

Auto-generating Tickets from Incoming Email

OpenClaw has no baked-in ticket model, so we rolled a bare-bones schema:

CREATE TABLE tickets (
  id          BIGSERIAL PRIMARY KEY,
  gmail_id    TEXT UNIQUE NOT NULL,
  subject     TEXT,
  requester   TEXT,
  status      TEXT DEFAULT 'open',
  owner_id    TEXT,
  sla_minutes INT DEFAULT 120,
  created_at  TIMESTAMP DEFAULT now(),
  deadline    TIMESTAMP GENERATED ALWAYS AS
                (created_at + sla_minutes * INTERVAL '1 minute') STORED
);

Then an OpenClaw action registered in src/actions/tickets.ts:

import { db } from '../db';

// Insert a ticket row for each new Gmail message. ON CONFLICT makes this
// idempotent, so retried webhooks and duplicate events are harmless.
// Field names match the email.received payload (messageId, subject, sender).
export async function createOrUpdateTicket(evt) {
  const { messageId, subject, sender } = evt.payload;
  await db.query(
    `INSERT INTO tickets (gmail_id, subject, requester)
     VALUES ($1, $2, $3)
     ON CONFLICT (gmail_id) DO NOTHING`,
    [messageId, subject, sender]
  );
}

Add it to the daemon pipeline:

claw.on('email.received', createOrUpdateTicket);

Routing to Team Members Without Agent Collisions

The simplest routing rule is round-robin. We kept that logic in the database so every daemon sees the same state.

CREATE TABLE agent_rotation (
  id       SERIAL PRIMARY KEY,
  agent_id TEXT UNIQUE NOT NULL,
  position INT UNIQUE NOT NULL
);

-- seed once
INSERT INTO agent_rotation (agent_id, position) VALUES
  ('alice',1),('bob',2),('cory',3),('dana',4),('emil',5),('faye',6);

The assignment function lives next to the ticket code:

export async function assignTicket(gmailId) {
  return db.tx(async t => {
    // FOR UPDATE locks the ticket row, so a second daemon blocks here
    // instead of double-assigning; once the first transaction commits,
    // it sees owner_id already set and returns early.
    const ticket = await t.oneOrNone(
      'SELECT * FROM tickets WHERE gmail_id=$1 FOR UPDATE',
      [gmailId]
    );
    if (!ticket || ticket.owner_id) return ticket;

    const agent = await t.one(
      'SELECT agent_id FROM agent_rotation ORDER BY position LIMIT 1 FOR UPDATE'
    );
    await t.none('UPDATE tickets SET owner_id=$1 WHERE gmail_id=$2', [
      agent.agent_id,
      gmailId,
    ]);
    // Push the chosen agent to the back of the rotation.
    await t.none(
      'UPDATE agent_rotation SET position = position + 100 WHERE agent_id=$1',
      [agent.agent_id]
    );
    return { ...ticket, owner_id: agent.agent_id };
  });
}

We call assignTicket right after createOrUpdateTicket. Because assignment happens inside a single transaction that locks the rows it touches, two daemons racing on the same ticket serialize: Postgres lets one win, and the other sees owner_id already set and skips the reassignment.

Notifying the human on Slack

Each human agent has a personal OpenClaw instance connected to Slack via the built-in Slack app template (clawcloud connect slack --label alice-slack). We send them the thread link and a claim button so they know it’s theirs.
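Assuming the Slack integration accepts standard Block Kit payloads, the notification can be shaped like this sketch. The `claim_ticket` action id and the Gmail deep-link format are our own conventions here, not OpenClaw API:

```typescript
// Hedged sketch: builds a Slack Block Kit payload with the ticket's
// Gmail thread link and a "Claim" button. The action_id and link format
// are illustrative conventions, not documented OpenClaw behavior.
interface Ticket {
  id: number;
  subject: string;
  gmail_id: string;
}

function claimMessage(t: Ticket) {
  return {
    text: `Ticket #${t.id}: ${t.subject}`, // plain-text fallback for notifications
    blocks: [
      {
        type: 'section',
        text: {
          type: 'mrkdwn',
          text: `*Ticket #${t.id}*: ${t.subject}\n<https://mail.google.com/mail/u/0/#all/${t.gmail_id}|Open thread>`,
        },
      },
      {
        type: 'actions',
        elements: [
          {
            type: 'button',
            text: { type: 'plain_text', text: 'Claim' },
            action_id: 'claim_ticket',
            value: String(t.id),
          },
        ],
      },
    ],
  };
}
```

The button click comes back as a Slack interaction event; wiring that to a `status` update on the ticket row is a one-line handler.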

Adding SLA Tracking and Escalations

SLA timers are just cron + SQL. We piggyback on OpenClaw’s scheduler:

claw.schedule('*/5 * * * *', async () => {
  const overdue = await db.any(
    "SELECT id, subject, owner_id FROM tickets WHERE status='open' AND deadline < now()"
  );
  for (const t of overdue) {
    claw.emit('ticket.overdue', t);
  }
});

Escalation policy

Our rule: if a ticket is overdue, page the on-call lead on Telegram and move it to the top of the queue.

claw.on('ticket.overdue', async (t) => {
  await claw.actions.telegram.sendMessage({
    chatId: process.env.LEAD_CHAT_ID,
    text: `[SLA] Ticket #${t.id} is overdue. Subject: ${t.subject}. Assigned to ${t.owner_id}.`
  });
  await db.none('UPDATE tickets SET sla_minutes = sla_minutes / 2 WHERE id=$1', [t.id]);
});

Yes, halving sla_minutes (and with it the generated deadline) is brutal, and because the cron keeps matching open overdue tickets, the lead keeps getting paged every five minutes until someone closes the ticket. It keeps metrics honest.

Deploying the Multi-Agent Stack on ClawCloud

Local Docker was fine for testing. Production needed three things:

  1. Stateless daemons on auto-scaling nodes.
  2. Centralised secrets.
  3. Zero downtime deploys.

1. Ship the code

clawcloud deploy --project support-email --env production

ClawCloud builds the Dockerfile, injects Node 22.2, caches npm install.

2. Scale daemons

clawcloud scale daemon 3 --project support-email

Because we’re using Redis locks and Postgres transactions, three copies are safe. We tested with 12; Gmail API rate limits hit first.

3. Environment variables

clawcloud secrets set REDIS_URL redis://redis:6379
clawcloud secrets set DATABASE_URL postgresql://...   # RDS URL

Secrets propagate automatically to new replicas.

Monitoring and Operational Tips

  • Observability: turn on OpenClaw’s built-in Prometheus export (OPENCLAW_METRICS=true). Grafana dashboard template is in /observability/grafana.json.
  • Dead letter queue: if Gmail webhooks fail, Google retries with exponential backoff, but not forever. We store the last 1,000 failed payloads in S3 for replay.
  • Thread replies: When an agent replies on Slack, pipe it back via Gmail’s messages.send API with In-Reply-To header set. Prevents broken Gmail threads.
  • Rate limits: the Gmail API caps each user at 250 quota units per second, and a send costs 100 units. The shared mailbox is a single user, so we throttle, pausing 100 ms between sends.
  • Failover: if Redis dies, processing stops (locks can’t be acquired). Keep a standby; we use AWS Multi-AZ.
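The thread-reply tip deserves a concrete shape. This sketch builds an RFC 2822 reply with In-Reply-To and References pointing at the original Message-ID, then base64url-encodes it the way Gmail's users.messages.send expects in its `raw` field (pass the original threadId alongside it to keep the conversation intact):

```typescript
// Builds the raw payload for Gmail's users.messages.send. Setting
// In-Reply-To and References to the original Message-ID is what keeps
// Gmail (and every other mail client) from breaking the thread.
function buildReplyRaw(opts: {
  to: string;
  from: string;
  subject: string;
  inReplyTo: string; // original Message-ID header, e.g. "<abc@mail.gmail.com>"
  body: string;
}): string {
  const mime = [
    `From: ${opts.from}`,
    `To: ${opts.to}`,
    `Subject: Re: ${opts.subject}`,
    `In-Reply-To: ${opts.inReplyTo}`,
    `References: ${opts.inReplyTo}`,
    'Content-Type: text/plain; charset=utf-8',
    '', // blank line separates headers from body
    opts.body,
  ].join('\r\n');
  // Gmail's API expects the raw RFC 2822 message base64url-encoded.
  return Buffer.from(mime).toString('base64url');
}
```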

That’s the entire pipeline: Gmail → Gateway → Redis lock → Ticket DB → Slack/Telegram notifications → SLA cron. It’s boring, deterministic, and my on-call rotation has been quiet for three months.

Next step: add semantic search on historical tickets with OpenAI embeddings so agents get suggested answers before typing. But that’s another article.