If you are already using OpenClaw to answer support tickets or deploy via Slack, the next obvious target is the data layer. This guide shows how I wired OpenClaw 0.33.2 to three live databases (Supabase, vanilla PostgreSQL 15, and MongoDB 7.0), let the team run ad-hoc queries from chat, auto-generated migrations, and shipped daily backups—without ever allowing the agent to DELETE rows on its own.
Connecting OpenClaw to your data layer: Supabase, Postgres, Mongo
OpenClaw does not ship a DB client. Instead it piggybacks on Node packages you already trust. I went with:
- pg 8.11 for Postgres/Supabase
- mongodb 6.7 driver for MongoDB
The glue lives in a custom tool definition so the agent can call runQuery.
1. Install packages in the daemon
npm install pg@8 mongodb@6
2. Expose a query tool to OpenClaw
Create tools/db.js:
import { Client as PgClient } from 'pg';
import { MongoClient } from 'mongodb';

const pg = new PgClient({ connectionString: process.env.PG_URL });
await pg.connect();

const mongo = new MongoClient(process.env.MONGO_URL);
await mongo.connect();

export default {
  name: 'runQuery',
  description: 'Run a SQL or Mongo query against the configured database',
  parameters: {
    type: 'object',
    properties: {
      dialect: { type: 'string', enum: ['postgres', 'supabase', 'mongo'] },
      text: { type: 'string', description: 'The raw query' }
    },
    required: ['dialect', 'text']
  },
  run: async ({ dialect, text }) => {
    if (dialect === 'mongo') {
      // db.eval() was removed in MongoDB 4.2, so instead we expect the
      // "query" to be a JSON command document and pass it to db.command().
      return await mongo.db().command(JSON.parse(text));
    }
    const { rows } = await pg.query(text);
    return rows;
  }
};
3. Register the tool
// in gateway.config.mjs
export default {
  tools: ['./tools/db.js'],
};
4. Credentials via ClawCloud dashboard
On ClawCloud, open Agent → Environment and add:
PG_URL=postgresql://...
MONGO_URL=mongodb+srv://...
Deploy. Runtime logs should show Connected to Postgres and Connected to MongoDB.
Chat-driven queries: talking SQL and NoSQL with your agent
Once the tool is live, any channel (Slack, Telegram, etc.) can send messages like:
@clawbot pg: how many users signed up yesterday?
The agent converts intent → tool call:
{
"dialect": "postgres",
"text": "SELECT count(*) FROM users WHERE created_at >= now() - interval '1 day'"
}
and streams back the rows. For Mongo:
@clawbot mongo: list unpaid invoices older than 30 days
Behind the scenes:
- The LLM chooses runQuery.
- We execute, truncate the result set after 200 rows, and format a Markdown table.
- If the query took more than 5 s we post a threaded note with the total runtime.
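The truncate-and-format step can be sketched as a small helper. This is my own illustration, not part of OpenClaw: it assumes pg-style result rows (plain objects, column names taken from the first row) and the 200-row cap described above.

```javascript
// Truncate a result set and render it as a Markdown table for chat.
export function formatRows(rows, limit = 200) {
  if (rows.length === 0) return '_no rows_';
  const shown = rows.slice(0, limit);
  const cols = Object.keys(shown[0]);
  const header = `| ${cols.join(' | ')} |`;
  const divider = `| ${cols.map(() => '---').join(' | ')} |`;
  const body = shown.map(r => `| ${cols.map(c => String(r[c])).join(' | ')} |`);
  // Tell the reader when the table is incomplete.
  const note = rows.length > limit
    ? `\n_truncated to ${limit} of ${rows.length} rows_`
    : '';
  return [header, divider, ...body].join('\n') + note;
}
```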
I’ve kept the tool read-only by default. Writes require an explicit --allow-write suffix:
@clawbot pg: delete from temp_sessions where expires_at < now() --allow-write
Without the flag the agent replies "Refused. Missing --allow-write."
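A minimal sketch of that gate, assuming the flag arrives as a plain suffix on the query text (gateQuery and its verb list are illustrative names, not the actual OpenClaw hook):

```javascript
// Verbs that count as writes for gating purposes.
const WRITE_VERBS = /\b(insert|update|delete|alter|create|drop|truncate)\b/i;

// Strip the --allow-write suffix and decide whether to run the query.
export function gateQuery(text) {
  const allowWrite = /\s--allow-write\s*$/.test(text);
  const query = text.replace(/\s--allow-write\s*$/, '');
  if (WRITE_VERBS.test(query) && !allowWrite) {
    return { run: false, reply: 'Refused. Missing --allow-write.' };
  }
  return { run: true, query };
}
```

Note the `\b` word boundaries: a column named `created_at` or `last_update` will not trip the verb check.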
Automating backups the boring but safe way
The next requirement was vendor-agnostic backups triggered by the agent but executed outside the LLM sandbox.
Cron-style schedules
OpenClaw’s scheduler lives in the daemon; it accepts cron strings. Example:
// gateway.config.mjs
export default {
  schedules: [
    { cron: '0 3 * * *', task: 'pgBackup' },    // 03:00 daily
    { cron: '0 4 * * 0', task: 'mongoBackup' }  // Sundays 04:00
  ]
};
Tasks
// tools/pg-backup.js
import { exec } from 'child_process';

export default {
  name: 'pgBackup',
  run: () => new Promise((res, rej) => {
    // exec runs through a shell, so $PG_URL and $(date +%F) expand there.
    exec('pg_dump $PG_URL | gzip > /backups/$(date +%F).sql.gz', (e) => {
      if (e) return rej(e); // reject with the Error, not the stderr string
      res('backup-complete');
    });
  })
};
Same pattern for Mongo with mongodump.
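A sketch of the Mongo counterpart under the same assumptions (the /backups mount and filename scheme are mine; mongodump's --archive and --gzip flags make the shell pipe unnecessary):

```javascript
// tools/mongo-backup.js — mirrors the pg-backup task above.
import { exec } from 'child_process';

// Build the dated archive path (hypothetical /backups mount).
export function archivePath(date = new Date()) {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `/backups/mongo-${stamp}.archive.gz`;
}

export default {
  name: 'mongoBackup',
  run: () => new Promise((res, rej) => {
    // --archive writes a single file; --gzip compresses it in-process.
    exec(`mongodump --uri="$MONGO_URL" --archive=${archivePath()} --gzip`,
      (e) => (e ? rej(e) : res('backup-complete')));
  })
};
```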
Storage
I push the archives to S3 via aws s3 cp inside the same task. ClawCloud containers already include AWS CLI 2; just inject AWS_ACCESS_KEY_ID & friends.
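For the upload step, a hypothetical helper that builds the aws s3 cp command appended to the backup task (the bucket name and postgres/ prefix are placeholders for whatever your layout is):

```javascript
// Build the S3 upload command run after pg_dump finishes.
export function s3UploadCommand(localPath, bucket = 'my-db-backups') {
  const key = localPath.split('/').pop(); // keep the dated filename as the key
  return `aws s3 cp ${localPath} s3://${bucket}/postgres/${key}`;
}
```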
Because backups happen in tasks rather than chat, the LLM never sees database credentials. That separation is worth the slight boilerplate.
Schema migration generation and review workflow
OpenClaw’s strength is synthesizing structured output. I leaned on that to draft migration SQL then hand it to a real migration tool.
Prompt template
// tools/draftMigration.js
export default {
  name: 'draftMigration',
  parameters: {
    type: 'object',
    properties: {
      description: { type: 'string' }
    },
    required: ['description']
  },
  run: async ({ description }) => {
    const ddl = await claw.llm.prompt(
      `You are a senior DB engineer. Generate a Postgres 15 migration in pure SQL that ${description}. Do not run it.`
    );
    return ddl;
  }
};
Example usage:
@clawbot migration: add index on users(email)
The agent replies with a fenced SQL block:
-- 2024-03-18_add_index_users_email.sql
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so there is no BEGIN/COMMIT wrapper here.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);
I save that file into migrations/ then run drizzle-kit or sqitch manually. GitHub Actions picks it up later.
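The save step is trivial; a hypothetical helper that produces the dated filename used in the example above:

```javascript
// Build the dated migration filename, e.g. 2024-03-18_add_index_users_email.sql
export function migrationFilename(slug, date = new Date()) {
  return `${date.toISOString().slice(0, 10)}_${slug}.sql`;
}

// usage: fs.writeFileSync(`migrations/${migrationFilename('add_index_users_email')}`, ddl)
```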
Could the agent commit directly? Sure, but I value human-in-the-loop. We kept shipping velocity high while avoiding Friday 5 pm migrations authored by GPT-4.
Guardrails: keeping DROP DATABASE out of production
This part took longer than the happy path. Three layers ended up working:
- Static denylist inside runQuery:
  - Reject queries matching /\bDROP\b|\bTRUNCATE\b|\bDELETE\b(?!.*WHERE)/i
  - 90% of foot-guns are killed here
- Interactive confirmation for risky verbs:
  - If a query includes DELETE, the bot responds "About to delete N rows. Reply YES"
  - We require the same user to confirm within 2 minutes
- Role-based credentials:
  - The Postgres user has REVOKE ALL ON DATABASE applied and only owns a dedicated schema
  - It can't accidentally drop prod tables it doesn't own
Yes, you can do fancier policy engines, but simple regex + RBAC stopped every self-inflicted wound in testing.
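The static layer is just that regex wrapped in a check. One assumption in this sketch: I add the s flag so the WHERE lookahead still works when a DELETE statement spans multiple lines.

```javascript
// Layer 1: reject obviously destructive SQL before it reaches pg.query().
// DELETE is allowed only when a WHERE clause follows somewhere in the text.
const DENY = /\bDROP\b|\bTRUNCATE\b|\bDELETE\b(?!.*WHERE)/is;

export function denylisted(sql) {
  return DENY.test(sql);
}
```

The confirmation and RBAC layers live outside this function; the regex only has to catch the cheap, obvious cases.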
Putting it together: end-to-end pipeline on ClawCloud
Timeline of an actual Tuesday morning:
- Designer asks in Slack: "How many beta sign-ups on iOS last 24 h?"
- @clawbot returns 124 in 8 seconds.
- Product wants a unique constraint on invitations.email to stop dupes: @clawbot migration: add unique constraint on invitations(email)
- Agent produces SQL; we push branch invite-uniqueness.
- GitHub Action applies the migration to staging, runs tests.
- After lunch, we merge. The ClawCloud daily backup executes at 03:00 UTC.
Not glamorous, just fewer hops.
What we still don’t automate (yet) and how you can help
Edge cases remain:
- Supabase’s RLS policies are JSON; having GPT-4 propose diffs is hit-or-miss.
- Mongo migrations are basically handwritten scripts—no good declarative format.
- I’d love a schema diff tool baked into OpenClaw instead of external migra.
If any of this scratches an itch, hop into the GitHub discussions. The agent framework is only 5k LOC; PRs land fast.
Next step: fork the snippets above, deploy a throw-away Postgres on Supabase, and see if you can get your first chat-based query under 20 minutes. When it works, lock down your roles before you brag in the #show-your-setup channel.