Your OpenClaw agent learns fast. After a few days it knows your project structure, your calendar habits, the Jira ticket that keeps coming back like a stray cat, and the internal slang only your team understands. Losing that context hurts. This guide shows exactly how to back up and restore your OpenClaw memory and config safely so an unexpected rm -rf, cloud outage, or laptop theft becomes an inconvenience, not a calamity.

Why OpenClaw Memory Matters and Where It Lives

OpenClaw stores long-term memory, tool credentials and runtime state on disk. By default (v0.27.4+) the daemon writes to:

  • ~/.openclaw/memory.sqlite – vector + key/value store
  • ~/.openclaw/config/agent.json – top-level agent settings
  • ~/.openclaw/config/.env – environment secrets (tokens, webhooks)
  • ~/.openclaw/skills/* – JavaScript/TypeScript skill modules you added or generated

The Gateway UI uses the same paths (and XDG_DATA_HOME is respected if you set it). Container deployments usually mount /data instead.
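For backup scripts it helps to resolve the data directory once instead of hard-coding it. A minimal sketch, assuming the XDG subdirectory is named openclaw (the ~/.openclaw default comes from the list above):

```shell
#!/usr/bin/env bash
# Resolve the OpenClaw data directory: honor XDG_DATA_HOME when set,
# otherwise fall back to the ~/.openclaw default described above.
# (The "openclaw" subdirectory name under XDG_DATA_HOME is an assumption.)
ocl_data_dir() {
  if [ -n "${XDG_DATA_HOME:-}" ]; then
    printf '%s/openclaw\n' "$XDG_DATA_HOME"
  else
    printf '%s/.openclaw\n' "$HOME"
  fi
}

ocl_data_dir
```

Every later snippet in this guide can then reference `$(ocl_data_dir)` instead of a literal path.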

Identify Everything You Must Back Up

1. Memory Database

The SQLite file memory.sqlite grows as the agent absorbs content. It is append-heavy and safe to back up hot with SQLite's built-in .backup command, or with a plain file copy if you pause the daemon first.

2. Configuration Files

agent.json and .env define personality, key bindings and API secrets. Without them the agent boots as a stranger.

3. Skills Directory

Every skill is just a Node module, but if you generated code with the agent (“write me a GitHub triage skill”) you probably didn’t commit it anywhere. Save it.

4. Gateway UI Assets

Custom themes, snippets and uploads live under ~/.openclaw/gateway/. Optional but nice to keep.

5. Version Pin

Record the OpenClaw version (package.json or ocl -v) so you can restore with the same major build. Upgrades sometimes migrate the DB.
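Pinning is easy to script. This sketch pulls the version field out of a package.json with grep and sed so it works without jq (the file path is an example; if you installed globally, ocl -v is the simpler route):

```shell
#!/usr/bin/env bash
# Record the installed OpenClaw version alongside a backup.
# Extracts the "version" field from a package.json using grep + sed.
pin_version() {
  grep -o '"version"[[:space:]]*:[[:space:]]*"[^"]*"' "$1" |
    sed 's/.*"\([^"]*\)"$/\1/'
}

# Demo with a throwaway package.json:
tmp=$(mktemp)
printf '{ "name": "openclaw", "version": "0.27.4" }\n' > "$tmp"
pin_version "$tmp"   # prints: 0.27.4
rm -f "$tmp"
```

Write the result into a VERSION file inside the backup so a restore always knows which build to install.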

Option 1: Manual File-System Backup (the Quick Win)

If you’re in a hurry, stop the daemon, tar the directory, move it somewhere safe:

# stop the daemon so SQLite flushes
$ systemctl --user stop openclaw
# archive everything except node_modules caches
$ tar --exclude='node_modules' -czf openclaw-backup-$(date +%F).tgz ~/.openclaw
# restart
$ systemctl --user start openclaw

Pros: trivial, zero dependencies. Cons: you will forget to run it, and the archive contains plaintext secrets.
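Before trusting the archive at all, list its contents and confirm the exclude actually worked. A self-contained sketch with a throwaway directory standing in for ~/.openclaw:

```shell
#!/usr/bin/env bash
set -eu
# Build a fake state dir, archive it with the same --exclude,
# then verify node_modules never made it into the tarball.
work=$(mktemp -d)
mkdir -p "$work/.openclaw/skills/node_modules"
touch "$work/.openclaw/memory.sqlite" \
      "$work/.openclaw/skills/node_modules/junk.js"

tar --exclude='node_modules' -czf "$work/backup.tgz" -C "$work" .openclaw

# the listing should contain the DB but no node_modules entries
tar -tzf "$work/backup.tgz" | grep -q 'memory.sqlite'
if tar -tzf "$work/backup.tgz" | grep -q 'node_modules'; then
  echo "leak detected"; exit 1
fi
echo "exclude OK"
rm -rf "$work"
```

The same tar -tzf listing works against your real backup file.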

Option 2: Automating with a Shell Script + Cron

For lone-developer setups a daily cron plus GPG encryption is enough. Below is a tested script (ocl-backup.sh) for Linux and macOS:

#!/usr/bin/env bash
set -euo pipefail
set -x  # noisy logs

TARGET_DIR="$HOME/Backups/openclaw"
KEEP_DAYS=30

mkdir -p "$TARGET_DIR"
STAMP=$(date +"%Y-%m-%d_%H-%M")
ARCHIVE="$TARGET_DIR/ocl_$STAMP.tar"

# hot copy using sqlite .backup to avoid locking
# (use $HOME, not ~: tilde is not expanded inside the quoted dot-command)
sqlite3 "$HOME/.openclaw/memory.sqlite" ".backup $HOME/.openclaw/memory.sqlite.bak"

# create tar without node_modules noise
tar --exclude='node_modules' -cf "$ARCHIVE" -C "$HOME" .openclaw

# encrypt (loopback pinentry lets --passphrase-file work non-interactively)
gpg --batch --pinentry-mode loopback --passphrase-file "$HOME/.gpg-pass" -c "$ARCHIVE"
rm "$ARCHIVE"  # keep only the .gpg file

# prune old files
find "$TARGET_DIR" -name 'ocl_*.tar.gpg' -mtime +"$KEEP_DAYS" -delete

Then schedule it:

$ crontab -e
# m h dom mon dow  command
30 3 * * * /usr/local/bin/ocl-backup.sh >>$HOME/Backups/ocl.log 2>&1

Now you have 30 days of encrypted snapshots in ~/Backups/openclaw. Point the script at an external drive or S3 mounted via rclone for off-site redundancy.
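The retention line deserves a sanity check, since find's -mtime +N means "strictly more than N 24-hour periods old". A quick demo with back-dated files (GNU touch -d assumed):

```shell
#!/usr/bin/env bash
set -eu
# Demonstrate the pruning rule: files older than KEEP_DAYS are
# deleted, fresh ones survive. Uses GNU touch -d to back-date.
KEEP_DAYS=30
dir=$(mktemp -d)
touch "$dir/ocl_fresh.tar.gpg"
touch -d "40 days ago" "$dir/ocl_old.tar.gpg"

find "$dir" -name 'ocl_*.tar.gpg' -mtime +"$KEEP_DAYS" -delete

ls "$dir"   # only ocl_fresh.tar.gpg remains
rm -rf "$dir"
```

Run it once before pointing the real script at a directory you care about.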

Option 3: Version-Controlled Backups with Git + SOPS

Teams usually want change history and pull-request review. The pattern we use internally:

  1. Initialize a private Git repo, e.g., git@github.com:acme-inc/ocl-state.git.
  2. Commit everything except volatile binary data.
ocl-state/
├── agent.json
├── .env      # encrypted by SOPS
├── skills/
│   ├── deploy.js
│   └── oncall.js
└── memory/   # chunked per-week exports

Secrets are encrypted in-place with Mozilla SOPS so you can still git diff:

$ sops -e --in-place .env

Memory is exported in JSON batches using the built-in CLI (v0.26+):

$ ocl memory export --since="2024-05-01" --out memory/2024-05-first-week.json
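If you want the weekly chunking to be hands-off, the date arithmetic is easy to script. This sketch only prints the commands it would run, using the --since/--out flags from the example above (GNU date assumed):

```shell
#!/usr/bin/env bash
set -eu
# Print one export command per week, starting from a given date.
# Swap echo for the real call once the windows look right.
START="2024-05-01"
WEEKS=4
for i in $(seq 0 $((WEEKS - 1))); do
  since=$(date -d "$START +$((i * 7)) days" +%F)
  echo "ocl memory export --since=$since --out memory/$since.json"
done
```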

Add a CI job to push weekly. You now have reproducible state that survives both disk failure and ransomware because the repo lives on GitHub + another remote.
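The weekly push can live in CI. A hedged sketch as a GitHub Actions workflow (the repo layout matches the tree above; the runner label, bot identity, and ocl invocation are assumptions you would adapt):

```yaml
# .github/workflows/ocl-backup.yml -- sketch, adapt names and secrets
name: weekly-ocl-state
on:
  schedule:
    - cron: "0 4 * * 1"   # Mondays 04:00 UTC
jobs:
  export-and-push:
    runs-on: [self-hosted]   # must run where the agent's state lives
    steps:
      - uses: actions/checkout@v4
      - name: Export last week of memory
        run: |
          ocl memory export --since="$(date -d '7 days ago' +%F)" \
            --out "memory/$(date +%F).json"
      - name: Commit and push
        run: |
          git config user.name "ocl-backup-bot"
          git config user.email "bot@example.com"
          git add memory/
          git commit -m "weekly memory export" || echo "nothing new"
          git push
```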

Restoring Your Agent: Step-by-Step

Anxious Sunday night, laptop died, fresh machine arrives. Here’s the minimal path back:

  1. Install Node 22 (the current LTS). I use nvm install 22.
  2. npm i -g openclaw@0.27.4 to match the old version.
  3. Clone or copy your backup to ~/.openclaw.
  4. If you encrypted with GPG/SOPS, decrypt .env first:
$ gpg --decrypt ~/.openclaw/config/.env.gpg >~/.openclaw/config/.env

or

$ sops -d --in-place ~/.openclaw/config/.env
  5. Start the daemon: ocl daemon start.
  6. Open the Gateway: ocl gateway --port 4000 or visit your ClawCloud instance.

Within 30–60 seconds the agent re-hydrates memory and scheduled tasks resume. Verify with a quick prompt: “What was the last topic we discussed?” If the answer matches your recollection, you’re good.

Full Disaster Recovery Drill

Doing the drill once is worth ten wikis. I block one hour per quarter and run this checklist:

  1. Spin up a fresh Ubuntu 24.04 VM in Hetzner.
  2. Restore from yesterday’s backup.
  3. Run a subset of integration tests (Slack echo skill, GitHub PR triage).
  4. Destroy the VM.

Common pitfalls we’ve hit:

  • Forgetting to back up skills/.cache, which left us without the compiled TS output. Fix: rebuild on restore, or commit the source.
  • Encrypted .env but lost the private GPG key. Store keys in a hardware token or a second off-site vault.
  • A missing SQLite WAL file produced a "database disk image is malformed" error. Always pause the daemon or use .backup.
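After any restore, it is worth confirming the database is intact before the daemon touches it. A sketch using the sqlite3 CLI, with a throwaway database standing in for memory.sqlite:

```shell
#!/usr/bin/env bash
set -eu
# Run SQLite's built-in integrity check on a restored database.
# "ok" means the file is structurally sound.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE memory (k TEXT, v TEXT);
               INSERT INTO memory VALUES ('topic','backups');"

result=$(sqlite3 "$db" "PRAGMA integrity_check;")
echo "$result"   # prints: ok
rm -f "$db"
```

Anything other than "ok" means you restored a torn copy; go back to an older snapshot.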

Hard-Earned Lessons from Real Incidents

The community Slack has a running thread on backup horror stories. My top three takeaways:

  1. Cloud buckets are not a backup if versioning is off. One user's rclone sync silently overwrote months of snapshots with an empty archive.
  2. Encrypt before you transport. At least one person accidentally published OpenAI API keys to a public S3 bucket by skipping this step.
  3. Test restores on the same major OpenClaw version. 0.25→0.27 migrated the memory schema. Restoring an older DB on a newer binary silently failed until we ran ocl migrate.

I keep a one-liner taped to my monitor: “A backup you haven’t restored from is just a random blob of bytes.”

Next Step: Set Up Your First Automated Backup Today

Copy the cron script, tweak the paths, and commit the first encrypted snapshot before you forget. Future-you will thank present-you when the disk clicks or the cloud account gets locked. And if you hit a snag, drop a note in the GitHub discussion — several of us have scars and scripts to share.