If you search “How to build a reading list manager with OpenClaw”, you probably already have three half-finished side projects and a pile of unread PDFs. This post skips the pitch deck and walks through a working setup: track books and articles, sync with Goodreads and Kindle, generate daily reading suggestions, and capture notes & highlights — all driven by an OpenClaw agent you can run locally or on ClawCloud.

Prereqs, versions, and the boring bits you’ll regret skipping

Everything below was tested on:

  • Node v22.3.1
  • OpenClaw v1.11.0 (npm openclaw@latest)
  • PostgreSQL 15.4 for custom storage
  • Goodreads API (still v2, sadly)
  • Kindle export v1.28 (the clippings .txt trick)

Other databases work, but the queries change. The agent itself doesn’t care, thanks to the Composio integration layer.

High-level: what the agent actually does

OpenClaw agents are just Node processes with a persistent memory store and a socket to the world. We’ll wire up four capabilities:

  1. Collect — Add a book/article via chat (/add https://link) or API.
  2. Recommend — Every morning at 07:30 send one unread item that matches your history, plus a random wild-card (the StumbleUpon vibe).
  3. Sync — Nightly pull from Goodreads “Want to Read” and the Kindle My Clippings.txt.
  4. Annotate — Store notes & highlights pushed from Telegram or emailed to a secret alias.

The agent exposes itself on Slack and Telegram for convenience, but any of OpenClaw’s 800+ connectors works if you hate yourself less than I do.

Bootstrapping the project

Install and scaffold

With Node 22 the install is one line:

npm create openclaw@latest reading-list-agent

The scaffold asks which integrations you want. Pick at least:

  • Slack (incoming+outgoing)
  • Telegram
  • Composio

Everything lands in ./agent.config.ts. For readability I’ve trimmed tokens below:

export default {
  name: "reading-list",
  memory: {
    driver: "postgres",
    url: process.env.DATABASE_URL,
  },
  connectors: [
    {
      type: "slack",
      botToken: process.env.SLACK_BOT_TOKEN,
      appToken: process.env.SLACK_APP_TOKEN,
    },
    { type: "telegram", token: process.env.TELEGRAM_TOKEN },
    { type: "composio", apiKey: process.env.COMPOSIO_KEY },
  ],
};

Spin it up:

npm run dev

Point Slack’s Events URL at your ngrok tunnel, or at the ClawCloud tunnel if you’re on the hosted tier.

Create the memory schema

OpenClaw’s ORM is thin; anything odd you do in SQL is on you. Schema SQL:

create table reading_items (
  id serial primary key,
  -- unique so the nightly syncs below can upsert with "on conflict (title)"
  title text not null unique,
  source text check (source in ('manual','goodreads','kindle')),
  url text,
  status text check (status in ('unread','reading','finished')) default 'unread',
  added_at timestamptz default now(),
  metadata jsonb
);

create table highlights (
  id serial primary key,
  reading_id integer references reading_items(id) on delete cascade,
  text text not null,
  note text,
  created_at timestamptz default now()
);

Add two indices if you care about speed:

create index on reading_items(status);
create index on highlights(reading_id);

Implementing the /add command

Inside src/commands/add.ts:

import { defineCommand } from "openclaw";
import { db } from "../db";

export default defineCommand({
  name: "add",
  description: "Add a book/article URL or ISBN",
  run: async (ctx, args) => {
    if (!args.length) {
      return ctx.reply("Usage: /add <url or ISBN>");
    }
    const input = args.join(" ");
    // Super naive ISBN check
    const isISBN = /^(97(8|9))?\d{9}(\d|X)$/.test(input);
    const title = isISBN
      ? await fetchTitleFromISBN(input)
      : await fetchTitleFromURL(input);
    await db.query(
      `insert into reading_items (title, url, source) values ($1, $2, 'manual')`,
      [title, isISBN ? null : input]
    );
    ctx.reply(`Added: ${title}`);
  },
});

Yes, the title scraping is hand-wavy. For URLs I fetch the page and read the <title> tag; for ISBNs I hit Open Library. Works 80% of the time, which is 79% more than Goodreads manages.
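Neither helper ships with the scaffold, so here is a minimal sketch of both. The module path, the regex, and the response shape are my assumptions, not OpenClaw APIs; Open Library's ISBN endpoint does return JSON with a title field, but treat the rest as a starting point:

```typescript
// src/utils/titles.ts — hypothetical helpers, not part of OpenClaw.

// Naive regex over raw HTML; good enough for the 80% case.
export function extractTitle(html: string): string | null {
  const match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
  return match ? match[1].trim() : null;
}

export async function fetchTitleFromURL(url: string): Promise<string> {
  const res = await fetch(url);
  // Fall back to the raw URL if the page has no usable <title>.
  return extractTitle(await res.text()) ?? url;
}

export async function fetchTitleFromISBN(isbn: string): Promise<string> {
  // Open Library's ISBN endpoint returns JSON with a `title` field.
  const res = await fetch(`https://openlibrary.org/isbn/${isbn}.json`);
  const data = (await res.json()) as { title?: string };
  return data.title ?? isbn;
}
```

Swap in a real HTML parser if the regex offends you; for a side project, it rarely matters.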

Daily suggestions with OpenClaw schedules

OpenClaw’s scheduler hides some under-documented magic. In src/schedules.ts:

import { cron } from "openclaw";
import { db } from "./db";
import { sendMessage } from "./utils/send";

export const morningRecommendation = cron("30 7 * * *", async () => {
  // 1. Grab last 30 days of finished items to build a quick profile
  const { rows: recent } = await db.query(
    `select metadata->>'genre' as genre
       from reading_items
      where status = 'finished'
        and added_at > now() - interval '30 days'`
  );
  const topGenre = pickTopGenre(recent);

  // 2. Pick one unread in that genre
  const { rows: [match] } = await db.query(
    `select * from reading_items
      where status = 'unread' and metadata->>'genre' = $1
      limit 1`,
    [topGenre]
  );

  // 3. Wild card
  const { rows: [random] } = await db.query(
    `select * from reading_items where status = 'unread' order by random() limit 1`
  );

  await sendMessage(
    `Today's picks:\n• ${match?.title ?? '—'}\n• ${random?.title ?? '—'}\nHappy reading!`
  );
});
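pickTopGenre never gets defined above; a sketch of what I mean, assuming the rows look like the query result (a genre string or null per finished item). Tally the genres and return the most frequent, with ties going to whichever genre was seen first:

```typescript
// Hypothetical helper: pick the most common genre from recent reads.
export function pickTopGenre(rows: { genre: string | null }[]): string | null {
  const counts = new Map<string, number>();
  for (const { genre } of rows) {
    if (genre) counts.set(genre, (counts.get(genre) ?? 0) + 1);
  }
  let top: string | null = null;
  let best = 0;
  // Map iterates in insertion order, so ties resolve to the first genre seen.
  for (const [genre, n] of counts) {
    if (n > best) {
      top = genre;
      best = n;
    }
  }
  return top;
}
```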

I run cron in UTC; adjust if your mornings start later than mine.

Sync with Goodreads and Kindle

Goodreads

Goodreads still uses an OAuth 1 flow written in 2009. Composio wraps it, sparing your sanity.

// src/integrations/goodreads.ts
import { composio } from "openclaw";

export const syncGoodreads = composio.task({
  provider: "goodreads",
  name: "pull-want-to-read",
  run: async ({ db, api }) => {
    const shelf = await api.getShelf({ name: "to-read" });
    for (const item of shelf.books) {
      await db.query(
        `insert into reading_items (title, url, source, metadata)
         values ($1, $2, 'goodreads', $3)
         on conflict (title) do nothing`,
        [item.title, item.link, { author: item.author }]
      );
    }
  },
});

Wire it to a nightly cron:

cron("0 3 * * *", syncGoodreads)
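Goodreads throttles at roughly 1 request per second, and a big shelf will hit that fast. A tiny pacing helper (my own sketch, not a Composio feature) that you could wrap around any per-book follow-up calls:

```typescript
// Hypothetical helper: run an async fn over items sequentially,
// spacing calls to stay under a 1 req/s rate limit.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

export async function throttled<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  intervalMs = 1100 // a hair over 1s for safety
): Promise<R[]> {
  const results: R[] = [];
  for (const item of items) {
    results.push(await fn(item));
    await sleep(intervalMs);
  }
  return results;
}
```

Composio retries on 429s anyway, but pacing up front keeps the nightly sync from burning its retry budget.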

Kindle

No public API, so we parse My Clippings.txt. I push the file to an S3 bucket via a Hazel rule on macOS; the agent grabs it with signed URLs.

// src/integrations/kindle.ts
import { S3 } from "@aws-sdk/client-s3";
import { cron } from "openclaw";
import { db } from "../db";
import { parseClippings } from "./parse-clippings";

const s3 = new S3();

export const syncKindle = cron("15 3 * * *", async () => {
  const file = await s3.getObject({ Bucket: "kindle-dumps", Key: "clippings.txt" });
  const text = await file.Body!.transformToString();
  const items = parseClippings(text); // out of scope here

  for (const clip of items) {
    const { rows: [book] } = await db.query(
      `insert into reading_items (title, source, metadata)
       values ($1, 'kindle', $2)
       on conflict (title) do update set metadata = excluded.metadata
       returning id`,
      [clip.book, { author: clip.author }]
    );
    await db.query(
      `insert into highlights (reading_id, text, note) values ($1, $2, $3)`,
      [book.id, clip.highlight, clip.note]
    );
  }
});

If you’re on ClawCloud you can mount an S3 bucket as a volume instead, which is cleaner.
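parseClippings is glossed over above, so here is a sketch that handles the common layout: a title line with the author in parentheses, a metadata line, a blank line, then the highlight text, with entries separated by ten equals signs. Amazon shuffles the markers per firmware (see the gotchas section), so treat this as a starting point rather than a spec:

```typescript
// Hypothetical parser for the common My Clippings.txt layout.
export interface Clipping {
  book: string;
  author: string;
  highlight: string;
  note: string | null;
}

export function parseClippings(text: string): Clipping[] {
  const out: Clipping[] = [];
  for (const entry of text.split("==========")) {
    const lines = entry
      .split(/\r?\n/)
      .map((l) => l.trim())
      .filter(Boolean);
    // Need at least: title line, metadata line, highlight body.
    if (lines.length < 3) continue;
    // Title line looks like: "Dune (Frank Herbert)"
    const header = lines[0].match(/^(.*)\(([^)]+)\)\s*$/);
    if (!header) continue;
    out.push({
      book: header[1].trim(),
      author: header[2].trim(),
      highlight: lines.slice(2).join("\n"),
      note: null, // notes show up as separate "Your Note" entries
    });
  }
  return out;
}
```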

Capturing notes from chat

I still read on my phone, so I want to jot a thought in Telegram, tag the book, and move on. Command syntax:

/note 42 "your text here"

(42 is the database id — not ideal but frictionless).

// src/commands/note.ts
import { defineCommand } from "openclaw";
import { db } from "../db";

export default defineCommand({
  name: "note",
  run: async (ctx, args) => {
    const [id, ...noteArr] = args;
    const note = noteArr.join(" ");
    if (!Number(id) || !note) {
      return ctx.reply("Usage: /note <id> <text>");
    }
    await db.query(
      `insert into highlights (reading_id, text) values ($1, $2)`,
      [Number(id), note]
    );
    ctx.reply("Noted ✍️");
  },
});

The emoji is mandatory; fight me.

Deploying to ClawCloud

If you don’t want to babysit Node processes, the hosted path is easier:

  1. Sign up — claw signup inside the repo triggers browser auth.
  2. Name the agent — “reading-list-prod”.
  3. Push — claw deploy --env DATABASE_URL=postgres://…

The buildpack installs Node 22, runs npm ci --production, and spawns the gateway/daemon pair. Logs stream via claw logs -f. Cron tasks are first-class; no OS-level cron access needed.

Cost note: the free tier sleeps after 12h of idle. If you want the 07:30 recommendation to land on time, bump to the Micro plan ($5/mo as of June 2024).

Things that tripped me up

  • Goodreads rate limit — 1 req/s. Batch or you’ll be throttled. Composio retries, but plan for it.
  • Kindle file format — Amazon changes the header markers per firmware. I keep a unit test with sample clippings from every device I can borrow.
  • Time zones — OpenClaw cron is UTC. ClawCloud instances inherit. Set your expectations or add math.
  • Memory growth — 10K highlights eat RAM if you return them all in a chat message. Paginate.
  • Security — The Slack slash command token leaks in logs if you console.log(ctx). Scrub.
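On the time-zone bullet: if you would rather encode the shift once instead of doing mental math at deploy time, here is a tiny fixed-offset helper (my own sketch; it deliberately ignores DST, which is the part that will still bite you twice a year):

```typescript
// Shift a local "minute hour" pair into UTC for a daily cron expression.
// offsetHours is your zone's UTC offset (e.g. +2 for CEST, -5 for EST).
// Fixed offset only — no DST handling.
export function localToUtcCron(minute: number, hour: number, offsetHours: number): string {
  const utcHour = (((hour - offsetHours) % 24) + 24) % 24;
  return `${minute} ${utcHour} * * *`;
}
```

So a 07:30 local recommendation in CEST becomes cron("30 5 * * *", ...).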

Next step: surface data where you need it

This baseline agent already replaces three separate apps for me. The obvious next upgrades:

  • Expose a lightweight REST endpoint so Obsidian or Logseq can embed the same dataset.
  • Add a “send to Kindle” action using Amazon’s undocumented email push.
  • Pipe finished books to a public “Books I Read” page via Astro + edge-cached JSON.
  • Train a local embedding model and use OpenClaw’s vector store to recommend semantically similar articles instead of the naive genre match.

Until then, ship, read, repeat.