The worst part of a serious job hunt isn’t rejection—it’s the admin. Copying postings into a spreadsheet, tweaking fifteen versions of the same résumé, remembering who to follow up with next Tuesday. I wanted one chat window that could tell me: “Four new Rust jobs on Greenhouse. Your GitHub résumé sent. Follow up with Acme tomorrow.” OpenClaw got me 90% of the way there in a weekend. This post shows exactly how.
Why run a job search through OpenClaw instead of yet another SaaS?
Because I wanted something self-hosted, scriptable, and chat-native:
- Unified chat interface. I sit in Telegram all day anyway; an agent answering there means zero context switching.
- Composio’s 800+ integrations. Sheets, Notion, Gmail, Calendar—one API wrapper instead of five.
- Persistent memory. The agent remembers every application and status without me wiring up a database.
- Scheduled tasks. Cron-like reminders, but exposed in chat so I can snooze or reschedule.
- Browser + shell tools. For scraping the odd niche job board or running a local résumé build script.
Trade-offs: You maintain the infra (updates, keys), and the LLM prompt engineering is on you. Worth it for control and transparency.
Prereqs and accounts you’ll need
- Node.js 22+ (`node -v` should print `v22.*`)
- OpenClaw 3.4.2 (current at publish time)
- An OpenAI API key or self-hosted Llama.cpp endpoint
- Telegram bot token (or Slack/Discord token—adjust commands accordingly)
- Composio key with Gmail, Google Sheets, and Notion scopes
- GitHub repo for a version-controlled résumé (I use `pandoc` to render the PDF)
- Optional: Greenhouse/Lever/Snaphunt API creds—otherwise we’ll scrape
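Before wiring anything up, it’s worth failing fast on missing credentials. Here’s a tiny preflight helper I drop at the top of the daemon—the variable names are illustrative, so swap in whatever your deployment actually defines:

```javascript
// Preflight helper: returns the names of required secrets that are unset or blank.
// The env var names below are illustrative; adjust to your own setup.
const REQUIRED = ["OPENAI_API_KEY", "TELEGRAM_BOT_TOKEN", "COMPOSIO_API_KEY"];

function missingEnv(env, required = REQUIRED) {
  return required.filter((name) => !env[name] || String(env[name]).trim() === "");
}

// At daemon startup you can then bail early:
//   const missing = missingEnv(process.env);
//   if (missing.length) throw new Error(`Missing env vars: ${missing.join(", ")}`);
```

A five-line check like this saves a surprising amount of “why is the bot silent” debugging later.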
I’m running the agent on ClawCloud because spin-up takes 60 s and I don’t want port-forwarding headaches, but everything below works locally as well.
Bootstrapping the OpenClaw agent
If you’re fresh to OpenClaw, install the CLI:
npm install -g openclaw@3.4.2
Authenticate and launch a new agent named jobhound:
claw login --cloud # opens browser
claw init jobhound --llm openai --chat telegram
The CLI scaffolds two files:
- `gateway.json` – UI / channels
- `daemon.mjs` – long-running tasks, tool wiring

We’re editing `daemon.mjs` for the heavy lifting.
Minimal daemon.mjs
import { cron, browser, memory } from "openclaw/tools";
import { notion, gmail, sheets } from "@composio/sdk";
export default ({ onMessage }) => {
// Remember every application by URL as the primary key
const apps = memory.collection("applications");
onMessage(/list$/i, async () => {
return apps.all().map(a => `${a.company} — ${a.stage}`).join("\n");
});
cron("0 8 * * *", async () => {
const due = apps.where({ stage: "Waiting", followup: { $lte: Date.now() }});
if (due.length) {
return `You have ${due.length} follow-ups today.`;
}
});
};
We’ll expand this as we go.
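The `$lte` query in that cron is easy to get wrong (I initially flagged follow-ups a day early). If you want to sanity-check the logic without spinning up the memory tool, the same filter can be written as plain JS—the record shape (`stage`, `followup`) mirrors what the daemon stores, but is otherwise my assumption:

```javascript
// Plain-JS equivalent of apps.where({ stage: "Waiting", followup: { $lte: now } }).
// `followup` is a millisecond timestamp, matching what the daemon stores.
function dueFollowups(records, now = Date.now()) {
  return records.filter((r) => r.stage === "Waiting" && r.followup <= now);
}
```

Testing this as a pure function once saved me from a timezone bug that the full agent would have hidden until 8 a.m. the next day.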
Automated job board monitoring with OpenClaw
Most mainstream boards—Greenhouse, Lever, Workable—have public JSON endpoints buried behind the careers page. For the stubborn ones, headless Chrome scraping works.
Greenhouse example
async function fetchGreenhouse(companyId) {
const res = await fetch(`https://boards-api.greenhouse.io/v1/boards/${companyId}/jobs`);
const json = await res.json();
return json.jobs.map(j => ({
id: j.id,
title: j.title,
location: j.location.name,
url: j.absolute_url,
company: json.board_token
}));
}
Add a cron that diff-checks new postings and pings chat:
cron("*/30 * * * *", async () => {
  const jobs = await fetchGreenhouse("acme");
  const fresh = jobs.filter(job => !memory.get("jobs", job.id));
  for (const job of fresh) memory.set("jobs", job.id, job);
  // Report every new posting in one message, not just the first hit
  if (fresh.length) {
    return fresh
      .map(job => `New opening: ${job.title} @ ${job.company}\n${job.url}`)
      .join("\n\n");
  }
});
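The core of that cron is a set difference: postings we just fetched minus IDs we’ve already seen. Pulled out as a pure function (my own helper, not an OpenClaw API), it’s trivial to test:

```javascript
// Pure diff step: given the latest fetch and the IDs already stored in memory,
// return only the unseen postings. Mirrors the memory.get check in the cron.
function diffNewJobs(fetched, seenIds) {
  const seen = new Set(seenIds);
  return fetched.filter((job) => !seen.has(job.id));
}
```

Keeping the diff separate from the `memory.set` side effect also makes it easy to dry-run against a board before letting the agent ping you.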
Scraping a custom board with the built-in browser tool
import { browser } from "openclaw/tools";
async function scrapeNicheBoard() {
const page = await browser.newPage();
await page.goto("https://randomjobs.io/rust");
const links = await page.$$eval(".job a", as => as.map(a => ({
title: a.textContent.trim(),
url: a.href
})));
await page.close();
return links;
}
Yes, headless Chrome is heavier than a JSON call, but we get JavaScript-rendered pages and screenshots for free, which is handy when a board hides its listings behind client-side rendering or throws a captcha that needs a human eyeball.
Saving every application to a Notion database
Spreadsheet works, but two columns in Notion—Status and Next action—save me from endless filter fiddling. Composio’s Notion connector gives CRUD verbs that look like this:
const notionDb = notion.database("3e9c1ecb123f4a1bb");
async function logApplication(app) {
await notionDb.create({
"Job Title": app.title,
"Company": app.company,
"Status": "Applied",
"URL": app.url,
"Follow up": new Date(Date.now() + 7 * 24 * 60 * 60 * 1000) // +1 week
});
}
Whenever I send `applied <url>` in chat, the agent:
- Looks up the posting in memory (title/company already scraped)
- Commits it to Notion and memory
- Emails the tailored résumé (next section)
- Schedules a follow-up reminder
Chat command handler:
onMessage(/applied (.+)/i, async ({ match }) => {
const url = match[1];
const job = memory.find("jobs", j => j.url === url);
if (!job) return "I don’t know that job URL.";
await logApplication(job);
await sendResume(job);
return `Logged ${job.title}. Reminder set for next week.`;
});
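One gotcha with the handler above: the URL I paste from Telegram rarely matches the one scraped from the board byte-for-byte—trackers tack on `utm_*` params, and trailing slashes vary. A small normalizer (my own helper, using the standard `URL` API) makes the memory lookup forgiving:

```javascript
// Normalize job URLs before comparing: drop common tracking params,
// the hash fragment, and any trailing slash. Hypothetical helper.
function normalizeJobUrl(raw) {
  const u = new URL(raw);
  for (const key of [...u.searchParams.keys()]) {
    if (key.startsWith("utm_") || key === "ref" || key === "gh_src") {
      u.searchParams.delete(key);
    }
  }
  u.hash = "";
  let s = u.toString();
  if (s.endsWith("/")) s = s.slice(0, -1);
  return s;
}
```

Then the lookup becomes `memory.find("jobs", j => normalizeJobUrl(j.url) === normalizeJobUrl(url))` and “I don’t know that job URL” shows up far less often.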
Tailoring résumé and cover letter per application
I keep master data in a YAML file (`resume.yml`). A Makefile renders it to PDF via `pandoc`. The agent tweaks a few sections—skills, summary—based on the job description.
Prompt template
const prompt = ({ jd, resume }) => `You are a résumé editor.
Job description:\n${jd}\n
Current résumé YAML:\n${resume}\n
Return the updated YAML emphasising relevant experience.
`;
Then:
import fs from "node:fs";
import { shell } from "openclaw/tools";

// `openai` here is the LLM client configured during `claw init`;
// MY_NAME is read from the agent's config.
async function sendResume(job) {
const jd = await fetch(job.url).then(r => r.text());
const updatedYaml = await openai.complete({
model: "gpt-4o-preview",
prompt: prompt({ jd, resume: fs.readFileSync("resume.yml", "utf8") })
});
fs.writeFileSync("resume_temp.yml", updatedYaml);
await shell.exec("make pdf RESUME=resume_temp.yml OUTPUT=resume.pdf");
await gmail.send({
to: job.hr || "jobs@"+job.company+".com",
subject: `${job.title} — application from ${MY_NAME}`,
body: `Hi,\n\nAttached is my résumé tailored for ${job.title}.\n\nCheers,\n${MY_NAME}`,
attachments: ["resume.pdf"]
});
}
This is easily the slowest step (~25 s) but saves me half an hour per application.
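Part of that slowness is self-inflicted: `fetch(job.url)` returns the full HTML page, and shoving raw markup into the prompt burns tokens. A crude strip-tags pass trims it considerably—this is a sketch of my own, not a robust parser (a real pipeline would use something like cheerio):

```javascript
// Crude HTML-to-text pass to shrink the job description before prompting.
// Sketch only: regex stripping is fine for token savings, not for fidelity.
function htmlToText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop inline JS
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop CSS
    .replace(/<[^>]+>/g, " ")                    // strip remaining tags
    .replace(/\s+/g, " ")                        // collapse whitespace
    .trim();
}
```

In `sendResume`, swapping `jd` for `htmlToText(jd)` roughly halved my token bill on the chattier career pages.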
Community tip: if your résumé build chain depends on LaTeX, run the agent in a container with a slim TeX image; LaTeX on Alpine is a pain.
Follow-ups, reminders, and interview prep
Every application record has a Follow up timestamp. The daemon’s daily cron surfaces anything overdue and lets me snooze via inline chat buttons (Telegram’s InlineKeyboardMarkup).
Example reminder flow
cron("0 9 * * *", async () => {
const pending = await notionDb.query({ filter: {
property: "Status",
select: { equals: "Applied" }
}});
const due = pending.filter(p => new Date(p["Follow up"]).getTime() <= Date.now());
if (due.length === 0) return;
let msg = `🦅 Follow-ups due (${due.length}):\n`;
msg += due.map(d => `• ${d["Job Title"]} @ ${d["Company"]}`).join("\n");
return {
text: msg,
buttons: [
{ text: "Snooze 3d", callback_data: "snooze_3" },
{ text: "Done", callback_data: "mark_done" }
]
};
});
onCallback("snooze_3", ctx => {/* … */});
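The elided callback body boils down to two pure steps: parse the day count out of the `callback_data` payload, then push the follow-up timestamp forward. Here’s how I sketch those helpers (my naming, not an OpenClaw API):

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Parse callback_data like "snooze_3" into a day count (null if not a snooze).
function parseSnooze(callbackData) {
  const m = /^snooze_(\d+)$/.exec(callbackData);
  return m ? Number(m[1]) : null;
}

// New follow-up timestamp: snooze relative to now (not the stale due date),
// so an overdue reminder doesn't come due again immediately.
function snoozedFollowup(currentTs, days, now = Date.now()) {
  return Math.max(currentTs, now) + days * DAY_MS;
}
```

The actual handler just calls these, writes the new date back to Notion and memory, and edits the chat message to confirm.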
For interview prep, I reuse the same JD+résumé prompt but ask GPT to generate:
- Five behavioral questions
- Three system-design questions
- Suggested STAR-style answers using my project history from memory
It dumps a Markdown file into a `/prep` folder and returns a chat link (`browser.serveFile()`).
Chat-based daily workflow: what it looks like in practice
Monday 08:00 the agent writes:
Good morning.
2 new Rust backend roles on Greenhouse.
3 follow-ups due today.
11:00 interview prep for Orbital.
I reply `details` and get clickable links; `apply #1` or `apply #2` triggers résumé tailoring and the Notion update. Lunch break, two new messages land: “Callback from Orbital scheduled Wed 15:00. Added to calendar.” Everything stays inside Telegram; Notion is just the source of truth for when I need a kanban view.
Edge cases: captchas sometimes break scraping; I added a fallback that screenshots the board and drops the PNG in chat so I can manually copy the link. If the résumé generator throws LaTeX errors, the agent pushes the build log as a gist via Composio’s GitHub integration.
Next steps: shipping it to other job seekers
If you want to replicate this, fork the jobhound template repo (link below) and swap in your Notion DB ID and Telegram token. The entire flow costs me roughly $6/month in OpenAI tokens and a $2.99 ClawCloud container. Not bad for ditching my spreadsheet and feeling on top of 40+ applications.
Repo: https://github.com/yourname/jobhound-openclaw
Questions or better prompts? The GitHub Discussions tab is open and I hang out in #openclaw-users on Discord.
Happy hunting.