I wanted my OpenClaw agent to act like a personal social-media intern: skim my X timeline, pull interesting links into memory, draft posts that actually sound like me, then queue them for the perfect time. The docs cover Slack and Gmail pretty well, but Twitter/X only shows up in a few GitHub issues. This post fills the gap.
We’re going to build an OpenClaw skill that:
- Authenticates against the X API with OAuth 2.0
- Fetches bookmarks and timeline into the agent’s short-term memory
- Drafts and schedules posts based on your conversation with the agent
- Monitors engagement and surfaces it back as structured data
- Respects Twitter’s rate limits and Terms of Service
Everything below was tested with OpenClaw v0.37.1 (Node 22.5) and the twitter-api-v2 npm client v2.11.0.
Prerequisites & sanity checks
- Node 22+ (`node -v` should print ≥ 22.0.0)
- OpenClaw gateway and daemon running locally or in ClawCloud
- A Twitter/X developer account with Elevated access (the Basic free tier won’t let you post)
- OAuth 2.0 app set up with Read and Write permissions
- Familiarity with JavaScript/TypeScript (all examples use ESM)
If you’re deploying on ClawCloud you only need the gateway. The skill code lives in a private Git repo and ClawCloud’s build system installs dependencies automatically.
Bootstrapping an OpenClaw skill project
OpenClaw skills are just Node modules exporting a factory. The quickest way is the CLI generator:
npx openclaw@latest init-skill openclaw-x-skill
cd openclaw-x-skill
npm i twitter-api-v2@2.11.0 dayjs@1.11.10 dotenv@16.3.1
The template gives you:
- `index.ts` – entry point
- `manifest.json` – name, description, permissions
- Tests, lint config, etc.
Remove the boilerplate routes you don’t need; we’ll add four commands: readBookmarks, readTimeline, draftPost, and schedulePost.
Wiring up OAuth 2.0 with PKCE
Why PKCE? Because it works for desktop, web, and cloud deployments without needing client secrets in the browser. Twitter supports it and OpenClaw’s gateway can host the callback URL.
1. Create a callback endpoint in the skill
// index.ts
import { createSkill } from "openclaw";
import { TwitterApi } from "twitter-api-v2";
export default createSkill(({ router, storage, logger }) => {
// Step 1: redirect user to Twitter consent page
router.get("/auth/x", async (req, res) => {
const client = new TwitterApi({
clientId: process.env.X_CLIENT_ID!,
});
const { url, codeVerifier, state } = client.generateOAuth2AuthLink(
process.env.X_REDIRECT_URI!,
{ scope: ["tweet.read", "users.read", "offline.access", "tweet.write", "bookmark.read"] }
);
await storage.set("x:codeVerifier", codeVerifier);
await storage.set("x:state", state);
res.redirect(url);
});
// Step 2: handle callback
router.get("/auth/x/callback", async (req, res) => {
const { state, code } = req.query;
const savedState = await storage.get("x:state");
if (state !== savedState) return res.status(400).send("Invalid state");
const client = new TwitterApi({ clientId: process.env.X_CLIENT_ID! });
const { client: loggedClient, accessToken, refreshToken, expiresIn } = await client.loginWithOAuth2({
code: code as string,
codeVerifier: await storage.get("x:codeVerifier"),
redirectUri: process.env.X_REDIRECT_URI!,
});
await storage.set("x:accessToken", accessToken);
await storage.set("x:refreshToken", refreshToken);
await storage.set("x:expiresAt", Date.now() + expiresIn * 1000);
res.send("Twitter connected. You can close this tab.");
});
});
Commit that and push. In ClawCloud, expose /auth/x via the “Public Routes” toggle so you can hit it from your browser.
You only do this once per agent. Tokens are stored in the agent’s KV and refreshed automatically (see next section).
Helper: get a hydrated Twitter client whenever we need one
async function getTwitterClient(storage) {
let accessToken = await storage.get("x:accessToken");
let refreshToken = await storage.get("x:refreshToken");
let expiresAt = Number(await storage.get("x:expiresAt"));
const client = new TwitterApi({
clientId: process.env.X_CLIENT_ID!,
clientSecret: process.env.X_CLIENT_SECRET!, // only server side
});
if (Date.now() > expiresAt - 60_000) {
const {
client: refreshedClient,
accessToken: newAT,
refreshToken: newRT,
expiresIn,
} = await client.refreshOAuth2Token(refreshToken);
await storage.set("x:accessToken", newAT);
await storage.set("x:refreshToken", newRT);
await storage.set("x:expiresAt", Date.now() + expiresIn * 1000);
return refreshedClient;
}
return new TwitterApi(accessToken);
}
All subsequent commands call getTwitterClient(storage).
Command: readBookmarks
The community uses bookmarks as a lightweight “inbox” for content they want the agent to consider. Twitter exposes GET /2/users/:id/bookmarks in the v2 API, but note that only your 800 most recent bookmarks are retrievable, so treat it as a rolling window.
async function readBookmarks(ctx) {
const client = await getTwitterClient(ctx.storage);
const me = await client.v2.me();
const bookmarks = await client.v2.bookmarks(me.data.id, {
"tweet.fields": ["author_id", "created_at", "public_metrics", "entities"],
max_results: 100,
});
for await (const tweet of bookmarks) {
const text = tweet.text.replace(/\s+/g, " ").trim();
await ctx.memory.write(`BOOKMARK ${tweet.id}: ${text}`);
}
return `${bookmarks.tweets.length} bookmarks imported`;
}
Wire it in manifest.json:
{
"name": "OpenClaw X Skill",
"commands": [
{
"name": "readBookmarks",
"description": "Import latest bookmarks into memory"
}
]
}
After redeploy, tell the agent:
/skill readBookmarks
A typical run of 100 bookmarks writes roughly 2 KB to memory—well below the default 64 KB limit.
Command: readTimeline (with minimal token burn)
The X API quotas are weird: 15 requests / 15 min for timeline but only if you include expansions. I fetch five pages of 100 tweets, then cache IDs in persistent storage to avoid re-fetching.
async function readTimeline(ctx) {
const client = await getTwitterClient(ctx.storage);
const me = await client.v2.me();
const seen = new Set((await ctx.storage.get("x:seenTweets"))?.split(",") || []);
const newIds = [];
const iter = client.v2.homeTimeline({
max_results: 100,
"tweet.fields": ["author_id", "public_metrics"],
});
let count = 0;
for await (const tweet of iter) {
if (seen.has(tweet.id)) continue;
const score = tweet.public_metrics.like_count + tweet.public_metrics.retweet_count * 2;
if (score < 10) continue; // trivial filter
await ctx.memory.write(`TIMELINE ${tweet.id}: ${tweet.text}`);
newIds.push(tweet.id);
count++;
if (count >= 500) break; // safety brake against endless streams
}
await ctx.storage.set("x:seenTweets", [...seen, ...newIds].slice(-5000).join(","));
return `${count} timeline tweets ingested`;
}
You now have two ingestion commands. I trigger them on a cron schedule:
cron.schedule("0 8 * * *", () => agent.command("openclaw-x-skill", "readBookmarks"));
cron.schedule("0 9,15 * * *", () => agent.command("openclaw-x-skill", "readTimeline"));
That stays safely under Twitter’s daily cap and matches when Europe/US folks post.
Command: draftPost – leveraging conversation context
Here’s where OpenClaw shines. The community pattern is:
- Have a chat with the agent about some idea or article
- End with “draft an X post from our conversation”
- The skill prompts the LLM with memory + chat transcript and returns formatted text
You could do this in a pure prompt, but I like an explicit command so I can hook scheduling.
async function draftPost(ctx, { style = "casual", hashtags = [] }) {
const transcript = await ctx.chat.transcript();
  let prompt = `Write a Twitter post in ${style} style, first person, max 280 chars. Summarize the core idea from the following conversation:\n\n${transcript}`;
  if (hashtags.length) prompt += `\n\nInclude hashtags: ${hashtags.join(" ")}`;
const draft = await ctx.llm.complete({
model: "gpt-4o-mini",
prompt,
max_tokens: 120,
});
await ctx.memory.write(`DRAFT:${draft}`);
return draft;
}
You’ll notice we don’t post immediately. We want a human in the loop or at least a scheduling layer.
Command: schedulePost – queueing and posting
Scheduling is basically: store text + timestamp, then have a tick job that posts when due.
1. Persist the queue
async function schedulePost(ctx, { text, when }) {
  const parsed = dayjs(when);
  if (!text || !parsed.isValid()) throw new Error("text and a valid when are required");
  const ts = parsed.valueOf();
const queueRaw = (await ctx.storage.get("x:queue")) || "[]";
const queue = JSON.parse(queueRaw);
queue.push({ text, ts });
await ctx.storage.set("x:queue", JSON.stringify(queue));
return `Queued for ${dayjs(ts).toISOString()}`;
}
2. Posting worker
setInterval(async () => {
const queueRaw = (await storage.get("x:queue")) || "[]";
let queue = JSON.parse(queueRaw);
const now = Date.now();
const due = queue.filter(item => item.ts <= now);
if (!due.length) return;
const client = await getTwitterClient(storage);
for (const item of due) {
try {
await client.v2.tweet({ text: item.text });
logger.info(`Tweeted: ${item.text}`);
} catch (err) {
logger.error("Failed to tweet", err);
}
}
queue = queue.filter(item => item.ts > now);
await storage.set("x:queue", JSON.stringify(queue));
}, 60_000);
Under the hood we’re still burning one write call per tweet. If you have lots of content, consider batching and the “posts” sub-resource (but that’s Enterprise-only).
Monitoring engagement and closing the loop
I’m terrible at remembering to check replies. So I added a 30-min polling task that pulls metrics and writes a short report to the agent’s memory + Slack DM.
async function pollEngagement(ctx) {
const client = await getTwitterClient(ctx.storage);
const me = await client.v2.me();
const iter = client.v2.userTimeline(me.data.id, {
    exclude: ["retweets", "replies"],
"tweet.fields": ["public_metrics", "created_at"],
max_results: 50,
});
const summary = [];
for await (const tweet of iter) {
const m = tweet.public_metrics;
summary.push(`${tweet.id} | ♡${m.like_count} ↻${m.retweet_count} 💬${m.reply_count}`);
}
await ctx.memory.write(`ENGAGEMENT\n${summary.join("\n")}`);
await ctx.slack.post(process.env.SLACK_CHANNEL!, summary.join("\n"));
}
This is fairly chatty but the agent can now answer “How did yesterday’s tweets do?” without leaving my terminal.
Rate limits, ToS, and not getting shadow-banned
This wouldn’t be a real guide without caveats.
- Posting cadence: X’s automation rules effectively cap API-driven accounts at about 50 tweets per 24 hours. Stay well under that (I do 8/day).
- Duplicate content: Reposting the same text across multiple accounts from the same IP/API key is a violation. If you cross-post to LinkedIn, make it paraphrased.
- Read endpoints: The Elevated plan allows 15 requests / 15 min for timelines and 180 / 15 min for bookmarks. That’s why we batch.
- LLM hallucinations: The agent sometimes fabricates URLs. I run a `validateLinks` helper that `HEAD`s each URL before scheduling.
- Human oversight: I keep the scheduling queue visible in the gateway UI so I can nuke something before it goes live. Not optional.
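The `validateLinks` helper isn’t shown in the skill itself, so here’s roughly how I’d sketch it. `extractUrls` and the injectable `check` callback are my naming; a production version should also handle redirects and timeouts:

```typescript
// Pull URLs out of a draft and verify each one answers a HEAD request.
// The HTTP check is injectable so the pure extraction logic stays testable.
const URL_RE = /https?:\/\/[^\s)]+/g;

export function extractUrls(text: string): string[] {
  return text.match(URL_RE) ?? [];
}

export async function validateLinks(
  text: string,
  check: (url: string) => Promise<boolean> = async (url) => {
    const res = await fetch(url, { method: "HEAD" });
    return res.ok;
  },
): Promise<{ ok: boolean; broken: string[] }> {
  const urls = extractUrls(text);
  const results = await Promise.all(
    urls.map(async (u) => ({ u, ok: await check(u) })),
  );
  const broken = results.filter((r) => !r.ok).map((r) => r.u);
  return { ok: broken.length === 0, broken };
}
```

Call it right before `schedulePost` and refuse to queue anything with a non-empty `broken` list.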
If you exceed a limit you’ll get an HTTP 429. The twitter-api-v2 client parses the rate-limit headers and exposes them on its errors; store them and back off dynamically.
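twitter-api-v2 attaches a `rateLimit` object (`limit`, `remaining`, `reset`, with `reset` as a unix timestamp in seconds) to its `ApiResponseError`. A minimal back-off calculation under that assumption might look like this:

```typescript
// Shape of the rate-limit info twitter-api-v2 puts on ApiResponseError.
interface RateLimitInfo {
  remaining: number;
  reset: number; // unix timestamp in seconds
}

// How long to sleep before retrying. Falls back to a fixed delay when no
// rate-limit header was captured; adds 1 s of slack past the reset time.
export function backoffMs(
  rl: RateLimitInfo | undefined,
  nowMs: number = Date.now(),
  fallbackMs = 60_000,
): number {
  if (!rl) return fallbackMs;
  if (rl.remaining > 0) return 0; // budget left, no need to wait
  const waitMs = rl.reset * 1000 - nowMs;
  return Math.max(waitMs, 0) + 1_000;
}
```

In the posting worker, catch the 429, compute `backoffMs(err.rateLimit)`, and skip the queue until that delay has elapsed instead of hammering the endpoint every minute.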
Community pattern: LinkedIn + X dual posting from chat
A quick aside: many folks in #openclaw on Discord wire the same draftPost flow for LinkedIn. The only difference is max length (3,000 chars) and a friendlier tone. Inside the agent, you can abstract a SocialPost interface and choose the channel at schedule time:
await agent.command("openclaw-x-skill", "draftPost", { style: "professional", hashtags: ["#ai", "#opensource"] });
await agent.command("openclaw-li-skill", "schedulePost", { text: draft, when: "2024-07-03T15:00:00Z" });
Nice way to hit both networks without copy/paste.
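The `SocialPost` abstraction can be as small as a per-channel length limit plus a truncation rule. A sketch, with my own names for the channel table and helper:

```typescript
// Per-channel constraints for the dual-posting pattern: 280 chars for X,
// 3,000 for LinkedIn, each routed to its own OpenClaw skill.
interface SocialChannel {
  skill: string;     // which skill receives schedulePost
  maxLength: number;
  style: string;     // passed to draftPost
}

export const CHANNELS: Record<string, SocialChannel> = {
  x: { skill: "openclaw-x-skill", maxLength: 280, style: "casual" },
  linkedin: { skill: "openclaw-li-skill", maxLength: 3000, style: "professional" },
};

// Trim a draft to a channel's limit, cutting on a word boundary and
// appending an ellipsis so truncation is visible.
export function fitToChannel(text: string, channel: SocialChannel): string {
  if (text.length <= channel.maxLength) return text;
  const cut = text.slice(0, channel.maxLength - 1);
  const lastSpace = cut.lastIndexOf(" ");
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + "…";
}
```

At schedule time you look up the channel, call `fitToChannel`, and dispatch to `CHANNELS[name].skill` instead of hard-coding the skill name twice.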
Deploying to ClawCloud (optional but painless)
- Push your repo to GitHub/GitLab.
- In ClawCloud dashboard: Add Skill → Import from Git.
- Set `X_CLIENT_ID`, `X_CLIENT_SECRET`, and `X_REDIRECT_URI` as environment variables. The redirect can be `https://<agent-id>.claw.cloud/skills/openclaw-x-skill/auth/x/callback`.
- Click Deploy. Build + npm install takes ~30 s.
- Open `/skills/openclaw-x-skill/auth/x` once to connect the account.
That’s literally it. The hosted scheduler keeps running even if your laptop sleeps.
What we still need
- Better error surfaces — right now failures just dump to logs. Turning those into agent notifications would make debugging nicer.
- Thread support — the v2 endpoint takes an `in_reply_to_tweet_id`; someone PR this.
- Media uploads — possible via chunked uploads, but the skill doesn’t handle images yet.
If any of that scratches your itch, open an issue. I’ll review quickly; I want this skill solid before my next conference.
Next step: make the agent ask before posting
My personal TODO is a confirmation loop: when the agent drafts something, it should ping me on Telegram asking “Ship it?” Yes → schedule. No → retry. If you get that working first, send me the link — I owe you a coffee.
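If you want to prototype that loop, the core is just a gate between draft and schedule. The messenger is injectable so Telegram (or anything else) can plug in; `confirmAndSchedule` and its callback types are hypothetical names, not OpenClaw APIs:

```typescript
type Ask = (question: string) => Promise<boolean>; // e.g. Telegram "Ship it?" → yes/no
type Schedule = (text: string) => Promise<void>;
type Redraft = () => Promise<string>;

// Ask before posting; on "no", redraft and ask again, up to maxTries.
// Returns true if something was scheduled, false if we gave up.
export async function confirmAndSchedule(
  draft: string,
  ask: Ask,
  schedule: Schedule,
  redraft: Redraft,
  maxTries = 3,
): Promise<boolean> {
  let text = draft;
  for (let i = 0; i < maxTries; i++) {
    if (await ask(`Ship it?\n\n${text}`)) {
      await schedule(text);
      return true;
    }
    text = await redraft(); // declined: generate a fresh draft and re-ask
  }
  return false; // nothing was posted
}
```

The nice property is that `schedule` is only ever reachable through an explicit "yes", so a messenger outage fails safe: nothing gets posted.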
Happy automating.