the acp-first claude code bridge openclaw users need now
how to wire claude code into openclaw right now, where the cli fallback still fits, and why tmux + telegram belong above the core instead of in the middle
anthropic didn’t kill the path.
they just forced a cleaner split.
openclaw’s current docs now describe the anthropic side in three lanes.
api key gives you the clearest billing story and the least ambiguity for a long-lived gateway.
claude cli reuse on the same host is treated as sanctioned again, and openclaw now prefers that path when available.
legacy token profiles still work if you already have them, but they are no longer the path i’d center a fresh build on.
that changes the build.
a lot of people saw the billing change, saw claude code getting stronger, and reached for the wrong center. they started sketching tmux rigs, telegram relays, and terminal-first bridges.
i wouldn’t try to turn openclaw into claude code.
i’d keep openclaw in front.
i’d put claude code behind openclaw for repo work.
i’d use acp as the bridge.
that split is the part people keep blurring.
openclaw still owns the front door. telegram. discord. imessage. slack. the bindings, delivery, and memory conventions that make the whole system useful from any of those channels.
claude code should own repo work. shell work. file edits. test runs. codebase reasoning. long coding turns where claude code as the execution layer beats a generic agent loop.
acp is the structured link between those two jobs: the protocol that keeps a live, persistent session open between openclaw and claude code. instead of firing off one message and waiting, the session stays connected, returns structured output, and remembers where the task left off.
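to make “structured link” concrete, here is a rough sketch of one acp turn on the wire. acp is json-rpc over the agent’s stdio; the method and field names below follow the published agent client protocol spec, not anything openclaw-specific, so treat the exact shapes as an assumption rather than openclaw’s real traffic.

```shell
# illustrative only: two acp-style json-rpc messages a client writes to the
# agent's stdin. method and field names follow the public agent client
# protocol; openclaw's actual wire traffic may differ.
printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"session/new","params":{"cwd":"/srv/repo","mcpServers":[]}}'
printf '%s\n' '{"jsonrpc":"2.0","id":2,"method":"session/prompt","params":{"sessionId":"sess-1","prompt":[{"type":"text","text":"inspect the failing test in packages/api"}]}}'
```

the shape is the point: the session id persists across turns, so the agent keeps its working state instead of starting cold on every message, and replies come back as structured fields instead of terminal text.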
that is the center i’d build around today. tmux, telegram, and the cli fallback all have a place. none of them belong in the middle.
where each path fits
acp is the main path for this build.
openclaw’s acp docs are direct about that. if you want claude code, codex, gemini cli, or another external coding agent running through openclaw, acp is the intended route. fresh installs ship with acpx, openclaw’s built-in acp runtime, enabled by default. start with:
/acp doctor
/acp spawn claude
then keep working inside the bound conversation or thread.
the cli backend still matters.
openclaw has a cli backend: a lighter, text-only mode that runs models without the full persistent session layer. the docs call that runtime a fallback on purpose. conservative. fewer moving parts. that path still earns a place when auth is messy or when you need the smallest thing to debug. still the wrong center if the real goal is claude code doing bound repo work behind openclaw.
remote control solves a different problem.
one machine. one live claude code session. you walk away from the desk and want the same local session from your phone or browser. remote control does that cleanly. claude keeps running on your machine. your filesystem stays local. your local mcp servers stay local. the browser or phone becomes a window into the same session.
channels sit even higher.
channels are the event-push lane into a running claude code session. telegram, discord, and imessage are included in research preview. channels still need claude.ai login and do not work with console or api key auth. channels are the more native answer to event push than a raw tmux bridge. but channels are not the bridge between openclaw and claude code. they are an operator lane.
tmux plus telegram still has value.
you get scrollback. reattach. a visible host-side console. something familiar on your phone.
but a tmux-first build teaches the wrong habit. state gets inferred from terminal output. parsing gets brittle. ansi junk leaks into the loop. the terminal becomes a poor man’s protocol.
that’s the trap.
keep tmux above the stack. use telegram for phone-first operator access. don’t put either one in the middle and call that architecture.
the first operator i’d build this for
not a team trying to write a whitepaper.
not someone farming novelty.
a solo builder on a mac mini or linux box. openclaw already lives in telegram. there is one repo on the same host. they want to drop a scoped code task from a phone, let claude code do the repo work, and get back something reviewable instead of vague terminal chatter.
builder-heavy, operator-heavy, local or hybrid, impatient with fluff. they want tutorials, templates, and real workflows. setups that hold up.
so give that reader one loop worth picturing.
there is a telegram topic called build queue. openclaw sits in that topic. claude code gets bound into that topic through:
/acp spawn claude --bind here
the operator drops a task like this:
inspect the failing test in packages/api. keep the patch minimal. run the focused test only. return changed files, commands run, remaining risk, and next action.
claude code does the repo work. openclaw keeps the thread, the delivery, and the operator reach.
the result is not “done.”
the result is a reviewable patch summary, touched files, commands run, remaining risk, and the next move, all in the same thread where the work started.
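as a sketch, the reply worth expecting looks something like this. the field names mirror what the task above asked for, but every value is invented for illustration; this is not an openclaw output schema.

```shell
# illustrative result shape for the build-queue loop; all values are made up.
cat <<'EOF'
changed_files: packages/api/src/session.ts
commands_run: pnpm test --filter api -- session.spec.ts
remaining_risk: refresh path still untested
next_action: run the full api suite before merging
EOF
```

four fields, one message, same thread. that is what “reviewable instead of terminal chatter” means in practice.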
what the first ten minutes look like
install openclaw and claude code on the same host.
run:
claude auth login
on that host.
if you want openclaw to reuse that login for anthropic-backed runs, run:
openclaw models auth login --provider anthropic --method cli --set-default
this build ships a setup wizard that covers the same steps.