OpenClaw

the layer openclaw builders should care about next

why minified json between agents matters more than another giant prompt

OpenClaw and Josh Davis
Mar 31, 2026

most people are still spending their best model on the wrong part of the stack.

the human asks in natural language.
one agent rewrites the request in natural language.
another agent summarizes it in natural language.
a third agent explains what it plans to do in natural language.
then someone has to read a wall of text and figure out what changed.

that is not orchestration.

that is expensive narration.

for openclaw builders, the stronger move is smaller.

natural language for the human.
structured state for the machine.
natural language again at the edge where a person needs to review, approve, or act.

that pattern fits openclaw unusually well because openclaw already behaves more like a control plane than a chat box. the official docs describe the gateway websocket protocol as the single control plane and node transport, with typebox schemas defining the protocol surface and driving runtime validation. your own source docs also frame openclaw around routing, control, repeatable workflows, and operator judgment, not generic ai chatter.

that matters because once the job is known, prose becomes waste.

if the planner already knows the task is “find late orders, score the risk, draft follow-up for the high-risk ones, hold for review,” the next worker does not need a motivational speech. it needs a compact object with task, inputs, constraints, and required output.

something like this:

{"task":"late_order_triage","goal":"identify this week's at-risk orders and prepare follow-up drafts","inputs":{"sources":["gsheet_factory_orders","salesforce_accounts","drive_vendor_threads"]},"constraints":{"ship_window":"7d","draft_for":"high_risk_only","review_required":true},"required_output":{"at_risk_orders":true,"reasons":true,"drafts":true},"route_next":"retrieve"}

that object is boring.

good.

boring is easier to diff.

boring is easier to validate.

boring is easier to retry.

boring is easier to route to a cheaper model.

boring is easier to turn into a repeatable workflow later.
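"easier to diff" is concrete, not a metaphor. here is a minimal sketch of what diffing boring state buys you; the `state_delta` helper and the field shapes are mine for illustration, not anything openclaw ships:

```python
import json

def state_delta(prev: dict, curr: dict) -> dict:
    """return only the fields that changed or appeared between two handoff states."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

# two boring minified states, one worker step apart
before = json.loads('{"task":"late_order_triage","found":12,"route_next":"score"}')
after = json.loads('{"task":"late_order_triage","found":12,"at_risk":["po184"],"route_next":"draft"}')

print(state_delta(before, after))
# → {'at_risk': ['po184'], 'route_next': 'draft'}
```

a wall of narration has no equivalent of this: you cannot dict-diff a memo.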

openclaw already has pieces of this pattern. the docs show lobster running yaml and json workflows with args, steps, conditions, and approval fields. they also show schema-driven json extraction through llm-task, where you pass a prompt, an input, and a schema and get structured output back. that is a real clue about where serious builders should think next. not more narration between steps. more typed handoffs between steps.

this is also where the article gets more practical than theoretical.

say you run a messy business workflow across sheets, salesforce, docs, and vendor threads. that exact operator shape shows up constantly: builders and operators trying to turn ugly work into inspectable systems, where spreadsheet chaos, routing, cost control, approvals, and “what should stay manual” are the core lanes.

the old flow looks like this.

you ask the agent to find late orders.

the planner reads a huge context window.

the retriever writes a memo.

the scorer writes a memo about the memo.

the drafter writes a memo about the risk.

then you still have to read all of it and decide what matters.

the cleaner flow looks like this.

the human still speaks in normal language.

“find every order at risk of missing ship date this week, explain why, draft follow-up for the high-risk ones, and hold for approval.”

the compiler turns that into structured state.

{"task":"late_order_triage","goal":"identify this week's at-risk orders and prepare follow-up drafts","inputs":{"sources":["gsheet_factory_orders","salesforce_accounts","drive_vendor_threads"]},"constraints":{"ship_window":"7d","draft_for":"high_risk_only","review_required":true},"required_output":{"at_risk_orders":true,"reasons":true,"drafts":true},"route_next":"retrieve"}

the retrieval worker returns a delta, not a diary.

{"task":"late_order_triage","found":12,"at_risk":["po184","po211","po233"],"missing":["vendor_eta_po211"],"route_next":"score"}

the scoring worker does the same.

{"task":"late_order_triage","risk_scores":{"po184":"high","po211":"medium","po233":"high"},"reasons":{"po184":"supplier_delay","po211":"missing_eta","po233":"inventory_gap"},"route_next":"draft"}

the drafting worker stays narrow too.

{"task":"late_order_triage","drafts_ready":["po184","po233"],"hold_reason":"awaiting_human_review","route_next":"review_packet"}

only at the end do you turn it back into normal language for a human.

two high-risk orders need review.

one medium-risk order is blocked on missing vendor eta.

drafts are ready for po184 and po233.

review before send.
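that final translation step is small enough to be deterministic code, not another model call. a sketch, assuming the merged state shape from the worker deltas above (the function name and wording are mine):

```python
def render_review_packet(state: dict) -> str:
    """turn final structured state back into plain language at the human boundary."""
    scores = state.get("risk_scores", {})
    high = [po for po, lvl in scores.items() if lvl == "high"]
    medium = [po for po, lvl in scores.items() if lvl == "medium"]
    lines = [f"{len(high)} high-risk orders need review."]
    if medium and state.get("missing"):
        lines.append(f"{len(medium)} medium-risk order is blocked on {', '.join(state['missing'])}.")
    if state.get("drafts_ready"):
        lines.append("drafts are ready for " + " and ".join(state["drafts_ready"]) + ".")
    lines.append("review before send.")
    return "\n".join(lines)

final_state = {
    "risk_scores": {"po184": "high", "po211": "medium", "po233": "high"},
    "missing": ["vendor_eta_po211"],
    "drafts_ready": ["po184", "po233"],
}
print(render_review_packet(final_state))
```

the cheapest model you have, or no model at all, can own this layer.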

that is cheaper.

that is easier to inspect.

that is easier to test.

that is easier to move across model tiers.

that is easier to cache.

that is easier to route into lobster once the path stabilizes.

this is the part most people miss.

minified json is not the point.

transport discipline is the point.

json is only useful because it forces a better question:

what exactly does the next step need

not:

what is the prettiest way for one model to explain itself to another

that shift also makes review boundaries cleaner. when a worker returns structured state, the approval point becomes obvious. you review fields. you review flags. you review missing values. you review the route decision. you review the outbound draft at the end. that fits openclaw’s actual trust model better than blind autonomy. the official security docs are blunt that openclaw assumes one trusted operator boundary per gateway, not hostile multi-tenant isolation, and your own source docs keep pushing hardening, review design, and what should stay manual as permanent editorial lanes.
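"you review fields" can literally be a function. a hypothetical gate over the illustrative state objects in this article (the allowed routes and reason strings are assumptions, not openclaw behavior):

```python
ALLOWED_ROUTES = {"retrieve", "score", "draft", "review_packet"}

def review_reasons(state: dict) -> list:
    """collect concrete, field-level reasons a handoff should pause for a human."""
    reasons = []
    if state.get("review_required"):
        reasons.append("review_required flag is set")
    if state.get("missing"):
        reasons.append("missing inputs: " + ", ".join(state["missing"]))
    route = state.get("route_next")
    if route and route not in ALLOWED_ROUTES:
        reasons.append("unrecognized route: " + route)
    return reasons

state = {"review_required": True, "missing": ["vendor_eta_po211"], "route_next": "draft"}
print(review_reasons(state))
# → ['review_required flag is set', 'missing inputs: vendor_eta_po211']
```

an empty list means the handoff can flow; a non-empty list is your review packet, already itemized.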

there is a limit here, and it matters.

this pattern is not a product claim.

the json objects in this article are illustrative transport examples, not native documented openclaw websocket frames. the actual gateway protocol has its own request, response, and event framing. this article is about a design pattern builders should apply inside their workflows, not a claim that openclaw already ships these exact internal envelopes.

there is another limit too.

over-compression creates its own problems.

compress too early and you hide context loss.

compress too hard and you make debugging worse.

let schemas drift and workers start failing in quieter ways.

and none of this rescues a bad trust boundary, polluted memory, weak permissions, or a workflow that never got clear in the first place.

so the rule is narrower than “convert everything into minified json.”

use natural language where ambiguity is still high.

use structured state where the task is already known.

use deterministic steps where the path is stable.

keep human review where money, outbound messaging, code changes, account access, or destructive action are involved.
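the last rule is the one worth hard-coding. a deterministic sketch, with made-up tag names standing in for whatever action taxonomy your workflow uses:

```python
# categories from the rule above; these tag names are illustrative, not an openclaw api
SENSITIVE = {"money", "outbound_message", "code_change", "account_access", "destructive"}

def requires_human_review(action_tags: set) -> bool:
    """deterministic gate: any sensitive tag on a step forces review_required."""
    return bool(action_tags & SENSITIVE)

print(requires_human_review({"outbound_message"}))  # → True
print(requires_human_review({"read_only"}))         # → False
```

the point of making it a set intersection instead of a model judgment: the answer never drifts with the prompt.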

that is the move.

not more natural language everywhere.

not bigger prompts everywhere.

not frontier reasoning for every repeated step forever.

natural language at the human boundary.

structured state inside the system.

deterministic flow where the work has matured.

for openclaw builders, that is one of the cleanest ways to stop paying for narration and start paying for actual work.

starter asset

here is the first compiler prompt i’d test:

you are a transport compiler.

convert the user request into valid minified json for downstream workers.

rules:
- output json only
- include only fields needed for downstream execution
- remove filler, explanations, and restatement
- when a field is unknown, mark it missing instead of guessing
- set review_required to true for money, outbound messaging, code changes, account access, or destructive actions
- keep keys stable across runs
- prefer short enums and refs over repeated prose

schema:
{
  "task": "",
  "goal": "",
  "inputs": {},
  "constraints": {},
  "required_output": {},
  "review_required": false,
  "route_next": ""
}

and here is the simple test i’d run before building anything bigger:

pick one painful loop you already repeat.

lead triage.

supplier follow-up.

support intake.

meeting notes into action items.

research into a decision packet.

crm cleanup after a sales call.

then force the system through this rule:

the human speaks in normal language.

the compiler converts the request into structured state.

each internal worker receives only the fields needed for its step.

each worker returns only the delta it produced.

the final layer converts the result back into normal language for review.
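the five rules above collapse into one short loop. the stage functions here are stand-ins so the handoff shape is visible, not real openclaw workers:

```python
def run_loop(request: str, compiler, workers, renderer) -> str:
    """human text in -> structured state through workers -> human text out."""
    state = compiler(request)                 # natural language -> structured state
    while state.get("route_next"):
        step = workers[state.pop("route_next")]
        state.update(step(state))             # each worker returns only its delta
    return renderer(state)                    # structured state -> review packet

# tiny stand-in stages, just to show the shape of the handoffs
compiler = lambda req: {"task": "late_order_triage", "route_next": "retrieve"}
workers = {
    "retrieve": lambda s: {"at_risk": ["po184", "po233"], "route_next": "score"},
    "score": lambda s: {"risk_scores": {"po184": "high", "po233": "high"}, "route_next": ""},
}
renderer = lambda s: f"{len(s['at_risk'])} orders at risk; review before send."

print(run_loop("find every order at risk this week", compiler, workers, renderer))
# → 2 orders at risk; review before send.
```

if your real loop cannot be expressed in roughly this shape, the workflow probably is not clear yet, and no amount of json will fix that.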

that is where this gets real.

not when the agent sounds smarter.

when the handoffs get smaller.

paid gets you the exact build behind this post: deployable files, prompts, configs, install steps, hardening checklists, routing logic, and real workflows you’ll run, ship, or sell. free gives you the model. paid gives you the operator-grade assets.

© 2026 Josh Davis | substack.com/@joshdavis10x · Privacy ∙ Terms ∙ Collection notice