RAS Concepts

Deeper walkthrough of the Reason-able Action Space, reason(), and act() with real examples.

Think of the Reason-able Action Space (RAS) as the bounded space where action can keep moving without turning every local disturbance into another outer-loop turn.

This page walks through:

  • what the RAS does
  • the two runtime forms
  • how reason() and act() work together inside one action space

Control View

Re in Act is easiest to understand as a local control architecture.

  • The top-level reasoner defines the RAS and its constraints.
  • The RAS runtime keeps intermediate data local and runs deterministic control flow such as loops, retries, branches, and tool calls.
  • reason() plays the role of a local comparator and controller: its prompt carries the goal, local observations, relevant context, and governing constraints, then it compares local reality against that target and returns schema-bounded output that action can use immediately.
  • act() is the optional action interface that touches the outside world from inside the RAS.

This is an engineering analogy, not a claim that the runtime is literally a classical control block diagram. The point is that raw observations become useful only after local filtering and comparison.

If you want the deeper design rationale, see Control-Theoretic View.

In practical agent terms, this is why the RAS matters: it lets action keep working through local uncertainty instead of handing every small disturbance back to the main loop.
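As a rough sketch of that behavior, the loop below absorbs transient failures inside the RAS and only returns an escalation signal once local retries are exhausted. This is an illustration, not the spec's implementation: the `reason` and `act` parameters are stand-ins for the real interfaces, and the status values `transient` and `hard_failure` are hypothetical labels.

```python
def run_with_local_retries(act, reason, cmd, max_attempts=3):
    """Absorb local disturbances inside the RAS: retry a failing step
    a bounded number of times before escalating to the outer loop."""
    for attempt in range(1, max_attempts + 1):
        result = act("bash", cmd)
        verdict = reason(
            [
                "Goal: classify this run as success, transient failure, or hard failure.",
                f"Observation:\n{result}",
                "Constraints and rules: return one status only.",
            ],
            {"status": "success"},
        )
        status = verdict["data"]["status"]
        if status == "success":
            return {"status": "success", "attempts": attempt}
        if status == "hard_failure":
            break  # retrying locally will not help
    # Only now does anything leave the RAS.
    return {"status": "escalate", "attempts": attempt}
```

The outer loop sees one of two outcomes, success or escalate, rather than every intermediate failure.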

Two Runtime Forms

Code RAS

The agent writes scripts, typically in Python or TypeScript. reason() and act() are async interfaces available in scope.

The point is not the language. The point is that the action space gets deterministic, Turing-complete control flow while the model stays confined to bounded local judgments.

test_run = await act("bash", "npm test -- --reporter json")

focus = await reason(
    [
        "Goal: identify the highest-value retry step from this failed test run.",
        f"Observation:\n{test_run['content'][0]['text']}",
        "Relevant context: this is the latest CI run after the current branch changes.",
        "Constraints and rules: return only retry_cmd and a short reason grounded in the output.",
    ],
    {"retry_cmd": "", "reason": ""},
)

retry_run = await act("bash", focus["data"]["retry_cmd"])

decision = await reason(
    [
        "Goal: decide whether action should continue or escalate.",
        f"Observation:\n{retry_run['content'][0]['text']}",
        f"Relevant context: the retry step was selected because {focus['data']['reason']}",
        "Constraints and rules: return one action and one grounded reason.",
    ],
    {"action": "continue", "reason": ""},
)

if decision["data"]["action"] == "escalate":
    await act("notify", {"channel": "#build-failures", "message": decision["data"]["reason"]})
    print(decision["data"]["reason"])
else:
    await act("deploy", {"target": "production"})
    print("deployed")

Pattern: act() gathers local evidence → reason() compresses it into a bounded control signal → act() runs the next step → reason() decides whether the space should continue or escalate.

Exactly the same shape works in TypeScript/Node.js because the point is the orchestration pattern, not Python syntax.

const testRun = await act("bash", "npm test -- --reporter json");

const focus = await reason(
  [
    "Goal: identify the highest-value retry step from this failed test run.",
    `Observation:\n${testRun.content[0].text}`,
    "Relevant context: this is the latest CI run after the current branch changes.",
    "Constraints and rules: return only retry_cmd and a short reason grounded in the output.",
  ],
  { retry_cmd: "", reason: "" },
);

const retryRun = await act("bash", focus.data.retry_cmd);
const decision = await reason(
  [
    "Goal: decide whether action should continue or escalate.",
    `Observation:\n${retryRun.content[0].text}`,
    `Relevant context: the retry step was selected because ${focus.data.reason}`,
    "Constraints and rules: return one action and one grounded reason.",
  ],
  { action: "continue", reason: "" },
);

if (decision.data.action === "escalate") {
  await act("notify", { channel: "#build-failures", message: decision.data.reason });
  console.log(decision.data.reason);
} else {
  await act("deploy", { target: "production" });
  console.log("deployed");
}

The action space is the same even though the host syntax changes.

Bash RAS

The agent writes shell pipelines. reason and act are CLI commands available in PATH.

act --manual | \
  reason \
    --prompt "Goal: find the tools needed to collect the most relevant API and documentation context for this task." \
    --prompt - \
    --prompt "Constraints and rules: return only a JSON array of tool names. Prefer the smallest sufficient set." \
    --structure '["tool_name"]' | \
  jq -r '.data[]' | while read -r name; do
    act --manual "$name"
  done

Pattern: act() emits the local tool catalog to stdout, reason() selects the minimum sufficient subset, and shell keeps the deterministic pipe-and-loop control flow.

Unix pipelines are useful here because they are already a deterministic action fabric. reason() does not replace shell control flow; it gives shell one bounded semantic step inside that control flow.


reason(prompt, example_output)

reason() is the only required interface in the spec. It converts a natural-language request into structured JSON that validates against a schema inferred from example_output.

Key rules

  • Uses a fresh inference context — each call should include the needed local context explicitly, but the surrounding action state and orchestration still stay in the same RAS.
  • Include goal + observation + context + constraints — prompts should state the target objective, local observations/feedback, task-relevant context, and governing constraints or rules.
  • Retry on schema failure — validates the output against the schema inferred from example_output. Retries up to N times with the error as feedback before returning { "error": "..." }.
  • Bound output variety with schema — example_output constrains responses into an executable control channel.
  • Use a lightweight model when possible — reason() is an atomic local reasoning call, not a full top-level reasoning cycle.
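The retry-on-schema-failure rule can be sketched as a small validation loop. This is a hedged illustration of the idea, not the spec's implementation: `matches_schema`, `reason_with_retries`, and the `call_model` parameter are hypothetical names.

```python
import json

def matches_schema(value, example):
    """Check that value has the same shape as example_output:
    same dict keys and same element types, recursively."""
    if isinstance(example, dict):
        return (isinstance(value, dict)
                and set(value) == set(example)
                and all(matches_schema(value[k], example[k]) for k in example))
    if isinstance(example, list):
        # An example list describes the element shape; any length is fine.
        return (isinstance(value, list)
                and all(matches_schema(v, example[0]) for v in value))
    return isinstance(value, type(example))

def reason_with_retries(call_model, prompt, example_output, max_retries=3):
    """Call the model, validate against the example-derived schema,
    and feed the error back on failure before giving up."""
    feedback = ""
    for _ in range(max_retries):
        raw = call_model(prompt + feedback)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as e:
            feedback = f"\nPrevious output was not valid JSON: {e}"
            continue
        if matches_schema(data, example_output):
            return {"data": data}
        feedback = "\nPrevious output did not match the expected schema."
    return {"error": "schema validation failed after retries"}
```

The key property is that every failure path produces feedback for the next attempt, and the caller only ever sees valid data or a single error object.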

What reason() is for

reason() is not a second full agent. It is a local judgment step inside an orchestrable RAS. Its job is to compare goal against local reality, use the resulting deviation to decide what matters, and return bounded decisions that action can carry out deterministically under the control of the enclosing runtime.

reason() improves action quality by compressing noisy local observations and intermediate data into something small, explicit, and actionable.

Conceptually, reason() includes two tightly coupled steps:

  • a comparison step that assesses the relevant gap between the goal and the current local reality
  • a control step that turns that gap into a bounded output that the runtime can use immediately
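Because every reason() prompt on this page carries the same four sections (goal, observation, context, constraints), the assembly can be factored into a small helper. `build_prompt` is a hypothetical name for illustration, not part of the spec:

```python
def build_prompt(goal, observation, context=None, constraints=None):
    """Assemble the goal / observation / context / constraints
    sections that reason() prompts on this page follow."""
    parts = [f"Goal: {goal}", f"Observation:\n{observation}"]
    if context:
        parts.append(f"Relevant context: {context}")
    if constraints:
        parts.append(f"Constraints and rules: {constraints}")
    return parts
```

A call site then reads like the examples above, e.g. `await reason(build_prompt("classify this run", log_text, constraints="return one status only"), {"status": ""})`.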

Typical uses include:

  • classify success or failure from noisy logs
  • pick the next tool or branch from a constrained set
  • extract structured facts from unstructured local evidence
  • summarize only the signal that should leave the RAS

Example — local extraction

url = "https://example.com/release-notes"
response = await act("webfetch", {"url": url})
facts = await reason(
    "\n".join(
        [
            "Goal: extract the decision-relevant facts from this page.",
            f"Observation:\n{response['content'][0]['text']}",
            "Constraints and rules: ignore markup noise and return claims with evidence only.",
        ]
    ),
    {"claims": [{"fact": "", "evidence": ""}]},
)

print(facts["data"])

Example — local decision

decision = await reason(
    "Goal: decide whether action should retry or escalate. Observation: latest build retry failed. Constraints: return one action only.",
    {"action": "retry"},
)

print(decision["data"])

act(name, args) — Optional

act() is a convenience interface provided by the RAS for calling external tools. The spec does not require you to use it — you may execute actions any way you like.

If you want optional interfaces beyond the core reason() / act() contract, see Re in Act Extensions.

It becomes most useful when paired with reason() inside one action space:

  • act() gets or changes something in the outside world
  • reason() interprets the local result
  • act() performs the next bounded step

When to use act()

  • Calling MCP tools
  • Running bash commands
  • Fetching external resources
  • Any stateless external operation

Error handling

result = await act("websearch", {"query": "Re in Act spec"})
if result.get("isError"):
    print(result)
    return
# Safe to use result["content"][0]["text"]

Multi-step action pattern

The action layer gets stronger when related act() and reason() calls stay inside one RAS instead of getting split across multiple outer-loop turns.

search = await act("websearch", {"query": topic})
urls = (await reason(
  f"Goal: pick the two best sources. Observation: {search['content'][0]['text']}",
  ["url1", "url2"],
))["data"]

pages = [(await act("webfetch", {"url": u}))["content"][0]["text"] for u in urls]

brief = (await reason(
  f"Goal: synthesize a short brief. Observation: {pages}",
  {"brief": "", "open_questions": [""]},
))["data"]

print(brief)

That is the core Re in Act pattern: keep the local evidence, the local judgments, and the next actions inside the same RAS until the local job is actually done.