Agent Interface Extension

Optional `agent()` interface for delegated execution inside the Reason-able Action Space.

  • Extension identifier: ria.agent
  • Status: Draft
  • Core dependency: Re in Act core (reason() required; act() optional)

Intent

agent() is an optional extension interface for delegated execution inside a RAS.

For runtime connectivity, ACP is a recommended integration path for agent() calls.

It is designed for cases where a local action unit benefits from bounded delegation while keeping Re in Act's core principle unchanged: strengthen action without turning the core into a multi-agent topology.

Agent Harness Pattern

agent() is most useful when treated as part of the RAS harness.

In this model, the RAS is the harness.

For related engineering terminology around long-running agent harnesses, see Anthropic's Harness design for long-running apps.

  • agent(prompt, config) performs delegated work and returns { data: { text, trajectory } }
  • reason() checks text and trajectory, then decides the next control action
  • runtime code enforces deterministic boundaries such as max iterations, timeouts, and escalation

agent() is one delegated component inside the harness; the RAS keeps control explicit, bounded, and auditable.

Agent Orchestration Pattern

Besides harness-style local delegation, agent() can also be used for bounded agent orchestration.

Typical orchestration usage includes:

  • selecting a specialized backend agent for a subtask
  • routing work across multiple agent runtimes under one runtime policy
  • aggregating multiple delegated outputs before a local reason() decision

This remains extension-layer behavior inside the RAS, not a replacement for top-level reasoning in the core model.
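The orchestration bullets above can be sketched as a thin routing-and-aggregation layer. Everything here is illustrative, not part of the spec: the backend agent names, the keyword router, and the stub envelopes are assumptions; a real runtime would bind the backends to actual agent() endpoints (for example via ACP).

```python
import asyncio

# Hypothetical backend agents returning the agent() success envelope.
async def code_agent(prompt, config=None):
    return {"data": {"text": f"[code] {prompt}", "trajectory": {"steps": []}}}

async def docs_agent(prompt, config=None):
    return {"data": {"text": f"[docs] {prompt}", "trajectory": {"steps": []}}}

BACKENDS = {"code": code_agent, "docs": docs_agent}

def select_backend(subtask: str):
    # Select a specialized backend agent for the subtask (toy heuristic).
    return BACKENDS["docs" if "document" in subtask else "code"]

async def orchestrate(subtasks, policy):
    # Route each subtask under one runtime policy, then aggregate the
    # delegated outputs before a single local reason() decision.
    results = [await select_backend(t)(t, policy) for t in subtasks]
    return [r["data"]["text"] for r in results]

outputs = asyncio.run(orchestrate(
    ["patch the failing test", "document the new flag"],
    {"budget": {"max_steps": 10}},
))
```

The aggregated outputs would then feed one reason() call, keeping the final decision local to the RAS rather than distributed across backends.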

Conceptual Boundary

The extension SHOULD be interpreted as:

  • a delegated action primitive under runtime control
  • bounded by explicit schema, resource limits, and return conditions
  • local to the current RAS execution context

The extension SHOULD NOT be interpreted as:

  • a new required core interface
  • an unbounded planner with implicit authority over outer-loop policy
  • a mechanism that bypasses reason() contracts or sandbox policy

Proposed Signature

agent(prompt, config?) -> { data: { text, trajectory } } | { error }

trajectory follows ATIF (Agent Trajectory Interchange Format).
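As a shape reference, an ATIF-style trajectory root might look like the following sketch. The root field names (schema_version, session_id, steps, final_metrics) match those used in the Minimal Example; the per-step fields and concrete values are assumptions.

```python
# Sketch of an ATIF-shaped trajectory root. Step entries and values
# are illustrative assumptions, not a normative ATIF definition.
trajectory = {
    "schema_version": "1.0",
    "session_id": "sess-0001",
    "steps": [
        {"action": "run_tests", "observation": "2 failing"},
        {"action": "apply_patch", "observation": "all passing"},
    ],
    "final_metrics": {"step_count": 2},
}
```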

Suggested config fields:

  • budget: max steps/time/tokens/cost
  • policy: tool and network policy profile
  • model: optional model selection hint
  • on_error: fail | return_error | retry_within_budget
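A config literal using the suggested fields could look like the sketch below. The budget sub-keys and the policy/model identifiers are placeholders, not defined names.

```python
# Illustrative agent() config. The budget sub-key names and the
# "sandboxed-default" / "fast-coder" identifiers are assumptions.
config = {
    "budget": {
        "max_steps": 40,        # max steps
        "max_minutes": 30,      # max time
        "max_tokens": 200_000,  # max tokens
        "max_cost_usd": 5.0,    # max cost
    },
    "policy": "sandboxed-default",  # tool and network policy profile
    "model": "fast-coder",          # optional model selection hint
    "on_error": "return_error",     # fail | return_error | retry_within_budget
}
```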

Contract

  1. Deterministic envelope, non-deterministic inner work: The runtime enforces budgets, policy, and stop conditions. The delegated worker may perform non-deterministic reasoning within those bounds.

  2. Structured success envelope: On success, agent() returns { data: { text, trajectory } }. trajectory MUST follow ATIF. Validation and normalization happen in a separate reason() step, and trajectory can be used as an additional verification signal.

  3. Explicit failure semantics: On policy violation, budget exhaustion, or runtime failure, return structured { error }.

  4. Local traceability: The runtime SHOULD emit auditable trace metadata (start/end reason, budget usage, termination reason).
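Contract items 1, 3, and 4 can be sketched as a runtime envelope wrapped around a delegated worker. The worker, the trace field names, and the error codes below are illustrative assumptions, not normative.

```python
import time

def demo_worker():
    # Hypothetical delegated worker: yields once per inner step.
    yield "reproduce failure"
    yield "apply fix"

def run_with_envelope(worker, budget):
    # Deterministic envelope: enforce the step budget, return a structured
    # { error } on exhaustion or failure, and emit trace metadata either way.
    started = time.monotonic()
    steps = 0
    try:
        for _ in worker():
            steps += 1
            if steps > budget["max_steps"]:
                trace = {"steps": steps, "seconds": time.monotonic() - started,
                         "termination": "budget_exhausted"}
                return {"error": {"code": "budget_exhausted", "trace": trace}}
        trace = {"steps": steps, "seconds": time.monotonic() - started,
                 "termination": "completed"}
        return {"data": {"text": f"done in {steps} steps",
                         "trajectory": {"steps": []}},
                "trace": trace}
    except Exception as exc:
        trace = {"steps": steps, "termination": "runtime_failure"}
        return {"error": {"code": "runtime_failure", "detail": str(exc),
                          "trace": trace}}
```

The same envelope shape applies regardless of outcome, so reason() can branch on the presence of data versus error without inspecting worker internals.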

Minimal Example

max_iterations = 3
iteration = 0

while iteration < max_iterations:
    iteration += 1
    result = await agent(
        f"Goal: reproduce and patch failing checkout flow. Attempt {iteration}/{max_iterations}. Return what changed and test results.",
        {
            "budget": {"max_steps": 40, "max_minutes": 30},
            "on_error": "return_error",
        },
    )

    if isinstance(result, dict) and result.get("error"):
        verdict = await reason(
            [
                "Goal: decide whether to retry delegation or escalate.",
                f"Observation: {result['error']}",
                f"Relevant context: attempt {iteration} of {max_iterations}.",
                "Constraints and rules: return retry or escalate with a short reason.",
            ],
            {"action": "retry", "reason": ""},
        )
        if verdict["data"]["action"] == "retry" and iteration < max_iterations:
            continue
        print({"status": "escalate", "reason": verdict["data"]["reason"]})
        break

    # Keep trajectory verification local and compact before passing to reason().
    data = result.get("data", {})
    traj = data.get("trajectory", {})
    if isinstance(traj, dict):
        # ATIF trajectory root uses schema_version/session_id/steps.
        steps = traj.get("steps", [])
        trajectory_view = {
            "format": "ATIF",
            "schema_version": traj.get("schema_version"),
            "session_id": traj.get("session_id"),
            "step_count": len(steps),
            "steps_tail": steps[-8:],
            "final_metrics": traj.get("final_metrics"),
        }
    elif isinstance(traj, list):
        # Fallback for runtimes that pass ATIF steps directly.
        trajectory_view = {
            "format": "ATIF-steps-only",
            "step_count": len(traj),
            "steps_tail": traj[-8:],
        }
    else:
        trajectory_view = {"raw_tail": str(traj)[-4000:]}

    checked = await reason(
        [
            "Goal: verify and structure this delegated result.",
            f"Observation text:\n{data.get('text', '')}",
            f"Observation trajectory view:\n{trajectory_view}",
            f"Relevant context: attempt {iteration} of {max_iterations}.",
            "Constraints and rules: validate consistency using text plus the compact trajectory view; return continue, done, or escalate plus grounded reason and normalized summary.",
        ],
        {
            "action": "continue",
            "reason": "",
            "summary": "",
            "files_changed": [""],
            "tests": [{"name": "", "passed": True}],
        },
    )

    action = checked["data"]["action"]
    if action == "done":
        print(checked["data"])
        break
    if action == "continue" and iteration < max_iterations:
        continue

    print({"status": "escalate", "reason": checked["data"]["reason"]})
    break

Relationship to Core Re in Act

  • reason() remains the only required interface.
  • act() remains optional.
  • agent() is extension-only and optional.

This keeps the base model stable while allowing extension-level evolution when delegation improves task reliability and throughput.