# Runs Client

Create and manage agent runs with `client.runs`.

The runs client manages agent execution contexts: creating runs, uploading spans, and marking completion.
## Methods
### list(agent_id)

List all runs for an agent.

```python
runs = await client.runs.list(agent_id="your-agent-id")
for run in runs:
    print(f"[{run.status}] {run.trigger} - {run.overall_score}")
```

Returns: `list[RunSummary]`
### create(agent_id, trigger)

Create a new run.

```python
run = await client.runs.create(
    agent_id="your-agent-id",
    trigger="manual",
)
print(f"Run ID: {run.id}")
```

Parameters:

| Name | Type | Description |
|---|---|---|
| `agent_id` | `str` | The agent to create the run for |
| `trigger` | `str` | `"manual"`, `"automatic"`, or `"test"` |

Returns: `RunSummary`
### get(agent_id, run_id)

Get details of a specific run.

```python
run = await client.runs.get(
    agent_id="your-agent-id",
    run_id="run-id",
)
```

Returns: `RunSummary`
### complete(agent_id, run_id)

Mark a run as completed. Triggers automatic evaluation if the cooldown period has passed.

```python
run = await client.runs.complete(
    agent_id="your-agent-id",
    run_id="run-id",
)
print(f"Status: {run.status}")  # "completed"
```

Returns: `RunSummary`
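A run stays in `"running"` until `complete()` is called, so a process watching runs started elsewhere may need to poll `get()`. A minimal polling helper, sketched here as an assumption rather than part of the SDK (the `fetch` callable would wrap something like `client.runs.get(agent_id=..., run_id=...)`):

```python
import asyncio

async def wait_until_finished(fetch, interval=1.0, timeout=60.0):
    """Poll `fetch` until the run's status is no longer "running".

    `fetch` is any coroutine function returning an object with a
    `status` attribute; this helper is illustrative, not an SDK method.
    """
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while True:
        run = await fetch()
        if run.status != "running":
            return run
        if loop.time() >= deadline:
            raise TimeoutError("run did not finish within the timeout")
        await asyncio.sleep(interval)
```

Choose `interval` to balance responsiveness against request volume; each iteration costs one API call.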
### spans(agent_id, run_id)

Get the trace spans for a run.

```python
spans = await client.runs.spans(
    agent_id="your-agent-id",
    run_id="run-id",
)
for span in spans:
    print(f"{span.node_name} ({span.span_kind}): {span.latency_ms}ms")
```

Returns: list of span objects
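Span objects expose at least `node_name`, `span_kind`, and `latency_ms` (the fields used above); the full schema is not shown here. Assuming that shape, a sketch of summarizing total latency per node — `Span` below is a stand-in dataclass for illustration, not the SDK's type:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Span:
    # Stand-in for the span objects returned by client.runs.spans();
    # only the fields used in this example are modeled.
    node_name: str
    span_kind: str
    latency_ms: float

spans = [
    Span("retrieve", "TOOL", 120.0),
    Span("generate", "LLM", 850.0),
    Span("retrieve", "TOOL", 95.0),
]

# Total latency per node, slowest node first.
totals: dict[str, float] = defaultdict(float)
for span in spans:
    totals[span.node_name] += span.latency_ms
for node, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {total:.0f}ms")
```

Grouping by `node_name` makes repeated calls to the same node (like the two `retrieve` spans here) show up as one aggregate line, which is usually what you want when hunting for the slowest step.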
## Data Model

### RunSummary

```python
@dataclass
class RunSummary:
    id: str
    agent_id: str
    status: str  # "running", "completed", "failed"
    trigger: str | None  # "manual", "automatic", "test"
    overall_score: float | None
    created_at: str | None
    completed_at: str | None
```

## Next Steps
- Tracing - instrument functions to record spans
- Runs (Concept) - run lifecycle details