# Runs
Manage agent execution contexts with runs.
A run represents a single execution of your agent - one user request handled end-to-end. Runs group related trace spans together so Kate can evaluate them as a unit.
## Creating a Run
```python
from projectkate import KateClient

async with KateClient(api_key="your-api-key") as client:
    run = await client.runs.create(
        agent_id="your-agent-id",
        trigger="manual",  # or "automatic", "test"
    )
    print(f"Run started: {run.id}")
```

## Completing a Run
After your agent finishes processing, mark the run as complete. This triggers evaluation.
```python
await client.runs.complete(
    agent_id="your-agent-id",
    run_id=run.id,
)
```

## Full Workflow
```python
async with KateClient(api_key="your-api-key") as client:
    # 1. Create the run
    run = await client.runs.create(
        agent_id=agent.id,
        trigger="manual",
    )

    # 2. Execute your agent (traces are captured via @projectkate.trace)
    result = await my_agent.handle_request("user input here")

    # 3. Complete the run - triggers evaluation
    await client.runs.complete(
        agent_id=agent.id,
        run_id=run.id,
    )
```

## Trigger Types
| Trigger | When to Use |
|---|---|
| `"manual"` | You're explicitly running the agent for testing |
| `"automatic"` | The agent is handling a real user request |
| `"test"` | Running as part of a test suite |
## Listing Runs
```python
runs = await client.runs.list(agent_id="your-agent-id")
for run in runs:
    print(f"[{run.status}] {run.trigger} - score: {run.overall_score}")
```

## Viewing Spans
Get the trace spans for a specific run:
```python
spans = await client.runs.spans(
    agent_id="your-agent-id",
    run_id="run-id",
)
for span in spans:
    print(f"  {span.node_name}: {span.latency_ms}ms")
```

## Polling Run Status
For runs kicked off asynchronously, poll until completion:
```python
import projectkate

result = await projectkate.poll_run_status(
    run_id="run-id",
    interval_seconds=2.0,
    timeout_seconds=300.0,
)
```

## Next Steps
- Tracing - instrument functions with `@projectkate.trace()`
- Traces & Evals (Dashboard) - view runs in the dashboard