Tools & Tool Loop
SDK reference for tool discovery, execution, and the agentic tool loop.
The Kate SDK provides functions for discovering marketplace tools, executing them, and running an agentic loop where your LLM automatically decides when to call tools.
tool_loop()
The main entry point for agentic tool use. Runs a loop: send messages to the LLM, execute any tool calls it makes, feed results back, repeat until the LLM responds with text (no tool calls).
```python
import projectkate
from openai import AsyncOpenAI

result = await projectkate.tool_loop(
    AsyncOpenAI(),
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Tokyo?"},
    ],
    max_rounds=10,
)
```

Parameters:
| Name | Type | Default | Description |
|---|---|---|---|
| client | AsyncOpenAI \| AsyncAnthropic | required | LLM client instance |
| model | str | required | Model identifier (e.g., "gpt-4o", "claude-sonnet-4-20250514") |
| messages | list[dict] | required | Initial conversation messages |
| tools | list[dict] \| None | None | Tool definitions in OpenAI format. None = auto-fetch from Kate |
| local_tools | list[LocalTool] \| None | None | Local tool definitions to merge with marketplace tools |
| max_rounds | int | 10 | Maximum LLM round-trips before forced stop |
| on_tool_call | Callable \| None | None | Callback (name, args, result_str) invoked after each tool call |
| **llm_kwargs | — | — | Extra kwargs passed to the LLM client (e.g., temperature, max_tokens) |
Returns: ToolLoopResult
Supported providers: OpenAI (AsyncOpenAI) and Anthropic (AsyncAnthropic). The SDK auto-detects the provider from the client type.
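The on_tool_call parameter is handy for logging or progress reporting. A minimal sketch of such a callback, assuming the (name, args, result_str) signature from the parameter table; the helper name is hypothetical:

```python
# Hypothetical callback for tool_loop's on_tool_call parameter.
# Signature (name, args, result_str) follows the parameter table above.
calls: list[tuple] = []

def log_tool_call(name: str, args: dict, result_str: str) -> None:
    """Record and print each tool invocation; long results are truncated."""
    calls.append((name, args, result_str[:80]))
    print(f"[tool] {name}({args}) -> {result_str[:80]}")
```

Passing it as on_tool_call=log_tool_call gives a running log of every marketplace or local tool call the loop makes.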
How It Works
1. Fetches marketplace tools via get_tools() (unless tools is provided)
2. Merges marketplace tools with any local_tools
3. Sends messages + tool definitions to the LLM
4. If the LLM makes tool calls: executes each one (local tools run in-process, marketplace tools via the Kate API), appends results to the conversation, and loops back to step 3
5. If the LLM responds with text (no tool calls): returns the result
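The steps above can be sketched as a simplified loop. This is an illustration of the control flow, not the SDK's implementation: the real loop also records the assistant's tool-call messages, handles provider-specific formats, and emits tracing spans.

```python
import asyncio
import json

async def sketch_tool_loop(llm, tools, messages, max_rounds=10):
    """Simplified skeleton of the agentic loop described above.
    `llm` is any async callable returning {"content": ..., "tool_calls": ...};
    `tools` maps a tool name to a plain callable (standing in for both the
    definition sent to the model and the implementation)."""
    tool_calls_made = 0
    for round_no in range(1, max_rounds + 1):
        reply = await llm(messages, tools)       # step 3: send messages + tools
        if not reply.get("tool_calls"):          # step 5: plain text -> done
            return reply["content"], round_no, tool_calls_made
        for call in reply["tool_calls"]:         # step 4: run each tool call
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": json.dumps(result)})
            tool_calls_made += 1
    return None, max_rounds, tool_calls_made     # forced stop at max_rounds
```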
Example with Anthropic
```python
from anthropic import AsyncAnthropic
import projectkate

projectkate.init(
    api_url="https://api.projectkate.com",
    api_key="your-kate-api-key",
    agent_name="My Agent",
)

result = await projectkate.tool_loop(
    AsyncAnthropic(),
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "user", "content": "Analyze keywords for example.com"},
    ],
    max_rounds=5,
    max_tokens=4096,  # passed to the Anthropic client
)

print(result.content)
print(f"Rounds: {result.rounds}, Tool calls: {result.tool_calls_made}")
```

get_tools()
Discover available marketplace tools for the current agent.
```python
tools = await projectkate.get_tools(format="openai")
```

Parameters:
| Name | Type | Default | Description |
|---|---|---|---|
| format | str | "openai" | Tool definition format. "openai" returns function-calling format |
Returns: list[dict] - tool definitions ready to pass to an LLM
Requires remote mode (projectkate.init(api_url=..., api_key=...) called first).
Returns an empty list on network failure for graceful degradation.
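The empty-list fallback means callers can branch on the result rather than wrapping every fetch in try/except. A minimal sketch of how that degradation pattern works in general; the helper below is hypothetical, not part of the SDK:

```python
import asyncio

async def safe_get_tools(fetch) -> list[dict]:
    """Sketch of the documented fallback behavior (hypothetical helper):
    any network-level error yields [] so the caller's loop can proceed
    without marketplace tools instead of crashing."""
    try:
        return await fetch()
    except OSError:  # connection refused, DNS failure, timeout, ...
        return []
```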
call_tool()
Execute a single marketplace tool.
```python
result = await projectkate.call_tool(
    "get_weather",
    {"city": "London", "units": "metric"},
)

print(result.output)             # {"city": "London", "temperature": 12, ...}
print(result.tokens_charged)     # 5
print(result.execution_time_ms)  # 230
```

Parameters:
| Name | Type | Default | Description |
|---|---|---|---|
| tool_name | str | required | Name of the tool to execute |
| input_data | dict \| None | None | Tool input parameters |
Returns: ToolResult
Raises:
- KateCredentialError - tool requires credentials not yet configured (HTTP 428)
- KateBalanceError - insufficient token balance (HTTP 402)
- KateToolError - tool execution failed on the server (HTTP 502/504)
Tool calls are automatically recorded as TOOL spans in Kate's tracing system.
LocalTool
Define a local tool to mix with marketplace tools in tool_loop().
```python
from projectkate import LocalTool

my_tool = LocalTool(
    name="calculate_roi",
    description="Calculate return on investment from cost and revenue",
    parameters={
        "type": "object",
        "properties": {
            "cost": {"type": "number", "description": "Total cost"},
            "revenue": {"type": "number", "description": "Total revenue"},
        },
        "required": ["cost", "revenue"],
    },
    fn=lambda cost, revenue: {"roi": (revenue - cost) / cost * 100},
)
```

Fields:
| Name | Type | Description |
|---|---|---|
| name | str | Tool name (must not conflict with marketplace tool names) |
| description | str | What the tool does (shown to the LLM) |
| parameters | dict | JSON Schema for the tool's parameters |
| fn | Callable | The function to execute (sync or async) |
Local tools run in your process - no network call to Kate. The function receives only the kwargs that match its signature (extra args are filtered).
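The signature-based filtering can be sketched with the standard inspect module. This is an illustration of the behavior described above, not the SDK's actual implementation:

```python
import inspect

def call_with_matching_kwargs(fn, kwargs: dict):
    """Invoke fn with only the kwargs its signature accepts, mirroring
    the filtering behavior described above (illustration, not SDK code)."""
    params = inspect.signature(fn).parameters
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return fn(**accepted)
```

With the calculate_roi example above, an LLM that passes an extra argument would not break the tool: the unknown key is simply dropped before the function runs.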
Data Models
ToolResult
Returned by call_tool() and client.tools.execute().
```python
@dataclass
class ToolResult:
    success: bool
    output: Any = None
    error: str | None = None
    execution_time_ms: int = 0
    tokens_charged: int = 0
```

ToolLoopResult
Returned by tool_loop().
```python
@dataclass
class ToolLoopResult:
    content: str          # Final text response from the LLM
    messages: list[dict]  # Full conversation history including tool calls
    tool_calls_made: int  # Total number of tool calls across all rounds
    rounds: int           # Number of LLM round-trips
    model: str            # Model used
```

ToolCredentialStatus
Returned by client.tools.status().
```python
@dataclass
class ToolCredentialStatus:
    tool_name: str
    artifact_id: str
    status: str              # "active", "pending_credentials", "not_required"
    missing_keys: list[str]  # Keys the buyer still needs to provide
```

Error Classes
All tool errors inherit from KateRemoteError.
| Error | HTTP Code | When |
|---|---|---|
| KateCredentialError | 428 | Tool requires credentials not yet configured |
| KateBalanceError | 402 | Insufficient token balance to execute tool |
| KateToolError | 502/504 | Tool execution failed or timed out on the server |
```python
from projectkate import KateCredentialError, KateBalanceError, KateToolError

try:
    result = await projectkate.call_tool("get_weather", {"city": "London"})
except KateCredentialError as e:
    print(f"Missing credentials: {e}")
except KateBalanceError as e:
    print(f"Need more tokens: {e}")
except KateToolError as e:
    print(f"Tool failed: {e}")
```

Management Client
For administrative operations, use KateClient.tools:
client.tools.list(agent_id)
List available tools for an agent.
```python
async with KateClient(api_key="your-api-key") as client:
    tools = await client.tools.list(agent_id="your-agent-id")
```

Parameters:
| Name | Type | Description |
|---|---|---|
| agent_id | str | Agent UUID |
| format | str | Tool format (default: "openai") |
Returns: list[dict]
client.tools.execute(agent_id, tool_name, input_data)
Execute a tool through the management client.
```python
result = await client.tools.execute(
    agent_id="your-agent-id",
    tool_name="get_weather",
    input_data={"city": "London"},
)
```

Returns: ToolResult
client.tools.status(agent_id)
Check credential status for all tools.
```python
statuses = await client.tools.status(agent_id="your-agent-id")
for s in statuses:
    print(f"{s.tool_name}: {s.status}")
```

Returns: list[ToolCredentialStatus]
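A common pattern is to check status() before execute(), skipping tools whose credentials are still pending. A sketch using the ToolCredentialStatus fields documented above; the filtering helper is hypothetical, and the dataclass is restated here only to keep the example self-contained:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCredentialStatus:          # mirrors the model shown above
    tool_name: str
    artifact_id: str
    status: str                      # "active", "pending_credentials", "not_required"
    missing_keys: list[str] = field(default_factory=list)

def executable_tools(statuses: list[ToolCredentialStatus]) -> list[str]:
    """Names of tools that can run now: credentials configured or not needed."""
    return [s.tool_name for s in statuses
            if s.status in ("active", "not_required")]
```

Tools left out by this filter would otherwise raise KateCredentialError (HTTP 428) when executed.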
Next Steps
- Use Marketplace Tools - practical guide with examples
- Tools API Reference - REST endpoints
- Environment Variables API - credential management
- SDK Overview - other SDK capabilities