Invoke

The invoke() function allows you to execute agents programmatically from Python code, making it easy to test agents, integrate them into applications, and automate workflows.

Quick Start

from erdo import invoke

# Simple invocation
response = invoke(
    "my-agent",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(f"Success: {response.success}")
print(f"Result: {response.result}")

Basic Usage

Invoke with Messages

from erdo import invoke

response = invoke(
    "data-question-answerer",
    messages=[{"role": "user", "content": "What were Q4 sales?"}]
)

if response.success:
    print(response.result)
else:
    print(f"Error: {response.error}")

Invoke with Datasets

response = invoke(
    "data-question-answerer",
    messages=[{"role": "user", "content": "Show me the top products"}],
    datasets=["sales-q4-2024", "products-catalog"]
)

Invoke with Parameters

response = invoke(
    "data-analyzer",
    messages=[{"role": "user", "content": "Analyze the data"}],
    parameters={
        "analysis_type": "trend",
        "time_period": "monthly"
    }
)

Invocation Modes

Control how bot actions (especially LLM calls) are executed for testing and development:
Mode    | Description               | Cost                       | Use Case
live    | Real API calls            | $$$ per run                | Production, fresh data
replay  | Cached responses          | $$$ first run, FREE after  | Testing, CI/CD
manual  | Developer-provided mocks  | FREE always                | Unit tests, deterministic behavior

Live Mode (Default)

Runs against the real backend and makes real LLM API calls (costs $):
response = invoke(
    "my-agent",
    messages=[...],
    mode="live"  # or omit mode parameter
)
When to use:
  • Production invocations
  • Development with real-time data
  • When you need fresh LLM responses

Replay Mode

Uses cached responses - first run costs $, subsequent runs FREE:
# Basic replay - uses cache if available
response = invoke(
    "my-agent",
    messages=[...],
    mode="replay"
)

# First run: Executes live and caches response
# Second run: Returns cached response (FREE!)
Perfect for:
  • Development and testing
  • CI/CD pipelines
  • Iterating on agent logic
  • Running test suites
Cache behavior:
  • Cache key: SHA256 hash of (bot_id, bot_updated_at, action_type, parameters)
  • Automatic invalidation when bot is updated
  • Multi-tenant isolation (scoped by organization)
  • Only LLM calls are cached (configurable)
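
For illustration, the replay cache key described above could be derived roughly like this (a sketch only; the exact serialization and hashing are internal to the backend):
import hashlib
import json

def replay_cache_key(bot_id, bot_updated_at, action_type, parameters):
    """Sketch of a replay cache key: SHA256 over (bot_id, bot_updated_at,
    action_type, parameters). The backend may serialize these differently."""
    payload = json.dumps(
        {
            "bot_id": bot_id,
            "bot_updated_at": bot_updated_at,  # changes when the bot is updated, invalidating the cache
            "action_type": action_type,
            "parameters": parameters,
        },
        sort_keys=True,  # stable key ordering so identical inputs hash identically
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()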

Replay Mode with Refresh

Force a fresh API call while staying in replay mode:
# Bypass cache and get fresh response
response = invoke(
    "my-agent",
    messages=[...],
    mode={"mode": "replay", "refresh": True}
)

# This will:
# 1. Bypass the cache lookup
# 2. Execute live against the LLM API
# 3. Update the cached response
# 4. Return the fresh result
When to use refresh:
  • Cache contains outdated responses
  • Testing cache refresh behavior
  • Updating cache after bot logic changes
  • Forcing fresh data without switching out of replay mode
Example workflow:
# Test 1: First run - caches response
invoke("my-agent", messages=[...], mode="replay")

# Test 2: Uses cached response (free!)
invoke("my-agent", messages=[...], mode="replay")

# Test 3: Update cache with fresh response
invoke("my-agent", messages=[...], mode={"mode": "replay", "refresh": True})

# Test 4: Uses newly cached response
invoke("my-agent", messages=[...], mode="replay")

Manual Mode

Uses developer-provided mock responses - always free:
response = invoke(
    "my-agent",
    messages=[...],
    mode="manual",
    manual_mocks={
        "llm.message": {
            "status": "success",
            "output": {
                "content": "Mocked response content",
                "model": "mock-model"
            }
        }
    }
)
When to use:
  • Unit testing with deterministic responses
  • Testing error handling
  • CI/CD pipelines
  • Offline development
Mock format:
manual_mocks = {
    "llm.message": {  # Action type
        "status": "success",  # or "error"
        "output": {
            "content": "Mocked LLM response",
            "model": "mock-model",
            # ... other output fields
        }
    },
    # Add mocks for other action types
}
Important:
  • Manual mode requires mocks for all executed actions
  • An error is raised if no mock is provided for an executed action
  • Action type keys: llm.message, memory.search, codeexec.run, etc.
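
An agent that executes more than one action type needs a mock for each. For example, an agent that also searches memory might be mocked like this (a sketch; the output fields shown for memory.search are illustrative assumptions, not the authoritative shape):
response = invoke(
    "my-agent",
    messages=[{"role": "user", "content": "Hello"}],
    mode="manual",
    manual_mocks={
        "llm.message": {
            "status": "success",
            "output": {"content": "Mocked LLM response", "model": "mock-model"},
        },
        "memory.search": {
            "status": "success",
            "output": {"results": []},  # illustrative output shape
        },
    },
)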

Output Formats

Events Format (Default)

Returns raw events:
response = invoke(
    "my-agent",
    messages=[...],
    output_format="events"  # Default
)

# Response.result contains raw events
print(response.result)  # {"events": [...]}

Text Format

Returns human-readable text:
response = invoke(
    "my-agent",
    messages=[...],
    output_format="text"
)

# Clean text output
print(response.result)
# Bot: my agent
# Invocation ID: abc-123
# Result:
# The answer is 42

Text with Verbose Steps

Show step-by-step execution:
response = invoke(
    "my-agent",
    messages=[...],
    output_format="text",
    verbose=True
)

# Output includes steps
print(response.result)
# Bot: my agent
# Invocation ID: abc-123
#
# Steps:
#   ✓ parse_input (utils.parse_json)
#   ✓ analyze_data (llm.message)
#   ✓ format_output (utils.echo)
#
# Result:
# The answer is 42

JSON Format

Returns structured summary:
response = invoke(
    "my-agent",
    messages=[...],
    output_format="json"
)

# Structured data
print(response.result)
# {
#   "bot_name": "my agent",
#   "bot_key": "my-agent",
#   "invocation_id": "abc-123",
#   "steps": [...],
#   "result": "The answer is 42",
#   "success": true
# }

Streaming

Stream events in real-time:
response = invoke(
    "my-agent",
    messages=[...],
    stream=True
)

# Access events
for event in response.events:
    print(event)
Stream with formatted output:
response = invoke(
    "my-agent",
    messages=[...],
    stream=True,
    output_format="text"
)

# Output streams to stdout automatically
# Final result in response.result

InvokeResult

The invoke() function returns an InvokeResult object with a clean structure following the executor pattern:
class InvokeResult:
    success: bool                    # Whether invocation succeeded
    bot_id: Optional[str]           # Bot key
    invocation_id: Optional[str]    # Unique invocation ID
    result: Optional[Dict]          # types.Result object with status/parameters/output/message/error
    messages: List[Dict[str, Any]]  # All messages from all steps (including sub-agents)
    steps: List[Dict[str, Any]]     # Information about executed steps
    events: List[Dict[str, Any]]    # Complete raw event stream for debugging
    error: Optional[str]            # Error message if failed

Understanding the Result Structure

The result field follows the standardized types.Result structure from the backend:
{
    "status": "success",           # "success" or "error"
    "parameters": {...},           # Input parameters
    "output": {                    # Output content
        "content": [               # Array of content items
            {
                "content_type": "text",
                "content": "The actual response..."
            }
        ]
    },
    "message": "Optional message",
    "error": None                  # Error details if status is "error"
}
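
Because pulling the text out of result.output.content comes up repeatedly in the examples below, a small helper can keep call sites short (a sketch based on the structure above):
def extract_text(result):
    """Concatenate the text items from a types.Result-shaped dict."""
    if not result or not result.get("output"):
        return ""
    return "".join(
        item["content"]
        for item in result["output"].get("content", [])
        if item.get("content_type") == "text"
    )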

Example Usage

response = invoke("my-agent", messages=[...])

# Check success
if response.success:
    # Access the final result
    print(f"Status: {response.result['status']}")

    # Extract text from result.output.content
    if response.result and response.result.get('output'):
        for item in response.result['output'].get('content', []):
            if item.get('content_type') == 'text':
                print(item['content'])

    # Access all messages (including from sub-agents)
    print(f"\nMessages ({len(response.messages)}):")
    for msg in response.messages:
        print(f"  {msg['role']}: {msg['content'][:50]}...")

    # Access step execution info
    print(f"\nSteps ({len(response.steps)}):")
    for step in response.steps:
        print(f"  ✓ {step['key']} ({step['action']})")

    # Get invocation ID
    print(f"\nInvocation: {response.invocation_id}")

    # Access raw events for debugging
    print(f"Events: {len(response.events)} raw events")
else:
    # Handle error
    print(f"Error: {response.error}")

Accessing Messages from All Steps

The messages field captures messages from all events, including intermediate steps and sub-agents, not just the final output:
response = invoke(
    "agent-with-sub-agents",
    messages=[{"role": "user", "content": "Process this"}]
)

# Get all messages from main agent and sub-agents
for msg in response.messages:
    print(f"{msg['role']}: {msg['content']}")

# Example output might include:
# assistant: Processing your request...
# assistant: Calling data analyzer sub-agent...
# assistant: Analysis complete. Here are the results...

Complete API Reference

def invoke(
    bot_key: str,
    messages: Optional[List[Dict[str, str]]] = None,
    parameters: Optional[Dict[str, Any]] = None,
    datasets: Optional[List[str]] = None,
    mode: Optional[Union[str, Dict[str, Any]]] = None,
    manual_mocks: Optional[Dict[str, Dict[str, Any]]] = None,
    stream: bool = False,
    output_format: str = "events",
    verbose: bool = False,
    print_events: bool = False,
    **kwargs
) -> InvokeResult

Parameters

  • bot_key (str, required): Bot key (e.g., "my-agent", "data-question-answerer")
  • messages (list, optional): Messages in format [{"role": "user", "content": "..."}]
  • parameters (dict, optional): Parameters to pass to the bot
  • datasets (list, optional): Dataset slugs to include (e.g., ["sales-2024", "customers"])
  • mode (str or dict, optional): Invocation mode
    • String: "live" (default), "replay", or "manual"
    • Dict: {"mode": "replay", "refresh": True} for advanced options
  • manual_mocks (dict, optional): Manual mock responses for mode="manual"
    • Format: {"action_type": {"status": "success", "output": {...}}}
  • stream (bool, optional): Whether to stream events (default: False)
  • output_format (str, optional): Output format - "events" (raw), "text" (formatted), or "json" (summary) (default: "events")
  • verbose (bool, optional): Show detailed steps (text format only; default: False)
  • print_events (bool, optional): Print events as they arrive (default: False)

Keyword Arguments

  • endpoint (str): Custom API endpoint
  • auth_token (str): Custom auth token
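
For example, when targeting a non-default deployment (the endpoint and token values below are placeholders):
response = invoke(
    "my-agent",
    messages=[{"role": "user", "content": "Hello"}],
    endpoint="https://api.example.com",  # placeholder endpoint
    auth_token="your-token",             # placeholder token
)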

Examples

Testing Agents

from erdo import invoke
from erdo.test import text_contains

# Test an agent
response = invoke(
    "my-agent",
    messages=[{"role": "user", "content": "Test input"}],
    mode="replay",  # Free after first run
)

assert response.success
assert response.result['status'] == 'success'

# Check the text content
if response.result and response.result.get('output'):
    content_text = ""
    for item in response.result['output'].get('content', []):
        if item.get('content_type') == 'text':
            content_text += item['content']
    assert text_contains(content_text, "expected output")

# Verify steps were executed
assert len(response.steps) > 0
assert any(step['action'] == 'llm.message' for step in response.steps)
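
The same checks drop naturally into a pytest-style test, with replay mode keeping the suite free after the first run (a sketch):
from erdo import invoke

def test_my_agent_answers():
    response = invoke(
        "my-agent",
        messages=[{"role": "user", "content": "Test input"}],
        mode="replay",
    )
    assert response.success
    assert response.result["status"] == "success"
    assert any(step["action"] == "llm.message" for step in response.steps)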

Multi-turn Conversation

response = invoke(
    "chatbot",
    messages=[
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "Tell me about Erdo"}
    ],
    mode="replay"
)

Data Analysis

response = invoke(
    "data-question-answerer",
    messages=[{"role": "user", "content": "What were Q4 sales by region?"}],
    datasets=["sales-2024"],
    parameters={
        "time_period": "Q4",
        "group_by": "region"
    },
    mode="live"
)

if response.success:
    # Access the result
    print(f"Status: {response.result['status']}")

    # Extract and print the text content
    if response.result.get('output'):
        for item in response.result['output'].get('content', []):
            if item.get('content_type') == 'text':
                print(item['content'])

    # Show execution steps
    print(f"\nExecuted {len(response.steps)} steps:")
    for step in response.steps:
        print(f"  ✓ {step['key']} ({step['action']})")

Streaming with Progress

response = invoke(
    "long-running-agent",
    messages=[{"role": "user", "content": "Process large dataset"}],
    datasets=["large-dataset"],
    stream=True,
    output_format="text",
    verbose=True
)

# Output streams as agent executes
# Steps shown in real-time

Batch Processing

from concurrent.futures import ThreadPoolExecutor

def process_query(query):
    return invoke(
        "data-analyzer",
        messages=[{"role": "user", "content": query}],
        mode="replay"
    )

queries = ["Query 1", "Query 2", "Query 3"]

# Process in parallel
with ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(process_query, queries))

for i, result in enumerate(results):
    if result.success:
        print(f"Query {i+1}: {result.result}")

Integration with Flask

from flask import Flask, request, jsonify
from erdo import invoke

app = Flask(__name__)

@app.route('/analyze', methods=['POST'])
def analyze():
    data = request.json

    response = invoke(
        "data-analyzer",
        messages=[{"role": "user", "content": data["query"]}],
        datasets=data.get("datasets", []),
        parameters=data.get("parameters"),
        mode="live"
    )

    if response.success:
        # Extract text content from result
        content_text = ""
        if response.result and response.result.get('output'):
            for item in response.result['output'].get('content', []):
                if item.get('content_type') == 'text':
                    content_text += item['content']

        return jsonify({
            "success": True,
            "content": content_text,
            "result": response.result,
            "steps": response.steps,
            "invocation_id": response.invocation_id
        })
    else:
        return jsonify({
            "success": False,
            "error": response.error
        }), 400

if __name__ == "__main__":
    app.run()

Error Handling

from erdo import invoke

try:
    response = invoke(
        "my-agent",
        messages=[{"role": "user", "content": "Hello"}]
    )

    if response.success:
        print(response.result)
    else:
        # Agent returned an error
        print(f"Agent error: {response.error}")

except Exception as e:
    # Network or other error
    print(f"Invocation failed: {e}")

Best Practices

1. Use Replay Mode for Testing

# Good - fast, free after first run
response = invoke("my-agent", messages=[...], mode="replay")

# Avoid - slow, costs $ every time
response = invoke("my-agent", messages=[...])

2. Always Check Success

# Good
response = invoke("my-agent", messages=[...])
if response.success:
    print(response.result)
else:
    print(f"Error: {response.error}")

# Bad - may crash
response = invoke("my-agent", messages=[...])
print(response.result)  # Could be None if failed

3. Use Appropriate Output Format

# For humans
response = invoke("my-agent", messages=[...], output_format="text")

# For integration/parsing
response = invoke("my-agent", messages=[...], output_format="json")

# For custom processing
response = invoke("my-agent", messages=[...], output_format="events")

4. Stream Long-Running Agents

# Good for long-running agents
response = invoke(
    "long-agent",
    messages=[...],
    stream=True,
    output_format="text"
)

# Bad - may timeout
response = invoke("long-agent", messages=[...])

Troubleshooting

Bot Not Found

Make sure the agent is synced:
erdo sync-agent path/to/agent.py

Authentication Errors

Log in first:
erdo login
Or set environment variables:
export ERDO_ENDPOINT="https://api.erdo.ai"
export ERDO_AUTH_TOKEN="your-token"

Slow Invocations

Use replay mode:
# Fast - cached after first run
response = invoke("my-agent", messages=[...], mode="replay")

Import Errors

Install the SDK:
cd erdo-agents
uv pip install -e ../erdo-python-sdk

See Also