Agents

Agents are the core building blocks in Erdo. They contain workflow logic, execute steps, and coordinate AI operations.

Basic Agent

from erdo import Agent, state
from erdo.actions import llm

agent = Agent(
    name="data_analyzer",
    description="Analyzes data and generates insights"
)

# Add a step to the agent
analysis_step = agent.step(
    llm.message(
        model="claude-sonnet-4",
        query=f"Analyze this data: {state.dataset}"
    )
)

Agent Configuration

agent = Agent(
    name="unique_agent_name",        # Required: Unique identifier
    description="What this agent does",  # Required: Clear description
    visibility="public",             # public, private, or organization
    version="1.0.0",                # Semantic versioning
    tags=["analysis", "automation"], # Categorization tags
    timeout=300                     # Maximum execution time (seconds)
)

Status Messages

Agents display status messages to users during execution. There are three ways to configure these messages, ordered here from simplest to most control (their runtime priority is the reverse; see Priority Order below):

1. Static Status (running_status, finished_status)

Simple static strings shown immediately. Use when the message doesn't need context from inputs or outputs.

agent = Agent(
    name="data_analyzer",
    running_status="Analyzing data...",
    finished_status="Analysis complete"
)

2. Context-based Status (running_status_context, finished_status_context)

Provide context that gets hydrated with template variables and wrapped in a standard LLM prompt to generate a dynamic status message. Use when you want dynamic messages but don't need full control over the prompt.

from erdo import Agent, TemplateString  # TemplateString import path assumed

agent = Agent(
    name="data_analyzer",
    running_status_context=TemplateString("User query: {{query}}"),
    finished_status_context=TemplateString("Result: {{steps.analysis.output}}")
)

The context is wrapped in a standard prompt like:

"Generate a very short (max 50 chars) present-progressive status message for this task. Context: [your context]. Output only the message."
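
To make the wrapping concrete, here is a rough pure-Python sketch of how a context string might be hydrated and then wrapped. The exact wrapper text and templating engine are Erdo internals; the function names here are illustrative only:

```python
def hydrate(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders with values (a simplified
    stand-in for TemplateString hydration)."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

def wrap_status_context(context: str) -> str:
    """Wrap the hydrated context in the standard status-generation prompt."""
    return (
        "Generate a very short (max 50 chars) present-progressive status "
        f"message for this task. Context: {context}. Output only the message."
    )

prompt = wrap_status_context(
    hydrate("User query: {{query}}", {"query": "sales by region"})
)
```

The hydrated context is embedded verbatim, so anything your template references (inputs, step outputs) ends up visible to the status-generating LLM.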

3. Custom Prompt (running_status_prompt, finished_status_prompt)

Provide the full LLM prompt for complete control over status generation. Use when you need custom instructions or formatting.

from erdo import Agent, TemplateString  # TemplateString import path assumed

agent = Agent(
    name="data_analyzer",
    running_status_prompt=TemplateString(
        "You are a status message generator. The user asked: {{query}}. "
        "Write a brief, friendly status message (max 40 chars) that explains "
        "what you're doing. Use emoji if appropriate."
    )
)

Priority Order

When multiple status fields are set, priority is: prompt > context > status
  1. If *_prompt is set → LLM called with your full prompt
  2. Else if *_context is set → LLM called with context wrapped in standard prompt
  3. Else if *_status is set → static message shown immediately
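
The fallback chain above can be sketched in plain Python (illustrative only, not Erdo's internal logic):

```python
def resolve_status(prompt=None, context=None, status=None):
    """Pick which status mechanism applies: prompt > context > static."""
    if prompt is not None:
        return ("llm", prompt)        # LLM called with your full prompt
    if context is not None:
        # Context is wrapped in the standard status-generation prompt
        return ("llm", f"standard wrapper + {context}")
    if status is not None:
        return ("static", status)     # shown immediately, no LLM call
    return ("none", None)

# A custom prompt wins even when the other fields are also set
kind, _ = resolve_status(prompt="Custom...", context="ctx", status="Working...")
```

Setting more than one field is harmless but redundant: only the highest-priority one is ever used.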

Choosing the Right Approach

Use Case                 | Field                   | Example
Simple, fixed message    | running_status          | "Processing request..."
Dynamic based on input   | running_status_context  | TemplateString("Query: {{query}}")
Custom LLM instructions  | running_status_prompt   | Full custom prompt with a specific format

Best practice: Start with _status for simple cases. Use _context when you want dynamic messages that reference inputs/outputs. Reserve _prompt for advanced cases requiring full prompt control.

Steps and Dependencies

Steps define what actions an agent performs. Use depends_on to control execution order:

from erdo.actions import websearch, llm

research_step = agent.step(
    websearch.search(query=f"{state.topic} latest research")
)

analysis_step = agent.step(
    llm.message(
        model="claude-sonnet-4",
        query=f"Analyze this research: {research_step.output.results}"
    ),
    depends_on=research_step  # Wait for research to complete
)

Result Handlers

Handle step outcomes with conditional logic:

from erdo.conditions import IsSuccess, GreaterThan
from erdo.actions import memory, utils

# Store successful results
analysis_step.on(
    IsSuccess() & GreaterThan("confidence", "0.8"),
    memory.store(memory={
        "content": analysis_step.output.response,
        "type": "high_confidence_analysis"
    })
)

# Handle low confidence results
analysis_step.on(
    IsSuccess() & ~GreaterThan("confidence", "0.8"),
    utils.send_status(
        status="review_needed",
        message="Analysis has low confidence - human review recommended"
    )
)
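
Conditions compose with & (and) and ~ (not), as in the handlers above. A minimal sketch of how composable condition objects can work in plain Python (illustrative of the pattern, not Erdo's implementation):

```python
class Condition:
    """A predicate over a step result that composes with & and ~."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, result):
        return self.fn(result)

    def __and__(self, other):
        # Both conditions must hold
        return Condition(lambda r: self(r) and other(r))

    def __invert__(self):
        # Negate this condition
        return Condition(lambda r: not self(r))

is_success = Condition(lambda r: r.get("status") == "success")
high_confidence = Condition(lambda r: float(r.get("confidence", 0)) > 0.8)

# Mirrors the low-confidence handler: success, but not high confidence
needs_review = is_success & ~high_confidence
```

Because each operator returns a new Condition, arbitrary combinations stay lazy: nothing is evaluated until the step result arrives.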

Example: Document Processor

from erdo import Agent, state
from erdo.actions import llm, memory, utils
from erdo.conditions import IsSuccess, TextContains

processor = Agent(
    name="document_processor",
    description="Extracts and validates information from documents"
)

# Step 1: Extract information
extract_step = processor.step(
    llm.message(
        model="claude-sonnet-4",
        query=f"Extract key information from: {state.document}",
        response_format={
            "Type": "json_schema",
            "Schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "summary": {"type": "string"},
                    "key_points": {"type": "array"}
                }
            }
        }
    )
)

# Step 2: Validate extraction
validate_step = processor.step(
    llm.message(
        model="claude-sonnet-4",
        query=f"Validate this extraction is complete: {extract_step.output.response}"
    ),
    depends_on=extract_step
)

# Store validated results
validate_step.on(
    IsSuccess() & TextContains("complete"),
    memory.store(memory={
        "content": extract_step.output.response,
        "type": "validated_extraction",
        "tags": ["document", "validated"]
    })
)

# Handle incomplete extractions
validate_step.on(
    IsSuccess() & ~TextContains("complete"),
    utils.send_status(
        status="incomplete",
        message="Document extraction incomplete - manual review needed"
    )
)

Agent Patterns

Sequential Processing

Execute steps one after another:

step1 = agent.step(action1)
step2 = agent.step(action2, depends_on=step1)
step3 = agent.step(action3, depends_on=step2)

Parallel Processing

Execute multiple steps simultaneously:

# These run in parallel
search_step = agent.step(websearch.search(query=state.query))
analysis_step = agent.step(llm.message(query=state.prompt))

# This waits for both to complete
synthesis_step = agent.step(
    llm.message(query="Combine results"),
    depends_on=[search_step, analysis_step]
)
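
Under the hood this amounts to a dependency graph: any step whose dependencies are all complete can run concurrently. A rough pure-Python sketch of that scheduling idea (illustrative, not Erdo's scheduler):

```python
def runnable(steps: dict, done: set) -> list:
    """Return the steps whose dependencies are all complete.

    steps maps a step name to the list of step names it depends on."""
    return [name for name, deps in steps.items()
            if name not in done and all(d in done for d in deps)]

steps = {
    "search": [],                           # no dependencies
    "analysis": [],                         # no dependencies
    "synthesis": ["search", "analysis"],    # waits for both
}

first_wave = runnable(steps, done=set())
second_wave = runnable(steps, done={"search", "analysis"})
```

Here search and analysis form the first wave and run in parallel; synthesis only becomes runnable once both are done.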

Error Handling

Use result handlers to manage failures:

from erdo.conditions import IsError

process_step = agent.step(some_action)

# Handle errors with retry logic
process_step.on(
    IsError(),
    utils.send_status(status="failed", message="Processing failed"),
    # Could add retry logic here
)

Best Practices

  1. Clear Names: Use descriptive agent and step names
  2. Error Handling: Always handle potential failures
  3. Validation: Validate AI outputs before using them
  4. Dependencies: Use depends_on to control execution flow
  5. Memory: Store important results for future reference
  6. Timeouts: Set appropriate timeout values for long-running tasks