
REST API

Erdo’s REST API lets you integrate your data platform into any application. Query datasets, write data, manage conversations, create automations, and more — all via standard HTTP requests.

Base URL: https://api.erdo.ai

Authentication: All requests require a Bearer token in the Authorization header.
curl https://api.erdo.ai/v1/datasets \
  -H "Authorization: Bearer YOUR_API_KEY"
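
The same header pattern applies to every endpoint below. As a sketch, a minimal request builder in Python using only the standard library (`erdo_request` and its argument names are our own, not an official SDK):

```python
import json
import urllib.request

API_BASE = "https://api.erdo.ai"

def erdo_request(method, path, api_key, body=None):
    """Build an authenticated request for the Erdo API.

    Execute it with urllib.request.urlopen(req)."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(API_BASE + path, data=data, method=method)
    req.add_header("Authorization", "Bearer " + api_key)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

# Example: build the "list datasets" request shown above.
req = erdo_request("GET", "/v1/datasets", "YOUR_API_KEY")
```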

Getting an API Key

Click your profile in the bottom-left corner of Erdo and go to API Keys. Create a new key and copy the token. The organization is inferred from your API key automatically.

Scoped Tokens

For building apps where your end-users interact with Erdo, use scoped tokens to restrict access to specific datasets and threads.

Datasets

List Datasets

GET /v1/datasets
Returns all datasets in your organization.
Parameter  Type    Description
limit      number  Max results (default 20, max 100)
offset     number  Pagination offset
curl "https://api.erdo.ai/v1/datasets?limit=10" \
  -H "Authorization: Bearer YOUR_API_KEY"
{
  "datasets": [
    {
      "id": "uuid",
      "slug": "my-org.sales-data",
      "name": "Sales Data",
      "description": "Monthly sales figures",
      "type": "file",
      "status": "active"
    }
  ],
  "total": 42
}
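
Since limit is capped at 100, listing every dataset in a large organization means paging on offset. A sketch of that loop, with `fetch_page` standing in for the actual HTTP call:

```python
def iter_datasets(fetch_page, page_size=100):
    """Yield datasets page by page until the reported total is exhausted."""
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page["datasets"]
        offset += len(page["datasets"])
        if offset >= page["total"] or not page["datasets"]:
            break
```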

Create Dataset

POST /v1/datasets-create
Create a new empty dataset. Uses your organization’s default storage backend. After creation, use Write Rows to add data.
Parameter     Type    Description
name          string  Name for the dataset
description   string  Optional description
instructions  string  Optional instructions for AI agents analyzing this dataset
curl -X POST "https://api.erdo.ai/v1/datasets-create" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "Daily Metrics", "description": "Automated metrics from our monitoring pipeline"}'
{
  "id": "uuid",
  "slug": "my-org.daily-metrics",
  "name": "Daily Metrics",
  "description": "Automated metrics from our monitoring pipeline",
  "type": "file",
  "status": "active"
}

Delete Dataset

DELETE /v1/datasets/:slug
Permanently delete a dataset and all its data. Requires admin permission on the dataset.
curl -X DELETE "https://api.erdo.ai/v1/datasets/my-org.daily-metrics" \
  -H "Authorization: Bearer YOUR_API_KEY"

Search Datasets

GET /v1/datasets-search
Search datasets by name.
Parameter  Type    Description
query      string  Search text
curl "https://api.erdo.ai/v1/datasets-search?query=revenue" \
  -H "Authorization: Bearer YOUR_API_KEY"

Get Dataset Schema

GET /v1/datasets/:id/schema
Get detailed schema for a dataset including column names, types, statistics, and sample data. Call this before writing data to understand the column structure.
curl "https://api.erdo.ai/v1/datasets/my-dataset-uuid/schema" \
  -H "Authorization: Bearer YOUR_API_KEY"

Query Dataset (SQL)

POST /v1/datasets/:slug/query
Run a SQL query against a dataset. The SQL dialect depends on the storage backend (PostgreSQL, ClickHouse, or DuckDB for file datasets).
Parameter  Type    Description
query      string  SQL query to execute
limit      number  Max rows (default 100)
curl -X POST "https://api.erdo.ai/v1/datasets/my-org.sales-data/query" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT * FROM data WHERE revenue > 10000 ORDER BY date DESC", "limit": 50}'
{
  "columns": ["date", "revenue", "orders"],
  "rows": [["2025-03-15", "42300", "156"], ["2025-03-14", "38900", "142"]],
  "row_count": 2
}
For file datasets (CSV/Excel), the table name is always data. For database datasets, use the actual table name from the schema.
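
Query results arrive as parallel columns and rows arrays, so client code usually zips them back into records. A small helper, assuming the response shape shown above (note that values in the sample come back as strings):

```python
def rows_to_dicts(result):
    """Turn a query response's columns/rows arrays into a list of dicts."""
    return [dict(zip(result["columns"], row)) for row in result["rows"]]

# The sample response from the query above.
sample = {
    "columns": ["date", "revenue", "orders"],
    "rows": [["2025-03-15", "42300", "156"], ["2025-03-14", "38900", "142"]],
    "row_count": 2,
}
records = rows_to_dicts(sample)
```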

Query Dataset (Natural Language)

POST /v1/datasets/:slug/query-nl
Query a dataset using natural language. Erdo generates and executes the correct SQL for you.
Parameter  Type    Description
question   string  Natural language question
curl -X POST "https://api.erdo.ai/v1/datasets/my-org.sales-data/query-nl" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"question": "What were the top 5 products by revenue last month?"}'

Fetch Dataset Contents

POST /v1/datasets/:slug/fetch
Fetch raw contents of a dataset without writing SQL.
Parameter  Type    Description
limit      number  Max rows (default 1000)
sql_query  string  Optional SQL filter

Get Dataset Context

GET /v1/dataset-context
Get detailed context for multiple datasets at once — schemas, column types, statistics, and sample data.
Parameter      Type      Description
dataset_slugs  string[]  Specific dataset slugs (empty = all)
limit          number    Max datasets (default 10)

Writing Data

Write data into your datasets from any application. Rows are written to whatever storage backend the dataset uses (Postgres, ClickHouse, or CSV file storage).

Write Rows

POST /v1/datasets/:slug/rows
Write or upsert rows to a dataset.
Parameter   Type      Description
rows        object[]  Array of row objects with column names as keys
key_column  string    Optional. Column to upsert on (update existing, insert new)
Append rows (no key column):
curl -X POST "https://api.erdo.ai/v1/datasets/my-org.metrics/rows" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"date": "2025-03-17", "revenue": 42300, "orders": 156},
      {"date": "2025-03-18", "revenue": 45100, "orders": 163}
    ]
  }'
Upsert rows (with key column):
curl -X POST "https://api.erdo.ai/v1/datasets/my-org.metrics/rows" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "rows": [
      {"date": "2025-03-17", "revenue": 43500, "orders": 160}
    ],
    "key_column": "date"
  }'
{
  "rows_affected": 2
}
Storage-specific behavior:
Storage     Append         Upsert (key_column)                        Notes
Postgres    Insert rows    ON CONFLICT DO UPDATE                      Key column must have a unique constraint
ClickHouse  Batch insert   Insert (use ReplacingMergeTree for dedup)
CSV/File    Append to CSV  Not supported (always appends)             New columns are added automatically
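
For larger imports it can help to chunk rows client-side before POSTing. The batch size below is an illustrative choice, not a documented limit, and `post_rows` stands in for the authenticated HTTP call to /v1/datasets/:slug/rows:

```python
def write_in_batches(post_rows, rows, key_column=None, batch_size=500):
    """Split rows into chunks, POST each chunk, return total rows affected."""
    affected = 0
    for i in range(0, len(rows), batch_size):
        body = {"rows": rows[i:i + batch_size]}
        if key_column is not None:
            body["key_column"] = key_column
        affected += post_rows(body)["rows_affected"]
    return affected
```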

Delete Rows

DELETE /v1/datasets/:slug/rows
Delete rows from a dataset. Not supported for file datasets.
Parameter   Type      Description
key_column  string    Column to match against
keys        string[]  Values to delete. Empty = delete all rows.
curl -X DELETE "https://api.erdo.ai/v1/datasets/my-org.metrics/rows" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"key_column": "date", "keys": ["2025-03-17"]}'
{
  "rows_affected": 1
}

Update Schema

POST /v1/datasets/:slug/schema
Update a dataset’s schema: add, remove, rename columns, or change column types. Operations are applied atomically — if any fails, none are applied. Supported for CSV file datasets only. After changes, column analysis is automatically refreshed.
Parameter   Type      Description
operations  object[]  Array of schema operations
Each operation object:
Field        Type    Description
type         string  add_column, remove_column, rename_column, or alter_column_type
column       string  Target column name
new_name     string  New name (for rename_column only)
column_type  string  Type hint: text, integer, float, date, boolean (for add_column and alter_column_type)
curl -X POST "https://api.erdo.ai/v1/datasets/my-org.metrics/schema" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "operations": [
      {"type": "add_column", "column": "region", "column_type": "text"},
      {"type": "rename_column", "column": "rev", "new_name": "revenue"},
      {"type": "alter_column_type", "column": "revenue", "column_type": "float"},
      {"type": "remove_column", "column": "temp_notes"}
    ]
  }'
{
  "columns_added": 1,
  "columns_removed": 1,
  "columns_renamed": 1,
  "columns_retyped": 1,
  "current_columns": ["date", "revenue", "region"]
}
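
The four operation types can be awkward to assemble by hand; small constructors keep payloads consistent. These helpers are hypothetical, not part of any official SDK:

```python
def add_column(column, column_type):
    return {"type": "add_column", "column": column, "column_type": column_type}

def rename_column(column, new_name):
    return {"type": "rename_column", "column": column, "new_name": new_name}

def alter_column_type(column, column_type):
    return {"type": "alter_column_type", "column": column, "column_type": column_type}

def remove_column(column):
    return {"type": "remove_column", "column": column}

# Payload equivalent to the curl example above.
payload = {"operations": [
    add_column("region", "text"),
    rename_column("rev", "revenue"),
    alter_column_type("revenue", "float"),
    remove_column("temp_notes"),
]}
```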

Ask Questions

Ask a Data Question

POST /v1/ask
Ask a natural language question about your data. Invokes an AI agent that analyzes datasets, writes code, and returns a text answer.
Can take 30 seconds to 2 minutes for complex questions.
Parameter      Type      Description
question       string    The data question
dataset_slugs  string[]  Optional. Scope to specific datasets.
timezone       string    Optional. e.g. America/New_York
curl -X POST "https://api.erdo.ai/v1/ask" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"question": "What were total sales last quarter?", "dataset_slugs": ["my-org.sales-data"]}'
{
  "thread_id": "uuid",
  "status": "success",
  "answer": "Total sales last quarter were $1.2M, up 15% from the previous quarter..."
}
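
Because /v1/ask can run for minutes, HTTP clients should set a generous timeout rather than the common few-second default. A sketch using only the standard library (`build_ask_request` is our own helper name):

```python
import json
import urllib.request

def build_ask_request(question, api_key, dataset_slugs=None):
    """Build the POST /v1/ask request; send it separately with a long timeout."""
    body = {"question": question}
    if dataset_slugs:
        body["dataset_slugs"] = dataset_slugs
    return urllib.request.Request(
        "https://api.erdo.ai/v1/ask",
        data=json.dumps(body).encode(),
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Allow up to 3 minutes for complex questions:
# with urllib.request.urlopen(build_ask_request(q, key), timeout=180) as resp:
#     answer = json.load(resp)["answer"]
```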

Threads & Conversations

List Threads

GET /v1/threads
Parameter  Type    Description
limit      number  Max results (default 20)
offset     number  Pagination offset

Get Thread Messages

GET /v1/threads/:id/messages
Get all messages from a conversation thread.

Create Thread

POST /v1/threads-create
Parameter    Type      Description
name         string    Optional thread name
dataset_ids  string[]  Optional dataset UUIDs to attach

Send Message

POST /v1/threads/:id/send
Send a message to a thread and get an AI-generated response.
Can take 30 seconds to 2 minutes.
Parameter  Type    Description
message    string  The message
agent_key  string  Optional. Default: erdo.data-question-answerer
timezone   string  Optional timezone
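
A typical integration creates a thread once, then sends messages to it. A sketch of that flow, with `post` standing in for an authenticated HTTP POST that returns parsed JSON:

```python
def start_conversation(post, first_message, dataset_ids=None):
    """Create a thread, send the first message, return (thread_id, reply)."""
    thread = post("/v1/threads-create", {"dataset_ids": dataset_ids or []})
    reply = post("/v1/threads/" + thread["id"] + "/send",
                 {"message": first_message})
    return thread["id"], reply
```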

Memories & Skills

Memories store reusable knowledge and instructions that Erdo’s AI uses in future conversations.

Create Memory

POST /v1/memories
Parameter    Type      Description
title        string    Short title
content      string    The content or instructions
description  string    Brief description
type         string    snippet (knowledge) or skill (instructions)
category     string    Optional category
tags         string[]  Optional tags
dataset_ids  string[]  Optional associated dataset UUIDs

Search Memories

GET /v1/memories-search
Parameter  Type    Description
query      string  Search text
limit      number  Max results (default 10)

List Memories

GET /v1/memories
Parameter  Type    Description
type       string  Optional filter: snippet or skill
category   string  Optional category filter
limit      number  Max results (default 20)
offset     number  Pagination offset

Delete Memory

DELETE /v1/memories/:id
Soft delete — can be recovered.

Artifacts

Artifacts are AI-generated outputs from agent runs — insights, metrics, alerts, and tables.

List Artifacts

GET /v1/artifacts
Parameter  Type    Description
type       string  Optional: insight, chart, metric, alert, table, suggestion
limit      number  Max results (default 20)
offset     number  Pagination offset

Get Artifact

GET /v1/artifacts/:id

Automations (Heartbeats)

Heartbeats are recurring agents that analyze your data on a schedule.

List Heartbeats

GET /v1/heartbeats
Parameter  Type    Description
limit      number  Max results (default 20)
offset     number  Pagination offset

Create Heartbeat

POST /v1/heartbeats
Parameter            Type      Description
name                 string    Name
instructions         string    What to do on each run
interval_minutes     number    Run frequency (min 5)
description          string    Optional description
timezone             string    Optional (default UTC)
active_window_start  string    Optional. e.g. 09:00
active_window_end    string    Optional. e.g. 18:00
active_days          number[]  Optional. 0=Sun..6=Sat
dataset_ids          string[]  Optional dataset UUIDs
effort               string    Optional: low, medium, high
curl -X POST "https://api.erdo.ai/v1/heartbeats" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Revenue monitor",
    "instructions": "Check daily revenue for anomalies and alert if down >10% vs last week",
    "interval_minutes": 60,
    "dataset_ids": ["dataset-uuid"],
    "active_window_start": "09:00",
    "active_window_end": "18:00",
    "timezone": "America/New_York"
  }'
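
A payload builder can catch the documented 5-minute minimum interval before the request goes out. This is a hypothetical client-side convenience, not part of any official SDK:

```python
def heartbeat_payload(name, instructions, interval_minutes, **optional):
    """Build the body for POST /v1/heartbeats, checking the 5-minute minimum."""
    if interval_minutes < 5:
        raise ValueError("interval_minutes must be at least 5")
    return {"name": name, "instructions": instructions,
            "interval_minutes": interval_minutes, **optional}

# Equivalent to the curl example above, restricted to weekdays.
body = heartbeat_payload(
    "Revenue monitor",
    "Check daily revenue for anomalies and alert if down >10% vs last week",
    60,
    timezone="America/New_York",
    active_days=[1, 2, 3, 4, 5],  # Mon-Fri (0=Sun..6=Sat)
)
```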

Run Heartbeat

POST /v1/heartbeats/:id/run
Trigger a heartbeat immediately, outside its schedule.

List Heartbeat Executions

GET /v1/heartbeats/:id/executions
Parameter  Type    Description
limit      number  Max results (default 10)

Rendering

Render Chart

POST /v1/render/chart
Render a data visualization. Supports bar, line, pie, histogram, and scatter charts. See MCP docs for full schema.

Render Table

POST /v1/render/table
Render a data table. See MCP docs for full schema.