Vercel AI SDK Integration
Connect Erdo’s MCP tools to any LLM (Claude, GPT, etc.) using Vercel AI SDK. Your users ask data questions in a chat UI, the LLM calls Erdo tools to analyze data, and you render rich charts and tables with ErdoToolResult.
```
Browser (useChat) → Your API Route → LLM → Erdo MCP Server → Tool Results
                                                                  ↓
Browser ← streamed response ← LLM + tool results ←────────────────┘
```
Installation
```bash
npm install @erdoai/ui ai @ai-sdk/react @ai-sdk/mcp @ai-sdk/anthropic
```
Replace @ai-sdk/anthropic with your preferred model provider (@ai-sdk/openai, @ai-sdk/google, etc.).
Quick Start
1. Server Routes
You need two server routes: one for the LLM chat stream, and one to create a scoped token for client-side data fetching (charts and tables).
```typescript
// app/api/chat/route.ts
import { createMCPClient } from '@ai-sdk/mcp';
import { streamText, convertToModelMessages, type ToolSet } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const mcpClient = await createMCPClient({
    transport: {
      type: 'http',
      url: `${process.env.ERDO_ENDPOINT || 'https://api.erdo.ai'}/mcp`,
      headers: {
        Authorization: `Bearer ${process.env.ERDO_AUTH_TOKEN}`,
      },
    },
  });

  const tools = await mcpClient.tools();
  const modelMessages = await convertToModelMessages(messages);

  const result = streamText({
    model: anthropic('claude-sonnet-4-5'),
    messages: modelMessages,
    tools: tools as ToolSet,
    system: 'You are a helpful data analyst. Use the Erdo tools to answer data questions.',
    onFinish: async () => {
      await mcpClient.close();
    },
  });

  return result.toUIMessageStreamResponse();
}
```
```typescript
// app/api/erdo-token/route.ts
import { ErdoClient } from '@erdoai/server';

const erdoClient = new ErdoClient({
  authToken: process.env.ERDO_AUTH_TOKEN,
});

export async function POST() {
  const { token } = await erdoClient.createToken({
    botKeys: [], // Leave empty to grant access to all bots
    expiresInSeconds: 3600,
  });
  return Response.json({ token });
}
```
Never expose your ERDO_AUTH_TOKEN to the browser. The MCP connection and token creation happen server-side only.
2. Render Results (Client)
Fetch a scoped token on mount and pass it to ErdoProvider. Charts and tables use this token to fetch dataset contents.
```tsx
// app/chat/page.tsx
'use client';
import { useEffect, useState } from 'react';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport, isToolUIPart } from 'ai';
import { ErdoProvider, ErdoToolResult, isErdoTool } from '@erdoai/ui';

export default function ChatPage() {
  const [token, setToken] = useState<string | null>(null);

  useEffect(() => {
    fetch('/api/erdo-token', { method: 'POST' })
      .then(res => res.json())
      .then(data => setToken(data.token));
  }, []);

  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });
  const isStreaming = status === 'streaming' || status === 'submitted';

  return (
    <ErdoProvider config={{
      baseUrl: process.env.NEXT_PUBLIC_ERDO_ENDPOINT || 'https://api.erdo.ai',
      token,
    }}>
      <div>
        {messages.map((message) => (
          <div key={message.id}>
            {message.parts.map((part, i) => {
              if (isToolUIPart(part) && isErdoTool(part)) {
                return <ErdoToolResult key={part.toolCallId} part={part} />;
              }
              if (part.type === 'text' && part.text) {
                return <p key={i}>{part.text}</p>;
              }
              return null;
            })}
          </div>
        ))}
        <form onSubmit={(e) => {
          e.preventDefault();
          const input = e.currentTarget.querySelector('input') as HTMLInputElement;
          if (input.value.trim()) {
            sendMessage({ text: input.value.trim() });
            input.value = '';
          }
        }}>
          <input placeholder="Ask a data question..." disabled={isStreaming} />
          <button type="submit" disabled={isStreaming}>Send</button>
        </form>
      </div>
    </ErdoProvider>
  );
}
```
3. Set Environment Variables
```bash
# .env.local
ERDO_AUTH_TOKEN=your-api-key                   # Server-side only
ERDO_ENDPOINT=https://api.erdo.ai              # Optional, defaults to https://api.erdo.ai
NEXT_PUBLIC_ERDO_ENDPOINT=https://api.erdo.ai  # Client-side, for ErdoProvider
```
That’s it. The LLM will automatically use Erdo tools like erdo_list_datasets, erdo_ask_data_question, and erdo_query_data based on user questions.
Bridge Pattern (Alternative)
If you’re already using @erdoai/server, you can use client.getTools() instead of creating an MCP client directly:
```typescript
// app/api/chat/route.ts
import { ErdoClient } from '@erdoai/server';
import { streamText, convertToModelMessages, type ToolSet } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const erdoClient = new ErdoClient({
  authToken: process.env.ERDO_AUTH_TOKEN,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  // getTools() connects to Erdo's MCP server and returns AI SDK-compatible tools
  const { tools, close } = await erdoClient.getTools();
  const modelMessages = await convertToModelMessages(messages);

  const result = streamText({
    model: anthropic('claude-sonnet-4-5'),
    messages: modelMessages,
    tools: tools as ToolSet,
    system: 'You are a helpful data analyst. Use the Erdo tools to answer data questions.',
    onFinish: close,
  });

  return result.toUIMessageStreamResponse();
}
```
getTools() requires @ai-sdk/mcp as a peer dependency. Install it with npm install @ai-sdk/mcp.
The client-side rendering code is the same — use isErdoTool + ErdoToolResult as shown above.
Component Reference
isErdoTool
Checks if an AI SDK message part is an Erdo tool call/result.
```typescript
import { isErdoTool } from '@erdoai/ui';

// Works with both static and dynamic tool parts:
// - Static: { type: 'tool-erdo_list_datasets', ... }
// - Dynamic (MCP): { type: 'dynamic-tool', toolName: 'erdo_list_datasets', ... }
isErdoTool(part); // → true for any erdo_* tool
```
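To make the check concrete, here is a hypothetical re-implementation based only on the two part shapes documented above — a sketch for illustration, not the actual `@erdoai/ui` source:

```typescript
// Minimal structural shape covering both static and dynamic tool parts.
type MessagePartLike = { type: string; toolName?: string };

// Sketch of the isErdoTool check, assuming the documented part shapes.
function looksLikeErdoTool(part: MessagePartLike): boolean {
  // Dynamic (MCP) parts carry the tool name in `toolName`
  if (part.type === 'dynamic-tool') {
    return (part.toolName ?? '').startsWith('erdo_');
  }
  // Static parts encode the tool name in the `type` itself
  return part.type.startsWith('tool-erdo_');
}
```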
ErdoToolResult
Renders an Erdo tool result with appropriate UI based on the tool type:
- UI tools (erdo_render_chart, erdo_render_table): Charts and tables via UIGenerationNodes
- Markdown tools (erdo_ask_data_question): Text answer rendered as markdown
- Data tools (erdo_list_datasets, erdo_get_dataset_schema, etc.): Formatted JSON
- Loading states: Spinner with tool name
- Errors: Error message display
```tsx
import { ErdoToolResult, type ErdoToolResultProps } from '@erdoai/ui';

<ErdoToolResult
  part={part}             // AI SDK tool part from message.parts
  className="my-4"        // Optional styling
  onSuggestionClick={fn}  // Optional callback for follow-up suggestions
/>
```
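The tool-type dispatch described above can be sketched as a simple lookup. The category names and sets below are ours, derived from the documented groupings; ErdoToolResult's internal logic may differ:

```typescript
// Illustrative sketch of how a renderer might be chosen per tool name.
type RenderKind = 'ui' | 'markdown' | 'data';

const UI_TOOLS = new Set(['erdo_render_chart', 'erdo_render_table']);
const MARKDOWN_TOOLS = new Set(['erdo_ask_data_question']);

function renderKindFor(toolName: string): RenderKind {
  if (UI_TOOLS.has(toolName)) return 'ui';             // charts and tables
  if (MARKDOWN_TOOLS.has(toolName)) return 'markdown'; // text answers
  return 'data';                                       // everything else: formatted JSON
}
```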
getErdoToolName
Extracts the Erdo tool name from an AI SDK message part, or returns undefined if it’s not an Erdo tool.
```typescript
import { getErdoToolName } from '@erdoai/ui';

getErdoToolName(part); // → 'erdo_list_datasets' | 'erdo_ask_data_question' | ...
```
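A hypothetical version of the extraction, assuming the same static and dynamic part shapes documented for isErdoTool — again a sketch, not the library's actual code:

```typescript
// Sketch: pull the tool name out of either part shape, or return undefined
// for anything that is not an erdo_* tool.
function extractErdoToolName(part: { type: string; toolName?: string }): string | undefined {
  const name =
    part.type === 'dynamic-tool'
      ? part.toolName                      // dynamic (MCP) parts
      : part.type.replace(/^tool-/, '');   // static parts: 'tool-erdo_x' → 'erdo_x'
  return name?.startsWith('erdo_') ? name : undefined;
}
```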
ErdoProvider
Required for chart and table rendering. Provides the data fetching context so charts can load dataset contents. Pass a scoped token created from your backend (see quick start above).
```tsx
import { ErdoProvider } from '@erdoai/ui';

<ErdoProvider
  config={{
    baseUrl: 'https://api.erdo.ai',
    token: scopedToken,
  }}
>
  {children}
</ErdoProvider>
```
If you don’t need chart/table rendering (only text and JSON results), ErdoProvider is optional.
Authentication
The API key (ERDO_AUTH_TOKEN) stays server-side for the MCP connection and token creation. The client gets a short-lived scoped token for data fetching — this is already set up in the quick start above.
For more advanced token scoping (restricting to specific datasets or users), see Scoped Tokens.
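Because the scoped token in the quick start expires after 3600 seconds, long-lived chat sessions should refresh it before expiry. A minimal client-side sketch — the `/api/erdo-token` endpoint matches the quick start, while the 60-second safety margin and helper names are our own choices:

```typescript
// Sketch: fetch a scoped token from the route defined in the quick start.
async function fetchScopedToken(endpoint = '/api/erdo-token'): Promise<string> {
  const res = await fetch(endpoint, { method: 'POST' });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { token } = (await res.json()) as { token: string };
  return token;
}

// Refresh slightly before the TTL elapses; the margin is arbitrary.
function refreshDelayMs(ttlSeconds = 3600, marginSeconds = 60): number {
  return Math.max(0, (ttlSeconds - marginSeconds) * 1000);
}
```

In the quick start's ChatPage, you could call fetchScopedToken inside the useEffect and schedule a re-fetch with setTimeout(..., refreshDelayMs()).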
When connected via MCP, the LLM can use these Erdo tools:
| Tool | Description |
|---|---|
| erdo_list_datasets | List datasets with name, type, and description |
| erdo_get_dataset_schema | Get column names, types, and statistics |
| erdo_gather_dataset_context | Get context for multiple datasets at once |
| erdo_search_data | Search across datasets with natural language |
| erdo_ask_data_question | AI analysis — returns a text answer |
| erdo_render_chart | Render a chart (bar, line, pie, histogram, scatter) |
| erdo_render_table | Render a data table |
| erdo_query_data | Natural language SQL queries |
| erdo_run_query | Execute raw SQL queries |
See MCP Server for full parameter details.
Example App
A complete working example is available in the SDK repository:
```bash
git clone https://github.com/erdoai/erdo-ts-sdk
cd erdo-ts-sdk/examples/nextjs-vercel-ai
```
It demonstrates three integration patterns:
- MCP Pattern (/mcp) — MCP tools via AI SDK with ErdoToolResult rendering
- Token Pattern (/) — Direct streaming with ephemeral tokens
- Proxy Pattern (/proxy) — Server-proxied SSE streaming