MCP server (Beta)¶
The Tinybird remote MCP server enables AI agents to connect directly to your workspace to use endpoints as tools or execute queries. The Model Context Protocol gives AI assistants access to your analytics APIs, data sources, and endpoints through a standardized interface.
This integration is ideal when you want AI agents to autonomously query your data, call your analytics endpoints, or build data-driven applications without requiring manual API integration.
Our server only supports Streamable HTTP as the transport protocol. If your MCP client doesn't support it, you'll need to use the mcp-remote package as a bridge.
Before you start¶
Before connecting to the Tinybird MCP server, ensure:
- You have a Tinybird account and workspace
- You have an Auth Token with the appropriate scopes for your use case (details below)
- Your MCP client supports Streamable HTTP or can use the mcp-remote bridge
Authentication and Token Requirements¶
You'll need an Auth Token with the following scopes depending on which tools you want to access:
- Static tokens: use the admin token to access all available tools.
- JSON Web Tokens (JWTs): use them for granular access to endpoint tools.
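As a loose illustration of what a JWT scoped to a single endpoint might look like, here is a stdlib-only sketch that signs an HS256 token. The claim names (`workspace_id`, `scopes`, `PIPES:READ`) and the use of the admin token as signing key are assumptions based on Tinybird's Auth Tokens docs; check that page for the authoritative format:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_jwt(admin_token: str, workspace_id: str, pipe_name: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "workspace_id": workspace_id,
        "name": "mcp_agent_token",       # a label for this token
        "exp": int(time.time()) + 3600,  # expires in one hour
        # Grant read access to a single endpoint (assumed claim shape)
        "scopes": [{"type": "PIPES:READ", "resource": pipe_name}],
    }
    signing_input = (
        b64url(json.dumps(header).encode())
        + "."
        + b64url(json.dumps(payload).encode())
    )
    # HS256: HMAC-SHA256 over header.payload, keyed with the admin token
    sig = hmac.new(admin_token.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)


token = make_jwt("ADMIN_TOKEN", "workspace-uuid", "daily_active_users")
```

A token built this way would be passed in place of TB_TOKEN; the agent then only sees the tools that the JWT's scopes allow.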
Available tools¶
Depending on your token's scopes, the following tools are exposed:
Endpoint Tools¶
Every published API endpoint in your workspace becomes an individual tool with the endpoint's name. These tools:
- Accept the same parameters as your endpoint
- Return the same JSON response format as direct API calls
- Respect endpoint rate limits and authentication
- Support all parameter types (query parameters, filters, etc.)
Example: If you have an endpoint named daily_active_users, it becomes a tool named daily_active_users that accepts the same parameters.
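For comparison, a direct call to that endpoint hits the Endpoints API with the same parameters the tool accepts. A sketch of building such a request URL, assuming the standard /v0/pipes/{name}.json path and a hypothetical date_from parameter:

```python
from urllib.parse import urlencode

# Hypothetical endpoint parameter (`date_from`) for illustration only;
# the MCP tool `daily_active_users` would accept the same parameter.
base = "https://api.tinybird.co/v0/pipes/daily_active_users.json"
params = {"date_from": "2024-01-01", "token": "TB_TOKEN"}
url = f"{base}?{urlencode(params)}"
print(url)
```

Whether called via MCP or directly, the response body is identical JSON.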
Core Tools¶
execute_query¶
Execute arbitrary SQL queries against your workspace resources.
Parameters:
- sql (string, required): The SQL query to execute
Returns: Query results in JSON, in the same format as the Query API
Example response:
```json
{
  "meta": [
    { "name": "user_id", "type": "UInt64" },
    { "name": "event_count", "type": "UInt64" }
  ],
  "data": [
    { "user_id": 123, "event_count": 45 },
    { "user_id": 456, "event_count": 32 }
  ],
  "rows": 2,
  "rows_before_limit_at_least": 2,
  "statistics": {
    "elapsed": 0.123,
    "rows_read": 1000,
    "bytes_read": 8192
  }
}
```
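Client code typically pairs the meta column descriptions with the data rows. A minimal sketch that flattens a response of this shape into tuples:

```python
import json

# A trimmed-down copy of the example response above:
# `meta` carries column names/types, `data` carries one object per row.
raw = """{
  "meta": [{"name": "user_id", "type": "UInt64"},
           {"name": "event_count", "type": "UInt64"}],
  "data": [{"user_id": 123, "event_count": 45},
           {"user_id": 456, "event_count": 32}],
  "rows": 2
}"""

response = json.loads(raw)
columns = [c["name"] for c in response["meta"]]
rows = [tuple(row[c] for c in columns) for row in response["data"]]
print(columns)  # ['user_id', 'event_count']
print(rows)     # [(123, 45), (456, 32)]
```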
list_datasources¶
List all data sources in your workspace.
Parameters: None
Returns: Array of data source objects with names, schemas, and metadata
list_endpoints¶
List all published API endpoints in your workspace.
Parameters: None
Returns: Array of endpoint objects with names, parameters, and descriptions
Tool Availability by Token Scope¶
| Tool | JWT | Admin token |
|---|---|---|
| Endpoint tools | Only the endpoints granted by the token | ✅ |
| list_endpoints | ✅ | ✅ |
| list_datasources | ✅ | ✅ |
| execute_query | ✅ | ✅ |
Connect your MCP client¶
The MCP server is hosted at https://cloud.tinybird.co/mcp. The URL is the same regardless of the region you're using.
MCP Clients with Streamable HTTP Support¶
If your MCP client supports Streamable HTTP directly, use this configuration:
```json
{
  "mcpServers": {
    "tinybird": {
      "url": "https://cloud.tinybird.co/mcp?token=TB_TOKEN"
    }
  }
}
```
Replace TB_TOKEN with your actual Auth Token.
MCP Clients Requiring Bridge (Cursor, Windsurf, Claude Desktop)¶
For clients that don't support Streamable HTTP natively:
```json
{
  "mcpServers": {
    "tinybird": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://cloud.tinybird.co/mcp?token=TB_TOKEN"
      ]
    }
  }
}
```
Usage examples¶
Here are some examples of simple agents using Tinybird's MCP server.
Building an agent? Want to know which LLM generates the best SQL queries? Explore the results in the LLM Benchmark.
Basic Query Execution with Pydantic AI¶
```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio


async def main():
    try:
        server = MCPServerStdio(
            command="npx",
            args=[
                "-y",
                "mcp-remote",
                "https://cloud.tinybird.co/mcp?token=TB_TOKEN",
            ],
        )
        agent = Agent(
            name="analytics_agent",
            model="YOUR_FAVOURITE_MODEL",  # e.g. "claude-3-5-sonnet-20241022"
            mcp_servers=[server],
        )
        async with agent.run_mcp_servers():
            # Query user activity trends
            result = await agent.run(
                "Show me the top 5 user activities by count for the last 7 days"
            )
            print("Analysis:", result.output)

            # Use a specific endpoint
            result = await agent.run(
                "Call the daily_active_users endpoint for the past week"
            )
            print("DAU Data:", result.output)
    except Exception as e:
        print(f"Error: {e}")


if __name__ == "__main__":
    asyncio.run(main())
```
Advanced Analytics with OpenAI Agents SDK¶
```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp


async def analyze_user_behavior():
    try:
        server = MCPServerStreamableHttp(
            name="tinybird",
            params={
                "url": "https://cloud.tinybird.co/mcp?token=TB_TOKEN",
            },
        )
        async with server:
            agent = Agent(
                name="user_behavior_analyst",
                model="YOUR_FAVOURITE_MODEL",  # e.g. "gpt-4o"
                mcp_servers=[server],
                instructions="""
                You are a data analyst. When analyzing user behavior:
                1. First list available endpoints to understand what data is available
                2. Use appropriate endpoints or execute_query for analysis
                3. Provide insights with specific numbers and trends
                4. Suggest actionable recommendations
                """,
            )
            result = await Runner.run(
                agent,
                input="""
                Analyze our user engagement patterns:
                1. What are the current weekly active user trends?
                2. Which features are most popular?
                3. Are there any concerning drops in engagement?
                """,
            )
            print("Engagement Analysis:", result.final_output)
    except Exception as e:
        print(f"Analysis failed: {e}")


if __name__ == "__main__":
    asyncio.run(analyze_user_behavior())
```
Real-time Dashboard Assistant¶
```python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio


async def dashboard_assistant():
    server = MCPServerStdio(
        command="npx",
        args=["-y", "mcp-remote", "https://cloud.tinybird.co/mcp?token=TB_TOKEN"],
    )
    agent = Agent(
        name="dashboard_assistant",
        model="YOUR_FAVOURITE_MODEL",  # e.g. "claude-3-5-sonnet-20241022"
        mcp_servers=[server],
    )
    async with agent.run_mcp_servers():
        while True:
            try:
                user_question = input("\nAsk about your data (or 'quit' to exit): ")
                if user_question.lower() == "quit":
                    break
                result = await agent.run(user_question)
                print(f"Assistant: {result.output}")
            except KeyboardInterrupt:
                break
            except Exception as e:
                print(f"Error processing question: {e}")


if __name__ == "__main__":
    asyncio.run(dashboard_assistant())
```
When to use MCP vs Direct API Integration¶
Use MCP when:
- Building AI agents that need autonomous access to your analytics
- Creating conversational interfaces for data exploration
- Developing AI-powered dashboards or reports
- Prototyping data analysis workflows with AI assistance
Use direct API integration when:
- Building production applications with predictable query patterns
- Requiring maximum performance and minimal latency
- Needing fine-grained control over API calls and caching
See also¶
- Auth Tokens - Learn about creating and managing authentication tokens
- API Endpoints - Understand how to create and publish endpoints
- Query API - Direct API access for comparison with MCP usage
- Model Context Protocol Documentation - Official MCP specification and guides