Build AI agents with MCP in Python
Create agents, connect to MCP servers, or build your own server. Works with OpenAI, Anthropic, Google, and any LangChain-supported LLM.
Full-Stack MCP Framework
Build MCP Servers, MCP Clients, and AI agents connected to MCP servers.
MCP Agent
Build AI agents that interact with external tools and services. MCPAgent combines LLM integration, tool orchestration, and memory management to create autonomous AI agents.
- LLM support: OpenAI, Anthropic, Google, Groq, and local models
- Auto tool calling: Agents select and execute tools automatically
- Multi-server: Connect to multiple MCP servers at once
- Streaming: Real-time response streaming for better UX
- Memory: Built-in conversation history and context management
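A minimal agent looks roughly like this — a sketch assuming mcp-use's `MCPClient.from_dict` and `MCPAgent(llm=..., client=...)` entry points and a LangChain chat model; the Playwright server and the query are illustrative:

```python
import asyncio

from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Illustrative config: one MCP server launched over STDIO.
    client = MCPClient.from_dict({
        "mcpServers": {
            "playwright": {"command": "npx", "args": ["@playwright/mcp@latest"]}
        }
    })
    # The agent pairs any LangChain-supported LLM with the MCP client.
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client)
    result = await agent.run("Find the best restaurant in San Francisco")
    print(result)

asyncio.run(main())
```

The agent discovers the server's tools at connect time and calls them automatically while answering the query.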
MCP Client
Connect to MCP servers programmatically. Supports all primitives: tools, resources, prompts, sampling, elicitation, and notifications.
- Multi-server: Connect to multiple servers simultaneously
- Transports: STDIO, HTTP, and SSE connections
- Code Mode: Execute tools via code, reduce context by up to 98.7%
- Auth: OAuth 2.0 with DCR, bearer tokens, custom providers
- Direct calls: Call tools without LLM when you know what you need
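A direct call needs no LLM at all — a sketch assuming mcp-use's `create_session` / `close_all_sessions` client API; the server, tool name, and arguments are illustrative:

```python
import asyncio

from mcp_use import MCPClient

async def main():
    client = MCPClient.from_dict({
        "mcpServers": {
            "everything": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-everything"],
            }
        }
    })
    try:
        # Starts the server process and completes the MCP handshake.
        session = await client.create_session("everything")
        # Call a tool directly, no model in the loop.
        result = await session.connector.call_tool("echo", {"message": "hello"})
        print(result)
    finally:
        await client.close_all_sessions()

asyncio.run(main())
```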
MCP Server
Build MCP servers so agents can connect to your service. The framework extends the official MCP Python SDK with production features while staying 100% compatible.
- Official SDK compatible: Works with Claude, ChatGPT, Cursor
- Inspector: Built-in debugger at /inspector endpoint
- OpenMCP: Auto-generated /openmcp.json for server discovery
- MCP Logging: Structured logs with method names and session IDs
- Transports: STDIO, Streamable HTTP, and SSE
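Because the server layer stays compatible with the official MCP Python SDK, defining a tool follows the familiar FastMCP decorator pattern — a sketch using the official SDK's API; the server name and tool are illustrative:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    # Also supports "streamable-http" and "sse" transports.
    mcp.run(transport="stdio")
```

Any MCP client — Claude, ChatGPT, Cursor, or an mcp-use agent — can then connect and call `add`.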
More accurate AI agents through Code Mode
Let agents write code instead of making direct tool calls. This reduces context consumption by up to 98.7% and increases accuracy for complex workflows.
Based on research from Anthropic and Cloudflare.
- Progressive Disclosure: Agents load only the tools they need, when they need them. Search for relevant tools instead of loading all 150+ upfront.
- Context-Efficient Tool Results: Large datasets are processed in the execution environment before returning to the agent. Filter and summarize locally without consuming agent context.
- More Powerful Control Flow: Use familiar code patterns for loops, conditionals, and error handling instead of chaining individual tool calls through the agent loop.
- Privacy-Preserving Operations: Process sensitive data locally without it entering the model's context.
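The context-efficiency idea in miniature: instead of a raw tool result flowing back into the model, agent-written code filters and aggregates it in the execution environment and returns only a small summary. This is an illustrative plain-Python sketch, not the mcp-use API; `fetch_orders` is a hypothetical stand-in for a tool call:

```python
def fetch_orders():
    # Stand-in for a tool call that would return thousands of rows.
    return [
        {"id": i, "total": i * 3.5, "status": "open" if i % 4 else "closed"}
        for i in range(10_000)
    ]

# Agent-generated code: filter and aggregate locally ...
orders = fetch_orders()
open_orders = [o for o in orders if o["status"] == "open"]
summary = {
    "open_count": len(open_orders),
    "open_revenue": round(sum(o["total"] for o in open_orders), 2),
}

# ... and only this small summary re-enters the model's context,
# not the 10,000-row dataset.
print(summary)
```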
Built for developers
Clean API, great defaults, and tools that help you ship faster.
6 Lines of Code
Spin up a fully functional AI agent with tool-calling. No boilerplate, no complexity. Just connect and run.
Built-in Inspector
Every server launches with a visual inspector at /inspector. Test tools, view resources, debug JSON-RPC messages in real-time.
Tool Restrictions
Curb hallucinated or unsafe tool calls by restricting dangerous tools such as filesystem or network operations.
Dynamic Server Selection
Agents automatically choose the right MCP server for each task from your configured pool.
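With several servers in one pool, the agent can pick the right one per task — a sketch assuming mcp-use's `use_server_manager` flag on `MCPAgent`; server names and commands are illustrative:

```python
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

# Two servers in one configured pool.
client = MCPClient.from_dict({
    "mcpServers": {
        "browser": {"command": "npx", "args": ["@playwright/mcp@latest"]},
        "files": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        },
    }
})

# use_server_manager lets the agent choose a server per task
# instead of connecting to every server upfront.
agent = MCPAgent(
    llm=ChatOpenAI(model="gpt-4o"),
    client=client,
    use_server_manager=True,
)
```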
MCP Logging
Structured logs with method names, session IDs, and execution timing. Easy to debug and monitor in production.
OpenMCP Discovery
Auto-generated /openmcp.json endpoint lets clients discover your server's tools and capabilities.
Join the MCP community
Get help, share your projects, and get inspired.
The community for developers building with MCP and mcp-use.
Trusted by developers worldwide
mcp-use powers AI agents for thousands of developers