Build AI agents with MCP in Python

Create agents, connect to MCP servers, or build your own server. Works with OpenAI, Anthropic, Google, and any LangChain-supported LLM.

agent.py
import asyncio

from mcp_use import MCPClient, MCPAgent
from langchain_openai import ChatOpenAI

async def main():
    client = MCPClient(config_path="mcp.json")
    agent = MCPAgent(
        llm=ChatOpenAI(model="gpt-4o"),
        client=client,
        max_steps=30,
    )
    result = await agent.run("Find restaurants near me")
    print(result)

asyncio.run(main())
Trusted by: 6sense, Elastic, IBM, Innovacer, Intuit, NVIDIA, Oracle, Red Hat, Tavily, Verizon
Use Cases

Full-Stack MCP Framework

Build MCP Servers, MCP Clients, and AI agents connected to MCP servers.


MCP Agent

Build AI agents that interact with external tools and services. MCPAgent combines LLM integration, tool orchestration, and memory management to create autonomous AI agents.

  • LLM support: OpenAI, Anthropic, Google, Groq, and local models
  • Auto tool calling: Agents select and execute tools automatically
  • Multi-server: Connect to multiple MCP servers at once
  • Streaming: Real-time response streaming for better UX
  • Memory: Built-in conversation history and context management
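The mcp.json file passed to MCPClient maps server names to either a local launch command (STDIO) or a remote URL, following the same "mcpServers" shape shown in the client example below; a minimal sketch, with server names and the launch command chosen purely for illustration:

mcp.json

```json
{
  "mcpServers": {
    "local-example": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "remote-example": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```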

MCP Client

Connect to MCP servers programmatically. Supports all primitives: tools, resources, prompts, sampling, elicitation, and notifications.

  • Multi-server: Connect to multiple servers simultaneously
  • Transports: STDIO, HTTP, and SSE connections
  • Code Mode: Execute tools via code, reduce context by up to 98.7%
  • Auth: OAuth 2.0 with DCR, bearer tokens, custom providers
  • Direct calls: Call tools without LLM when you know what you need
client.py
import asyncio

from mcp_use import MCPClient

async def main():
    client = MCPClient(config={
        "mcpServers": {
            "github": {"url": "https://api.github.com/mcp"},
            "linear": {"url": "https://mcp.linear.app/mcp"},
            "custom": {"url": "http://localhost:3000/mcp"}
        }
    })
    await client.create_all_sessions()
    tools = await client.get_session("github").list_tools()
    result = await client.get_session("linear").call_tool(
        "create_issue",
        {"title": "Bug fix"}
    )

asyncio.run(main())
server.py
from mcp_use.server import MCPServer

server = MCPServer(
    name="my-server",
    version="1.0.0",
    instructions="A simple example server"
)

@server.tool()
def greet(name: str) -> str:
    """Greet someone by name."""
    return f"Hello, {name}!"

@server.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    server.run(transport="streamable-http")

MCP Server

Build MCP servers so agents can connect to your service. The framework extends the official MCP Python SDK with production features while staying 100% compatible.

  • Official SDK compatible: Works with Claude, ChatGPT, Cursor
  • Inspector: Built-in debugger at /inspector endpoint
  • OpenMCP: Auto-generated /openmcp.json for server discovery
  • MCP Logging: Structured logs with method names and session IDs
  • Transports: STDIO, Streamable HTTP, and SSE
Code Mode

More accurate AI agents through Code Mode

Let agents write code instead of making direct tool calls. This reduces context consumption by up to 98.7% and increases accuracy for complex workflows.

Based on research from Anthropic and Cloudflare.

code_mode.py
result = await client.execute_code("""
    # Search for relevant tools instead of loading all upfront
    search_result = await search_tools("github pull request")
    github_tools = search_result['results']

    # Only the 3 PR-related tools are loaded, not all 150+ tools
    pr = await github.get_pull_request(
        owner="facebook",
        repo="react",
        number=12345
    )
    return {"title": pr["title"]}
""")
  • Progressive Disclosure: Agents load only the tools they need, when they need them. Search for relevant tools instead of loading all 150+ upfront.
  • Context-Efficient Tool Results: Large datasets are processed in the execution environment before returning to the agent. Filter and summarize locally without consuming agent context.
  • More Powerful Control Flow: Use familiar code patterns for loops, conditionals, and error handling instead of chaining individual tool calls through the agent loop.
  • Privacy-Preserving Operations: Process sensitive data locally without it entering the model's context.
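The "context-efficient tool results" idea can be sketched in plain Python: a large tool result is filtered and summarized inside the execution environment, and only a small summary crosses back into the model's context. The data and helper below are illustrative, not part of the mcp-use API:

```python
# Pretend this is a large tool result fetched inside the sandbox:
# thousands of rows that would overflow the agent's context window.
rows = [{"id": i, "status": "open" if i % 3 else "closed"} for i in range(10_000)]

def summarize(rows):
    """Reduce a large result to a small summary before it reaches the agent."""
    open_rows = [r for r in rows if r["status"] == "open"]
    return {
        "total": len(rows),
        "open": len(open_rows),
        "sample_ids": [r["id"] for r in open_rows[:3]],
    }

# Only this dict, not the 10,000 rows, is returned to the agent.
summary = summarize(rows)
```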
Developer Experience

Built for developers

Clean API, great defaults, and tools that help you ship faster.

6 Lines of Code

Spin up a fully functional AI agent with tool-calling. No boilerplate, no complexity. Just connect and run.

Built-in Inspector

Every server launches with a visual inspector at /inspector. Test tools, view resources, debug JSON-RPC messages in real-time.

Tool Restrictions

Keep agents on task and limit the blast radius of mistakes by restricting dangerous tools like filesystem or network operations.
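Conceptually, tool restriction is a denylist (or allowlist) filter applied before the tool list is handed to the LLM; a minimal sketch in plain Python, where the tool names and the filter function are illustrative rather than the mcp-use API:

```python
# Tools the agent should never see (names are hypothetical examples).
DENYLIST = {"file_system", "shell_exec", "network_request"}

def restrict(tools, denylist=DENYLIST):
    """Drop dangerous tools before exposing the list to the agent."""
    return [t for t in tools if t not in denylist]

safe = restrict(["search", "file_system", "create_issue", "shell_exec"])
```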

Dynamic Server Selection

Agents automatically choose the right MCP server for each task from your configured pool.
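A simplified picture of dynamic server selection: score each configured server against the task description and route to the best match. In practice the choice is LLM-driven; the keyword heuristic below is only a toy illustration with made-up server names:

```python
# Hypothetical pool of configured servers and the topics they handle.
SERVERS = {
    "github": ["pull request", "issue", "repository", "commit"],
    "linear": ["ticket", "sprint", "project"],
    "browser": ["navigate", "click", "scrape"],
}

def pick_server(task, servers=SERVERS):
    """Pick the server whose keywords best match the task text."""
    scores = {
        name: sum(kw in task.lower() for kw in keywords)
        for name, keywords in servers.items()
    }
    return max(scores, key=scores.get)

best = pick_server("Open a pull request for the bugfix commit")
```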

MCP Logging

Structured logs with method names, session IDs, and execution timing. Easy to debug and monitor in production.
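Structured logs of this kind pair each entry with the JSON-RPC method, session, and timing; a minimal sketch using Python's standard logging module, with field names chosen for illustration rather than taken from mcp-use:

```python
import json
import logging
import time

logger = logging.getLogger("mcp")

def log_call(method, session_id, duration_ms):
    """Emit one structured log line per MCP method call."""
    entry = {
        "method": method,
        "session_id": session_id,
        "duration_ms": duration_ms,
        "ts": time.time(),
    }
    logger.info(json.dumps(entry))
    return entry

entry = log_call("tools/call", "sess-42", 12.5)
```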

OpenMCP Discovery

Auto-generated /openmcp.json endpoint lets clients discover your server's tools and capabilities.

Join the MCP community

Get help, share your projects, and get inspired.

The community for developers building with MCP and mcp-use.

Stats

Trusted by developers worldwide

mcp-use powers AI agents for thousands of developers

1.5M+

Agent executions powered by mcp-use

4,000+

Companies building with mcp-use

8,000+

GitHub stars

Begin your MCP journey the fastest way

Build with mcp-use SDKs, preview with our inspector, deploy with mcp-use Cloud.