MCP Agents

mcp-use provides a complete agent framework for building AI applications that leverage MCP servers. The MCPAgent combines LLM integration, tool orchestration, and memory management to create powerful, autonomous AI agents.

Key Features

  • LLM Integration: Support for OpenAI, Anthropic, Google, and Groq
  • Automatic tool calling: Agents automatically select and execute appropriate tools
  • Multi-server orchestration: Connect to multiple MCP servers simultaneously
  • Structured output: Type-safe responses with Zod schema validation
  • Streaming support: Real-time streaming of agent responses
  • Memory management: Built-in conversation history and context management

Installation

The agent framework is included with the mcp-use package:
npm install mcp-use
You’ll also need a LangChain LLM provider package:
npm install @langchain/openai

Quick Start

Here’s a basic example of creating and running an agent:
import { MCPAgent, MCPClient } from "mcp-use";
import { ChatOpenAI } from "@langchain/openai";

const client = new MCPClient({
  mcpServers: {
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "./"],
    },
  },
});

const llm = new ChatOpenAI({ model: "gpt-5.1" });

const agent = new MCPAgent({ llm, client, maxSteps: 50 });

const response = await agent.run({
  prompt: "What is in the current folder?",
});

console.log(response);
await agent.close();
Architecture Overview

The MCPAgent framework is built around several core components that work together to enable intelligent, tool-using AI applications.

Agent Core

The MCPAgent class orchestrates all agent functionality, managing LLM interactions, tool execution, and response generation. Learn more: Agent Configuration →
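The constructor options used throughout this page can be gathered in one place. A minimal sketch (llm and client are the instances from the Quick Start; any option names beyond these are not covered here):

```typescript
// Options for `new MCPAgent({ ... })`, collected from the examples on this page.
const agentOptions = {
  maxSteps: 50,        // cap on tool-calling steps per run()
  memoryEnabled: true, // keep conversation history across run() calls
};

// Spread into the constructor alongside the LLM and client:
// const agent = new MCPAgent({ llm, client, ...agentOptions });
console.log(agentOptions.maxSteps);
```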

LLM Integration

Native support for multiple LLM providers with a unified interface. Each provider is automatically detected and configured based on the LLM instance you provide. Supported providers:
  • OpenAI (GPT-3.5, GPT-4, GPT-4 Turbo)
  • Anthropic (Claude 3 family)
  • Google (Gemini models)
  • Groq (Llama, Mixtral, and more)
Learn more: LLM Integration →
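Switching providers is just a matter of constructing a different LangChain chat model and passing it as `llm`. A hedged sketch (package names follow LangChain's `@langchain/*` convention; the Anthropic model name is illustrative):

```typescript
// Each provider ships as its own LangChain package exporting a chat model class.
const providerPackages: Record<string, string> = {
  openai: "@langchain/openai",       // ChatOpenAI
  anthropic: "@langchain/anthropic", // ChatAnthropic
  google: "@langchain/google-genai", // ChatGoogleGenerativeAI
  groq: "@langchain/groq",           // ChatGroq
};

// e.g. to use Anthropic instead of OpenAI in the Quick Start:
// import { ChatAnthropic } from "@langchain/anthropic";
// const agent = new MCPAgent({ llm: new ChatAnthropic({ model: "claude-3-5-sonnet-latest" }), client });
console.log(Object.keys(providerPackages).join(", "));
```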

Tool Orchestration

Agents automatically discover tools from connected MCP servers and intelligently select the right tools for each task. The framework handles:
  • Tool discovery and schema conversion
  • Automatic tool selection by the LLM
  • Parameter validation and type safety
  • Error handling and retries
  • Multi-step workflows
Learn more: Client Configuration →
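Multi-server orchestration, for instance, needs no extra code: list additional servers in the same `mcpServers` map from the Quick Start, and the agent discovers and selects tools across all of them. A sketch (the second server's package name is illustrative; any MCP server works):

```typescript
// Same shape as the Quick Start config, extended with a second server.
const mcpServers = {
  filesystem: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "./"],
  },
  memory: {
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-memory"],
  },
};

// const client = new MCPClient({ mcpServers });
// Tools from both servers appear in the agent's tool list automatically.
console.log(Object.keys(mcpServers)); // servers the agent will connect to
```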

Structured Output

Generate type-safe responses with Zod schema validation. The agent can return structured data instead of plain text, enabling programmatic use of agent results.
import { z } from 'zod'

const result = await agent.run({
  prompt: 'Analyze the file structure',
  schema: z.object({
    totalFiles: z.number(),
    fileTypes: z.array(z.string()),
    largestFile: z.string()
  })
})

// result is fully typed!
console.log(result.totalFiles)
Learn more: Structured Output →

Streaming

Stream agent responses in real-time for better user experience and immediate feedback:
// Step-by-step streaming
for await (const step of agent.stream({ prompt: 'Write a report' })) {
  console.log(`Tool: ${step.action.tool}`)
}

// Pretty formatted streaming (great for CLI tools)
for await (const _ of agent.prettyStreamEvents({ 
  prompt: 'Analyze the codebase',
  maxSteps: 20 
})) {
  // Automatic syntax highlighting and formatting
}

// Low-level event streaming
for await (const event of agent.streamEvents({ prompt: 'Generate content' })) {
  if (event.event === 'on_chat_model_stream') {
    process.stdout.write(event.data?.chunk?.content || '')
  }
}
Learn more: Streaming →

Memory Management

Control conversation history with built-in memory management. The agent automatically maintains context across multiple interactions:
const agent = new MCPAgent({
  llm,
  client,
  memoryEnabled: true  // Enable automatic memory management
})

await agent.run({ prompt: "Hello, my name is Alice" })
await agent.run({ prompt: "What's my name?" })  // Agent remembers Alice

// Clear history when needed
agent.clearConversationHistory()
Learn more: Memory Management →

Next Steps