Debug MCP Servers with Inspector

MCP Inspector is an open-source developer tool for testing and debugging MCP servers. Test tools, explore resources, manage prompts, and monitor connections with support for MCP-UI and OpenAI Apps SDK widgets.

import { createMCPServer } from 'mcp-use/server'

// The MCP Inspector is automatically mounted at /inspector
const server = createMCPServer('my-mcp-server', {
  version: '1.0.0',
  description: 'An MCP server with Apps SDK support for ChatGPT',
  baseUrl: process.env.MCP_URL,
})

// UI widgets are React components in the "resources/" folder.
// They are automatically registered as both MCP tools and resources.

// ...
// Add your tools, resources, and prompts here
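
As a rough sketch of what "add your tools here" can look like, the snippet below registers a single tool. It assumes mcp-use exposes a tool() registration method in the style of the official MCP TypeScript SDK (name, Zod parameter schema, handler); check the mcp-use server docs for the exact signature.

// Hypothetical sketch: assumes a tool(name, schema, handler) method in the
// style of the official MCP TypeScript SDK; the exact mcp-use API may differ.
import { z } from 'zod'

server.tool(
  'echo',                                  // tool name shown in the Inspector
  { message: z.string() },                 // parameter schema
  async ({ message }) => ({
    content: [{ type: 'text', text: `You said: ${message}` }],
  })
)
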
Get Started

Add an MCP server to test it

Add a remote MCP server to test it in the Inspector.


Three ways to use the Inspector

Use it online, run it locally with npx, or deploy to your own infrastructure. Same features everywhere.

Online

Open the hosted version in your browser. Connect to any MCP server with a URL. No installation needed.

inspector.mcp-use.com

npx

Run locally with a single command. Connect to local servers during development. Full feature parity.

npx @mcp-use/inspector

Self-Hosted

Deploy to your own infrastructure with Docker. Perfect for enterprise environments or air-gapped networks.

docker run mcpuse/inspector

Features

Everything you need to debug MCP servers

Full visibility into your MCP server during development. Test every primitive before connecting to agents.

Tool Testing

List, inspect, and execute MCP tools with custom parameters. See request and response data in real time.

Resource Browser

Browse and read resources exposed by your server. View metadata, test subscriptions, and inspect content.

Prompt Management

View and test prompt templates with different arguments. Validate prompts before connecting to agents.

RPC Logging

See every JSON-RPC message between client and server. Debug exactly what gets sent and what comes back.

Widget Preview

Test OpenAI Apps SDK and MCP-UI widgets directly in the inspector. See how they render in real chat clients.

Multi-Transport

Connect via stdio, SSE, or HTTP. Test your server regardless of how it communicates with clients.

Widget Testing

Test ChatGPT apps with UI widgets

The Inspector supports OpenAI Apps SDK and MCP-UI widgets. Test interactive components, verify tool calls, and debug widget state before deploying to production.

What you can test

  • Widget rendering: See how your widgets look in real chat clients
  • Tool calls from widgets: Test window.openai.callTool() interactions
  • Display modes: Switch between inline, PiP, and fullscreen
  • Widget state: Inspect and debug widgetState persistence
  • Dev mode: Hot reload for rapid widget development
  • Console proxy: See widget console.log output in the Inspector

window.openai API emulation

The Inspector fully emulates the window.openai API that widgets use in ChatGPT. Your components work identically in the Inspector and in production.

// Available in widgets
window.openai.toolInput        // arguments the tool was called with
window.openai.toolOutput       // structured result returned by the tool
window.openai.widgetState      // state persisted by the widget
window.openai.theme            // current theme (light or dark)
window.openai.callTool()       // call a tool on the MCP server from the widget
window.openai.setWidgetState() // persist widget state across renders
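
For illustration, here is a minimal widget sketch that uses this API. The component, the "refresh_data" tool name, and the state shape are made up, and window.openai is typed loosely only for brevity.

// Illustrative widget sketch; "refresh_data" and the state shape are invented.
// window.openai is injected by the host (ChatGPT or the Inspector).
import { useState } from 'react'

const openai = (window as any).openai

export default function StatusWidget() {
  // toolOutput is the result of the tool call that rendered this widget
  const output = openai.toolOutput
  const [state, setState] = useState(openai.widgetState ?? { lastRefresh: null })

  async function refresh() {
    // Call back into the MCP server from inside the widget
    await openai.callTool('refresh_data', { id: output?.id })
    const next = { lastRefresh: Date.now() }
    // Persist state so it survives re-renders and reloads
    await openai.setWidgetState(next)
    setState(next)
  }

  return <button onClick={refresh}>Refresh (last: {state.lastRefresh ?? 'never'})</button>
}
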
Debugging

See every JSON-RPC message

The RPC Logger shows all communication between the Inspector and your MCP server. Filter by method, search message content, and debug protocol issues.
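
For example, a tool execution appears in the log as a JSON-RPC 2.0 tools/call request paired with its response; the tool name and values below are illustrative.

// Illustrative tools/call exchange as logged by the Inspector (values made up)
const request = {
  jsonrpc: '2.0',
  id: 7,
  method: 'tools/call',
  params: { name: 'get_weather', arguments: { city: 'Berlin' } },
}

const response = {
  jsonrpc: '2.0',
  id: 7,
  result: { content: [{ type: 'text', text: 'Sunny, 22°C' }] },
}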

Real-time Logging

Messages appear as they happen. No refresh needed.

Filter and Search

Filter by method type. Search message content.

Message Details

Expand any message to see full request and response.

Chat

Chat with your MCP server

The Inspector includes a chat interface that connects your MCP tools to an LLM. Test how agents will interact with your server in a real conversation.

Bring Your Own Key (BYOK)

The Chat feature uses your own API key. Your key is stored locally in your browser and never sent to our servers. All requests go directly from your device to your LLM provider.

  • OpenAI, Anthropic, and other providers
  • Keys stored in browser localStorage
  • Direct API calls, no proxy
  • Test tool selection and execution

Join the MCP community

Get help, share your projects, and get inspired.

The community for developers building with MCP and mcp-use.