MCP 101: Principles of MCP Development with mcp-use


Luigi Pederzani


Co-founder

August 9, 2025 · 7 min read

The Model Context Protocol (MCP) has rapidly become the standard for connecting AI models to external tools and data sources. Introduced by Anthropic in late 2024, MCP addresses a fundamental limitation: AI models' isolation from live data and services.

In this comprehensive guide, we'll explore the principles of MCP development and show you how mcp-use provides the infrastructure to build production-grade AI agents at scale.

Understanding MCP Architecture

MCP is built on a clean client-server architecture that separates the AI agent (client) from external tools and data (server). This architecture enables models to autonomously maintain and switch context between multiple tools and data sources.

The Three Core Components

1. MCP Host

The AI-powered application where tasks are executed using the MCP client—this is your agentic product or custom AI agent. Early implementations include tools like Claude Desktop and Cursor.

2. MCP Client

Operating within the host, the client acts as a bridge to MCP servers. It manages communication by sending requests and querying available services over a secure transport layer.

3. MCP Server

The access point for the MCP client to carry out operations, exposing tools, resources, and prompts to the AI model.

// Example MCP architecture
┌─────────────────────────────────────────────┐
│             MCP Host (Your App)             │
│  ┌───────────────────────────────────────┐  │
│  │         MCP Client                    │  │
│  └───────────────┬───────────────────────┘  │
└──────────────────┼──────────────────────────┘

    ┌──────────────┼──────────────┐
    │              │              │
┌───▼────┐   ┌────▼───┐   ┌─────▼────┐
│ GitHub │   │ Slack  │   │ Database │
│ Server │   │ Server │   │  Server  │
└────────┘   └────────┘   └──────────┘

MCP Server Primitives

MCP servers implement three core primitives that define how models interact with external capabilities:

| Primitive | Control                | Description                                  | Example Use                  |
| --------- | ---------------------- | -------------------------------------------- | ---------------------------- |
| Prompts   | User-controlled        | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled | Contextual data managed by the client        | File contents, API responses |
| Tools     | Model-controlled       | Functions exposed to the LLM                 | API calls, data updates      |

Tools: Actions and Operations

Tools are operations that the server executes on behalf of the model—invoking external web services, running computations, or controlling systems.

server.addTool({
  name: 'create_github_issue',
  description: 'Create a new GitHub issue',
  parameters: {
    title: { type: 'string' },
    body: { type: 'string' },
    labels: { type: 'array', items: { type: 'string' } }
  },
  handler: async ({ title, body, labels }) => {
    return await github.issues.create({ title, body, labels });
  }
});

Resources: Data Access

Resources expose read-only data to LLMs. Each resource has a unique URI, display name, optional metadata, and content.

// Example resource
{
  uri: "file:///project/README.md",
  name: "Project README",
  mimeType: "text/markdown",
  content: "# Welcome to the project..."
}

Prompts: Templated Instructions

Prompts are reusable templates managed by the server to help format or enrich the model's input, ensuring consistency across interactions.
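As a minimal sketch (not tied to any particular MCP SDK; the field and function names below are hypothetical), a prompt can be modeled as a named template that the server fills in with user-supplied arguments before it reaches the model:

```python
# Illustrative sketch of a server-side prompt template.
# The dictionary shape and render_prompt helper are hypothetical,
# not part of any specific MCP SDK.
CODE_REVIEW_PROMPT = {
    "name": "code_review",
    "description": "Review a diff for bugs and style issues",
    "template": "Review the following {language} diff and list any bugs:\n\n{diff}",
}

def render_prompt(prompt: dict, arguments: dict) -> str:
    """Fill the prompt template with the user's arguments."""
    return prompt["template"].format(**arguments)

# A user invoking the prompt (e.g. via a slash command) supplies the arguments.
message = render_prompt(
    CODE_REVIEW_PROMPT,
    {"language": "Python", "diff": "- x = 1\n+ x = 2"},
)
print(message)
```

Because the template lives on the server, every client that invokes `code_review` gets the same instruction structure, which is what keeps interactions consistent.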

Building Agents with mcp-use SDK

The mcp-use SDK is the easiest way to interact with MCP servers in custom agents. It supports any MCP server and works with both Python and TypeScript.

Key Features

  • Great DX: Clean integration with no double async loops or complex session management
  • Multi-Server Support: Use multiple MCP servers simultaneously in a single agent
  • Tool Restrictions: Reduce LLM hallucinations by restricting potentially dangerous tools
  • LLM Agnostic: Works with any LLM, including local models
  • Dynamic Server Selection: Agents automatically choose the most appropriate server for each task
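To illustrate the multi-server feature, here is what a config with more than one server can look like (the second server's package name, env var, and placeholder token are illustrative assumptions, not prescriptive):

```python
# Hypothetical multi-server configuration: one agent, two servers.
# The agent can then route each task to the appropriate server.
config = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
        },
        "github": {
            "command": "npx",
            "args": ["@modelcontextprotocol/server-github"],
            # Placeholder credential; supply your own token at runtime.
            "env": {"GITHUB_TOKEN": "<your-token>"},
        },
    }
}
```

A config shaped like this is passed to the client the same way as a single-server one, so adding a server is a config change rather than a code change.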

Creating Your First Agent

Here's how to spin up an MCP-enabled agent — the core setup is just a few lines of code:

import asyncio
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Configure MCP servers
    config = {
        "mcpServers": {
            "playwright": {
                "command": "npx",
                "args": ["@playwright/mcp@latest"],
                "env": {"DISPLAY": ":1"}
            }
        }
    }
    
    # Create client and agent
    client = MCPClient.from_dict(config)
    llm = ChatOpenAI(model="gpt-4o")
    agent = MCPAgent(llm=llm, client=client, max_steps=30)
    
    # Run your query
    result = await agent.run(
        "Find the best restaurant in San Francisco"
    )
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())

📺 Watch the Demo: See how an agent combines browsing and Linear capabilities to create tickets with the best HN posts: YouTube Demo

The Challenge: MCP Development at Scale

As teams adopt MCP for production AI agents, they face critical infrastructure challenges:

1. Correctly Building and Deploying MCP Servers

Auto-generating MCP servers from OpenAPI specs often leads to hallucination-prone agent behavior. Agents misinterpret capabilities or fail on error states. Without rigorous testing and proper versioning, these integrations break silently.

2. Fragmented Configuration Management

MCP configs are scattered across GitHub repos, Slack messages, and internal codebases. There's no single source of truth, making updates and credential rotation painful.

3. Authentication and Access Control

Every MCP server has its own authentication mechanism, making security enforcement difficult. What's needed is fine-grained access control where agents are treated as untrusted users with scoped privileges.

4. Tool Overload

Many MCP servers expose dozens of tools, overwhelming LLMs. Performance degrades rapidly beyond 10-20 tools per context window, leading to hallucinations and incorrect tool usage.

5. Environment and Governance

Without profiles, namespaces, or policy layers, managing different environments (prod, staging, dev) becomes a nightmare. There's no organizational oversight for who owns servers or who approves new tools.

6. Observability Gap

Most MCP implementations offer little observability. You can't trace what the agent requested, what tools returned, or why decisions were made.

7. Local Execution Limits

Despite MCP's vision of scalable infrastructure, most agents still run locally in closed-source apps or developer machines, making them hard to monitor and standardize across teams.

The mcp-use Platform Solution

mcp-use provides a vertical solution for MCP development with three key offerings:

  1. mcp-use SDK: Easily integrate MCP-enabled AI agents into your products
  2. mcp-use Cloud Platform: Central control plane for managing configs, metrics, and access control
  3. mcp-use Server Hosting: Managed/self-hosted servers with sandboxed stdio execution

📺 Platform Demo: Watch the full platform walkthrough

Core Platform Features

Centralized Server Configuration

Manage all MCP server configurations in one place. No more hardcoded configs or manual sharing. Import configs directly via SDK or API.

// Pull centralized config
const agent = new Agent({
  profileId: 'prod-engineering',
  // All server configs loaded automatically
});

Automated Deployment

Use the mcp-use GitHub app to automatically build, deploy, and update MCP servers from your repository. The platform handles versioning, canary releases, and rollbacks.

Profile-Based Access Control

Assign granular permissions through role-based profiles. Agents are treated as untrusted by default with scoped privileges. Every tool invocation is logged for compliance.

// Profile with restricted access
{
  "profile": "junior-dev-agent",
  "permissions": {
    "github": {
      "tools": ["read_issues", "create_comment"],
      "blocked": ["delete_repository", "force_push"]
    }
  }
}

Tool Restrictions

Limit or disable specific tools per server. The SDK automatically hides blocked tools from the model, keeping the tool list lean and reducing hallucinations.
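Conceptually, the filtering amounts to dropping blocked tools before the tool list is handed to the model. The sketch below shows the idea in plain Python; it is an illustration of the concept, not the SDK's actual implementation:

```python
def visible_tools(all_tools: list[str], blocked: set[str]) -> list[str]:
    """Return only the tools the model is allowed to see.

    Blocked tools are removed entirely, so the model cannot
    attempt to call them or be distracted by them.
    """
    return [tool for tool in all_tools if tool not in blocked]

# Example: the profile from the previous section blocks destructive tools.
tools = ["read_issues", "create_comment", "delete_repository", "force_push"]
blocked = {"delete_repository", "force_push"}

print(visible_tools(tools, blocked))
```

Hiding tools (rather than merely rejecting calls to them) keeps the context window lean, which is what reduces hallucinated or incorrect tool usage.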

Full Observability

Capture detailed metrics, logs, and traces for every MCP interaction. Track exactly what agents requested, what tools returned, and how decisions were made.

Agent Execution Runtime

Run agents in sandboxed, isolated environments with policy enforcement and auditability. Available as managed cloud hosting or self-hosted infrastructure.

Real-World Impact

With over 150,000 SDK downloads and 7,000+ GitHub stars, mcp-use is trusted by development teams at:

  • NASA
  • NVIDIA
  • SAP
  • Hundreds of innovative startups

Organizations use mcp-use to build both customer-facing agentic products and internal automation tools, reducing development time from weeks to hours.

Getting Started

Ready to build production-grade AI agents with MCP?

  1. Explore the SDK: GitHub Repository
  2. Read the Docs: docs.mcp-use.com
  3. Try the Platform: Sign up for free
  4. Join the Community: Discord

Conclusion

The Model Context Protocol represents the future of AI agent development, but building production-grade agents at scale requires proper infrastructure. The mcp-use platform delivers a comprehensive solution that handles configuration management, deployment automation, access control, and observability.

By providing both a powerful SDK and centralized control plane, mcp-use enables teams to focus on building innovative AI products instead of managing infrastructure complexity.

Whether you're building your first AI agent or scaling to hundreds of agents across your organization, mcp-use provides the tools and infrastructure you need to succeed.

Get Started

What will you build with MCP?

Start building AI agents with MCP servers today. Connect to any tool, automate any workflow, and deploy in minutes.