Streaming enables real-time output from your agents, providing immediate feedback as the agent works through tasks. This creates responsive user experiences and allows you to show progress indicators, tool calls, and partial results as they happen.
The `MCPAgent` provides different streaming approaches for different use cases. The simplest is the `stream()` method, which yields each intermediate step as the agent completes it:
```typescript
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function stepStreamingExample() {
  // Setup agent
  const config = {
    mcpServers: {
      playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
    }
  }
  const client = new MCPClient(config)
  const llm = new ChatOpenAI({ model: 'gpt-4o' })
  const agent = new MCPAgent({ llm, client })

  // Stream the agent's steps
  console.log('🤖 Agent is working...')
  console.log('-'.repeat(50))

  for await (const step of agent.stream('Search for the latest Python news and summarize it')) {
    console.log(`\n🔧 Tool: ${step.action.tool}`)
    console.log(`📝 Input: ${JSON.stringify(step.action.toolInput)}`)
    // Note: step.observation is empty when the step is yielded.
    // Tool results are tracked internally but not included in the step object.
  }

  console.log('\n🎉 Done!')
  await client.closeAllSessions()
}

stepStreamingExample().catch(console.error)
```
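Because each yielded step carries the tool name and input, you can build lightweight progress reporting on top of `stream()`. The following is a minimal sketch (the `tallyToolUsage` helper name is hypothetical); it assumes an agent configured as in the example above and uses only the `step.action` fields shown there:

```typescript
import { MCPAgent } from 'mcp-use'

// Hypothetical helper: count how often each tool is invoked during a run.
// Relies only on the step.action fields shown in the example above.
async function tallyToolUsage(agent: MCPAgent, prompt: string): Promise<Record<string, number>> {
  const counts: Record<string, number> = {}
  for await (const step of agent.stream(prompt)) {
    counts[step.action.tool] = (counts[step.action.tool] ?? 0) + 1
  }
  return counts
}

// Usage: const counts = await tallyToolUsage(agent, 'Search for the latest Python news')
```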
For more granular control, use the `streamEvents()` method to get real-time output events:
```typescript
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'

async function basicStreamingExample() {
  // Setup agent
  const config = {
    mcpServers: {
      playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
    }
  }
  const client = new MCPClient(config)
  const llm = new ChatOpenAI({ model: 'gpt-4o' })
  const agent = new MCPAgent({ llm, client })

  // Stream the agent's response
  console.log('Agent is working...')

  for await (const event of agent.streamEvents('Search for the latest Python news and summarize it')) {
    if (event.event === 'on_chat_model_stream') {
      // Stream LLM output token by token.
      // Note: the chunk property may be 'text' or 'content' depending on the LLM provider.
      const text = event.data?.chunk?.text || event.data?.chunk?.content
      if (text) {
        process.stdout.write(text)
      }
    }
  }

  console.log('\n\nDone!')
  await client.closeAllSessions()
}

basicStreamingExample().catch(console.error)
```
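Beyond token chunks, `streamEvents()` surfaces other event types from the underlying LangChain run. The sketch below assumes LangChain's standard `on_tool_start` and `on_tool_end` event names (verify against your LangChain version) and reuses the `agent` from the example above to show tool activity alongside the streamed tokens:

```typescript
// Sketch: handle multiple event types from the same stream.
// Assumes LangChain's standard event names; adjust for your provider/version.
for await (const event of agent.streamEvents('Search for the latest Python news and summarize it')) {
  switch (event.event) {
    case 'on_tool_start':
      console.log(`\n🔧 Calling tool: ${event.name}`)
      break
    case 'on_tool_end':
      console.log(`✅ Tool finished: ${event.name}`)
      break
    case 'on_chat_model_stream': {
      const text = event.data?.chunk?.text || event.data?.chunk?.content
      if (text) process.stdout.write(text)
      break
    }
  }
}
```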
**Event Property:** The event's `chunk` payload may expose the token as `text` or `content` depending on the LangChain version and LLM provider. Check both properties for compatibility:
```typescript
const text = event.data?.chunk?.text || event.data?.chunk?.content
```
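If you stream in several places, you can centralize that check in a small helper. This is a sketch only; the `extractChunkText` name is hypothetical, and it assumes the chunk's `content` is a plain string:

```typescript
// Hypothetical helper: normalize streamed token text across LLM providers.
// Returns undefined when the event carries no streamable text.
function extractChunkText(event: { data?: { chunk?: { text?: string; content?: string } } }): string | undefined {
  return event.data?.chunk?.text || event.data?.chunk?.content || undefined
}
```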
The `prettyStreamEvents()` method provides beautifully formatted, syntax-highlighted output for streaming agent execution. It is well suited for CLI tools and development environments where you want human-readable output.
```typescript
import { MCPAgent } from 'mcp-use'

async function prettyStreamExample() {
  const agent = new MCPAgent({
    llm: 'openai/gpt-4o',
    mcpServers: {
      filesystem: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-filesystem', './'] }
    }
  })

  // Pretty streaming with automatic formatting and colors
  for await (const _ of agent.prettyStreamEvents({
    prompt: 'List all TypeScript files and count the total lines of code',
    maxSteps: 20
  })) {
    // Just iterate - formatting is handled automatically
  }

  await agent.close()
}

prettyStreamExample().catch(console.error)
```
The `prettyStreamEvents()` method automatically handles:

- **Syntax highlighting**: JSON and code are highlighted with colors
- **Tool call formatting**: clear display of tool names and inputs
- **Progress indicators**: visual feedback during execution
- **Token streaming**: real-time LLM output display
- **Error formatting**: clear error messages with context
**Terminal Compatibility:** The pretty output uses ANSI color codes and works best in modern terminals. In environments without color support, the output gracefully degrades to plain text.