@anoochit
Created July 17, 2025 01:19
mcp workshop day2
---
marp: true
theme: default
paginate: true
class: lead invert
---

<style>
@import url('https://fonts.googleapis.com/css2?family=Bai+Jamjuree:ital,wght@0,200;0,300;0,400;0,500;0,600;1,200;1,300;1,400;1,500;1,600;1,700&family=Open+Sans:ital,wght@0,300..800;1,300..800&display=swap');
section {
  /*width: 1280px; height: 720px;*/
  font-size: 40px;
  font-family: "Bai Jamjuree";
}
h1 { font-size: 38pt; }
pre { font-family: "Open Sans"; }
blockquote { font-size: 32px; }
</style>

AI Agent Bootcamp - Model Context Protocol

Anuchit Chalothorn


Redeem - AI Agent for Developer

Code : 2VFIT5ZXIGWOPLWK1TI5HZFXNUEJ


What is MCP?

MCP (Model Context Protocol) is a standard for connecting AI applications and agents to various data sources and tools, including:

  • Local files
  • Databases
  • Content management systems
  • Popular services like GitHub, Google Maps, and Puppeteer.

The "USB-C" for AI

Think of MCP as the USB-C port of the AI world.

It's a universal adapter that lets AI seamlessly access data and tools without needing custom code for each connection.

Before MCP, developers wrote custom code for every new data source, which was time-consuming. MCP provides a reusable, standard solution.


Why is MCP Important?

For AI Users:

  • AI gets access to your important data (docs, meeting notes, code, calendar).
  • This leads to more accurate and personalized assistance.

For Developers:

  • Reduces the burden of building custom connections.
  • Write an MCP server once, and many apps can reuse it.
  • Fosters an open-source, collaborative ecosystem.

Example: A Smarter AI Assistant

You ask your AI:

"Summarize last week's team meeting and schedule a follow-up."


With MCP, the AI can:

  1. Access Google Drive to read the notes.
  2. Analyze who needs to follow up.
  3. Use your Calendar to schedule the meeting automatically.

All of this happens securely and efficiently.


MCP Architecture

MCP is designed on a client-server model.


Core Components

  • MCP Hosts: AI applications (e.g., Claude Desktop, IDEs).
  • MCP Clients: Intermediaries connecting hosts to servers.
  • MCP Servers: Small programs exposing data/tools via the protocol.
  • Data Sources: Local files, databases, or remote services (Google, Slack APIs).

How MCP Works

  1. Servers connect to data sources (Google Drive, GitHub).
  2. Clients act as brokers for AI applications.
  3. AI uses the connection to read data and take action.

The system is modular: add new servers without changing the core AI app.


Who Builds MCP Servers?

MCP Servers can be developed and maintained by:

  • Anthropic (the protocol's creator)
  • General developers
  • In-house enterprise teams
  • Software providers

The result is an open, interoperable ecosystem that keeps growing on its own.


Get Involved

  • Find available servers on the project's GitHub page.
  • Start building your own MCP Server.

Developing MCP Servers with Python


Chapter Overview

This chapter guides you through building a simple weather server with the Python SDK.

You will learn to:

  • Set up the development environment.
  • Create Tools that an LLM can use.
  • Test your server with the MCP Inspector and Claude Desktop.

Core MCP Server Capabilities

An MCP server has three main features:

  • Resources: File-like data the client can read (e.g., API responses).
  • Tools: Functions the LLM can call (with user approval).
  • Prompts: Pre-written templates to help users with specific tasks.

This chapter focuses on creating Tools.


Getting Started: Python SDK

Prerequisites:

  • Familiarity with Python and LLMs (like Claude).
  • Python 3.10+
  • mcp Python SDK 1.2.0+

Environment Setup:

  • We will use uv, a fast Python package manager, for setup.

Environment Setup with uv

Key uv Commands:

# Create a project and virtual environment
uv init weather
cd weather
uv venv
source .venv/bin/activate # (or .venv\Scripts\activate.ps1 on Windows)

# Add dependencies
uv add "mcp[cli]" httpx

# Create the server file
touch server.py

Building the Server: server.py

1. Imports and Initialization

import os
import httpx
from mcp.server.fastmcp import FastMCP

# Create a named server instance
mcp = FastMCP("My App")

FastMCP uses Python type hints and docstrings to automatically generate tool definitions for the LLM.


Building the Server: Helper Functions

2. API Helper Functions

Create functions to fetch and format data from an external API, like OpenWeatherMap.

async def get_weather_data(client: httpx.AsyncClient, city: str) -> dict:
    """Helper to fetch weather data from OWM API."""
    # ... implementation ...

def format_forecast(data: dict) -> str:
    """Formats the weather data into a readable string."""
    # ... implementation ...
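A possible implementation of these helpers, assuming the OpenWeatherMap current-weather endpoint and the OWM_API_KEY environment variable (both are assumptions of this sketch, not part of the SDK):

```python
import os

OWM_URL = "https://api.openweathermap.org/data/2.5/weather"  # assumed endpoint

async def get_weather_data(client, city: str) -> dict:
    """Fetch current weather for `city` from the OWM API (client: httpx.AsyncClient)."""
    resp = await client.get(
        OWM_URL,
        params={"q": city, "units": "metric", "appid": os.environ["OWM_API_KEY"]},
    )
    resp.raise_for_status()
    return resp.json()

def format_forecast(data: dict) -> str:
    """Formats the weather data into a readable string."""
    desc = data["weather"][0]["description"]
    temp = data["main"]["temp"]
    return f"{data['name']}: {desc}, {temp:.0f}°C"
```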

Defining a Tool

3. Implement the Tool with a Decorator

Use the @mcp.tool decorator. The function's signature and docstring define the tool for the LLM.

@mcp.tool(title="Weather Fetcher")
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        data = await get_weather_data(client, city)
        return format_forecast(data)

Adding More Tools

You can easily add multiple tools to the same server.

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool(title="BMI Calculator")
def calculate_bmi(weight_kg: float, height_cm: float) -> float:
    """Calculate BMI given weight in kg and height in cm"""
    return weight_kg / ((height_cm / 100) ** 2)

Running the Server

4. Start the Server

Add this to the end of server.py:

if __name__ == "__main__":
    mcp.run(transport='stdio')

Run from your terminal:

# Using uv
uv run server.py

# Or using the MCP CLI for development
mcp dev server.py

Testing with MCP Inspector

The mcp dev command automatically starts the Inspector.

mcp dev server.py
  • Access the Inspector in your browser at http://127.0.0.1:6274.
  • It provides a UI to view, test, and debug your server's tools.

Using the Inspector

  1. Set required environment variables (e.g., OWM_API_KEY).
  2. Connect to the running server.
  3. Navigate to the Tools tab to see a list of your defined tools.
  4. Select a tool, fill in its parameters, and click Run Tool.

Testing with Claude Desktop

Method 1: Edit claude_desktop_config.json

  • Add your server's command and environment details to the mcpServers object.

Method 2: Use the MCP CLI (Easier for local dev)

  • The -v flag sets an environment variable.
  • The -f flag loads variables from a .env file.

# Pass a variable directly
mcp install server.py -v OWM_API_KEY=YOUR_API_KEY

# Or load from a file
mcp install server.py -f .env

Interacting with Claude

  • Ask Claude to list its available tools: "list your tools".
  • Ask a question that requires a tool: "What's the weather in London?"
  • Claude will ask for your permission before running the tool.

Summary

  • You built your first MCP server using the Python SDK.
  • You learned to:
    • Set up a project with uv.
    • Define tools with the @mcp.tool decorator.
    • Test locally with the MCP Inspector.
    • Integrate and test with Claude Desktop.

This provides a solid foundation for building powerful AI applications that can interact with external data and services.


Building MCP Clients with Python


Chapter Overview

This chapter covers the other side: building an MCP Client.

You will learn to:

  • Set up a client project with the Python SDK.
  • Connect to an MCP server.
  • Create a conversation loop that sends queries to an LLM (Claude).
  • Handle tool-use requests from the LLM.
  • Display the final results to the user.

Project Setup

Prerequisites:

  • Python and uv installed.

Setup Steps:

# 1. Create project and activate environment
uv init mcp-client
cd mcp-client
uv venv
source .venv/bin/activate

# 2. Install dependencies
uv add mcp anthropic python-dotenv

# 3. Create your client file
touch client.py

API Key Setup

Your client needs to authenticate with the LLM provider.

  1. Get an API key from the Anthropic Console.
  2. Create a .env file to store it securely:
    touch .env

  3. Add your key to the .env file:
    ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY

  4. Add .env to your .gitignore file.

Client Architecture: The Core Class

The client is structured around a class that manages the session and conversation.

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack() # Manages resources
        self.anthropic = Anthropic() # Anthropic API client

    async def connect_to_server(self, server_script_path: str):
        # ... connection logic ...

    async def process_query(self, query: str) -> str:
        # ... conversation and tool-use logic ...

    async def chat_loop(self):
        # ... user interaction loop ...

Step 1: Connecting to the Server

The client starts a server process (e.g., your weather server) and establishes a connection.

async def connect_to_server(self, server_script_path: str):
    # Determine command ('python' or 'node')
    command = "python" if server_script_path.endswith('.py') else "node"
    server_params = StdioServerParameters(command=command, args=[server_script_path])

    # Establish a stdio connection
    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    
    # Initialize the MCP session
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
    await self.session.initialize()

    # List available tools from the server
    response = await self.session.list_tools()
    print("Connected with tools:", [tool.name for tool in response.tools])

Step 2: Processing the User's Query

This is the main logic loop for a single user query.

  1. Get the list of available tools from the server.
  2. Send the user's message and the tool list to the LLM.
  3. Check the LLM's response:
    • If it's a text answer, display it.
    • If it's a tool_use request, execute the next step.

The Tool-Use Loop

When the LLM wants to use a tool:

  1. The client receives the tool name and arguments.
  2. It calls the tool on the MCP server: session.call_tool(tool_name, tool_args).
  3. The tool's result is sent back to the LLM for it to formulate a final, natural-language answer.
  4. The final text response is shown to the user.
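Putting the query flow and the tool-use loop together, process_query might look like the following sketch. The model name, message shapes, and single round of tool use are simplifying assumptions, not canonical SDK code:

```python
# Sketch of MCPClient.process_query; self.session and self.anthropic come
# from the class skeleton shown earlier. Model name is an assumption.
async def process_query(self, query: str) -> str:
    messages = [{"role": "user", "content": query}]

    # 1. Advertise the server's tools to the LLM
    tool_list = await self.session.list_tools()
    tools = [{"name": t.name, "description": t.description,
              "input_schema": t.inputSchema} for t in tool_list.tools]

    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-latest", max_tokens=1000,
        messages=messages, tools=tools)

    final_text = []
    for block in response.content:
        if block.type == "text":
            final_text.append(block.text)
        elif block.type == "tool_use":
            # 2. Run the tool on the MCP server
            result = await self.session.call_tool(block.name, block.input)
            # 3. Feed the result back so the LLM can phrase a final answer
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": [
                {"type": "tool_result", "tool_use_id": block.id,
                 "content": result.content}]})
            follow_up = self.anthropic.messages.create(
                model="claude-3-5-sonnet-latest", max_tokens=1000,
                messages=messages, tools=tools)
            final_text.append(follow_up.content[0].text)
    return "\n".join(final_text)
```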

Step 3: Creating the Chat Interface

A simple while loop allows for a continuous conversation.

async def chat_loop(self):
    print("\nMCP Client Started! Type 'quit' to exit.")
    while True:
        query = input("\nQuery: ").strip()
        if query.lower() == 'quit':
            break
        response = await self.process_query(query)
        print("\n" + response)
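One piece the class skeleton leaves out is an entry point. A minimal sketch, with names following the skeleton above:

```python
# Hypothetical entry point for client.py; MCPClient is the class sketched
# above, and sys.argv[1] is the path to a server script.
import asyncio
import sys

async def main() -> None:
    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.exit_stack.aclose()  # close the session and server process

# In the real script, run it under a guard:
#   if __name__ == "__main__":
#       asyncio.run(main())
```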

Running Your Client

The client takes the path to a server script as a command-line argument.

# Run the client and connect it to your Python weather server
uv run client.py ../weather/server.py

Example Interaction:

Connected to server with tools: ['fetch_weather', 'add', 'calculate_bmi']

MCP Client Started!
Type your queries or 'quit' to exit.

Query: How's the weather in Bangkok
[Calling tool fetch_weather with args {'city': 'Bangkok'}]
The current weather in Bangkok is overcast with a temperature of around 31°C.

Behind the Scenes

  1. Client to Server: Get list of available tools.
  2. Client to LLM: Send user query + list of tools.
  3. LLM to Client: "Use the fetch_weather tool with city='Bangkok'."
  4. Client to Server: "Run fetch_weather(city='Bangkok')."
  5. Server to Client: "Result: { 'description': 'overcast clouds', 'temp': 31.09 }"
  6. Client to LLM: "Here is the tool result. Now give me a final answer."
  7. LLM to Client: "The weather in Bangkok is overcast with a temperature of 31°C."
  8. Client to User: Display the final answer.

Summary

  • You learned to build a complete MCP client with the Python SDK.
  • This involved:
    • Connecting to a server process.
    • Sending queries and tool definitions to an LLM.
    • Handling tool-use requests and responses.
    • Managing the conversation loop with the user.

Now you can build applications that fully leverage the power of tool-using AI.


MCP Architecture Deep Dive


Architectural Overview

MCP uses a Client-Server architecture with three core components:

  • Host: The main application embedding an LLM (e.g., Claude Desktop, an IDE).
  • Client: A module within the Host that communicates with the Server.
  • Server: Provides context, tools, and prompts for the session.

Key Components: Protocol Layer

This layer defines the communication rules between Client and Server.

Responsibilities:

  • Formatting messages.
  • Matching requests to responses.
  • Managing the message flow according to the standard.

It's the "grammar" of the conversation.
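The request/response matching the protocol layer performs can be sketched in a few lines of Python (a toy illustration, not SDK code):

```python
import itertools

class PendingRequests:
    """Toy sketch of the protocol layer's bookkeeping: every outgoing
    request gets a fresh id, and an incoming result or error is matched
    back to the request that produced it by that id."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}  # id -> method name

    def make_request(self, method: str, params: dict) -> dict:
        req_id = next(self._ids)
        self._pending[req_id] = method
        return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

    def resolve(self, message: dict):
        """Match a result/error to its request; notifications carry no id."""
        if "id" not in message:
            return None
        return self._pending.pop(message["id"], None)
```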


Key Components: Transport Layer

This layer handles how data is sent. MCP supports multiple transport types.

  • Stdio: For low-latency, local communication. Easy to set up.
  • HTTP Streaming: Uses HTTP POST for requests and Server-Sent Events (SSE) for responses. Ideal for network communication.

MCP uses the JSON-RPC 2.0 standard for message structure.


JSON-RPC Message Types

  • Request: A message asking the server to do something. Expects a response.
  • Result: The successful response to a Request.
  • Error: The error response to a Request.
  • Notification: A message sent from one side to the other that does not expect a response.

Connection Lifecycle

MCP has a defined sequence for establishing and closing connections.


Connection Steps

  1. Initialize: Client sends an initialize request.
  2. Acknowledge: Server replies with its capabilities (e.g., version).
  3. Confirm: Client sends an initialized notification.
  4. Communicate: The session is now active. Client and Server can exchange messages asynchronously.
  5. Shutdown: A close() method is used to terminate the connection gracefully.
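Steps 1-3 correspond to three JSON-RPC messages. The payloads below are illustrative; the exact protocolVersion and capabilities are filled in by the SDK:

```python
# The three handshake messages as JSON-RPC payloads (field values assumed).
initialize_request = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1"},
    },
}
initialize_result = {
    "jsonrpc": "2.0", "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}},
        "serverInfo": {"name": "weather", "version": "0.1"},
    },
}
# The confirmation is a notification, so it has no "id" and gets no reply.
initialized_notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}
```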

Error Handling

MCP uses standard JSON-RPC error codes.

{
  "ParseError": -32700,
  "InvalidRequest": -32600,
  "MethodNotFound": -32601,
  "InvalidParams": -32602,
  "InternalError": -32603
}

Developers can define custom error codes in the range -32000 to -32099 for application-specific issues.
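As a sketch, the standard codes and a hypothetical custom code might be used like this to build an error response:

```python
# Standard JSON-RPC 2.0 error codes, plus one hypothetical custom code
# from the application-specific range (-32000 to -32099).
PARSE_ERROR = -32700
INVALID_REQUEST = -32600
METHOD_NOT_FOUND = -32601
INVALID_PARAMS = -32602
INTERNAL_ERROR = -32603
WEATHER_API_DOWN = -32001  # custom, for illustration only

def make_error(request_id, code: int, message: str) -> dict:
    """Build a JSON-RPC 2.0 error response for the request with this id."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {"code": code, "message": message},
    }
```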


Best Practices: Transport

| Environment   | Recommended Transport | Notes                                       |
|---------------|-----------------------|---------------------------------------------|
| Local Machine | stdio                 | High speed, simple setup.                   |
| Network       | HTTP Streaming        | Requires security considerations like TLS.  |

Best Practices: Security

  • Use TLS/SSL for all remote (network) connections.
  • Validate all inputs on the server side to prevent injection or misuse.
  • Limit permissions to ensure clients can only access necessary resources.
  • Avoid exposing sensitive data in error messages.

Best Practices: Development

  • Input Validation: Always validate data from the client. Use a clear schema.
  • Error Handling: Handle all potential errors and return meaningful messages.
  • Progress Updates: For long-running tasks, send progress notifications to prevent the user from thinking the app is frozen.
  • Logging: Implement separate logs for protocol messages and application logic to simplify debugging.
  • Health Checks: Implement a health check endpoint to monitor server status.

Understanding MCP Tools


What are Tools?

In MCP, a Tool is an action.

It's a function exposed by a server that allows an LLM to interact directly with external systems:

  • Calling an API
  • Managing files
  • Running system commands

Unlike resources (data), tools are about doing.


The Tool Workflow

  1. Discovery (tools/list): The client asks the server, "What tools do you have?"
  2. Execution (tools/call): The LLM decides to use a tool and tells the client, "Run github_create_issue with this title and description."
  3. Result: The server runs the function and returns the outcome (success or error) to the client.

Defining a Tool

A tool's definition tells the LLM everything it needs to know.

{
  "name": "analyze_csv",
  "description": "Analyzes a CSV file and provides a summary.",
  "inputSchema": {
    "type": "object",
    "properties": { "fileUri": { "type": "string" } },
    "required": ["fileUri"]
  },
  "annotations": {
    "title": "Analyze CSV File",
    "readOnlyHint": true
  }
}

Key Parts of a Definition

  • name: A unique identifier.
  • description: A clear explanation for the LLM. This is crucial for the model to choose the right tool.
  • inputSchema: A JSON Schema defining the required inputs. This provides structure and enables validation.
  • annotations: Hints for the client/user (e.g., is it safe to run?).

Examples of Tool Types

| Category        | Example Tool        | Purpose                                    |
|-----------------|---------------------|--------------------------------------------|
| Data Processing | analyze_csv         | Transform or summarize data.               |
| System Mgmt     | execute_shell       | Interact with the OS.                      |
| External APIs   | github_create_issue | Connect to third-party services.           |
| Communication   | send_email          | Send notifications.                        |
| Internal Data   | fetch_sales_data    | Interact with internal systems (ERP, CRM). |

Best Practices for Tool Design

  1. Be Specific: Create focused tools (convert_to_pdf) instead of one giant, all-purpose tool.
  2. Describe Clearly: Write a great description. The LLM depends on it.
  3. Define a Strict Schema: Use inputSchema to enforce correct inputs.
  4. Use Annotations: Help the user understand what a tool does (e.g., readOnlyHint).
  5. Handle Naming Conflicts: Use prefixes if connecting to multiple servers with similar tools (e.g., github_create_issue, jira_create_issue).

Security is Paramount

  1. Input Validation: Always validate inputs against the schema. Sanitize everything. Never trust input from the LLM directly, especially for shell commands.
  2. Access Control: Implement permissions. Log all tool usage. Use rate limiting.
  3. Error Handling: Don't leak sensitive internal details in error messages.
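The input-validation advice can be sketched as a minimal checker; a production server should use a full JSON Schema validator instead:

```python
def validate_input(schema: dict, args: dict) -> list:
    """Minimal check of tool arguments against an inputSchema-style dict.
    Illustration only: covers required properties and basic type checks."""
    errors = []
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required property: {name}")
    type_map = {"string": str, "number": (int, float), "integer": int,
                "boolean": bool, "object": dict, "array": list}
    for name, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if name in args and expected and not isinstance(args[name], expected):
            errors.append(f"wrong type for {name}")
    return errors
```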

Dynamic Tools

MCP servers can notify clients if the list of available tools changes.

  • A server can send a notifications/tools/list_changed message.
  • This allows tools to be added, removed, or updated at runtime without restarting the client.

Testing Tools

A comprehensive testing strategy is essential.

  • Functional Tests: Does the tool work correctly with valid inputs?
  • Integration Tests: Does it connect properly to the real API or system?
  • Security Tests: Can the inputs be abused (e.g., prompt injection)?
  • Performance Tests: Is it fast enough? Does it leak memory?
  • Failure Tests: How does it handle API outages, timeouts, or invalid data?

MCP Transport Layer


The Role of Transport

The Transport Layer is the foundation for communication between an MCP client and server.

  • It's responsible for sending and receiving messages.
  • It works with the JSON-RPC 2.0 message format, which is the "language" of MCP.

Think of it as the postal service that delivers the letters (messages).


MCP Message Format: JSON-RPC 2.0

A lightweight, widely-used standard for remote procedure calls.

Request: Asks the server to do something.

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {} }

Response: The successful result of a request.

{ "jsonrpc": "2.0", "id": 1, "result": { ... } }

Notification: A one-way message; no response expected.

{ "jsonrpc": "2.0", "method": "tools/list_changed" }

Transport Type 1: stdio

Standard Input/Output

  • Communication happens via the stdin and stdout streams of a process.
  • Ideal for local communication where the client and server are on the same machine.

When to use stdio:

  • Building command-line interface (CLI) tools.
  • Internal system scripts.
  • Editor extensions (like for VS Code).

Transport Type 2: Streamable HTTP

The primary transport for web-based and networked systems.

How it works:

  • Client-to-Server: All requests are standard HTTP POST.
  • Server-to-Client: Responses can be a single JSON object or a stream of Server-Sent Events (SSE). This allows the server to push updates to the client.

Streamable HTTP: Session Management

  • A session begins at initialization: the server returns an Mcp-Session-Id HTTP header.
  • The client must include this Mcp-Session-Id in all subsequent requests to maintain the session context.
  • The session is terminated with an HTTP DELETE request carrying the same session ID.

This makes the communication stateful.


Transport Best Practices

| Area           | Recommendation                                          |
|----------------|---------------------------------------------------------|
| Connection     | Manage the connection lifecycle explicitly (open, close). |
| Error Handling | Implement robust error detection and handling.          |
| Security       | Use TLS, validate headers, and restrict access.         |
| Data Transfer  | Validate message size, integrity, and content.          |
| Debugging      | Log transport-level messages and connection states.     |

Backward Compatibility

To support older clients or servers:

  • Servers can expose both the new Streamable HTTP endpoint and the legacy SSE-only endpoint.
  • Clients can try connecting to the new endpoint first and, if that fails, fall back to the legacy transport.

This ensures a smoother transition as the protocol evolves.
