Before configuring your MCP clients, it's important to understand the two components involved:
- llms.txt: A website index format that provides background information, guidance, and links to detailed documentation for LLMs. As described in the LangChain documentation, llms.txt is "an index file containing links with brief descriptions of the content"[1]. It acts as a structured gateway to a project's documentation.
- MCP (Model Context Protocol): A protocol enabling communication between AI agents and external tools, allowing LLMs to discover and use various capabilities. As stated by Anthropic, MCP is "an open protocol that standardizes how applications provide context to LLMs"[2].
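For context, an llms.txt file is itself a small Markdown document: a title, an optional one-line summary, and sections of annotated links. A minimal, illustrative example (the project name and URLs below are made up):

# Example Project

> One-line summary of what the project does.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): how to install and run a first example
- [API Reference](https://docs.example.com/api.md): reference for the public API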
The mcpdoc server, created by LangChain, "create[s] an open source MCP server to provide MCP host applications (e.g., Cursor, Windsurf, Claude Code/Desktop) with (1) a user-defined list of llms.txt files and (2) a simple fetch_docs tool [to] read URLs within any of the provided llms.txt files"[3]. This bridges llms.txt with MCP, giving developers full control over how documentation is accessed.
References:
1. LangChain llms-txt Overview (https://langchain-ai.github.io/langgraph/llms-txt-overview/)
2. Model Context Protocol Introduction (https://modelcontextprotocol.io/introduction)
3. LangChain mcpdoc GitHub Repository (https://github.com/langchain-ai/mcpdoc)
- Ensure you have Python installed
- Install uv (a fast Python package and project manager from Astral; it provides the uvx command used in the configurations below):
curl -LsSf https://astral.sh/uv/install.sh | sh
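# Optional check, assuming the install script finished without errors:
uv --version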
- Open Claude Desktop settings and go to the Developer tab
- Choose Edit Config to open the MCP configuration file (claude_desktop_config.json)
- Add the following JSON configuration:
{
  "mcpServers": {
    "documentation-server": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--urls",
        "PydanticAI:https://ai.pydantic.dev/llms.txt",
        "PydanticAI Full:https://ai.pydantic.dev/llms-full.txt",
        "MCP Protocol:https://modelcontextprotocol.io/llms.txt",
        "MCP Protocol Full:https://modelcontextprotocol.io/llms-full.txt",
        "Google A2A:https://raw.githubusercontent.com/google/A2A/refs/heads/main/llms.txt",
        "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt",
        "LangChain:https://python.langchain.com/llms.txt",
        "Vercel AI SDK:https://sdk.vercel.ai/llms.txt",
        "--transport",
        "stdio"
      ],
      "description": "Documentation server for multiple AI frameworks"
    }
  }
}
This configuration uses the mcpdoc server command with multiple URLs specified via the --urls parameter, as documented in the mcpdoc README: "You can specify multiple URLs by using the --urls parameter multiple times"[3].
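If you want to sanity-check the arguments before handing them to Claude Desktop, the args array maps directly onto a shell invocation. A shortened sketch with two of the sources above (argument mistakes should surface immediately, while a correct invocation simply waits for MCP messages on stdin):

uvx --from mcpdoc mcpdoc \
    --urls \
    "PydanticAI:https://ai.pydantic.dev/llms.txt" \
    "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" \
    --transport stdio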
The mcpdoc server implements strict domain access controls as documented in the LangChain repository:
"When you specify a remote llms.txt URL (e.g., https://langchain-ai.github.io/langgraph/llms.txt), mcpdoc automatically adds only that specific domain (langchain-ai.github.io) to the allowed domains list. This means the tool can only fetch documentation from URLs on that domain"[3].
For local files, the documentation states: "When using a local file, NO domains are automatically added to the allowed list. You MUST explicitly specify which domains to allow using the --allowed-domains parameter"[3].
Key security guidelines:
- For remote llms.txt files, only the domain of the specified URL is automatically allowed[3]
- For local files, you must explicitly specify allowed domains using --allowed-domains[3]
- To allow additional domains, add: --allowed-domains domain1.com domain2.com[3]
- Use --allowed-domains '*' to allow all domains (use with caution)[3]
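Suppose, for example, that a remote llms.txt links out to pages on a second domain (hypothetically, python.langchain.com). A command-line sketch of extending the allowed list beyond the automatically allowed langchain-ai.github.io domain:

uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" \
    --allowed-domains python.langchain.com \
    --transport stdio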
According to the mcpdoc documentation, to configure Cursor:
- "Open Cursor Settings and MCP tab. This will open the ~/.cursor/mcp.json file"[3]
- Add the server entry shown below to that file
The mcpdoc documentation provides a specific format for Cursor configuration[3]. Here's the recommended configuration:
{
  "mcpServers": {
    "ai-docs-server": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--urls",
        "PydanticAI:https://ai.pydantic.dev/llms.txt",
        "MCP Protocol:https://modelcontextprotocol.io/llms.txt",
        "Google A2A:https://raw.githubusercontent.com/google/A2A/refs/heads/main/llms.txt",
        "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt",
        "LangChain:https://python.langchain.com/llms.txt",
        "Vercel AI SDK:https://sdk.vercel.ai/llms.txt",
        "--transport",
        "stdio",
        "--allowed-domains",
        "ai.pydantic.dev",
        "modelcontextprotocol.io",
        "raw.githubusercontent.com",
        "langchain-ai.github.io",
        "python.langchain.com",
        "sdk.vercel.ai"
      ]
    }
  }
}
This configuration follows the example provided in the mcpdoc documentation, which shows how to specify multiple URLs and configure domain access[3].
For optimal usage, the mcpdoc documentation recommends updating your Cursor Global (User) Rules: "Best practice is to then update Cursor Global (User) rules"[3]:
<rules>
for ANY question about LangGraph, use the langgraph-docs-mcp server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
</rules>
This rule structure should be adapted for each framework in your configuration, as demonstrated in the mcpdoc documentation[3].
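For instance, an adaptation of the same rule for the PydanticAI source, using the ai-docs-server name from the Cursor configuration above (a sketch, not taken verbatim from the mcpdoc docs):

<rules>
for ANY question about PydanticAI, use the ai-docs-server server to help answer --
+ call list_doc_sources tool to get the available llms.txt file
+ call fetch_docs tool to read it
+ reflect on the urls in llms.txt
+ reflect on the input question
+ call fetch_docs on any urls relevant to the question
</rules>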
As documented in the mcpdoc repository: "You can specify multiple URLs by using the --urls parameter multiple times"[3]. The documentation provides this example[3]:
uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" "LangChain:https://python.langchain.com/llms.txt"
According to the LangChain documentation:
- llms.txt "is an index file containing links with brief descriptions of the content"[1]
- llms-full.txt "includes all the detailed content directly in a single file, eliminating the need for additional navigation"[1]
The documentation notes: "A key consideration when using llms-full.txt is its size. For extensive documentation, this file may become too large to fit into an LLM's context window"[1].
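If you are unsure whether a particular llms-full.txt will fit, one quick way to gauge its size before adding it to your configuration (a shell sketch using one of the sources configured above):

# Print the size of the file in bytes
curl -s https://ai.pydantic.dev/llms-full.txt | wc -c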
The mcpdoc documentation provides specific instructions for testing your configuration[3]:
uvx --from mcpdoc mcpdoc \
--urls "Test:https://your-test-url.com/llms.txt" \
--transport sse \
--port 8082 \
--host localhost
"Run MCP inspector and connect to the running server: npx @modelcontextprotocol/inspector"[3]
The documentation notes: "Here, you can test the tool calls"[3].
The mcpdoc documentation states: "You can specify documentation sources in three ways, and these can be combined"[3]:
# sample_config.yaml
- name: LangGraph Python
  llms_txt: https://langchain-ai.github.io/langgraph/llms.txt
As shown in the documentation: "This will load the LangGraph Python documentation from the sample_config.yaml file in this repo"[3].
Reference the YAML file in your MCP client configuration:
{
  "mcpServers": {
    "docs-from-file": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--yaml",
        "path/to/sample_config.yaml",
        "--transport",
        "stdio"
      ]
    }
  }
}
According to the documentation: "Both YAML and JSON configuration files should contain a list of documentation sources"[3].
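For reference, a JSON source file mirroring the YAML example above would look like this (field names assumed to match the YAML format, passed with a --json flag analogous to --yaml):

[
  {
    "name": "LangGraph Python",
    "llms_txt": "https://langchain-ai.github.io/langgraph/llms.txt"
  }
]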
For local files, always specify allowed domains:
{
  "mcpServers": {
    "local-docs": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--urls",
        "Local Docs:/path/to/local/llms.txt",
        "--allowed-domains",
        "docs.example.com",
        "api.example.com",
        "--transport",
        "stdio"
      ]
    }
  }
}
Once configured, you can access documentation through the MCP server using two main tools:
To see all available documentation sources, use the list_doc_sources tool. This will show you the configured llms.txt files and their names:
Tool: list_doc_sources
This will return a list like:
- PydanticAI: https://ai.pydantic.dev/llms.txt
- MCP Protocol: https://modelcontextprotocol.io/llms.txt
- LangGraph: https://langchain-ai.github.io/langgraph/llms.txt
- LangChain: https://python.langchain.com/llms.txt
To retrieve specific documentation, use the fetch_docs tool with the URL of the desired content:
Tool: fetch_docs
URL: https://ai.pydantic.dev/agents/index.md
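In practice the two tools are chained: list the sources, fetch the relevant llms.txt index, then fetch the specific page it links to. For example (the final URL is the PydanticAI page used above; the intermediate index URL comes from the configuration):

Tool: list_doc_sources
Tool: fetch_docs
URL: https://ai.pydantic.dev/llms.txt
Tool: fetch_docs
URL: https://ai.pydantic.dev/agents/index.md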
Here are powerful examples of how to leverage the MCP documentation server:
You: "Compare the agent architectures between PydanticAI and LangGraph. How do they handle system prompts, tools, and dependency injection?"
Claude's workflow:
- Uses the mcpdoc server to access documentation from both frameworks[1]
- Analyzes system prompt configurations in both frameworks
- Compares tool registration mechanisms
- Examines dependency injection patterns
- Creates a comparative analysis, reflecting the claim that MCP "gives developers the best way to provide contextual data to LLMs and AI assistants to solve problems"[4]
You: "Can you analyze how I could integrate a LangGraph agent with PydanticAI's MCP client for real-time data fetching?"
Claude's workflow:
- Accesses documentation via the mcpdoc server[3]
- Examines integration options, drawing on the description of MCP as a protocol that "enables seamless integration between LLM applications and external data sources and tools"[6]
- Identifies compatible interfaces and data formats
- Suggests bridge code for message passing
- Provides implementation examples using MCP's two-way communication feature[7]
You: "Which model providers are commonly supported across PydanticAI, LangChain, and LangGraph? How do their APIs differ?"
Claude's workflow:
- Retrieves model provider documentation from all three frameworks
- Maps common providers (OpenAI, Anthropic, Google, etc.)
- Analyzes API differences for model initialization
- Identifies compatibility layers and adapters
- Creates a unified interface recommendation
You: "Map out the dependency relationship between MCP servers, tools, and resources as implemented in PydanticAI versus the Model Context Protocol spec."
Claude's workflow:
- Fetches MCP specification documentation
- Analyzes PydanticAI's MCP implementation
- Creates a hierarchical diagram of components
- Highlights implementation deviations
- Suggests standardization improvements
You: "I have a LangChain agent using OpenAI and vector stores. How can I migrate this to PydanticAI while maintaining similar functionality?"
Claude's workflow:
- Examines LangChain's agent patterns and vector store usage
- Analyzes PydanticAI's equivalent features
- Maps concepts between frameworks
- Provides step-by-step migration guide
- Highlights potential pitfalls and solutions
You: "Compare how streaming responses are handled in LangGraph, PydanticAI, and the core MCP protocol. Which patterns should I adopt for real-time applications?"
Claude's workflow:
- Retrieves streaming documentation from all sources
- Analyzes implementation patterns
- Evaluates performance implications
- Recommends architecture based on use case
- Provides code examples for each pattern
You: "I need to create a tool that works across MCP-compatible frameworks. What's the common interface pattern?"
Claude's workflow:
- Fetches tool specifications from MCP protocol
- Analyzes tool implementations in PydanticAI and LangGraph
- Identifies common interfaces and parameters
- Suggests a universal tool template
- Provides validation and testing strategies
Based on MCP best practices and documentation:
- Use Cross-Reference Queries: As MCP enables "dynamic discovery" of capabilities[7], ask Claude to find references to specific concepts across multiple frameworks simultaneously
- Request Compatibility Matrices: MCP's standardized protocol allows for "comparing model context protocol server frameworks"[8], helping you get detailed compatibility information
- Explore Edge Cases: Ask about framework-specific limitations and workarounds using documentation insights from the llms.txt index[1]
- Version-Aware Analysis: Include version numbers in queries to ensure compatibility, as recommended for documentation access[3]
- Performance Comparisons: Request benchmarking data or performance considerations from framework documentation, utilizing MCP's ability to "connect with 100+ MCP servers"[4]
MCP enables sophisticated documentation access that goes beyond simple retrieval, as described in various sources:
- Dynamic Discovery: "MCP allows AI models to dynamically discover and interact with available tools without hard-coded knowledge of each integration"[7]
- Real-time Updates: Documentation changes are immediately available without reconfiguration[2]
- Contextual Understanding: "MCP gives developers the best way to provide contextual data to LLMs and AI assistants to solve problems"[4]
- Cross-Framework Analysis: Seamlessly compare features across different ecosystems[8]
- Integration Insights: MCP provides "standardization for connecting LLMs with external tools & data"[9], identifying patterns that may not be obvious from individual documentation
References:
1. LangChain llms-txt Overview (https://langchain-ai.github.io/langgraph/llms-txt-overview/)
2. Model Context Protocol Introduction (https://modelcontextprotocol.io/introduction)
3. LangChain mcpdoc GitHub Repository (https://github.com/langchain-ai/mcpdoc)
4. The Top 7 MCP-Supported AI Frameworks (https://getstream.io/blog/mcp-llms-agents/)
5. Google A2A Protocol High-Level Summary (https://raw.githubusercontent.com/google/A2A/refs/heads/main/llms.txt)
6. Model Context Protocol GitHub Organization (https://github.com/modelcontextprotocol)
7. MCP vs API Explained (https://norahsakal.com/blog/mcp-vs-api-model-context-protocol-explained/)
8. Comparing Model Context Protocol Server Frameworks (https://medium.com/@FrankGoortani/comparing-model-context-protocol-mcp-server-frameworks-03df586118fd)
9. What is MCP? (https://addyo.substack.com/p/mcp-what-it-is-and-why-it-matters)
According to the mcpdoc documentation[3]:
- Server not starting: Check that UV is properly installed ("Please see official uv docs for other ways to install uv"[3])
- Permission issues: Ensure the user has access to read the configuration files
- Domains blocked: Verify that required domains are included in --allowed-domains[3]
- Tool calls failing: "Confirm that the server is running in your Cursor Settings/MCP tab"[3]
- Tool availability: Ensure MCP is enabled in Claude Desktop settings
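When the server will not start, it can also help to run the same command directly in a terminal, outside the MCP client, so any error output is visible. A sketch reusing the testing invocation from earlier:

# Check that uv is on PATH
uv --version
# Start the server standalone; argument or network errors print to the terminal
uvx --from mcpdoc mcpdoc \
    --urls "LangGraph:https://langchain-ai.github.io/langgraph/llms.txt" \
    --transport sse \
    --port 8082 \
    --host localhost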
By following this guide, you'll have a robust MCP configuration that provides seamless access to documentation across multiple AI frameworks in both Claude Desktop and Cursor, along with practical knowledge of how to effectively use the documentation server in your daily workflow.