When Anthropic first unveiled their Model Context Protocol (MCP), I was immediately intrigued, but I’ll confess, a little lost in the details. The documentation was thorough, as expected, yet my brain works best when I can actually get my hands dirty and build something. That’s why I started SimpleMCP – a minimal, educational project designed to strip away the complexities and show exactly how MCP ticks under the hood.
In this guide, we’re going to embark on a similar journey. You’ll build your very own MCP server from scratch, creating a “Story Manager” tool that lets an AI assistant like Claude list and read children’s stories. By the time we’re done, you won’t just know what MCP is; you’ll understand the back-and-forth between an AI and its external tools, ready to build your own integrations.
What makes this different from other guides? We’ll craft two distinct transport layers (STDIO and HTTP/SSE), and you’ll see firsthand how to separate your core business logic from your protocol layers. This isn’t about production-ready complexity; it’s about grasping the core concepts in a runnable, follow-along fashion. Ready to dive in?
Unpacking MCP: The AI’s Universal Adapter
Think of the Model Context Protocol as the USB-C of AI applications. Before USB-C, every gadget had its own quirky connector. Before MCP, integrating an AI tool often meant custom-building a unique communication bridge for every single service. It was a fragmented mess, to say the least.
MCP arrives to standardize all that. It provides a common language and framework for AI assistants, like Claude, to talk to external tools, databases, APIs, and services. It’s a game-changer for building robust, extensible AI applications. The core idea is simple: Claude needs to do something beyond its internal capabilities, so it asks an external tool (your MCP server) to do it.
Here’s the simplified flow: Claude identifies a need and sends a request via the MCP protocol (which uses JSON-RPC 2.0 under the hood) to your MCP server. Your server translates that request into an action performed by your underlying business logic and sends the result back to Claude. It’s an elegant handshake.
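To make that handshake concrete, here’s roughly what a single tool call looks like on the wire, using MCP’s `tools/call` method. (This is a hand-written illustration; the `story_id` value and the story text are invented.)
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_story",
    "arguments": { "story_id": "squirrel-and-owl" }
  }
}
```
And the server’s reply:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "Once upon a time, a squirrel met an owl…" }
    ]
  }
}
```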
Core MCP Concepts You’ll Encounter
While MCP has several facets, we’ll focus on the foundational ones: Tools and Transports. Tools are essentially the functions Claude can call – like `list_stories` or `get_story` in our example. Transports are how the data actually flows between Claude and your server, be it through standard input/output (STDIO) or over a network via HTTP/SSE.
For our hands-on experience, we’re building a “Story Manager” that will handle a collection of children’s stories. Claude will be able to ask our server to list all available stories and retrieve a specific one by its ID. Sounds straightforward, right? That simplicity is key to truly grasping the underlying mechanics of MCP.
The Cornerstone of Good Design: Separating Logic from Protocol
Before we even think about talking to an AI, let’s build the brain of our operation: the story management system. This is a crucial architectural decision in my SimpleMCP project, and one I highly recommend for any integration work: keep your core business logic completely separate from your protocol layers. What does that mean in practice?
It means our `story_manager.py` file, which handles all the story-related operations, has no idea it’s talking to an AI or what communication protocol is being used. It simply provides clean functions that take inputs and return outputs, managing stories from a JSON file. This isolation makes your code much easier to test, debug, and, crucially, extend. You can swap out your AI interface or add new ways for users to interact without ever touching your core story-handling code.
Building Our Story Manager
Our `story_manager.py` module contains functions like `load_stories`, `list_stories`, and `get_story`. The `load_stories` function, for instance, reads our story data from a JSON file, and we cache it for performance, ensuring we don’t hit the disk every time Claude asks for a story. The `list_stories` function returns just the IDs and titles, while `get_story` fetches the full text of a specific tale.
This module is designed with clear APIs, explicit error handling (like raising a `KeyError` if a story isn’t found), and absolutely no MCP-specific code. It’s pure Python, focusing solely on the domain of managing stories. This transport-agnostic approach is the pattern I advocate, as it paves the way for a more robust and flexible system down the line.
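As a rough sketch, the read side of `story_manager.py` might look like this. (The `stories.json` filename and the `id`/`title` field names are my assumptions here, not necessarily what SimpleMCP uses.)
```python
import json
from functools import lru_cache

STORIES_FILE = "stories.json"  # assumed filename

@lru_cache(maxsize=1)
def load_stories() -> dict:
    """Read the story collection from disk once and cache the result."""
    with open(STORIES_FILE, encoding="utf-8") as f:
        return {story["id"]: story for story in json.load(f)}

def list_stories() -> list[dict]:
    """Return only the ID and title of each story."""
    return [{"id": s["id"], "title": s["title"]} for s in load_stories().values()]

def get_story(story_id: str) -> dict:
    """Return the full story, raising KeyError if the ID is unknown."""
    stories = load_stories()
    if story_id not in stories:
        raise KeyError(f"No story with ID {story_id!r}")
    return stories[story_id]
```
Notice there’s no mention of MCP anywhere: these are plain functions you could just as easily call from a CLI or a unit test.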
Bringing the Protocol to Life: STDIO and HTTP/SSE Transports
With our robust story manager in place, it’s time to build the “ears and mouth” for our AI tool. This is where the MCP server comes in, acting as the intermediary between Claude and our `story_manager.py`. We’ll explore two primary transport layers: STDIO and HTTP/SSE.
The STDIO Transport: Local & Lean
STDIO (Standard Input/Output) is the simplest transport, making it perfect for local desktop applications like Claude Desktop. It communicates directly via a process’s standard input and output streams. Our `mcp_server.py` implements this. Here’s how it works:
- Tool Discovery: When Claude connects, it first asks our server, “What can you do?” Our server, through the `@app.list_tools()` decorator, responds with a list of available tools, like `list_stories` and `get_story`, along with their descriptions and an `inputSchema` (a JSON Schema) detailing what parameters each tool expects.
- Tool Execution: When Claude decides to use a tool (e.g., “Tell me the story about the squirrel and owl”), it calls the `call_tool` function on our server, passing the tool’s name and any necessary arguments (like `story_id`). Our server then routes this call to the appropriate function in our `story_manager.py`, processes the result, and sends it back to Claude.
The `stdio_server()` context manager from the MCP SDK handles all the low-level stdin/stdout communication, allowing us to focus on defining our tools and routing logic. It’s an efficient, low-latency way to integrate with local AI clients.
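Putting those pieces together, a minimal `mcp_server.py` might look like the sketch below, built on the low-level `Server` class from the official MCP Python SDK. (The `story_manager` import refers to the module sketched earlier.)
```python
import asyncio
import json

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

import story_manager  # our transport-agnostic business logic

app = Server("story-manager")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    """Advertise our tools, each with a JSON Schema describing its inputs."""
    return [
        types.Tool(
            name="list_stories",
            description="List the ID and title of every available story",
            inputSchema={"type": "object", "properties": {}},
        ),
        types.Tool(
            name="get_story",
            description="Fetch the full text of one story by its ID",
            inputSchema={
                "type": "object",
                "properties": {"story_id": {"type": "string"}},
                "required": ["story_id"],
            },
        ),
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    """Route a tool call to the matching story_manager function."""
    if name == "list_stories":
        result = story_manager.list_stories()
    elif name == "get_story":
        result = story_manager.get_story(arguments["story_id"])
    else:
        raise ValueError(f"Unknown tool: {name}")
    return [types.TextContent(type="text", text=json.dumps(result))]

async def main():
    # stdio_server() wires the protocol to stdin/stdout for us.
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```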
The HTTP/SSE Transport: Network-Enabled Flexibility
While STDIO is great for local interactions, what if you want your MCP server accessible over a network, perhaps as part of a web application or a cloud service? That’s where HTTP with Server-Sent Events (SSE) shines. For this, I used `FastMCP`, a higher-level wrapper that simplifies the HTTP/SSE implementation significantly.
In `mcp_http_server.py`, you’ll notice a cleaner, decorator-based approach. We use `@mcp.tool()` to define our `list_stories` and `get_story` functions. `FastMCP` is smart enough to infer the input schemas directly from Python type hints, cutting down on boilerplate. Crucially, both the STDIO and HTTP/SSE servers leverage the *exact same* underlying `story_manager` functions. This reinforces the power of our initial architectural decision: separate logic, flexible protocols.
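Here’s a sketch of what that looks like with `FastMCP`. (Again, `story_manager` is the same module from earlier; the docstrings become the tool descriptions Claude sees.)
```python
from mcp.server.fastmcp import FastMCP

import story_manager  # the same transport-agnostic module as before

mcp = FastMCP("story-manager")

@mcp.tool()
def list_stories() -> list[dict]:
    """List the ID and title of every available story."""
    return story_manager.list_stories()

@mcp.tool()
def get_story(story_id: str) -> dict:
    """Fetch the full text of one story by its ID."""
    return story_manager.get_story(story_id)

if __name__ == "__main__":
    # Serves an SSE endpoint, by default at http://localhost:8000/sse.
    mcp.run(transport="sse")
```
Compare this with the STDIO version above: the tool definitions shrink to decorated functions, while the `story_manager` calls stay identical.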
The choice between STDIO and HTTP/SSE depends on your use case. STDIO is ideal for local, desktop-based integrations, offering minimal latency. HTTP/SSE is your go-to for web applications, remote access, or when multiple clients need to connect to your server concurrently. My SimpleMCP project includes both primarily for demonstration purposes; in a real-world scenario, you’d typically pick one that best fits your needs.
Seeing It All In Action: Interacting with Claude
The real magic happens when you see your MCP server interacting with Claude. For the STDIO server, you simply configure Claude Desktop with a small JSON snippet pointing to your `mcp_server.py` script. After a quick restart, you can prompt Claude with queries like “What stories are available?” or “Tell me the story about the squirrel and owl.” You’ll see Claude intelligently invoke your server’s tools, fetch the stories, and present them back to you!
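That configuration lives in Claude Desktop’s `claude_desktop_config.json` and might look like this. (The `story-manager` key is an arbitrary name of your choosing, and the path is a placeholder for wherever your script actually lives.)
```json
{
  "mcpServers": {
    "story-manager": {
      "command": "python",
      "args": ["/absolute/path/to/mcp_server.py"]
    }
  }
}
```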
The HTTP/SSE server starts with a simple `python mcp_http_server.py` command, making it accessible on a specified local port (e.g., `http://localhost:8000/sse`). Any MCP client supporting HTTP/SSE can then connect to this endpoint, enabling remote communication with your story manager.
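If you want to poke at the HTTP/SSE server without a full AI client, the MCP Python SDK’s client primitives make for a handy smoke test. Here’s a sketch (the `story_id` is invented for illustration):
```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to the running HTTP/SSE server from earlier.
    async with sse_client("http://localhost:8000/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Discover the advertised tools, then call one.
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])
            result = await session.call_tool("get_story", {"story_id": "squirrel-and-owl"})
            print(result.content[0].text)

if __name__ == "__main__":
    asyncio.run(main())
```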
This hands-on interaction makes the theoretical concrete. You’re no longer just reading about protocols; you’re observing an AI intelligently using the tools you built, dynamically querying your custom backend to fulfill your requests. It’s an empowering moment that truly cements your understanding of MCP.
Final Thoughts: Your Gateway to AI Composability
We’ve come a long way. You’ve learned the fundamentals of the Model Context Protocol, understood the critical importance of separating business logic from transport layers, and built functional MCP servers using both STDIO and HTTP/SSE. You’ve seen how to define tools, handle their execution, and even integrate them with Claude.
The overarching pattern is remarkably consistent: your business logic (the Story Manager) is wrapped by an MCP server (your server definition), which then uses a transport (STDIO or HTTP/SSE) to communicate with the AI client. This clean, modular architecture is the secret sauce to building scalable AI integrations.
The Model Context Protocol, though relatively new, is rapidly gaining traction, with major players already adopting it. What truly excites me about MCP isn’t just its immediate utility, but its inherent composability. Once you grasp this pattern, you can build MCP servers for virtually anything: your personal knowledge base, IoT device control, complex database queries, API integrations, or even file system access. It’s about building standardized, AI-friendly interfaces for all your digital assets and capabilities.
If this guide has sparked an idea, or if you build something incredible with MCP, please do share! The community is thriving, and new innovations are emerging daily. The full code for SimpleMCP is available on GitHub, complete with tests and setup instructions, so you can start experimenting right away. Happy building! 🚀