What is Model Context Protocol (MCP)? A Developer’s Guide

If you’ve been following AI development in 2025–2026, you’ve probably heard about Model Context Protocol (MCP). It’s one of the most talked-about standards in the AI developer community — and for good reason. This guide explains what MCP is, why it matters, and how to start using it.


What is Model Context Protocol (MCP)?

Model Context Protocol (MCP) is an open standard developed by Anthropic that defines how AI models (like Claude) communicate with external tools, data sources, and services. Think of it as a universal connector — a standardized way for AI assistants to reach outside their context window and interact with the real world.

Before MCP, every AI integration was custom-built. Want your AI to read files? Write a custom function. Query a database? Another custom integration. MCP replaces all of that with one standard protocol.


Why Does MCP Matter for Developers?

Here’s the core problem MCP solves: AI models are isolated. They can only work with what’s in their context window. MCP gives them a standardized way to:

  • Read and write files on your filesystem
  • Query databases (PostgreSQL, SQLite, etc.)
  • Call external APIs
  • Execute code
  • Browse the web
  • Interact with GitHub, Slack, Google Drive, and hundreds of other services

The key word is standardized. You build an MCP server once, and any MCP-compatible client (Claude, Claude Code, Cursor, etc.) can use it immediately — no custom glue code needed.


How MCP Works: The Architecture

MCP has three main components:

🖥️ MCP Host

The AI application that uses MCP — for example, Claude Desktop, Claude Code, or Cursor. The host connects to one or more MCP servers and routes AI requests through them.

⚙️ MCP Server

A lightweight program that exposes tools, resources, and prompts to the AI model. For example, a filesystem MCP server exposes tools like read_file, write_file, and list_directory. You can run MCP servers locally or remotely.

🔗 MCP Client

The protocol layer inside the host that handles the actual communication with MCP servers, exchanging JSON-RPC 2.0 messages over stdio (for local servers) or HTTP (for remote ones).

The flow looks like this:

User prompt → Claude (Host) → MCP Client → MCP Server → Tool/Data → Response back to Claude → Answer to user
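Concretely, each hop between client and server is a JSON-RPC 2.0 message. Here's an illustrative sketch of a tools/call request — the envelope fields follow the JSON-RPC/MCP conventions, while the tool name say_hello is hypothetical:

```python
import json

# Illustrative JSON-RPC 2.0 request a client might send to invoke a tool.
# The "say_hello" tool name and its arguments are made up for the example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "say_hello",
        "arguments": {"name": "Ada"},
    },
}

wire = json.dumps(request)  # this serialized form is what travels over stdio or HTTP
print(wire)
```

The server replies with a matching response message carrying the same id, which the client routes back to the model.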

MCP Primitives: Tools, Resources, and Prompts

MCP servers can expose three types of capabilities:

🔧 Tools

Functions the AI can call to perform actions. Examples: create_file, run_query, send_email. Tools can have side effects — they actually do things.

📄 Resources

Read-only data the AI can access. Examples: file contents, database records, API responses. Resources are like providing context without executing actions.

💬 Prompts

Pre-defined prompt templates the AI can use. Think of them as reusable instructions for common tasks.
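To make the three primitives concrete, here is a sketch — plain Python dicts with hypothetical names and URIs — of the kind of payloads a server might return for the tools/list, resources/list, and prompts/list requests:

```python
# Illustrative listing results for the three MCP primitives.
# All names, URIs, and descriptions are invented for this example.

tools_result = {
    "tools": [
        {
            "name": "run_query",
            "description": "Run a SQL query",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
            },
        }
    ]
}

resources_result = {
    "resources": [
        # Read-only context: fetching a resource has no side effects.
        {"uri": "file:///tmp/report.txt", "name": "report.txt", "mimeType": "text/plain"}
    ]
}

prompts_result = {
    "prompts": [
        {"name": "summarize", "description": "Summarize the given text"}
    ]
}
```

Notice the asymmetry: tools carry an input schema because the model calls them with arguments, while resources are addressed by URI and simply read.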


Quick Start: Your First MCP Server

Here’s a minimal MCP server in Python using the official SDK. First, install it:

pip install mcp

Then create server.py:

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp import types

server = Server("my-first-server")

@server.list_tools()
async def list_tools():
    return [
        types.Tool(
            name="say_hello",
            description="Returns a greeting message",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Name to greet"}
                },
                "required": ["name"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "say_hello":
        return [types.TextContent(type="text", text=f"Hello, {arguments['name']}!")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # Serve over stdio: the host launches this process and speaks JSON-RPC
    # on its stdin/stdout.
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

To connect this to Claude Desktop, add it to your claude_desktop_config.json (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows: %APPDATA%\Claude\claude_desktop_config.json):

{
  "mcpServers": {
    "my-server": {
      "command": "python3",
      "args": ["/path/to/server.py"]
    }
  }
}

Popular MCP Servers to Try Today

Anthropic and the community have already built hundreds of MCP servers. Here are the most useful ones:

  • Filesystem — read/write local files (@modelcontextprotocol/server-filesystem)
  • GitHub — manage repos, PRs, issues (@modelcontextprotocol/server-github)
  • PostgreSQL — read-only query access and schema inspection (@modelcontextprotocol/server-postgres)
  • Brave Search — real-time web search (@modelcontextprotocol/server-brave-search)
  • Google Drive — read/write Drive files (@modelcontextprotocol/server-gdrive)
  • Slack — send messages and read channels (@modelcontextprotocol/server-slack)

Browse the full list at github.com/modelcontextprotocol/servers.
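For example, wiring the filesystem server into Claude Desktop is a one-entry config change, following the same pattern as the earlier example (the directory path below is a placeholder — the server can only access paths you explicitly allow):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```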


MCP vs Traditional Tool Use

You might wonder: how is MCP different from regular function calling / tool use in the Claude API?

  • Tool use (API) — you define tools inline in your API request. Works well for simple, app-specific tools. Custom code required for every integration.
  • MCP — tools live in standalone servers, reusable across any MCP client. No glue code. Built once, works everywhere.

As a rule of thumb: for production applications where you control the full stack, inline tool use via the API is often the simpler choice; for developer workflows and AI assistants (Claude Desktop, Claude Code), MCP’s reusable servers are the better fit.
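For contrast, here is a sketch of what inline tool use looks like: the tool definition travels inside each API request body rather than living in a server. The shape follows the Claude API’s tool-use convention (note input_schema in snake_case, versus MCP’s inputSchema); the send_email tool and the model name are placeholders:

```python
# An inline tool definition as it would appear in a Claude API request body.
# With MCP, this same definition would instead live in a standalone server.
inline_tool = {
    "name": "send_email",
    "description": "Send an email to a recipient",
    "input_schema": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "Recipient address"},
            "body": {"type": "string", "description": "Message body"},
        },
        "required": ["to", "body"],
    },
}

request_body = {
    "model": "claude-model-name",  # placeholder, not a real model identifier
    "max_tokens": 1024,
    "tools": [inline_tool],  # redefined in every request that needs it
    "messages": [
        {"role": "user", "content": "Email Bob that the build is green."}
    ],
}
```

The cost of this approach is repetition: every application that wants send_email must carry its own copy of the definition and its own execution code, which is exactly the glue MCP factors out.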


What’s Next?

💡 Ready to build? Check out the official MCP documentation and the Python SDK on GitHub.

In upcoming posts we’ll cover:

  • Building a production MCP server with authentication
  • Connecting MCP to PostgreSQL for natural language database queries
  • Using MCP with Claude API in your own applications