MCP Servers: What They Are and How to Use Them

In the evolving world of AI and software development, you may have started hearing about MCP servers. If you’re curious what they are and how you can actually use them, this post will walk you through the basics.

What Is an MCP Server?

MCP stands for Model Context Protocol. It’s an open protocol (introduced by Anthropic) designed to help AI models and assistants connect to external tools, data sources, and services in a structured, safe, and standardized way.

Think of an MCP server as a “bridge” between an AI assistant and the outside world. Instead of giving an AI unrestricted access to your computer or APIs, you set up an MCP server that:

  • Exposes specific tools or resources the AI can use
  • Enforces permissions and boundaries
  • Provides structured data in a consistent format

This makes it easier and safer to let AI systems interact with your files, databases, APIs, or custom workflows.

Why Use MCP Servers?

MCP servers bring some key advantages:

  1. Safety & Security – Instead of giving raw API keys or file access, you expose only the functions you want.
  2. Consistency – All servers follow the same protocol, so once you know how to use one, you can use others.
  3. Extensibility – You can build your own MCP server for any system (e.g., GitHub repos, databases, internal company tools).
  4. Separation of Concerns – AI doesn’t need to “know” how your tool works internally—it just asks the MCP server.

How MCP Servers Work

At a high level, the setup looks like this:

AI Assistant (Client)  <----->  MCP Server  <----->  External Resource
  • The client (usually an AI assistant or IDE plugin) speaks the MCP protocol.
  • The MCP server exposes a set of capabilities: listing resources, performing actions, streaming results.
  • The external resource can be anything—like a database, file system, or API.

The client doesn’t talk directly to the resource; it only communicates through the server.
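Under the hood, MCP messages are plain JSON-RPC 2.0. Here’s a minimal sketch of the discovery step described above: the client asks the server what tools it exposes via the `tools/list` method (the method name is from the MCP spec; the tool in the response is illustrative).

```javascript
// Client -> server: ask the server which tools it exposes.
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list"
};

// Server -> client: a matching response describing one illustrative tool.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1, // responses carry the same id as the request they answer
  result: {
    tools: [
      { name: "ping", description: "Responds with 'pong'" }
    ]
  }
};

console.log(JSON.stringify(listToolsRequest));
```

Because every server speaks this same message format, a client can discover and call tools on any MCP server without custom integration code.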

How to Use an MCP Server

Install an Existing MCP Server

There are already open-source MCP servers available for things like local files, Git, or HTTP APIs. For example, the reference filesystem server can be launched with a single command:

npx -y @modelcontextprotocol/server-filesystem ~/projects

By default, this runs a simple MCP server on your local machine. Most MCP servers communicate over stdio — the client launches the server process and exchanges messages with it via standard input/output — though some expose an HTTP endpoint instead.

Connect It to Your AI Assistant

Most AI assistants (like GPT clients or IDE plugins) that support MCP use a config file to list available servers.

For example, if your assistant supports a configuration file called .mcp-config.json, it might look like this:

{
  "servers": {
    "local-tools": {
      "command": "node",
      "args": ["path/to/your/mcp-server.js"]
    },
    "git-server": {
      "command": "python",
      "args": ["-m", "mcp_git"]
    }
  }
}
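To make it concrete what an assistant does with a config like this, here’s a toy sketch that parses the same JSON and derives the command line used to launch each server. (A real client would spawn these processes and speak MCP to them over stdio; here we just build the strings.)

```javascript
// Parse the config from above and derive one launch command per server.
const config = JSON.parse(`{
  "servers": {
    "local-tools": { "command": "node", "args": ["path/to/your/mcp-server.js"] },
    "git-server":  { "command": "python", "args": ["-m", "mcp_git"] }
  }
}`);

const launchCommands = Object.entries(config.servers).map(
  ([name, { command, args }]) => ({ name, cmd: [command, ...args].join(" ") })
);

console.log(launchCommands);
```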

After saving this, restart your assistant; it will automatically detect and connect to your MCP servers.

Start Using MCP Tools Through the AI

Once connected, your AI assistant will automatically discover the tools and resources exposed by each server.

For example, you might now be able to say:

@assistant search my repo for the function "handleLogin"

Or:

@assistant query my local database for all customers with unpaid invoices

Under the hood, your AI isn’t searching your files or database directly; it’s sending structured MCP requests to the connected server, which performs the action and returns structured results.
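For the “search my repo” example above, the structured request might look like this. `tools/call` is the MCP method for invoking a tool; the tool name and arguments here are hypothetical — the real names depend on which server you connected.

```javascript
// Sketch of a tool-invocation request for the repo-search example.
// "search_code" is a hypothetical tool a Git server might expose.
const callToolRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_code",
    arguments: { query: "handleLogin" }
  }
};

console.log(JSON.stringify(callToolRequest, null, 2));
```

The server runs the matching handler and replies with a structured result that the assistant then summarizes for you.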

Example: Creating a Simple Custom MCP Server

You can create your own MCP server for any backend or API.
Here’s a minimal example in Node.js, using the official TypeScript SDK (@modelcontextprotocol/sdk), that exposes a simple “ping” tool:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Create a server and register a "ping" tool that replies with "pong".
const server = new McpServer({ name: "my-mcp-server", version: "1.0.0" });

server.tool("ping", "Responds with 'pong'", async () => ({
  content: [{ type: "text", text: "pong" }]
}));

// MCP servers typically communicate over stdio: the client launches this
// process and exchanges JSON-RPC messages via stdin/stdout.
await server.connect(new StdioServerTransport());
console.error("MCP server running (stdio)"); // log to stderr; stdout is reserved for MCP messages

Run it:

node my-mcp-server.js

Now, any AI assistant configured to connect to your MCP server can use the new ping tool safely.
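To see the round trip end to end without any SDK, here’s a dependency-free toy that mirrors the dispatch a server like the one above performs: receive a tools/call request, look up the named tool, run its handler, and return a structured result. This illustrates the pattern only — it is not the real SDK’s internals, and all names are hypothetical.

```javascript
// Toy tool registry, mirroring the "ping" tool from the example above.
const tools = {
  ping: {
    description: "Responds with 'pong'",
    handler: () => ({ content: [{ type: "text", text: "pong" }] })
  }
};

// Dispatch one JSON-RPC request: unknown methods and tools return errors,
// known tools return the handler's result (standard JSON-RPC error codes).
function handleRequest(request) {
  if (request.method !== "tools/call") {
    return { jsonrpc: "2.0", id: request.id, error: { code: -32601, message: "Method not found" } };
  }
  const tool = tools[request.params.name];
  if (!tool) {
    return { jsonrpc: "2.0", id: request.id, error: { code: -32602, message: "Unknown tool" } };
  }
  return { jsonrpc: "2.0", id: request.id, result: tool.handler(request.params.arguments) };
}

const response = handleRequest({
  jsonrpc: "2.0", id: 3, method: "tools/call", params: { name: "ping", arguments: {} }
});
console.log(response.result.content[0].text); // "pong"
```

The real value of the protocol is that the AI never needs this code: it only needs to know that every MCP server answers `tools/call` requests in this shape.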