
Building Custom MCP Servers for Claude Code: A Developer's Guide

Asep Alazhari

Learn to build powerful custom MCP servers with TypeScript. Real-world examples including GitHub integration, business logic automation, and API orchestration that save 15+ hours per week.


Last month, I found myself explaining the same API endpoints to Claude for the 47th time. Copy-paste documentation. Wait for Claude to digest it. Fix the inevitable misunderstandings. Rinse and repeat.

Then I discovered I could teach Claude once by building a custom MCP server.

The result? My first server took 3 hours to build. It now saves me 15+ hours every week. Claude instantly understands our internal APIs, business rules, and data structures—no explanation needed.

If you’ve ever wished Claude could directly access your databases, APIs, or internal tools, this guide is for you. I’ll show you how to build production-ready MCP servers with real-world examples that actually solve problems.

Why Build Custom MCP Servers?

Before we dive into code, let’s talk about when building a custom server makes sense.

When You Should Build One

You have repetitive context-switching workflows. If you’re constantly copying data between Claude and other systems—databases, APIs, internal tools—a custom server eliminates that friction.

You need to integrate proprietary systems. Your internal API, custom database schema, or business logic engine won’t have pre-built MCP servers. Building your own bridges that gap.

You want type-safe, validated interactions. Unlike free-form prompts, MCP servers use Zod schemas for strict validation. Claude can’t send malformed requests, and you control exactly what data flows in and out.

You need to aggregate multiple data sources. One of my servers combines our PostgreSQL database, Stripe API, and internal analytics service. Claude gets a unified view without knowing the underlying complexity.

The Decision Matrix

Here’s when to build versus when to use existing servers:

Scenario                            | Build Custom | Use Existing | Notes
Public API (GitHub, Stripe, etc.)   | No           | Yes          | Check the MCP servers registry first
Internal/proprietary systems        | Yes          | No           | No choice: you need custom
Complex business logic              | Yes          | No           | Your rules, your code
Simple file operations              | No           | Yes          | Use @modelcontextprotocol/server-filesystem
Database queries                    | Maybe        | Maybe        | Start with existing, customize if needed

The Business Value Proposition

Let me give you specific numbers from my own experience:

Before custom MCP servers:

  • 10 minutes to generate a customer quote (manual data gathering)
  • 25 minutes to analyze a pull request (GitHub UI + manual review)
  • 30+ context-switching interruptions per day

After custom MCP servers:

  • 30 seconds to generate quotes (automated pricing rules)
  • 2 minutes for PR analysis (automated metrics + code review)
  • 3-5 context switches per day (Claude handles the rest)

ROI: 3 hours to build my first server, 15+ hours saved weekly. Breakeven in 12 days.

If you’re still reading, you probably have a use case in mind. Let’s build it.

Technical Foundation

Prerequisites

Before we start coding, make sure you have:

  • Node.js v18+ (v20+ recommended)
  • TypeScript knowledge (intermediate level)
  • Claude Desktop installed and configured
  • Basic understanding of async/await and JSON schemas

If you’ve already set up MCP with Claude Desktop, you’re ready to go.

Project Setup

Create a new project and install dependencies:

mkdir my-custom-mcp-server
cd my-custom-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx

Critical configuration: Set up your package.json as an ES Module. Skipping this step is the #1 mistake I see developers make:

{
    "name": "my-custom-mcp-server",
    "version": "1.0.0",
    "type": "module",
    "main": "build/index.js",
    "scripts": {
        "build": "tsc",
        "dev": "tsx src/index.ts"
    },
    "dependencies": {
        "@modelcontextprotocol/sdk": "^1.0.0",
        "zod": "^3.25.0"
    },
    "devDependencies": {
        "@types/node": "^20.0.0",
        "typescript": "^5.7.0",
        "tsx": "^4.19.2"
    }
}

Create tsconfig.json:

{
    "compilerOptions": {
        "target": "ES2022",
        "module": "ES2022",
        "moduleResolution": "bundler",
        "outDir": "./build",
        "rootDir": "./src",
        "strict": true,
        "esModuleInterop": true,
        "skipLibCheck": true,
        "forceConsistentCasingInFileNames": true,
        "resolveJsonModule": true
    },
    "include": ["src/**/*"],
    "exclude": ["node_modules"]
}

Why ES Modules matter: The MCP SDK uses ES Module syntax (.js imports). If you forget "type": "module" in package.json, you’ll get cryptic import errors that waste hours of debugging. Trust me.

Your First Server: Calculator

Let’s build a simple calculator server to understand the core concepts. This server will expose basic math operations to Claude.

Create src/index.ts:

#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import * as z from "zod";

// Initialize the server
const server = new McpServer({
    name: "calculator-server",
    version: "1.0.0",
});

// Register the 'add' tool
server.registerTool(
    "add",
    {
        title: "Add Numbers",
        description: "Add two numbers together",
        inputSchema: {
            a: z.number().describe("First number"),
            b: z.number().describe("Second number"),
        },
        outputSchema: {
            result: z.number(),
            operation: z.string(),
        },
    },
    async ({ a, b }) => {
        const output = {
            result: a + b,
            operation: "addition",
        };

        return {
            content: [
                {
                    type: "text",
                    text: `${a} + ${b} = ${output.result}`,
                },
            ],
            structuredContent: output,
        };
    }
);

// Register the 'multiply' tool
server.registerTool(
    "multiply",
    {
        title: "Multiply Numbers",
        description: "Multiply two numbers together",
        inputSchema: {
            a: z.number().describe("First number"),
            b: z.number().describe("Second number"),
        },
        outputSchema: {
            result: z.number(),
            operation: z.string(),
        },
    },
    async ({ a, b }) => {
        const output = {
            result: a * b,
            operation: "multiplication",
        };

        return {
            content: [
                {
                    type: "text",
                    text: `${a} × ${b} = ${output.result}`,
                },
            ],
            structuredContent: output,
        };
    }
);

// Connect via stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);

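// Log to stderr: with the stdio transport, stdout is reserved for MCP protocol messages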
console.error("Calculator MCP Server running on stdio");

Key concepts explained:

  1. Server initialization: McpServer takes a name and version. These appear in Claude’s MCP server list.

  2. Tool registration: Each tool has:

    • A unique name ('add', 'multiply')
    • Metadata (title, description)
    • Input schema (Zod validators)
    • Output schema (what Claude receives back)
    • Handler function (your business logic)
  3. Stdio transport: For local servers, we use StdioServerTransport. Claude Desktop spawns your server as a child process and communicates via standard input/output.

  4. Response format: Tools return:

    • content: Human-readable text for Claude
    • structuredContent: Typed data matching your output schema

Build and test:

npm run build
node build/index.js

You should see “Calculator MCP Server running on stdio” in the console. Press Ctrl+C to stop.
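
For a more interactive check before wiring the server into Claude Desktop, the official MCP Inspector can launch a stdio server and let you call its tools from a local web UI (assuming you're comfortable running it via npx):

npx @modelcontextprotocol/inspector node build/index.js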

Real-World Example 1: GitHub PR Analyzer

Now let’s build something practical. This server analyzes GitHub pull requests by aggregating data from multiple API endpoints.

Use case: Before reviewing PRs, I want Claude to give me a summary: changed files, test coverage, CI status, and review comments. Manually gathering this takes 10+ minutes. With this server, it’s instant.

Install GitHub API client:

npm install @octokit/rest

Create src/github-server.ts:

#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { Octokit } from "@octokit/rest";
import * as z from "zod";

// Initialize GitHub client
const octokit = new Octokit({
    auth: process.env.GITHUB_TOKEN,
});

const server = new McpServer({
    name: "github-pr-analyzer",
    version: "1.0.0",
});

server.registerTool(
    "analyze-pr",
    {
        title: "Analyze GitHub Pull Request",
        description: "Get comprehensive analysis of a GitHub PR including files, reviews, and CI status",
        inputSchema: {
            owner: z.string().describe("Repository owner (username or org)"),
            repo: z.string().describe("Repository name"),
            pr_number: z.number().describe("Pull request number"),
        },
        outputSchema: {
            pr: z.object({
                title: z.string(),
                state: z.string(),
                author: z.string(),
                created_at: z.string(),
                mergeable: z.boolean().nullable(),
            }),
            files: z.array(
                z.object({
                    filename: z.string(),
                    additions: z.number(),
                    deletions: z.number(),
                    status: z.string(),
                })
            ),
            reviews: z.array(
                z.object({
                    user: z.string(),
                    state: z.string(),
                    submitted_at: z.string(),
                })
            ),
            checks: z.object({
                total: z.number(),
                passed: z.number(),
                failed: z.number(),
                pending: z.number(),
            }),
        },
    },
    async ({ owner, repo, pr_number }) => {
        try {
            // Fetch PR details
            const { data: pr } = await octokit.pulls.get({
                owner,
                repo,
                pull_number: pr_number,
            });

            // Fetch changed files
            const { data: files } = await octokit.pulls.listFiles({
                owner,
                repo,
                pull_number: pr_number,
            });

            // Fetch reviews
            const { data: reviews } = await octokit.pulls.listReviews({
                owner,
                repo,
                pull_number: pr_number,
            });

            // Fetch CI checks
            const { data: checks } = await octokit.checks.listForRef({
                owner,
                repo,
                ref: pr.head.sha,
            });

            // Aggregate check statuses
            const checkSummary = checks.check_runs.reduce(
                (acc, check) => {
                    acc.total++;
                    if (check.conclusion === "success") acc.passed++;
                    else if (check.conclusion === "failure") acc.failed++;
                    else acc.pending++;
                    return acc;
                },
                { total: 0, passed: 0, failed: 0, pending: 0 }
            );

            const output = {
                pr: {
                    title: pr.title,
                    state: pr.state,
                    author: pr.user?.login || "unknown",
                    created_at: pr.created_at,
                    mergeable: pr.mergeable,
                },
                files: files.map((f) => ({
                    filename: f.filename,
                    additions: f.additions,
                    deletions: f.deletions,
                    status: f.status,
                })),
                reviews: reviews.map((r) => ({
                    user: r.user?.login || "unknown",
                    state: r.state,
                    submitted_at: r.submitted_at || "",
                })),
                checks: checkSummary,
            };

            return {
                content: [
                    {
                        type: "text",
                        text:
                            `**${pr.title}** by @${pr.user?.login}\n\n` +
                            `State: ${pr.state} | Mergeable: ${pr.mergeable}\n` +
                            `Files changed: ${files.length}\n` +
                            `Reviews: ${reviews.length}\n` +
                            `CI: ${checkSummary.passed}/${checkSummary.total} passed`,
                    },
                ],
                structuredContent: output,
            };
        } catch (error) {
            return {
                content: [
                    {
                        type: "text",
                        text: `Error analyzing PR: ${error instanceof Error ? error.message : "Unknown error"}`,
                    },
                ],
                isError: true,
            };
        }
    }
);

const transport = new StdioServerTransport();
await server.connect(transport);

console.error("GitHub PR Analyzer running on stdio");

What makes this powerful:

  1. Multi-endpoint aggregation: We call 4 different GitHub API endpoints and combine the results. Claude gets everything in one request.

  2. Error handling: The try-catch block ensures graceful failures. If the API is down, Claude gets a clear error message instead of crashing.

  3. Type safety: Zod schemas enforce that Claude receives exactly the data structure we define. No surprises.

Business value: This single tool reduced my PR review preparation from 10 minutes to 30 seconds. That’s ~50 hours saved per year for just this one workflow.

Real-World Example 2: Business Logic Server

Let’s tackle a different problem: complex business rules. This server implements e-commerce pricing logic with tier-based discounts.

Use case: Our sales team needs accurate quotes, but our pricing rules are complex (volume discounts, customer tiers, seasonal promotions). Before this server, generating quotes required opening 3 different spreadsheets.

Create src/pricing-server.ts:

#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import * as z from "zod";

// Pricing tiers and rules
const PRICING_TIERS = {
    bronze: { multiplier: 1.0, min_quantity: 1 },
    silver: { multiplier: 0.9, min_quantity: 10 },
    gold: { multiplier: 0.8, min_quantity: 50 },
    platinum: { multiplier: 0.7, min_quantity: 100 },
} as const;

const PRODUCTS = {
    widget_a: { base_price: 29.99, name: "Widget A" },
    widget_b: { base_price: 49.99, name: "Widget B" },
    widget_c: { base_price: 99.99, name: "Widget C" },
} as const;

const server = new McpServer({
    name: "pricing-engine",
    version: "1.0.0",
});

server.registerTool(
    "calculate-quote",
    {
        title: "Calculate Customer Quote",
        description: "Calculate pricing quote with volume discounts and customer tier benefits",
        inputSchema: {
            customer_tier: z.enum(["bronze", "silver", "gold", "platinum"]).describe("Customer tier level"),
            items: z
                .array(
                    z.object({
                        product_id: z.enum(["widget_a", "widget_b", "widget_c"]).describe("Product identifier"),
                        quantity: z.number().min(1).describe("Quantity ordered"),
                    })
                )
                .min(1)
                .describe("List of items in the quote"),
        },
        outputSchema: {
            quote: z.object({
                customer_tier: z.string(),
                tier_discount: z.number(),
                items: z.array(
                    z.object({
                        product: z.string(),
                        quantity: z.number(),
                        unit_price: z.number(),
                        volume_discount: z.number(),
                        line_total: z.number(),
                    })
                ),
                subtotal: z.number(),
                total_discount: z.number(),
                final_total: z.number(),
            }),
        },
    },
    async ({ customer_tier, items }) => {
        const tierConfig = PRICING_TIERS[customer_tier];
        const processedItems = items.map((item) => {
            const product = PRODUCTS[item.product_id];

            // Calculate volume discount
            let volumeDiscount = 0;
            if (item.quantity >= 100) volumeDiscount = 0.15;
            else if (item.quantity >= 50) volumeDiscount = 0.1;
            else if (item.quantity >= 10) volumeDiscount = 0.05;

            // Apply tier multiplier and volume discount
            const basePrice = product.base_price;
            const tierAdjustedPrice = basePrice * tierConfig.multiplier;
            const finalUnitPrice = tierAdjustedPrice * (1 - volumeDiscount);
            const lineTotal = finalUnitPrice * item.quantity;

            return {
                product: product.name,
                quantity: item.quantity,
                unit_price: parseFloat(finalUnitPrice.toFixed(2)),
                volume_discount: parseFloat((volumeDiscount * 100).toFixed(1)),
                line_total: parseFloat(lineTotal.toFixed(2)),
            };
        });

        const subtotal = processedItems.reduce((sum, item) => sum + item.line_total, 0);
        const originalTotal = items.reduce((sum, item) => {
            return sum + PRODUCTS[item.product_id].base_price * item.quantity;
        }, 0);
        const totalDiscount = originalTotal - subtotal;

        const output = {
            quote: {
                customer_tier,
                tier_discount: parseFloat(((1 - tierConfig.multiplier) * 100).toFixed(1)),
                items: processedItems,
                subtotal: parseFloat(subtotal.toFixed(2)),
                total_discount: parseFloat(totalDiscount.toFixed(2)),
                final_total: parseFloat(subtotal.toFixed(2)),
            },
        };

        return {
            content: [
                {
                    type: "text",
                    text:
                        `Quote for ${customer_tier.toUpperCase()} tier customer:\n\n` +
                        processedItems
                            .map((i) => `${i.product} x${i.quantity} @ $${i.unit_price} = $${i.line_total}`)
                            .join("\n") +
                        `\n\nSubtotal: $${output.quote.subtotal}\n` +
                        `Total Discount: $${output.quote.total_discount}\n` +
                        `**Final Total: $${output.quote.final_total}**`,
                },
            ],
            structuredContent: output,
        };
    }
);

const transport = new StdioServerTransport();
await server.connect(transport);

console.error("Pricing Engine running on stdio");

Key implementation details:

  1. Complex business rules: The code implements both tier-based discounts (customer loyalty) and volume discounts (quantity-based). Real-world pricing is rarely simple.

  2. Type safety for business logic: The enum validators ensure Claude can only request valid products and tiers. Invalid requests fail at validation, not at runtime.

  3. Calculation transparency: The response shows both the calculations and the final numbers. Sales teams can explain the quote to customers.

Business value: Quote generation went from 10 minutes (manual spreadsheet work) to 30 seconds. Our sales team can now generate quotes during customer calls.

Real-World Example 3: Database + API Hybrid

The most powerful MCP servers combine multiple data sources. This server enriches customer data by joining a local database with external API calls.

Use case: When analyzing customer behavior, I need both our internal data (PostgreSQL) and external enrichment (Clearbit). Manually cross-referencing these takes 15+ minutes per customer.

Install dependencies:

npm install pg dotenv

Create .env:

DATABASE_URL=postgresql://user:password@localhost:5432/mydb
CLEARBIT_API_KEY=your_clearbit_key_here

Create src/customer-server.ts:

#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import pkg from "pg";
const { Pool } = pkg;
import * as z from "zod";
import "dotenv/config";

// Database connection
const pool = new Pool({
    connectionString: process.env.DATABASE_URL,
});

// Simple in-memory cache
const cache = new Map<string, { data: any; timestamp: number }>();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

const server = new McpServer({
    name: "customer-enrichment",
    version: "1.0.0",
});

server.registerTool(
    "get-customer-profile",
    {
        title: "Get Enriched Customer Profile",
        description: "Retrieve customer data from database and enrich with external API data",
        inputSchema: {
            email: z.string().email().describe("Customer email address"),
        },
        outputSchema: {
            customer: z.object({
                id: z.number(),
                email: z.string(),
                name: z.string(),
                created_at: z.string(),
                lifetime_value: z.number(),
                enrichment: z
                    .object({
                        company: z.string().optional(),
                        title: z.string().optional(),
                        location: z.string().optional(),
                        linkedin: z.string().optional(),
                    })
                    .optional(),
            }),
        },
    },
    async ({ email }) => {
        try {
            // Check cache first
            const cacheKey = `customer:${email}`;
            const cached = cache.get(cacheKey);
            if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
                return {
                    content: [
                        {
                            type: "text",
                            text: `[CACHED] Customer: ${cached.data.name} (${cached.data.email})`,
                        },
                    ],
                    structuredContent: { customer: cached.data },
                };
            }

            // Fetch from database
            const dbResult = await pool.query(
                `SELECT id, email, name, created_at, lifetime_value
         FROM customers
         WHERE email = $1`,
                [email]
            );

            if (dbResult.rows.length === 0) {
                return {
                    content: [
                        {
                            type: "text",
                            text: `Customer not found: ${email}`,
                        },
                    ],
                    isError: true,
                };
            }

            const customer = dbResult.rows[0];

            // Enrich with Clearbit API (with graceful fallback)
            let enrichment = undefined;
            try {
                const clearbitResponse = await fetch(`https://person.clearbit.com/v2/people/find?email=${email}`, {
                    headers: {
                        Authorization: `Bearer ${process.env.CLEARBIT_API_KEY}`,
                    },
                });

                if (clearbitResponse.ok) {
                    const clearbitData = await clearbitResponse.json();
                    enrichment = {
                        company: clearbitData.employment?.name,
                        title: clearbitData.employment?.title,
                        location: clearbitData.location,
                        linkedin: clearbitData.linkedin?.handle,
                    };
                }
            } catch (error) {
                console.error("Clearbit enrichment failed:", error);
                // Continue without enrichment
            }

            const output = {
                id: customer.id,
                email: customer.email,
                name: customer.name,
                created_at: customer.created_at.toISOString(),
                lifetime_value: parseFloat(customer.lifetime_value),
                enrichment,
            };

            // Cache the result
            cache.set(cacheKey, { data: output, timestamp: Date.now() });

            return {
                content: [
                    {
                        type: "text",
                        text:
                            `**${output.name}** (${output.email})\n` +
                            `LTV: $${output.lifetime_value}\n` +
                            `Customer since: ${new Date(output.created_at).toLocaleDateString()}\n` +
                            (enrichment
                                ? `\n${enrichment.title} at ${enrichment.company}\n${enrichment.location}`
                                : ""),
                    },
                ],
                structuredContent: { customer: output },
            };
        } catch (error) {
            return {
                content: [
                    {
                        type: "text",
                        text: `Error fetching customer: ${error instanceof Error ? error.message : "Unknown error"}`,
                    },
                ],
                isError: true,
            };
        }
    }
);

const transport = new StdioServerTransport();
await server.connect(transport);

console.error("Customer Enrichment Server running on stdio");

Advanced patterns demonstrated:

  1. Multi-source aggregation: Combines PostgreSQL (internal data) with Clearbit API (external enrichment). Claude gets a unified view.

  2. Graceful failure handling: If Clearbit is down or rate-limits us, we still return the database data. Partial data beats no data.

  3. Performance optimization: Simple in-memory cache with TTL prevents repeated database and API calls for the same customer.

  4. Environment-based configuration: Sensitive credentials (database URL, API keys) come from environment variables, never hardcoded.

Business value: Customer analysis time dropped from 15 minutes to 2 seconds. Our support team now enriches tickets in real-time during customer conversations.

For more database integration patterns, see my guide on MCP MySQL integration.

Configuration & Integration with Claude Desktop

Now that we’ve built servers, let’s connect them to Claude Desktop.

Claude Desktop Configuration

Edit your Claude Desktop config file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your servers to the mcpServers section:

{
    "mcpServers": {
        "calculator": {
            "command": "node",
            "args": ["/absolute/path/to/my-custom-mcp-server/build/index.js"]
        },
        "github-pr-analyzer": {
            "command": "node",
            "args": ["/absolute/path/to/my-custom-mcp-server/build/github-server.js"],
            "env": {
                "GITHUB_TOKEN": "ghp_your_github_token_here"
            }
        },
        "pricing-engine": {
            "command": "node",
            "args": ["/absolute/path/to/my-custom-mcp-server/build/pricing-server.js"]
        },
        "customer-enrichment": {
            "command": "node",
            "args": ["/absolute/path/to/my-custom-mcp-server/build/customer-server.js"],
            "env": {
                "DATABASE_URL": "postgresql://user:password@localhost:5432/mydb",
                "CLEARBIT_API_KEY": "sk_your_clearbit_key"
            }
        }
    }
}

Critical details:

  1. Absolute paths: Always use full paths to your built JavaScript files, not relative paths.

  2. Environment variables: Pass sensitive data via the env object, not in code.

  3. Server names: The key ("calculator", "github-pr-analyzer") appears in Claude’s tool list.

Development Workflow

Here’s my recommended workflow:

# 1. Make code changes
vim src/index.ts

# 2. Build TypeScript
npm run build

# 3. Test manually
node build/index.js

# 4. Restart Claude Desktop to reload servers
# (macOS: Cmd+Q to quit, then reopen)
# (Windows: Fully close and reopen)

# 5. Test in Claude
# Ask Claude: "Use the calculator to add 5 and 3"

Pro tip: I use a watch script during development:

{
    "scripts": {
        "dev": "tsx src/index.ts",
        "build": "tsc",
        "watch": "tsc --watch"
    }
}

Run npm run watch in one terminal and edit your code; the compiled output in build/ stays current, and Claude Desktop picks up the changes the next time you restart it.

Advanced Topics

Error Handling Patterns

Production servers need robust error handling. Here’s my pattern:

server.registerTool(
    "risky-operation",
    {
        title: "Risky Operation",
        description: "Operation that might fail",
        inputSchema: {
            data: z.string(),
        },
        outputSchema: {
            success: z.boolean(),
            result: z.string().optional(),
            error: z.string().optional(),
        },
    },
    async ({ data }) => {
        try {
            // Validate input beyond Zod schema
            if (data.length > 10000) {
                throw new Error("Input too large (max 10KB)");
            }

            // Perform operation
            const result = await performRiskyOperation(data);

            return {
                content: [
                    {
                        type: "text",
                        text: `Success: ${result}`,
                    },
                ],
                structuredContent: {
                    success: true,
                    result,
                },
            };
        } catch (error) {
            // Log error server-side
            console.error("Operation failed:", error);

            // Return user-friendly error to Claude
            return {
                content: [
                    {
                        type: "text",
                        text: `Operation failed: ${error instanceof Error ? error.message : "Unknown error"}`,
                    },
                ],
                structuredContent: {
                    success: false,
                    error: error instanceof Error ? error.message : "Unknown error",
                },
                isError: true,
            };
        }
    }
);

Key principles:

  1. Validate beyond schemas: Zod handles types; you handle business rules.
  2. Log errors server-side: Use console.error for debugging (appears in Claude Desktop logs).
  3. Return user-friendly errors: Don’t leak stack traces or sensitive info to Claude.
  4. Use isError flag: Tells Claude the operation failed.

Performance Optimization

For servers that handle expensive operations:

import { LRUCache } from "lru-cache";

// Cache expensive API calls
const cache = new LRUCache({
    max: 100,
    ttl: 1000 * 60 * 5, // 5 minutes
});

server.registerTool(
    "expensive-operation",
    {
        /* ... */
    },
    async ({ input }) => {
        const cacheKey = `operation:${input}`;

        // Check cache
        const cached = cache.get(cacheKey);
        if (cached) {
            return cached;
        }

        // Perform operation
        const result = await expensiveApiCall(input);

        // Cache result
        cache.set(cacheKey, result);

        return result;
    }
);

When to cache:

  • External API calls (GitHub, Stripe, etc.)
  • Database queries with stable data
  • Expensive computations

When NOT to cache:

  • Real-time data (stock prices, live metrics)
  • User-specific sensitive data
  • Operations with side effects (writes, updates)

Security Considerations

Input validation:

inputSchema: {
    user_id: z.number()
        .int()
        .positive()
        .max(999999999)
        .describe("User ID"),
    query: z.string()
        .min(1)
        .max(1000)
        .regex(/^[a-zA-Z0-9\s]+$/)
        .describe("Search query (alphanumeric only)"),
}

API key management:

// ❌ NEVER do this
const API_KEY = "sk_live_abc123";

// ✅ Always use environment variables
const API_KEY = process.env.API_KEY;
if (!API_KEY) {
    throw new Error("API_KEY environment variable required");
}

Rate limiting:

import { RateLimiter } from "limiter";

const limiter = new RateLimiter({
    tokensPerInterval: 10,
    interval: "minute",
});

server.registerTool(
    "rate-limited-api",
    {
        /* ... */
    },
    async (params) => {
        // tryRemoveTokens returns false immediately when the bucket is empty
        if (!limiter.tryRemoveTokens(1)) {
            throw new Error("Rate limit exceeded. Try again later.");
        }

        return await apiCall(params);
    }
);

Testing Your Servers

I use Vitest for testing MCP servers:

npm install -D vitest

Create src/index.test.ts:

import { describe, it, expect } from "vitest";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { InMemoryTransport } from "@modelcontextprotocol/sdk/inMemory.js";
import * as z from "zod";

describe("Calculator Server", () => {
    it("should add two numbers correctly", async () => {
        const server = new McpServer({
            name: "test-server",
            version: "1.0.0",
        });

        let capturedResult: any;

        server.registerTool(
            "add",
            {
                title: "Add",
                description: "Add numbers",
                inputSchema: {
                    a: z.number(),
                    b: z.number(),
                },
                outputSchema: {
                    result: z.number(),
                },
            },
            async ({ a, b }) => {
                const output = { result: a + b };
                capturedResult = output;
                return {
                    content: [{ type: "text", text: `${output.result}` }],
                    structuredContent: output,
                };
            }
        );

        // Drive the server through a real client over an in-memory transport
        const client = new Client({ name: "test-client", version: "1.0.0" });
        const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
        await Promise.all([server.connect(serverTransport), client.connect(clientTransport)]);

        const result = await client.callTool({ name: "add", arguments: { a: 5, b: 3 } });

        expect(capturedResult).toEqual({ result: 8 });
        expect(result.structuredContent).toEqual({ result: 8 });
    });
});
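
Run the test with Vitest's CLI:

npx vitest run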

For more advanced testing workflows, check out Claude Tools Monitor for real-time debugging.

Troubleshooting Common Issues

Server Not Appearing in Claude

Problem: Server configured but doesn’t show up in Claude’s tool list.

Solutions:

  1. Check absolute paths: Verify claude_desktop_config.json uses full paths:

    # macOS/Linux
    pwd  # Get current directory
    # Use the full path like /Users/yourname/projects/server/build/index.js
  2. Verify build output: Make sure TypeScript compiled successfully:

    npm run build
    ls build/  # Should see index.js
  3. Check Claude Desktop logs:

    • macOS: ~/Library/Logs/Claude/
    • Windows: %APPDATA%\Claude\logs\

    Look for error messages about your server.

  4. Restart Claude Desktop completely: Quit fully (not just close window) and reopen.

"Module not found" Errors

Problem: Error: Cannot find module '@modelcontextprotocol/sdk'

Solutions:

  1. Verify package.json has "type": "module":

    {
        "type": "module"
    }
  2. Use .js extensions in imports:

    // ✅ Correct
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    
    // ❌ Wrong
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp";
  3. Check tsconfig.json module settings:

    {
        "compilerOptions": {
            "module": "ES2022",
            "moduleResolution": "bundler"
        }
    }

Tools Not Executing

Problem: Server appears in Claude but tools don’t work.

Solutions:

  1. Check tool registration:

    // Make sure you're calling registerTool, not just defining it
    server.registerTool(
        "tool-name",
        {
            /* config */
        },
        async () => {
            /* handler */
        }
    );
  2. Verify Zod schemas (and describe every field so Claude knows how to call the tool):

    // ❌ Valid, but gives Claude no guidance about the parameter
    inputSchema: {
        name: z.string()
    }

    // ✅ Better
    inputSchema: {
        name: z.string().describe("User name")
    }
  3. Check for handler errors:

    // Add logging to your handler
    async ({ param }) => {
        console.error("Tool called with:", param);
        try {
            const result = await operation();
            console.error("Tool succeeded:", result);
            return result;
        } catch (error) {
            console.error("Tool failed:", error);
            throw error;
        }
    };

Environment Variables Not Loading

Problem: process.env.VARIABLE is undefined.

Solutions:

  1. Check claude_desktop_config.json env section:

    {
        "mcpServers": {
            "your-server": {
                "command": "node",
                "args": ["path/to/server.js"],
                "env": {
                    "API_KEY": "your_key_here"
                }
            }
        }
    }
  2. For local testing, use dotenv:

    npm install dotenv
    import "dotenv/config";
    
    const apiKey = process.env.API_KEY;
    if (!apiKey) {
        throw new Error("API_KEY required");
    }
  3. Verify variable names match exactly (case-sensitive).

Conclusion: Your MCP Server Journey

Let me give you the specific numbers from my own experience:

Time invested:

  • First server (calculator): 3 hours
  • GitHub PR analyzer: 5 hours
  • Pricing engine: 4 hours
  • Customer enrichment: 6 hours
  • Total: ~18 hours

Time saved weekly:

  • PR reviews: 5 hours
  • Customer quotes: 4 hours
  • Customer analysis: 3 hours
  • Miscellaneous workflows: 3 hours
  • Total: ~15 hours/week

ROI: Broke even in less than 2 weeks. Now saving 60+ hours per month.

Your Next Steps

Start simple: Build the calculator server from this guide. Get comfortable with the workflow.

Identify your pain point: What repetitive task costs you the most time? That’s your second server.

Iterate: My first servers were rough. I refined them based on real usage. Don’t aim for perfection—aim for useful.

The Bigger Picture

MCP servers aren’t just about saving time. They’re about fundamentally changing how you work with AI.

Before custom servers, I spent hours explaining context to Claude. Now Claude knows my systems, my data, my business rules. Conversations are faster, more accurate, and more valuable.

You’ve seen the code. You understand the patterns. The question is: what will you build?

Start today. Build your first server. Save your first hour. Then come back and tell me what you automated.
