MCP Server Tutorial: How to Expose Tools & UI
Goal: stand up an MCP server that defines typed tools, handles requests from ChatGPT, and returns inline UI descriptors so your ChatGPT App can render forms, tables, and cards in-chat.
If you’re new to the stack, start with MCP (Model Context Protocol) and the Apps SDK Tutorial. For UI patterns, see Inline UI & Widgets.
What you’ll build
- A minimal HTTP MCP server with one tool: `summarize`
- Typed input/output validation
- Inline UI: a form to collect inputs, and a result card
- Production basics: logging, error handling, rate limits, and secrets hygiene
1) Project scaffold
```bash
mkdir mcp-server && cd mcp-server
npm init -y
npm i express zod pino helmet express-rate-limit
```

```
/mcp-server
├─ server.js        # HTTP server + routes
├─ tools/
│  └─ summarize.js  # Your tool implementation
├─ schemas.js       # Zod schemas for I/O
├─ ui.js            # UI descriptors (form/card/table)
└─ .env             # Secrets/env (never commit)
```
Prefer Python? Mirror these concepts in FastAPI/Pydantic. The MCP shapes stay the same.
2) Define typed I/O with Zod
schemas.js

```js
import { z } from "zod";

export const SummarizeInput = z.object({
  text: z.string().min(40, "Please paste at least 40 characters."),
  length: z.enum(["short", "medium", "detailed"]).default("short")
});

export const SummarizeOutput = z.object({
  bullets: z.array(z.string()).min(1)
});
```
Typed schemas make validation errors user-friendly and protect your tools. See Security for ChatGPT Apps.
3) Implement the tool
tools/summarize.js

```js
import { SummarizeInput, SummarizeOutput } from "../schemas.js";

export async function summarize(reqBody) {
  const input = SummarizeInput.parse(reqBody?.input ?? {});

  // Replace with your LLM/API call or business logic:
  const sents = input.text.split(/[.!?]\s+/).filter(Boolean);
  const take = input.length === "short" ? 3 : input.length === "medium" ? 5 : 8;
  const bullets = sents.slice(0, take).map((s) => "• " + s.trim());

  return SummarizeOutput.parse({ bullets });
}
```
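The splitting heuristic is easy to sanity-check in plain Node before wiring it to the server. This standalone sketch mirrors the sentence logic above (no Zod, no HTTP):

```js
// Mirrors the sentence-splitting heuristic used in summarize():
function bulletize(text, length = "short") {
  const sents = text.split(/[.!?]\s+/).filter(Boolean);
  const take = length === "short" ? 3 : length === "medium" ? 5 : 8;
  return sents.slice(0, take).map((s) => "• " + s.trim());
}

const sample = "One. Two. Three. Four. Five.";
console.log(bulletize(sample, "short").length); // → 3
```

A quick script like this catches off-by-one and regex mistakes cheaply before you debug them through the full request loop.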
4) Describe inline UI (form + result card)
ui.js

```js
export const UI = {
  form: {
    type: "form",
    title: "Summarize Text",
    fields: [
      { id: "text", type: "textarea", label: "Paste text", minLength: 40, required: true },
      {
        id: "length",
        type: "select",
        label: "Summary length",
        options: [
          { label: "Short (≈3 bullets)", value: "short" },
          { label: "Medium (≈5 bullets)", value: "medium" },
          { label: "Detailed (≈8 bullets)", value: "detailed" }
        ],
        default: "short"
      }
    ],
    submitLabel: "Summarize"
  },
  preview: {
    type: "card",
    title: "Summary",
    itemsBinding: "output.bullets"
  }
};
```
These are descriptors the Apps SDK can render. More patterns: Inline UI & Widgets.
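A table follows the same descriptor pattern as the form and card above. The field names here (`columns`, `rowsBinding`) are illustrative assumptions, so check the Inline UI & Widgets reference for the canonical shape:

```js
// Hypothetical table descriptor, mirroring the form/card pattern:
const tableUI = {
  type: "table",
  title: "Summaries",
  columns: [
    { id: "source", label: "Source" },
    { id: "bullets", label: "Key points" }
  ],
  rowsBinding: "output.rows" // where the tool result's rows would live
};
```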
5) Wire the MCP endpoints
server.js

```js
import express from "express";
import helmet from "helmet";
import rateLimit from "express-rate-limit";
import pino from "pino";
import { summarize } from "./tools/summarize.js";
import { UI } from "./ui.js";

const log = pino();
const app = express();

app.use(express.json({ limit: "1mb" }));
app.use(helmet());
app.use(rateLimit({ windowMs: 60_000, max: 60 }));

// Health probe
app.get("/health", (_, res) => res.json({ ok: true }));

// UI route: return descriptors ChatGPT can render
app.get("/ui/summarize", (_, res) => res.json({ type: "ui_bundle", ...UI }));

// MCP tool: summarize
app.post("/tools/summarize", async (req, res) => {
  try {
    const output = await summarize(req.body);
    res.json({ type: "tool_result", tool: "summarize", output });
  } catch (err) {
    log.warn({ err }, "tool_error");
    res.status(400).json({
      type: "tool_error",
      tool: "summarize",
      message: err?.message || "Invalid input"
    });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => log.info(`MCP server on :${PORT}`));
```
Contract recap
- UI: `/ui/*` returns widgets (forms/tables/cards).
- Tools: `/tools/*` receive `{ input }` and return `tool_result` or `tool_error`.
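Concretely, a successful round trip and a validation failure look like this on the wire (shapes taken from the handlers above; the bullet strings are sample data):

```js
// What POST /tools/summarize returns on success:
const success = {
  type: "tool_result",
  tool: "summarize",
  output: { bullets: ["• First point", "• Second point"] }
};

// ...and on validation failure (HTTP 400):
const failure = {
  type: "tool_error",
  tool: "summarize",
  message: "Please paste at least 40 characters."
};
```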
6) Connect from your App (Apps SDK side)
Your App’s route calls the MCP endpoints:
```js
import ui from "./fetch-ui.js"; // fetches /ui/summarize at startup

export default {
  name: "Summarizer",
  routes: [
    {
      path: "/start",
      render: ui.form,
      onSubmit: async ({ text, length }) => {
        const r = await fetch(process.env.MCP_BASE + "/tools/summarize", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ input: { text, length } })
        });
        const j = await r.json();
        if (j.type === "tool_error") throw new Error(j.message);
        return { render: { ...ui.preview, data: { output: j.output } } };
      }
    }
  ]
};
```
Full SDK walkthrough: Apps SDK Tutorial
7) Security & privacy essentials
- Least privilege: if you don’t need user data/scopes, don’t ask.
- Secrets: store in env vars; never ship keys to the client.
- Logging: avoid raw PII; mask payloads; set log retention.
- Abuse controls: input length caps, rate limits, and timeouts.
- Error copy: actionable and safe (no stack traces to users).
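For the logging bullet, one lightweight approach is to mask long free-text fields before they reach the logger. This is a sketch, not a full PII scrubber, and the field names are illustrative; pino's built-in `redact` option is another route:

```js
// Truncate and mask long string fields in a payload before logging.
// Field names are whatever your schemas define; this handles them generically.
function maskForLog(payload, maxLen = 32) {
  const out = {};
  for (const [key, value] of Object.entries(payload ?? {})) {
    out[key] =
      typeof value === "string" && value.length > maxLen
        ? value.slice(0, maxLen) + `… [${value.length} chars masked]`
        : value;
  }
  return out;
}

// Usage in a handler:
// log.info({ input: maskForLog(req.body?.input) }, "tool_call");
```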
Playbooks:
- Security for ChatGPT Apps
- Data Privacy in ChatGPT Agents & Apps
- Handling Secrets & API Keys
- Compliance & PII
8) Reliability & performance
- Timeouts: set server and upstream timeouts; show progress in UI.
- Retries/backoff for flaky APIs; mark idempotent writes.
- Circuit breakers to shed load when dependencies fail.
- Observability: instrument query → view → submit → success.
Measure it: Analytics for ChatGPT Apps
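The retries/backoff bullet above can be sketched in a few lines of plain Node. The attempt count and backoff constants here are illustrative defaults, and the wrapped call should be idempotent:

```js
// Retry an async call with exponential backoff. Only safe for
// idempotent operations (reads, or writes with an idempotency key).
async function withRetry(fn, { attempts = 3, baseMs = 200 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Wrap flaky upstream calls with it, e.g. `withRetry(() => fetch(url))`; pair it with a timeout so retries don't stack into long user-visible waits.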
9) Local test & deploy
Local
```bash
node server.js
curl http://localhost:3000/health

curl -X POST http://localhost:3000/tools/summarize \
  -H "Content-Type: application/json" \
  -d '{"input":{"text":"Long text ...","length":"short"}}'
```
Deploy
- Any HTTPS host (Render, Fly, Cloud Run, Vercel).
- Set `MCP_BASE` in your App to your server URL.
- Run smoke tests of the form → tool → result loop in ChatGPT.
10) Submission-ready checklist
- ✅ One hero tool with a clear UI flow
- ✅ Validation + friendly error states
- ✅ Rate limits, timeouts, masked logs
- ✅ No unnecessary scopes
- ✅ Accurate listing assets (title, desc, screenshots, prompts)
Ship it: ChatGPT App Submission → App Verification & Review
FAQ
Can I stream partial results?
Yes. Return incremental messages and update the UI as they arrive; use loading states for long jobs.
How do I handle big inputs?
Chunk on the server, or pre-validate size and guide the user to scope down.
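Server-side chunking can be as simple as splitting on sentence boundaries up to a size budget. The 4,000-character default below is an arbitrary placeholder; tune it to your model's context limits:

```js
// Split text into chunks of at most maxChars, breaking on sentence
// boundaries so each chunk stays coherent for summarization.
function chunkText(text, maxChars = 4000) {
  const sentences = text.split(/(?<=[.!?])\s+/);
  const chunks = [];
  let current = "";
  for (const s of sentences) {
    if (current && current.length + s.length + 1 > maxChars) {
      chunks.push(current);
      current = s;
    } else {
      current = current ? current + " " + s : s;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Summarize each chunk, then summarize the concatenated chunk summaries (map-reduce style) to keep any single call within limits.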
Can Agents call my MCP tools too?
Yes. For long, multi-step workflows, pair with AgentKit and Agent Orchestration Workflows.
