Mar 12, 2026

MCP over SSE is easier than stdio for web workloads

Why I picked SSE transport for our FastMCP server, the one reconnection problem it creates, and how we handled auth.

mcp · ai · python

When I first looked at MCP transport options, stdio felt like the obvious choice — it's what the docs default to. Then I thought about what "deploy this to a server" actually means for stdio and switched to SSE.

Why SSE

Stdio MCP works great for local tooling. For a server that lives at mcp.time-4-action.com behind nginx, you need a persistent HTTP connection anyway. SSE gives you that without spawning subprocess trees, and it reverse-proxies cleanly:

location /mcp {
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;  # nginx buffers proxied responses by default, which stalls the event stream
    chunked_transfer_encoding on;
}

Nothing exotic: no custom modules, no process-supervision gymnastics.

The reconnection problem

SSE connections drop. The client needs to reconnect and resume — and if it doesn't, a long-running tool call just disappears. FastMCP handles the protocol side, but you need to make sure your tool handlers are idempotent so a retry doesn't double-execute a write.
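The resume mechanism SSE gives you is the `Last-Event-ID` header: the server tags each event with an `id:` field, and a reconnecting client echoes the last id it saw so the server can replay from there. FastMCP handles this internally; purely to illustrate the wire format, here is a stdlib-only sketch (the helper name `format_sse_event` is mine, not a FastMCP API):

```python
def format_sse_event(data: str, event_id: int) -> str:
    # SSE wire format: an optional "id:" line, one or more "data:" lines,
    # terminated by a blank line. The id is what a reconnecting client
    # sends back in the Last-Event-ID request header.
    lines = [f"id: {event_id}"]
    lines += [f"data: {line}" for line in data.splitlines() or [""]]
    return "\n".join(lines) + "\n\n"
```

Multi-line payloads become multiple `data:` lines under the same id, which is why the blank-line terminator matters: it is the only event boundary the client has.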

We made warehouse sync tools read-only from the MCP side. The only writes go through separate API endpoints with their own deduplication keys.
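Those deduplication keys can be as simple as a client-supplied idempotency key checked before the write executes. A minimal sketch of the pattern (the class and storage are illustrative, not our actual API):

```python
class SyncWriter:
    """Applies a write at most once per idempotency key."""

    def __init__(self) -> None:
        self._seen: dict[str, str] = {}  # key -> result of the first execution

    def apply(self, key: str, payload: str) -> str:
        # A retried request carrying the same key gets the original
        # result back instead of double-executing the write.
        if key in self._seen:
            return self._seen[key]
        result = f"wrote {payload}"  # stand-in for the real warehouse write
        self._seen[key] = result
        return result
```

In production the seen-key store has to outlive the process (a database table with a unique constraint on the key does the job); an in-memory dict only protects against retries hitting the same worker.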

Auth

Auth0 JWT validation on every request, RBAC at the tool level. Not at the endpoint level — at the tool level. A client with the warehouse:read scope can call get_product_stock. It cannot call list_orders, no matter what the model decides to try.

from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("warehouse")

@mcp.tool()
async def get_product_stock(sku: str, ctx: Context) -> str:
    require_permission(ctx, "warehouse:read")
    ...

The permission check is inside the tool, not in middleware. That means it's tested with the tool, not separately.
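A `require_permission` along these lines is short enough to write yourself. This sketch assumes the scopes granted by the validated JWT have already been stashed somewhere reachable from the request context — the `scopes` attribute is my assumption for illustration, not a FastMCP field:

```python
from types import SimpleNamespace

def require_permission(ctx, permission: str) -> None:
    # ctx.scopes is assumed to hold the scopes from the validated
    # Auth0 JWT for this request (an assumption -- wire it up however
    # your auth middleware stores claims).
    if permission not in getattr(ctx, "scopes", set()):
        raise PermissionError(f"missing required scope: {permission}")

# Hypothetical request context with only warehouse:read granted.
ctx = SimpleNamespace(scopes={"warehouse:read"})
require_permission(ctx, "warehouse:read")  # passes silently
```

Raising inside the tool means the denial surfaces as a tool error the model can see, and a unit test for the tool exercises the check for free.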

Bottom line

SSE is the right default for anything you're deploying. Stdio is for local scripts. The reconnection semantics take ten minutes to think through; the operational simplicity is permanent.