
Production Deployment

In production, mcpr runs in direct HTTP mode — it binds to a port and receives traffic directly, without a tunnel. You deploy it next to your MCP server, behind your load balancer or reverse proxy, just like any other backend service.

mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080

mcpr listens on port 8080 and proxies MCP requests to your backend. No tunnel, no relay, no external dependency.

With widgets:

mcpr run --mcp http://localhost:9000 --widgets http://localhost:4444 --no-tunnel --port 8080

Docker

The Docker image sets MCPR_NO_TUI=1 by default, so the TUI is automatically disabled. Logs go to stderr as structured JSON.

With CLI args:

docker run -d -p 8080:8080 -p 9901:9901 ghcr.io/cptrodgers/mcpr:latest \
  run --mcp http://host.docker.internal:9000 \
  --no-tunnel --port 8080

With a config file:

docker run -d -p 3000:3000 -p 9901:9901 \
  -v ./mcpr.toml:/app/mcpr.toml \
  ghcr.io/cptrodgers/mcpr:latest \
  run --no-tunnel

The container’s working directory is /app, so mcpr finds mcpr.toml automatically.
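A config file mirrors the CLI flags. Here is a minimal sketch matching the docker run example above; the mcp, no_tunnel, and port keys are the same ones shown in the cloud-sync example further down this page, and the specific values are assumptions for this setup:

```toml
# mcpr.toml, mounted at /app/mcpr.toml inside the container
mcp = "http://host.docker.internal:9000"
no_tunnel = true
port = 3000
```

Port 3000 here corresponds to the -p 3000:3000 mapping in the docker run command above.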

With --network host (simplest when your MCP server listens on localhost: the container shares the host's network, so localhost is reachable directly):

docker run -d --network host \
  -v ./mcpr.toml:/app/mcpr.toml \
  ghcr.io/cptrodgers/mcpr:latest \
  run --no-tunnel

Docker Compose

services:
  mcp-server:
    build: ./my-mcp-server
    ports:
      - "9000:9000"
  mcpr:
    image: ghcr.io/cptrodgers/mcpr:latest
    ports:
      - "8080:8080"
      - "9901:9901"
    command:
      - "run"
      - "--mcp"
      - "http://mcp-server:9000"
      - "--no-tunnel"
      - "--port"
      - "8080"
      - "--admin-bind"
      - "0.0.0.0:9901"
    depends_on:
      - mcp-server

Kubernetes

mcpr handles SIGTERM gracefully — it stops accepting new connections, waits for in-flight requests to complete (up to --drain-timeout), then exits cleanly.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcpr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcpr
  template:
    metadata:
      labels:
        app: mcpr
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: mcpr
          image: ghcr.io/cptrodgers/mcpr:latest
          args:
            - "run"
            - "--mcp"
            - "http://mcp-server:9000"
            - "--no-tunnel"
            - "--port"
            - "8080"
            - "--admin-bind"
            - "0.0.0.0:9901"
            - "--drain-timeout"
            - "25"
          ports:
            - name: proxy
              containerPort: 8080
            - name: admin
              containerPort: 9901
          livenessProbe:
            httpGet:
              path: /healthz
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: mcpr
spec:
  selector:
    app: mcpr
  ports:
    - name: proxy
      port: 8080
      targetPort: 8080
    - name: admin
      port: 9901
      targetPort: 9901

Behind Nginx

If you already run Nginx for TLS termination:

server {
    listen 443 ssl;
    server_name mcp.yourapp.com;

    ssl_certificate     /etc/ssl/certs/yourapp.com.pem;
    ssl_certificate_key /etc/ssl/private/yourapp.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # SSE support: disable buffering and keep long-lived connections open
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 86400s;
    }
}

Then run mcpr locally:

mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080

Your MCP app is available at https://mcp.yourapp.com.
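Outside of containers, a process supervisor keeps mcpr running next to Nginx. A minimal systemd sketch — the unit name, binary path, and file location are assumptions, not part of the mcpr distribution:

```ini
# /etc/systemd/system/mcpr.service (hypothetical path and install location)
[Unit]
Description=mcpr MCP proxy
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080
Restart=on-failure
# systemd sends SIGTERM on stop, which triggers mcpr's graceful drain;
# allow slightly longer than the default 30s --drain-timeout
TimeoutStopSec=35

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now mcpr.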

Admin API

mcpr exposes an admin API on a separate port (default 127.0.0.1:9901) for health probes and operational tooling:

Method   Path       Description
GET      /healthz   Liveness — 200 unless shutting down
GET      /ready     Readiness — 503 while draining or while the MCP upstream is disconnected
GET      /version   Version info as JSON

Configure the admin bind address with --admin-bind or admin_bind in mcpr.toml. For Kubernetes, use --admin-bind 0.0.0.0:9901 to expose the admin port to probes.

Disable with --admin-bind none.

Graceful Shutdown

mcpr handles SIGTERM and SIGINT for graceful shutdown:

  1. Marks /ready as 503 (load balancers stop sending traffic)
  2. Stops accepting new connections
  3. Waits for in-flight requests to complete (up to --drain-timeout, default 30s)
  4. Flushes logs
  5. Exits cleanly

Set terminationGracePeriodSeconds in Kubernetes to be slightly longer than --drain-timeout.

Logging

In headless mode (no TUI), mcpr writes structured logs to stderr:

# JSON format (default) — machine-parseable for log aggregators
mcpr run --mcp http://localhost:9000 --no-tui --log-format json
# Pretty format — human-readable for debugging
mcpr run --mcp http://localhost:9000 --no-tui --log-format pretty

The Docker image disables TUI by default, so logs go to stderr automatically.

Config Validation

Validate your config file before deploying:

mcpr validate -c /etc/mcpr/mcpr.toml

Returns exit code 0 on success, 1 on errors. Use this in CI/CD pipelines.
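Because validation is exit-code driven, it slots into any CI system. A sketch as a GitHub Actions step — the step name and config path are assumptions for this example:

```yaml
# Hypothetical CI step: the job fails if mcpr validate exits non-zero
- name: Validate mcpr config
  run: mcpr validate -c ./mcpr.toml
```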

Event Export

Pipe events to your logging infrastructure:

mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080 \
  --events 2>/dev/null >> /var/log/mcpr/events.jsonl
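Downstream tooling can then consume the JSONL file line by line. A sketch using grep on a synthetic file — the {"type": ...} event shape here is an assumption for illustration, not the documented mcpr event schema:

```shell
# Write a few sample events (hypothetical shape), then count the request events
cat > /tmp/events.jsonl <<'EOF'
{"type":"request","method":"tools/call"}
{"type":"request","method":"tools/list"}
{"type":"error","message":"upstream timeout"}
EOF

grep -c '"type":"request"' /tmp/events.jsonl   # prints 2
```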

Or sync directly to mcpr Cloud for dashboards and alerting:

mcpr.toml

mcp = "http://localhost:9000"
no_tunnel = true
port = 8080

[cloud]
token = "mcpr_xxxxxxxx"

Architecture

AI Client (ChatGPT / Claude)
        │ HTTPS
Nginx / LB (TLS termination)
        │ HTTP
mcpr proxy (:8080)                      Admin API (:9901)
  ├── JSON-RPC → MCP Server (:9000)       ├── /healthz
  └── Assets → Widget Server (:4444)      ├── /ready
  │                                       └── /version
  Event Emitter → stderr / file / cloud.mcpr.app

mcpr is a single static binary with no runtime dependencies. It’s designed to run as a sidecar next to your MCP server.