# Production Deployment
In production, mcpr runs in direct HTTP mode — it binds to a port and receives traffic directly, without a tunnel. You deploy it next to your MCP server, behind your load balancer or reverse proxy, just like any other backend service.
## Direct HTTP mode

```sh
mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080
```

mcpr listens on port 8080 and proxies MCP requests to your backend. No tunnel, no relay, no external dependency.
With widgets:
```sh
mcpr run --mcp http://localhost:9000 --widgets http://localhost:4444 --no-tunnel --port 8080
```

## Docker
The Docker image sets `MCPR_NO_TUI=1` by default, so the TUI is automatically disabled. Logs go to stderr as structured JSON.
With CLI args:
```sh
docker run -d -p 8080:8080 -p 9901:9901 ghcr.io/cptrodgers/mcpr:latest \
  run --mcp http://host.docker.internal:9000 \
  --no-tunnel --port 8080
```

With a config file:
```sh
docker run -d -p 3000:3000 -p 9901:9901 \
  -v ./mcpr.toml:/app/mcpr.toml \
  ghcr.io/cptrodgers/mcpr:latest \
  run --no-tunnel
```

The container's working directory is `/app`, so mcpr finds `mcpr.toml` automatically.
With `--network host` (simplest — your MCP server on localhost is reachable directly):
```sh
docker run -d --network host \
  -v ./mcpr.toml:/app/mcpr.toml \
  ghcr.io/cptrodgers/mcpr:latest \
  run --no-tunnel
```

## Docker Compose
```yaml
services:
  mcp-server:
    build: ./my-mcp-server
    ports:
      - "9000:9000"

  mcpr:
    image: ghcr.io/cptrodgers/mcpr:latest
    ports:
      - "8080:8080"
      - "9901:9901"
    command:
      - "run"
      - "--mcp"
      - "http://mcp-server:9000"
      - "--no-tunnel"
      - "--port"
      - "8080"
      - "--admin-bind"
      - "0.0.0.0:9901"
    depends_on:
      - mcp-server
```

## Kubernetes
mcpr handles SIGTERM gracefully — it stops accepting new connections, waits for in-flight requests to complete (up to `--drain-timeout`), then exits cleanly.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcpr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcpr
  template:
    metadata:
      labels:
        app: mcpr
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: mcpr
          image: ghcr.io/cptrodgers/mcpr:latest
          args:
            - "run"
            - "--mcp"
            - "http://mcp-server:9000"
            - "--no-tunnel"
            - "--port"
            - "8080"
            - "--admin-bind"
            - "0.0.0.0:9901"
            - "--drain-timeout"
            - "25"
          ports:
            - name: proxy
              containerPort: 8080
            - name: admin
              containerPort: 9901
          livenessProbe:
            httpGet:
              path: /healthz
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: mcpr
spec:
  selector:
    app: mcpr
  ports:
    - name: proxy
      port: 8080
      targetPort: 8080
    - name: admin
      port: 9901
      targetPort: 9901
```

## Behind Nginx
If you already run Nginx for TLS termination:
```nginx
server {
    listen 443 ssl;
    server_name mcp.yourapp.com;

    ssl_certificate     /etc/ssl/certs/yourapp.com.pem;
    ssl_certificate_key /etc/ssl/private/yourapp.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # SSE support
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 86400s;
    }
}
```

Then run mcpr locally:
```sh
mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080
```

Your MCP app is available at `https://mcp.yourapp.com`.
## Health & admin endpoints
mcpr exposes an admin API on a separate port (default `127.0.0.1:9901`) for health probes and operational tooling:
| Method | Path | Description |
|---|---|---|
| GET | `/healthz` | Liveness — 200 unless shutting down |
| GET | `/ready` | Readiness — 503 while draining or MCP upstream disconnected |
| GET | `/version` | Version info as JSON |
Configure the admin bind address with `--admin-bind` or `admin_bind` in `mcpr.toml`. For Kubernetes, use `--admin-bind 0.0.0.0:9901` to expose the admin port to probes.
Disable with `--admin-bind none`.
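The same admin endpoint can back a container healthcheck in Docker Compose. A minimal sketch, assuming curl is available inside the image (verify this; the mcpr image may be minimal) and the admin API bound to `0.0.0.0:9901` as in the Compose example above:

```yaml
services:
  mcpr:
    # ...image, ports, and command as in the Compose example above...
    healthcheck:
      # /healthz returns 200 unless mcpr is shutting down
      test: ["CMD", "curl", "-f", "http://localhost:9901/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3
```

With this in place, `depends_on: { mcpr: { condition: service_healthy } }` lets downstream services wait for mcpr to be up.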
## Graceful shutdown
mcpr handles SIGTERM and SIGINT for graceful shutdown:
- Marks `/ready` as 503 (load balancers stop sending traffic)
- Stops accepting new connections
- Waits for in-flight requests to complete (up to `--drain-timeout`, default 30s)
- Flushes logs
- Exits cleanly
Set `terminationGracePeriodSeconds` in Kubernetes to be slightly longer than `--drain-timeout`.
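The same headroom rule applies to any supervisor. A minimal systemd unit sketch (the install path and config location are assumptions; mcpr reads `mcpr.toml` from its working directory, so `WorkingDirectory` points at the config):

```ini
# Sketch only: adjust paths to your setup.
[Unit]
Description=mcpr MCP proxy
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/mcpr run --no-tunnel --port 8080 --drain-timeout 25
WorkingDirectory=/etc/mcpr
Environment=MCPR_NO_TUI=1
# Slightly longer than --drain-timeout, so systemd doesn't SIGKILL mid-drain.
TimeoutStopSec=30
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

systemd sends SIGTERM on `systemctl stop`, which triggers the drain sequence above.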
## Structured logging
In headless mode (no TUI), mcpr writes structured logs to stderr:
```sh
# JSON format (default) — machine-parseable for log aggregators
mcpr run --mcp http://localhost:9000 --no-tui --log-format json

# Pretty format — human-readable for debugging
mcpr run --mcp http://localhost:9000 --no-tui --log-format pretty
```

The Docker image disables the TUI by default, so logs go to stderr automatically.
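Because each JSON log line is a standalone object, downstream filtering needs no special tooling. A minimal sketch in Python (the `level` field name is an assumption; check your actual log schema):

```python
import json
import sys


def error_events(lines):
    """Parse JSONL log lines and keep only error-level events.

    The "level" field name is an assumption about mcpr's log schema.
    """
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g. stack traces)
        if event.get("level") == "error":
            events.append(event)
    return events


if __name__ == "__main__":
    # Usage: mcpr run ... 2>&1 | python filter_errors.py
    for event in error_events(sys.stdin):
        print(json.dumps(event))
```

The same filtering is usually done with `jq 'select(.level == "error")'` in a pipeline; the script form is handy when you need custom aggregation.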
## Config validation
Validate your config file before deploying:
```sh
mcpr validate -c /etc/mcpr/mcpr.toml
```

Returns exit code 0 on success, 1 on errors. Use this in CI/CD pipelines.
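In CI this becomes a one-line gate. A sketch as a GitHub Actions step (the step name is illustrative, and the runner is assumed to have the mcpr binary installed):

```yaml
# Hypothetical CI step; any CI system that checks exit codes works the same way.
- name: Validate mcpr config
  run: mcpr validate -c ./mcpr.toml  # non-zero exit fails the pipeline
```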
## Structured events in production
Pipe events to your logging infrastructure:
```sh
mcpr run --mcp http://localhost:9000 --no-tunnel --port 8080 \
  --events 2>/dev/null >> /var/log/mcpr/events.jsonl
```

Or sync directly to mcpr Cloud for dashboard and alerting:
```toml
mcp = "http://localhost:9000"
no_tunnel = true
port = 8080

[cloud]
token = "mcpr_xxxxxxxx"
```

## Architecture in production
```
AI Client (ChatGPT / Claude)
  │
  │ HTTPS
  ▼
Nginx / LB (TLS termination)
  │
  │ HTTP
  ▼
mcpr proxy (:8080)                        Admin API (:9901)
  ├── JSON-RPC → MCP Server (:9000)         ├── /healthz
  └── Assets → Widget Server (:4444)        ├── /ready
  │                                         └── /version
  ▼
Event Emitter → stderr / file / cloud.mcpr.app
```

mcpr is a single static binary with no runtime dependencies. It's designed to run as a sidecar next to your MCP server.
