Nginx for Rails Apps — Reverse Proxy Configuration

Nginx sits in front of nearly every production Rails application, and yet most developers configure it once, paste a block from a blog post, and never think about it again until something breaks at 2 AM. That is a mistake. Nginx is doing more work than you realise — terminating SSL, compressing responses, serving static files, buffering slow clients, setting security headers, and shielding your Puma processes from the raw chaos of the public internet. This guide covers each of those responsibilities in concrete detail: upstream configuration, SSL termination, gzip tuning, static asset serving, security headers, buffer settings, and the common directives you will actually need. It connects to the broader Rails deployment topic, where Nginx is one piece of a larger production stack. The Nginx documentation is the authoritative reference, but it reads like a specification, not a tutorial. After configuring Nginx for Rails apps across a decade of production deployments — from single-box side projects to multi-server clusters behind load balancers — what I want to give you here is the practical subset: what to set, why, and what happens when you get it wrong.
Why Nginx sits in front of Puma
Puma is an excellent application server. It handles concurrent Ruby requests well, manages worker processes, and integrates tightly with Rails. What Puma is not designed to do is deal directly with the internet.
The internet sends slow clients, incomplete requests, enormous headers, malformed payloads, SSL handshakes, and static asset requests — none of which should reach your Ruby process. Nginx handles all of this in C, using event-driven I/O, at a fraction of the memory cost. A single Nginx worker can handle thousands of concurrent connections. A single Puma worker handles one request per thread.
Without Nginx, your Puma workers spend time waiting on slow client uploads, serving static files (CSS, JavaScript, images) that do not require Ruby, and performing SSL cryptography that could be offloaded. With Nginx in front, Puma only sees complete, decrypted requests for dynamic content — which is all it should ever have to process.
Think of it this way: Nginx is the bouncer. Puma is the bartender. You do not want the bartender checking IDs at the door.
The upstream block
The upstream block tells Nginx where your Puma server is listening. For a typical Rails deployment using Unix sockets:
```nginx
upstream rails_app {
  server unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock fail_timeout=0;
}
```
Why a Unix socket instead of 127.0.0.1:3000? Two reasons. First, Unix sockets avoid the TCP stack entirely — no connection overhead, no port allocation, no TIME_WAIT states under heavy load. Second, file permissions on the socket file give you access control that TCP ports do not provide.
The `fail_timeout=0` parameter tells Nginx to keep trying the upstream even after failures. For a single-backend deployment, this is what you want — there is nowhere else to route traffic, so Nginx should keep attempting the connection and let the request fail visibly rather than marking the backend down and returning generic 502s for the duration of the timeout.
If you are running multiple Puma instances (on different sockets, perhaps for blue-green deploys), you can list multiple servers in the upstream block. Nginx will round-robin between them by default.
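As a sketch, a blue-green pair might look like this (the two socket names are hypothetical; adjust them to your deploy layout):

```nginx
upstream rails_app {
  # Nginx round-robins across these servers by default.
  server unix:///home/deploy/myapp/shared/tmp/sockets/puma_blue.sock fail_timeout=0;
  server unix:///home/deploy/myapp/shared/tmp/sockets/puma_green.sock fail_timeout=0;
}
```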
The server block: basic structure
A minimal production server block:
```nginx
server {
  listen 80;
  server_name example.com www.example.com;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl http2;
  server_name example.com www.example.com;
  root /home/deploy/myapp/current/public;

  ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  try_files $uri/index.html $uri @rails;

  location @rails {
    proxy_pass http://rails_app;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
  }
}
```
Walk through what this does. The first block redirects all HTTP traffic to HTTPS — no exceptions, no conditional logic. The second block listens on 443 with SSL and HTTP/2 enabled. The `root` directive points to your Rails `public/` directory, and `try_files` checks for static files before proxying to Puma. If a file exists at `public/about.html`, Nginx serves it directly. If not, the request falls through to the `@rails` named location and hits Puma.
The `proxy_set_header` lines are not optional. Without `X-Forwarded-For`, your Rails app sees every request as coming from 127.0.0.1. Without `X-Forwarded-Proto`, Rails does not know the request was originally HTTPS, which breaks `force_ssl`, secure cookies, and any URL generation that depends on the scheme.
SSL termination
Nginx handles the SSL handshake so Puma does not have to. The computational cost of TLS is non-trivial — especially the initial handshake — and offloading it to Nginx keeps your Ruby workers focused on application logic.
For Let's Encrypt certificates with Certbot, the certificate and key paths follow a standard pattern. Add session caching and modern protocol settings:
```nginx
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
```
The `ssl_session_cache` directive allows resumed connections, which skip the expensive key exchange on repeat visits. Ten megabytes of shared cache holds roughly 40,000 sessions. For most Rails applications, that is more than enough.
Do not include TLSv1.0 or TLSv1.1. They are deprecated, insecure, and modern browsers no longer support them. If someone complains that your site does not work on Internet Explorer 8, that is not a problem you need to solve.
Gzip compression
Gzip is free performance. Compressing HTML, JSON, CSS and JavaScript responses before sending them over the wire reduces transfer sizes by 60-80% for text-based content. The CPU cost on the server is negligible compared to the bandwidth savings.
```nginx
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types
  text/plain
  text/css
  text/javascript
  application/javascript
  application/json
  application/xml
  image/svg+xml;
```
A few things people get wrong here. `gzip_comp_level` goes from 1 (fastest, least compression) to 9 (slowest, most compression). Level 5 is the sweet spot for most workloads — you get 90% of the compression benefit at a fraction of the CPU cost of level 9. Going above 6 is almost never worth it.
`gzip_min_length 256` prevents Nginx from compressing tiny responses where the gzip header overhead would actually make the response larger. Set this too high and you miss compressible responses; too low and you waste CPU on responses that do not benefit.
`gzip_vary on` adds a `Vary: Accept-Encoding` header so intermediate caches store compressed and uncompressed versions separately. Without this, a cache might serve a gzip-compressed response to a client that did not request it.
Do not gzip images (JPEG, PNG, WebP) or other already-compressed formats. They do not compress further and you burn CPU for nothing. SVG is the exception — it is XML and compresses well.
Static asset serving
Rails generates fingerprinted assets during precompilation: `application-abc123.css`, `logo-def456.png`. These files live in `public/assets/` and are immutable — the fingerprint changes when the content changes. This means you can cache them aggressively.
```nginx
location /assets/ {
  expires max;
  add_header Cache-Control "public, immutable";
  access_log off;
  gzip_static on;
}
```
`expires max` sets a far-future expiry header. `Cache-Control: public, immutable` tells browsers and CDNs this file will never change. `access_log off` stops Nginx from writing a log entry for every CSS and JavaScript request, which reduces disk I/O on high-traffic sites. `gzip_static on` tells Nginx to look for a pre-compressed `.gz` file (created during `assets:precompile`) and serve it directly, avoiding runtime compression entirely.
This configuration means your Rails application never sees asset requests. Nginx serves them from disk with optimal caching headers and pre-compressed content. That is the entire point.
For user-uploaded files stored in public/uploads/ (if you are using local storage instead of S3), use a separate location block with less aggressive caching — user uploads can change or be replaced.
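A sketch of that location block, assuming local uploads under `public/uploads/` (the one-day lifetime is an arbitrary starting point, not a recommendation):

```nginx
location /uploads/ {
  # Cacheable, but revalidated after a day, since uploads can be replaced in place.
  expires 1d;
  add_header Cache-Control "public";
  access_log off;
}
```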
Security headers
Nginx is the right place to set HTTP security headers because it applies them to every response, including static files that Nginx serves directly without hitting Rails.
```nginx
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
# X-XSS-Protection is deprecated and ignored by modern browsers;
# omit it, or set it to "0", if you only serve modern clients.
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
```
The `always` parameter is important. Without it, Nginx only adds headers to successful responses (2xx and 3xx). Error pages — 404s, 500s — would be served without security headers, which is exactly when you want them most.
`X-Frame-Options: SAMEORIGIN` prevents your site from being embedded in iframes on other domains, which blocks clickjacking attacks. `X-Content-Type-Options: nosniff` prevents browsers from MIME-sniffing responses, which closes a class of content injection attacks. `Permissions-Policy` restricts browser features your application does not use.
Consider adding a Content-Security-Policy header as well, but be warned: CSP is powerful and easy to misconfigure. A restrictive CSP that blocks your own JavaScript will break your application. Start with a report-only policy, review the reports, and tighten incrementally.
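A report-only starting point might look like this (the policy is deliberately loose and the `/csp-reports` endpoint is a hypothetical placeholder):

```nginx
# Report-only: violations are reported, nothing is blocked yet.
add_header Content-Security-Policy-Report-Only "default-src 'self'; img-src 'self' data:; report-uri /csp-reports" always;
```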
Buffer and timeout settings
Nginx's default buffer sizes are conservative. For Rails applications that return large JSON responses, generate PDFs, or accept file uploads, you will need to tune them.
```nginx
proxy_buffer_size 16k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
client_max_body_size 25m;
client_body_timeout 12s;
client_header_timeout 12s;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
```
`proxy_buffer_size` holds the response headers from Puma. If your application sets many cookies or large headers, the default 4k or 8k buffer overflows and Nginx fails the request with a 502 and an "upstream sent too big header" entry in the error log. 16k handles most Rails applications. The `proxy_buffers` directive covers the response body; when a body outgrows those buffers, Nginx spools the remainder to a temporary file on disk, which is dramatically slower.
`client_max_body_size` controls the maximum upload size. The default is 1 MB, which means file uploads above 1 MB get a 413 error before they ever reach Rails. Set this to match your application's upload requirements.
The timeout values prevent slow clients from tying up connections indefinitely. Twelve seconds for client headers and body is generous — a well-behaved client sends these in under a second. The proxy timeouts should match or slightly exceed your longest-running Rails request. If you have endpoints that take 30 seconds (report generation, large exports), set `proxy_read_timeout` accordingly for those specific locations rather than globally.
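One way to scope those overrides, sketched with a hypothetical `/reports/` path (the `proxy_set_header` lines are repeated because they do not carry over from a sibling location block):

```nginx
location /reports/ {
  proxy_pass http://rails_app;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  # These overrides apply only to this location; the global limits stay put.
  proxy_read_timeout 120s;
  client_max_body_size 100m;
}
```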
Common directives people miss
A few directives that are easy to overlook but matter in production:
```nginx
server_tokens off;
```
This removes the Nginx version number from error pages and the Server header. It does not make you more secure — security through obscurity is not security — but it removes low-hanging information disclosure.
```nginx
location ~ /\. {
  deny all;
  access_log off;
  log_not_found off;
}
```
This blocks access to dotfiles — `.env`, `.git`, `.htaccess` — that should never be publicly accessible. If your application root accidentally exposes these, Nginx catches the request before it reaches the filesystem.
```nginx
location = /favicon.ico {
  access_log off;
  log_not_found off;
}
location = /robots.txt {
  access_log off;
  log_not_found off;
}
```
These suppress log noise for the two files that every bot on the internet requests. Your access logs will thank you.
What usually goes wrong
After configuring Nginx for Rails applications across dozens of production deployments, these are the failures I see on repeat:
- Missing `X-Forwarded-Proto` header. Rails thinks every request is HTTP, so `force_ssl` triggers infinite redirects, secure cookies are not set, and `url_for` generates HTTP links on an HTTPS site. One missing header, three different symptoms.
- `client_max_body_size` too low. Users upload a 5 MB image, get a 413 error, and you see nothing in your Rails logs because the request never reached Rails. The error only appears in the Nginx error log, which nobody is watching.
- Forgetting `gzip_static on` in the assets location. Rails precompiles `.gz` files during `assets:precompile`. Without `gzip_static`, Nginx ignores them and recompresses on every request, wasting CPU.
- SSL certificate renewal failure. Certbot's auto-renewal fails silently because the Nginx config changed or the web root path no longer matches. Run `certbot renew --dry-run` monthly and check the output. Better yet, add it to your monitoring.
- Upstream socket permission errors. Nginx runs as `www-data`, Puma creates its socket as `deploy`, and the socket file is not readable by Nginx. The symptom is a 502 Bad Gateway with `connect() to unix:///...puma.sock failed (13: Permission denied)` in the Nginx error log. Fix the socket path permissions or add the `www-data` user to the `deploy` group.
- Proxy buffer overflow. Large response headers (often from authentication gems that set multiple cookies) overflow `proxy_buffer_size` and produce 502s with "upstream sent too big header" in the error log; large response bodies overflow `proxy_buffers` and spill to temporary files on disk, so response times spike under load. Increase the buffer sizes and the problem vanishes.
- No `try_files` directive. Every request — including static assets — hits Puma. Your Ruby workers are serving CSS files instead of handling application requests. This is the single most common misconfiguration I see.
Configuration checklist
- HTTP-to-HTTPS redirect on port 80
- SSL certificate paths configured and tested
- SSL session caching enabled
- TLSv1.2 and TLSv1.3 only (no legacy protocols)
- Upstream block pointing to Puma Unix socket
- `proxy_set_header` includes Host, X-Real-IP, X-Forwarded-For, X-Forwarded-Proto
- `try_files` serves static files before proxying to Rails
- Gzip enabled for text-based content types
- Static assets location with far-future cache headers and `gzip_static on`
- Security headers set with `always` flag
- `client_max_body_size` matches your upload requirements
- Buffer sizes tuned for your response sizes
- `server_tokens off`
- Dotfile access blocked
- Certbot renewal tested with `--dry-run`
- Nginx config tested with `nginx -t` before reload
FAQ
Should I use Unix sockets or TCP for the Puma connection?
Unix sockets. They are faster (no TCP overhead), produce no TIME_WAIT states, and allow file-based permission control. TCP is only necessary when Nginx and Puma are on different machines, which is uncommon for single-server deployments.
Do I need Nginx if I am using a load balancer like AWS ALB?
The load balancer handles SSL termination and basic routing, but Nginx still adds value as a local reverse proxy: static file serving, gzip, buffer management, and security headers. Running Puma directly behind an ALB works, but you lose those capabilities and force your Ruby workers to handle traffic they should never see.
How do I test my Nginx configuration without restarting?
`nginx -t` validates the configuration syntax. `nginx -s reload` applies the new configuration without dropping existing connections. Never restart Nginx in production — reload is graceful, restart is not.
What about HTTP/3 and QUIC?
Nginx has experimental QUIC support, but it is not production-ready for most deployments as of late 2025. HTTP/2 with TLS 1.3 gets you most of the performance benefits. Revisit HTTP/3 when Nginx mainline includes stable support and your CDN can pass QUIC through.
Can I use Nginx to serve a Rails app and a separate frontend on the same domain?
Yes. Use separate location blocks to route /api/ to your Rails upstream and / to your frontend's static files or a separate upstream. This is a common pattern for Rails API backends with a JavaScript frontend.
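A minimal sketch of that split, with a hypothetical frontend build directory (`/home/deploy/frontend/dist`) standing in for your real path:

```nginx
# API traffic goes to Rails.
location /api/ {
  proxy_pass http://rails_app;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
}

# Everything else is the frontend; unknown paths fall back to index.html.
location / {
  root /home/deploy/frontend/dist;
  try_files $uri /index.html;
}
```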
Related reading
- Rails Deployment — parent topic covering the full production stack
- Deploy Ruby on Rails on a VPS — step-by-step guide that includes Nginx setup in context
- Web Performance for Rails Developers — performance tuning from the browser's perspective, where Nginx configuration has direct impact