DevOps
16 min read

Nginx Quirks: The Stuff the Docs Don't Tell You

Nginx is everywhere, but its behavior is... unique. From 15+ years of Nginx battles: the gotchas that cost me hours, so they don't cost you.

Tibbe & Brett

I have a love-hate relationship with Nginx. It powers 30% of the web, it's incredibly fast, and it's saved my bacon more times than I can count. It's also driven me to the edge of madness with its... unique interpretation of how web servers should behave.

After 15+ years of wrestling with Nginx configurations, I've learned that the docs tell you what Nginx should do, but they don't warn you about what it actually does. The devil, as they say, is in the details.

Here are the quirks that burned me so they don't burn you.

The Great Symlink Revelation

Let me start with the one that cost me the most time. It was 2015, and I was deploying a Ruby on Rails app using Capistrano. Standard deployment: symlink the current release to /var/www/myapp/current.

Everything worked perfectly until I noticed something weird in my access logs. Instead of logging requests to /var/www/myapp/current/public/assets/style.css, Nginx was logging /var/www/myapp/releases/20150312153045/public/assets/style.css.

At first, I thought it was just a logging quirk. But then file permissions started breaking. Security rules weren't matching. Path-based logic was failing.

That's when I learned the hard truth: Nginx resolves symlinks before processing requests.

If you come from Windows or macOS, you think of symlinks as "shortcuts to files." They're visual conveniences that point to the real location. That's not how Linux (and therefore Nginx) sees them.

In Linux, symlinks are more like bookmarks. They contain the actual path to the target file. When Nginx encounters a symlink, it follows that bookmark and uses the resolved path for everything: logging, security rules, location matching, everything.

# Your deployment structure
/var/www/myapp/current -> /var/www/myapp/releases/v1.2.3
/var/www/myapp/releases/v1.2.3/index.html

# What you expect to see in logs:
GET /var/www/myapp/current/index.html

# What you actually see:
GET /var/www/myapp/releases/v1.2.3/index.html

Why this matters:

  • Security rules based on file paths use the resolved path, not the symlink path
  • Location blocks that match on file paths need to account for the real paths
  • Access logs show the resolved paths, which can break log parsing tools
  • SSL certificate paths get resolved, which can cause issues with cert renewals
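If you'd rather have Nginx refuse suspicious symlinks than silently follow them, the stock disable_symlinks directive (in the core module since 1.1.15) gives you some control. A sketch:

# Refuse to serve a file if any path component is a symlink whose
# target is owned by a different user (a safety net on shared hosts;
# the default is off)
disable_symlinks if_not_owner;

It costs extra filesystem checks per path component, so weigh that on hot static paths.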

I spent an entire weekend debugging why my SSL certificate wasn't updating after renewal. Turns out, the symlink to the cert file was getting resolved to an old path that no longer existed after the cert renewal process.

Location Blocks: Order Is Everything (Sometimes)

This one bit me hard when I was trying to set up a health check endpoint that bypassed authentication. Seemed simple enough:

# What I thought would work
location /api/ {
    proxy_pass http://backend;
    # Authentication middleware
}

location /api/health {
    return 200 "OK";
    # No authentication needed
}

Requests to /api/health kept hitting the first block and getting proxied to the backend with authentication. I thought location blocks were processed in order, but Nginx has its own priority system that doesn't match the order you write them.

Here's how Nginx actually chooses a location block:

  1. Exact matches (= /path) win immediately and stop the search
  2. The longest matching prefix is found; if it carries the ^~ modifier, it wins and regexes are skipped
  3. Regex matches (~ and ~*) are checked in the order they appear in the config; first match wins
  4. Only if no regex matches does the longest prefix from step 2 apply

In other words, order in the file only matters among regexes; prefixes compete on length. In my case, another location elsewhere in the config was winning the match. Whatever the culprit, the bulletproof fix is an exact match, which beats everything:

# Exact match wins over prefix matches
location = /api/health {
    return 200 "OK";
}

location /api/ {
    proxy_pass http://backend;
}

I've seen developers spend hours debugging why their API routes aren't working, only to discover it's a location block ordering issue. Learn the priority system, save yourself the headache.
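There's a fourth modifier worth knowing: ^~, which tells Nginx that if this prefix is the longest match, skip the regex pass entirely. A sketch:

# ^~: if this is the longest matching prefix, regexes are never consulted
location ^~ /static/ {
    root /var/www/assets;
}

# This regex would otherwise capture /static/logo.png
location ~* \.(png|jpg|css|js)$ {
    expires 30d;
}

Without the ^~, a request for /static/logo.png would land in the regex block, because regexes beat prefixes.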

The try_files Trap

Everyone knows try_files for SPAs:

try_files $uri $uri/ /index.html;

But I learned that try_files is much more powerful—and dangerous—than most people realize.

I was building a system that served static files if they existed, otherwise proxied to a backend API. My first attempt:

location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://api-server;
}

This worked great until I learned that try_files' last argument isn't checked for existence — it triggers an internal redirect (or, for @named locations, a jump to that block). If that redirect lands back in the same location and the fallback still doesn't resolve (a deleted /index.html in the classic SPA pattern, say), you get Nginx's "rewrite or internal redirection cycle" error.

The safer pattern I learned:

location / {
    try_files $uri @backend;
}

location @backend {
    proxy_pass http://api-server;
}

# Handle directories separately if needed
location ~ /$ {
    try_files $uri @backend;
}

Named locations (@backend) are your friend. They're explicit, predictable, and don't create weird edge cases.
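For purely static sites there's an even safer ending: a status code. A final try_files argument that starts with = is a direct answer, not another lookup, so it can never trigger an internal redirect:

# =404 ends the search with a response code — no redirect cycles possible
location / {
    try_files $uri $uri/ =404;
}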

The Trailing Slash Nightmare

This is the one that makes experienced developers weep. I learned it the hard way when setting up a reverse proxy:

# These are COMPLETELY DIFFERENT to Nginx
location /api {
    proxy_pass http://backend;
}

location /api/ {
    proxy_pass http://backend/;
}

A request to /api/users behaves differently depending on which block matches:

  • First block: Proxies to http://backend/api/users
  • Second block: Proxies to http://backend/users

I spent three hours debugging why my API routes were returning 404s. The backend was receiving /api/users instead of /users because I had mismatched trailing slashes.

The rule: when proxy_pass includes any URI part (even a bare trailing /), Nginx replaces the matched location prefix with that URI; with no URI part, the original request path passes through untouched. Keep the location and proxy_pass trailing slashes consistent and you avoid most of the surprises.
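When the prefix surgery gets more complicated than a trailing slash — say, the backend wants /api replaced with /v2 (hypothetical paths, but the pattern is standard) — I reach for an explicit rewrite instead of leaning on proxy_pass's substitution rules:

location /api/ {
    # break stops rewrite processing; the rewritten URI goes upstream
    rewrite ^/api/(.*)$ /v2/$1 break;
    proxy_pass http://backend;
}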

I now have a checklist I run through for every reverse proxy configuration:

  • Location pattern matches expected URLs
  • Trailing slashes are consistent
  • Backend receives the expected path
  • Test with both /path and /path/ requests

Proxy Headers: The Devil in the Details

Most Nginx tutorials mention setting proxy headers, but they don't explain why they're critical or what happens when you get them wrong.

I learned this when my backend application started logging every request as coming from 127.0.0.1. CORS was breaking because the backend couldn't identify the real origin. SSL redirects weren't working because the backend thought every request was HTTP.

# These headers are not optional
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;

What each header does:

  • Host: The original Host header from the client
  • X-Real-IP: The actual client IP address
  • X-Forwarded-For: Chain of proxy IPs (supports multiple proxies)
  • X-Forwarded-Proto: Original protocol (HTTP or HTTPS)
  • X-Forwarded-Host: Original host header
  • X-Forwarded-Port: Original port

Without these headers, your backend application lives in a bubble. It can't implement proper security policies, logging, or user experience features.
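There's a flip side: when Nginx itself sits behind another proxy or load balancer, $remote_addr is that proxy's address, and the headers above propagate the wrong IP. The standard realip module (compiled into most distro packages) fixes this — the trusted range below is a placeholder for your own topology:

# Trust X-Forwarded-For only from our load balancer's network
set_real_ip_from 10.0.0.0/8;        # placeholder: your LB / proxy range
real_ip_header X-Forwarded-For;
real_ip_recursive on;               # walk past multiple trusted hops

After this, $remote_addr (and therefore X-Real-IP) reflects the true client.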

Buffer Sizes: When Big Requests Break

Nginx's default request and buffer sizes are tiny by modern standards. I discovered this when clients started getting failures, but only on large POST requests.

The problem: Nginx rejects request bodies larger than client_max_body_size (1 MB by default) with a 413 Request Entity Too Large — and the bare default error page did nothing to point me at the actual setting.

Here are the buffer settings I use for modern APIs:

# Client request limits
client_max_body_size 50M;          # Max request body size
client_body_buffer_size 1M;        # Buffer for request body
client_header_buffer_size 8k;      # Buffer for request headers
large_client_header_buffers 4 16k; # For large headers/cookies

# Proxy buffers
proxy_buffer_size 8k;              # Initial response buffer
proxy_buffers 8 32k;               # Main response buffers
proxy_busy_buffers_size 64k;       # Buffers for sending to client
proxy_temp_file_write_size 64k;    # Temp file chunk size

# Timeouts
proxy_connect_timeout 60s;         # Time to connect to upstream
proxy_send_timeout 60s;            # Time to send request to upstream
proxy_read_timeout 60s;            # Time to read response from upstream

These settings handle most modern applications, but you might need to adjust based on your specific use case. File upload endpoints often need larger client_max_body_size. API endpoints with large JSON payloads need bigger proxy buffers.
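For a single heavy endpoint, override the globals locally rather than raising them everywhere. A sketch with a hypothetical upload route:

location /api/uploads {
    client_max_body_size 500M;       # only this route accepts huge bodies
    proxy_request_buffering off;     # stream the upload to the backend
                                     # instead of spooling it to disk first
    proxy_pass http://backend;
}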

SSL Configuration That Actually Works

SSL in Nginx is straightforward until it isn't. I've spent countless hours debugging certificate issues, TLS handshake failures, and mixed content warnings.

Here's the SSL configuration I use in production:

server {
    listen 443 ssl http2;
    server_name example.com;
    
    # Certificate paths (Let's Encrypt)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    # Modern SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    
    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    
    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options DENY always;
    add_header X-XSS-Protection "1; mode=block" always;
    
    # Your location blocks here
}

Don't forget the HTTP to HTTPS redirect:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

I always use 301 (permanent) redirects for HTTP to HTTPS, and $host rather than $server_name in the target: $host is the name the client actually asked for, which matters when a server block lists several names. Search engines and browsers remember a 301 and skip the HTTP request entirely on future visits.
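One wrinkle with that redirect: Let's Encrypt's HTTP-01 challenge arrives over plain HTTP, so renewals break if you redirect everything. Carve out the challenge path first (the webroot below is certbot's conventional one — adjust to your setup):

server {
    listen 80;
    server_name example.com;

    # Serve ACME challenges over HTTP so cert renewal keeps working
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}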

Variables: More Dangerous Than They Look

Nginx variables seem simple until you try to use them in complex configurations. I learned this when building a dynamic proxy configuration that routed based on subdomains.

The trap: the moment proxy_pass contains a variable, Nginx stops resolving the hostname at startup and resolves it at request time instead — which requires either a resolver directive or an upstream block whose name matches the value. Add map into the mix (its lookups are lazy, evaluated the first time the variable is read) and nothing fails until real traffic arrives:

# Looks fine, fails at request time
map $subdomain $backend {
    api     api-server;
    web     web-server;
    default fallback-server;
}

server {
    server_name ~^(?<subdomain>[^.]+)\.example\.com$;
    
    location / {
        # A variable here means "api-server" is resolved per request;
        # with no resolver and no matching upstream block, this 502s
        proxy_pass http://$backend;
    }
}

The fix is to declare each backend as a named upstream. Nginx checks a variable in proxy_pass against upstream names first, so the map then resolves cleanly with no DNS lookup at all:

upstream api-server      { server 10.0.1.10:8080; }  # placeholder addresses
upstream web-server      { server 10.0.1.11:8080; }
upstream fallback-server { server 10.0.1.12:8080; }

map $subdomain $backend {
    api     api-server;
    web     web-server;
    default fallback-server;
}

server {
    server_name ~^(?<subdomain>[^.]+)\.example\.com$;
    
    location / {
        proxy_pass http://$backend;
    }
}

You'll also see set plus if used for this kind of routing. A bare set inside if is one of the few safe uses of "if in location context" — but map is cleaner and keeps the routing table in one place.

Log Debugging That Actually Helps

Nginx error logs are notoriously unhelpful by default. I've stared at messages like "upstream prematurely closed connection" for hours without learning anything useful.

Here's how I configure logging to actually debug issues:

# Custom log format with timing data
log_format detailed '$remote_addr - $remote_user [$time_local] '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent" '
                   'rt=$request_time uct="$upstream_connect_time" '
                   'uht="$upstream_header_time" urt="$upstream_response_time" '
                   'upstream="$upstream_addr" '
                   'host="$host"';

# Per-site access logs
access_log /var/log/nginx/site.log detailed;

# More verbose error logging during debugging
error_log /var/log/nginx/site-error.log debug;

The timing variables are incredibly useful:

  • $request_time: Total request processing time
  • $upstream_connect_time: Time to connect to backend
  • $upstream_header_time: Time to receive response headers
  • $upstream_response_time: Time to receive full response

When requests are slow, you can immediately see if the problem is network connectivity, backend processing, or response transfer.
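When the debug error log is too much of a firehose on a busy server, you can scope it to a single client. This requires an Nginx built with --with-debug (check nginx -V), and the IP below is a placeholder for your workstation:

events {
    worker_connections 1024;
    # Emit debug-level logs only for connections from this IP
    debug_connection 203.0.113.7;
}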

Rate Limiting: The Right Way

Basic rate limiting in Nginx is straightforward, but the edge cases will get you. I learned this when implementing API rate limiting for a SaaS application.

# Define rate limiting zones
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/m;
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

# In server block
location /api/ {
    limit_req zone=api burst=20 nodelay;
    # nodelay serves burst requests immediately instead of pacing them;
    # anything beyond the burst is rejected (503 unless limit_req_status is set)
    
    proxy_pass http://backend;
}

location /api/login {
    limit_req zone=login burst=1;
    # No nodelay means requests are queued (with delay)
    
    proxy_pass http://backend;
}

Key points:

  • $binary_remote_addr uses less memory than $remote_addr
  • burst allows temporary spikes above the base rate
  • nodelay serves the burst immediately instead of pacing it; requests beyond the burst are rejected
  • Multiple zones let you have different limits for different endpoints

The tricky part: rate limiting behind load balancers or CDNs. Behind Cloudflare, $remote_addr is a Cloudflare edge node, so you need to key the zone on the CF-Connecting-IP header instead.
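A sketch of that, with a fallback for requests that didn't arrive via Cloudflare at all (where the header is absent):

# Key on Cloudflare's reported client IP when present,
# else on the connection address
map $http_cf_connecting_ip $limit_key {
    ""      $binary_remote_addr;
    default $http_cf_connecting_ip;
}

limit_req_zone $limit_key zone=api:10m rate=10r/m;
limit_req_status 429;                # 429 reads better than the default 503

Only trust CF-Connecting-IP if you've restricted inbound traffic to Cloudflare's published IP ranges; otherwise anyone can forge it.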

What Breaks in Production (A Horror Story Collection)

After 15 years of running Nginx in production, here are the issues that will eventually bite you:

File descriptor limits. Nginx runs out of file descriptors under high load, causing connection failures. Increase worker_rlimit_nofile and the system ulimit.
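A sketch of the Nginx side (the OS-level limit has to be raised separately, e.g. via systemd's LimitNOFILE):

# Per-worker file descriptor cap; keep it comfortably above
# worker_connections (each proxied connection can use two FDs)
worker_rlimit_nofile 65535;

events {
    worker_connections 16384;
}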

DNS resolution fails. If your upstream servers are specified by hostname and DNS fails, Nginx can't route requests. Use IP addresses or implement health checks.
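If you must use hostnames, a resolver plus a variable makes Nginx re-resolve at request time instead of pinning whatever DNS returned at startup. Resolver address and hostname below are placeholders:

resolver 10.0.0.2 valid=30s;         # your internal DNS resolver
set $backend_host backend.internal;  # hypothetical upstream hostname

location / {
    proxy_pass http://$backend_host:8080;
}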

SSL certificates expire. Use monitoring to alert before certificates expire. Implement automated renewal with Let's Encrypt.

Log files fill up disk space. Implement log rotation with logrotate. Monitor disk usage and set up alerts.

Worker processes crash silently. Monitor worker process restarts. Frequent restarts indicate memory issues or segfaults.

Time zone issues with logs. Make sure your servers are using UTC for logs, or you'll go insane debugging timezone-related issues.

My Nginx Philosophy

After years of fighting with Nginx configurations, here's what I've learned:

Test your config obsessively. nginx -t checks syntax, but it won't catch logic errors. Test every scenario you can think of.

Start simple, add complexity gradually. Begin with basic proxy configurations and add features incrementally. Complex configs are impossible to debug.

Use named locations for complex logic. They're more explicit than regex locations and easier to understand.

Monitor everything. Nginx fails silently in creative ways. Monitor worker processes, connection counts, response times, and error rates.

Keep configs version controlled. Nginx configs are code. Treat them like code with version control, code review, and deployment pipelines.

Tools That Save Your Sanity

These tools have saved me countless hours of Nginx debugging:

nginx-test: Test your configurations against real HTTP requests before deploying.

nginx-vts: Virtual host traffic status module for monitoring.

stub_status: Built-in module for basic connection statistics.
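stub_status takes one line to wire up — just lock it down to localhost or your monitoring network:

# Connection stats at /nginx_status, visible only from the box itself
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}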

access.log analyzers: Tools like GoAccess or Logstash for understanding traffic patterns.

The Nginx Paradox

Here's the thing about Nginx: it's simultaneously one of the most reliable and most frustrating pieces of software I've ever worked with.

It'll serve millions of requests per day without breaking a sweat, then make you spend three hours figuring out why it won't match a simple location block because you forgot a trailing slash.

It handles complex load balancing, SSL termination, and request routing better than almost anything else, but has quirks that make seasoned engineers question their sanity.

The key is understanding that Nginx was designed for performance above all else. The quirks aren't bugs—they're the result of architectural decisions made to squeeze every bit of performance out of the system.

Once you accept that Nginx operates by its own logic, not your assumptions about how web servers should behave, it becomes a powerful ally.

Just keep these quirks in mind. Your future debugging self will thank you.

Need help taming your Nginx configuration?

We've debugged hundreds of Nginx configurations and know all the gotchas. Let us help you build reliable, performant web infrastructure without the quirks.

Get Nginx Expertise