How to Deploy Nginx Reverse Proxy with Docker Containers


Deploy a production-ready Nginx reverse proxy using Docker containers with automated SSL certificates and dynamic service discovery for multiple web applications.

Emanuel DE ALMEIDA
March 17, 2026 · 18 min read
Difficulty: hard · docker · 9 steps

Why Deploy Nginx as a Reverse Proxy with Docker?

A reverse proxy acts as an intermediary between clients and your backend services, providing a single entry point for multiple applications. When containerized with Docker, Nginx becomes a powerful traffic director that can automatically discover services, manage SSL certificates, and provide load balancing without manual configuration changes.

What Makes Docker-Based Reverse Proxies Essential for Modern Applications?

Traditional reverse proxy setups require manual configuration for each new service, certificate management, and complex load balancing rules. Docker-based solutions like nginxproxy/nginx-proxy eliminate this overhead by automatically detecting new containers and configuring routes based on environment variables. This approach is particularly valuable in microservices architectures where services frequently scale up and down.

How Does Automated SSL Certificate Management Work with Docker?

The combination of nginx-proxy and acme-companion provides zero-touch SSL certificate provisioning through Let's Encrypt. When you deploy a new service with the appropriate environment variables, the system automatically requests, validates, and installs SSL certificates. Certificate renewal happens automatically in the background, eliminating the manual overhead that traditionally makes SSL management complex in multi-service environments.
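To sketch how this works in practice, a service only needs a couple of environment variables for the proxy and the ACME companion to pick it up automatically. The hostnames below are placeholders, and the whoami image is just a convenient echo service often used for testing:

```yaml
# Hypothetical service definition: VIRTUAL_HOST tells nginx-proxy where to
# route traffic, LETSENCRYPT_HOST tells acme-companion which certificate
# to request. No proxy configuration files are edited by hand.
services:
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.yourdomain.com
      - LETSENCRYPT_HOST=whoami.yourdomain.com
    networks:
      - nginx-reverse-proxy
```

Starting this container is enough: nginx-proxy sees the Docker event, regenerates its config, and the companion requests a certificate for the host.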

Implementation Guide

Full Procedure

Step 1: Install Docker and Docker Compose

First, install the latest Docker Engine and Docker Compose. We'll use the official installation script for Docker 27.1.0+.

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

Install Docker Compose v2.29.2:

sudo curl -SL https://github.com/docker/compose/releases/download/v2.29.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Log out and back in to apply the docker group membership, then verify installation:

docker --version
docker-compose --version
Warning: Docker 27.1+ enforces stricter socket security. Always use read-only mounts (:ro) for /var/run/docker.sock in production.

Step 2: Create the Project Directory Structure

Set up a clean directory structure for your reverse proxy configuration. This organization makes maintenance easier as you add more services.

mkdir -p ~/nginx-proxy/{config,ssl,logs}
cd ~/nginx-proxy

Create the main Docker Compose file that will orchestrate our reverse proxy setup:

touch docker-compose.yml

Verify the structure:

tree ~/nginx-proxy
# Should show:
# nginx-proxy/
# ├── config/
# ├── docker-compose.yml
# ├── logs/
# └── ssl/
Pro tip: Keep your proxy configuration in a dedicated directory separate from your application services. This makes backup and migration much simpler.

Step 3: Configure the Automated Nginx Proxy with SSL

Create the main Docker Compose configuration using the latest nginxproxy images. This setup provides automatic service discovery and SSL certificate management.

version: '3.8'

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:1.3.0
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config:/etc/nginx/conf.d
      - ./logs:/var/log/nginx
    environment:
      - DEFAULT_HOST=yourdomain.com
    networks:
      - nginx-reverse-proxy
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"

  acme-companion:
    image: nginxproxy/acme-companion:2.5.1
    container_name: nginx-proxy-acme
    restart: always
    depends_on:
      - nginx-proxy
    environment:
      - DEFAULT_EMAIL=your@email.com
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - acme:/etc/acme.sh
    networks:
      - nginx-reverse-proxy

networks:
  nginx-reverse-proxy:
    driver: bridge

volumes:
  certs:
  html:
  acme:

Replace your@email.com with your actual email address for Let's Encrypt notifications, and yourdomain.com with your own domain. Start the proxy:

docker-compose up -d

Verify both containers are running:

docker-compose ps
# Should show nginx-proxy and nginx-proxy-acme as "Up"

Step 4: Deploy a Test Backend Application

Create a simple test application to verify our reverse proxy works correctly. We'll use a basic Node.js app that shows the container hostname.

Create a new directory for the test app:

mkdir ~/test-app && cd ~/test-app

Create a simple Node.js application:

// app.js
const express = require('express');
const os = require('os');
const app = express();
const PORT = 3000;

app.get('/', (req, res) => {
  res.json({
    message: 'Hello from reverse proxy!',
    hostname: os.hostname(),
    timestamp: new Date().toISOString(),
    headers: req.headers
  });
});

app.listen(PORT, '0.0.0.0', () => {
  console.log(`Server running on port ${PORT}`);
});

Create the Dockerfile:

FROM node:18-alpine
WORKDIR /app
RUN npm init -y && npm install express
COPY app.js .
EXPOSE 3000
CMD ["node", "app.js"]

Build and test the application:

docker build -t test-app .
docker run -d --name test-backend \
  --network nginx-proxy_nginx-reverse-proxy \
  -e VIRTUAL_HOST=app.yourdomain.com \
  -e LETSENCRYPT_HOST=app.yourdomain.com \
  -e VIRTUAL_PORT=3000 \
  test-app

Verify the backend is running and connected to the proxy network:

docker logs test-backend
docker network inspect nginx-proxy_nginx-reverse-proxy

Step 5: Configure Custom Nginx Settings

Create custom Nginx configurations for advanced proxy settings like timeouts, buffer sizes, and security headers. This step is crucial for production deployments.

Create a custom configuration file in the config directory:

cd ~/nginx-proxy/config

Create proxy.conf with production-ready settings:

# proxy.conf
client_max_body_size 100M;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 8k;
proxy_busy_buffers_size 16k;

# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" always;

# Real IP forwarding
set_real_ip_from 172.16.0.0/12;
set_real_ip_from 192.168.0.0/16;
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

Create gzip.conf for compression:

# gzip.conf
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
    text/plain
    text/css
    text/xml
    text/javascript
    application/json
    application/javascript
    application/xml+rss
    application/atom+xml
    image/svg+xml;

Restart the proxy to apply the new configuration:

cd ~/nginx-proxy
docker-compose restart nginx-proxy

Verify the configuration is loaded without errors:

docker-compose logs nginx-proxy | grep -i error
Pro tip: Always test configuration changes in a staging environment first. Use docker exec nginx-proxy nginx -t to validate syntax before restarting.

Step 6: Set Up SSL Certificate Monitoring

Monitor SSL certificate status and renewal to prevent unexpected certificate expiration. The ACME companion handles renewal automatically, but monitoring ensures everything works correctly.

Check current certificate status:

docker exec nginx-proxy-acme /app/cert_status

Create a monitoring script to check certificate expiration:

#!/bin/bash
# cert-monitor.sh
DOMAINS=("app.yourdomain.com" "api.yourdomain.com")
WARN_DAYS=30

for domain in "${DOMAINS[@]}"; do
    expiry=$(docker exec nginx-proxy openssl x509 -in "/etc/nginx/certs/${domain}.crt" -noout -enddate 2>/dev/null | cut -d= -f2)
    if [ ! -z "$expiry" ]; then
        expiry_epoch=$(date -d "$expiry" +%s)
        current_epoch=$(date +%s)
        days_until_expiry=$(( (expiry_epoch - current_epoch) / 86400 ))
        
        if [ $days_until_expiry -lt $WARN_DAYS ]; then
            echo "WARNING: Certificate for $domain expires in $days_until_expiry days"
        else
            echo "OK: Certificate for $domain expires in $days_until_expiry days"
        fi
    else
        echo "ERROR: No certificate found for $domain"
    fi
done

Make the script executable and test it:

chmod +x cert-monitor.sh
./cert-monitor.sh
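The day-count arithmetic the script relies on can be sanity-checked in isolation (GNU date assumed; the two timestamps are arbitrary examples, 90 days apart):

```shell
# March 17 and June 15, 2026 (UTC) are exactly 90 days apart, so the
# same expression used in cert-monitor.sh should print 90.
expiry_epoch=$(date -u -d "2026-06-15 00:00:00" +%s)
now_epoch=$(date -u -d "2026-03-17 00:00:00" +%s)
echo $(( (expiry_epoch - now_epoch) / 86400 ))
# → 90
```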

Add to crontab for daily monitoring:

(crontab -l 2>/dev/null; echo "0 9 * * * $HOME/nginx-proxy/cert-monitor.sh | logger -t cert-monitor") | crontab -

Verify the ACME companion is working by checking logs:

docker-compose logs acme-companion | tail -20

Step 7: Add Multiple Backend Services

Demonstrate how to add multiple services behind the reverse proxy. This shows the real power of the automated proxy setup for microservices architecture.

Create a second test service (API backend):

mkdir ~/api-service && cd ~/api-service

Create a simple API service:

// api.js
const express = require('express');
const app = express();
const PORT = 4000;

app.get('/health', (req, res) => {
  res.json({ status: 'healthy', service: 'api', version: '1.0.0' });
});

app.get('/users', (req, res) => {
  res.json({
    users: [
      { id: 1, name: 'John Doe' },
      { id: 2, name: 'Jane Smith' }
    ]
  });
});

app.listen(PORT, '0.0.0.0', () => {
  console.log(`API service running on port ${PORT}`);
});

Create Dockerfile for the API service:

FROM node:18-alpine
WORKDIR /app
RUN npm init -y && npm install express
COPY api.js .
EXPOSE 4000
CMD ["node", "api.js"]

Build and deploy the API service:

docker build -t api-service .
docker run -d --name api-backend \
  --network nginx-proxy_nginx-reverse-proxy \
  -e VIRTUAL_HOST=api.yourdomain.com \
  -e LETSENCRYPT_HOST=api.yourdomain.com \
  -e VIRTUAL_PORT=4000 \
  api-service

Test both services through the proxy:

curl -H "Host: app.yourdomain.com" http://localhost/
curl -H "Host: api.yourdomain.com" http://localhost/health

Verify SSL certificates were automatically generated:

docker exec nginx-proxy ls -la /etc/nginx/certs/ | grep -E "(app|api).yourdomain.com"
Warning: Each new service must be on the same Docker network as the proxy. Services on different networks won't be discovered automatically.

Step 8: Configure Load Balancing and Health Checks

Set up load balancing for high-availability services and implement health checks to ensure traffic only goes to healthy backends.

Create a load-balanced service with multiple instances:

cd ~/nginx-proxy

Add a load-balanced service configuration to your docker-compose.yml:

  web-app-1:
    image: test-app
    container_name: web-app-1
    environment:
      - VIRTUAL_HOST=lb.yourdomain.com
      - LETSENCRYPT_HOST=lb.yourdomain.com
      - VIRTUAL_PORT=3000
    networks:
      - nginx-reverse-proxy
    restart: unless-stopped

  web-app-2:
    image: test-app
    container_name: web-app-2
    environment:
      - VIRTUAL_HOST=lb.yourdomain.com
      - LETSENCRYPT_HOST=lb.yourdomain.com
      - VIRTUAL_PORT=3000
    networks:
      - nginx-reverse-proxy
    restart: unless-stopped

Create custom per-host configuration for the health check. nginx-proxy reads per-VIRTUAL_HOST overrides from /etc/nginx/vhost.d, so add a volume mount for it on the nginx-proxy service (- ./vhost.d:/etc/nginx/vhost.d). A file named exactly after the host is included at server level and can define additional location blocks:

# vhost.d/lb.yourdomain.com
location /health {
    access_log off;
    add_header Content-Type text/plain;
    return 200 "healthy\n";
}

A file with the _location suffix is instead included inside the generated location / block, which already proxies to the load-balanced upstream and sets the forwarding headers. Use it for failover and timeout tuning:

# vhost.d/lb.yourdomain.com_location
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;

Deploy the load-balanced services:

docker-compose up -d web-app-1 web-app-2

Test load balancing by making multiple requests:

for i in {1..6}; do
  curl -s -H "Host: lb.yourdomain.com" http://localhost/ | jq .hostname
done

You should see requests distributed between different container hostnames, confirming load balancing is working.
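To quantify the distribution, the loop's output can be tallied with sort and uniq; under plain round-robin, each backend should serve roughly half of the requests. The pipeline itself is illustrated below with fixed sample data standing in for the curl loop's output:

```shell
# Tally occurrences per hostname (sample data simulates six proxied
# requests alternating between two backends).
printf '%s\n' web-app-1 web-app-2 web-app-1 web-app-2 web-app-1 web-app-2 \
  | sort | uniq -c
```

Against the live proxy, you would pipe `curl -s -H "Host: lb.yourdomain.com" http://localhost/ | jq -r .hostname` into the same `sort | uniq -c`.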


Step 9: Implement Monitoring and Logging

Set up comprehensive monitoring and logging for your reverse proxy to track performance, errors, and security events.

Create a custom Nginx log format for better monitoring:

# config/log_format.conf
log_format proxy_log '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent" '
                     '$request_time $upstream_response_time '
                     '$upstream_addr $upstream_status';

access_log /var/log/nginx/access.log proxy_log;
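To show how this format is consumed, here is a small illustrative parser that flags slow requests. The log lines are fabricated samples matching the proxy_log fields above; $request_time is the fourth field from the end, so awk can address it as $(NF-3):

```shell
# Fabricated sample log lines in the proxy_log format defined above.
cat > /tmp/sample-access.log <<'EOF'
203.0.113.5 - - [17/Mar/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.0" 0.120 0.118 172.18.0.3:3000 200
203.0.113.6 - - [17/Mar/2026:10:00:01 +0000] "GET /slow HTTP/1.1" 200 512 "-" "curl/8.0" 2.340 2.330 172.18.0.4:3000 200
EOF

# Print the request path and total time for anything slower than 1s.
awk '$(NF-3) > 1 { print $7, "took", $(NF-3) "s" }' /tmp/sample-access.log
# → /slow took 2.340s
```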

Create a log rotation configuration. Note that logrotate does not expand ~, so use the absolute path to your home directory:

# /etc/logrotate.d/nginx-proxy
/home/youruser/nginx-proxy/logs/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 644 root root
    sharedscripts
    postrotate
        docker exec nginx-proxy nginx -s reload
    endscript
}

Set up a simple monitoring script:

#!/bin/bash
# monitor-proxy.sh
# Run from the compose project directory so docker-compose finds its file.
cd "$HOME/nginx-proxy" || exit 1
LOG_FILE="$HOME/nginx-proxy/logs/access.log"
ERROR_THRESHOLD=10

# Check for high error rates in the last 100 requests
ERROR_COUNT=$(tail -100 "$LOG_FILE" 2>/dev/null | grep -c " 5[0-9][0-9] ")
if [ "$ERROR_COUNT" -gt "$ERROR_THRESHOLD" ]; then
    echo "HIGH ERROR RATE: $ERROR_COUNT 5xx errors detected"
fi

# Check proxy health
docker-compose ps | grep -q "Up" || echo "ALERT: Proxy containers not running"

# Check disk space for logs
DISK_USAGE=$(df "$HOME/nginx-proxy/logs" | tail -1 | awk '{print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 80 ]; then
    echo "WARNING: Log disk usage at ${DISK_USAGE}%"
fi

# Check certificate expiration
./cert-monitor.sh

Make the monitoring script executable and append it to your crontab (piping echo alone into crontab - would overwrite existing entries, including the certificate check added earlier):

chmod +x monitor-proxy.sh
(crontab -l 2>/dev/null; echo "*/5 * * * * $HOME/nginx-proxy/monitor-proxy.sh") | crontab -

Test the complete setup by checking all services:

docker-compose ps
docker-compose logs --tail=50
curl -I https://app.yourdomain.com
curl -I https://api.yourdomain.com
Pro tip: Use tools like Prometheus and Grafana for advanced monitoring. The nginx-prometheus-exporter can provide detailed metrics for production environments.

Frequently Asked Questions

How do I troubleshoot nginx-proxy not detecting my Docker containers?
Ensure your containers are on the same Docker network as nginx-proxy (usually nginx-proxy_nginx-reverse-proxy). Check that you've set the VIRTUAL_HOST environment variable correctly and that the container is actually running. Use 'docker network inspect nginx-proxy_nginx-reverse-proxy' to verify network connectivity and 'docker-compose logs nginx-proxy' to see if the proxy detected your service.
What's the difference between nginx-proxy and manual Nginx configuration in Docker?
Manual Nginx configuration requires editing config files and restarting containers for each new service. nginx-proxy automatically generates configurations by watching Docker events and reading container environment variables. This means you can deploy new services without touching the proxy configuration, making it ideal for dynamic environments and CI/CD pipelines.
How can I configure custom Nginx settings with nginxproxy/nginx-proxy?
Create configuration files in the format 'domain.com_location' or 'domain.com' in your mounted config directory. For global settings, use files like 'proxy.conf' or 'gzip.conf'. You can also set per-container configurations using environment variables like VIRTUAL_PROTO=https or VIRTUAL_PORT=8080. The proxy automatically includes these custom configurations when generating the final Nginx config.
Why are my Let's Encrypt certificates not renewing automatically?
Check that the acme-companion container is running and has access to the Docker socket with read-only permissions. Ensure your DEFAULT_EMAIL environment variable is set and that ports 80 and 443 are accessible from the internet for domain validation. Use 'docker-compose logs acme-companion' to check for renewal errors. Certificate renewal typically happens 30 days before expiration.
How do I set up load balancing for multiple instances of the same service?
Deploy multiple containers with identical VIRTUAL_HOST environment variables. nginx-proxy automatically detects multiple containers with the same hostname and configures upstream load balancing using round-robin by default. You can customize load balancing behavior by creating upstream configuration files in your config directory or using environment variables like VIRTUAL_PROTO and health check settings.
Written by Emanuel DE ALMEIDA

Microsoft MCSA-certified Cloud Architect | Fortinet-focused. I modernize cloud, hybrid & on-prem infrastructure for reliability, security, performance and cost control - sharing field-tested ops & troubleshooting.
