Nginx as a Microservices API Gateway: Beyond Auto-Generated Proxies

14 min read, Thu, 28 Aug 2025

Image from pixabay.com

In the evolving landscape of microservices, managing communication between numerous services and external clients can become complex. An API Gateway acts as a single entry point for all client requests, routing them to the appropriate microservice, handling cross-cutting concerns like authentication, rate limiting, and caching. While various dedicated API Gateway solutions exist, Nginx, traditionally known as a high-performance web server and reverse proxy, offers a surprisingly robust and flexible platform for building a custom, lightweight API Gateway.

This article explores how to harness the power of Nginx configuration files in your microservice project to create a simple yet highly configurable API Gateway proxy. We’ll delve into why relying solely on auto-generated code from OpenAPI contracts for proxying can be detrimental and demonstrate how Nginx provides a superior, more maintainable alternative for scenarios where a simple, feature-rich proxy is needed.

Nginx: A Powerful and Configurable Beast

Nginx (pronounced “engine-X”) is an open-source web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. Its event-driven, asynchronous architecture allows it to handle a large number of concurrent connections with minimal resource consumption, making it an ideal choice for high-performance scenarios.

How Configurable and Powerful is Nginx?

Nginx’s power lies in its declarative configuration language, which allows for granular control over request processing. You can define:

- Routing rules that map URL paths, hosts, and headers to specific backend services
- Load-balancing strategies across multiple instances of a service
- Response caching policies, including how long to keep entries and when to serve stale content
- Rate limits and connection limits per client
- Request and response header manipulation
- TLS termination, redirects, and access control

This extensive configurability means Nginx can act as a lightweight, high-performance API Gateway, handling many of the common cross-cutting concerns that dedicated API Gateway solutions provide, often with less overhead and more direct control.
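As a small taste of that granularity, the sketch below combines routing, header tagging, a timeout budget, and method restrictions in a single location block (the upstream name and path are illustrative, not part of the later example):

```nginx
# Illustrative only: "backend" must be defined as an upstream elsewhere.
location /api/ {
    proxy_pass http://backend;                  # route to an upstream group
    proxy_set_header X-Request-ID $request_id;  # tag every request with a unique ID
    proxy_read_timeout 5s;                      # fail fast if the backend is slow
    limit_except GET POST {                     # allow only GET and POST here
        deny all;
    }
}
```

Each of these concerns would otherwise be scattered across application code; here they sit in one declarative block that can be changed with a configuration reload.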

The Pitfalls of Auto-Generated Proxy Code from OpenAPI Contracts

OpenAPI (formerly Swagger) contracts are invaluable for defining the structure of your APIs. Tools can auto-generate server stubs and client SDKs from these contracts, which is fantastic for ensuring consistency and speeding up development. However, using auto-generated code solely for creating proxy endpoints within a gateway has several significant drawbacks:

- Codebase bloat: every proxied endpoint becomes generated code you must build, test, and deploy, even though it contains no business logic.
- Regeneration churn: every contract change forces you to regenerate and redeploy the gateway, coupling its release cycle to each service’s API.
- Mixed concerns: infrastructure behavior such as routing, caching, and rate limiting ends up interleaved with application code instead of living in configuration.
- Performance overhead: an application-level generated proxy adds latency and resource cost that a purpose-built proxy like Nginx avoids.

For these reasons, while OpenAPI is excellent for contract definition and code generation for business logic services, it’s generally ill-suited for the infrastructure concern of API gateway proxying.

Nginx as a Simple, Powerful Proxy API Gateway

If your primary need is a simple proxy with crucial functionalities like basic authentication, authorization (delegated to an external service or simple rules), caching, rate limiting, and robust routing, then leveraging Nginx directly as an API Gateway is an exceptionally good approach.

Advantages of using Nginx for simple API Gateway needs:

- Performance: its event-driven, asynchronous architecture handles large numbers of concurrent connections with minimal resource consumption.
- Declarative configuration: routing, caching, and rate limiting are expressed in configuration files rather than code, so changes require a reload, not a rebuild.
- Battle-tested maturity: Nginx is among the most widely deployed pieces of web infrastructure, with well-understood operational behavior.
- Clean separation of concerns: cross-cutting concerns stay in the gateway, leaving each microservice to focus solely on its business logic.

Let’s visualize this architectural shift:

The diagram below illustrates the core components of the Nginx API Gateway:

graph LR
    subgraph Core Functions
        A[Request Routing]
        B[Load Balancing]
        C[Health Checks]
    end

    subgraph Security+Control
        D[Authentication & Authorization]
        E[Rate Limiting]
        F[Access Control]
    end

    subgraph Performance+Optimization
        G[Caching]
        H[SSL/TLS Termination]
        I[Keep-Alive Connections]
    end

The diagram below illustrates how requests are routed through the Nginx API Gateway:

graph TD
    Client --> Nginx_API_Gateway
    Nginx_API_Gateway -- "Path: /api/v1/users" --> Users_Service[Users Service]
    Nginx_API_Gateway -- "Path: /api/v1/products" --> Products_Service[Products Service]
    Nginx_API_Gateway -- "Path: /api/v1/orders" --> Orders_Service[Orders Service]

Sample Nginx Configuration and Folder Structure

Let’s illustrate with a practical example. Imagine a microservices project with three services: users-service, products-service, and orders-service.

Folder Structure for Microservices with Nginx Gateway

.
├── nginx/
│   ├── nginx.conf                  # Main Nginx configuration
│   └── conf.d/
│       ├── api_gateway.conf        # Our primary API gateway config
│       └── mime.types              # (Optional) Standard MIME types
├── microservices/
│   ├── users-service/              # Node.js, Spring Boot, etc.
│   │   └── Dockerfile
│   │   └── ...
│   ├── products-service/
│   │   └── Dockerfile
│   │   └── ...
│   └── orders-service/
│       └── Dockerfile
│       └── ...
└── docker-compose.yml              # For orchestrating services and Nginx

docker-compose.yml Example

This docker-compose.yml orchestrates our services and the Nginx gateway.

version: "3.8"

services:
    users-service:
        build: ./microservices/users-service
        ports:
            - "8081:8080" # Host mapping for direct debugging; Nginx reaches the service at users-service:8080 over the internal Docker network
        environment:
            - PORT=8080

    products-service:
        build: ./microservices/products-service
        ports:
            - "8082:8080"
        environment:
            - PORT=8080

    orders-service:
        build: ./microservices/orders-service
        ports:
            - "8083:8080"
        environment:
            - PORT=8080

    api-gateway:
        image: nginx:stable-alpine
        ports:
            - "80:80" # Expose Nginx on port 80
            - "443:443" # Expose Nginx on port 443 for HTTPS
        volumes:
            - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
            - ./nginx/conf.d:/etc/nginx/conf.d:ro
            # Optional: For HTTPS, mount your SSL certificates here
            # - ./nginx/certs:/etc/nginx/certs:ro
        depends_on:
            - users-service
            - products-service
            - orders-service

nginx/nginx.conf (Main Nginx Configuration)

This is the global Nginx configuration. It primarily includes the conf.d directory where our specific API Gateway configuration will reside.

worker_processes  auto;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    error_log   /var/log/nginx/error.log warn;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip  on;
    gzip_min_length 1000;
    gzip_proxied    expired no-cache no-store private auth;
    gzip_types      text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Include our API Gateway specific configurations
    include /etc/nginx/conf.d/*.conf;
}

nginx/conf.d/api_gateway.conf (API Gateway Specific Configuration)

This is where the magic happens. We’ll define our upstream services and routing logic.

# Define upstream groups for our microservices
# This allows Nginx to load balance requests across multiple instances of a service
upstream users_backend {
    server users-service:8080; # 'users-service' is the Docker service name
    # server users-service-2:8080; # Add more servers for load balancing
    keepalive 64; # Keep connections alive to backend services
}

upstream products_backend {
    server products-service:8080;
    keepalive 64;
}

upstream orders_backend {
    server orders-service:8080;
    keepalive 64;
}

# Define the main server block for HTTP traffic
server {
    listen 80; # Listen for incoming HTTP requests on port 80
    listen [::]:80; # Listen on IPv6

    server_name localhost; # Replace with your domain name

    # Basic error pages
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Route /api/v1/users requests to the users-service
    location /api/v1/users/ {
        proxy_pass http://users_backend; # Proxy to the upstream group
        proxy_set_header Host $host; # Preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr; # Pass client IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Pass a chain of proxy IPs
        proxy_set_header X-Forwarded-Proto $scheme; # Pass the protocol (http/https)
        proxy_http_version 1.1; # Required for upstream keepalive to take effect
        proxy_set_header Connection ""; # Clear Connection header so backend connections can be reused
        # client_max_body_size 10M; # Max request body size

        # Enable caching for this endpoint
        # proxy_cache users_cache;
        # proxy_cache_valid 200 302 10m; # Cache 200 and 302 responses for 10 minutes
        # proxy_cache_valid 404      1m; # Cache 404 responses for 1 minute
        # proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

        # Rate limiting (requires http_limit_req_zone in http block or another conf file)
        # limit_req zone=users_req_zone burst=5 nodelay;
    }

    # Route /api/v1/products requests to the products-service
    location /api/v1/products/ {
        proxy_pass http://products_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1; # Required for upstream keepalive to take effect
        proxy_set_header Connection "";
    }

    # Route /api/v1/orders requests to the orders-service
    location /api/v1/orders/ {
        proxy_pass http://orders_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1; # Required for upstream keepalive to take effect
        proxy_set_header Connection "";

        # Example: Implement basic authentication for orders endpoint
        # auth_basic "Restricted Content";
        # auth_basic_user_file /etc/nginx/conf.d/.htpasswd; # Path to htpasswd file
    }

    # Default catch-all for undefined routes
    location / {
        return 404 "API Endpoint Not Found";
    }

    # Optional: HTTPS configuration
    # listen 443 ssl http2;
    # listen [::]:443 ssl http2;
    # ssl_certificate /etc/nginx/certs/your_domain.crt;
    # ssl_certificate_key /etc/nginx/certs/your_domain.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
    # ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
    # ssl_prefer_server_ciphers on;

    # Redirect HTTP to HTTPS (if HTTPS is enabled)
    # Note: this redirect server block belongs alongside the main server block
    # in conf.d, not nested inside it.
    # server {
    #     listen 80;
    #     listen [::]:80;
    #     server_name localhost; # Replace with your domain
    #     return 301 https://$host$request_uri;
    # }
}

# Define a cache path (needs to be in the http block or another conf file included by http)
# http {
#     ...
#     proxy_cache_path /var/cache/nginx/users_cache levels=1:2 keys_zone=users_cache:10m inactive=60m;
#     ...
# }

# Define a rate limiting zone (needs to be in the http block or another conf file included by http)
# http {
#     ...
#     limit_req_zone $binary_remote_addr zone=users_req_zone:10m rate=1r/s;
#     ...
# }

Detailed Explanation of Nginx Configuration

Let’s break down the key directives used in api_gateway.conf:

a. upstream Block

upstream users_backend {
    server users-service:8080;
    keepalive 64;
}

The upstream block names a pool of backend servers that proxy_pass can target. Here, users-service resolves via Docker’s internal DNS to the service container; adding more server lines enables load balancing (round-robin by default). keepalive 64 keeps up to 64 idle connections per worker process open to the backends, avoiding a fresh TCP handshake on every request — note this only takes effect when the proxying location also sets proxy_http_version 1.1 and clears the Connection header.
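To see why upstream groups matter, here is a hedged sketch of scaling the same pool out (the extra service names are hypothetical). Round-robin is the default balancing method, but least_conn often behaves better when request costs are uneven:

```nginx
# Hypothetical scale-out: three instances of the users service.
upstream users_backend {
    least_conn;                          # pick the server with fewest active connections
    server users-service-1:8080 weight=2; # receives roughly twice the traffic
    server users-service-2:8080;
    server users-service-3:8080 backup;  # only used when the others are down
    keepalive 64;
}
```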

b. server Block

server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    # ... other configurations
}

This server block defines a virtual server accepting HTTP traffic on port 80 over both IPv4 and IPv6. The server_name directive determines which block handles a request when multiple servers share a port; localhost is a placeholder you would replace with your public domain.

c. location Block

location /api/v1/users/ {
    proxy_pass http://users_backend;
    # ... headers and other directives
}

Location blocks use longest-prefix matching: any request whose path begins with /api/v1/users/ is handled here. Because proxy_pass specifies no URI path after the upstream name, the original request URI is forwarded to the backend unchanged.
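One subtlety worth internalizing: whether proxy_pass includes a URI after the upstream name changes what path the backend receives. The two variants below are alternatives — use one or the other, never both in the same server block:

```nginx
# Variant 1: no URI on proxy_pass — the full original path is forwarded.
#   GET /api/v1/users/42  ->  backend receives /api/v1/users/42
location /api/v1/users/ {
    proxy_pass http://users_backend;
}

# Variant 2: a URI on proxy_pass — the matched prefix is replaced.
#   GET /api/v1/users/42  ->  backend receives /users/42
location /api/v1/users/ {
    proxy_pass http://users_backend/users/;
}
```

The article’s configuration uses the first form, so backends must serve the full /api/v1/... paths themselves.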

d. proxy_set_header Directives

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

These directives rewrite the headers sent upstream so backends see the real client context rather than the proxy’s: Host preserves the client’s original Host header, X-Real-IP carries the client’s address, X-Forwarded-For appends the client to any existing chain of proxy addresses, and X-Forwarded-Proto records whether the original request arrived over http or https.

e. Caching (Commented Out Example)

# proxy_cache users_cache;
# proxy_cache_valid 200 302 10m;
# proxy_cache_valid 404      1m;
# proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

proxy_cache activates a named cache zone for this location, proxy_cache_valid sets how long responses with the given status codes are kept, and proxy_cache_use_stale lets Nginx serve a stale cached copy when the backend is erroring or timing out — a simple resilience win. The zone itself must be declared with proxy_cache_path in the http context.
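Pulling the commented pieces together, a working cache setup might look like the following sketch (paths and sizes are illustrative):

```nginx
# In the http context (e.g. nginx.conf): declare the cache storage.
proxy_cache_path /var/cache/nginx/users_cache
                 levels=1:2                 # two-level directory hashing on disk
                 keys_zone=users_cache:10m  # shared-memory zone for cache keys
                 max_size=100m              # evict entries beyond 100 MB on disk
                 inactive=60m;              # drop entries unused for an hour

# In the location block: activate the cache and expose its status.
location /api/v1/users/ {
    proxy_pass http://users_backend;
    proxy_cache users_cache;
    proxy_cache_valid 200 302 10m;
    add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS/STALE, handy for debugging
}
```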

f. Rate Limiting (Commented Out Example)

# limit_req zone=users_req_zone burst=5 nodelay;

limit_req enforces the request rate defined by the referenced zone; burst=5 tolerates up to five requests above the steady rate, and nodelay serves those burst requests immediately instead of queuing them. The zone must be declared with limit_req_zone in the http context.
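A complete rate-limiting setup, sketched here with illustrative numbers, combines the zone declaration in the http context with per-location enforcement:

```nginx
# In the http context: a zone keyed by client IP, holding state in 10 MB
# of shared memory and allowing a steady 5 requests/second per address.
limit_req_zone $binary_remote_addr zone=users_req_zone:10m rate=5r/s;

# In the location block: tolerate short bursts of 10 extra requests,
# served immediately (nodelay), and reject overflow with 429.
location /api/v1/users/ {
    proxy_pass http://users_backend;
    limit_req zone=users_req_zone burst=10 nodelay;
    limit_req_status 429;  # default is 503; 429 Too Many Requests is clearer for clients
}
```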

g. Basic Authentication

# auth_basic "Restricted Content";
# auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

auth_basic enables HTTP Basic Authentication with the given realm string, and auth_basic_user_file points at an htpasswd-format credentials file. This is a lightweight guard best suited to internal or low-risk endpoints — always pair it with HTTPS so credentials are not sent in the clear.
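Assembled, the protection might look like this sketch; the credentials file is created once with Apache’s htpasswd utility (shown as a comment):

```nginx
# Protect the orders endpoint with HTTP Basic Auth.
# Create the credentials file beforehand, e.g.:
#   htpasswd -c /etc/nginx/conf.d/.htpasswd admin
location /api/v1/orders/ {
    proxy_pass http://orders_backend;
    auth_basic "Restricted Content";                  # realm shown in the browser prompt
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd; # htpasswd-format user:hash entries
}
```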

h. HTTPS Configuration

# listen 443 ssl http2;
# ssl_certificate /etc/nginx/certs/your_domain.crt;
# ssl_certificate_key /etc/nginx/certs/your_domain.key;

listen 443 ssl http2 terminates TLS (and enables HTTP/2) at the gateway, so backends can speak plain HTTP internally. ssl_certificate and ssl_certificate_key point at the certificate chain and private key mounted into the container.
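Assembled into full server blocks, the HTTPS setup might look like this sketch (certificate paths and server_name are placeholders). Note that the HTTP-to-HTTPS redirect lives in its own sibling server block, not nested inside the HTTPS one:

```nginx
# HTTPS server block terminating TLS for all API routes.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name localhost;  # replace with your domain

    ssl_certificate     /etc/nginx/certs/your_domain.crt;
    ssl_certificate_key /etc/nginx/certs/your_domain.key;
    ssl_protocols TLSv1.2 TLSv1.3;  # disable legacy protocol versions

    location /api/v1/users/ {
        proxy_pass http://users_backend;
    }
    # ... remaining API locations as in the HTTP example
}

# Sibling server block redirecting plain HTTP to HTTPS.
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    return 301 https://$host$request_uri;
}
```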

i. Default Catch-All

location / {
    return 404 "API Endpoint Not Found";
}

Because prefix matching prefers the longest match, this bare location / only fires when no more specific /api/v1/... location matched, turning unknown paths into an explicit 404 instead of leaking them to a backend.

Conclusion

Nginx stands out as a formidable and highly adaptable tool for constructing an API Gateway proxy in a microservices environment. Its ability to handle a high volume of concurrent connections, coupled with a powerful and flexible declarative configuration, makes it an excellent choice for routing, load balancing, caching, and applying various cross-cutting concerns.

By opting for Nginx over auto-generated proxy code from OpenAPI contracts, you gain greater control, reduce codebase bloat, improve maintainability, and leverage a battle-tested, high-performance solution. While OpenAPI excels at contract definition, Nginx shines as an infrastructure component, neatly separating concerns and allowing your microservices to focus solely on their business logic. For projects requiring a robust yet straightforward API gateway, Nginx offers a compelling and often superior alternative. It allows you to build a resilient and performant entry point to your microservices with clarity and efficiency.