This topic covers how NGINX works under the hood, how the NGINX Ingress controller processes and syncs configuration, and the key settings that affect O&M (operations and maintenance) decisions — including load balancing, timeouts, logging, and debugging.
NGINX configuration fundamentals
Configuration file structure
NGINX reads its configuration from /etc/nginx/nginx.conf. The file is organized into five nested blocks:
| Block | Scope | Typical settings |
|---|---|---|
| Main | Global process behavior | Worker count, PID path, log paths, included files |
| Events | Network connection handling | Max connections per worker, event model |
| HTTP | HTTP server features | Proxying, caching, logging, virtual hosts |
| Server | Per-virtual-host rules | Domain names, ports, TLS |
| Location | Per-URL-path rules | Routing, proxying, static file serving |
A typical nginx.conf looks like this:
```nginx
# Number of worker processes
worker_processes 4;

# Include additional config files
include /etc/nginx/conf.d/*.conf;

# A valid configuration also requires an events block
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name www.example.com;

        location / {
            root /var/www/html;
            index index.html;
        }
    }
}
```
For the full directive reference, see the NGINX core module documentation.
HTTP block
The HTTP block sets global parameters for all virtual hosts:
```nginx
# worker_processes and worker_rlimit_nofile are main-context
# directives and must sit outside the http block
worker_processes 4;
worker_rlimit_nofile 8192;

http {
    client_max_body_size 128m;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        ...
    }
}
```
Server block
A Server block defines traffic-handling rules for a specific domain or group of domains. Multiple Server blocks can be nested under the HTTP block, letting you host multiple websites on the same server with independent configurations.
```nginx
server {
    listen 80;
    server_name www.example.com;

    location / {
        root /var/www/html/example;
        index index.html index.htm;
    }

    location /app {
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```
server_name matching modes
The server_name directive supports four matching modes, listed in priority order (highest to lowest):
| Priority | Syntax | Match type |
|---|---|---|
| 1 | `server_name www.example.com` | Exact match |
| 2 | `server_name *.example.com` | Wildcard (leading asterisk) — matches any subdomain |
| 3 | `server_name www.example.*` | Wildcard (trailing asterisk) — matches any top-level domain |
| 4 | `server_name ~^www\.example\.*$` | Regular expression |
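As a sketch of these rules (hypothetical virtual hosts), a request for www.example.com is handled by the first server block below even though the wildcard also matches, because exact matches take priority:

```nginx
server {
    listen 80;
    server_name www.example.com;   # exact match: wins for www.example.com
    ...
}

server {
    listen 80;
    server_name *.example.com;     # wildcard: handles any other subdomain
    ...
}
```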
Location block
A Location block routes requests to different handlers based on URL path. When multiple Location blocks match a request, NGINX applies them according to a fixed priority order.
Matching rules (in priority order):
| Rule | Syntax | Behavior |
|---|---|---|
| Exact match | `location = /uri` | Matches only the exact URI; stops the search immediately |
| Prefix match (no regex) | `location ^~ /uri` | Matches a URI prefix; stops the search before regex evaluation |
| Case-sensitive regex | `location ~ regex` | Matched against a regular expression, case-sensitively |
| Case-insensitive regex | `location ~* regex` | Matched against a regular expression, case-insensitively |
| Prefix match | `location /uri` | Applied after regex evaluation; the longest matching prefix wins |
| General match | `location /` | Catch-all; applies when no other block matches |
| Internal redirect | `location @name` | Used only for NGINX-internal redirects |
Priority: `=` > `^~` > `~` / `~*` (first matching regex, in file order) > longest prefix (no modifier) > `/`
```nginx
location = / { }                    # Exact match for root only
location ^~ /xx/ { }                # Any path starting with /xx/; stops regex evaluation
location /uri { }                   # Prefix match (lower priority than regex)
location / { }                      # General match: catches everything else
location ~* \.(gif|jpg|jpeg)$ { }   # Any request ending with an image extension
```
The NGINX Ingress controller sorts multiple Ingress paths for a host in descending order of path length. For more information, see Ingress path matching and Understanding NGINX server and location block selection algorithms.
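As a sketch of this ordering (Service names are hypothetical), an Ingress that routes both /app and /app/v1 yields a generated configuration in which the /app/v1 location is evaluated first, because longer paths sort first:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress    # hypothetical
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /app/v1      # generated before /app: longer paths sort first
        pathType: Prefix
        backend:
          service:
            name: app-v1   # hypothetical Service
            port:
              number: 80
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app      # hypothetical Service
            port:
              number: 80
```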
Directive inheritance
Directives cascade from outer contexts to inner ones: http{} settings are inherited by all server{} and location{} blocks. If a directive appears in both a parent and child context, the child's value overrides (does not merge with) the parent's value.
```nginx
http {
    add_header X-HTTP-LEVEL-HEADER 1;
    add_header X-ANOTHER-HTTP-LEVEL-HEADER 1;

    server {
        listen 8080;

        location / {
            return 200 "OK";
            # Inherits both X-HTTP-LEVEL-HEADER and X-ANOTHER-HTTP-LEVEL-HEADER
        }
    }

    server {
        listen 8081;
        add_header X-SERVER-LEVEL-HEADER 1;
        # This overrides http-level add_header directives in this server block

        location /correct {
            # Must explicitly repeat all desired headers at this level
            add_header X-HTTP-LEVEL-HEADER 1;
            add_header X-ANOTHER-HTTP-LEVEL-HEADER 1;
            add_header X-SERVER-LEVEL-HEADER 1;
            add_header X-LOCATION-LEVEL-HEADER 1;
            return 200 "OK";
        }
    }
}
```
Reverse proxy directives
To configure NGINX as a reverse proxy, use proxy_pass inside a Location block:
```nginx
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:3000;
    }
}
```
proxy_set_header adds or overrides headers sent to the backend server — here, it preserves the original hostname and client IP.
Common proxy directives:
| Directive | Description |
|---|---|
| `proxy_pass` | Backend server to forward requests to (IP:port or domain name) |
| `proxy_set_header` | Headers to forward to the backend server (e.g., `Host`, `X-Real-IP`) |
| `proxy_read_timeout` | Timeout between two consecutive reads from the backend server (not the full response transfer) |
| `proxy_buffer_size` / `proxy_buffers` | Buffer sizes for caching backend responses |
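A sketch combining these directives in one location (the backend address, path, and buffer sizes are illustrative, not recommended values):

```nginx
location /api {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    proxy_read_timeout 60s;   # per-read timeout, not total transfer time
    proxy_buffer_size 8k;     # buffer for the response header
    proxy_buffers 8 8k;       # buffers for the response body
}
```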
Common NGINX commands
| Command | Description |
|---|---|
| `nginx -s reload` | Hot-reload the configuration file without dropping connections |
| `nginx -s reopen` | Reopen log files (used after log rotation); does not restart the server |
| `nginx -s stop` | Shut down NGINX immediately |
| `nginx -s quit` | Shut down NGINX gracefully after current requests complete |
| `nginx -T` | Validate the configuration and print the merged final configuration |
| `nginx -t` | Validate the configuration file for syntax errors |
How the NGINX Ingress controller works
Architecture
The NGINX Ingress controller integrates the control plane and data plane in each pod — one controller process plus the NGINX processes.
Configuration sync and when reloads happen
The controller watches Kubernetes resources — Ingresses, Services, pods, and endpoints — and syncs changes to either the nginx.conf file or a Lua table. The Lua table mainly includes configurations of upstream server endpoints, canary releases, and certificates. Changes to these configurations also trigger updates to the nginx.conf file, thereby triggering a reload.
For the full list of reload triggers, see When a reload is required in the NGINX Ingress community documentation.
Control plane configuration
Startup arguments
The controller reads startup arguments from the Deployment or Pod spec:
```yaml
containers:
  - args:
      - /nginx-ingress-controller
      - --election-id=ingress-controller-leader-nginx
      - --ingress-class=nginx
      - --watch-ingress-without-class
      - --controller-class=k8s.io/ingress-nginx
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      - --annotations-prefix=nginx.ingress.kubernetes.io
      - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
      - --enable-annotation-validation
      - --validating-webhook=:8443
      - --validating-webhook-certificate=/usr/local/certificates/cert
      - --validating-webhook-key=/usr/local/certificates/key
      - --v=2
```
For the full argument reference, see Command line arguments.
Controller log verbosity (--v)
The --v flag controls the verbosity of controller logs. This is separate from the NGINX process log level, which is configured in the ConfigMap.
| Flag | Output |
|---|---|
| `--v=2` | Configuration changes in NGINX (default) |
| `--v=3` | Service, Ingress rule, and endpoint changes; dumps the NGINX configuration as JSON |
| `--v=5` | Debug mode |
Load balancing
NGINX Ingress supports two global load balancing algorithms. Set the global default via the ConfigMap; override per-Ingress with annotations.
| Algorithm | How it works | Best for |
|---|---|---|
| Round robin (default) | Distributes requests cyclically across backends | Backends with roughly equal capacity |
| EWMA (Exponentially Weighted Moving Average) | Sends each request to the backend with the lowest weighted average load; the weight adjusts as requests arrive | Backends with variable performance |
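For example, to switch the global default to EWMA, set the `load-balance` key in the controller ConfigMap (the ConfigMap name matches the `--configmap` startup argument shown earlier; the namespace is assumed here):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # matches the --configmap startup argument
  namespace: kube-system      # assumed namespace
data:
  load-balance: "ewma"
```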
For session cookie-based affinity, use the nginx.ingress.kubernetes.io/affinity annotation. For IP or header-based consistent hashing, use nginx.ingress.kubernetes.io/upstream-hash-by.
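A sketch of both annotation styles (the Ingress name and cookie name are illustrative):

```yaml
metadata:
  name: example-ingress   # hypothetical
  annotations:
    # Cookie-based session affinity
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    # Alternatively, consistent hashing by client IP:
    # nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
```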
Implementation references:
Timeout configuration
Global timeouts (ConfigMap)
Set these in the nginx-configuration ConfigMap to apply defaults across all Ingresses.
| Option | Description | Default |
|---|---|---|
| `proxy-connect-timeout` | Timeout for establishing a connection with the proxied (backend) server. Cannot exceed 75s. | 5s |
| `proxy-read-timeout` | Timeout between two consecutive reads from the proxied server (not the full response transfer) | 60s |
| `proxy-send-timeout` | Timeout between two consecutive writes to the proxied server (not the full request transfer) | 60s |
| `proxy-stream-next-upstream-timeout` | Maximum time allowed to pass a connection to the next upstream server. Set to 0 for no limit. | 600s |
| `proxy-stream-timeout` | Timeout between consecutive reads or writes on a client or proxy connection. The connection is closed if no data is transmitted within this period. | 600s |
| `upstream-keepalive-timeout` | Timeout for keeping an idle connection open to upstream servers | Open source edition: 60s / ACK edition: 900s |
| `worker-shutdown-timeout` | Graceful shutdown timeout for worker processes | 240s |
| `proxy-protocol-header-timeout` | Timeout for receiving the PROXY protocol header. Prevents the TLS passthrough handler from waiting indefinitely on a broken connection. | 5s |
| `ssl-session-timeout` | Validity period for SSL session parameters in the session cache (measured from creation time). Each session cache entry uses about 0.25 MB. | 10m |
| `client-body-timeout` | Timeout for reading the client request body | 60s |
| `client-header-timeout` | Timeout for reading the client request headers | 60s |
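For example, to shorten the global connect timeout and lengthen the read timeout (ConfigMap values are plain seconds, passed as strings; the numbers here are illustrative):

```yaml
data:
  proxy-connect-timeout: "10"   # seconds
  proxy-read-timeout: "120"     # seconds
```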
For all global ConfigMap options, see ConfigMaps.
Per-Ingress timeout annotations
Override the global defaults for a specific Ingress using annotations.
| Annotation | Description |
|---|---|
| `nginx.ingress.kubernetes.io/proxy-connect-timeout` | Timeout for establishing a connection with the proxied server |
| `nginx.ingress.kubernetes.io/proxy-send-timeout` | Timeout for sending data to the proxied server |
| `nginx.ingress.kubernetes.io/proxy-read-timeout` | Timeout for reading data from the proxied server |
| `nginx.ingress.kubernetes.io/proxy-next-upstream` | Retry conditions. Separate multiple values with spaces (e.g., `http_500 http_502`). Supported values: `error`, `timeout`, `invalid_header`, `http_500`, `http_502`, `http_503`, `http_504`, `http_403`, `http_404`, `http_429`, `off` |
| `nginx.ingress.kubernetes.io/proxy-next-upstream-tries` | Maximum number of retries when retry conditions are met |
| `nginx.ingress.kubernetes.io/proxy-request-buffering` | `on`: buffer the full request before forwarding (HTTP/1.1 chunked requests are always buffered). `off`: stream the request directly; retries are disabled if an error occurs mid-transfer. |
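A sketch combining these annotations on one Ingress (the name and values are illustrative):

```yaml
metadata:
  name: example-ingress   # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    # Retry on connection errors and 502/503 responses, at most twice
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error http_502 http_503"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "2"
```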
For all available annotations, see Annotations.
Custom snippet configuration
Inject raw NGINX directives into specific configuration blocks using snippet annotations:
| Annotation | Applies to |
|---|---|
| `nginx.ingress.kubernetes.io/configuration-snippet` | Location block |
| `nginx.ingress.kubernetes.io/server-snippet` | Server block |
| `nginx.ingress.kubernetes.io/stream-snippet` | Stream block |
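For example, to inject a directive into the Location block generated for an Ingress (the header name is illustrative; `$req_id` is the controller's per-request ID variable):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-Id: $req_id";
```

Note that snippet annotations are honored only when the controller allows them (the `allow-snippet-annotations` ConfigMap setting).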
Data plane configuration
The NGINX Ingress data plane combines NGINX with the ngx_lua module (OpenResty). NGINX processes HTTP requests through multiple phases; OpenResty injects custom Lua handlers into those phases — including Rewrite/Access, Content, and Log phases, plus the Master (startup) phase. This gives the controller 11 total intervention points for Lua scripts.
Core data plane modules
Built-in NGINX modules used by the data plane:
Third-party modules:
- ngx_http_opentracing_module — Integrated in Container Service for Kubernetes (ACK). Enables tracing with Application Real-Time Monitoring Service (ARMS). See Enable tracing for NGINX Ingress Controller.
- OpenTelemetry for NGINX — Enable via ConfigMap:

  ```yaml
  data:
    enable-opentelemetry: "true"
    otlp-collector-host: "otel-coll-collector.otel.svc"
  ```
For the full NGINX module reference, see NGINX official documentation.
Lua initialization blocks
The data plane uses several Lua initialization blocks in the HTTP context:
init_by_lua_block
Runs once when the NGINX master process loads the configuration, before worker processes are forked.
init_worker_by_lua_block
Runs once per worker process at startup:
```nginx
init_worker_by_lua_block {
    lua_ingress.init_worker()
    balancer.init_worker()
    monitor.init_worker(10000)
    plugins.run()
}
```
upstream and balancer_by_lua_block
All backends are handled dynamically in Lua — no static upstream sections are generated per backend:
```nginx
upstream upstream_balancer {
    server 0.0.0.1;   # placeholder; real endpoints are selected in Lua
    balancer_by_lua_block {
        balancer.balance()
    }
    keepalive 8000;
    keepalive_time 1h;
    keepalive_timeout 60s;
    keepalive_requests 10000;
}
```
To inspect the current in-memory backend list, use the ingress-nginx kubectl plugin and run kubectl ingress-nginx backends.
Logs
```nginx
access_log off;
error_log /var/log/nginx/error.log notice;
```
The default configuration disables access logging and sets the error log level to notice. This is the NGINX process log level, configured separately from the controller log verbosity (--v flag described above).
Supported log levels (least to most verbose): emerg, alert, crit, error, warn, notice, info, debug.
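To change the NGINX error log level globally, set the `error-log-level` key in the controller ConfigMap (only the `data` fragment is shown):

```yaml
data:
  error-log-level: "info"
```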
To filter warning and error entries from pod logs:
```shell
kubectl logs -f <nginx-ingress-pod-name> -n kube-system | grep -E '^[WE]'
```