This topic describes the basics of NGINX, how the NGINX Ingress controller works, and the relevant O&M capabilities.
Basics of NGINX
NGINX Ingress controller
How it works
The NGINX Ingress controller integrates the control plane and the data plane in a single deployment: each pod runs a controller process together with the NGINX master and worker processes.
Core modules of the NGINX Ingress data plane
Third-party modules:
OpenTelemetry module: NGINX Ingresses in Container Service for Kubernetes (ACK) are integrated with this module, which provides the integration with the tracing feature of Application Real-Time Monitoring Service (ARMS). For more information, see Enable tracing for NGINX Ingress Controller.
You can enable this module for NGINX Ingresses by using a ConfigMap.
data:
  enable-opentelemetry: "true"
  otlp-collector-host: "otel-coll-collector.otel.svc"  ## OpenTelemetry collector endpoint
For more information, see NGINX official documentation.
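After the ConfigMap change takes effect, you can check whether the module's directives have been rendered into the generated NGINX configuration. The following command is only a sketch: the pod name is a placeholder, and the kube-system namespace is assumed, matching the log command later in this topic.
kubectl exec -n kube-system <nginx-ingress-pod-name> -- grep -i opentelemetry /etc/nginx/nginx.conf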
Implementation of configuration synchronization
The following figure shows how configuration synchronization is implemented. Understanding this process helps you reduce the frequency of configuration reloads and recognize when a reload is unavoidable.
The NGINX Ingress controller watches resources such as Ingresses, Services, pods, and endpoints, and writes the resulting configuration either to the nginx.conf file or to the Lua table. The Lua table mainly holds the configurations of upstream server endpoints, canary releases, and certificates, which map to the related NGINX variables; changes to these configurations are applied dynamically without a reload. Other configuration changes are written to the nginx.conf file and therefore trigger a reload. For more information, see When a reload is required in the NGINX Ingress community documentation.
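Endpoint, certificate, and canary changes are therefore expected to be applied through the Lua table without a reload, while rule and annotation changes rewrite nginx.conf. To observe this behavior, you can watch the controller logs for reload messages, for example (the pod name is a placeholder):
kubectl logs -f <nginx-ingress-pod-name> -n kube-system | grep -i reload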
Configurations of the NGINX Ingress control plane
Startup arguments
For more information, see Command line arguments.
You can view the container startup arguments in the Deployment or pod spec of the NGINX Ingress controller, as shown in the following code:
containers:
- args:
  - /nginx-ingress-controller
  - --election-id=ingress-controller-leader-nginx
  - --ingress-class=nginx
  - --watch-ingress-without-class
  - --controller-class=k8s.io/ingress-nginx
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  - --annotations-prefix=nginx.ingress.kubernetes.io
  - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
  - --enable-annotation-validation
  - --validating-webhook=:8443
  - --validating-webhook-certificate=/usr/local/certificates/cert
  - --validating-webhook-key=/usr/local/certificates/key
  - --v=2
The --v argument specifies the log level:
--v=2: displays detailed information about configuration changes in NGINX.
--v=3: displays detailed information about Services, Ingress rules, and endpoint changes, and dumps the NGINX configuration in JSON format.
--v=5: enables the debug mode.
Load balancing
NGINX Ingresses allow you to specify a global default load balancing algorithm by using the load-balance ConfigMap key. The round_robin and ewma algorithms are supported, and round_robin is used by default. The round_robin algorithm cyclically distributes requests among backend workloads to achieve even distribution, but the load may become uneven if the performance of backend workloads varies greatly. The ewma algorithm sends requests to the backend workload with the lowest exponentially weighted moving average load; the weighted load index changes gradually as requests arrive, which yields a more balanced distribution.
For load balancing based on consistent hashing of variables such as the client IP address, consider using the nginx.ingress.kubernetes.io/upstream-hash-by annotation. For session cookie-based load balancing, consider using the nginx.ingress.kubernetes.io/affinity annotation.
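The following is a minimal sketch of both approaches. The ConfigMap snippet sets the global algorithm, and the Ingress shows per-Ingress consistent hashing; the Ingress name, host, and Service name are placeholders.
# Controller ConfigMap: select the global load balancing algorithm
data:
  load-balance: "ewma"

# Ingress: consistent hashing on the client IP address (placeholder names)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80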
Related implementations:
Related timeout configurations
Global timeout configurations
You can use the following configuration options to specify global timeout configurations of NGINX Ingresses:
Configuration option | Description | Default value
proxy-connect-timeout | Sets the timeout period for establishing a connection with the proxy server. In general, the value cannot exceed 75s. | 5s
proxy-read-timeout | Sets the timeout period for reading a response from the proxy server. This timeout applies between two consecutive read operations, not to the transmission of the entire response. | 60s
proxy-send-timeout | Sets the timeout period for sending a request to the proxy server. This timeout applies between two consecutive write operations, not to the transmission of the entire request. | 60s
proxy-stream-next-upstream-timeout | Limits the amount of time allowed to pass a connection to the next server. If you set the value to 0, no limit is imposed. | 600s
proxy-stream-timeout | Sets the timeout period between two consecutive read or write operations on a client or proxy server connection. If no data is transmitted within this period, the connection is closed. | 600s
upstream-keepalive-timeout | Sets the timeout period during which an idle keepalive connection to an upstream server stays open. | 60s
worker-shutdown-timeout | Sets the timeout period for a graceful shutdown of worker processes. | 240s
proxy-protocol-header-timeout | Sets the timeout period for receiving the proxy protocol header. The default value prevents the Transport Layer Security (TLS) passthrough handler from waiting indefinitely on a broken connection. | 5s
ssl-session-timeout | Sets the validity period for SSL session parameters in the session cache. The expiration time is measured from the creation time. One megabyte of cache can store about 4,000 sessions. | 10m
client-body-timeout | Sets the timeout period for reading the client request body. | 60s
client-header-timeout | Sets the timeout period for reading the client request headers. | 60s
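For example, some of these defaults can be overridden in the controller ConfigMap. The following is a sketch; the values are illustrative, and these keys take a number of seconds without a unit.
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"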
Resource-specific custom timeout configurations
The following table describes the options for resource-specific custom timeout configurations.
Configuration option | Description
nginx.ingress.kubernetes.io/proxy-connect-timeout | Sets the timeout period for establishing a connection with the proxy server.
nginx.ingress.kubernetes.io/proxy-send-timeout | Sets the timeout period for sending data to the proxy server.
nginx.ingress.kubernetes.io/proxy-read-timeout | Sets the timeout period for reading data from the proxy server.
nginx.ingress.kubernetes.io/proxy-next-upstream | Configures the conditions under which a request is retried on the next upstream server. Separate multiple conditions with spaces, for example, error timeout http_502.
nginx.ingress.kubernetes.io/proxy-next-upstream-tries | Sets the number of retries allowed if the retry conditions are met.
nginx.ingress.kubernetes.io/proxy-request-buffering | Specifies whether to enable the request buffering feature. Valid values: on and off.
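The following sketch shows per-Ingress overrides using these annotations. The fragment belongs in the metadata of an Ingress, and the values are illustrative.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_502"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"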
General global ConfigMap configurations
For more information, see ConfigMaps.
Other annotations
For more information, see Annotations.
Custom snippet configurations
nginx.ingress.kubernetes.io/configuration-snippet: the configuration applies to the Location block (see the example after this list).
nginx.ingress.kubernetes.io/server-snippet: the configuration applies to the Server block.
nginx.ingress.kubernetes.io/stream-snippet: the configuration applies to the Stream block.
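For example, the following sketch injects a directive into the generated Location block through the configuration-snippet annotation. The header is illustrative, and whether snippet annotations are accepted depends on the allow-snippet-annotations setting of the controller.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";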
Configurations of the NGINX Ingress data plane
The data plane of NGINX Ingresses is implemented by combining NGINX with the ngx_lua module (OpenResty). NGINX uses a modular design that divides HTTP request processing into multiple phases. This design allows multiple modules to work together, each handling an independent and simple feature, which makes request processing more efficient and reliable and improves the scalability of the system.
OpenResty can inject custom handlers to process requests in different processing phases of NGINX, including the Rewrite/Access phase, Content phase, and Log phase. Together with the initialization phase of system startup, which is the Master phase, OpenResty provides a total of 11 phases. These phases enable Lua scripts to intervene in HTTP request processing. The following figure shows the main available phases of OpenResty.
HTTP block
http {
    lua_package_path "/etc/nginx/lua/?.lua;;";
    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;
    lua_shared_dict global_throttle_cache 10M;
    lua_shared_dict ocsp_response_cache 5M;
    ...
}
init_by_lua_block
The following Lua-related modules are loaded during initialization (a trimmed sketch of the generated block follows the list):
configuration
balancer
monitor
certificate
plugins
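The sketch below is abridged from the configuration that the controller typically generates; the exact content depends on the controller version, and only two of the modules are shown in full.
init_by_lua_block {
    collectgarbage("collect")

    -- load the Lua modules listed above; a failed require aborts startup
    local ok, res

    ok, res = pcall(require, "configuration")
    if not ok then
        error("require failed: " .. tostring(res))
    else
        configuration = res
    end

    ok, res = pcall(require, "balancer")
    if not ok then
        error("require failed: " .. tostring(res))
    else
        balancer = res
    end

    -- monitor, certificate, and plugins are loaded in the same way
    ...
}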
init_worker_by_lua_block
init_worker_by_lua_block {
    lua_ingress.init_worker()
    balancer.init_worker()
    monitor.init_worker(10000)
    plugins.run()
}
upstream and balancer_by_lua_block
upstream upstream_balancer {
    ### Attention!!!
    #
    # We no longer create "upstream" section for every backend.
    # Backends are handled dynamically using Lua. If you would like to debug
    # and see what backends ingress-nginx has in its memory you can
    # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
    # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
    # inspect current backends.
    #
    ###
    server 0.0.0.1; # placeholder

    balancer_by_lua_block {
        balancer.balance()
    }

    keepalive 8000;
    keepalive_time 1h;
    keepalive_timeout 60s;
    keepalive_requests 10000;
}
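As the comment in the generated configuration notes, backends live in Lua memory instead of static upstream blocks. If the ingress-nginx kubectl plugin mentioned in the comment is installed, you can inspect the current backends as follows, assuming the controller runs in the kube-system namespace:
kubectl ingress-nginx backends --namespace kube-system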
Stream block
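The Stream block carries the configuration for the TCP and UDP services that are exposed through the tcp-services and udp-services ConfigMaps referenced in the startup arguments above; as with HTTP traffic, the actual endpoints are resolved dynamically by Lua. The following is a minimal sketch of such a ConfigMap; the namespace, service name, and ports are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system   # assumed; must match the --tcp-services-configmap argument
data:
  "9000": "default/example-service:8080"   # expose TCP port 9000 and forward it to example-service port 8080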
Logs
access_log off;
error_log /var/log/nginx/error.log notice;
In this example:
access_log is disabled.
The error log level is set to notice. The supported levels are debug (debug_core, debug_alloc, debug_event, debug_http, and so on), info, notice, warn, error, crit, alert, and emerg. The higher the level, the less information is recorded.
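If you need more or less detail, the error log level can be adjusted through the error-log-level key of the controller ConfigMap, for example:
data:
  error-log-level: "warn"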
Run the following command to check the errors recorded in the pod logs of NGINX Ingresses:
kubectl logs -f <nginx-ingress-pod-name> -n kube-system | grep -E '^[WE]'
References
For more information about NGINX Ingress configurations, see NGINX Configuration.
For more information about NGINX Ingress troubleshooting, see Commonly used diagnostic methods in the "NGINX Ingress controller troubleshooting" topic.
For more information about the advanced usage of NGINX Ingresses, see Advanced NGINX Ingress configurations.