Container Service for Kubernetes: Basics of NGINX Ingress O&M

Last Updated: Dec 13, 2024

This topic describes the basics of NGINX, how the NGINX Ingress controller works, and the relevant O&M capabilities.

Basics of NGINX

In NGINX, directives are used to configure the forwarding and proxy logic. You can specify directives in the /etc/nginx/nginx.conf configuration file or in another configuration file that is referenced through the include directive in nginx.conf. The NGINX configuration file is organized into nested blocks, mainly: Main (global configurations), Events (connection processing configurations), HTTP (HTTP server configurations), Server (virtual host configurations), Upstream (load-balanced backend configurations), and Location (URL-based matching configurations).

[Figure: structure of the NGINX configuration file]

The following content describes the structure of the NGINX configuration file.

  1. Main block: configures directives that affect the overall operation of NGINX. Generally, this block includes configurations such as the user and group that run the NGINX server, the path where the NGINX process ID (PID) file is stored, the log storage path, configuration file inclusion, and the number of worker processes that can be spawned.

    # configuration file /etc/nginx/nginx.conf:
    # Configuration checksum: 15621323910982901520
    # setup custom paths that do not require root access
    pid /tmp/nginx/nginx.pid;
    daemon off;
    worker_processes 31;
    worker_cpu_affinity auto;
    worker_rlimit_nofile 1047552;
    worker_shutdown_timeout 240s ;
    access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
    error_log  /var/log/nginx/error.log notice;
  2. Events block: configures how NGINX handles network connections from end users, including the maximum number of connections per worker process and the event-driven model used to process connection requests.

    events {
        multi_accept        on;
        worker_connections  65536;
        use                 epoll;
    }
  3. HTTP block: configures most features and third-party modules, such as proxying, caching, and logging. Multiple Server blocks can be nested under this block, and each one defines the configurations of a specific virtual host or server. Examples of configurations in the HTTP block include file inclusion, Multipurpose Internet Mail Extensions (MIME) type definitions, custom logging, whether to use sendfile for file transmission, the connection timeout period, and the number of requests per connection.

    http {
        lua_package_path "/etc/nginx/lua/?.lua;;";
        lua_shared_dict balancer_ewma 10M;
        lua_shared_dict balancer_ewma_last_touched_at 10M;
        lua_shared_dict balancer_ewma_locks 1M;
        lua_shared_dict certificate_data 20M;
        lua_shared_dict certificate_servers 5M;
        lua_shared_dict configuration_data 20M;
        lua_shared_dict global_throttle_cache 10M;
        lua_shared_dict ocsp_response_cache 5M;
        init_by_lua_block {
            collectgarbage("collect")
            -- init modules
            local ok, res
            ok, res = pcall(require, "lua_ingress")
            if not ok then
            error("require failed: " .. tostring(res))
            else
            lua_ingress = res
            lua_ingress.set_config({
                use_forwarded_headers = false,
                use_proxy_protocol = false,
                is_ssl_passthrough_enabled = false,
                http_redirect_code = 308,
                listen_ports = { ssl_proxy = "442", https = "443" },
                hsts = true,
                hsts_max_age = 15724800,
                hsts_include_subdomains = true,
                hsts_preload = false,
            ... 
        }
        ...
    }
  4. Server block: configures the parameters for a specific virtual host. Multiple Server blocks can be nested under the HTTP block.

        ## start server www.example.com
        server {
            server_name www.example.com ;
            listen 80  ;
            listen [::]:80  ;
            listen 443  ssl http2 ;
            listen [::]:443  ssl http2 ;
            set $proxy_upstream_name "-";
            ssl_certificate_by_lua_block {
                certificate.call()
            }
            location / {
                set $namespace      "xapi";
                set $ingress_name   "xapi-server";
                set $service_name   "xapi-server-rta";
                set $service_port   "80";
                set $location_path  "/";
                set $global_rate_limit_exceeding n;
                rewrite_by_lua_block {
                    lua_ingress.rewrite({
                        force_ssl_redirect = false,
                        ssl_redirect = false,
                        force_no_ssl_redirect = false,
                        preserve_trailing_slash = false,
                        use_port_in_redirects = false,
                        global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
                    })
                    balancer.rewrite()
                    plugins.run()
                }
               ...
            }
    }
    ## end server www.example.com
  5. Location block: configures the routing of requests and processing of various pages.

    location / {
      proxy_pass http://upstream_balancer;
    }

The following code shows a sample nginx.conf file, including some common directives and directive blocks:

# the number of worker processes to use
worker_processes 4;

# events block: required in every nginx.conf
events {
  worker_connections 1024;
}

# include additional configuration files
include /etc/nginx/conf.d/*.conf;

http {
  # HTTP server settings
  server {
    # listen on port 80
    listen 80;

    # server name
    server_name www.example.com;

    # default location
    location / {
      # root directory
      root /var/www/html;

      # index file
      index index.html;
    }
  }
}

In this example:

  • worker_processes: a directive that specifies the number of worker processes to be used by NGINX.

  • events: a block that configures connection processing. NGINX requires this block even if it is empty.

  • include: a directive that specifies other configuration files to include.

  • server: a block that configures the settings of a specific server.

  • location: a block that configures the settings of the default location, which is the root URL of the NGINX server.

For more information, see NGINX official documentation.

The following sections describe the most critical parts of the NGINX configuration file.

Directive blocks

HTTP block

The HTTP block configures parameters that apply to the whole HTTP server, such as the maximum body size allowed for client requests and log settings. Take note that worker-related directives, such as worker_processes and worker_rlimit_nofile, belong to the main context and cannot be placed inside the HTTP block. The following code shows a sample HTTP block together with its main-context worker settings:

worker_processes 4;
worker_rlimit_nofile 8192;

http {
    client_max_body_size 128m;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        ...
    }
}

In this example, the main context specifies the number of worker processes and the maximum number of file descriptors allowed, and the HTTP block specifies the maximum body size allowed for client requests and the locations of the access log file and error log file. A Server block is nested under the HTTP block, specifying the configurations of a specific web server.

Server block

A Server block defines the processing rules of traffic requests for a specific domain name or a group of domain names. You can configure different settings and behaviors for multiple websites and applications hosted on the same server.

The following code shows a sample Server block:

server {
    listen 80;
    server_name www.example.com;

    location / {
        root /var/www/html/example;
        index index.html index.htm;
    }

    location /app {
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

In this example, the server processes only unencrypted traffic of the domain name www.example.com on port 80.

  • listen specifies that the server listens for inbound connections on port 80.

  • server_name specifies the domain name that the Server block processes.

  • listen and server_name are used together to specify the domain name and port based on which the server listens for and processes traffic.

server_name

This directive specifies the domain name of a virtual host.

server_name name1 name2 name3;

# example:
server_name www.example.com;

The following four matching modes are supported to specify a domain name:

  • Exact match: server_name www.example.com

  • Wildcard match (with a wildcard used to indicate the subdomain): server_name *.example.com

  • Wildcard match (with a wildcard used to indicate the top-level domain): server_name www.example.*

  • Regular expression match: server_name ~^www\.example\..*$

The four matching modes in the list are prioritized in descending order.
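
The following minimal sketch illustrates this priority when multiple Server blocks could match the same request; the hostnames and response bodies are illustrative:

# Exact match: always wins for www.example.com.
server {
    listen 80;
    server_name www.example.com;
    return 200 "exact match\n";
}

# Wildcard match on the subdomain: matches hosts such as api.example.com
# when no exact name matches.
server {
    listen 80;
    server_name *.example.com;
    return 200 "wildcard match\n";
}

# Regular expression match: the lowest priority; matches hosts such as
# www.example1.com that hit no exact or wildcard name.
server {
    listen 80;
    server_name ~^www\.example[0-9]+\.com$;
    return 200 "regex match\n";
}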

Location block

A Location block configures the request processing rules of the server for a specific URL path or URL pattern. You can define different behaviors based on different parts of a website or an application, such as providing static files for specific directories or proxying requests to another server.

The following code shows some sample Location blocks:

server {
    ...

    location / {
        root /var/www/html/example;
        index index.html index.htm;
    }

    location /app {
        proxy_pass http://localhost:3000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

In this example:

  • The / Location block specifies that requests to the root URL of the website are processed by providing files in the /var/www/html/example directory.

  • The /app Location block specifies that requests to the /app URL are proxied to localhost on port 3000.

  • The /50x.html Location block specifies how the server handles server errors by serving specific files.

When multiple Location blocks exist, requests are captured and processed by the Location block that is first matched based on the Location block matching rules and priority.

The Location directive uses the following syntax:

  • location [ = | ~ | ~* | ^~ ] uri { ... }

  • location @name { ... }

The following matching rules are supported:

  • location = /uri: = indicates an exact match. The Location block takes effect only if the URI is matched exactly.

  • location ^~ /uri: ^~ indicates a prefix match that is checked before regular expression matches. If a URL path is matched, the search stops and no regular expressions are evaluated.

  • location ~ regex: ~ indicates a case-sensitive regular expression match.

  • location ~* regex: ~* indicates a case-insensitive regular expression match.

  • location !~ regex: !~ indicates a case-sensitive regular expression mismatch.

  • location !~* regex: !~* indicates a case-insensitive regular expression mismatch. Take note that the negated forms are not standard location modifiers; they typically appear in if conditions.

  • location /uri: no modifier is used. This indicates a prefix match that is evaluated after regular expression matches.

  • location /: a general match that applies to requests that hit no other matching rule, similar to the default branch of a switch statement.

  • location @name: a named location that is used for internal redirection in NGINX.

The priority of the matching rules is in the following descending order: = > ^~ > ~ > ~* > prefix match without a modifier.

Example:

# Exact match: matches only the / path.
location = / {
}
# Prefix match.
location ^~ /xx/ {
     # If a request prefixed with /xx/ is matched, the search is stopped. 
}
# Prefix match without a regular expression.
location  /uri {
}

location / {
     # Match all requests because all requests start with /. However, Location blocks defined by using regular expressions and those with longer prefix matches are prioritized to be matched. 
}

# Regular expression match.
location ~* \.(gif|jpg|jpeg)$ {
     # Match any request whose path ends with .gif, .jpg, or .jpeg. 
}

An NGINX Ingress sorts multiple Ingress paths of a host in descending order of length. For more information, see Ingress Path Matching and Understanding Nginx Server and Location Block Selection Algorithms.

Directive inheritance and override

In NGINX configuration, directives follow an inheritance mechanism from the outer to the inner contexts. An inner child context inherits configuration directives from its outer parent context. For example, the directives specified in the http{} context are inherited by all nested server{} and location{} contexts. Similarly, the directives specified in the server{} context are inherited by all nested location{} contexts. Take note that if a directive is specified in both a parent context and a child context, the configuration in the child context overrides, rather than combining with, that in the parent context.

Example:

http {
    add_header X-HTTP-LEVEL-HEADER 1;
    add_header X-ANOTHER-HTTP-LEVEL-HEADER 1;

    server {
        listen 8080;
        location / {
            return 200 "OK";
        } 
    }

    server {
        listen 8081;
        add_header X-SERVER-LEVEL-HEADER 1;

        location / {
            return 200 "OK";
        }

        location /test {
            add_header X-LOCATION-LEVEL-HEADER 1;
            return 200 "OK";
        }

        location /correct {
            add_header X-HTTP-LEVEL-HEADER 1;
            add_header X-ANOTHER-HTTP-LEVEL-HEADER 1;

            add_header X-SERVER-LEVEL-HEADER 1;
            add_header X-LOCATION-LEVEL-HEADER 1;
            return 200 "OK";
        } 
    }
}
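
With this configuration, a response for /test on port 8081 carries only X-LOCATION-LEVEL-HEADER: the add_header directive in the location context replaces the inherited http-level and server-level headers instead of being combined with them. The /correct location restates all four headers to return the combined set. Assuming NGINX runs locally with the configuration above, you can verify this behavior as follows:

# Returns only X-LOCATION-LEVEL-HEADER.
curl -I http://localhost:8081/test

# Returns all four headers because the location block restates them.
curl -I http://localhost:8081/correct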

Reverse proxy configurations of NGINX

proxy_pass

The reverse proxy configurations of NGINX allow NGINX to operate as a proxy server that listens for client requests and forwards them to one or more backend servers. This enables NGINX to receive and route requests to appropriate backend servers for processing. To configure NGINX as a reverse proxy, you can use the proxy_pass directive in a Location block in the NGINX configuration file.

Example:

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://localhost:3000;
    }
}

In this example, the / Location block specifies that all requests to the domain name www.example.com are proxied to the local host on port 3000. This means that NGINX receives and forwards requests to the server that runs on port 3000 on the local host.

proxy_set_header

You can use the proxy_set_header directive to specify additional headers to add to the proxy request.

Example:

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:3000;
    }
}

In this example, the proxy_set_header directive adds the Host and X-Real-IP headers to the proxied request, passing the hostname of the incoming request and the client IP address to the backend. These headers help ensure that the upstream server receives accurate information about the original request.

Common NGINX directives

  • proxy_pass: specifies the backend server to which requests are forwarded. The value can be an IP address with a port number or a domain name.

  • proxy_set_header: specifies the headers to be forwarded to the backend server, such as Host and X-Real-IP.

  • proxy_connect_timeout: sets the timeout period for establishing a connection with the backend server.

  • proxy_read_timeout: sets the timeout period for reading a response from the backend server.

  • proxy_buffer_size and proxy_buffers: set the buffer sizes for caching responses from the backend server.
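
The following Location block is a minimal sketch that combines these directives; the backend address, timeout values, and buffer sizes are illustrative:

location /api {
    # Forward requests to the backend server.
    proxy_pass http://10.0.0.10:8080;

    # Pass the original host and client IP address to the backend.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    # Abort the connection attempt after 5 seconds.
    proxy_connect_timeout 5s;
    # Allow at most 60 seconds between two consecutive reads of the response.
    proxy_read_timeout 60s;

    # Buffers for caching the backend response.
    proxy_buffer_size 16k;
    proxy_buffers 4 32k;
}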

Common NGINX commands

  • nginx -s reload: sends a signal to the main process to reload the configuration file (hot reload).

  • nginx -s reopen: reopens the log files.

  • nginx -s stop: immediately shuts down the NGINX server.

  • nginx -s quit: gracefully shuts down the NGINX server after the worker processes finish processing in-flight requests.

  • nginx -T: tests the configuration file and dumps the final, fully merged configuration.

  • nginx -t: checks the configuration file for syntax errors.
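
For example, a common and safe workflow is to validate the configuration before reloading it:

# Reload only if the configuration check passes.
nginx -t && nginx -s reload

# Test the configuration and dump the final, fully merged version.
nginx -T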

NGINX Ingress controller

How it works

The NGINX Ingress controller integrates the control plane and the data plane in a single implementation: each pod runs both a controller process and the related NGINX processes.

[Figure: NGINX Ingress controller architecture]

Core modules of the NGINX Ingress data plane

Third-party modules:

  • ngx_http_opentracing_module

    NGINX Ingresses in Container Service for Kubernetes (ACK) are integrated with this module. This module implements the integration with the tracing feature of Application Real-Time Monitoring Service (ARMS). For more information, see Enable tracing for NGINX Ingress Controller.

  • OpenTelemetry for NGINX

    You can enable this module for NGINX Ingresses by using a ConfigMap.

    data:
      enable-opentelemetry: "true"
      otlp-collector-host: "otel-coll-collector.otel.svc"  ## OpenTelemetry endpoint

For more information, see NGINX official documentation.

Implementation of configuration synchronization

The following figure shows how configuration synchronization is implemented. This helps you understand how to reduce the frequency of configuration reloads and when it is necessary to perform configuration reloads.

[Figure: configuration synchronization process]

The NGINX Ingress controller watches resources such as Ingresses, Services, pods, and endpoints, and writes configuration updates either to the nginx.conf file or to a Lua table. The Lua table mainly holds the configurations of upstream server endpoints, canary releases, and certificates, which can be updated dynamically without a reload. Other configurations correspond to NGINX directives, so changes to them update the nginx.conf file and therefore trigger a reload. For more information, see When a reload is required in the NGINX Ingress community documentation.
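
To inspect the backend configurations held in the Lua table without reading nginx.conf, you can use the kubectl ingress-nginx plugin that is also referenced in the generated configuration, assuming the plugin is installed; the namespace is an example:

# List the backends that the controller currently keeps in memory.
# Endpoint changes update this data dynamically, without an NGINX reload.
kubectl ingress-nginx backends -n kube-system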

Configurations of the NGINX Ingress control plane

Startup arguments

For more information, see Command line arguments.

You can view the container startup arguments in the Deployment spec or pod spec of the NGINX Ingress controller, as shown in the following code:

containers:
  - args:
    - /nginx-ingress-controller
    - --election-id=ingress-controller-leader-nginx
    - --ingress-class=nginx
    - --watch-ingress-without-class
    - --controller-class=k8s.io/ingress-nginx
    - --configmap=$(POD_NAMESPACE)/nginx-configuration
    - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
    - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
    - --annotations-prefix=nginx.ingress.kubernetes.io
    - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
    - --enable-annotation-validation
    - --validating-webhook=:8443
    - --validating-webhook-certificate=/usr/local/certificates/cert
    - --validating-webhook-key=/usr/local/certificates/key
    - --v=2

--v specifies the log verbosity level.

  • --v=2: displays detailed information about configuration changes in NGINX.

  • --v=3: displays detailed information about services, Ingress rules, and endpoint changes, and dumps NGINX configuration in JSON format.

  • --v=5: enables the debug mode.

Load balancing

NGINX Ingresses allow you to specify a global default load balancing algorithm. The Round_robin and Ewma algorithms are supported. By default, Round_robin is used. The Round_robin algorithm cyclically distributes requests among backend workloads to achieve even distribution. However, uneven load balancing may occur if the performance of backend workloads varies greatly. The Ewma algorithm sends requests to the backend workload with the lowest weighted average load. The weighted load index gradually changes as requests arrive. This ensures more balanced load distribution.

For load balancing based on consistent hashing by using variables such as the IP address, consider using the nginx.ingress.kubernetes.io/upstream-hash-by annotation. For session cookie-based load balancing, consider using the nginx.ingress.kubernetes.io/affinity annotation.
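
For example, the following sketch shows an Ingress that uses consistent hashing on the client IP address; the resource names and the backend service are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Hash requests by client IP address so that a client stays on the same backend.
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80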


Related timeout configurations

Global timeout configurations

You can use the following configuration options to specify global timeout configurations of NGINX Ingresses:

  • proxy-connect-timeout: sets the timeout period for establishing a connection with the proxied server. In general, the value cannot exceed 75 seconds. Default value: 5s.

  • proxy-read-timeout: sets the timeout period for reading a response from the proxied server. This timeout applies between two consecutive read operations, not to the transmission of the entire response. Default value: 60s.

  • proxy-send-timeout: sets the timeout period for sending a request to the proxied server. This timeout applies between two consecutive write operations, not to the transmission of the entire request. Default value: 60s.

  • proxy-stream-next-upstream-timeout: limits the amount of time allowed to pass a connection to the next server. A value of 0 imposes no limit. Default value: 600s.

  • proxy-stream-timeout: sets the timeout period between two consecutive read or write operations on a client or proxied server connection. If no data is transmitted within this period, the connection is closed. Default value: 600s.

  • upstream-keepalive-timeout: sets the timeout period for keeping an idle keepalive connection to upstream servers open. Default value: 60s in the open source edition, 900s in the ACK edition.

  • worker-shutdown-timeout: sets the timeout period for a graceful shutdown of worker processes. Default value: 240s.

  • proxy-protocol-header-timeout: sets the timeout period for receiving the PROXY protocol header. The default value prevents the Transport Layer Security (TLS) passthrough handler from waiting indefinitely on a broken connection. Default value: 5s.

  • ssl-session-timeout: sets the validity period of SSL session parameters in the session cache, counted from the creation time of a session. Each session occupies about 0.25 KB in the cache. Default value: 10m.

  • client-body-timeout: sets the timeout period for reading the client request body. Default value: 60s.

  • client-header-timeout: sets the timeout period for reading the client request headers. Default value: 60s.
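
These options are specified in the data section of the controller's ConfigMap. The following sketch assumes the ConfigMap name and namespace that appear in the startup arguments above; the values are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  # All values are strings in the ConfigMap.
  proxy-connect-timeout: "5"
  proxy-read-timeout: "60"
  proxy-send-timeout: "60"
  worker-shutdown-timeout: "240s"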

Resource-specific custom timeout configurations

The following options are available for resource-specific custom timeout configurations:

  • nginx.ingress.kubernetes.io/proxy-connect-timeout: sets the timeout period for establishing a connection with the proxied server.

  • nginx.ingress.kubernetes.io/proxy-send-timeout: sets the timeout period for sending data to the proxied server.

  • nginx.ingress.kubernetes.io/proxy-read-timeout: sets the timeout period for reading data from the proxied server.

  • nginx.ingress.kubernetes.io/proxy-next-upstream: configures the conditions under which a request is passed to the next backend. Separate multiple conditions with spaces, for example, http_500 http_502. The following conditions are supported:

    • error: an error occurred while establishing a connection with the server, passing a request to it, or reading the response header.

    • timeout: a timeout occurred while establishing a connection with the server, passing a request to it, or reading the response header.

    • invalid_header: the server returned an empty or invalid response header.

    • http_xxx: xxx is a status code. For example, if you specify http_500, the next backend is selected when the upstream returns the status code 500. Supported status codes are 500, 502, 503, 504, 403, 404, and 429.

    • off: disables the retry mechanism; a response is returned to the client regardless of the error that occurs.

  • nginx.ingress.kubernetes.io/proxy-next-upstream-tries: sets the number of retries allowed if the retry conditions are met.

  • nginx.ingress.kubernetes.io/proxy-request-buffering: specifies whether to enable the request buffering feature. Valid values:

    • on: enables request buffering. The request data is forwarded to the backend workload only after it is completely received. HTTP/1.1 chunked-encoded requests are not subject to this setting and are always buffered.

    • off: disables request buffering. If an error occurs during the data transmission, no other workload is selected for a retry.
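
The following sketch applies some of these annotations to a single Ingress; the resource names, values, and backend service are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    # Retry on connection errors, timeouts, and HTTP 502 responses,
    # trying at most two backends.
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_502"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "2"
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80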

General global ConfigMap configurations

For more information, see ConfigMaps.

Other annotations

For more information, see Annotations.

Custom snippet configurations

  • nginx.ingress.kubernetes.io/configuration-snippet: The configuration applies to the Location block.

  • nginx.ingress.kubernetes.io/server-snippet: The configuration applies to the Server block.

  • nginx.ingress.kubernetes.io/stream-snippet: The configuration applies to the Stream block.
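
For example, the following annotation injects an extra directive into the generated Location block. The header name and value are illustrative, and snippet annotations can be disabled by the controller's allow-snippet-annotations setting:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # Injected verbatim into the generated Location block.
      add_header X-Debug-Location "from-snippet" always;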

Configurations of the NGINX Ingress data plane

The data plane of NGINX Ingresses is implemented by NGINX combined with the ngx_lua module (OpenResty). NGINX uses a modular design that divides HTTP request processing into multiple phases. This design allows multiple modules to work together, with each module responsible for a single, well-defined feature, which makes request processing more efficient and reliable and improves the scalability of the system.

OpenResty can inject custom handlers to process requests in different processing phases of NGINX, including the Rewrite/Access phase, Content phase, and Log phase. Together with the initialization phase of system startup, which is the Master phase, OpenResty provides a total of 11 phases. These phases enable Lua scripts to intervene in HTTP request processing. The following figure shows the main available phases of OpenResty.

[Figure: main processing phases of OpenResty]

HTTP block

http {
    lua_package_path "/etc/nginx/lua/?.lua;;";
    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;
    lua_shared_dict global_throttle_cache 10M;
    lua_shared_dict ocsp_response_cache 5M;
    ...
}

init_by_lua_block

init_by_lua_block {
        collectgarbage("collect")
        -- init modules
        local ok, res
        ok, res = pcall(require, "lua_ingress")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        lua_ingress = res
        lua_ingress.set_config({
            use_forwarded_headers = false,
            use_proxy_protocol = false,
            is_ssl_passthrough_enabled = false,
            http_redirect_code = 308,
            listen_ports = { ssl_proxy = "442", https = "443" },
            hsts = true,
            hsts_max_age = 15724800,
            hsts_include_subdomains = true,
            hsts_preload = false,
            global_throttle = {
                memcached = {
                    host = "", port = 11211, connect_timeout = 50, max_idle_timeout = 10000, pool_size = 50,
                },
                status_code = 429,
            }
        })
        end
        ok, res = pcall(require, "configuration")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        configuration = res
        configuration.prohibited_localhost_port = '10246'
        end
        ok, res = pcall(require, "balancer")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        balancer = res
        end
        ok, res = pcall(require, "monitor")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        monitor = res
        end
        ok, res = pcall(require, "certificate")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        certificate = res
        certificate.is_ocsp_stapling_enabled = false
        end
        ok, res = pcall(require, "plugins")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        plugins = res
        end
        ...
  }

The following Lua-related modules are loaded during the initialization:

  • configuration

  • balancer

  • monitor

  • certificate

  • plugins

init_worker_by_lua_block

    init_worker_by_lua_block {
        lua_ingress.init_worker()
        balancer.init_worker()
        monitor.init_worker(10000)
        plugins.run()
    }

upstream and balancer_by_lua_block

    upstream upstream_balancer {
        ### Attention!!!
        #
        # We no longer create "upstream" section for every backend.
        # Backends are handled dynamically using Lua. If you would like to debug
        # and see what backends ingress-nginx has in its memory you can
        # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
        # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
        # inspect current backends.
        #
        ###
        server 0.0.0.1; # placeholder
        balancer_by_lua_block {
            balancer.balance()
        }
        keepalive 8000;
        keepalive_time 1h;
        keepalive_timeout  60s;
        keepalive_requests 10000;
    }

Stream block

stream {
    lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";
    lua_shared_dict tcp_udp_configuration_data 5M;
    resolver 192.168.0.10 valid=30s;
    init_by_lua_block {
        collectgarbage("collect")
        -- init modules
        local ok, res
        ok, res = pcall(require, "configuration")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        configuration = res
        end
        ok, res = pcall(require, "tcp_udp_configuration")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        tcp_udp_configuration = res
        tcp_udp_configuration.prohibited_localhost_port = '10246'
        end
        ok, res = pcall(require, "tcp_udp_balancer")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        tcp_udp_balancer = res
        end
    }

    init_worker_by_lua_block {
        tcp_udp_balancer.init_worker()
    }
    lua_add_variable $proxy_upstream_name;
    log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
    access_log off;
    error_log  /var/log/nginx/error.log notice;
    upstream upstream_balancer {
        server 0.0.0.1:1234; # placeholder
        balancer_by_lua_block {
            tcp_udp_balancer.balance()
        }
    }
    server {
        listen 127.0.0.1:10247;
        access_log off;
        content_by_lua_block {
            tcp_udp_configuration.call()
        }
    }
  }

Similar to the HTTP block, the init_by_lua_block directive in the Stream block loads the TCP- and UDP-related Lua modules and initializes them.

Logs

    access_log off;
    error_log  /var/log/nginx/error.log notice;

  • In this example, access_log is disabled.

  • The error log level is set to notice. The following levels are supported: debug (debug_core, debug_alloc, debug_event, debug_http, ...), info, notice, warn, error, crit, alert, and emerg. The higher the level, the less information is recorded.

Run the following command to check the errors recorded in the pod logs of NGINX Ingresses:

kubectl logs -f <nginx-ingress-pod-name> -n kube-system | grep -E '^[WE]'

References

For more information about NGINX Ingress configurations, see NGINX Configuration.

For more information about NGINX Ingress troubleshooting, see Commonly used diagnostic methods in the "NGINX Ingress controller troubleshooting" topic.

For more information about the advanced usage of NGINX Ingresses, see Advanced NGINX Ingress configurations.