This topic provides answers to some frequently asked questions about Ingresses.
Which SSL or TLS protocol versions are supported by Ingresses?
Do Ingresses pass Layer 7 request headers to backend servers by default?
Can ingress-nginx forward requests to backend HTTPS servers?
Configure an Internet-facing or internal-facing NGINX Ingress controller
How do I change Layer 4 listeners to Layer 7 HTTP or HTTPS listeners for ingress-nginx?
How do I collect access logs from multiple Ingress controllers?
What is the match logic of certificates configured for NGINX Ingresses?
What do I do if NGINX pods fail health checks in heavy load scenarios?
What do I do if certificates fail to be issued due to cert-manager errors?
How do I handle NGINX memory usage spikes during peak hours?
What do I do if the NGINX Ingress controller remains in the Upgrading state?
How do I configure IP blacklists and whitelists to control access to NGINX Ingresses?
What are the known issues in NGINX Ingress controller 1.2.1?
How do I enable NGINX Ingresses to support large client request headers or cookies?
What are the known issues in earlier versions of the NGINX Ingress controller?
The following list provides references for the issues in earlier versions of the NGINX Ingress controller. To prevent issues when you use the NGINX Ingress controller, we recommend that you upgrade it to the latest version. For more information, see Upgrade the NGINX Ingress controller.
Affected versions: 1.2.1-aliyun.1 and earlier.
Solution: Upgrade the NGINX Ingress controller to version 1.5.1-aliyun.1 or later.
Affected versions: 1.10.2-aliyun.1 and earlier.
Solution: Upgrade the NGINX Ingress controller to 1.10.4-aliyun.1 or later.
Large file (larger than 2 GB) upload failures
Cause: The value of client-body-buffer-size is larger than the storage limit for 32-bit integers.
Solution: Set client-body-buffer-size to a smaller value, such as 200M.
Which SSL or TLS protocol versions are supported by Ingresses?
Ingress-nginx supports Transport Layer Security (TLS) 1.2 and TLS 1.3. If the TLS protocol version that is used by a browser or mobile client is earlier than 1.2, errors may occur during handshakes between the client and ingress-nginx.
If you want ingress-nginx to support more TLS protocol versions, add the following configurations to the nginx-configuration ConfigMap in the kube-system namespace. For more information, see TLS/HTTPS.
If you want to enable TLS 1.0 or TLS 1.1 in NGINX Ingress controller 1.7.0 and later, you must also specify @SECLEVEL=0 in the ssl-ciphers parameter.
ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
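These two keys belong in the data section of the nginx-configuration ConfigMap. The following is a minimal sketch; the cipher list is abbreviated here for readability and should be replaced with the full list above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:...:DES-CBC3-SHA" # Replace with the full cipher list above.
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"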
Do Ingresses pass Layer 7 request headers to backend servers by default?
By default, ingress-nginx passes Layer 7 request headers to backend servers. However, request headers that do not conform to HTTP naming rules are filtered out before requests are forwarded to the backend servers. For example, a request header named Mobile_Version contains an underscore and is filtered out by default. If you do not want to filter out these request headers, run the kubectl edit cm -n kube-system nginx-configuration command to add the following configuration to the nginx-configuration ConfigMap. For more information, see ConfigMap.
enable-underscores-in-headers: "true"
Can ingress-nginx forward requests to backend HTTPS servers?
To enable ingress-nginx to forward requests to backend HTTPS servers, add the following annotation to the Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xxxx
  annotations:
    # You must specify HTTPS as the protocol that is used by the backend server.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
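For context, a complete Ingress that uses this annotation might look like the following sketch. The host name, Service name, and port are placeholders for illustration and must match your backend HTTPS Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-backend-example # A hypothetical name.
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx # Assumes the default nginx Ingress class.
  rules:
  - host: app.example.com # Placeholder domain name.
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: https-svc # A placeholder Service that serves HTTPS on port 443.
            port:
              number: 443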
Do Ingresses pass client IP addresses at Layer 7?
By default, ingress-nginx adds the X-Forwarded-For and X-Real-IP header fields to pass client IP addresses. However, if a client already sets the X-Forwarded-For and X-Real-IP header fields in a request, the backend server cannot obtain the actual client IP address.
You can run the kubectl edit cm -n kube-system nginx-configuration command to modify the nginx-configuration ConfigMap in the kube-system namespace and add the following configurations. This allows ingress-nginx to pass client IP addresses at Layer 7.
compute-full-forwarded-for: "true"
forwarded-for-header: "X-Forwarded-For"
use-forwarded-headers: "true"
If traffic passes through multiple upstream proxy servers before it reaches the NGINX Ingress, add the proxy-real-ip-cidr field to the nginx-configuration ConfigMap and set its value to the CIDR blocks of the upstream proxy servers. Separate multiple CIDR blocks with commas (,). For more information, see Use WAF or transparent WAF.
proxy-real-ip-cidr: "0.0.0.0/0,::/0"
In IPv6 scenarios, if the NGINX Ingress receives an empty X-Forwarded-For header and an upstream Classic Load Balancer (CLB) instance is used, you can enable the Proxy protocol on the CLB instance to retrieve the client IP address. For more information about the Proxy protocol, see Enable Layer 4 listeners to preserve client IP addresses and pass them to backend servers.
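If you enable the Proxy protocol on the CLB listener, ingress-nginx must also be configured to parse the protocol; otherwise, handshakes fail. A minimal sketch that uses the upstream use-proxy-protocol key in the nginx-configuration ConfigMap (enable both sides together):
use-proxy-protocol: "true"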
Does the NGINX Ingress controller support HSTS?
By default, HTTP Strict Transport Security (HSTS) is enabled for nginx-ingress-controller. When a browser sends a plain HTTP request for the first time, the response from the backend server (with HSTS enabled) contains the Non-Authoritative-Reason: HSTS header, which indicates that the backend server supports HSTS. If the client also supports HSTS and the first access attempt succeeds, the client sends subsequent requests over HTTPS, and the response contains the 307 Internal Redirect status code.
If you do not want to forward the client requests to backend HTTPS servers, disable HSTS for nginx-ingress-controller. For more information, see HSTS.
By default, the HSTS configuration is cached by browsers. You must manually delete the browser cache after you disable HSTS for nginx-ingress-controller.
Which rewrite rules are supported by ingress-nginx?
Only simple rewrite rules are supported by ingress-nginx. For more information, see Rewrite. If you want to configure complex rewrite rules, use the following methods:
configuration-snippet: Add this annotation to the location configuration of an Ingress. For more information, see Configuration snippet.
server-snippet: Add this annotation to the server configuration of an Ingress. For more information, see Server snippet.
You can use other snippets to add global configurations. For more information, see main-snippet.
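For example, the configuration-snippet annotation injects extra directives into the generated location block. The following sketch sets a response header; the header and the Ingress name are for illustration only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: snippet-example # A hypothetical name.
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";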
What gets updated in the system after I update the NGINX Ingress controller on the Add-ons page of the ACK console?
If the version of the NGINX Ingress controller is earlier than 0.44, the component includes the following resources:
serviceaccount/ingress-nginx
configmap/nginx-configuration
configmap/tcp-services
configmap/udp-services
clusterrole.rbac.authorization.k8s.io/ingress-nginx
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx
role.rbac.authorization.k8s.io/ingress-nginx
rolebinding.rbac.authorization.k8s.io/ingress-nginx
service/nginx-ingress-lb
deployment.apps/nginx-ingress-controller
If the version of the NGINX Ingress controller is 0.44 or later, the component includes the following resources in addition to the preceding resources:
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission
service/ingress-nginx-controller-admission
serviceaccount/ingress-nginx-admission
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission
role.rbac.authorization.k8s.io/ingress-nginx-admission
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission
job.batch/ingress-nginx-admission-create
job.batch/ingress-nginx-admission-patch
When you update the NGINX Ingress controller on the Add-ons page of the Container Service for Kubernetes (ACK) console, the configurations of the following resources remain unchanged:
configmap/nginx-configuration
configmap/tcp-services
configmap/udp-services
service/nginx-ingress-lb
The configurations of other resources are reset to default values. For example, the default value of the replicas parameter of the deployment.apps/nginx-ingress-controller resource is 2. If you set replicas to 5 before the update, the parameter is reset to the default value 2 after you update the component from the Add-ons page of the ACK console.
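If the replica count is reset by an update, you can restore it afterward. A minimal sketch, assuming the Deployment keeps the default name:
kubectl -n kube-system scale deployment nginx-ingress-controller --replicas=5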
How do I change Layer 4 listeners to Layer 7 HTTP or HTTPS listeners for ingress-nginx?
By default, the Server Load Balancer (SLB) instance of the ingress-nginx pod listens on TCP ports 443 and 80. You can change Layer 4 listeners to Layer 7 listeners by changing the protocol of the listeners to HTTP or HTTPS.
Your service will be temporarily interrupted when the system changes the listeners. We recommend that you perform this operation during off-peak hours.
Create a certificate and record the certificate ID (cert-id). For more information, see Use a certificate from Certificate Management Service.
Change the listeners of the SLB instance used by the Ingress from Layer 4 to Layer 7 by using annotations.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Network > Services.
In the upper part of the Services page, set Namespace to kube-system. Find the nginx-ingress-lb Service and click Edit YAML in the Actions column.
In the View in YAML panel, set targetPort to 80 for port 443 in the ports section.
- name: https
  port: 443
  protocol: TCP
  targetPort: 80 # Set targetPort to 80 for port 443.
Add the following configurations to the annotations parameter and then click Update.
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port: "http:80,https:443"
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-cert-id: "${YOUR_CERT_ID}"
Verify the result.
On the Services page, find the nginx-ingress-lb Service and click the icon in the Type column.
Click the Listener tab. If HTTP:80 and HTTPS:443 are displayed in the Frontend Protocol/Port column, the listeners of the SLB instance are changed from Layer 4 to Layer 7.
How do I specify an existing SLB instance for ack-ingress-nginx deployed from the Marketplace page of the ACK console?
Log on to the ACK console. In the left-side navigation pane, choose Marketplace > Marketplace.
On the App Catalog tab, select ack-ingress-nginx or ack-ingress-nginx-v1.
If your cluster runs Kubernetes 1.20 or earlier, select ack-ingress-nginx.
If your cluster runs a Kubernetes version later than 1.20, select ack-ingress-nginx-v1.
Deploy an Ingress controller. For more information, see Deploy multiple Ingress controllers in a cluster.
On the Parameters wizard page, delete the original annotations and then add new annotations.
Delete all annotations in the controller.service.annotations section.
Add new annotations.
# Specify the SLB instance that you want to use.
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "${YOUR_LOADBALANCER_ID}"
# Overwrite the listeners of the SLB instance.
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
Click OK to deploy the Ingress controller.
After the Ingress controller is deployed, configure an Ingress class for the Ingress controller. For more information, see Deploy multiple Ingress controllers in a cluster.
How do I collect access logs from multiple Ingress controllers?
Prerequisites
Logtail is installed in your cluster. By default, Logtail is installed during cluster creation. If Logtail is not installed in your cluster, refer to Collect text logs from Kubernetes containers in DaemonSet mode to manually install Logtail.
Log collection is enabled for the default Ingress controller.
The labels added to the pods of other Ingress controllers are obtained. For more information, see How do I obtain the labels and environment variables of a Docker container?
Procedure:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage and then click Cluster Information in the left-side navigation pane.
On the Cluster Information page, click the Basic Information tab and then click the hyperlink to the right side of Log Service Project in the Cluster Resources section.
On the Logstores page in the Simple Log Service console, create a Logstore. For more information, see Manage a logstore. To avoid repeated log collection, we recommend that you create a separate Logstore for each Ingress controller.
You can name a Logstore based on the name of the Ingress controller that uses the Logstore.
In the Data Collection Wizard message, click Cancel.
In the left-side navigation pane of the Logstores page, go to the Logtail configuration list. In the Logtail Configuration column, click k8s-nginx-ingress to go to the configuration page.
On the Logtail Configuration page, click Copy. On the Logtail Replication page, select the Logstore that you created from the drop-down list. In the Container Filtering section, click Add in the Container Label Whitelist column and add the labels of the Ingress controllers as key-value pairs. Then, click Submit.
In the left-side navigation pane of the Logstores page, click the Logstore that you created. In the upper-right corner of the page, click Enable. Then, click OK in the Search & Analysis panel. The Logstore configuration is complete.
In the left-side navigation pane of the Logstores page, go to the Logtail configuration list. On the Logtail Configuration page, click Manage Logtail Configuration in the Actions column. On the Configuration Details tab, click Exact Field (Regex Mode) in the Processor Name column to view the extracted log fields.
On the Logtail Configuration page, click Switch to Editor Configuration. Click Edit below Logtail Configuration. In the Plug-in Configuration section, configure the Keys and Regex parameters based on your requirements. Then, click Save.
Note: If different NGINX Ingress controllers use different log formats, you must modify the processors parameter in the Logtail configuration of each Logstore accordingly.
How do I enable TCP listeners for the NGINX Ingress controller?
By default, Ingresses forward only external HTTP and HTTPS requests to Services in the cluster. You can configure ingress-nginx to enable an Ingress to forward external TCP requests received on the TCP port specified in the relevant ConfigMap.
Procedure:
Use the tcp-echo template to deploy a Service and a Deployment.
Use the following template to create a ConfigMap.
Modify the tcp-services-cm.yaml file. Then, save the changes and exit.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "9000": "default/tcp-echo:9000" # Forward external TCP requests received on port 9000 to the tcp-echo Service in the default namespace.
  "9001": "default/tcp-echo:9001"
Run the following command to create the ConfigMap:
kubectl apply -f tcp-services-cm.yaml
Open TCP ports for the Service used by nginx-ingress-controller. Then, save the changes and exit.
kubectl edit svc nginx-ingress-lb -n kube-system
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 192.168.xx.xx
  ipFamilies:
  - IPv4
  ports:
  - name: http
    nodePort: 30xxx
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 30xxx
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-echo-9000 # The port name.
    port: 9000 # The port number.
    protocol: TCP # The protocol.
    targetPort: 9000 # The destination port.
  - name: tcp-echo-9001
    port: 9001
    protocol: TCP
    targetPort: 9001
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
Check whether the configuration takes effect.
Run the following command to query information about the Service and obtain the IP address of the SLB instance associated with the Ingress.
kubectl get svc -n kube-system| grep nginx-ingress-lb
Expected output:
nginx-ingress-lb LoadBalancer 192.168.xx.xx 172.16.xx.xx 80:31246/TCP,443:30298/TCP,9000:32545/TCP,9001:31069/TCP
Run the nc command to send helloworld to the SLB IP address on ports 9000 and 9001. If the connections succeed without errors, the configuration takes effect.
echo "helloworld" | nc <172.16.xx.xx> 9000
echo "helloworld" | nc <172.16.xx.xx> 9001
What is the match logic of certificates configured for NGINX Ingresses?
An Ingress uses the spec.tls parameter to specify TLS configurations and the spec.rules.host parameter to specify the domain name of the Ingress. The NGINX Ingress controller uses Lua tables to store the mappings between domain names and certificates.
When a client sends an HTTPS request to NGINX, the request carries a Server Name Indication (SNI) field that specifies the host to which the request is sent. The NGINX Ingress uses the certificate.call() method to check whether a certificate is associated with the domain name. If no certificate is found, a fake certificate is returned.
Sample NGINX configurations:
## start server _
server {
server_name _ ;
listen 80 default_server reuseport backlog=65535 ;
listen [::]:80 default_server reuseport backlog=65535 ;
listen 443 default_server reuseport backlog=65535 ssl http2 ;
listen [::]:443 default_server reuseport backlog=65535 ssl http2 ;
set $proxy_upstream_name "-";
ssl_reject_handshake off;
ssl_certificate_by_lua_block {
certificate.call()
}
...
}
## start server www.example.com
server {
server_name www.example.com ;
listen 80 ;
listen [::]:80 ;
listen 443 ssl http2 ;
listen [::]:443 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
...
}
ingress-nginx supports the Online Certificate Status Protocol (OCSP) stapling feature, which is used to check the certificate status. With this feature enabled, clients are not required to verify the certificate status with certificate authorities (CAs). This speeds up certificate validation and accelerates the access to NGINX. For more information, see Configure OCSP stapling.
What do I do if no certificate matches an NGINX Ingress?
Find the Secret that stores the certificate and run the following command to decode the Base64-encoded certificate content and inspect it with openssl:
kubectl get secret <YOUR-SECRET-NAME> -n <SECRET-NAMESPACE> -o jsonpath={.data."tls\.crt"} |base64 -d | openssl x509 -text -noout
Check whether the domain name that you accessed is included in the Common Name (CN) field or the Subject Alternative Name (SAN) field. If it is not, you must create a new certificate that covers the domain name.
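To print only the relevant fields, you can narrow the openssl output to the subject and the SAN extension. This sketch assumes OpenSSL 1.1.1 or later, which supports the -ext option:
kubectl get secret <YOUR-SECRET-NAME> -n <SECRET-NAMESPACE> -o jsonpath={.data."tls\.crt"} | base64 -d | openssl x509 -noout -subject -ext subjectAltName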
What do I do if NGINX pods fail health checks in heavy load scenarios?
Health checks are performed by sending requests to the /healthz path through port 10246 of NGINX.
The following messages are returned when NGINX fails health checks:
I0412 11:01:52.581960 7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:01:55 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:01:55.895683 7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:02.582247 7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:05 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:05.896126 7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:12.582687 7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:15 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:15.895719 7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:22.582516 7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:25 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:25.896955 7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:28.983016 7 nginx.go:408] "NGINX process has stopped"
I0412 11:02:28.983033 7 sigterm.go:44] Handled quit, delaying controller exit for 10 seconds
I0412 11:02:32.582587 7 healthz.go:261] nginx-ingress-controller check failed: healthz
[-]nginx-ingress-controller failed: the ingress controller is shutting down
2024/04/12 11:02:35 Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
W0412 11:02:35.895853 7 nginx_status.go:171] unexpected error obtaining nginx status info: Get "http://127.0.0.1:10246/nginx_status": dial tcp 127.0.0.1:10246: connect: connection refused
I0412 11:02:38.986048 7 sigterm.go:47] "Exiting" code=0
In heavy load scenarios, the CPU usage of NGINX processes spikes and may even approach 100%. In this case, NGINX fails health checks. To resolve this issue, we recommend that you scale out NGINX pods to spread the pods to different nodes.
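A minimal scale-out sketch, assuming the default Deployment name; to spread the new pods across nodes, also confirm that the Deployment carries pod anti-affinity or topology spread constraints:
kubectl -n kube-system scale deployment nginx-ingress-controller --replicas=<DESIRED_REPLICAS>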
What do I do if certificates fail to be issued due to cert-manager errors?
This issue may occur when Web Application Firewall (WAF) is enabled. WAF may interfere with HTTP01 requests, which interrupts certificate issuance. To resolve this issue, we recommend that you disable WAF. Before you disable WAF, evaluate the impact after WAF is disabled.
How do I handle NGINX memory usage spikes during peak hours?
If NGINX encounters memory usage spikes and out of memory (OOM) errors during peak hours, log on to the NGINX pod and identify the processes that consume excessive memory. In most cases, metric collection processes cause memory leaks. This is a known issue in NGINX Ingress controller 1.6.4. To resolve this issue, we recommend that you update the NGINX Ingress controller to the latest version and refer to Install the NGINX Ingress controller in high-load scenarios to disable collection of metrics that significantly increase memory usage, such as nginx_ingress_controller_ingress_upstream_latency_seconds. For more information, see Ingress controller stress test, Prometheus metric collector memory leak, and Metrics PR.
What do I do if the NGINX Ingress controller remains in the Upgrading state?
When you implement a canary release to upgrade the NGINX Ingress controller, the upgrade task may get stuck in the verification step and the system prompts: Operation is forbidden for task in failed state. In most cases, this issue occurs because the duration of the upgrade task exceeds the timeout period, which is four days by default. When an upgrade task times out, the system automatically terminates the task. To resolve this issue, you must manually change the status of the canary release.
If the upgrade task is in the Release step, you do not need to perform the upgrade. When the duration of the upgrade task exceeds four days, the system automatically terminates the task.
Procedure
After you complete the following modifications, the system resumes the canary release to upgrade the NGINX Ingress controller. However, the NGINX Ingress controller may remain in the Upgrading state on the Add-ons page of the ACK console for about two weeks.
Run the following command to modify the nginx-ingress-controller Deployment:
kubectl edit deploy -n kube-system nginx-ingress-controller
Configure the following parameters:
spec.minReadySeconds: 0
spec.progressDeadlineSeconds: 600
spec.strategy.rollingUpdate.maxSurge: 25%
spec.strategy.rollingUpdate.maxUnavailable: 25%
Save the changes and exit.
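If you prefer a single command over interactive editing, the same fields can be set with kubectl patch. The following sketch is equivalent to the preceding steps:
kubectl -n kube-system patch deployment nginx-ingress-controller --type merge -p '{"spec":{"minReadySeconds":0,"progressDeadlineSeconds":600,"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"}}}}'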
What do I do if chunked transfer (Transfer-Encoding: chunked) does not work as expected when I use NGINX Ingress controller 1.10 or later?
If you specify the Transfer-Encoding: chunked HTTP header in your code and duplicate Transfer-Encoding: chunked headers appear in the controller log, the issue may be caused by NGINX updates. For more information, see NGINX update logs. NGINX Ingress controller 1.10 and later enhance the validation of HTTP responses. As a result, if the backend application returns multiple Transfer-Encoding: chunked headers, the response is considered invalid. To resolve this issue, configure the backend application to return only one Transfer-Encoding: chunked header. For more information, see GitHub Issue #11162.
How do I configure IP blacklists and whitelists to control access to NGINX Ingresses?
If you want to configure IP blacklists and whitelists to control access to NGINX Ingresses, you can add annotations to individual Ingresses or add key-value pairs to the nginx-configuration ConfigMap. The nginx-configuration ConfigMap takes effect on all NGINX Ingresses, and the IP blacklists and whitelists configured in individual Ingresses take precedence over those configured in the ConfigMap. The following table describes the annotations that you can add to configure IP blacklists and whitelists. For more information, see Denylist Source Range and Whitelist Source Range.
Annotation | Description
nginx.ingress.kubernetes.io/denylist-source-range | The IP blacklist. IP addresses and CIDR blocks are supported. Separate multiple entries with commas (,).
nginx.ingress.kubernetes.io/whitelist-source-range | The IP whitelist. IP addresses and CIDR blocks are supported. Separate multiple entries with commas (,).
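For example, the following sketch allows access to a single Ingress only from two sources. The addresses and the Ingress name are placeholders for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whitelist-example # A hypothetical name.
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,192.168.1.1"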
What are the known issues in NGINX Ingress controller 1.2.1?
When you configure the defaultBackend parameter of an Ingress, the defaultBackend parameter value of the default server may be overwritten. For more information, see GitHub Issue #8823. To resolve this issue, we recommend that you upgrade the NGINX Ingress controller to 1.3 or later. For more information about how to upgrade the NGINX Ingress controller, see Upgrade the NGINX Ingress controller.
What do I do if connection reset errors occur when I use curl to access public services over the Internet?
When you use curl to access public services outside China over HTTP, the curl: (56) Recv failure: Connection reset by peer error may be returned. In most cases, this issue occurs because the plaintext HTTP requests contain sensitive words, which causes the requests to be blocked or the connection to be reset. You can configure a TLS certificate for the Ingress rules to encrypt the communication.
What is the logic of path matching priorities?
In NGINX, regular expression locations follow a first-match policy. To enable more accurate path matching, ingress-nginx orders paths in descending order of length before writing them to the NGINX configuration. For more information, see https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/.
Why are non-idempotent requests not retried?
In NGINX 1.9.13 and later, NGINX does not retry non-idempotent requests, such as POST, LOCK, and PATCH requests, when errors occur. To restore the previous behavior, set retry-non-idempotent to "true" in the nginx-configuration ConfigMap, as shown in the following example.
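The value must be a quoted string because ConfigMap data values are strings:
retry-non-idempotent: "true"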
How do I enable NGINX Ingresses to support large client request headers or cookies?
If an NGINX Ingress receives excessively large client request headers or cookies, the 400 Request Header Or Cookie Too Large error may be returned. To resolve this issue, modify the following parameters to increase the buffer size for reading client request headers:
client-header-buffer-size: the buffer size for reading client request headers. Default value: 1k.
large-client-header-buffers: the maximum number and size of buffers used for reading large client request headers. Default value: 4 8k.
You can run the kubectl edit cm -n kube-system nginx-configuration command to modify the nginx-configuration ConfigMap and adjust the preceding parameters based on your business requirements. Example:
client-header-buffer-size: "16k"
large-client-header-buffers: "4 32k"
After you complete the modification, make sure that the new configurations take effect on the NGINX data plane. You can run the kubectl exec <nginx-ingress-pod> -n kube-system -- cat /etc/nginx/nginx.conf | vim - command to view the nginx.conf file and check whether the configurations are synchronized from the nginx-configuration ConfigMap.
After I set pathType to Exact or Prefix for specific paths in the configurations of an Ingress, why is a regular expression used instead?
If the use-regex or rewrite-target annotation is added to any Ingress for a given host, a case-insensitive regular expression is enforced on all paths for that host. This is the implementation logic of the NGINX Ingress controller. For more information, see Community documentation.
Why do validation webhooks respond slowly when I add new Ingresses to a cluster that already has a large number of Ingresses?
This is a known performance issue in NGINX Ingress controller 1.11.4 and earlier. For more information, see GitHub Issue #11115.
Solution:
If your cluster can tolerate slow responses and the webhook responses currently time out, you can increase the timeout period for the ingress-nginx-admission validatingwebhookconfiguration, as shown in the sketch after this list. The default timeout period is 10 seconds and the maximum is 30 seconds. Note that the timeout value is overwritten when the NGINX Ingress controller is upgraded.
Add the --disable-full-test=true setting to the startup parameters of the Deployment created for the NGINX Ingress controller. After you add this setting, the system performs only incremental verification on Ingresses, which speeds up validation. However, conflicts between Ingress rules cannot be detected.
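If you choose to raise the webhook timeout, a one-off patch might look like the following sketch. The resource name ingress-nginx-admission comes from the resource list earlier in this topic, and the index 0 assumes that the configuration contains a single webhook entry:
kubectl patch validatingwebhookconfiguration ingress-nginx-admission --type json -p '[{"op":"replace","path":"/webhooks/0/timeoutSeconds","value":30}]'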
Why do snippet issues occur?
If you configure snippets of the same type in different Ingresses for the same server, only one of them takes effect and the others are skipped. The controller log contains warnings similar to the following:
W0619 14:58:49.323721 7 controller.go:1314] Server snippet already configured for server "test.example.com", skipping (Ingress "default/test.example.com")
W0619 14:58:49.323727 7 controller.go:1314] Server snippet already configured for server "test.example.com", skipping (Ingress "default/test.example.com")
W0619 14:58:49.323734 7 controller.go:1314] Server snippet already configured for server "test.example.com", skipping (Ingress "default/test.example.com")
Why do certificates stored in Secrets not take effect?
Run the kubectl -n kube-system logs <nginx-ingress-controller-pod-name> | grep "Error getting SSL certificate" command to view the log of the NGINX Ingress controller pod. If the following error message is displayed in the log, perform the following steps to troubleshoot the issue. xxxx refers to the Secret name.
Error getting SSL certificate "xxxx": local SSL certificate xxxx tls was not found. Using default certificate
This error message may occur due to the following reasons:
The xxxx Secret does not exist.
The public key (tls.crt) does not match the private key (tls.key). When you run the kubectl create secret tls command to create a TLS Secret in which the public key (tls.crt) does not match the private key (tls.key), an error message is returned. Example:
root@Aliyun ~/ssl # kubectl create secret tls tls-test-ingress --key example.com.key --cert httpbin.example.com.crt
error: tls: private key does not match public key
For an existing Secret, you can run the following command to check whether this issue exists in the Secret:
export SECRET_NAME=<Your Secret Name>
export NAME_SPACE=<Your Secret Namespace>
diff <(kubectl get secret $SECRET_NAME -n $NAME_SPACE -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -modulus | openssl md5) <(kubectl get secret $SECRET_NAME -n $NAME_SPACE -o jsonpath='{.data.tls\.key}' | base64 -d | openssl rsa -noout -modulus | openssl md5) && echo "Certificate and Key match" || echo "Certificate and Key do not match"
If the following output is returned, this issue exists in the Secret:
root@Aliyun ~/ssl # export SECRET_NAME=test
root@Aliyun ~/ssl # export NAME_SPACE=default
root@Aliyun ~/ssl # diff <(kubectl get secret $SECRET_NAME -n $NAME_SPACE -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -modulus | openssl md5) <(kubectl get secret $SECRET_NAME -n $NAME_SPACE -o jsonpath='{.data.tls\.key}' | base64 -d | openssl rsa -noout -modulus | openssl md5) && echo "Certificate and Key match" || echo "Certificate and Key do not match"
1c1
< (stdin)= 66a309089e87e32d1b6fe361ebf8cd88
---
> (stdin)= 12e15c5fe35585b6fd9920abc8e8706d
Certificate and Key do not match
The same domain name is used by multiple TLS certificates configured for different Ingresses. However, one TLS certificate is incorrectly configured. Find the TLS certificate with incorrect configurations and correct the configurations.
Why do I still receive alert notifications for certificate expiration after I renew expired certificates?
Specific bugs may exist in earlier versions of the NGINX Ingress controller. After you renew a certificate, the nginx_ingress_controller_ssl_expire_time_seconds metric for the old certificate may still exist. To resolve this issue, perform a rolling update of the NGINX Ingress controller. This issue is fixed in NGINX Ingress controller 1.11.4 and later. For more information about how to upgrade the NGINX Ingress controller, see Upgrade the NGINX Ingress controller.