
Container Service for Kubernetes:Enable tracing for the NGINX Ingress controller

Last Updated: Mar 26, 2026

Container Service for Kubernetes (ACK) lets you enable distributed tracing on the NGINX Ingress controller and send trace data to Managed Service for OpenTelemetry. Managed Service for OpenTelemetry persists the data and computes it in real time to produce trace details and real-time topology, which you can use to troubleshoot and diagnose issues.

Version compatibility

The tracing protocol supported by the NGINX Ingress controller depends on its version.

NGINX Ingress controller version | OpenTelemetry | OpenTracing
≥ v1.10.2-aliyun.1               | Supported     | Not supported
v1.9.3-aliyun.1                  | Supported     | Supported
v1.8.2-aliyun.1                  | Supported     | Supported
< v1.8.2-aliyun.1                | Not supported | Supported

Follow the procedure that matches your installed version.

Prerequisites

Before you begin, ensure that you have:

Procedure

Enable tracing with OpenTelemetry

Use this procedure for NGINX Ingress controller versions ≥ v1.8.2-aliyun.1.

Step 1: Get the endpoint from Managed Service for OpenTelemetry

The steps differ depending on which version of the Managed Service for OpenTelemetry console you are using.

New version of the Managed Service for OpenTelemetry console

  1. Log on to the Managed Service for OpenTelemetry console. In the left-side navigation pane, click Integration Center.

  2. In the Open Source Frameworks section, click the OpenTelemetry card.

  3. In the OpenTelemetry panel, select the region from which you want to import trace data.

  4. Record the endpoint used to import data over gRPC.

    The NGINX Ingress controller and the Managed Service for OpenTelemetry agent in this example are deployed in the same region. Use a virtual private cloud (VPC) endpoint. If they are in different regions, use a public endpoint.


Previous version of the Managed Service for OpenTelemetry console

  1. Log on to the Managed Service for OpenTelemetry console.

  2. In the left-side navigation pane, click Cluster Configurations. On the right side of the page that appears, click the Access point information tab.

  3. In the upper part of the page, select the region from which you want to import trace data.

  4. In the Cluster Information section, select Show Token. Then, click OpenTelemetry in the Client section and record the endpoint used to import data over gRPC.

    The NGINX Ingress controller and the Managed Service for OpenTelemetry agent in this example are deployed in the same region. Use a VPC endpoint. If they are in different regions, use a public endpoint.


Step 2: Enable OpenTelemetry on the NGINX Ingress controller

Part A: Add the authentication environment variable

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of your cluster. In the left-side navigation pane, choose Workloads > Deployments.

  3. In the Namespace drop-down list, select kube-system. Enter nginx-ingress-controller in the search box and click the search icon. Click Edit in the Actions column of the nginx-ingress-controller deployment.

  4. At the top of the Edit page, select the nginx-ingress-controller container. On the Environments tab, click Add and configure the following environment variable, setting the value to the authentication token obtained in Step 1 (for example, authentication=bfXXXXXXXe@7bXXXXXXX1_bXXXXXe@XXXXXXX1). Then click Update on the right side of the Edit page and, in the dialog that appears, click Confirm.

    Type   | Variable key               | Value/ValueFrom
    Custom | OTEL_EXPORTER_OTLP_HEADERS | authentication=<Authentication token>

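The environment variable above can also be expressed as a fragment of the Deployment manifest. This is a sketch, not a complete Deployment; the container name and the <Authentication token> placeholder follow the example in this step:

```yaml
# Fragment of the nginx-ingress-controller Deployment in the kube-system namespace.
# Only the env addition is shown; all other fields of the Deployment are unchanged.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          env:
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "authentication=<Authentication token>"
```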

Part B: Configure the nginx-configuration ConfigMap

  1. In the left-side navigation pane, choose Configurations > ConfigMaps.

  2. In the Namespace drop-down list, select kube-system. Enter nginx-configuration in the Name search box and click the search icon. Click Edit in the Actions column.

  3. In the Edit panel, click Add to add the following configuration entries, then click OK.

    Name | Description | Valid value | Example
    enable-opentelemetry | Enables tracing with OpenTelemetry. | true / false | true
    main-snippet | Exposes the OTEL_EXPORTER_OTLP_HEADERS environment variable to the NGINX configuration. | env OTEL_EXPORTER_OTLP_HEADERS; | env OTEL_EXPORTER_OTLP_HEADERS;
    otel-service-name | Service name displayed in traces. | Custom value | nginx-ingress
    otlp-collector-host | Domain name for gRPC data export. Remove http:// and the port number from the VPC endpoint obtained in Step 1. | Domain name only | tracing-analysis-XX-XX-XXXXX.aliyuncs.com
    otlp-collector-port | Port for gRPC data export. | Port number | 8090
    opentelemetry-trust-incoming-span | Whether to trust trace context propagated by upstream services. true: trust upstream spans. false: do not trust them. | true / false | true
    opentelemetry-operation-name | Span name format. | HTTP $request_method $service_name $uri | HTTP $request_method $service_name $uri
    otel-sampler | Sampling strategy. For available options, see the OpenTelemetry documentation. | TraceIdRatioBased | TraceIdRatioBased
    otel-sampler-ratio | Fraction of requests to sample. 0: no data collected. 1: all requests sampled. Accurate to two decimal places. For more information, see the OpenTelemetry documentation. | 0–1 | 0.1
    otel-sampler-parent-based | Whether to inherit the sampling decision of the upstream span. false (default): apply otel-sampler and otel-sampler-ratio. true: inherit the upstream sampling flag and ignore otel-sampler and otel-sampler-ratio. For more information, see the OpenTelemetry documentation. | true / false | false
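For reference, the entries in the table above can be assembled into a single nginx-configuration ConfigMap. This is a sketch using the example values from the table; replace the otlp-collector-host placeholder with the domain name from your own VPC endpoint:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  # ConfigMap values are strings, so booleans and numbers must be quoted.
  enable-opentelemetry: "true"
  main-snippet: "env OTEL_EXPORTER_OTLP_HEADERS;"
  otel-service-name: "nginx-ingress"
  otlp-collector-host: "tracing-analysis-XX-XX-XXXXX.aliyuncs.com"
  otlp-collector-port: "8090"
  opentelemetry-trust-incoming-span: "true"
  opentelemetry-operation-name: "HTTP $request_method $service_name $uri"
  otel-sampler: "TraceIdRatioBased"
  otel-sampler-ratio: "0.1"
  otel-sampler-parent-based: "false"
```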

Step 3: Verify trace data in Managed Service for OpenTelemetry

  1. Log on to the Managed Service for OpenTelemetry console.

  2. In the left-side navigation pane, click Applications.

  3. At the top of the Applications page, select the region you configured in Step 1. Click nginx-ingress.

  4. On the application details page, review the trace data:

    • On the Application Overview tab, view the request count and error count.

    • On the Trace Analysis tab, view the trace list and average duration.

    • On the Trace Analysis tab, click a trace ID to view span details.

Enable tracing with OpenTracing

Step 1: Get the endpoint from Managed Service for OpenTelemetry

New version of the Managed Service for OpenTelemetry console

  1. Log on to the Managed Service for OpenTelemetry console. In the left-side navigation pane, click Integration Center.

  2. In the Open Source Frameworks section, click the Zipkin card.

    This step uses a Zipkin client to collect trace data.
  3. In the Zipkin panel, select the region from which you want to import trace data.

  4. Record the endpoint.

    The NGINX Ingress controller and the Managed Service for OpenTelemetry agent in this example are deployed in the same region. Use a VPC endpoint. If they are in different regions, use a public endpoint.


Previous version of the Managed Service for OpenTelemetry console

  1. Log on to the Managed Service for OpenTelemetry console.

  2. In the left-side navigation pane, click Cluster Configurations. On the page that appears, click the Access point information tab.

  3. At the top of the page, select the region from which you want to import trace data.

  4. In the Cluster Information section, select Show Token. Click Zipkin in the Client section and record the endpoint.

    The NGINX Ingress controller and the Managed Service for OpenTelemetry agent in this example are deployed in the same region. Use a VPC endpoint. If they are in different regions, use a public endpoint.


Note: OpenTracing is supported only by NGINX Ingress controller versions earlier than v1.10.2-aliyun.1, and it uses a Zipkin client to collect trace data.

Step 2: Enable OpenTracing on the NGINX Ingress controller

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of your cluster. In the left-side navigation pane, choose Configurations > ConfigMaps.

  3. In the Namespace drop-down list, select kube-system. Enter nginx-configuration in the Name search box and click the search icon. Click Edit in the Actions column.

  4. In the Edit panel, click Add to add the following configuration entries, then click OK.

    Name | Description | Valid value | Example
    enable-opentracing | Enables tracing with OpenTracing. | true / false | true
    zipkin-service-name | Service name displayed in traces. | Custom value | nginx-ingress
    zipkin-collector-host | Domain name for data import. Remove http:// from the endpoint obtained in Step 1 and append a question mark (?). For example, http://tracing-analysis-dc-hz-internal.aliyuncs.com/adapt_****_**/api/v1/spans becomes tracing-analysis-dc-hz-internal.aliyuncs.com/adapt_****_**/api/v1/spans?. | Modified endpoint | tracing-analysis-dc-hz-internal.aliyuncs.com/adapt_****_****/api/v1/spans?
    opentracing-trust-incoming-span | Whether to trust trace context propagated by upstream services. | true / false | true
    zipkin-sample-rate | Fraction of requests to sample. 0: no data collected. 1: all requests sampled. Accurate to two decimal places. | 0–1 | 0.1
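As with the OpenTelemetry procedure, the entries in the table above can be assembled into a nginx-configuration ConfigMap. This is a sketch using the masked example endpoint from the table; substitute the endpoint you obtained in Step 1:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  # ConfigMap values are strings, so booleans and numbers must be quoted.
  enable-opentracing: "true"
  zipkin-service-name: "nginx-ingress"
  # Endpoint with http:// removed and a trailing question mark appended.
  zipkin-collector-host: "tracing-analysis-dc-hz-internal.aliyuncs.com/adapt_****_****/api/v1/spans?"
  opentracing-trust-incoming-span: "true"
  zipkin-sample-rate: "0.1"
```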

Step 3: Verify trace data in Managed Service for OpenTelemetry

  1. Log on to the Managed Service for OpenTelemetry console.

  2. In the left-side navigation pane, click Applications.

  3. At the top of the Applications page, select the region you configured in Step 1. Click the application whose name matches the zipkin-service-name value you configured (nginx-ingress in this example).

  4. In the left-side navigation pane of the details page, click Interface Calls. Review the trace data on the right side of the page:

    • On the Overview tab, view the trace topology.

    • On the Traces tab, view the top 100 most time-consuming traces. For more information, see Interface calls.

    • On the Traces tab, click a trace ID to view span details.

(Optional) Change the trace propagation protocol

When OpenTelemetry is enabled, Managed Service for OpenTelemetry propagates trace context to the downstream service in the W3C Trace Context format. If your frontend or backend applications use a different protocol, such as Jaeger or Zipkin, change the propagation protocol so that spans from the frontend application, NGINX Ingress, and backend application are correctly joined into a single trace.

  1. Add the OTEL_PROPAGATORS environment variable to the nginx-ingress-controller deployment. Follow the same steps as Part A of Step 2 in the OpenTelemetry procedure, then save the changes and redeploy.

    Variable key     | Value                          | Description
    OTEL_PROPAGATORS | tracecontext,baggage,b3,jaeger | Protocols used to propagate trace context. For more information, see Specify the format to pass trace data.
  2. Update the main-snippet entry in the nginx-configuration ConfigMap. Follow the same steps as Part B of Step 2 and set main-snippet to the following value.

    Name         | Value                                              | Description
    main-snippet | env OTEL_EXPORTER_OTLP_HEADERS; env OTEL_PROPAGATORS; | Loads both the OTEL_EXPORTER_OTLP_HEADERS and OTEL_PROPAGATORS environment variables into the NGINX configuration.
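The two changes above can be sketched together as manifest fragments. This is a sketch only; the <Authentication token> placeholder and the propagator list follow the examples in this section, and all other fields of the Deployment and ConfigMap are unchanged:

```yaml
# 1) Fragment of the nginx-ingress-controller Deployment: add OTEL_PROPAGATORS
#    alongside the existing OTEL_EXPORTER_OTLP_HEADERS variable.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          env:
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "authentication=<Authentication token>"
            - name: OTEL_PROPAGATORS
              value: "tracecontext,baggage,b3,jaeger"
---
# 2) Fragment of the nginx-configuration ConfigMap: expose both variables
#    to the NGINX configuration.
data:
  main-snippet: "env OTEL_EXPORTER_OTLP_HEADERS; env OTEL_PROPAGATORS;"
```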

What's next