
Alibaba Cloud Service Mesh:Configure ASM to report tracing data

Last Updated: Mar 11, 2026

Envoy sidecars in Alibaba Cloud Service Mesh (ASM) generate distributed trace spans for every request that they proxy. You can export these spans to Alibaba Cloud Managed Service for OpenTelemetry or to a self-managed backend such as Zipkin or SkyWalking. The export method depends on your ASM version.

How it works

When tracing is enabled, every Envoy sidecar generates trace spans for inbound and outbound requests. ASM forwards these spans through one of two paths:

  • Direct export (versions earlier than 1.18.0.124, and 1.22.6.89 or later) -- ASM sends spans directly to Managed Service for OpenTelemetry. Enable this in the ASM console.

  • Collector-based export (versions 1.18.0.124 to earlier than 1.22.6.89) -- Deploy an OpenTelemetry Collector in the Container Service for Kubernetes (ACK) cluster, then point ASM at the Collector. The Collector forwards spans to Managed Service for OpenTelemetry over gRPC.

Important

Envoy sidecars generate independent spans. To join spans into end-to-end traces, your application must propagate trace context headers between services. Forward the following headers in every request:

  • x-request-id

  • traceparent and tracestate (W3C Trace Context)

  • For Zipkin: x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags (B3 format)

Without header propagation, traces appear as disconnected spans rather than a single correlated trace.
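Header propagation happens in application code: each service copies the tracing headers it receives into every outbound call it makes. As a minimal sketch of the idea in shell, the helper below builds the curl header flags for a downstream request; the function name and all header values are hypothetical placeholders, and real applications typically do this in their HTTP client or framework middleware.

```shell
# Build the -H flags that forward inbound tracing headers to an outbound
# curl call. Each argument is one "name: value" header received on the
# inbound request (the values below are hypothetical placeholders).
propagate_headers() {
  for h in "$@"; do
    printf -- "-H '%s' " "$h"
  done
}

propagate_headers \
  "x-request-id: 3e2f9f5c-4a1b-4c6d-9f1e-0a1b2c3d4e5f" \
  "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
# The printed flags would be appended to the outbound request, e.g.:
#   curl $(propagate_headers ...) http://reviews:9080/reviews/0
```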

Prerequisites

Before you begin, make sure that you have:

Check your ASM version

Configuration steps differ by ASM version. To check your version:

  1. Log on to the ASM console.

  2. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  3. Click the name of your ASM instance. The version appears on the Base Information page.

To upgrade, see Update an ASM instance.

Export to Managed Service for OpenTelemetry

Choose the procedure that matches your ASM version.

Versions earlier than 1.17.2.35

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. Click the name of the target ASM instance. In the left-side navigation pane, choose ASM Instance > Base Information.

  3. On the Base Information page, click Settings. In the Settings Update panel:

    1. Select Enable Tracing Analysis.

    2. Set the Sampling Percentage.

    3. For Sampling Method, select Enable Managed Service for OpenTelemetry.

    4. Click OK.

  4. In the left-side navigation pane, choose Observability Management Center > Tracing Analysis. The Managed Service for OpenTelemetry console opens and displays the ASM tracing data.

    Tracing Analysis page

For more information about Managed Service for OpenTelemetry, see What is Managed Service for OpenTelemetry?

Note To disable tracing, clear Enable Tracing Analysis in the Settings Update panel and click OK.

Versions 1.17.2.35 to earlier than 1.18.0.124

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. Click the name of the target ASM instance. In the left-side navigation pane, choose Observability Management Center > Tracing Analysis.

  3. Click Collect ASM Tracing Data to Managed Service for OpenTelemetry. In the Submit dialog, click OK.

  4. Click Open the Managed Service for OpenTelemetry Console to view the tracing data.

    Tracing Analysis page

For more information about Managed Service for OpenTelemetry, see What is Managed Service for OpenTelemetry?

Note To disable tracing, click Disable Collection on the Tracing Analysis page. In the Submit dialog, click OK.

Versions 1.18.0.124 to earlier than 1.22.6.89

In this version range, the ASM console does not offer built-in Managed Service for OpenTelemetry integration. Instead, deploy an OpenTelemetry Collector in the ACK cluster and configure ASM to send spans to it.

Step 1: Deploy the OpenTelemetry Operator

  1. Connect to the ACK cluster with kubectl. Create the opentelemetry-operator-system namespace:

       kubectl create namespace opentelemetry-operator-system
  2. Install the OpenTelemetry Operator with Helm:

       helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
       helm install --namespace=opentelemetry-operator-system opentelemetry-operator open-telemetry/opentelemetry-operator \
         --set "manager.collectorImage.repository=otel/opentelemetry-collector-k8s" \
         --set admissionWebhooks.certManager.enabled=false \
         --set admissionWebhooks.autoGenerateCert.enabled=true
  3. Verify the Operator is running:

       kubectl get pod -n opentelemetry-operator-system

     Expected output:

       NAME                                      READY   STATUS    RESTARTS   AGE
       opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m

     A STATUS of Running confirms that the Operator is ready.

Step 2: Deploy the OpenTelemetry Collector

  1. Create a file named collector.yaml with the following content. Replace the two placeholders before applying:

       apiVersion: opentelemetry.io/v1alpha1
       kind: OpenTelemetryCollector
       metadata:
         labels:
           app.kubernetes.io/managed-by: opentelemetry-operator
         name: default
         namespace: opentelemetry-operator-system
         annotations:
           sidecar.istio.io/inject: "false"
       spec:
         config: |
           extensions:
             zpages:
               endpoint: 0.0.0.0:55679
           receivers:
             otlp:
               protocols:
                 grpc:
                   endpoint: 0.0.0.0:4317
           exporters:
             debug:
               verbosity: detailed
             otlp:
               endpoint: ${ENDPOINT}
               tls:
                 insecure: true
               headers:
                 Authentication: ${TOKEN}
           service:
             extensions: [zpages]
             pipelines:
               traces:
                 receivers: [otlp]
                 processors: []
                 exporters: [otlp, debug]
         ingress:
           route: {}
         managementState: managed
         mode: deployment
         observability:
           metrics: {}
         podDisruptionBudget:
           maxUnavailable: 1
         replicas: 1
         resources: {}
         targetAllocator:
           prometheusCR:
             scrapeInterval: 30s
           resources: {}
         upgradeStrategy: automatic
    Note This sample configuration deploys a single-replica Collector without persistent storage. For production workloads, increase the replica count and configure appropriate resource requests and limits.
     | Placeholder | Description | How to obtain |
     | ----------- | ----------- | ------------- |
     | ${ENDPOINT} | VPC gRPC access point for Managed Service for OpenTelemetry | See Access and authentication instructions |
     | ${TOKEN} | Authentication token | See Access and authentication instructions |
  2. Apply the Collector to the cluster:

       kubectl apply -f collector.yaml
  3. Verify the Collector pod is running:

       kubectl get pod -n opentelemetry-operator-system

     Expected output:

       NAME                                      READY   STATUS    RESTARTS   AGE
       opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          3m
       default-collector-5cbb4497f4-2hjqv        1/1     Running   0          30s
  4. Verify the Collector service exists:

       kubectl get svc -n opentelemetry-operator-system

     Expected output:

       NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
       opentelemetry-operator           ClusterIP   172.16.138.165   <none>        8443/TCP,8080/TCP   3m
       opentelemetry-operator-webhook   ClusterIP   172.16.127.0     <none>        443/TCP             3m
       default-collector                ClusterIP   172.16.145.93    <none>        4317/TCP            30s
       default-collector-headless       ClusterIP   None             <none>        4317/TCP            30s
       default-collector-monitoring     ClusterIP   172.16.136.5     <none>        8888/TCP            30s

     The default-collector service listening on port 4317 confirms a successful deployment.
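The placeholder substitution required in step 1 can also be scripted rather than edited by hand. The sketch below demonstrates the sed invocation on an inline sample line; the endpoint and token values are hypothetical, and the real ones must be copied from the Access and authentication instructions.

```shell
# Hypothetical values; obtain the real ones from the Managed Service for
# OpenTelemetry console (Access and authentication instructions).
ENDPOINT="tracing-analysis-dc-hz-internal.aliyuncs.com:8090"
TOKEN="token-xxx"

# Demonstrated on a sample line here; in practice, run the same sed over
# collector.yaml and apply the result, e.g.:
#   sed -e "s|\${ENDPOINT}|${ENDPOINT}|" -e "s|\${TOKEN}|${TOKEN}|" \
#     collector.yaml | kubectl apply -f -
printf 'endpoint: ${ENDPOINT}\nAuthentication: ${TOKEN}\n' \
  | sed -e "s|\${ENDPOINT}|${ENDPOINT}|" -e "s|\${TOKEN}|${TOKEN}|"
# Prints the sample lines with both placeholders resolved.
```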

Step 3: Enable tracing in the ASM console

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. Click the name of the target ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. On the Observability Configuration page, in the Link Tracking Settings section, set Sampling Percentage to 100, then click Submit.

  4. In the left-side navigation pane, choose Observability Management Center > Link Tracking. Configure the following fields:

     | Field | Value |
     | ----- | ----- |
     | Opentelemetry Service Address/domain Name | default-collector.opentelemetry-operator-system.svc.cluster.local |
     | Opentelemetry Service Port | 4317 |
  5. Click Collect Service Mesh Link Tracking Data To Opentelemetry.

Versions 1.22.6.89 or later (recommended)

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. Click the name of the target ASM instance. In the left-side navigation pane, choose Observability Management Center > Tracing Analysis.

  3. On the Tracing Analysis page, set Export Method to Export to Alibaba Cloud Observable link, choose a data submission protocol under Fill in the configuration (for example, Zipkin), then click Submission.

  4. Click Go To Alibaba Cloud Observable link OpenTelemetry console to view the tracing data. For more information, see What is Managed Service for OpenTelemetry?

    Tracing Analysis page for ASM 1.22.6.89+

    Note To disable tracing, click Disable Collection on the Tracing Analysis page. In the Submit dialog, click OK.

Export to a self-managed tracing system

If you run a self-managed backend such as an OpenTelemetry-compatible collector, Zipkin, or SkyWalking, configure ASM to export spans directly to it.

Versions earlier than 1.18.0.124

  • Earlier than 1.17.2.28: Log on to the ASM console. On the Base Information page, click Settings, select Enable Tracing Analysis, configure the settings, then click OK.

  • 1.17.2.28 to earlier than 1.18.0.124: See the Tracing Analysis settings section in "Configure observability settings".

Versions 1.18.0.124 to earlier than 1.22.6.89

Log on to the ASM console. Navigate to Observability Management Center > Link Tracking and configure the following parameters:

| Parameter | Description |
| --------- | ----------- |
| OpenTelemetry Domain Name (FQDN) | Fully qualified domain name of the self-managed backend. Example: otel.istio-system.svc.cluster.local |
| OpenTelemetry Service Port | Service port of the self-managed backend. Example: 8090 |

Versions 1.22.6.89 or later

Log on to the ASM console. Navigate to Observability Management Center > Link Tracking, then select and configure a self-managed system.

Important

The self-managed tracing backend must be deployed within the ASM instance or registered through a ServiceEntry. If the backend runs outside the mesh, create a ServiceEntry to make it accessible. See ServiceEntry.
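A backend reachable only outside the mesh could be registered with a ServiceEntry along the following lines. This is a sketch: the hostname, namespace, and port are hypothetical, and the protocol and port must be adjusted to match your backend.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-tracing-backend   # hypothetical name
  namespace: istio-system
spec:
  hosts:
  - zipkin.example.com             # hypothetical external hostname
  ports:
  - number: 9411                   # hypothetical backend port
    name: http-tracing
    protocol: HTTP
  location: MESH_EXTERNAL
  resolution: DNS
```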

The configuration parameters depend on the export protocol:

OpenTelemetry (gRPC)

| Parameter | Description |
| --------- | ----------- |
| Service domain name (full FQDN) | FQDN of the backend. Example: otel.istio-system.svc.cluster.local |
| Service Port | Service port. Example: 8090 |
| Timeout | Optional. Request timeout in seconds. Example: 1. Disabled by default. |
| Request Header | Optional. Custom headers. Example: authentication: token-xxx. Empty by default. |

OpenTelemetry (HTTP)

| Parameter | Description |
| --------- | ----------- |
| Service domain name (full FQDN) | FQDN of the backend. Example: otel.istio-system.svc.cluster.local |
| Service Port | Service port. Example: 8090 |
| Request Path | HTTP request path. Example: /api/v2/spans. Default: / |
| Timeout | Optional. Request timeout in seconds. Example: 1. Disabled by default. |
| Request Header | Optional. Custom headers. Example: authentication: token-xxx. Empty by default. |

Zipkin

| Parameter | Description |
| --------- | ----------- |
| Service domain name (full FQDN) | FQDN of the backend. Example: zipkin.istio-system.svc.cluster.local |
| Service Port | Service port. Example: 8090 |
| Request Path | HTTP request path. Example: /api/v2/spans. Default: /api/v2/spans |

SkyWalking

| Parameter | Description |
| --------- | ----------- |
| Service domain name (full FQDN) | FQDN of the backend. Example: skywalking.istio-system.svc.cluster.local |
| Service Port | Service port. Example: 8090 |

Verify tracing data

After configuration, generate traffic and confirm that traces appear in the backend.

Deploy sample applications

Deploy the Bookinfo and sleep applications to the data-plane cluster to generate traceable traffic.

  1. Create a file named bookinfo.yaml and copy the following content to it.

       apiVersion: v1
       kind: Service
       metadata:
         name: details
         labels:
           app: details
           service: details
       spec:
         ports:
         - port: 9080
           name: http
         selector:
           app: details
       ---
       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: bookinfo-details
         labels:
           account: details
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: details-v1
         labels:
           app: details
           version: v1
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: details
             version: v1
         template:
           metadata:
             labels:
               app: details
               version: v1
           spec:
             serviceAccountName: bookinfo-details
             containers:
             - name: details
               image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/examples-bookinfo-details-v1:1.20.1
               imagePullPolicy: IfNotPresent
               ports:
               - containerPort: 9080
       ---
       ##################################################################################################
       # Ratings service
       ##################################################################################################
       apiVersion: v1
       kind: Service
       metadata:
         name: ratings
         labels:
           app: ratings
           service: ratings
       spec:
         ports:
         - port: 9080
           name: http
         selector:
           app: ratings
       ---
       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: bookinfo-ratings
         labels:
           account: ratings
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: ratings-v1
         labels:
           app: ratings
           version: v1
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: ratings
             version: v1
         template:
           metadata:
             labels:
               app: ratings
               version: v1
           spec:
             serviceAccountName: bookinfo-ratings
             containers:
             - name: ratings
               image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/examples-bookinfo-ratings-v1:1.20.1
               imagePullPolicy: IfNotPresent
               ports:
               - containerPort: 9080
       ---
       ##################################################################################################
       # Reviews service
       ##################################################################################################
       apiVersion: v1
       kind: Service
       metadata:
         name: reviews
         labels:
           app: reviews
           service: reviews
       spec:
         ports:
         - port: 9080
           name: http
         selector:
           app: reviews
       ---
       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: bookinfo-reviews
         labels:
           account: reviews
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: reviews-v1
         labels:
           app: reviews
           version: v1
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: reviews
             version: v1
         template:
           metadata:
             labels:
               app: reviews
               version: v1
           spec:
             serviceAccountName: bookinfo-reviews
             containers:
             - name: reviews
               image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/examples-bookinfo-reviews-v1:1.20.1
               imagePullPolicy: IfNotPresent
               env:
               - name: LOG_DIR
                 value: "/tmp/logs"
               ports:
               - containerPort: 9080
               volumeMounts:
               - name: tmp
                 mountPath: /tmp
               - name: wlp-output
                 mountPath: /opt/ibm/wlp/output
             volumes:
             - name: wlp-output
               emptyDir: {}
             - name: tmp
               emptyDir: {}
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: reviews-v2
         labels:
           app: reviews
           version: v2
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: reviews
             version: v2
         template:
           metadata:
             labels:
               app: reviews
               version: v2
           spec:
             serviceAccountName: bookinfo-reviews
             containers:
             - name: reviews
               image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/examples-bookinfo-reviews-v2:1.20.1
               imagePullPolicy: IfNotPresent
               env:
               - name: LOG_DIR
                 value: "/tmp/logs"
               ports:
               - containerPort: 9080
               volumeMounts:
               - name: tmp
                 mountPath: /tmp
               - name: wlp-output
                 mountPath: /opt/ibm/wlp/output
             volumes:
             - name: wlp-output
               emptyDir: {}
             - name: tmp
               emptyDir: {}
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: reviews-v3
         labels:
           app: reviews
           version: v3
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: reviews
             version: v3
         template:
           metadata:
             labels:
               app: reviews
               version: v3
           spec:
             serviceAccountName: bookinfo-reviews
             containers:
             - name: reviews
               image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/examples-bookinfo-reviews-v3:1.20.1
               imagePullPolicy: IfNotPresent
               env:
               - name: LOG_DIR
                 value: "/tmp/logs"
               ports:
               - containerPort: 9080
               volumeMounts:
               - name: tmp
                 mountPath: /tmp
               - name: wlp-output
                 mountPath: /opt/ibm/wlp/output
             volumes:
             - name: wlp-output
               emptyDir: {}
             - name: tmp
               emptyDir: {}
       ---
       ##################################################################################################
       # Productpage services
       ##################################################################################################
       apiVersion: v1
       kind: Service
       metadata:
         name: productpage
         labels:
           app: productpage
           service: productpage
       spec:
         ports:
         - port: 9080
           name: http
         selector:
           app: productpage
       ---
       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: bookinfo-productpage
         labels:
           account: productpage
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: productpage-v1
         labels:
           app: productpage
           version: v1
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: productpage
             version: v1
         template:
           metadata:
             annotations:
               prometheus.io/scrape: "true"
               prometheus.io/port: "9080"
               prometheus.io/path: "/metrics"
             labels:
               app: productpage
               version: v1
           spec:
             serviceAccountName: bookinfo-productpage
             containers:
             - name: productpage
               image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/examples-bookinfo-productpage-v1:1.20.1
               imagePullPolicy: IfNotPresent
               ports:
               - containerPort: 9080
               volumeMounts:
               - name: tmp
                 mountPath: /tmp
             volumes:
             - name: tmp
               emptyDir: {}
       ---
  2. Deploy the Bookinfo application to the data-plane cluster:

       kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f bookinfo.yaml
  3. Create a file named sleep.yaml and copy the following content to it.

       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: sleep
       ---
       apiVersion: v1
       kind: Service
       metadata:
         name: sleep
         labels:
           app: sleep
           service: sleep
       spec:
         ports:
         - port: 80
           name: http
         selector:
           app: sleep
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: sleep
       spec:
         replicas: 1
         selector:
           matchLabels:
             app: sleep
         template:
           metadata:
             labels:
               app: sleep
           spec:
             terminationGracePeriodSeconds: 0
             serviceAccountName: sleep
             containers:
             - name: sleep
               image: registry.cn-hangzhou.aliyuncs.com/acs/curl:8.1.2
               command: ["/bin/sleep", "infinity"]
               imagePullPolicy: IfNotPresent
               volumeMounts:
               - mountPath: /etc/sleep/tls
                 name: secret-volume
             volumes:
             - name: secret-volume
               secret:
                 secretName: sleep-secret
                 optional: true
       ---
  4. Deploy the sleep application:

       kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f sleep.yaml

Generate test traffic

From the sleep pod, send 100 requests to the Bookinfo productpage:

kubectl exec -it deploy/sleep -- sh -c 'for i in $(seq 1 100); do curl -s productpage:9080/productpage > /dev/null; done'
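The number of traces to expect from this test scales with the sampling percentage configured earlier: with 100 requests and a sampling percentage of 100, every request should produce a trace. A quick sanity calculation, using the values from this walkthrough:

```shell
REQUESTS=100
SAMPLING_PERCENTAGE=100   # the value configured in the ASM console above
EXPECTED_TRACES=$(( REQUESTS * SAMPLING_PERCENTAGE / 100 ))
echo "$EXPECTED_TRACES"   # prints 100
```

At a lower sampling percentage, only the proportional fraction of requests appears in the backend, so a small trace count does not necessarily indicate a misconfiguration.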

View traces in Managed Service for OpenTelemetry

  1. Log on to the Managed Service for OpenTelemetry console.

  2. In the left-side navigation pane, click Application List. The Bookinfo services appear as separate applications, each showing incoming and outgoing spans.

    Application List in Managed Service for OpenTelemetry

See also