Alibaba Cloud Service Mesh:Export ASM tracing data to a self-managed system

Last Updated: Apr 09, 2024

If your Service Mesh (ASM) instance is earlier than v1.18.0.124, you can export tracing data directly to a self-managed system that is compatible with Zipkin. If your ASM instance is v1.18.0.124 or later, tracing data is exported over the OpenTelemetry protocol, so you must deploy an OpenTelemetry Collector to receive the data and forward it to the self-managed system. This topic describes how to export ASM tracing data to a self-managed system that is compatible with Zipkin, either directly or through an OpenTelemetry Collector.

Prerequisites

  • A self-managed tracing system that is compatible with Zipkin is deployed, and its Zipkin server listens on port 9411. If you use Jaeger, a Zipkin collector must be deployed.

  • The self-managed system is deployed in a cluster on the data plane.

  • An ASM instance is created, and a Kubernetes cluster is added to the ASM instance. For more information, see Add a cluster to an ASM instance.

  • An ingress gateway is created in the ASM instance. For more information, see Create an ingress gateway.

Procedure

Perform operations based on the version of your ASM instance.

For an ASM instance of v1.18.0.124 or later

Step 1: Deploy Zipkin

  1. Run the following command to create a namespace named zipkin to deploy Zipkin:

    kubectl create namespace zipkin
  2. Run the following command to install Zipkin by using Helm:

    helm install --namespace zipkin my-zipkin carlosjgp/zipkin --version 0.2.0
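
    If the helm install command fails because the carlosjgp repository is unknown to your Helm client, add the chart repository first and retry. The repository URL below is the chart maintainer's public Helm repository and is stated here as an assumption; adjust it if you host the chart elsewhere:

    helm repo add carlosjgp https://carlosjgp.github.io/open-charts/
    helm repo update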
  3. Run the following command to check whether Zipkin is running properly:

    kubectl -n zipkin get pods

    Expected output:

    NAME                                   READY   STATUS    RESTARTS   AGE
    my-zipkin-collector-79c6dc9cd7-jmswm   1/1     Running   0          29m
    my-zipkin-ui-64c97b4d6c-f742j          1/1     Running   0          29m

Step 2: Deploy the OpenTelemetry Operator

  1. Run the following command to create the opentelemetry-operator-system namespace:

    kubectl create namespace opentelemetry-operator-system
  2. Run the following commands to use Helm to install the OpenTelemetry Operator in the opentelemetry-operator-system namespace:

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm install --namespace=opentelemetry-operator-system --set admissionWebhooks.certManager.enabled=false --set admissionWebhooks.certManager.autoGenerateCert=true opentelemetry-operator open-telemetry/opentelemetry-operator
  3. Run the following command to check whether the OpenTelemetry Operator works properly:

    kubectl get pod -n opentelemetry-operator-system

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m

    The output shows that the pod is in the Running state, which means that the OpenTelemetry Operator works properly.

Step 3: Create an OpenTelemetry Collector

  1. Create a collector.yaml file that contains the following content.

    In this example, the OpenTelemetry Collector receives tracing data over the OTLP gRPC protocol and exports it to the Zipkin service deployed in Step 1. If you also want to export tracing data to Managed Service for OpenTelemetry, add an exporter that uses a virtual private cloud (VPC) endpoint supporting the gRPC protocol (${ENDPOINT}) and the authentication token (${TOKEN}). For more information about how to obtain the endpoints supported by Managed Service for OpenTelemetry and authentication tokens, see Connect to Managed Service for OpenTelemetry and authenticate clients.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      labels:
        app.kubernetes.io/managed-by: opentelemetry-operator
      name: default
      namespace: opentelemetry-operator-system
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      config: |
        extensions:
          memory_ballast:
            size_mib: 512
          zpages:
            endpoint: 0.0.0.0:55679
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: "0.0.0.0:4317"
        exporters:
          debug:
          zipkin:
            endpoint: http://my-zipkin-collector.zipkin.svc.cluster.local:9411/api/v2/spans
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: []
              exporters: [zipkin, debug]
      ingress:
        route: {}
      managementState: managed
      mode: deployment
      observability:
        metrics: {}
      podDisruptionBudget:
        maxUnavailable: 1
      replicas: 1
      resources: {}
      targetAllocator:
        prometheusCR:
          scrapeInterval: 30s
        resources: {}
      upgradeStrategy: automatic
  2. Use kubectl to connect to a Container Service for Kubernetes (ACK) cluster based on the information in the kubeconfig file, and then run the following command to deploy the OpenTelemetry Collector in the cluster:

    kubectl apply -f collector.yaml
  3. Run the following command to check whether the OpenTelemetry Collector is started:

    kubectl get pod -n opentelemetry-operator-system

    Expected output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          3m
    default-collector-5cbb4497f4-2hjqv        1/1     Running   0          30s

    The output indicates that the OpenTelemetry Collector starts normally.

  4. Run the following command to check whether a service is created for the OpenTelemetry Collector:

    kubectl get svc -n opentelemetry-operator-system

    Expected output:

    NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    opentelemetry-operator           ClusterIP   172.16.138.165   <none>        8443/TCP,8080/TCP   3m
    opentelemetry-operator-webhook   ClusterIP   172.16.127.0     <none>        443/TCP             3m
    default-collector                ClusterIP   172.16.145.93    <none>        4317/TCP            30s
    default-collector-headless       ClusterIP   None             <none>        4317/TCP            30s
    default-collector-monitoring     ClusterIP   172.16.136.5     <none>        8888/TCP            30s

    The output indicates that a service is created for the OpenTelemetry Collector. The collector's OTLP gRPC receiver is now reachable inside the cluster at default-collector.opentelemetry-operator-system:4317, which matches the receiver defined in collector.yaml.

Step 4: Deploy test applications

Deploy the Bookinfo and sleep applications. For more information, see Deploy an application in an ASM instance.

  • bookinfo.yaml: see the bookinfo.yaml file in Step 4 of the section for ASM instances earlier than v1.18.0.124.

  • sleep.yaml:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sleep
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sleep
      labels:
        app: sleep
        service: sleep
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app: sleep
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          terminationGracePeriodSeconds: 0
          serviceAccountName: sleep
          containers:
          - name: sleep
            image: curlimages/curl:8.1.2 # curlimages/curl is the curl image used by the Istio sleep sample; "curl" alone is not a valid image name
            command: ["/bin/sleep", "infinity"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
          volumes:
          - name: secret-volume
            secret:
              secretName: sleep-secret
              optional: true
    ---
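
After the applications are deployed, you can confirm that the sleep pod is ready before you proceed. A minimal check that uses the app=sleep label from the YAML above:

    kubectl get pod -l app=sleep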

Step 5: Access the productpage application and view the tracing data

  1. Run the following command to access the productpage application:

    kubectl exec -it deploy/sleep -c sleep -- curl productpage:9080/productpage?u=normal
  2. After the access succeeds, view the logs of the OpenTelemetry Collector to check the output printed by the debug exporter.
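
    You can print the collector logs with a command like the following. The Deployment name default-collector matches the pod listing in Step 3:

    kubectl logs -n opentelemetry-operator-system deploy/default-collector

    The debug exporter prints log entries similar to the following: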

    2023-11-20T08:44:27.531Z	info	TracesExporter	{"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 3}

Step 6: Configure an ASM gateway and view the tracing data in the Zipkin service

  1. Create an Istio gateway.

    1. Use the following content to create an ingressgateway.yaml file:

      apiVersion: networking.istio.io/v1beta1
      kind: Gateway
      metadata:
        name: ingressgateway
        namespace: istio-system
      spec:
        selector:
          istio: ingressgateway
        servers:
          - hosts:
              - '*'
            port:
              name: http
              number: 80
              protocol: HTTP
      ---
      apiVersion: networking.istio.io/v1beta1
      kind: VirtualService
      metadata:
        name: ingressgateway
        namespace: istio-system
      spec:
        gateways:
          - ingressgateway
        hosts:
          - '*'
        http:
          - route:
              - destination:
                  host: my-zipkin-collector.zipkin.svc.cluster.local
                  port:
                    number: 9411
      
    2. Use kubectl to connect to the ASM instance based on the information in the kubeconfig file. Then, run the following command to enable the ASM gateway to listen on port 80 and configure a route to the Zipkin service:

      kubectl apply -f ingressgateway.yaml
  2. Access the Zipkin service by using the IP address of the ASM gateway and view the tracing data.
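
    The gateway's IP address can be queried from the data-plane cluster in the same way as in Step 5 of the next section:

    kubectl get svc -n istio-system | grep ingressgateway

    Then enter the IP address in the address bar of your browser. The gateway listens on port 80 and routes requests to the Zipkin service on port 9411, as configured in ingressgateway.yaml.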


For an ASM instance of a version earlier than v1.18.0.124

Step 1: Enable the tracing feature for the ASM instance

  • If the version of the ASM instance is earlier than v1.17.2.28, you can enable the tracing feature by performing the following operations: Log on to the ASM console. On the Base Information page of the ASM instance, click Settings. On the page that appears, select Enable Tracing Analysis, configure the related parameters, and then click OK.

  • If the version of the ASM instance is v1.17.2.28 or later, you can enable the tracing feature by referring to the "Description of Tracing Analysis Settings" section in Configure Observability Settings.

Step 2: Deploy Zipkin in the Kubernetes cluster on the data plane

  1. Use the following content to create a zipkin-server.yaml file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: zipkin-server
      namespace: istio-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: zipkin-server
          component: zipkin
      template:
        metadata:
          labels:
            app: zipkin-server
            component: zipkin
        spec:
          containers:
          - name: zipkin-server
            image: openzipkin/zipkin
            imagePullPolicy: IfNotPresent
            readinessProbe:
              httpGet:
                path: /health
                port: 9411
              initialDelaySeconds: 5
              periodSeconds: 5
    Note

    If you use a custom YAML file to deploy Zipkin, make sure that Zipkin is deployed in the istio-system namespace.

  2. Run the following command to deploy Zipkin in the Kubernetes cluster on the data plane:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-server.yaml
    Note

    If you use the sample code in this topic, replace ${DATA_PLANE_KUBECONFIG} in the command with the path to the kubeconfig file of the Kubernetes cluster on the data plane. In addition, replace ${ASM_KUBECONFIG} with the path to the kubeconfig file of the ASM instance.

  3. After the preceding deployment is complete, verify that the pod that runs the Zipkin server starts properly. You can use the command below.
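
    A minimal check, following the kubeconfig convention used in this topic:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get pods -n istio-system | grep zipkin-server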

Step 3: Create a service to expose the Zipkin server

Create a service that is named zipkin in the istio-system namespace to receive ASM tracing data.

  • To expose the Zipkin server to the Internet, use the zipkin-svc-expose-public.yaml file.

  • Otherwise, use the zipkin-svc.yaml file.

In this example, the zipkin-svc-expose-public.yaml file is used to expose the Zipkin server to the Internet so that you can view tracing data in a convenient manner.

Note

The name of the created service must be zipkin, because the Istio data plane reports tracing data to the address zipkin.istio-system:9411 by default.

  1. Create a YAML file with one of the following code blocks based on your business requirements.

    • To expose the Zipkin server to the Internet, use the zipkin-svc-expose-public.yaml file with the following content:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: tracing
          component: zipkin
        name: zipkin
        namespace: istio-system
      spec:
        ports:
        - name: zipkin
          port: 9411
          protocol: TCP
          targetPort: 9411
        selector:
          app: zipkin-server
          component: zipkin
        type: LoadBalancer
    • If you do not need to expose the Zipkin server to the Internet, use the zipkin-svc.yaml file with the following content:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: tracing
          component: zipkin
        name: zipkin
        namespace: istio-system
      spec:
        ports:
        - name: zipkin
          port: 9411
          protocol: TCP
          targetPort: 9411
        selector:
          app: zipkin-server
          component: zipkin
        type: ClusterIP
    Note

    If you use a custom YAML file to deploy the zipkin service, make sure that this service is deployed in the istio-system namespace.

  2. Run one of the following commands, depending on the file you created, to deploy the zipkin service in the Kubernetes cluster on the data plane:

    # Deploy the zipkin service to expose the Zipkin server to the internal network. 
    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-svc.yaml
    # Deploy the zipkin service to expose the Zipkin server to the Internet. 
    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-svc-expose-public.yaml
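
    After the service is deployed, you can confirm that it is created and, for the LoadBalancer type, that an external IP address is assigned:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc zipkin -n istio-system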

Step 4: Deploy the Bookinfo application

  1. Run the following command to deploy the Bookinfo application to the Kubernetes cluster on the data plane:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f bookinfo.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: details
      labels:
        app: details
        service: details
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: details
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: bookinfo-details
      labels:
        account: details
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: details-v1
      labels:
        app: details
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: details
          version: v1
      template:
        metadata:
          labels:
            app: details
            version: v1
        spec:
          serviceAccountName: bookinfo-details
          containers:
          - name: details
            image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
    ---
    ##################################################################################################
    # Ratings service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: ratings
      labels:
        app: ratings
        service: ratings
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: ratings
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: bookinfo-ratings
      labels:
        account: ratings
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ratings-v1
      labels:
        app: ratings
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ratings
          version: v1
      template:
        metadata:
          labels:
            app: ratings
            version: v1
        spec:
          serviceAccountName: bookinfo-ratings
          containers:
          - name: ratings
            image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
    ---
    ##################################################################################################
    # Reviews service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: reviews
      labels:
        app: reviews
        service: reviews
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: reviews
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: bookinfo-reviews
      labels:
        account: reviews
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: reviews-v1
      labels:
        app: reviews
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: reviews
          version: v1
      template:
        metadata:
          labels:
            app: reviews
            version: v1
        spec:
          serviceAccountName: bookinfo-reviews
          containers:
          - name: reviews
            image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
            imagePullPolicy: IfNotPresent
            env:
            - name: LOG_DIR
              value: "/tmp/logs"
            ports:
            - containerPort: 9080
            volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          volumes:
          - name: wlp-output
            emptyDir: {}
          - name: tmp
            emptyDir: {}
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: reviews-v2
      labels:
        app: reviews
        version: v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: reviews
          version: v2
      template:
        metadata:
          labels:
            app: reviews
            version: v2
        spec:
          serviceAccountName: bookinfo-reviews
          containers:
          - name: reviews
            image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
            imagePullPolicy: IfNotPresent
            env:
            - name: LOG_DIR
              value: "/tmp/logs"
            ports:
            - containerPort: 9080
            volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          volumes:
          - name: wlp-output
            emptyDir: {}
          - name: tmp
            emptyDir: {}
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: reviews-v3
      labels:
        app: reviews
        version: v3
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: reviews
          version: v3
      template:
        metadata:
          labels:
            app: reviews
            version: v3
        spec:
          serviceAccountName: bookinfo-reviews
          containers:
          - name: reviews
            image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
            imagePullPolicy: IfNotPresent
            env:
            - name: LOG_DIR
              value: "/tmp/logs"
            ports:
            - containerPort: 9080
            volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          volumes:
          - name: wlp-output
            emptyDir: {}
          - name: tmp
            emptyDir: {}
    ---
    ##################################################################################################
    # Productpage services
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: productpage
      labels:
        app: productpage
        service: productpage
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: productpage
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: bookinfo-productpage
      labels:
        account: productpage
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: productpage-v1
      labels:
        app: productpage
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: productpage
          version: v1
      template:
        metadata:
          labels:
            app: productpage
            version: v1
        spec:
          serviceAccountName: bookinfo-productpage
          containers:
          - name: productpage
            image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
            volumeMounts:
            - name: tmp
              mountPath: /tmp
          volumes:
          - name: tmp
            emptyDir: {}
    ---
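
    You can then check that all Bookinfo pods reach the Running state. A quick verification, using the same kubeconfig convention as above:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get pods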
  2. Run the following command on the kubectl client to deploy virtual services for Bookinfo:

    kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f virtual-service-all-v1.yaml

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: productpage
    spec:
      hosts:
      - productpage
      http:
      - route:
        - destination:
            host: productpage
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
    spec:
      hosts:
      - ratings
      http:
      - route:
        - destination:
            host: ratings
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: details
    spec:
      hosts:
      - details
      http:
      - route:
        - destination:
            host: details
            subset: v1
    ---
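
    You can verify that the four virtual services are created on the ASM control plane:

    kubectl --kubeconfig=${ASM_KUBECONFIG} get virtualservice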
  3. Run the following command on the kubectl client to deploy destination rules for Bookinfo:

    kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f destination-rule-all.yaml

    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: productpage
    spec:
      host: productpage
      subsets:
      - name: v1
        labels:
          version: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: reviews
    spec:
      host: reviews
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
      - name: v3
        labels:
          version: v3
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: ratings
    spec:
      host: ratings
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
      - name: v2-mysql
        labels:
          version: v2-mysql
      - name: v2-mysql-vm
        labels:
          version: v2-mysql-vm
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: details
    spec:
      host: details
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
    ---
  4. Run the following command on the kubectl client to deploy a gateway for Bookinfo:

    kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f bookinfo-gateway.yaml

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: bookinfo-gateway
    spec:
      selector:
        istio: ingressgateway # use istio default controller
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: bookinfo
    spec:
      hosts:
      - "*"
      gateways:
      - bookinfo-gateway
      http:
      - match:
        - uri:
            exact: /productpage
        - uri:
            prefix: /static
        - uri:
            exact: /login
        - uri:
            exact: /logout
        - uri:
            prefix: /api/v1/products
        route:
        - destination:
            host: productpage
            port:
              number: 9080

Step 5: Generate tracing data

  1. Run the following command to query the IP address of the ingress gateway:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc -n istio-system | grep ingressgateway | awk -F ' ' '{print $4}'
  2. Enter <IP address of the ingress gateway>/productpage in the address bar of your browser to access Bookinfo.
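
    Alternatively, you can generate tracing data from the command line. Replace <INGRESS_GATEWAY_IP> with the address returned by the preceding command (the placeholder name is illustrative):

    curl http://<INGRESS_GATEWAY_IP>/productpage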

Step 6: View tracing data

  1. Run the following command to obtain the address of the zipkin service:

    kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc -n istio-system | grep zipkin | awk -F ' ' '{print $4}'
  2. Enter <IP address of the zipkin service>:9411 in the address bar of your browser to access the Zipkin console. In the Zipkin console, you can view the tracing data.
