If your Service Mesh (ASM) instance is earlier than v1.18.0.124, you can export the tracing data only to a self-managed system that is compatible with Zipkin. If your ASM instance is v1.18.0.124 or later, you can export the tracing data only to Managed Service for OpenTelemetry. This topic describes how to export ASM tracing data to a self-managed system that is compatible with Zipkin or Managed Service for OpenTelemetry.
Prerequisites
A self-managed tracing system that is compatible with Zipkin is built and listens on port 9411. If you use Jaeger, a Zipkin collector must be deployed.
The self-managed system is deployed in a cluster on the data plane.
An ASM instance is created, and a Kubernetes cluster is added to the ASM instance. For more information, see Add a cluster to an ASM instance.
An ingress gateway is created in the ASM instance. For more information, see Create an ingress gateway.
Procedure
For an ASM instance of a version earlier than 1.18.0.124
Step 1: Enable the tracing feature for the ASM instance
If the ASM instance is earlier than 1.17.2.28, enable the tracing feature by performing the following operations: on the Base Information page of the ASM instance, click Settings. On the page that appears, select Enable Tracing Analysis and click OK to save the configuration.
If the ASM instance is 1.17.2.28 or later, you can enable the tracing feature by referring to Description of Tracing Analysis Settings.
Step 2: Deploy Zipkin in the Kubernetes cluster on the data plane
Use the following content to create a zipkin-server.yaml file:
Note: If you use a custom YAML file to deploy Zipkin, make sure that Zipkin is deployed in the istio-system namespace.
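The content of zipkin-server.yaml is not reproduced in this topic. A minimal sketch might look like the following, assuming the stock openzipkin/zipkin image with in-memory storage; the image tag, labels, and resource settings are illustrative, not the exact manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin-server
  namespace: istio-system     # Zipkin must be deployed in istio-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
      - name: zipkin
        image: openzipkin/zipkin:latest   # illustrative; pin a specific version in production
        env:
        - name: STORAGE_TYPE
          value: mem                      # in-memory storage, suitable for testing only
        ports:
        - containerPort: 9411             # default Zipkin collector/UI port
```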
Run the following command to deploy Zipkin in the Kubernetes cluster on the data plane:
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-server.yaml
Note: If you use the sample code in this topic, replace ${DATA_PLANE_KUBECONFIG} in the command with the path to the kubeconfig file of the Kubernetes cluster on the data plane, and replace ${ASM_KUBECONFIG} with the path to the kubeconfig file of the ASM instance.
After the deployment is complete, verify that the pod in which the Zipkin server is deployed starts properly.
Step 3: Create a service to expose the Zipkin server
Create a service named zipkin in the istio-system namespace to receive ASM tracing data. The name of the service must be zipkin.
In this example, the zipkin-svc-expose-public.yaml file is used to expose the Zipkin server to the Internet so that you can view tracing data conveniently.
Use the following code based on your business requirements to create a YAML file.
To expose the Zipkin server to the Internet, use the zipkin-svc-expose-public.yaml file with the following content:
Otherwise, use the zipkin-svc.yaml file.
Note: If you use a custom YAML file to deploy the zipkin service, make sure that this service is deployed in the istio-system namespace.
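The two Service manifests referenced above are sketched below. The selector label is an assumption and must match the labels on your Zipkin pods; only the service type differs between the two files:

```yaml
# zipkin-svc.yaml: expose the Zipkin server to the internal network only.
apiVersion: v1
kind: Service
metadata:
  name: zipkin              # the service name must be zipkin
  namespace: istio-system   # the service must be in istio-system
spec:
  type: ClusterIP           # in zipkin-svc-expose-public.yaml, use type: LoadBalancer instead
  selector:
    app: zipkin             # assumed to match the Zipkin pod labels
  ports:
  - name: http-zipkin
    port: 9411
    targetPort: 9411
```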
Run the following commands to deploy the zipkin service in the Kubernetes cluster on the data plane:
# Deploy the zipkin service to expose the Zipkin server to the internal network.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-svc.yaml
# Deploy the zipkin service to expose the Zipkin server to the Internet.
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f zipkin-svc-expose-public.yaml
Step 4: Deploy the Bookinfo application
Run the following command to deploy the Bookinfo application to the Kubernetes cluster on the data plane:
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} apply -f bookinfo.yaml
Run the following command on the kubectl client to deploy virtual services for Bookinfo:
kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f virtual-service-all-v1.yaml
Run the following command on the kubectl client to deploy destination rules for Bookinfo:
kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f destination-rule-all.yaml
Run the following command on the kubectl client to deploy a gateway for Bookinfo:
kubectl --kubeconfig=${ASM_KUBECONFIG} apply -f bookinfo-gateway.yaml
Step 5: Generate tracing data
Run the following command to query the IP address of the ingress gateway:
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc -n istio-system | grep ingressgateway | awk -F ' ' '{print $4}'
Enter <IP address of the ingress gateway>/productpage in the address bar of your browser to access Bookinfo.
Step 6: View tracing data
Run the following command to obtain the address of the zipkin service:
kubectl --kubeconfig=${DATA_PLANE_KUBECONFIG} get svc -n istio-system | grep zipkin | awk -F ' ' '{print $4}'
Enter <IP address of the zipkin service>:9411 in the address bar of your browser to access the Zipkin console and view tracing data.
For an ASM instance of a version of 1.18.0.124 or later
Step 1: Deploy Zipkin
Run the following command to create a namespace named zipkin to deploy Zipkin:
kubectl create namespace zipkin
Run the following helm command to install Zipkin:
helm install --namespace zipkin my-zipkin carlosjgp/zipkin --version 0.2.0
Run the following command to check whether Zipkin is running properly:
kubectl -n zipkin get pods
Expected output:
NAME                                   READY   STATUS    RESTARTS   AGE
my-zipkin-collector-79c6dc9cd7-jmswm   1/1     Running   0          29m
my-zipkin-ui-64c97b4d6c-f742j          1/1     Running   0          29m
Step 2: Deploy the OpenTelemetry Operator
Run the following command to create the opentelemetry-operator-system namespace:
kubectl create namespace opentelemetry-operator-system
Run the following commands to use Helm to install the OpenTelemetry Operator in the opentelemetry-operator-system namespace:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install --namespace=opentelemetry-operator-system --set admissionWebhooks.certManager.enabled=false --set admissionWebhooks.certManager.autoGenerateCert=true opentelemetry-operator open-telemetry/opentelemetry-operator
Run the following command to check whether the OpenTelemetry Operator works properly:
kubectl get pod -n opentelemetry-operator-system
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          1m
The output shows that the pod is in the Running state, which indicates that the OpenTelemetry Operator works properly.
Step 3: Create an OpenTelemetry Collector
Create a collector.yaml file that contains the following content.
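The collector.yaml content is not included in this topic. A hedged sketch of an OpenTelemetryCollector resource is shown below, assuming an OTLP gRPC receiver on port 4317 (consistent with the service output later in this step), a debug exporter (whose output is checked in Step 5), and an OTLP exporter that sends data to Managed Service for OpenTelemetry. The receiver choice, the Authentication header name, and the pipeline layout are assumptions; verify them against the official configuration:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: default
  namespace: opentelemetry-operator-system
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317     # matches the 4317/TCP service port shown below
    exporters:
      debug:                           # prints received spans to the Collector logs (see Step 5)
      otlp:
        endpoint: ${ENDPOINT}          # VPC endpoint that supports gRPC
        headers:
          Authentication: ${TOKEN}     # authentication token; the header name is an assumption
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug, otlp]
```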
Replace ${ENDPOINT} in the YAML file with a virtual private cloud (VPC) endpoint that supports the gRPC protocol, and replace ${TOKEN} with the authentication token. For more information about how to obtain the endpoints supported by Managed Service for OpenTelemetry and authentication tokens, see Connect to Managed Service for OpenTelemetry and authenticate clients.
Use kubectl to connect to the Container Service for Kubernetes (ACK) cluster based on the information in the kubeconfig file, and then run the following command to deploy the OpenTelemetry Collector in the cluster:
kubectl apply -f collector.yaml
Run the following command to check whether the OpenTelemetry Collector is started:
kubectl get pod -n opentelemetry-operator-system
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
opentelemetry-operator-854fb558b5-pvllj   2/2     Running   0          3m
default-collector-5cbb4497f4-2hjqv        1/1     Running   0          30s
The output indicates that the OpenTelemetry Collector starts normally.
Run the following command to check whether a service is created for the OpenTelemetry Collector:
kubectl get svc -n opentelemetry-operator-system
Expected output:
opentelemetry-operator           ClusterIP   172.16.138.165   <none>   8443/TCP,8080/TCP   3m
opentelemetry-operator-webhook   ClusterIP   172.16.127.0     <none>   443/TCP             3m
default-collector                ClusterIP   172.16.145.93    <none>   4317/TCP            30s
default-collector-headless       ClusterIP   None             <none>   4317/TCP            30s
default-collector-monitoring     ClusterIP   172.16.136.5     <none>   8888/TCP            30s
The output indicates that a service is created for the OpenTelemetry Collector.
Step 4: Deploy test applications
Deploy the Bookinfo and sleep applications. For more information, see Deploy an application in an ASM instance.
Step 5: Access productpage and view the tracing data
Run the following command to access the productpage application:
kubectl exec -it deploy/sleep -c sleep -- curl productpage:9080/productpage?u=normal
After the access succeeds, view the logs of the OpenTelemetry Collector and check the output printed by the debug exporter. You should see a log entry similar to the following:
2023-11-20T08:44:27.531Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 3}
Step 6: Configure an ASM gateway and view the tracing data in the Zipkin service
Create an Istio gateway.
Use the following content to create an ingressgateway.yaml file:
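The ingressgateway.yaml content is omitted in this topic. A sketch is shown below, assuming the default ASM ingress gateway selector (istio: ingressgateway) and the Zipkin UI service created by the Helm chart in Step 1 (my-zipkin-ui in the zipkin namespace); the service name and port are assumptions, so verify both against the services in your cluster:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: zipkin-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway          # assumed label of the ASM ingress gateway
  servers:
  - port:
      number: 80                   # listen on port 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: zipkin-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - zipkin-gateway
  http:
  - route:
    - destination:
        host: my-zipkin-ui.zipkin.svc.cluster.local   # assumed service name from the Helm chart
        port:
          number: 9411             # assumed port; check the Helm-created service
```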
Use kubectl to connect to the ASM instance based on the information in the kubeconfig file. Then, run the following command to enable the ASM gateway to listen on port 80 and configure a route to the Zipkin service:
kubectl apply -f ingressgateway.yaml
Access the Zipkin service by using the IP address of the ASM gateway and view the tracing data.