
Alibaba Cloud Service Mesh:Use Mixerless Telemetry to observe ASM instances

Last Updated:Mar 04, 2024

When you need to observe internal communication in a Service Mesh (ASM) instance and optimize monitoring policies, you can use Mixerless Telemetry to collect telemetry data and monitor application containers in a non-intrusive manner. You can use Managed Service for Prometheus or a self-managed Prometheus system to collect metrics such as request rates, error rates, and latency, and use them to monitor service performance. The following example uses a self-managed Prometheus system to demonstrate how to configure Prometheus and collect metrics. These metrics help you detect and resolve service issues in a timely manner, thereby improving system stability and reliability.

Prerequisites

The cluster is added to the ASM instance.

Step 1: Install Prometheus

  1. Download and decompress the installation package of Istio. To download the installation package of Istio, see Download Istio.

  2. Use kubectl to connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  3. Run the following command to install Prometheus:

    kubectl --kubeconfig <Path of the kubeconfig file> apply -f <Path to which the installation package of Istio is decompressed>/samples/addons/prometheus.yaml
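    After the installation completes, you can verify that the Prometheus workload is up before you proceed. The following is a minimal check, assuming the default `istio-system` namespace and the `prometheus` Deployment name used by the Istio add-on manifest:

    ```shell
    # Wait until the Prometheus Deployment created by the add-on manifest is available.
    kubectl --kubeconfig <Path of the kubeconfig file> -n istio-system \
      rollout status deployment/prometheus --timeout=120s

    # Confirm that the Prometheus pod is running.
    kubectl --kubeconfig <Path of the kubeconfig file> -n istio-system \
      get pods -l app=prometheus
    ```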

Step 2: Enable metrics collection in ASM

Note

If the version of your ASM is earlier than 1.17.2.35, perform this step. If the version of your ASM is 1.17.2.35 or later, skip this step.

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Monitoring indicators.

  3. On the Monitoring indicators page, select Integrate the self-built Prometheus to achieve metrics monitoring and confirm that the relevant parameters are configured as described in the corresponding documentation. Then, click Collect Metrics to Managed Service for Prometheus. In the Submit message, click OK.

    For more information about how to integrate a self-managed Prometheus system to monitor an ASM instance, see Monitor ASM instances by using a self-managed Prometheus instance.

Step 3: Configure Prometheus

  1. Configure the metrics of Istio.

    1. Log on to the ACK console.

    2. In the left-side navigation pane of the ACK console, click Clusters.

    3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.

    4. In the left-side navigation pane of the details page, choose Configurations > ConfigMaps.

    5. In the upper part of the ConfigMap page, select istio-system from the Namespace drop-down list. Find the item that is named prometheus and click Edit in the Actions column.

    6. In the Edit panel, enter configuration information in the Value column and click OK. To obtain the configuration information, visit GitHub.

  2. Delete the pod of Prometheus to make Prometheus configurations take effect.

    1. Log on to the ACK console.

    2. In the left-side navigation pane of the ACK console, click Clusters.

    3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.

    4. In the left-side navigation pane of the details page, choose Workloads > Pods.

    5. On the Pods page, find the pod whose name contains Prometheus and choose More > Delete in the Actions column.

    6. In the Note message, click OK.

  3. Run the following command to view the job_name fields in the Prometheus configurations:

    kubectl --kubeconfig <Path of the kubeconfig file> get cm prometheus -n istio-system -o jsonpath={.data.prometheus\\.yml} | grep job_name

    Expected output:

    - job_name: 'istio-mesh'
    - job_name: 'envoy-stats'
    - job_name: 'istio-policy'
    - job_name: 'istio-telemetry'
    - job_name: 'pilot'
    - job_name: 'sidecar-injector'
    - job_name: prometheus
    - job_name: kubernetes-apiservers
    - job_name: kubernetes-nodes
    - job_name: kubernetes-nodes-cadvisor
    - job_name: kubernetes-service-endpoints
    - job_name: kubernetes-service-endpoints-slow
    - job_name: prometheus-pushgateway
    - job_name: kubernetes-services
    - job_name: kubernetes-pods
    - job_name: kubernetes-pods-slow
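    You can also confirm that the scrape jobs are active, not just configured. The following sketch forwards the Prometheus port locally and lists the job names reported by the Prometheus HTTP targets API (the service name `prometheus` and port 9090 are assumed from the add-on manifest):

    ```shell
    # Forward the Prometheus UI port to localhost in the background.
    kubectl --kubeconfig <Path of the kubeconfig file> -n istio-system \
      port-forward svc/prometheus 9090:9090 &
    sleep 2

    # Query the active scrape targets and list their unique job names.
    curl -s localhost:9090/api/v1/targets | grep -o '"job":"[^"]*"' | sort -u
    ```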

Step 4: Generate metric data

  1. Deploy the podinfo application in the ACK cluster.

    1. Download the required YAML files of the podinfo application. For more information, visit GitHub.

    2. Run the following commands to deploy the podinfo application in the ACK cluster:

      kubectl --kubeconfig <Path of the kubeconfig file> apply -f <Path of the podinfo application>/kustomize/deployment.yaml -n test
      kubectl --kubeconfig <Path of the kubeconfig file> apply -f <Path of the podinfo application>/kustomize/service.yaml -n test
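      The commands above assume that the test namespace already exists and has sidecar injection enabled; otherwise no Envoy container is injected and no mesh metrics are produced. A minimal sketch, using the standard Istio injection label:

      ```shell
      # Create the namespace and enable automatic sidecar injection for it.
      kubectl --kubeconfig <Path of the kubeconfig file> create namespace test
      kubectl --kubeconfig <Path of the kubeconfig file> label namespace test istio-injection=enabled

      # After deployment, wait until the podinfo pods are ready. Each pod should
      # report 2/2 containers: podinfod plus the istio-proxy sidecar.
      kubectl --kubeconfig <Path of the kubeconfig file> -n test \
        wait pod -l app=podinfo --for=condition=Ready --timeout=120s
      ```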
  2. Run the following command to request the podinfo application to generate metric data:

    podinfo_pod=$(kubectl --kubeconfig <Path of the kubeconfig file> get po -n test -l app=podinfo -o jsonpath={.items..metadata.name})
    for i in {1..10}; do
      kubectl --kubeconfig <Path of the kubeconfig file> exec $podinfo_pod -c podinfod -n test -- curl -s podinfo:9898/version
      echo
    done
  3. Check whether metric data is generated in the Envoy container.

    1. Run the following command to request Envoy to check whether the data of the istio_requests_total metric is generated:

      kubectl --kubeconfig <Path of the kubeconfig file> exec $podinfo_pod -n test -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_requests_total

      Expected output:

      :::: istio_requests_total ::::
      # TYPE istio_requests_total counter
      istio_requests_total{response_code="200",reporter="destination",source_workload="podinfo",source_workload_namespace="test",source_principal="spiffe://cluster.local/ns/test/sa/default",source_app="podinfo",source_version="unknown",source_cluster="c199d81d4e3104a5d90254b2a210914c8",destination_workload="podinfo",destination_workload_namespace="test",destination_principal="spiffe://cluster.local/ns/test/sa/default",destination_app="podinfo",destination_version="unknown",destination_service="podinfo.test.svc.cluster.local",destination_service_name="podinfo",destination_service_namespace="test",destination_cluster="c199d81d4e3104a5d90254b2a210914c8",request_protocol="http",response_flags="-",grpc_response_status="",connection_security_policy="mutual_tls",source_canonical_service="podinfo",destination_canonical_service="podinfo",source_canonical_revision="latest",destination_canonical_revision="latest"} 10
      
      istio_requests_total{response_code="200",reporter="source",source_workload="podinfo",source_workload_namespace="test",source_principal="spiffe://cluster.local/ns/test/sa/default",source_app="podinfo",source_version="unknown",source_cluster="c199d81d4e3104a5d90254b2a210914c8",destination_workload="podinfo",destination_workload_namespace="test",destination_principal="spiffe://cluster.local/ns/test/sa/default",destination_app="podinfo",destination_version="unknown",destination_service="podinfo.test.svc.cluster.local",destination_service_name="podinfo",destination_service_namespace="test",destination_cluster="c199d81d4e3104a5d90254b2a210914c8",request_protocol="http",response_flags="-",grpc_response_status="",connection_security_policy="unknown",source_canonical_service="podinfo",destination_canonical_service="podinfo",source_canonical_revision="latest",destination_canonical_revision="latest"} 10
    2. Run the following command to request Envoy to check whether the data of the istio_request_duration metric is generated:

      kubectl --kubeconfig <Path of the kubeconfig file> exec $podinfo_pod -n test -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_request_duration

      Expected output:

      :::: istio_request_duration ::::
      # TYPE istio_request_duration_milliseconds histogram
      istio_request_duration_milliseconds_bucket{response_code="200",reporter="destination",source_workload="podinfo",source_workload_namespace="test",source_principal="spiffe://cluster.local/ns/test/sa/default",source_app="podinfo",source_version="unknown",source_cluster="c199d81d4e3104a5d90254b2a210914c8",destination_workload="podinfo",destination_workload_namespace="test",destination_principal="spiffe://cluster.local/ns/test/sa/default",destination_app="podinfo",destination_version="unknown",destination_service="podinfo.test.svc.cluster.local",destination_service_name="podinfo",destination_service_namespace="test",destination_cluster="c199d81d4e3104a5d90254b2a210914c8",request_protocol="http",response_flags="-",grpc_response_status="",connection_security_policy="mutual_tls",source_canonical_service="podinfo",destination_canonical_service="podinfo",source_canonical_revision="latest",destination_canonical_revision="latest",le="0.5"} 10
      
      istio_request_duration_milliseconds_bucket{response_code="200",reporter="destination",source_workload="podinfo",source_workload_namespace="test",source_principal="spiffe://cluster.local/ns/test/sa/default",source_app="podinfo",source_version="unknown",source_cluster="c199d81d4e3104a5d90254b2a210914c8",destination_workload="podinfo",destination_workload_namespace="test",destination_principal="spiffe://cluster.local/ns/test/sa/default",destination_app="podinfo",destination_version="unknown",destination_service="podinfo.test.svc.cluster.local",destination_service_name="podinfo",destination_service_namespace="test",destination_cluster="c199d81d4e3104a5d90254b2a210914c8",request_protocol="http",response_flags="-",grpc_response_status="",connection_security_policy="mutual_tls",source_canonical_service="podinfo",destination_canonical_service="podinfo",source_canonical_revision="latest",destination_canonical_revision="latest",le="1"} 10
      ...
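      The raw Prometheus exposition output above can be reduced to just the counter values with standard text tools. The following sketch, reusing the `$podinfo_pod` variable from the previous step, prints the value of each istio_requests_total series; after the ten requests above, both the source-reported and destination-reported series should read 10:

      ```shell
      # Print only the sample values of the istio_requests_total series.
      kubectl --kubeconfig <Path of the kubeconfig file> exec $podinfo_pod -n test -c istio-proxy -- \
        curl -s localhost:15090/stats/prometheus \
        | awk '/^istio_requests_total\{/ {print $NF}'
      ```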

Verify the result

  1. Expose Prometheus by using a Classic Load Balancer (CLB) instance. For more information, see Use Services to expose applications.
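  One way to perform this step, assuming the add-on's `prometheus` Service in the `istio-system` namespace, is to patch the Service type to LoadBalancer so that ACK provisions a CLB instance for it:

  ```shell
  # Change the Service type so that the cloud controller provisions a CLB
  # with an external IP address.
  kubectl --kubeconfig <Path of the kubeconfig file> -n istio-system \
    patch svc prometheus -p '{"spec":{"type":"LoadBalancer"}}'

  # The EXTERNAL-IP column is populated once the CLB instance is ready.
  kubectl --kubeconfig <Path of the kubeconfig file> -n istio-system get svc prometheus
  ```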

  2. Log on to the ACK console.

  3. In the left-side navigation pane of the ACK console, click Clusters.

  4. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.

  5. In the left-side navigation pane of the details page, choose Network > Services.

  6. On the Services page, find the service whose name contains Prometheus and click the IP address in the External IP column.

  7. On the Prometheus page, enter istio_requests_total in the search box and click Execute.

    The following figure shows that application metrics are collected by Prometheus.
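  The same check can be scripted against the Prometheus HTTP API instead of the web UI. A sketch, assuming `<Prometheus external IP>` is the CLB address from the previous step; the rate query and its 5-minute window are illustrative:

  ```shell
  # Instant query for the raw istio_requests_total counter.
  curl -s 'http://<Prometheus external IP>:9090/api/v1/query?query=istio_requests_total'

  # Per-second request rate to the podinfo service over the last 5 minutes.
  curl -s 'http://<Prometheus external IP>:9090/api/v1/query' \
    --data-urlencode 'query=rate(istio_requests_total{destination_service_name="podinfo"}[5m])'
  ```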