
Alibaba Cloud Service Mesh:Configure buckets for histogram metrics

Last Updated:Oct 14, 2024

Service Mesh (ASM) can collect different types of Prometheus metrics, such as histograms and counters. The histogram is an important metric type in Prometheus. Histogram metrics collect and analyze the distribution of data, and are typically used to measure values such as request durations and response sizes. A histogram records a set of observed values together with their distribution: in addition to a total count and a total sum, it lets you define multiple buckets that count the observations falling into different value ranges. This topic describes how to configure buckets for histogram metrics in ASM.
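The cumulative bucket semantics that Prometheus histograms use can be sketched in a few lines of Python. This is an illustration only, not ASM code; the bucket bounds and the sample durations below are made up for the example:

```python
# Minimal sketch of Prometheus-style cumulative histogram buckets.
# Bucket bounds and observations are illustrative, not from ASM.
import bisect

def observe_all(bounds, observations):
    """Return cumulative counts: counts[i] = number of observations <= bounds[i].

    Mirrors the `le` (less-than-or-equal) semantics of Prometheus
    *_bucket series; the implicit +Inf bucket equals the total count.
    """
    counts = [0] * len(bounds)
    for value in observations:
        # Index of the first bucket whose upper bound is >= value;
        # that bucket and every larger one include this observation.
        i = bisect.bisect_left(bounds, value)
        for j in range(i, len(bounds)):
            counts[j] += 1
    return counts

bounds = [1, 5, 10]                 # a bucket list like the [1,5,10] used in this topic
durations_ms = [0.4, 3.2, 7.9, 42]  # hypothetical request durations in milliseconds
print(observe_all(bounds, durations_ms))  # -> [1, 2, 3]; the implicit +Inf bucket is 4
```

The value 42 falls outside every explicit bucket and only shows up in the implicit +Inf bucket, which is why the +Inf count in Prometheus output always equals the total number of observations.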

Prerequisites

A cluster is created and added to an ASM instance of V1.19 or later. For more information, see Add a cluster to an ASM instance.

Configure metric buckets by using annotations

ASM allows you to configure buckets for the metrics of a pod. To configure buckets for specific histogram metrics, add the sidecar.istio.io/statsHistogramBuckets annotation to the application pods.

You can add annotations to configure buckets for the following histogram metrics.

Istio metrics:

  • istiocustom.istio_request_duration_milliseconds

  • istiocustom.istio_request_bytes

  • istiocustom.istio_response_bytes

Envoy metrics:

  • cluster_manager

  • listener_manager

  • server

  • cluster.xds-grpc

For more information about the preceding metrics, see Istio Standard Metrics and Envoy Statistics.

The following example demonstrates how to configure buckets for the Istio histogram metrics (istiocustom) and the Envoy metric cluster.xds-grpc on an application pod, and set the buckets to [1,5,10]. Note that the annotation value is itself a JSON string, so the quotation marks inside it must be escaped.

kubectl patch pod <POD_NAME> -p '{"metadata":{"annotations":{"sidecar.istio.io/statsHistogramBuckets":"{\"istiocustom\":[1,5,10],\"cluster.xds-grpc\":[1,5,10]}"}}}'
Important

Istio matches metrics by prefix. For example, the configuration of istiocustom applies to all Istio histogram metrics.
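Because Kubernetes annotation values must be plain strings, the bucket configuration is serialized twice: once for the bucket map itself and once for the patch body. As a sketch (the bucket values are placeholders), Python's json module can generate the escaped patch instead of escaping the quotation marks by hand:

```python
import json

# Bucket map: metric-name prefix -> bucket upper bounds (illustrative values).
buckets = {"istiocustom": [1, 5, 10], "cluster.xds-grpc": [1, 5, 10]}

# The annotation value must be a string, so serialize the bucket map first...
patch = {
    "metadata": {
        "annotations": {
            "sidecar.istio.io/statsHistogramBuckets": json.dumps(buckets)
        }
    }
}

# ...then serialize the whole patch for use with `kubectl patch ... -p`.
print(json.dumps(patch))
```

The printed string can be passed directly to the -p flag of kubectl patch, which avoids hand-counting backslashes in the nested JSON.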

Demo

The following example demonstrates how to modify the buckets of the xds-grpc metric of Envoy by adding annotations.

Deploy a sample application

  1. Create an HTTPBin application with the following content. For more information, see Deploy the HTTPBin application.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          containers:
          - image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/httpbin:0.1.0
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80
  2. Run the following command to view the status of the HTTPBin application:

    kubectl get pod

    Expected output:

    NAME                      READY   STATUS    RESTARTS   AGE
    httpbin-fd686xxxx         2/2     Running   0          2m16s

View and modify the buckets of the xds-grpc metric

  1. Run the following command to view the buckets of the metric of the HTTPBin application:

    kubectl exec -it httpbin-fd686xxxx -c istio-proxy -- curl localhost:15000/stats/prometheus | grep envoy_cluster_upstream_cx_connect_ms_bucket

    Expected output:

    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="0.5"} 10
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1"} 10
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="25"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="50"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="100"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="250"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="500"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="2500"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="30000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="60000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="300000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="600000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1800000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="3600000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="+Inf"} 11

    The output indicates that the default buckets of the xds-grpc metric are [0.5,1,5,10,25,50,100,250,500,1000,2500,5000,10000,30000,60000,300000,600000,1800000,3600000].

  2. Run the following command to patch the Deployment and modify the buckets of the xds-grpc metric. Updating the pod template triggers a rolling restart, so a new pod is created with the annotation:

    kubectl patch deployment httpbin -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/statsHistogramBuckets":"{\"cluster.xds-grpc\":[1,5,10,25,50,100,250,500,1000,2500,5000,10000]}"}}}}}'
  3. Run the following command to query the status of the pod:

    kubectl get pod

    Expected output:

    NAME                       READY   STATUS    RESTARTS   AGE
    httpbin-85b555xxxx-xxxxx   2/2     Running   0          2m2s
  4. Run the following command to view the buckets of the metric for the HTTPBin application:

    kubectl exec -it httpbin-85b555xxxx-xxxxx -c istio-proxy -- curl localhost:15000/stats/prometheus | grep envoy_cluster_upstream_cx_connect_ms_bucket

    Expected output:

    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1"} 0
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="25"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="50"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="100"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="250"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="500"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1000"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="2500"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5000"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10000"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="+Inf"} 1

    The output indicates that the buckets of the xds-grpc metric have been changed to [1,5,10,25,50,100,250,500,1000,2500,5000,10000].
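Because the le buckets in the output above are cumulative, the number of observations that fall inside each individual range can be recovered by differencing adjacent bucket counts. A small sketch using the counts reported after the change (here the single recorded connection lands between 1 ms and 5 ms):

```python
# Cumulative (le -> count) pairs, as reported in the output above.
cumulative = [(1, 0), (5, 1), (10, 1), (25, 1), (50, 1), (100, 1),
              (250, 1), (500, 1), (1000, 1), (2500, 1), (5000, 1),
              (10000, 1), (float("inf"), 1)]

def per_bucket(cum):
    """Difference adjacent cumulative counts to get per-range counts."""
    out = []
    prev = 0
    for le, count in cum:
        out.append((le, count - prev))
        prev = count
    return out

# Only the (1, 5] range has a nonzero count:
print([(le, n) for le, n in per_bucket(cumulative) if n > 0])  # -> [(5, 1)]
```

This differencing is the same operation that PromQL performs internally when estimating quantiles from *_bucket series, which is why bucket boundaries should be chosen close to the value ranges you care about.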