
Alibaba Cloud Service Mesh:Configure observability settings

Last Updated: Mar 11, 2026

Service Mesh (ASM) provides granular control over access logs, metrics, and distributed tracing. Configure these settings at three levels -- global, namespace, or workload -- to balance observability coverage against resource consumption.

Prerequisites

An ASM instance of V1.17.2.35 or later is created. For more information, see Create an ASM instance or Update an ASM instance.

Configuration scopes

ASM supports three configuration scopes.

Scope     | Covers                               | Available settings                                  | Limit
Global    | All workloads in the mesh            | Log settings, metric settings, and tracing settings | One global configuration per mesh
Namespace | All workloads in a single namespace  | Log settings and metric settings                    | One configuration per namespace
Custom    | Workloads matched by label selectors | Log settings and metric settings                    | Each workload can be matched by only one custom configuration
Note

Tracing settings are available only at the global scope in the ASM console. Starting from ASM V1.24.6.83, you can modify Telemetry resources through the Kubernetes API to enable namespace-level and workload-level tracing configuration. For more information, see Description of Telemetry fields.
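The Kubernetes API path mentioned in the note can be sketched with an Istio Telemetry resource. The following is a hedged example that assumes the standard telemetry.istio.io/v1alpha1 API; the resource name, namespace, and sampling rate are placeholders, and the exact fields that ASM accepts are listed in Description of Telemetry fields.

```shell
# Hypothetical sketch: enable tracing for all workloads in the "default"
# namespace at a 10% sampling rate. Resource name and values are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: namespace-tracing      # placeholder name
  namespace: default           # applies to this namespace only
spec:
  tracing:
  - randomSamplingPercentage: 10
EOF
```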

Configure global settings

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. On the Observability Settings page, click the Global tab.

  4. Configure log settings, metric settings, and tracing settings, then click Submit.

Configure namespace-level settings

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. On the Observability Settings page, click the Namespace tab, then click Create.

  4. Select the target namespace from the Namespace drop-down list, configure log settings and metric settings, then click Create.

Configure custom settings

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. On the Observability Settings page, click the Custom tab, select the target namespace from the Namespace drop-down list, then click Create.

  4. Set Name and Matching Label to select the target workloads, configure log settings and metric settings, then click Create.

Log settings

Enable or disable access log output

In the Log Settings section, turn on or turn off Enable Log Output.

  • On: Sidecar proxies and gateways on the data plane write access logs to the standard output (stdout) of their containers.

  • Off: Access log output stops, and no logs are written to stdout.

Verify log output

Run the following kubectl command to view the latest access log entry from a sidecar proxy:

kubectl logs <pod-name> -c istio-proxy --tail 1

Replace <pod-name> with the name of the target pod. To find it dynamically:

kubectl logs "$(kubectl get pod -l app=httpbin -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy --tail 1

Sample output:

{
    "authority_for": "47.110.XX.XXX",
    "bytes_received": "0",
    "bytes_sent": "22382",
    "downstream_local_address": "192.168.0.29:80",
    "downstream_remote_address": "221.220.XXX.XXX:0",
    "duration": "80",
    "istio_policy_status": "-",
    "method": "GET",
    "path": "/static/favicon.ico",
    "protocol": "HTTP/1.1",
    "request_id": "0f2cf829-3da5-4810-a618-08d9745d****",
    "requested_server_name": "outbound_.8000_._.httpbin.default.svc.cluster.local",
    "response_code": "200",
    "response_flags": "-",
    "route_name": "default",
    "start_time": "2023-06-30T04:00:36.841Z",
    "trace_id": "-",
    "upstream_cluster": "inbound|80||",
    "upstream_host": "192.168.0.29:80",
    "upstream_local_address": "127.0.X.X:55879",
    "upstream_response_time": "79",
    "upstream_service_time": "79",
    "upstream_transport_failure_reason": "-",
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.X.X Safari/537.36",
    "x_forwarded_for": "221.220.XXX.XXX"
}

To view ingress gateway logs:

kubectl -n istio-system logs <gateway-pod-name> --tail 1

Sample output:

{
    "authority_for": "47.110.XX.XXX",
    "bytes_received": "0",
    "bytes_sent": "22382",
    "downstream_local_address": "192.168.0.63:80",
    "downstream_remote_address": "221.220.XXX.XXX:64284",
    "duration": "81",
    "istio_policy_status": "-",
    "method": "GET",
    "path": "/static/favicon.ico",
    "protocol": "HTTP/1.1",
    "request_id": "0f2cf829-3da5-4810-a618-08d9745d****",
    "requested_server_name": "-",
    "response_code": "200",
    "response_flags": "-",
    "route_name": "httpbin",
    "start_time": "2023-06-30T04:00:36.841Z",
    "trace_id": "-",
    "upstream_cluster": "outbound|8000||httpbin.default.svc.cluster.local",
    "upstream_host": "192.168.0.29:80",
    "upstream_local_address": "192.168.0.63:36140",
    "upstream_response_time": "81",
    "upstream_service_time": "81",
    "upstream_transport_failure_reason": "-",
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.X.X Safari/537.36",
    "x_forwarded_for": "221.220.XXX.XXX"
}

You can also view access logs in the Container Service for Kubernetes (ACK) console:

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster and click its name. In the left-side pane, choose Workloads > Pods.

  3. On the Pods page, click the pod name and then click the Logs tab.

Set the log output format

Note

This feature requires ASM V1.20.6.36 or later. For more information about how to update an ASM instance, see Update an ASM instance.

In the Log Settings section, set Log Output Format to one of the following:

  • JSON: Access logs are output as JSON strings.

  • TEXT: Access logs are output as plain text strings.

Customize log fields

  1. In the Log Settings section, select or clear custom fields, or click the add icon to add new log fields. Log field customization requires Enable Log Output to be turned on, and the default fields in the Log Format section are mandatory and cannot be removed. Each field value can come from a request header, a response header, or an Envoy built-in value.

  2. For example, to add the Accept-Encoding request header to logs, set:

    • accessLogFormat key: accept-encoding

    • Type: Request Properties

    • accessLogFormat value: Accept-Encoding

  3. Verify the custom field appears in the output:

    kubectl logs "$(kubectl get pod -l app=httpbin -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy --tail 1 | grep accept-encoding

    Sample output:

    {
           "bytes_received": "0",
           "bytes_sent": "9593",
           "downstream_local_address": "192.168.0.29:80",
           "downstream_remote_address": "69.164.XXX.XX:0",
           "duration": "2",
           "istio_policy_status": "-",
           "method": "GET",
           "path": "/",
           "protocol": "HTTP/1.1",
           "request_id": "29939dc9-62be-4ddf-acf6-32cb098d****",
           "requested_server_name": "outbound_.8000_._.httpbin.default.svc.cluster.local",
           "response_code": "200",
           "response_flags": "-",
           "route_name": "default",
           "start_time": "2023-06-30T04:18:19.734Z",
           "trace_id": "-",
           "upstream_cluster": "inbound|80||",
           "upstream_host": "192.168.0.29:80",
           "upstream_local_address": "127.0.X.X:34723",
           "upstream_service_time": "2",
           "upstream_transport_failure_reason": "-",
           "user_agent": "Mozilla/5.0 zgrab/0.x",
           "x_forwarded_for": "69.164.XXX.XX",
           "authority_for": "47.110.XX.XXX",
           "upstream_response_time": "2",
           "accept-encoding": "gzip"
       }

    The accept-encoding field added in the previous step now appears in the access log.

Filter logs

In the Log Settings section, select Enable Log Filter and enter a filter expression. Only requests matching the expression produce access logs.

Filter expressions use the Common Expression Language (CEL). For example, to log only error responses (HTTP status code 400 or higher), set the expression to response.code >= 400.

For the full CEL syntax, see CEL. For all available attributes, see Envoy attributes.
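The same filter can also be expressed in a Telemetry resource through its access logging filter field. The following is a hedged sketch that assumes the telemetry.istio.io/v1alpha1 API; the resource name is a placeholder, and the ASM console manages an equivalent configuration for you when you use Enable Log Filter.

```shell
# Hypothetical sketch: emit access logs only for requests whose HTTP status
# code is 400 or higher, using the same CEL expression as above.
kubectl apply -f - <<'EOF'
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: error-only-logs        # placeholder name
  namespace: default
spec:
  accessLogging:
  - filter:
      expression: response.code >= 400
EOF
```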

CEL attributes reference

Attribute             | Type                | Description
request.path          | string              | The request path.
request.url_path      | string              | The request path without the query string.
request.host          | string              | The host name portion of the URL.
request.method        | string              | The request method.
request.headers       | map<string, string> | All request headers, indexed by the lowercase header name.
request.useragent     | string              | The user agent header value.
request.time          | timestamp           | The time when the first byte of the request arrives.
request.id            | string              | The request ID.
request.protocol      | string              | The request protocol. Valid values: HTTP/1.0, HTTP/1.1, HTTP/2, and HTTP/3.
request.query         | string              | The query portion of the request URL.
response.code         | int                 | The HTTP status code in the response.
response.code_details | string              | The response code details.
response.grpc_status  | int                 | The gRPC status code in the response.
response.headers      | map<string, string> | All response headers, indexed by the lowercase header name.
response.size         | int                 | The response body size in bytes.
response.total_size   | int                 | The total response size in bytes.

Metric settings

Enable or disable metrics

Metrics fall into two categories:

  • Client-side metrics: Generated when a sidecar proxy initiates requests as a client. Gateway metrics also belong to this category.

  • Server-side metrics: Generated when a sidecar proxy receives requests as a server.

To enable or disable a metric, select or clear the Enabled check box for the corresponding metric in the Client-side Metrics or Server-side Metrics column.

  • Enabled: The sidecar proxy or gateway exposes the metric at the /stats/prometheus path on port 15020.

  • Disabled: The metric is not exposed.

Verify metrics

Run the following command to view the metrics exposed by a sidecar proxy or gateway:

kubectl exec "$(kubectl get pod -l app=httpbin -o jsonpath='{.items[0].metadata.name}')" \
    -c istio-proxy -- curl -sS 127.0.0.1:15020/stats/prometheus | head -n 10

Sample output:

# TYPE istio_agent_cert_expiry_seconds gauge
istio_agent_cert_expiry_seconds{resource_name="default"} 46725.287654548
# HELP istio_agent_endpoint_no_pod Endpoints without an associated pod.
# TYPE istio_agent_endpoint_no_pod gauge
istio_agent_endpoint_no_pod 0
# HELP istio_agent_go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE istio_agent_go_gc_duration_seconds summary
istio_agent_go_gc_duration_seconds{quantile="0"} 5.0149e-05
istio_agent_go_gc_duration_seconds{quantile="0.25"} 9.8807e-05
...

Edit default dimensions

Dimensions attached to metrics provide filtering context in Prometheus. For example, the source_app dimension filters requests from a specific client application.

  1. In the Metric Settings section, click Edit dimension for an enabled metric in the Client-side Metrics or Server-side Metrics column.

  2. In the Customize CLIENT dimension configuration or Customize SERVER dimension configuration dialog box, select the dimensions to export, then click Submit.

Example: reduce dimensions to lower memory usage

With all dimensions enabled, the istio_request_bytes_sum metric (corresponding to REQUEST_SIZE in the console) contains all dimension labels:

istio_request_bytes_sum{reporter="destination",source_workload="istio-ingressgateway",source_canonical_service="unknown",source_canonical_revision="latest",source_workload_namespace="istio-system",source_principal="spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway",source_app="istio-ingressgateway",source_version="unknown",source_cluster="c479fc4abd2734bfaaa54e9e36fb26c01",destination_workload="httpbin",destination_workload_namespace="default",destination_principal="spiffe://cluster.local/ns/default/sa/httpbin",destination_app="httpbin",destination_version="v1",destination_service="httpbin.default.svc.cluster.local",destination_canonical_service="httpbin",destination_canonical_revision="v1",destination_service_name="httpbin",destination_service_namespace="default",destination_cluster="c479fc4abd2734bfaaa54e9e36fb26c01",request_protocol="http",response_code="200",grpc_response_status="",response_flags="-",connection_security_policy="mutual_tls"} 18000

After you edit the REQUEST_SIZE server-side metric to keep only the response_code dimension, the output becomes:

istio_request_bytes_sum{response_code="200"} 16550

Important

Removing dimensions that your services do not need reduces the memory consumption of Envoy and Prometheus. The Metric Settings section shows only the dimensions that have been removed from the default set.

Add custom dimensions

  1. In the Metric Settings section, click Edit dimension for an enabled metric in the Client-side Metrics or Server-side Metrics column.

  2. In the dialog box, click the Custom Dimension tab, set Dimension Name and Value, then click Submit.

Example: add a request path dimension

Edit the REQUEST_SIZE server-side metric and add a custom dimension:

  • Dimension Name: request_path

  • Value: request.path

After this change, the metric output includes the custom dimension:

istio_request_bytes_sum{response_code="200",request_path="/spec.json"} 5800

Tracing settings

Tracing requires consistent configuration across the entire call chain -- inconsistent sampling rates or reporting endpoints can produce incomplete traces. For this reason, tracing settings are available only at the global scope in the ASM console. Starting from V1.24.6.83, ASM supports namespace-level and workload-level tracing configuration through the Kubernetes API. For more information, see Description of Telemetry fields.

Set the sampling percentage

The sampling percentage determines what proportion of requests trigger trace collection. Set it to 0 to disable tracing entirely.

Add custom tags

Custom tags attach additional metadata to trace spans. In the Tracing Analysis Settings section, click Add Custom Tags and set Name, Type, and Value.

The following table describes the available tag types:

Type                 | Description | Example
Fixed Value          | The tag value is fixed to the string you specify. | Name: env, Type: Fixed Value, Value: prod
Request Header       | The tag value is read from the specified request header. If the header is absent, the default value is used. | Name: useragent, Type: Request Header, Header name: User-Agent, Default value: unknown
Environment Variable | The tag value is read from a workload environment variable. If the variable is absent, the default value is used. | Name: env, Type: Environment Variable, Environment variable name: ENV, Default value: unknown
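The three tag types correspond to the literal, header, and environment fields of the custom tags setting in the Istio Telemetry API. The following hedged sketch assumes the telemetry.istio.io/v1alpha1 API and reuses the example values from the table; the resource name is a placeholder, and the console normally manages this configuration for you.

```shell
# Hypothetical sketch: the table's example tags expressed as Telemetry
# customTags. The environment-variable tag is renamed to "deploy_env" to
# avoid a duplicate key with the literal "env" tag.
kubectl apply -f - <<'EOF'
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-tracing-tags      # placeholder name
  namespace: istio-system
spec:
  tracing:
  - customTags:
      env:                     # Fixed Value
        literal:
          value: prod
      useragent:               # Request Header
        header:
          name: User-Agent
          defaultValue: unknown
      deploy_env:              # Environment Variable
        environment:
          name: ENV
          defaultValue: unknown
EOF
```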

What's next