Alibaba Cloud Service Mesh (ASM) provides granular control over access logs, metrics, and distributed tracing. Configure these settings at three scopes -- global, namespace, or workload -- to balance observability coverage against resource consumption.
Prerequisites
An ASM instance of V1.17.2.35 or later is created. For more information, see Create an ASM instance or Update an ASM instance.
Configuration scopes
ASM supports three configuration scopes.
| Scope | Covers | Available settings | Limit |
|---|---|---|---|
| Global | All workloads in the mesh | Log settings, metric settings, and tracing settings | One global configuration per mesh |
| Namespace | All workloads in a single namespace | Log settings and metric settings | One configuration per namespace |
| Custom | Workloads matched by label selectors | Log settings and metric settings | Each workload can be matched by only one custom configuration |
Tracing settings are available only at the global scope in the ASM console. Starting from ASM V1.24.6.83, you can modify Telemetry resources through the Kubernetes API to enable namespace-level and workload-level tracing configuration. For more information, see Description of Telemetry fields.
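For ASM versions that support it, a namespace-scoped Telemetry resource might look like the following sketch. The field names follow the standard Istio `telemetry.istio.io/v1alpha1` API; the namespace `demo` and the provider name `my-tracing-provider` are illustrative placeholders, and the provider must already be registered in your mesh configuration:

```yaml
# Hypothetical sketch: enable 10% trace sampling for all workloads
# in the "demo" namespace. The provider name is an assumed placeholder.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: namespace-tracing
  namespace: demo
spec:
  tracing:
  - providers:
    - name: my-tracing-provider
    randomSamplingPercentage: 10
```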
Configure global settings
Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.
On the Observability Settings page, click the Global tab.
Configure log settings, metric settings, and tracing settings, then click Submit.
Configure namespace-level settings
Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.
On the Observability Settings page, click the Namespace tab, then click Create.
Select the target namespace from the Namespace drop-down list, configure log settings and metric settings, then click Create.
Configure custom settings
Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.
On the Observability Settings page, click the Custom tab, select the target namespace from the Namespace drop-down list, then click Create.
Set Name and Matching Label to select the target workloads, configure log settings and metric settings, then click Create.
Log settings
Enable or disable access log output
In the Log Settings section, turn on or turn off Enable Log Output.
On: Sidecar proxies and gateways on the data plane write access logs to the standard output (stdout) of their containers.
Off: Access log output stops. No logs are emitted to stdout.
Verify log output
Run the following kubectl command to view the latest access log entry from a sidecar proxy:
kubectl logs <pod-name> -c istio-proxy --tail 1

Replace <pod-name> with the name of the target pod. To find it dynamically:

kubectl logs "$(kubectl get pod -l app=httpbin -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy --tail 1

To view ingress gateway logs:

kubectl -n istio-system logs <gateway-pod-name> --tail 1

You can also view access logs in the Container Service for Kubernetes (ACK) console:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster and click its name. In the left-side pane, choose Workloads > Pods.
On the Pods page, click the pod name and then click the Logs tab.
Set the log output format
This feature requires ASM V1.20.6.36 or later. For more information about how to update an ASM instance, see Update an ASM instance.
In the Log Settings section, set Log Output Format to one of the following:
JSON: Access logs are output as JSON strings.
TEXT: Access logs are output as plain text strings.
Customize log fields
In the Log Settings section, select or clear custom fields, or click the add icon to add new log fields. Log field customization requires Enable Log Output to be turned on. The default log fields in the Log Format section are mandatory and cannot be removed. Each field value can come from a request header, response header, or Envoy built-in value.
For example, to add the Accept-Encoding request header to logs, set:

accessLogFormat key: accept-encoding
Type: Request Properties
accessLogFormat value: Accept-Encoding
Verify the custom field appears in the output:
kubectl logs "$(kubectl get pod -l app=httpbin -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy --tail 1 | grep accept-encoding

The accept-encoding field added in the previous step now appears in the access log.
Filter logs
In the Log Settings section, select Enable Log Filter and enter a filter expression. Only requests matching the expression produce access logs.
Filter expressions use the Common Expression Language (CEL). For example, to log only error responses (HTTP status code 400 or higher), set the expression to response.code >= 400.
For the full CEL syntax, see CEL. For all available attributes, see Envoy attributes.
CEL attributes reference
| Attribute | Type | Description |
|---|---|---|
| request.path | string | The request path. |
| request.url_path | string | The request path without the query string. |
| request.host | string | The host name portion of the URL. |
| request.method | string | The request method. |
| request.headers | map<string, string> | All request headers indexed by the lowercase header name. |
| request.useragent | string | The user agent header value. |
| request.time | timestamp | The time when the first byte of the request arrives. |
| request.id | string | The request ID. |
| request.protocol | string | The request protocol. Valid values: HTTP/1.0, HTTP/1.1, HTTP/2, and HTTP/3. |
| request.query | string | The query portion of the request URL. |
| response.code | int | The HTTP status code in the response. |
| response.code_details | string | The response code details. |
| response.grpc_status | int | The gRPC status code in the response. |
| response.headers | map<string, string> | All response headers indexed by the lowercase header name. |
| response.size | int | The response body size in bytes. |
| response.total_size | int | The total response size in bytes. |
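Building on the attributes above, the following filter expressions illustrate common combinations. These are hedged examples that follow standard CEL syntax (`&&`, `!=`, `string.startsWith`); verify them against your ASM version before relying on them:

```
# Log only server errors
response.code >= 500

# Log only requests under a specific path prefix
request.url_path.startsWith("/api")

# Log HTTP/2 calls whose gRPC status is not OK (0)
request.protocol == "HTTP/2" && response.grpc_status != 0
```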
Metric settings
Enable or disable metrics
Metrics fall into two categories:
Client-side metrics: Generated when a sidecar proxy initiates requests as a client. Gateway metrics also belong to this category.
Server-side metrics: Generated when a sidecar proxy receives requests as a server.
To enable or disable a metric, select or clear the Enabled check box for the corresponding metric in the Client-side Metrics or Server-side Metrics column.
Enabled: The sidecar proxy or gateway exposes the metric at the /stats/prometheus path on port 15020.
Disabled: The metric is not exposed.
Verify metrics
Run the following command to view the metrics exposed by a sidecar proxy or gateway:
kubectl exec "$(kubectl get pod -l app=httpbin -o jsonpath='{.items[0].metadata.name}')" \
  -c istio-proxy -- curl -sS 127.0.0.1:15020/stats/prometheus | head -n 10

Sample output:
# TYPE istio_agent_cert_expiry_seconds gauge
istio_agent_cert_expiry_seconds{resource_name="default"} 46725.287654548
# HELP istio_agent_endpoint_no_pod Endpoints without an associated pod.
# TYPE istio_agent_endpoint_no_pod gauge
istio_agent_endpoint_no_pod 0
# HELP istio_agent_go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE istio_agent_go_gc_duration_seconds summary
istio_agent_go_gc_duration_seconds{quantile="0"} 5.0149e-05
istio_agent_go_gc_duration_seconds{quantile="0.25"} 9.8807e-05
......

Edit default dimensions
Dimensions attached to metrics provide filtering context in Prometheus. For example, the source_app dimension filters requests from a specific client application.
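As an illustration, a Prometheus query such as the following uses the source_app dimension to compute the per-destination request rate from one client application. This is a sketch: the metric and label names match Istio's standard telemetry, but confirm them in your environment:

```
sum(rate(istio_requests_total{source_app="istio-ingressgateway"}[5m]))
  by (destination_service)
```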
In the Metric Settings section, click Edit dimension for an enabled metric in the Client-side Metrics or Server-side Metrics column.
In the Customize CLIENT dimension configuration or Customize SERVER dimension configuration dialog box, select the dimensions to export, then click Submit.
Example: reduce dimensions to lower memory usage
With all dimensions enabled, the istio_request_bytes_sum metric (corresponding to REQUEST_SIZE in the console) contains all dimension labels:
istio_request_bytes_sum{reporter="destination",source_workload="istio-ingressgateway",source_canonical_service="unknown",source_canonical_revision="latest",source_workload_namespace="istio-system",source_principal="spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway",source_app="istio-ingressgateway",source_version="unknown",source_cluster="c479fc4abd2734bfaaa54e9e36fb26c01",destination_workload="httpbin",destination_workload_namespace="default",destination_principal="spiffe://cluster.local/ns/default/sa/httpbin",destination_app="httpbin",destination_version="v1",destination_service="httpbin.default.svc.cluster.local",destination_canonical_service="httpbin",destination_canonical_revision="v1",destination_service_name="httpbin",destination_service_namespace="default",destination_cluster="c479fc4abd2734bfaaa54e9e36fb26c01",request_protocol="http",response_code="200",grpc_response_status="",response_flags="-",connection_security_policy="mutual_tls"} 18000

After you edit the REQUEST_SIZE server-side metric to keep only the response_code dimension, the output becomes:
istio_request_bytes_sum{response_code="200"} 16550

Removing dimensions that your services do not need reduces the memory consumption of Envoy and Prometheus. The Metric Settings section shows only the dimensions that have been removed from the default set.
Add custom dimensions
In the Metric Settings section, click Edit dimension for an enabled metric in the Client-side Metrics or Server-side Metrics column.
In the dialog box, click the Custom Dimension tab, set Dimension Name and Value, then click Submit.
Example: add a request path dimension
Edit the REQUEST_SIZE server-side metric and add a custom dimension:
Dimension Name: request_path
Value: request.path
After this change, the metric output includes the custom dimension:
istio_request_bytes_sum{response_code="200",request_path="/spec.json"} 5800

Tracing settings
Tracing requires consistent configuration across the entire call chain -- inconsistent sampling rates or reporting endpoints can produce incomplete traces. For this reason, tracing settings are available only at the global scope in the ASM console. Starting from V1.24.6.83, ASM supports namespace-level and workload-level tracing configuration through the Kubernetes API. For more information, see Description of Telemetry fields.
Set the sampling percentage
The sampling percentage determines what proportion of requests trigger trace collection. Set it to 0 to disable tracing entirely.
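Because the number of collected traces scales linearly with the sampling percentage, a quick back-of-the-envelope estimate is easy. The following is plain shell arithmetic, not an ASM command:

```shell
# Estimate traces collected for 10000 requests at several sampling percentages.
requests=10000
for pct in 100 10 1; do
  echo "sampling=${pct}% -> $((requests * pct / 100)) traces"
done
```

At a 1% sampling percentage, roughly 1 in 100 requests produces a trace, which keeps collection overhead low at the cost of missing most individual requests.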
Add custom tags
Custom tags attach additional metadata to trace spans. In the Tracing Analysis Settings section, click Add Custom Tags and set Name, Type, and Value.
The following table describes the available tag types:
| Type | Description | Example |
|---|---|---|
| Fixed Value | The tag value is fixed to the string you specify. | Name: env, Type: Fixed Value, Value: prod |
| Request Header | The tag value is read from a specified request header. If the header is absent, the default value is used. | Name: useragent, Type: Request Header, Header name: User-Agent, Default value: unknown |
| Environment Variable | The tag value is read from a workload environment variable. If the variable is absent, the default value is used. | Name: env, Type: Environment Variable, Environment variable name: ENV, Default value: unknown |
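For ASM versions that support Telemetry resources through the Kubernetes API, the first two tag types above correspond to the literal and header fields of the Istio Telemetry customTags API. A hedged sketch, with the provider name as an assumed placeholder:

```yaml
# Hypothetical sketch: the tag names mirror the table above; the
# provider "my-tracing-provider" is an assumed placeholder.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: custom-tags
  namespace: istio-system
spec:
  tracing:
  - providers:
    - name: my-tracing-provider
    customTags:
      env:
        literal:
          value: prod
      useragent:
        header:
          name: User-Agent
          defaultValue: unknown
```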