
Alibaba Cloud Service Mesh: Customize data plane access logs

Last Updated: Mar 11, 2026

After you add a Kubernetes cluster to a Service Mesh (ASM) instance, the Envoy sidecar proxies deployed on the data plane can print access logs for the cluster. ASM allows you to customize the fields of access logs printed by Envoy proxies -- such as specific request headers, response headers, or Envoy built-in variables -- to capture the data you need for troubleshooting and observability.

Prerequisites

Before you begin, make sure that a Kubernetes cluster is added to your ASM instance.

Step 1: Enable access logging

The configuration path differs depending on your ASM instance version.

ASM instances earlier than v1.17.2.35

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the target ASM instance. In the left-side navigation pane, choose ASM Instance > Base Information.

  3. In the upper-right corner, click Settings.

  4. In the Settings Update panel, select Enable access logging and print it to container stdout, then click OK.

After you enable this setting, the istio-proxy container prints access logs in JSON format to container standard output. Each log entry contains the following default fields:

Default log fields

{
    "authority_for": "%REQ(:AUTHORITY)%",
    "bytes_received": "%BYTES_RECEIVED%",
    "bytes_sent": "%BYTES_SENT%",
    "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
    "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
    "duration": "%DURATION%",
    "istio_policy_status": "%DYNAMIC_METADATA(istio.mixer:status)%",
    "method": "%REQ(:METHOD)%",
    "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
    "protocol": "%PROTOCOL%",
    "request_id": "%REQ(X-REQUEST-ID)%",
    "requested_server_name": "%REQUESTED_SERVER_NAME%",
    "response_code": "%RESPONSE_CODE%",
    "response_flags": "%RESPONSE_FLAGS%",
    "route_name": "%ROUTE_NAME%",
    "start_time": "%START_TIME%",
    "trace_id": "%REQ(X-B3-TRACEID)%",
    "upstream_cluster": "%UPSTREAM_CLUSTER%",
    "upstream_host": "%UPSTREAM_HOST%",
    "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
    "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
    "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
    "user_agent": "%REQ(USER-AGENT)%",
    "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%"
}
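Because each entry is a single JSON object, access logs are easy to post-process with standard tooling. The following sketch uses standard-library Python to pull out the fields most useful for triaging a failed request; the log line itself is a hypothetical example in the default format above, not real cluster output:

```python
import json

# A hypothetical access log line in the default JSON format shown above
# (values invented for illustration).
log_line = (
    '{"method": "GET", "path": "/status/503", "protocol": "HTTP/1.1", '
    '"response_code": "503", "response_flags": "UF", "duration": "12", '
    '"upstream_cluster": "outbound|8000||httpbin.default.svc.cluster.local"}'
)

entry = json.loads(log_line)

# Keep only the fields most useful for triage: what was requested,
# what came back, and which Envoy cluster handled it.
summary = {key: entry[key] for key in
           ("method", "path", "response_code", "response_flags", "upstream_cluster")}
print(summary)
```

In a real workflow, each line read from `kubectl logs` can be parsed the same way.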

ASM instances v1.17.2.35 or later

ASM v1.17.2.35 and later provides granular log controls at three levels:

Level        Scope
Global       All sidecar proxies and gateways in the mesh
Namespace    All workloads in a specific namespace
Custom       Specific workloads matched by label

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the target ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. On the Observability Settings page, select a tab based on the scope you want to configure:

    • Global -- Applies to all sidecar proxies and gateways in the mesh.

    • Namespace -- Click Create, then select a namespace from the Namespace drop-down list.

    • Custom -- Click Create, select a namespace from the Namespace drop-down list, then set Name and Matching Label to target specific workloads.

  4. In the Log Settings section, turn on Enable Log Output, then click Submit. Sidecar proxies and gateways on the data plane now print access logs to container standard output (stdout).

    ASM also supports log filtering. For details, see Filter logs in "Configure observability settings".
  5. Verify that logs are being generated. Run kubectl against a sidecar proxy or an ingress gateway:

    Sidecar proxy:

    kubectl logs <pod-name> -c istio-proxy --tail 1

    Show the sample output

    {
           "authority_for": "47.110.XX.XXX",
           "bytes_received": "0",
           "bytes_sent": "22382",
           "downstream_local_address": "192.168.0.29:80",
           "downstream_remote_address": "221.220.XXX.XXX:0",
           "duration": "80",
           "istio_policy_status": "-",
           "method": "GET",
           "path": "/static/favicon.ico",
           "protocol": "HTTP/1.1",
           "request_id": "0f2cf829-3da5-4810-a618-08d9745d****",
           "requested_server_name": "outbound_.8000_._.httpbin.default.svc.cluster.local",
           "response_code": "200",
           "response_flags": "-",
           "route_name": "default",
           "start_time": "2023-06-30T04:00:36.841Z",
           "trace_id": "-",
           "upstream_cluster": "inbound|80||",
           "upstream_host": "192.168.0.29:80",
           "upstream_local_address": "127.0.X.X:55879",
           "upstream_response_time": "79",
           "upstream_service_time": "79",
           "upstream_transport_failure_reason": "-",
           "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.X.X Safari/537.36",
           "x_forwarded_for": "221.220.XXX.XXX"
       }

    Ingress gateway:

    kubectl -n istio-system logs <gateway-pod-name> --tail 1

    Show the sample output

    {
           "authority_for": "47.110.XX.XXX",
           "bytes_received": "0",
           "bytes_sent": "22382",
           "downstream_local_address": "192.168.0.63:80",
           "downstream_remote_address": "221.220.XXX.XXX:64284",
           "duration": "81",
           "istio_policy_status": "-",
           "method": "GET",
           "path": "/static/favicon.ico",
           "protocol": "HTTP/1.1",
           "request_id": "0f2cf829-3da5-4810-a618-08d9745d****",
           "requested_server_name": "-",
           "response_code": "200",
           "response_flags": "-",
           "route_name": "httpbin",
           "start_time": "2023-06-30T04:00:36.841Z",
           "trace_id": "-",
           "upstream_cluster": "outbound|8000||httpbin.default.svc.cluster.local",
           "upstream_host": "192.168.0.29:80",
           "upstream_local_address": "192.168.0.63:36140",
           "upstream_response_time": "81",
           "upstream_service_time": "81",
           "upstream_transport_failure_reason": "-",
           "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.X.X Safari/537.36",
           "x_forwarded_for": "221.220.XXX.XXX"
       }
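Note how the two samples differ: the sidecar's upstream_cluster starts with inbound| because it receives traffic on behalf of its workload, while the gateway's starts with outbound| because it forwards traffic to an upstream service. A small Python sketch of classifying an entry by this prefix (the two entries below are abridged from the samples above):

```python
import json

def traffic_direction(entry: dict) -> str:
    """Classify an access log entry by the direction encoded in upstream_cluster."""
    cluster = entry.get("upstream_cluster", "")
    if cluster.startswith("inbound|"):
        return "inbound"   # sidecar receiving traffic for its workload
    if cluster.startswith("outbound|"):
        return "outbound"  # gateway or sidecar forwarding to an upstream service
    return "unknown"

sidecar = json.loads('{"upstream_cluster": "inbound|80||"}')
gateway = json.loads('{"upstream_cluster": "outbound|8000||httpbin.default.svc.cluster.local"}')
print(traffic_direction(sidecar), traffic_direction(gateway))
```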
  6. (Optional) View access logs in the Container Service for Kubernetes (ACK) console:

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the cluster name, then choose Workloads > Pods in the left-side navigation pane.

    3. On the Pods page, click the pod name and open the Logs tab.

Step 2: Customize log fields

Beyond the default fields, you can customize log fields to capture specific request headers, response headers, or Envoy built-in variables.

ASM instances earlier than v1.17.2.35

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the target ASM instance. In the left-side navigation pane, choose ASM Instance > Base Information.

  3. In the Config Info section, click Update Access Log Format next to Enable access logging and print it to container stdout.

  4. In the Update Access Log Format dialog box, add a custom field. You can select from the built-in optional fields provided by ASM or define your own. This example extracts the end-user request header and writes it to the access log under the key my_custom_key:

    • Set accessLogFormat key to the field name (for example, my_custom_key).

    • Set accessLogFormat value to the Envoy command operator (for example, %REQ(end-user)%).

    • Click OK.

    Optional fields and custom field configuration
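Conceptually, the setting above appends one entry to the JSON log format shown earlier. The resulting fragment would look like the following sketch, where my_custom_key is the example field name from this step and the first two entries are existing default fields shown for context:

```json
{
    "method": "%REQ(:METHOD)%",
    "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
    "my_custom_key": "%REQ(end-user)%"
}
```

%REQ(end-user)% is an Envoy command operator that resolves to the value of the end-user request header, or - if the header is absent.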

ASM instances v1.17.2.35 or later

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the target ASM instance. In the left-side navigation pane, choose Observability Management Center > Observability Settings.

  3. On the Observability Settings page, select the Global, Namespace, or Custom tab.

    • Namespace tab: Click Create, then select a namespace from the Namespace drop-down list.

    • Custom tab: Click Create, select a namespace from the Namespace drop-down list, then set Name and Matching Label.

  4. In the Log Settings section, customize the log format:

    • Select or deselect built-in fields.

    • Modify existing custom fields.

    • Click the add icon to add a new field.

    • Click Submit.

    Log field customization requires Enable Log Output to be turned on. Default fields in the Log Format section are mandatory and cannot be modified.

    Each log field value can come from one of the following sources:

    Source type                Description                            Example
    Request header             A header from the incoming request     Accept-Encoding, end-user
    Response header            A header from the upstream response    X-Custom-Response
    Envoy built-in variable    An Envoy command operator              %PROTOCOL%, %DURATION%

    Example: To log the Accept-Encoding request header, set accessLogFormat key to accept-encoding, Type to Request Properties, and accessLogFormat value to Accept-Encoding.

    Log format configuration

  5. Verify the custom field appears in the logs:

    kubectl logs <pod-name> -c istio-proxy --tail 1 | grep accept-encoding --color=auto

    Show the sample output

    {
           "bytes_received": "0",
           "bytes_sent": "9593",
           "downstream_local_address": "192.168.0.29:80",
           "downstream_remote_address": "69.164.XXX.XX:0",
           "duration": "2",
           "istio_policy_status": "-",
           "method": "GET",
           "path": "/",
           "protocol": "HTTP/1.1",
           "request_id": "29939dc9-62be-4ddf-acf6-32cb098d****",
           "requested_server_name": "outbound_.8000_._.httpbin.default.svc.cluster.local",
           "response_code": "200",
           "response_flags": "-",
           "route_name": "default",
           "start_time": "2023-06-30T04:18:19.734Z",
           "trace_id": "-",
           "upstream_cluster": "inbound|80||",
           "upstream_host": "192.168.0.29:80",
           "upstream_local_address": "127.0.X.X:34723",
           "upstream_service_time": "2",
           "upstream_transport_failure_reason": "-",
           "user_agent": "Mozilla/5.0 zgrab/0.x",
           "x_forwarded_for": "69.164.XXX.XX",
           "authority_for": "47.110.XX.XXX",
           "upstream_response_time": "2",
           "accept-encoding": "gzip"
       }

    The accept-encoding field now appears in the log output with the value gzip.
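As an alternative to grep, the entry can be parsed and checked structurally, which avoids false matches inside other field values. A sketch, with a minimal hypothetical entry standing in for the real log line:

```python
import json

# In practice the line would come from:
#   kubectl logs <pod-name> -c istio-proxy --tail 1
# Here a minimal hypothetical entry stands in for the real output.
log_line = '{"response_code": "200", "accept-encoding": "gzip"}'

entry = json.loads(log_line)
assert "accept-encoding" in entry, "custom field missing from access log"
print(entry["accept-encoding"])
```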

Step 3: View access logs

After you enable access logging and customize log fields, sidecar proxies print logs in the updated format for every request they handle. Use the Bookinfo sample application to verify end-to-end:

  1. Open http://<ingress-gateway-IP>/productpage in your browser to generate traffic.

  2. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, click the cluster name, then choose Workloads > Deployments in the left-side navigation pane.

  4. Select default from the Namespace drop-down list, then click Details in the Actions column for the productpage-v1 deployment.

  5. On the deployment details page, click the Logs tab and set Container to istio-proxy. The log output shows the custom fields you configured. For example, a field specifying end-user: jason confirms that custom log extraction is working.

    Access log with custom end-user field

Duration fields reference

Use these fields to diagnose latency issues in the request lifecycle.

In service mesh terminology, upstream refers to the service receiving a request, and downstream refers to the service initiating a request. For example, when service A calls service B, A is downstream and B is upstream.

duration (%DURATION%)
  HTTP: Time from when the proxy starts reading the request to when it sends the last response byte. This is the total processing time.
  TCP: Duration of the downstream connection.

request_duration (%REQUEST_DURATION%)
  HTTP: Time to read the entire request (header + body) from the downstream service. A high value may indicate network congestion or I/O bottlenecks.
  TCP: Not available (outputs -).

request_tx_duration (%REQUEST_TX_DURATION%)
  HTTP: Time from request initiation to sending the last request byte to the upstream service. A high value may indicate network issues or I/O bottlenecks between the proxy and upstream.
  TCP: Not available (outputs -).

response_duration (%RESPONSE_DURATION%)
  HTTP: Time from request initiation to reading the first response byte from the upstream service. If this value is high but request_tx_duration is normal, the upstream service likely has a performance bottleneck.
  TCP: Not available (outputs -).

response_tx_duration (%RESPONSE_TX_DURATION%)
  HTTP: Time from reading the first response byte from the upstream service to sending the last response byte to the downstream service. A high value may indicate network issues or I/O bottlenecks.
  TCP: Not available (outputs -).

upstream_service_time (sidecar) / upstream_response_time (gateway) (%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%)
  HTTP: Processing time of the upstream service plus network overhead. A high value may indicate slow upstream processing or high network latency.
  TCP: -

For HTTP requests with a body (Content-Length > 0), the proxy reads and forwards the request simultaneously -- it does not buffer the full request body before forwarding. As a result, slow reads from the downstream service can increase the upstream service's perceived processing time.
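As a rough worked example of using these fields together: duration covers the whole request, while upstream_service_time covers upstream processing plus network overhead, so their difference approximates the time spent on the downstream side. The values below are hypothetical, taken to match the shape of the sidecar sample earlier in this topic:

```python
# Hypothetical values, in milliseconds, from a single access log entry.
duration = 80               # %DURATION%: total processing time
upstream_service_time = 79  # %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%

# What duration covers but upstream_service_time does not is, roughly,
# time spent on the downstream side (reading the request, writing the response).
downstream_overhead = duration - upstream_service_time
print(f"downstream overhead ~ {downstream_overhead} ms")
```

A consistently large difference points at the downstream network path or a slow client rather than the upstream service.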

What's next