
Alibaba Cloud Service Mesh:Configure Sidecar proxy

Last Updated: Mar 11, 2026

Sidecar proxies intercept service-to-service traffic to add mutual TLS, load balancing, retries, and observability -- without changing application code. This topic describes how to configure sidecar proxy settings at the global, namespace, and workload levels through the ASM console.

How configuration levels work

ASM applies sidecar proxy settings at four scope levels. A higher-priority level overrides conflicting settings from lower-priority levels.

| Priority | Level | Scope | How to configure |
|---|---|---|---|
| 4 (highest) | Pod | Individual Pods | Add annotations to the Pod spec. See Configure a sidecar proxy by adding resource annotations. |
| 3 | Workload | Pods matching a label selector within a namespace | ASM console > Sidecar Proxy Setting > workload tab |
| 2 | Namespace | All Pods in a namespace | ASM console > Sidecar Proxy Setting > Namespace tab |
| 1 (lowest) | Global | All Pods across all namespaces | ASM console > Sidecar Proxy Setting > global tab |

For example, if both a global setting and a namespace-level setting exist for the default namespace, the namespace-level setting takes precedence for workloads deployed in default.

Note

At the namespace and workload levels, unconfigured items inherit the global-level value. These levels have no independent defaults.

When to use each level:

| Level | Use when |
|---|---|
| Global | You want a baseline configuration for all workloads in the mesh |
| Namespace | A specific namespace requires different resource limits, traffic rules, or lifecycle settings |
| Workload | Individual workloads have unique requirements (for example, higher concurrency or custom drain duration) |
| Pod | You need one-off overrides without creating a workload-level configuration |
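Pod-level overrides are applied as annotations on the Pod template. The fragment below is a sketch, assuming the standard Istio resource annotations (sidecar.istio.io/proxyCPU and related) are among those supported; the Deployment name and image are illustrative:

```yaml
# Hypothetical Pod-level override (priority 4). Annotations on the Pod
# template take precedence over global, namespace, and workload settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # example name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.istio.io/proxyCPU: 200m         # sidecar CPU request
        sidecar.istio.io/proxyMemory: 256Mi     # sidecar memory request
        sidecar.istio.io/proxyCPULimit: "1"     # sidecar CPU limit
        sidecar.istio.io/proxyMemoryLimit: 1Gi  # sidecar memory limit
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest
```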

Configuration items at a glance

Use this table to find the setting you need and verify that your ASM instance meets the version requirement.

| Category | Configuration item | Global | Namespace | Workload |
|---|---|---|---|---|
| Resource settings | Configure resources for Injected Istio proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Resource settings | Configure Sidecar Resources Proportionally | 1.24.6.83 | 1.24.6.83 | 1.24.6.83 |
| Resource settings | Configure Resources for istio-init Container | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Resource settings | Set ACK resources that can be dynamically overcommitted for Sidecar proxy | 1.16.3.47 | 1.16.3.47 | 1.16.3.47 |
| Resource settings | Number of Sidecar proxy threads | 1.15.3.104 | 1.12.4.19 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure addresses to which external access is redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure addresses to which external access is not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure ports on which inbound traffic redirected to Sidecar proxy | 1.15.3.104 | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure ports on which outbound traffic redirected to Sidecar proxy | 1.15.3.104 | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure ports on which inbound traffic not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure ports on which outbound traffic not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| DNS proxy | Enable DNS Proxy | 1.8.3.17 | 1.10.5.34 | 1.13.4.20 |
| Manage environment variables for Sidecar proxy | Sidecar Graceful Shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS) | 1.15.3.104 | 1.15.3.104 | 1.15.3.104 |
| Lifecycle management | Sidecar Graceful Startup | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Lifecycle management | Configure Sidecar proxy drain duration at pod termination | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Lifecycle management | Configure lifecycle of Sidecar proxy | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Outbound traffic policy | Outbound Traffic Policy | All versions | 1.10.5.34 | 1.13.4.20 |
| Sidecar traffic interception policy | Sidecar Traffic Interception Policy | 1.15.3.25 | 1.15.3.25 | 1.15.3.25 |
| Monitoring | Log Level | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Monitoring | Configure proxyStatsMatcher | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Envoy runtime parameters | Limits on Downstream Connections | 1.21.6.95 | 1.21.6.95 | 1.21.6.95 |

If a configuration item is not available in your ASM console, update your ASM instance to the required version.

Important

If your ASM version is V1.22 or later and your Kubernetes cluster version is V1.30 or later, sidecar proxies are deployed as native sidecar containers. Kubernetes manages the lifecycle of native sidecar containers, overriding all lifecycle management configurations.
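For context, Kubernetes expresses a native sidecar as an init container with restartPolicy: Always. An injected Pod in this mode would contain a fragment like the following (illustrative):

```yaml
# With native sidecar containers, the proxy moves from .spec.containers
# to .spec.initContainers; restartPolicy: Always marks it as a sidecar
# whose lifecycle Kubernetes manages itself.
spec:
  initContainers:
    - name: istio-proxy
      restartPolicy: Always
```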

Configure sidecar proxy settings

All configuration items are managed from the same console page. The general workflow applies to global, namespace, and workload levels:

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the target ASM instance. In the left-side navigation pane, choose Data Plane Component Management > Sidecar Proxy Setting.

  3. Select the scope tab (global, Namespace, or workload).

  4. Expand the relevant configuration section, set the values, and click Update Settings.

  5. If the change requires a pod restart, redeploy affected workloads.

The following sections describe each configuration item in detail.

Resource settings

Sidecar proxy resources (istio-proxy)

Set the CPU and memory resource requests and limits for the istio-proxy container.

| Setting | Description |
|---|---|
| Resource Limits | Maximum CPU (cores) and memory (MiB) the sidecar proxy container can consume. |
| Required Resources | Minimum CPU (cores) and memory (MiB) guaranteed to the sidecar proxy container at runtime. |

Configure in the console

  1. On the Sidecar Proxy Setting page, select the target scope tab and expand Resource Settings.

  2. At the namespace or workload level, select Configure Resources for Injected Istio Proxy.

  3. In Resource Limits, set CPU to 2 cores and Memory to 1025 MiB. In Required Resources, set CPU to 0.1 cores and Memory to 128 MiB.

  4. Click Update Settings.

  5. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

In the output, confirm that the istio-proxy container's resources field matches the configured values:

spec:
  containers:
    - name: istio-proxy
      resources:
        limits:
          cpu: '2'
          memory: 1025Mi
        requests:
          cpu: 100m
          memory: 128Mi

Proportional sidecar resource allocation

Instead of setting fixed values, allocate sidecar proxy resources as a percentage of the workload container's resources. This overrides the default Istio proxy resource settings.

Two computing policies are available:

  • Max container resources: Uses the highest CPU or memory limit among all containers in the Pod as the baseline.

  • Specified container: Uses a named container as the baseline. If the named container does not exist, the default Istio proxy resources apply. The Pod annotation scaled-resource.inject.istio.alibabacloud.com/container-ref takes priority over a manually specified container name.

Important

Regardless of the policy:

  • If the baseline container has no limits, the sidecar proxy has no limits configured.

  • If the baseline container has no requests, the system allocates minimum resources: CPU 100m, memory 128Mi.

  • Both requests and limits for CPU must be at least 100m and for memory at least 128Mi. Calculated values below these thresholds are automatically raised to the minimum.
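The rules above can be sketched in shell (illustrative only; ASM computes the real values at injection time). CPU is in millicores and memory in MiB; fractions are truncated and results below the 100m / 128Mi minimums are raised:

```shell
# scale <value> <percent> <minimum> -> proportional value with floor applied
scale() {
  v=$(( $1 * $2 / 100 ))           # integer math truncates the fraction
  if [ "$v" -lt "$3" ]; then v=$3; fi
  echo "$v"
}

# Workload container: requests 300m/512Mi, limits 500m/1024Mi; proportion 20%
echo "requests: cpu=$(scale 300 20 100)m memory=$(scale 512 20 128)Mi"
# -> requests: cpu=100m memory=128Mi   (60m and 102Mi raised to the minimums)
echo "limits:   cpu=$(scale 500 20 100)m memory=$(scale 1024 20 128)Mi"
# -> limits:   cpu=100m memory=204Mi   (204.8Mi truncated to 204Mi)
```

These results match the 20% row in the example table below.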

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.

  2. Select Configure sidecar resources by ratio.

  3. Set the Proportion of Resources percentage, optionally select a Computing Policy, and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Example outcomes with a workload container configured as requests: {cpu: 300m, memory: 512Mi} and limits: {cpu: 500m, memory: 1Gi}:

| Proportion | Sidecar requests | Sidecar limits |
|---|---|---|
| 50% | cpu: 150m, memory: 256Mi | cpu: 250m, memory: 512Mi |
| 20% | cpu: 100m, memory: 128Mi | cpu: 100m, memory: 204Mi |

When the workload container has no requests configured and the proportion is 50%:

resources:
  requests:
    cpu: 100m       # Minimum fallback
    memory: 128Mi   # Minimum fallback
  limits:
    cpu: 250m
    memory: 512Mi

When the workload container has no limits configured and the proportion is 50%:

resources:
  requests:
    cpu: 150m
    memory: 256Mi
  # No limits set

istio-init container resources

Set resource requests and limits for the istio-init container -- the init container that configures iptables traffic interception rules before the sidecar proxy starts.

| Setting | Description |
|---|---|
| Resource Limits | Maximum CPU (cores) and memory (MiB) for the istio-init container. |
| Required Resources | Minimum CPU (cores) and memory (MiB) for the istio-init container at runtime. |

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.

  2. At the namespace or workload level, select Configure Resources for istio-init Container.

  3. In Resource Limits, set CPU to 1 core and Memory to 512 MiB. In Required Resources, set CPU to 0.1 cores and Memory to 128 MiB.

  4. Click Update Settings.

  5. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the istio-init container's resources:

spec:
  initContainers:
    - name: istio-init
      resources:
        limits:
          cpu: '1'
          memory: 512Mi
        requests:
          cpu: 100m
          memory: 128Mi

ACK dynamically overcommitted resources

Allocate dynamically overcommitted resources to the sidecar proxy and istio-init containers. This setting takes effect only when the Pod has the koordinator.sh/qosClass label, which indicates that ACK dynamic resource overcommitment is enabled.

Resource units for dynamically overcommitted resources use kubernetes.io/batch-cpu (in millicores) and kubernetes.io/batch-memory (in MiB).

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.

  2. Select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy and configure the following:

| Component | Setting | Example values |
|---|---|---|
| Sidecar proxy (istio-proxy) | Resource Limits | CPU: 2000 millicores, Memory: 2048 MiB |
| Sidecar proxy (istio-proxy) | Required Resources | CPU: 200 millicores, Memory: 256 MiB |
| istio-init container | Resource Limits | CPU: 1000 millicores, Memory: 1024 MiB |
| istio-init container | Required Resources | CPU: 100 millicores, Memory: 128 MiB |
  3. Click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm that both containers use batch resource types:

spec:
  containers:
    - name: istio-proxy
      resources:
        limits:
          kubernetes.io/batch-cpu: 2k
          kubernetes.io/batch-memory: 2Gi
        requests:
          kubernetes.io/batch-cpu: '200'
          kubernetes.io/batch-memory: 256Mi
  initContainers:
    - name: istio-init
      resources:
        limits:
          kubernetes.io/batch-cpu: 1k
          kubernetes.io/batch-memory: 1Gi
        requests:
          kubernetes.io/batch-cpu: '100'
          kubernetes.io/batch-memory: 128Mi

Sidecar proxy worker threads

Set the number of Envoy worker threads for the sidecar proxy. The value must be a non-negative integer. When set to 0, the thread count is automatically determined based on the sidecar proxy's CPU resource limits (or resource requests if no limits are set).
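The automatic rule can be sketched as follows, assuming the common behavior of rounding the CPU limit up to whole cores (an assumption based on upstream Istio, not an ASM guarantee):

```shell
# auto_threads <cpu-limit-in-millicores> -> worker thread count
# Rounds the CPU limit up to the next whole core.
auto_threads() { echo $(( ($1 + 999) / 1000 )); }

auto_threads 2000   # limit 2 cores   -> 2
auto_threads 1500   # limit 1.5 cores -> 2
auto_threads 100    # limit 100m      -> 1
```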

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.

  2. At the namespace or workload level, set Number of Sidecar Proxy Threads to 3.

  3. Click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the --concurrency argument in the istio-proxy container:

spec:
  containers:
    - name: istio-proxy
      args:
        - proxy
        - sidecar
        - '--concurrency'
        - '3'

Traffic interception by port or IP address

Control which traffic the sidecar proxy intercepts by specifying IP address ranges and port numbers. These settings are managed under the Enable/Disable Sidecar Proxy by Ports or IP Addresses section.

The following table summarizes all traffic interception settings and the istio-init parameter each controls:

| Setting | Direction | Effect | istio-init parameter |
|---|---|---|---|
| Redirect addresses (include) | Outbound | Intercept traffic to these CIDR ranges only. Default: * (all). | -i |
| Bypass addresses (exclude) | Outbound | Skip interception for these CIDR ranges. Takes precedence over the include list. | -x |
| Redirect ports - inbound (include) | Inbound | Intercept inbound traffic on these ports only. Default: * (all). | -b |
| Bypass ports - inbound (exclude) | Inbound | Skip interception for inbound traffic on these ports. Effective only when redirect ports is *. | -d |
| Redirect ports - outbound (include) | Outbound | Intercept outbound traffic to these destination ports. | -q |
| Bypass ports - outbound (exclude) | Outbound | Skip interception for outbound traffic to these ports, regardless of other settings. | -o |

Outbound traffic: redirect addresses (include list)

Specify IP address ranges in CIDR notation, separated by commas. The sidecar proxy intercepts outbound traffic only to destinations within these ranges. The default value * intercepts all outbound traffic.

Configure in the console

  1. Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. At the namespace or workload level, select Addresses to Which External Access Is Redirected to Sidecar Proxy.

  3. Enter 192.168.0.0/16,10.1.0.0/24 and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

The -i parameter in the istio-init container reflects the configured CIDR ranges:

initContainers:
  - name: istio-init
    args:
      - '-i'
      - '192.168.0.0/16,10.1.0.0/24'

Outbound traffic: bypass addresses (exclude list)

Specify IP address ranges in CIDR notation, separated by commas. Outbound traffic to destinations within these ranges bypasses the sidecar proxy.

Important

If an IP address appears in both the redirect list and the bypass list, the bypass list takes precedence -- the sidecar proxy does not intercept traffic to that address.

Configure in the console

  1. Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. At the namespace or workload level, select Addresses to Which External Access Is Not Redirected to Sidecar Proxy.

  3. Enter 10.1.0.0/24 and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

The -x parameter in the istio-init container includes the configured CIDR range along with the default host CIDR (192.168.0.1/32):

initContainers:
  - name: istio-init
    args:
      - '-x'
      - '192.168.0.1/32,10.1.0.0/24'

Inbound traffic: redirect ports (include list)

Specify port numbers separated by commas. The sidecar proxy intercepts inbound traffic only on these ports. The default value * intercepts all inbound traffic.

Configure in the console

  1. Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. At the namespace or workload level, select Ports on Which Inbound Traffic Redirected to Sidecar Proxy.

  3. Enter 80,443 and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

The -b parameter in the istio-init container reflects the configured ports:

initContainers:
  - name: istio-init
    args:
      - '-b'
      - '80,443'

Inbound traffic: bypass ports (exclude list)

Specify port numbers separated by commas. Inbound traffic on these ports bypasses the sidecar proxy.

Important

This setting takes effect only when Ports on Which Inbound Traffic Redirected to Sidecar Proxy is set to the default value * (intercept all inbound traffic).

By default, the sidecar proxy already excludes its own application ports: 15090, 15021, 15081, 9191, and 15020.

Configure in the console

  1. Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. At the namespace or workload level, select Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy.

  3. Enter 8000 and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

The -d parameter in the istio-init container includes port 8000 alongside the default excluded ports:

initContainers:
  - name: istio-init
    args:
      - '-d'
      - '15090,15021,15081,9191,15020,8000'

Outbound traffic: redirect ports (include list)

Specify port numbers separated by commas. The sidecar proxy intercepts outbound traffic only to these destination ports.

Important

Even if a port is in this list, the sidecar proxy does not intercept a request when the destination IP address falls within the bypass address list, or the destination port falls within the outbound bypass port list.

Configure in the console

  1. Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. At the namespace or workload level, select Ports on Which Outbound Traffic Redirected to Sidecar Proxy.

  3. Enter 80,443 and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

The -q parameter in the istio-init container reflects the configured ports:

initContainers:
  - name: istio-init
    args:
      - '-q'
      - '80,443'

Outbound traffic: bypass ports (exclude list)

Specify port numbers separated by commas. Outbound traffic to these ports bypasses the sidecar proxy, regardless of any redirect address or redirect port settings.

Configure in the console

  1. Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. At the namespace or workload level, select Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.

  3. Enter 8000 and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

The -o parameter in the istio-init container reflects the configured port:

initContainers:
  - name: istio-init
    args:
      - '-o'
      - '8000'

DNS proxy

When enabled, the sidecar proxy intercepts DNS requests from the workload and resolves them locally using its cached IP-to-domain mappings. Most queries are resolved without contacting a remote DNS server, which improves DNS performance and availability. Queries that the sidecar cannot resolve are forwarded to the upstream DNS server.

For more information, see Use the DNS proxy feature in an ASM instance.

Important

DNS proxy is not supported for sidecar proxies in ACK Serverless clusters or Elastic Container Instance (ECI)-based Pods due to network permission restrictions.

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand DNS Proxy.

  2. At the namespace or workload level, select Enable DNS Proxy, turn on the switch, and click Update Settings.

  3. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm that the istio-proxy container has both DNS-related environment variables set to true:

env:
  - name: ISTIO_META_DNS_AUTO_ALLOCATE
    value: 'true'
  - name: ISTIO_META_DNS_CAPTURE
    value: 'true'

Environment variables

Add custom environment variables to the sidecar proxy container.

Graceful shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS)

When enabled, the sidecar proxy performs the following during Pod termination:

  1. The pilot-agent process stops Envoy from accepting new inbound connections.

  2. After a 5-second wait, pilot-agent polls Envoy's active connection count.

  3. Once active connections reach zero, pilot-agent terminates the Envoy process.

This reduces dropped requests during termination and minimizes shutdown time.

Important

When EXIT_ON_ZERO_ACTIVE_CONNECTIONS is set to true, the Sidecar Proxy Drain Duration at Pod Termination setting has no effect.

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Manage Environment Variables for Sidecar Proxy.

  2. At the namespace or workload level, select Sidecar Graceful Shutdown.

  3. Turn on the switch and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the environment variable in the istio-proxy container:

env:
  - name: EXIT_ON_ZERO_ACTIVE_CONNECTIONS
    value: 'true'

Lifecycle management

Graceful startup

By default, the sidecar proxy starts before the application containers in a Pod. This prevents traffic loss that could occur if inbound traffic reaches the application before the sidecar proxy is ready.

Disable this setting to start the sidecar proxy and application containers simultaneously. This can speed up deployments in clusters with many Pods where the API server is under heavy load.

Configure in the console (example: global level)

  1. On the Sidecar Proxy Setting page, select the global tab and expand Lifecycle Management.

  2. Turn off the switch next to Sidecar Graceful Startup and click Update Settings.

  3. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

When disabled, the istio-proxy container's PROXY_CONFIG environment variable contains "holdApplicationUntilProxyStarts":false, and no default lifecycle field is declared.

Drain duration at Pod termination

After a Pod begins terminating, the sidecar proxy continues processing existing inbound traffic for a configurable period before closing connections. This period is the drain duration. The default is 5s.

If any API served by the workload takes longer than the drain duration to respond, increase this value to prevent in-flight requests from being dropped.
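For a one-off Pod-level override, Istio's proxy.istio.io/config annotation can typically carry the same setting. This fragment is an assumption based on upstream Istio, not an ASM-specific guarantee:

```yaml
# Hypothetical Pod-level equivalent of the console setting.
metadata:
  annotations:
    proxy.istio.io/config: |
      terminationDrainDuration: 10s
```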

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Lifecycle Management.

  2. At the namespace or workload level, select Sidecar Proxy Drain Duration at Pod Termination.

  3. Enter 10s and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the istio-proxy container has the drain duration configured:

env:
  - name: TERMINATION_DRAIN_DURATION_SECONDS
    value: '10'
  - name: PROXY_CONFIG
    value: >-
      {..."terminationDrainDuration":"10s"}

Custom lifecycle hooks

Customize the Kubernetes container lifecycle hooks for the sidecar proxy container by providing the lifecycle field in JSON format. This replaces the default lifecycle configuration.

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Lifecycle Management.

  2. At the namespace or workload level, select Lifecycle of Sidecar Proxy.

  3. Enter the lifecycle configuration in JSON:

{
  "postStart": {
    "exec": {
      "command": [
        "pilot-agent",
        "wait"
      ]
    }
  },
  "preStop": {
    "exec": {
      "command": [
        "/bin/sh",
        "-c",
        "sleep 13"
      ]
    }
  }
}

In this example:

  • postStart: Waits for pilot-agent and Envoy to fully start after the container is created.

  • preStop: Sleeps 13 seconds before the container is stopped, allowing in-flight requests to complete.

  4. Click Update Settings.

  5. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the istio-proxy container has the expected lifecycle hooks:

lifecycle:
  postStart:
    exec:
      command:
        - pilot-agent
        - wait
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - sleep 13

Outbound traffic policy

Control whether the sidecar proxy allows workloads to access services outside the ASM service registry. External services are those not registered in the mesh -- either through Kubernetes service discovery or manually declared ServiceEntry resources.

| Policy | Behavior |
|---|---|
| ALLOW_ANY (default) | The sidecar proxy forwards requests to external services. |
| REGISTRY_ONLY | The sidecar proxy blocks connections to external services. Requests to unregistered services return HTTP 502. |

Note

This is a global-level setting. To configure outbound traffic policy at the namespace or workload level, go to Traffic Management Center > Sidecar Traffic Configuration in the ASM console.

Configure in the console

  1. On the global tab of the Sidecar Proxy Setting page, expand Outbound Traffic Policy.

  2. Select REGISTRY_ONLY and click Update Settings.

  3. Redeploy workloads to apply the change.

Verify

Deploy a test workload with sidecar injection enabled. The following example uses a minimal sleep application:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true
Save the manifest as sleep.yaml, then apply it:

kubectl apply -f sleep.yaml -n default

Then attempt to access an external service from the test workload:

kubectl exec -it <sleep-pod-name> -c sleep -- curl www.aliyun.com -v

A 502 Bad Gateway response confirms that the sidecar proxy is blocking access to the unregistered external service:

> GET / HTTP/1.1
> Host: www.aliyun.com
< HTTP/1.1 502 Bad Gateway
< server: envoy

Traffic interception mode

By default, the sidecar proxy uses iptables REDIRECT mode to intercept inbound traffic. In this mode, the application sees the sidecar proxy's IP as the source address and cannot identify the original client IP.

Switch to TPROXY (transparent proxy) mode to preserve the original client source IP. See Preserve the source IP address of a client when the client accesses services in ASM.

Important

TPROXY mode does not support CentOS. If your Pods run on CentOS nodes, use the default REDIRECT mode.

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Sidecar Traffic Interception Mode.

  2. At the namespace or workload level, select the Sidecar Traffic Interception Mode checkbox.

  3. Select TPROXY and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm that both the istio-proxy environment variable and the istio-init startup argument use TPROXY:

# In istio-proxy container:
env:
  - name: PROXY_CONFIG
    value: >-
      {..."interceptionMode":"TPROXY",...}

# In istio-init container:
initContainers:
  - name: istio-init
    args:
      - '-m'
      - TPROXY

Monitoring

Log level

Set the Envoy proxy log level for the sidecar proxy container. The default level is info.

Available levels: trace, debug, info, warning, error, critical, off.

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Monitoring Statistics.

  2. At the namespace or workload level, select Log Level.

  3. Select error from the drop-down list and click Update Settings.

  4. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the --proxyLogLevel argument in the istio-proxy container:

containers:
  - name: istio-proxy
    args:
      - '--proxyLogLevel=error'

Custom Envoy metrics (proxyStatsMatcher)

By default, the sidecar proxy reports only a subset of Envoy metrics to minimize performance overhead. Use proxyStatsMatcher to collect additional metrics by matching metric names with prefixes, suffixes, or regular expressions.
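Before committing a regular expression, it can help to preview what it would pull in against sample stat names (the names below are hypothetical; the real names are visible on the Envoy admin /stats endpoint):

```shell
# Illustrative only: which Envoy stat names the inclusion regex
# .*outlier_detection.* would match, against a hypothetical sample.
printf '%s\n' \
  'cluster.outbound_httpbin.outlier_detection.ejections_active' \
  'cluster.outbound_httpbin.upstream_rq_total' \
  'cluster.outbound_httpbin.outlier_detection.ejections_enforced_total' |
  grep -E '.*outlier_detection.*'
# prints the two outlier_detection names; upstream_rq_total is filtered out
```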

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab and expand Monitoring Statistics.

  2. Select proxyStatsMatcher and Regular Expression Match.

  3. Enter .*outlier_detection.* to collect circuit breaker metrics.

  4. Click Update Settings.

  5. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the proxyStatsMatcher in the PROXY_CONFIG environment variable:

env:
  - name: PROXY_CONFIG
    value: >-
      {..."proxyStatsMatcher":{"inclusionRegexps":[".*outlier_detection.*"]},...}

Envoy runtime parameters

Downstream connection limits

By default, the sidecar proxy does not limit downstream connections, which can be exploited in denial-of-service scenarios. See ISTIO-SECURITY-2020-007 for details.

Set a maximum number of downstream connections to protect your workloads.

Configure in the console

  1. On the Sidecar Proxy Setting page, select a scope tab.

  2. In the Envoy Runtime Parameters section, enter 5000 next to Limits on Downstream Connections and click Update Settings.

  3. Redeploy workloads to apply the change.

Verify

kubectl get pod -n <namespace> <pod-name> -o yaml

Confirm the runtimeValues field in the PROXY_CONFIG environment variable:

env:
  - name: PROXY_CONFIG
    value: >-
      {..."runtimeValues":{"overload.global_downstream_max_connections":"5000"},...}

Manage workload-level configurations

At the workload level, multiple sidecar proxy configurations can coexist for different workloads within the same namespace.

Create a workload-level configuration

  1. On the Sidecar Proxy Setting page, click the workload tab and then click Create.

  2. Set the configuration items and click Create.

Update a workload-level configuration

  1. On the workload tab, find the target configuration and click Update in the Actions column.

  2. Modify the settings and click Update.

Delete a workload-level configuration

  1. On the workload tab, find the target configuration and click Delete in the Actions column.

  2. In the confirmation dialog, click OK.

Redeploy workloads

Most sidecar proxy configuration changes require a Pod restart. Redeploy affected workloads to apply the new settings.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Workloads > Deployments.

  3. Redeploy workloads using one of the following methods:

| Scenario | Steps |
|---|---|
| Single workload | Find the target workload and click More > Redeploy in the Actions column. |
| Multiple workloads | Select the target workloads and click Batch Redeploy at the bottom of the page. |

Verify global settings

After updating global-level sidecar proxy settings, verify that the ASM instance has applied the changes:

  1. In the left-side navigation pane, choose ASM Instance > Base Information.

  2. Confirm that the Status is Running.