Injecting a Sidecar proxy into application pods enhances network security, reliability, and observability for service-to-service calls. This topic describes how to configure the Sidecar proxy.
Sidecar proxy configuration levels
Sidecar proxy configurations are applied based on their scope and priority. The configuration levels, ordered from lowest to highest priority, are:
Global
Namespace
Workload: Applies to specific workloads selected by a label selector. You must define a label selector to target specific workloads; only the selected workloads apply the configuration during Sidecar proxy injection.
Pod
Configuration Priority Rules
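The priority rules can be sketched as a per-item merge in which a higher-priority level overrides lower ones, and unset items fall back to the global level. The following sketch is illustrative only (the function and key names are hypothetical, not the actual ASM implementation):

```python
def effective_sidecar_config(global_cfg, namespace_cfg=None, workload_cfg=None, pod_cfg=None):
    """Resolve per-item Sidecar proxy settings: a higher-priority level
    overrides lower ones; items left unset fall back to the global level."""
    effective = dict(global_cfg)
    # Apply levels from lowest to highest priority so later levels win.
    for level in (namespace_cfg, workload_cfg, pod_cfg):
        if level:
            effective.update({k: v for k, v in level.items() if v is not None})
    return effective

cfg = effective_sidecar_config(
    global_cfg={"concurrency": 2, "dns_proxy": False},
    namespace_cfg={"dns_proxy": True},
    workload_cfg={"concurrency": 4},
)
print(cfg)  # {'concurrency': 4, 'dns_proxy': True}
```

In this example, the workload-level concurrency setting and the namespace-level DNS proxy setting each override the global defaults for their respective items.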
Procedure
The following section describes how to configure Sidecar proxy at different levels.
Global level
Log on to the ASM console. In the left-side navigation pane, choose .
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose .
On the global tab, configure the proxy and then click Update Settings.
For more information about the configuration items of Sidecar proxy, see Configuration items of Sidecar proxy.
Check whether the Sidecar proxy configurations take effect.
In the left-side navigation pane, choose .
If the Status is Running, the global Sidecar proxy configurations take effect.
Namespace level
Log on to the ASM console. In the left-side navigation pane, choose .
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose .
Click the Namespace tab, select a namespace, set the related configuration items, and then click Update Settings. The update takes effect immediately.
The namespace level is not the lowest-priority configuration level of the Sidecar proxy. Therefore, Sidecar proxy configuration items at the namespace level have no default values: for items that you do not select and configure, the global-level configurations take effect by default. For more information, see Configuration items of Sidecar proxy.
Workload level
In the same namespace, you can create multiple Sidecar proxy configurations for different workloads.
Log on to the ASM console. In the left-side navigation pane, choose .
On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose .
Click the workload tab and then click Create.
Set the related configuration items, and then click Create.
Because the workload level is not the lowest level in the Sidecar proxy configuration hierarchy, workload-level configuration items have no default values; items that you do not configure fall back to the global-level configurations. For more information, see Configuration items of Sidecar proxy.
After creation, you can update or delete workload-level Sidecar proxy configurations.
To update a workload-level configuration
Select the workload tab.
In the Actions column, click Update for the target Sidecar proxy configuration.
Modify configurations.
Click Update to apply changes.
To delete a workload-level configuration
Select the workload tab.
In the Actions column, click Delete for the target Sidecar proxy configuration.
In the confirmation dialog, click OK to proceed.
(Optional) Redeploy workloads
Redeploy the pods to make the Sidecar proxy configurations take effect.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose .
Perform the following operations to redeploy workloads.
Scenario | Procedure |
For a single workload | Find the workload that you want to redeploy and click in the Actions column. |
For multiple workloads | Select multiple workloads in the Name column and click Batch Redeploy in the lower part of the page. |
Supported version for Sidecar proxy configuration items
If you cannot find a Sidecar proxy configuration item, refer to the following table to check whether you need to upgrade your ASM instance.
Important If your ASM version is V1.22 or later and your Kubernetes cluster version is V1.30 or later, Sidecar proxies are deployed as native sidecar containers. In this case, the Kubernetes cluster manages the lifecycle of the Sidecar proxy containers and overrides all lifecycle management configurations for them.
Configuration items of Sidecar proxy
You can configure the resource usage, traffic interception mode, Domain Name System (DNS) proxy, and lifecycle of Sidecar proxy.
Configure resources for Injected Istio proxy
Expand to learn about the configurations
Configuration reference
Configuration item | Description |
Resource Limits | The maximum CPU and memory resources that a Sidecar proxy container can use, measured in cores (CPU) and MiB (memory), respectively. |
Required Resources | The minimum CPU and memory resources that a Sidecar proxy container requires at runtime, measured in cores (CPU) and MiB (memory), respectively. |
Configuration example
On the Sidecar Proxy Setting page, select the target configuration level tab, and then click Resource Settings.
(Optional) If you selected the Namespace or workload tab in step 1, select Configure Resources for Injected Istio Proxy.
In the Resource Limits section, set CPU to 2 cores and Memory to 1025 MiB. In the Required Resources section, set CPU to 0.1 cores and Memory to 128 MiB.
Click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configured resources of the Sidecar proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    - proxy
    ...
    name: istio-proxy
    ...
    resources:
      limits:
        cpu: '2'
        memory: 1025Mi
      requests:
        cpu: 100m
        memory: 128Mi
  ...
The istio-proxy container is the Sidecar proxy container. Its resources field is set to the expected values, which indicates that the configuration of Configure Resources for Injected Istio Proxy has taken effect.
Configure Sidecar resources by ratio
Expand to learn about the configurations
Configuration reference
This configuration enables automatic resource allocation for the Sidecar proxy based on the specifications of the workload's containers. When enabled, it overrides the default resource settings of the Istio proxy injected into the pod. The following strategies are supported:
Allocate resource based on max container resources
Allocate resource based on a specified container
Important Regardless of the strategy:
If the baseline container lacks limits, the system assumes no limits for the Sidecar proxy and does not configure limits for it.
If the baseline container lacks requests, the system allocates the minimum feasible resources to ensure Sidecar functionality: at least 100m of CPU and at least 128Mi of memory.
If the calculated resources fall below these minimum requirements, the Sidecar proxy resources are automatically set to the minimum values.
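The allocation rules above can be sketched as follows. This is illustrative only, not the actual ASM controller implementation; resources are expressed as (cpu_millicores, memory_MiB) tuples:

```python
# Minimum feasible resources for the Sidecar proxy (per the rules above).
MIN_CPU_M = 100   # minimum CPU, in millicores
MIN_MEM_MI = 128  # minimum memory, in MiB

def sidecar_resources(app_requests, app_limits, ratio_percent):
    """Compute Sidecar proxy resources as a percentage of the baseline
    container's resources, with fallback to the minimum values."""
    ratio = ratio_percent / 100
    if app_requests is None:
        # Baseline container lacks requests: allocate the minimum feasible resources.
        requests = (MIN_CPU_M, MIN_MEM_MI)
    else:
        requests = (max(int(app_requests[0] * ratio), MIN_CPU_M),
                    max(int(app_requests[1] * ratio), MIN_MEM_MI))
    if app_limits is None:
        # Baseline container lacks limits: configure no limits for the Sidecar proxy.
        limits = None
    else:
        limits = (max(int(app_limits[0] * ratio), MIN_CPU_M),
                  max(int(app_limits[1] * ratio), MIN_MEM_MI))
    return requests, limits

# Standard configuration (300m/512Mi requests, 500m/1Gi limits) at a 50% ratio:
print(sidecar_resources((300, 512), (500, 1024), 50))  # ((150, 256), (250, 512))
```

At a 20% ratio, the calculated requests (60m CPU, 102Mi memory) fall below the minimums and are raised to 100m and 128Mi, matching the "Minimum configuration" example below.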
Configuration example
The following examples show configurations in different scenarios and their corresponding effects.
Standard configuration
resources:
  requests:
    cpu: 300m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi
Requests not configured
resources:
  limits:
    cpu: 500m
    memory: 1Gi
Limits not configured
resources:
  requests:
    cpu: 300m
    memory: 512Mi
On the Sidecar Proxy Setting page, select a configuration level tab, and then expand the Resource Settings.
Select Configure sidecar resources by ratio.
Configure Proportion of Resources, select Computing Policy (optional), and then click Update Settings.
Redeploy the workloads to make the configurations take effect.
Run the following command to view the configured resources of the Sidecar proxy:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
Standard configuration
If Proportion of Resources is set to 50 (meaning the Sidecar proxy is allocated 50% of the application container's resources), the final Sidecar resource allocation is as follows:
...
resources:
  requests:
    cpu: 150m
    memory: 256Mi
  limits:
    cpu: 250m
    memory: 512Mi
Minimum configuration
If the resource proportion is set to 20, the final Sidecar resource allocation is as follows:
...
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 204Mi
No requests configured
If the resource proportion is set to 50, the final Sidecar resource allocation is as follows:
...
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 512Mi
No limits configured
If the resource proportion is set to 50, the final Sidecar resource allocation is as follows:
...
resources:
  requests:
    cpu: 150m
    memory: 256Mi
Configure Resources for istio-init Container
Expand to learn about the configurations
Configuration reference
This configuration item specifies the minimum required and maximum allowable CPU and memory resources for the istio-init container in Pods with injected Sidecar proxy. The istio-init container is an init container executed when the Pod starts, responsible for configuring traffic interception routing rules and other prerequisites for the Sidecar proxy container.
Configuration item | Description |
Resource Limits | The maximum CPU and memory resources that the istio-init container can use, measured in cores (CPU) and MiB (memory), respectively. |
Required Resources | The minimum CPU and memory resources that the istio-init container requires at runtime, measured in cores (CPU) and MiB (memory), respectively. |
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Resource Settings.
(Optional) Perform this step if you selected Namespace or workload in step 1. Then, in the Resource Settings section, select Configure Resources for istio-init Container.
In the Resource Limits section, set CPU to 1 core and Memory to 512 MiB. In the Required Resources section, set CPU to 0.1 cores and Memory to 128 MiB. Then, click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the resource configurations of the istio-init container:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    ...
    name: istio-init
    resources:
      limits:
        cpu: '1'
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 128Mi
  ...
The resources field of the istio-init container in the pod is set to the expected resource values. This indicates that the configurations of Configure Resources for istio-init Container take effect.
Set ACK resources that can be dynamically overcommitted for Sidecar proxy
Expand to learn about the configurations
Configuration reference
This configuration setting specifies the allocation of ACK dynamically over-allocated resources for the injected Istio proxy and the istio-init container. For more information, see Dynamic resource overcommitment.
The configuration items are configured in the same way as those of Configure resources for Injected Istio proxy and Configure Resources for istio-init Container. After the preceding configurations are complete, resources that can be dynamically overcommitted, instead of regular CPU and memory resources, are allocated to the Sidecar proxy and istio-init containers in a pod if the pod has the koordinator.sh/qosClass label. This label indicates that dynamic overcommitment of ACK resources is enabled.
Note In ACK resources, the unit of CPU resources is millicores.
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Resource Settings.
Select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy, configure the settings, and then click Update Settings.
Configuration item | Child configuration item | Description |
Configure Resources for Injected Sidecar Proxy (ACK Dynamically Overcommitted Resources) | Resource Limits | Set CPU to 2000 millicores and Memory to 2048 MiB. |
Required Resources | Set CPU to 200 millicores and Memory to 256 MiB. |
Configure Resources for istio-init Container (ACK Dynamically Overcommitted Resources) | Resource Limits | Set CPU to 1000 millicores and Memory to 1024 MiB. |
Required Resources | Set CPU to 100 millicores and Memory to 128 MiB. |
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the resource configurations of the Sidecar proxy and istio-init containers:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
metadata:
  ...
  labels:
    koordinator.sh/qosClass: BE
spec:
  containers:
  - args:
    ...
    name: istio-proxy
    ...
    resources:
      limits:
        kubernetes.io/batch-cpu: 2k
        kubernetes.io/batch-memory: 2Gi
      requests:
        kubernetes.io/batch-cpu: '200'
        kubernetes.io/batch-memory: 256Mi
  ...
  initContainers:
  - args:
    ...
    name: istio-init
    resources:
      limits:
        kubernetes.io/batch-cpu: 1k
        kubernetes.io/batch-memory: 1Gi
      requests:
        kubernetes.io/batch-cpu: '100'
        kubernetes.io/batch-memory: 128Mi
  ...
Both the istio-proxy container (Sidecar proxy container) and the istio-init container in the pod contain the resources field and have the configured resource values. This indicates that the configurations of Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy take effect.
Number of Sidecar proxy threads
Expand to learn about the configurations
Configuration reference
This configuration item specifies the number of worker threads for the Sidecar proxy container. The value must be a non-negative integer. If you set it to 0, the number of worker threads is automatically determined based on the CPU resources requested by the Sidecar proxy or its CPU resource limits, with resource limits taking precedence over requested resources.
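The resolution order above can be sketched as follows. The derivation from CPU settings and the fallback value are assumptions for illustration, not the published pilot-agent implementation; CPU values are in millicores:

```python
import math

def worker_threads(concurrency, cpu_request_m=None, cpu_limit_m=None):
    """Resolve the Sidecar proxy worker thread count (assumed logic)."""
    if concurrency > 0:
        return concurrency  # an explicit positive setting always wins
    # When concurrency is 0, derive the count from CPU settings;
    # limits take precedence over requests.
    cpu_m = cpu_limit_m if cpu_limit_m is not None else cpu_request_m
    if cpu_m is None:
        return 2  # assumed fallback when no CPU settings exist
    return max(1, math.ceil(cpu_m / 1000))  # e.g. a 2-core limit -> 2 threads

print(worker_threads(3))                                       # 3
print(worker_threads(0, cpu_request_m=500, cpu_limit_m=2000))  # 2
```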
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Resource Settings.
(Optional) Perform this step if you selected Namespace or workload in step 1. Set Number of Sidecar Proxy Threads to 3 and click Update Settings.
This configuration indicates that the Sidecar proxy container will start three worker threads when it is running.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration of Number of Sidecar Proxy Threads:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  containers:
  - args:
    - proxy
    - sidecar
    - '--domain'
    - $(POD_NAMESPACE).svc.cluster.local
    - '--proxyLogLevel=warning'
    - '--proxyComponentLogLevel=misc:error'
    - '--log_output_level=default:info'
    - '--concurrency'
    - '3'
    ...
    name: istio-proxy
  ...
The concurrency parameter of the istio-proxy container is set to 3. This indicates that the configuration of Number of Sidecar Proxy Threads takes effect.
Configure addresses to which external access is redirected to Sidecar proxy
Expand to learn about the configuration
Configuration reference
Configure a list of IP address ranges separated by commas (,), where each IP address range is specified in CIDR notation. When a workload with an injected Sidecar proxy accesses other services, only requests with destination IP addresses within the configured ranges are intercepted by the Sidecar proxy container. Requests outside these ranges will bypass the Sidecar proxy and be sent directly to their destinations. The default configuration is set to *, which means the Sidecar proxy container intercepts all outbound traffic from the workload.
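The redirect rule above can be sketched with Python's standard ipaddress module. This is illustrative only; the function name is hypothetical and the actual interception happens in iptables rules set up by istio-init:

```python
import ipaddress

def is_redirected(dest_ip, include_ranges):
    """Outbound traffic is redirected to the Sidecar proxy only when the
    destination IP falls into one of the configured CIDR ranges;
    '*' intercepts everything."""
    if include_ranges.strip() == "*":
        return True
    dest = ipaddress.ip_address(dest_ip)
    return any(dest in ipaddress.ip_network(cidr)
               for cidr in include_ranges.split(","))

print(is_redirected("192.168.1.7", "192.168.0.0/16,10.1.0.0/24"))  # True
print(is_redirected("8.8.8.8", "192.168.0.0/16,10.1.0.0/24"))      # False
```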
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Addresses to Which External Access Is Redirected to Sidecar Proxy.
Set Addresses to Which External Access Is Redirected to Sidecar Proxy to 192.168.0.0/16,10.1.0.0/24 and click Update Settings.
This configuration indicates that the Sidecar proxy container will intercept requests whose destination IP addresses are within the 192.168.0.0/16 and 10.1.0.0/24 CIDR blocks.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '192.168.0.0/16,10.1.0.0/24'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    ...
    name: istio-init
  ...
The runtime parameter -i of the istio-init container is set to 192.168.0.0/16,10.1.0.0/24. This indicates that the configuration of Addresses to Which External Access Is Redirected to Sidecar Proxy takes effect.
Configure addresses to which external access is not redirected to Sidecar proxy
Expand to learn about the configuration
Configuration reference
Configure a list of IP address ranges separated by comma (,), where each range is specified in CIDR notation. When a workload with an injected Sidecar proxy accesses other services, the Sidecar proxy container intercepts outbound traffic. However, if the destination IP address of a request falls within the configured CIDR ranges, the request will not be intercepted by the Sidecar proxy container.
Important If an IP address is specified in both Addresses to Which External Access Is Not Redirected to Sidecar Proxy and Addresses to Which External Access Is Redirected to Sidecar Proxy, the Sidecar proxy container does not intercept requests whose destination address is that IP address. For more information, see Configure addresses to which external access is redirected to Sidecar proxy.
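The precedence stated in the note can be sketched as follows (illustrative only; function name hypothetical): an exclusion range always wins over an inclusion range.

```python
import ipaddress

def intercept_outbound(dest_ip, include_cidrs="*", exclude_cidrs=""):
    """Decide whether an outbound request is intercepted: excluded
    addresses bypass the Sidecar proxy even if they also match the
    inclusion ranges."""
    dest = ipaddress.ip_address(dest_ip)

    def in_ranges(cidrs):
        return any(dest in ipaddress.ip_network(c)
                   for c in cidrs.split(",") if c)

    if in_ranges(exclude_cidrs):
        return False  # exclusion wins over inclusion
    return include_cidrs.strip() == "*" or in_ranges(include_cidrs)

# 10.1.0.5 matches both lists, so the exclusion takes precedence:
print(intercept_outbound("10.1.0.5", "10.1.0.0/24", "10.1.0.0/24"))  # False
print(intercept_outbound("10.1.1.5", "*", "10.1.0.0/24"))            # True
```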
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Addresses to Which External Access Is Not Redirected to Sidecar Proxy.
Set Addresses to Which External Access Is Not Redirected to Sidecar Proxy to 10.1.0.0/24 and click Update Settings.
This configuration indicates that the sidecar proxy container will not intercept the requests whose destination IP addresses are within the 10.1.0.0/24 CIDR block.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - '192.168.0.1/32,10.1.0.0/24'
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    ...
    name: istio-init
  ...
The runtime parameter -x of the istio-init container is set to 192.168.0.1/32,10.1.0.0/24. 192.168.0.1/32 is the CIDR block of hosts configured by default. 10.1.0.0/24 is the same as the IP address range specified in the Sidecar proxy configuration. This indicates that the configuration of Addresses to Which External Access Is Not Redirected to Sidecar Proxy takes effect.
Configure ports on which inbound traffic redirected to Sidecar proxy
Expand to learn about the configuration
Configuration reference
Configure a list of port numbers that are separated by commas (,). The Sidecar proxy container intercepts inbound traffic whose destination ports are in the list. The default value is *, which indicates that the Sidecar proxy container intercepts all inbound traffic of the workload.
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) Perform this step if you selected workload or Namespace in step 1. In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Ports on Which Inbound Traffic Redirected to Sidecar Proxy.
Set Ports on Which Inbound Traffic Redirected to Sidecar Proxy to 80,443 and click Update Settings.
This configuration indicates that the Sidecar proxy container will intercept requests destined for ports 80 and 443 of the corresponding workload.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '80,443'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    ...
    name: istio-init
  ...
The runtime parameter -b of the istio-init container is set to 80,443, which is the same as the inbound ports set in the Sidecar proxy configuration. This indicates that the configuration of Ports on Which Inbound Traffic Redirected to Sidecar Proxy takes effect.
Configure ports on which outbound traffic redirected to Sidecar proxy
Expand to learn about the configuration
Configuration reference
Configure a list of port numbers that are separated by commas (,). The Sidecar proxy container intercepts outbound traffic whose destination ports are in the list.
Important Even if the destination port is included in the specified list, the Sidecar proxy container does not intercept a request when both of the following conditions are met:
1. Both this configuration item and Addresses to Which External Access Is Not Redirected to Sidecar Proxy or Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy are configured.
2. The destination IP address of the request is included in Addresses to Which External Access Is Not Redirected to Sidecar Proxy, or the destination service port of the request is included in Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.
For more information, see Configure addresses to which external access is not redirected to Sidecar proxy and Configure ports on which outbound traffic not redirected to Sidecar proxy.
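The decision described in the note can be sketched as follows (illustrative only; function and parameter names hypothetical): a port in the outbound redirect list is still bypassed when the destination IP is excluded or the port appears in the outbound exclusion list.

```python
def intercept_outbound_port(dest_port, dest_ip_excluded, redirect_ports, exclude_ports):
    """Decide whether outbound traffic to dest_port is intercepted."""
    if dest_ip_excluded or dest_port in exclude_ports:
        return False  # exclusions always win
    return dest_port in redirect_ports

print(intercept_outbound_port(443, False, {80, 443}, set()))  # True
print(intercept_outbound_port(443, False, {80, 443}, {443}))  # False: exclusion wins
```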
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Ports on Which Outbound Traffic Redirected to Sidecar Proxy.
Set Ports on Which Outbound Traffic Redirected to Sidecar Proxy to 80,443 and click Update Settings.
This configuration indicates that the Sidecar proxy container intercepts requests destined for ports 80 and 443.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '-q'
    - '80,443'
    - '--log_output_level=default:info'
    ...
    name: istio-init
  ...
The runtime parameter -q of the istio-init container is set to 80,443, which is the same as the outbound ports set in the Sidecar proxy configuration. This indicates that the configuration of Ports on Which Outbound Traffic Redirected to Sidecar Proxy takes effect.
Configure ports on which inbound traffic not redirected to Sidecar proxy
Expand to learn about the configuration
Configuration reference
Configure a list of ports that are separated by commas (,). Inbound traffic destined for the ports in the list will not be intercepted by the Sidecar proxy container.
Important This configuration item takes effect only when Ports on Which Inbound Traffic Redirected to Sidecar Proxy is set to the default value *, which indicates that the Sidecar proxy container intercepts all inbound traffic.
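The resulting -d argument of istio-init can be sketched as follows. This is an assumed composition for illustration: the ports you exclude are appended to the Sidecar proxy's own default ports, and the exclusion applies only when inbound redirection is set to the default value *.

```python
# Ports used by the Sidecar proxy itself; never intercepted by default.
DEFAULT_EXCLUDED = "15090,15021,15081,9191,15020"

def inbound_exclude_arg(user_ports, inbound_redirect="*"):
    """Compose the '-d' argument from the user-excluded inbound ports
    (assumed behavior, not the published istio-init implementation)."""
    if inbound_redirect != "*" or not user_ports:
        return DEFAULT_EXCLUDED  # exclusion ignored unless '-b' is '*'
    return DEFAULT_EXCLUDED + "," + user_ports

print(inbound_exclude_arg("8000"))  # 15090,15021,15081,9191,15020,8000
```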
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy.
Set Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy to 8000, and then click Update Settings.
This configuration indicates that the Sidecar proxy container no longer intercepts requests destined for port 8000 of the workload.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020,8000'
    - '--log_output_level=default:info'
    ...
    name: istio-init
  ...
The runtime parameter -d of the istio-init container is set to 15090,15021,15081,9191,15020,8000. Ports 15090, 15021, 15081, 9191, and 15020 are ports used by the Sidecar proxy itself; by default, the Sidecar proxy container does not intercept inbound traffic destined for these ports. Port 8000 is the inbound port set in the Sidecar proxy configuration. This indicates that the configuration of Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy takes effect.
Configure ports on which outbound traffic not redirected to Sidecar proxy
Expand to learn about the configuration
Configuration reference
Configure a list of ports that are separated by commas (,). Outbound traffic destined for the ports in the list will not be intercepted by the Sidecar proxy, regardless of whether the IP addresses of the destination services are in Addresses to Which External Access Is Redirected to Sidecar Proxy and whether the ports of the destination services are in Ports on Which Outbound Traffic Redirected to Sidecar Proxy.
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.
Set Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy to 8000, and then click Update Settings.
This configuration indicates that the Sidecar proxy container no longer intercepts service requests destined for port 8000.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - REDIRECT
    - '-i'
    - '*'
    - '-x'
    - 192.168.0.1/32
    - '-b'
    - '*'
    - '-d'
    - '15090,15021,15081,9191,15020'
    - '--log_output_level=default:info'
    - '-o'
    - '8000'
    ...
    name: istio-init
  ...
The runtime parameter -o of the istio-init container is set to 8000, which is the same as the port set in the configuration item Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy. This indicates that the configuration item takes effect.
Enable DNS proxy
Expand to learn about the configuration
Configuration reference
If the DNS proxy is enabled, the Sidecar proxy intercepts DNS requests from the workload to improve the performance and availability of ASM. All DNS requests from the workload are redirected to the Sidecar proxy. Because the Sidecar proxy caches the mappings between domain names and IP addresses, in most cases it can return DNS responses to the workload without querying a remote DNS service; it forwards only the requests that it cannot resolve itself. For more information, see Use the DNS proxy feature in an ASM instance.
Important Due to network permission restrictions, you cannot enable the DNS proxy feature for Sidecar proxies in ACK Serverless clusters or for Elastic Container Instance-based pods in ACK clusters.
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click DNS Proxy.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Enable DNS Proxy, turn on the switch, and then click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
spec:
  containers:
  - args:
    - proxy
    - sidecar
    - '--domain'
    - $(POD_NAMESPACE).svc.cluster.local
    - '--proxyLogLevel=warning'
    - '--proxyComponentLogLevel=misc:error'
    - '--log_output_level=default:info'
    - '--concurrency'
    - '3'
    env:
    ...
    - name: ISTIO_META_DNS_AUTO_ALLOCATE
      value: 'true'
    - name: ISTIO_META_DNS_CAPTURE
      value: 'true'
    ...
    name: istio-proxy
The ISTIO_META_DNS_AUTO_ALLOCATE and ISTIO_META_DNS_CAPTURE environment variables of the istio-proxy container are set to true, which indicates that the configuration of DNS Proxy takes effect.
Manage environment variables for Sidecar proxy
Expand to learn about the configuration
Configuration reference
These configuration items are used to add additional environment variables in the Sidecar proxy container. You can configure the following environment variables for the Sidecar proxy.
Configuration item | Description |
Sidecar Graceful Shutdown | If you turn on this switch, the environment variable EXIT_ON_ZERO_ACTIVE_CONNECTIONS: "true" is added to the Sidecar proxy container. This environment variable works as follows: when the Sidecar proxy container is terminated, the pilot-agent process in the container first stops the Envoy proxy from listening for inbound traffic and waits for a default period of 5 seconds. The pilot-agent process then polls the number of active connections of the Envoy proxy until it reaches zero, and finally terminates the Envoy proxy process. Configuring EXIT_ON_ZERO_ACTIVE_CONNECTIONS improves the termination process of the Sidecar proxy container in common situations: it reduces the number of requests dropped during termination and minimizes the termination time. |
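The shutdown sequence described above can be sketched as follows. The proxy object here is a toy stand-in for illustration only; it is not a real pilot-agent or Envoy API:

```python
import time

class FakeProxy:
    """Toy stand-in for Envoy, used only to make the sketch runnable."""
    def __init__(self, connections):
        self.connections = connections
        self.terminated = False
    def stop_inbound_listeners(self):
        pass  # in Envoy, this would close the inbound listeners
    def active_connections(self):
        if self.connections:
            self.connections -= 1  # pretend one connection drains per poll
        return self.connections
    def terminate(self):
        self.terminated = True

def graceful_shutdown(proxy, grace_period=0.01, poll_interval=0.01):
    # 1) Stop listening for new inbound traffic.
    proxy.stop_inbound_listeners()
    # 2) Wait the initial grace period (5 seconds by default in pilot-agent).
    time.sleep(grace_period)
    # 3) Poll active connections until zero, then terminate Envoy.
    while proxy.active_connections() > 0:
        time.sleep(poll_interval)
    proxy.terminate()

p = FakeProxy(connections=3)
graceful_shutdown(p)
print(p.terminated)  # True
```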
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Manage Environment Variables for Sidecar Proxy.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Sidecar Graceful Shutdown.
Turn on the switch and then click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    ...
    env:
    - name: EXIT_ON_ZERO_ACTIVE_CONNECTIONS
      value: 'true'
    name: istio-proxy
  ...
The EXIT_ON_ZERO_ACTIVE_CONNECTIONS environment variable is added to the environment variables of the istio-proxy container in the pod. This indicates that the configuration of Manage Environment Variables for Sidecar Proxy takes effect.
Sidecar Graceful Startup
Expand to learn about the configuration
Configuration reference
Sidecar Graceful Startup is used to manage the lifecycle of the Sidecar proxy. It is enabled by default, which means that in a pod injected with a Sidecar proxy, the Sidecar proxy container must start before the application containers. This ensures that traffic destined for the application containers is not lost while the Sidecar proxy has not yet started.
If this configuration item is disabled, the Sidecar proxy container and the application containers in the pod start at the same time. When many pods are deployed in a cluster, Sidecar proxy containers may start slowly because of the heavy load on the API server. You can disable this configuration item to speed up deployment.
Configuration example
The following example describes how to disable Sidecar Graceful Startup on the global tab.
On the Sidecar Proxy Setting page, select the global tab, and then click Lifecycle Management.
Turn off the switch next to Sidecar Graceful Startup and click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - command:
    ...
    name: sleep
  - ...
    env:
    - name: PROXY_CONFIG
      value: >-
        {..."holdApplicationUntilProxyStarts":false,...}
    ...
    name: istio-proxy
...
After Sidecar Graceful Startup is disabled, the istio-proxy container is no longer required to start before the application containers, and the default lifecycle field is not declared on the container.
Configure Sidecar proxy drain duration at pod termination
Expand to learn about the configuration
Configuration reference
Sidecar Proxy Drain Duration at Pod Termination manages the lifecycle of the Sidecar proxy.
After a pod starts terminating, services no longer route new traffic to it. The Sidecar proxy container continues to process existing inbound connections for a period after receiving the exit signal, but it does not accept new inbound traffic. This period is called the Sidecar Proxy Drain Duration at Pod Termination. The default value is 5s. The value must be specified in seconds, for example, 10s.
If the response time of an API provided by the service being stopped exceeds the drain duration, all existing inbound and outbound connections are terminated even if requests on them are still being processed. As a result, those requests are lost. In this case, set the drain duration to a larger value so that in-flight inbound and outbound traffic can finish processing.
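In standard Istio, the same setting maps to the terminationDrainDuration field of ProxyConfig and can be overridden for a single pod with the proxy.istio.io/config annotation. A hedged sketch with an illustrative 30s value:

```yaml
# Illustrative pod template annotation; gives long-running requests
# 30 seconds to drain before the Sidecar proxy exits.
metadata:
  annotations:
    proxy.istio.io/config: |
      terminationDrainDuration: 30s
```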
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Lifecycle Management.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Sidecar Proxy Drain Duration at Pod Termination.
Set it to 10s, and click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    ...
    env:
    - name: TERMINATION_DRAIN_DURATION_SECONDS
      value: '10'
    ...
    - name: PROXY_CONFIG
      value: >-
        {..."terminationDrainDuration":"10s"}
    ...
    name: istio-proxy
...
The istio-proxy container in the pod is configured with an environment variable named TERMINATION_DRAIN_DURATION_SECONDS with the value of 10, and terminationDrainDuration is 10s in the PROXY_CONFIG. This indicates that the configuration of Sidecar Proxy Drain Duration at Pod Termination takes effect.
Configure lifecycle of Sidecar proxy
Expand to learn about the configuration
Configuration reference
To customize the lifecycle hook of the Sidecar proxy container, enter the container lifecycle hook field (lifecycle) in JSON format. This field will replace the default container lifecycle hook field configured for the Sidecar proxy container. For more information, see Container Lifecycle Hooks.
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Lifecycle Management.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Lifecycle of Sidecar Proxy.
In the edit box under Lifecycle of Sidecar Proxy, configure the following YAML file and then click Update Settings.
This YAML file configures the postStart and preStop hook parameters.
postStart: indicates that after the Sidecar proxy container starts, the pilot-agent wait command blocks until the pilot-agent and the Envoy proxy have completely started.
preStop: indicates that the Sidecar proxy container sleeps for 13s before it is stopped.
{
  "postStart": {
    "exec": {
      "command": [
        "pilot-agent",
        "wait"
      ]
    }
  },
  "preStop": {
    "exec": {
      "command": [
        "/bin/sh",
        "-c",
        "sleep 13"
      ]
    }
  }
}
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    ...
    lifecycle:
      postStart:
        exec:
          command:
          - pilot-agent
          - wait
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - sleep 13
    name: istio-proxy
...
The lifecycle hook field (lifecycle) of the istio-proxy container in the pod is changed to the expected configuration. This indicates that the configuration of Lifecycle of Sidecar Proxy takes effect.
Configure outbound traffic policy
Expand to learn about the configuration
Configuration reference
This configuration item sets the outbound traffic policy for the Sidecar proxy container. External services are services that are not defined in the service registry of ASM. By default, services in the Kubernetes clusters managed by ASM are registered services. You can manually register external services with ASM by declaring service entry (ServiceEntry) resources.
This configuration item can be set to one of the following two values:
ALLOW_ANY: the default outbound traffic policy. The Sidecar proxy allows access to external services and forwards requests destined for external services.
REGISTRY_ONLY: The Sidecar proxy denies access to external services. The workload cannot establish connections to external services.
Note This configuration item is global-level. To configure an outbound traffic policy at the namespace or workload level, you can log on to the ASM console, find the desired ASM instance, navigate to , and configure the related parameters.
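With REGISTRY_ONLY in effect, you can still allow access to a specific external service by registering it with a ServiceEntry. The following is a sketch that registers www.aliyun.com on port 80; the resource name and port are illustrative:

```yaml
# Illustrative ServiceEntry; adds an external host to the service
# registry so the Sidecar proxy permits outbound access to it.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: aliyun-ext
spec:
  hosts:
  - www.aliyun.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  location: MESH_EXTERNAL
  resolution: DNS
```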
Configuration example
On the global tab of the Sidecar Proxy Setting page, click Outbound Traffic Policy, select REGISTRY_ONLY, and then click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Create a sleep.yaml file that contains the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true
---
Run the following command to deploy the sleep application:
kubectl apply -f sleep.yaml -n default
Run the following command to use the sleep application to access external services:
kubectl exec -it {Name of the pod for the sleep service} -c sleep -- curl www.aliyun.com -v
Expected output:
* Trying *********...
* Connected to www.aliyun.com (********) port 80 (#0)
> GET / HTTP/1.1
> Host: www.aliyun.com
> User-Agent: curl/7.87.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< date: Mon,********* 03:25:00 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host www.aliyun.com left intact
The HTTP status code 502 is returned, indicating that the sleep application for which the Sidecar proxy is injected cannot access the external service www.aliyun.com. This indicates that the configuration of Outbound Traffic Policy takes effect.
Configure Sidecar traffic interception mode
Expand to learn about the configuration
Configuration reference
This configuration item sets the inbound traffic interception strategy for the Sidecar proxy. By default, the Sidecar proxy container uses an iptables redirect policy to intercept inbound traffic destined for the application workload. After interception via redirection, the application will only see the Sidecar proxy container's IP as the source IP of the request and will not be able to identify the original client source IP.
By changing the inbound traffic interception strategy to transparent proxy mode (TPROXY), ASM allows the Sidecar proxy container to intercept inbound traffic using iptables' transparent proxy mode. After this configuration, the application will be able to see the original client source IP. For more information, see Preserve the source IP address of a client when the client accesses services in ASM.
Important The transparent proxy mode does not support CentOS. If your pod runs on the CentOS operating system, use the redirect mode instead.
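In standard Istio, the interception mode can also be overridden for a single pod with the sidecar.istio.io/interceptionMode annotation. A hedged sketch of a per-pod override:

```yaml
# Illustrative pod template annotation; switches this pod's Sidecar
# proxy to transparent proxy (TPROXY) interception.
metadata:
  annotations:
    sidecar.istio.io/interceptionMode: TPROXY
```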
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Sidecar Traffic Interception Mode.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Sidecar Traffic Interception Mode.
On the right side of Sidecar Traffic Interception Mode, select TPROXY, and then click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration of Sidecar Traffic Interception Mode:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    ...
    env:
    - name: PROXY_CONFIG
      value: >-
        {..."interceptionMode":"TPROXY",...}
    - name: ISTIO_META_POD_PORTS
      value: |-
        [
        ]
    ...
    name: istio-proxy
  ...
  initContainers:
  - args:
    - istio-iptables
    - '-p'
    - '15001'
    - '-z'
    - '15006'
    - '-u'
    - '1337'
    - '-m'
    - TPROXY
    ...
    name: istio-init
...
"interceptionMode":"TPROXY" is recorded in the environment variable of the istio-proxy container in the pod. The istio-init container also uses the TPROXY setting to run the initialization commands. This indicates that the configuration of Sidecar Traffic Interception Mode takes effect.
Configure log level
Expand to learn about the configuration
Configuration reference
This configuration item is used to set the log level of the Sidecar proxy container. By default, the log level of the Sidecar proxy is info. You can set the log level to one of the seven levels: info, debug, trace, warning, error, critical, and off.
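In standard Istio, the proxy log level can also be set for a single pod with the sidecar.istio.io/logLevel annotation. A hedged sketch:

```yaml
# Illustrative pod template annotation; raises this pod's Sidecar
# proxy log threshold so only error and more severe logs are emitted.
metadata:
  annotations:
    sidecar.istio.io/logLevel: error
```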
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Monitoring Statistics.
(Optional) Perform this step if you selected workload or Namespace in step 1. Select Log Level.
Select error from the Log Level drop-down list, and then click Update Settings.
This configuration indicates that the Sidecar proxy displays logs at the error or higher levels.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    - proxy
    - sidecar
    - '--domain'
    - $(POD_NAMESPACE).svc.cluster.local
    - '--proxyLogLevel=error'
    ...
    name: istio-proxy
...
The runtime parameter --proxyLogLevel of the istio-proxy container is set to error, which indicates that the configuration of Log Level takes effect.
Configure proxyStatsMatcher
Expand to learn about the configuration
Description of configuration items
This configuration item defines the custom Envoy statistics metrics reported by the Sidecar proxy. Envoy, as the technical implementation of the Sidecar proxy, can collect and report a wide range of metrics. However, ASM defaults to enabling only a subset of these metrics to minimize performance overhead on the Sidecar proxy.
You can use this configuration item to specify additional metrics that the Sidecar proxy should collect and expose by matching prefixes, suffixes, or regular expressions.
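In standard Istio, proxyStatsMatcher is a ProxyConfig field and can likewise be set for a single pod with the proxy.istio.io/config annotation. A hedged sketch that enables the same circuit breaker metrics:

```yaml
# Illustrative pod template annotation; adds outlier detection
# statistics to the metrics the Sidecar proxy collects and exposes.
metadata:
  annotations:
    proxy.istio.io/config: |
      proxyStatsMatcher:
        inclusionRegexps:
        - ".*outlier_detection.*"
```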
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab, and then click Monitoring Statistics.
Select proxyStatsMatcher and Regular Expression Match, and set Regular Expression Match to .*outlier_detection.*.
This configuration indicates that the Sidecar proxy collects the statistics of circuit breaker metrics.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration of proxyStatsMatcher:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    ...
    env:
    - name: PROXY_CONFIG
      value: >-
        {..."proxyStatsMatcher":{"inclusionRegexps":[".*outlier_detection.*"]},...}
...
The custom metrics are updated in the environment variables of the istio-proxy container in the pod. This indicates that the configuration of proxyStatsMatcher takes effect.
Configure Envoy runtime parameters
Expand to learn about the configuration
Configuration reference
This configuration item is used to define runtime parameters of Envoy proxy processes in the Sidecar proxy container.
Configuration item | Description |
Limits on Downstream Connections | By default, a Sidecar proxy does not limit the number of downstream connections. This may be exploited by malicious activities. For more information, see ISTIO-SECURITY-2020-007. You can configure the maximum number of downstream connections allowed by a Sidecar proxy. |
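The expected output later in this section shows that this setting lands in the runtimeValues map of the proxy's PROXY_CONFIG. Assuming the standard Istio ProxyConfig annotation accepts the same field, a per-pod override might look like the following hedged sketch:

```yaml
# Illustrative pod template annotation (assumption: runtimeValues is
# accepted via proxy.istio.io/config); caps downstream connections at 5000.
metadata:
  annotations:
    proxy.istio.io/config: |
      runtimeValues:
        overload.global_downstream_max_connections: "5000"
```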
Configuration example
On the Sidecar Proxy Setting page, select a configuration level tab.
In the Envoy Runtime Parameters section, enter 5000 in the input box on the right side of Limits on Downstream Connections and then click Update Settings.
Redeploy the workloads to make the Sidecar proxy configurations take effect.
Run the following command to view the configuration of Envoy Runtime Parameters:
kubectl get pod -n <Namespace> <Pod name> -o yaml
Expected output:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - args:
    ...
    env:
    - name: PROXY_CONFIG
      value: >-
        {"concurrency":2,"configPath":"/etc/istio/proxy","discoveryAddress":"istiod-1-22-6.istio-system.svc:15012","holdApplicationUntilProxyStarts":true,"interceptionMode":"REDIRECT","proxyMetadata":{"BOOTSTRAP_XDS_AGENT":"false","DNS_AGENT":"","EXIT_ON_ZERO_ACTIVE_CONNECTIONS":"true"},"runtimeValues":{"overload.global_downstream_max_connections":"5000"},"terminationDrainDuration":"5s","tracing":{"zipkin":{"address":"zipkin.istio-system:9411"}}}
    name: istio-proxy
...
The "runtimeValues":{"overload.global_downstream_max_connections":"5000"} field is added to the PROXY_CONFIG environment variable of the istio-proxy container in the pod. This indicates that the configuration of Envoy Runtime Parameters takes effect.