Sidecar proxies intercept service-to-service traffic to add mutual TLS, load balancing, retries, and observability -- without changing application code. This topic describes how to configure sidecar proxy settings at the global, namespace, and workload levels through the ASM console.
Prerequisites
How configuration levels work
ASM applies sidecar proxy settings at four scope levels. A higher-priority level overrides conflicting settings from lower-priority levels.
| Priority | Level | Scope | How to configure |
|---|---|---|---|
| 4 (highest) | Pod | Individual Pods | Add annotations to the Pod spec. See Configure a sidecar proxy by adding resource annotations. |
| 3 | Workload | Pods matching a label selector within a namespace | ASM console > Sidecar Proxy Setting > workload tab |
| 2 | Namespace | All Pods in a namespace | ASM console > Sidecar Proxy Setting > Namespace tab |
| 1 (lowest) | Global | All Pods across all namespaces | ASM console > Sidecar Proxy Setting > global tab |
For example, if both a global setting and a namespace-level setting exist for the default namespace, the namespace-level setting takes precedence for workloads deployed in default.
At the namespace and workload levels, unconfigured items inherit the global-level value. These levels have no independent defaults.
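For a Pod-level override, you add annotations directly to the Pod template. The following sketch uses the standard Istio injection annotations for sidecar resources; the workload name and image are placeholders, and you should verify that your ASM version supports these annotations:

```yaml
# Hypothetical Pod-level override: these standard Istio injection
# annotations set sidecar proxy resources for this Pod only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder workload name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.istio.io/proxyCPU: 200m
        sidecar.istio.io/proxyMemory: 256Mi
        sidecar.istio.io/proxyCPULimit: '1'
        sidecar.istio.io/proxyMemoryLimit: 1Gi
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # placeholder image
```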
When to use each level:
| Level | Use when |
|---|---|
| Global | You want a baseline configuration for all workloads in the mesh |
| Namespace | A specific namespace requires different resource limits, traffic rules, or lifecycle settings |
| Workload | Individual workloads have unique requirements (for example, higher concurrency or custom drain duration) |
| Pod | You need one-off overrides without creating a workload-level configuration |
Configuration items at a glance
Use this table to find the setting you need and verify that your ASM instance meets the version requirement.
| Category | Configuration item | Global | Namespace | Workload |
|---|---|---|---|---|
| Resource settings | Configure resources for Injected Istio proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| | Configure Sidecar Resources Proportionally | 1.24.6.83 | 1.24.6.83 | 1.24.6.83 |
| | Configure Resources for istio-init Container | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| | Set ACK resources that can be dynamically overcommitted for Sidecar proxy | 1.16.3.47 | 1.16.3.47 | 1.16.3.47 |
| | Number of Sidecar proxy threads | 1.15.3.104 | 1.12.4.19 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP addresses | Configure addresses to which external access is redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| | Configure addresses to which external access is not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| | Configure ports on which inbound traffic redirected to Sidecar proxy | 1.15.3.104 | 1.10.5.34 | 1.13.4.20 |
| | Configure ports on which outbound traffic redirected to Sidecar proxy | 1.15.3.104 | 1.10.5.34 | 1.13.4.20 |
| | Configure ports on which inbound traffic not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| | Configure ports on which outbound traffic not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| DNS proxy | Enable DNS Proxy | 1.8.3.17 | 1.10.5.34 | 1.13.4.20 |
| Manage environment variables for Sidecar proxy | Sidecar Graceful Shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS) | 1.15.3.104 | 1.15.3.104 | 1.15.3.104 |
| Lifecycle management | Sidecar Graceful Startup | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| | Configure Sidecar proxy drain duration at pod termination | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| | Configure lifecycle of Sidecar proxy | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Outbound traffic policy | Outbound Traffic Policy | All versions | 1.10.5.34 | 1.13.4.20 |
| Sidecar traffic interception policy | Sidecar Traffic Interception Policy | 1.15.3.25 | 1.15.3.25 | 1.15.3.25 |
| Monitoring | Log Level | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| | Configure proxyStatsMatcher | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Envoy runtime parameters | Limits on Downstream Connections | 1.21.6.95 | 1.21.6.95 | 1.21.6.95 |
If a configuration item is not available in your ASM console, update your ASM instance to the required version.
If your ASM version is V1.22 or later and your Kubernetes cluster version is V1.30 or later, sidecar proxies are deployed as native sidecar containers. Kubernetes manages the lifecycle of native sidecar containers, overriding all lifecycle management configurations.
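In that deployment mode, the sidecar appears in the Pod spec as an init container with `restartPolicy: Always`, which is the Kubernetes native sidecar mechanism. The following fragment is illustrative, not exact ASM output:

```yaml
# Illustrative Pod spec fragment: with native sidecar containers,
# istio-proxy runs as an init container with restartPolicy: Always,
# so the kubelet (not ASM lifecycle settings) starts and stops it.
spec:
  initContainers:
    - name: istio-proxy
      restartPolicy: Always   # marks this as a native sidecar
  containers:
    - name: app               # hypothetical application container
```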
Configure sidecar proxy settings
All configuration items are managed from the same console page. The general workflow applies to global, namespace, and workload levels:
Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
On the Mesh Management page, click the name of the target ASM instance. In the left-side navigation pane, choose Data Plane Component Management > Sidecar Proxy Setting.
Select the scope tab (global, Namespace, or workload).
Expand the relevant configuration section, set the values, and click Update Settings.
If the change requires a pod restart, redeploy affected workloads.
The following sections describe each configuration item in detail.
Resource settings
Sidecar proxy resources (istio-proxy)
Set the CPU and memory resource requests and limits for the istio-proxy container.
| Setting | Description |
|---|---|
| Resource Limits | Maximum CPU (cores) and memory (MiB) the sidecar proxy container can consume. |
| Required Resources | Minimum CPU (cores) and memory (MiB) guaranteed to the sidecar proxy container at runtime. |
Configure in the console
On the Sidecar Proxy Setting page, select the target scope tab and expand Resource Settings.
At the namespace or workload level, select Configure Resources for istio-init Container if you also want to set init container resources.
In Resource Limits, set CPU to `2` cores and Memory to `1025` MiB. In Required Resources, set CPU to `0.1` cores and Memory to `128` MiB.
Click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

In the output, confirm that the istio-proxy container's resources field matches the configured values:

```yaml
spec:
  containers:
    - name: istio-proxy
      resources:
        limits:
          cpu: '2'
          memory: 1025Mi
        requests:
          cpu: 100m
          memory: 128Mi
```

Proportional sidecar resource allocation
Instead of setting fixed values, allocate sidecar proxy resources as a percentage of the workload container's resources. This overrides the default Istio proxy resource settings.
Two computing policies are available:
- Max container resources: Uses the highest CPU or memory limit among all containers in the Pod as the baseline.
- Specified container: Uses a named container as the baseline. If the named container does not exist, the default Istio proxy resources apply. The Pod annotation `scaled-resource.inject.istio.alibabacloud.com/container-ref` takes priority over a manually specified container name.
Regardless of the policy:
- If the baseline container has no `limits`, the sidecar proxy has no limits configured.
- If the baseline container has no `requests`, the system allocates minimum resources: CPU `100m`, memory `128Mi`.
- Both `requests` and `limits` must be at least `100m` for CPU and at least `128Mi` for memory. Calculated values below these thresholds are automatically raised to the minimum.
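These rules can be sketched as follows. This is illustrative Python, not ASM's actual implementation; units are millicores and MiB, and integer division stands in for whatever rounding ASM actually applies:

```python
# Sketch of proportional sidecar resource allocation with minimum
# fallbacks, per the rules above. Values: (cpu_millicores, memory_MiB).
MIN_CPU_M, MIN_MEM_MI = 100, 128

def scale(baseline, percent):
    """Derive sidecar resources from a baseline container's resources."""
    out = {}
    lim = baseline.get("limits")
    if lim is not None:  # no baseline limits -> no sidecar limits at all
        out["limits"] = (max(MIN_CPU_M, lim[0] * percent // 100),
                         max(MIN_MEM_MI, lim[1] * percent // 100))
    req = baseline.get("requests")
    if req is None:      # no baseline requests -> minimum fallback
        out["requests"] = (MIN_CPU_M, MIN_MEM_MI)
    else:
        out["requests"] = (max(MIN_CPU_M, req[0] * percent // 100),
                           max(MIN_MEM_MI, req[1] * percent // 100))
    return out

base = {"requests": (300, 512), "limits": (500, 1024)}
print(scale(base, 50))  # requests scale to 150m/256Mi, limits to 250m/512Mi
print(scale(base, 20))  # scaled requests fall below minimums and are raised
```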
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.
Select Configure sidecar resources by ratio.
Set the Proportion of Resources percentage, optionally select a Computing Policy, and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Example outcomes with a workload container configured as requests: {cpu: 300m, memory: 512Mi} and limits: {cpu: 500m, memory: 1Gi}:

| Proportion | Sidecar requests | Sidecar limits |
|---|---|---|
| 50% | cpu: 150m, memory: 256Mi | cpu: 250m, memory: 512Mi |
| 20% | cpu: 100m, memory: 128Mi | cpu: 100m, memory: 204Mi |

When the workload container has no requests configured and the proportion is 50%:

```yaml
resources:
  requests:
    cpu: 100m     # Minimum fallback
    memory: 128Mi # Minimum fallback
  limits:
    cpu: 250m
    memory: 512Mi
```

When the workload container has no limits configured and the proportion is 50%:

```yaml
resources:
  requests:
    cpu: 150m
    memory: 256Mi
  # No limits set
```

istio-init container resources
Set resource requests and limits for the istio-init container -- the init container that configures iptables traffic interception rules before the sidecar proxy starts.
| Setting | Description |
|---|---|
| Resource Limits | Maximum CPU (cores) and memory (MiB) for the istio-init container. |
| Required Resources | Minimum CPU (cores) and memory (MiB) for the istio-init container at runtime. |
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.
At the namespace or workload level, select Configure Resources for istio-init Container.
In Resource Limits, set CPU to `1` core and Memory to `512` MiB. In Required Resources, set CPU to `0.1` cores and Memory to `128` MiB.
Click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the istio-init container's resources:

```yaml
spec:
  initContainers:
    - name: istio-init
      resources:
        limits:
          cpu: '1'
          memory: 512Mi
        requests:
          cpu: 100m
          memory: 128Mi
```

ACK dynamically overcommitted resources
Allocate dynamically overcommitted resources to the sidecar proxy and istio-init containers. This setting takes effect only when the Pod has the koordinator.sh/qosClass label, which indicates that ACK dynamic resource overcommitment is enabled.
Resource units for dynamically overcommitted resources use kubernetes.io/batch-cpu (in millicores) and kubernetes.io/batch-memory (in MiB).
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.
Select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy and configure the following:
| Component | Setting | Example values |
|---|---|---|
| Sidecar proxy (istio-proxy) | Resource Limits | CPU: 2000 millicores, Memory: 2048 MiB |
| Required Resources | CPU: 200 millicores, Memory: 256 MiB | |
| istio-init container | Resource Limits | CPU: 1000 millicores, Memory: 1024 MiB |
| Required Resources | CPU: 100 millicores, Memory: 128 MiB |
Click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm that both containers use batch resource types:

```yaml
spec:
  containers:
    - name: istio-proxy
      resources:
        limits:
          kubernetes.io/batch-cpu: 2k
          kubernetes.io/batch-memory: 2Gi
        requests:
          kubernetes.io/batch-cpu: '200'
          kubernetes.io/batch-memory: 256Mi
  initContainers:
    - name: istio-init
      resources:
        limits:
          kubernetes.io/batch-cpu: 1k
          kubernetes.io/batch-memory: 1Gi
        requests:
          kubernetes.io/batch-cpu: '100'
          kubernetes.io/batch-memory: 128Mi
```

Sidecar proxy worker threads
Set the number of Envoy worker threads for the sidecar proxy. The value must be a non-negative integer. When set to 0, the thread count is automatically determined based on the sidecar proxy's CPU resource limits (or resource requests if no limits are set).
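For a per-Pod override, the standard Istio proxy configuration annotation can carry the concurrency value; verify that your ASM version supports this annotation before relying on it:

```yaml
# Hypothetical per-Pod override via the standard Istio proxy config
# annotation; `concurrency` maps to Envoy's --concurrency argument.
metadata:
  annotations:
    proxy.istio.io/config: |
      concurrency: 3
```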
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Resource Settings.
At the namespace or workload level, set Number of Sidecar Proxy Threads to `3`.
Click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the --concurrency argument in the istio-proxy container:

```yaml
spec:
  containers:
    - name: istio-proxy
      args:
        - proxy
        - sidecar
        - '--concurrency'
        - '3'
```

Traffic interception by port or IP address
Control which traffic the sidecar proxy intercepts by specifying IP address ranges and port numbers. These settings are managed under the Enable/Disable Sidecar Proxy by Ports or IP Addresses section.
The following table summarizes all traffic interception settings and the istio-init parameter each controls:
| Setting | Direction | Effect | istio-init parameter |
|---|---|---|---|
| Redirect addresses (include) | Outbound | Intercept traffic to these CIDR ranges only. Default: * (all). | -i |
| Bypass addresses (exclude) | Outbound | Skip interception for these CIDR ranges. Takes precedence over the include list. | -x |
| Redirect ports - inbound (include) | Inbound | Intercept inbound traffic on these ports only. Default: * (all). | -b |
| Bypass ports - inbound (exclude) | Inbound | Skip interception for inbound traffic on these ports. Effective only when redirect ports is *. | -d |
| Redirect ports - outbound (include) | Outbound | Intercept outbound traffic to these destination ports. | -q |
| Bypass ports - outbound (exclude) | Outbound | Skip interception for outbound traffic to these ports, regardless of other settings. | -o |
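For one-off Pod-level overrides, the standard Istio traffic annotations correspond to the same istio-init parameters. The values below reuse this section's examples and are illustrative; verify that your ASM version honors these annotations:

```yaml
# Hypothetical Pod-level equivalents using standard Istio traffic
# annotations; each maps to the istio-init parameter noted alongside.
metadata:
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: 192.168.0.0/16,10.1.0.0/24  # -i
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 10.1.0.0/24                 # -x
    traffic.sidecar.istio.io/includeInboundPorts: '80,443'                        # -b
    traffic.sidecar.istio.io/excludeInboundPorts: '8000'                          # -d
    traffic.sidecar.istio.io/includeOutboundPorts: '80,443'                       # -q
    traffic.sidecar.istio.io/excludeOutboundPorts: '8000'                         # -o
```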
Outbound traffic: redirect addresses (include list)
Specify IP address ranges in CIDR notation, separated by commas. The sidecar proxy intercepts outbound traffic only to destinations within these ranges. The default value * intercepts all outbound traffic.
Configure in the console
Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
At the namespace or workload level, select Addresses to Which External Access Is Redirected to Sidecar Proxy.
Enter `192.168.0.0/16,10.1.0.0/24` and click Update Settings.
Redeploy workloads to apply the change.
Verify
The -i parameter in the istio-init container reflects the configured CIDR ranges:
```yaml
initContainers:
  - name: istio-init
    args:
      - '-i'
      - '192.168.0.0/16,10.1.0.0/24'
```

Outbound traffic: bypass addresses (exclude list)
Specify IP address ranges in CIDR notation, separated by commas. Outbound traffic to destinations within these ranges bypasses the sidecar proxy.
If an IP address appears in both the redirect list and the bypass list, the bypass list takes precedence -- the sidecar proxy does not intercept traffic to that address.
Configure in the console
Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
At the namespace or workload level, select Addresses to Which External Access Is Not Redirected to Sidecar Proxy.
Enter `10.1.0.0/24` and click Update Settings.
Redeploy workloads to apply the change.
Verify
The -x parameter in the istio-init container includes the configured CIDR range along with the default host CIDR (192.168.0.1/32):
```yaml
initContainers:
  - name: istio-init
    args:
      - '-x'
      - '192.168.0.1/32,10.1.0.0/24'
```

Inbound traffic: redirect ports (include list)
Specify port numbers separated by commas. The sidecar proxy intercepts inbound traffic only on these ports. The default value * intercepts all inbound traffic.
Configure in the console
Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
At the namespace or workload level, select Ports on Which Inbound Traffic Redirected to Sidecar Proxy.
Enter `80,443` and click Update Settings.
Redeploy workloads to apply the change.
Verify
The -b parameter in the istio-init container reflects the configured ports:
```yaml
initContainers:
  - name: istio-init
    args:
      - '-b'
      - '80,443'
```

Inbound traffic: bypass ports (exclude list)
Specify port numbers separated by commas. Inbound traffic on these ports bypasses the sidecar proxy.
This setting takes effect only when Ports on Which Inbound Traffic Redirected to Sidecar Proxy is set to the default value * (intercept all inbound traffic).
By default, the sidecar proxy already excludes its own application ports: 15090, 15021, 15081, 9191, and 15020.
Configure in the console
Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
At the namespace or workload level, select Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy.
Enter `8000` and click Update Settings.
Redeploy workloads to apply the change.
Verify
The -d parameter in the istio-init container includes port 8000 alongside the default excluded ports:
```yaml
initContainers:
  - name: istio-init
    args:
      - '-d'
      - '15090,15021,15081,9191,15020,8000'
```

Outbound traffic: redirect ports (include list)
Specify port numbers separated by commas. The sidecar proxy intercepts outbound traffic only to these destination ports.
Even if a port is in this list, the sidecar proxy does not intercept a request when the destination IP address falls within the bypass address list, or the destination port falls within the outbound bypass port list.
Configure in the console
Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
At the namespace or workload level, select Ports on Which Outbound Traffic Redirected to Sidecar Proxy.
Enter `80,443` and click Update Settings.
Redeploy workloads to apply the change.
Verify
The -q parameter in the istio-init container reflects the configured ports:
```yaml
initContainers:
  - name: istio-init
    args:
      - '-q'
      - '80,443'
```

Outbound traffic: bypass ports (exclude list)
Specify port numbers separated by commas. Outbound traffic to these ports bypasses the sidecar proxy, regardless of any redirect address or redirect port settings.
Configure in the console
Expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.
At the namespace or workload level, select Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.
Enter `8000` and click Update Settings.
Redeploy workloads to apply the change.
Verify
The -o parameter in the istio-init container reflects the configured port:
```yaml
initContainers:
  - name: istio-init
    args:
      - '-o'
      - '8000'
```

DNS proxy
When enabled, the sidecar proxy intercepts DNS requests from the workload and resolves them locally using its cached IP-to-domain mappings. Most queries are resolved without contacting a remote DNS server, which improves DNS performance and availability. Queries that the sidecar cannot resolve are forwarded to the upstream DNS server.
For more information, see Use the DNS proxy feature in an ASM instance.
DNS proxy is not supported for sidecar proxies in ACK Serverless clusters or Elastic Container Instance (ECI)-based Pods due to network permission restrictions.
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand DNS Proxy.
At the namespace or workload level, select Enable DNS Proxy, turn on the switch, and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm that the istio-proxy container has both DNS-related environment variables set to true:

```yaml
env:
  - name: ISTIO_META_DNS_AUTO_ALLOCATE
    value: 'true'
  - name: ISTIO_META_DNS_CAPTURE
    value: 'true'
```

Environment variables
Add custom environment variables to the sidecar proxy container.
Graceful shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS)
When enabled, the sidecar proxy performs the following during Pod termination:
1. The `pilot-agent` process stops Envoy from accepting new inbound connections.
2. After a 5-second wait, `pilot-agent` polls Envoy's active connection count.
3. Once active connections reach zero, `pilot-agent` terminates the Envoy process.
This reduces dropped requests during termination and minimizes shutdown time.
When EXIT_ON_ZERO_ACTIVE_CONNECTIONS is set to true, the Sidecar Proxy Drain Duration at Pod Termination setting has no effect.
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Manage Environment Variables for Sidecar Proxy.
At the namespace or workload level, select Sidecar Graceful Shutdown.
Turn on the switch and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the environment variable in the istio-proxy container:

```yaml
env:
  - name: EXIT_ON_ZERO_ACTIVE_CONNECTIONS
    value: 'true'
```

Lifecycle management
Graceful startup
By default, the sidecar proxy starts before the application containers in a Pod. This prevents traffic loss that could occur if inbound traffic reaches the application before the sidecar proxy is ready.
Disable this setting to start the sidecar proxy and application containers simultaneously. This can speed up deployments in clusters with many Pods where the API server is under heavy load.
Configure in the console (example: global level)
On the Sidecar Proxy Setting page, select the global tab and expand Lifecycle Management.
Turn off the switch next to Sidecar Graceful Startup and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

When disabled, the istio-proxy container's PROXY_CONFIG environment variable contains "holdApplicationUntilProxyStarts":false, and no default lifecycle field is declared.
Drain duration at Pod termination
After a Pod begins terminating, the sidecar proxy continues processing existing inbound traffic for a configurable period before closing connections. This period is the drain duration. The default is 5s.
If any API served by the workload takes longer than the drain duration to respond, increase this value to prevent in-flight requests from being dropped.
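At the Pod level, the drain duration can also be set through the standard Istio proxy configuration annotation. This is a sketch; confirm your ASM version supports the annotation:

```yaml
# Hypothetical per-Pod override of the drain duration via the
# standard Istio proxy config annotation.
metadata:
  annotations:
    proxy.istio.io/config: |
      terminationDrainDuration: 10s
```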
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Lifecycle Management.
At the namespace or workload level, select Sidecar Proxy Drain Duration at Pod Termination.
Enter `10s` and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the istio-proxy container has the drain duration configured:

```yaml
env:
  - name: TERMINATION_DRAIN_DURATION_SECONDS
    value: '10'
  - name: PROXY_CONFIG
    value: >-
      {..."terminationDrainDuration":"10s"}
```

Custom lifecycle hooks
Customize the Kubernetes container lifecycle hooks for the sidecar proxy container by providing the lifecycle field in JSON format. This replaces the default lifecycle configuration.
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Lifecycle Management.
At the namespace or workload level, select Lifecycle of Sidecar Proxy.
Enter the lifecycle configuration in JSON:

```json
{
  "postStart": {
    "exec": {
      "command": [
        "pilot-agent",
        "wait"
      ]
    }
  },
  "preStop": {
    "exec": {
      "command": [
        "/bin/sh",
        "-c",
        "sleep 13"
      ]
    }
  }
}
```

In this example:

- postStart: Waits for `pilot-agent` and Envoy to fully start after the container is created.
- preStop: Sleeps 13 seconds before the container is stopped, allowing in-flight requests to complete.
Click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the istio-proxy container has the expected lifecycle hooks:

```yaml
lifecycle:
  postStart:
    exec:
      command:
        - pilot-agent
        - wait
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        - sleep 13
```

Outbound traffic policy
Control whether the sidecar proxy allows workloads to access services outside the ASM service registry. External services are those not registered in the mesh -- either through Kubernetes service discovery or manually declared ServiceEntry resources.
| Policy | Behavior |
|---|---|
| ALLOW_ANY (default) | The sidecar proxy forwards requests to external services. |
| REGISTRY_ONLY | The sidecar proxy blocks connections to external services. Requests to unregistered services return HTTP 502. |
This is a global-level setting. To configure outbound traffic policy at the namespace or workload level, go to Traffic Management Center > Sidecar Traffic Configuration in the ASM console.
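Under REGISTRY_ONLY, workloads can still reach an external service once it is declared in the registry with a ServiceEntry. The following is a minimal sketch; the resource name, host, and ports are illustrative:

```yaml
# Illustrative ServiceEntry: registers an external host so that
# sidecars under REGISTRY_ONLY may route traffic to it.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-aliyun     # hypothetical name
spec:
  hosts:
    - www.aliyun.com
  ports:
    - number: 80
      name: http
      protocol: HTTP
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
```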
Configure in the console
On the global tab of the Sidecar Proxy Setting page, expand Outbound Traffic Policy.
Select REGISTRY_ONLY and click Update Settings.
Redeploy workloads to apply the change.
Verify
Deploy a test workload with sidecar injection enabled. The following example uses a minimal sleep application:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
    - port: 80
      name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
        - name: sleep
          image: curlimages/curl
          command: ["/bin/sleep", "3650d"]
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
      volumes:
        - name: secret-volume
          secret:
            secretName: sleep-secret
            optional: true
```

Apply the manifest:

```shell
kubectl apply -f sleep.yaml -n default
```

Then attempt to access an external service from the test workload:

```shell
kubectl exec -it <sleep-pod-name> -c sleep -- curl www.aliyun.com -v
```

A 502 Bad Gateway response confirms that the sidecar proxy is blocking access to the unregistered external service:

```
> GET / HTTP/1.1
> Host: www.aliyun.com
< HTTP/1.1 502 Bad Gateway
< server: envoy
```

Traffic interception mode
By default, the sidecar proxy uses iptables REDIRECT mode to intercept inbound traffic. In this mode, the application sees the sidecar proxy's IP as the source address and cannot identify the original client IP.
Switch to TPROXY (transparent proxy) mode to preserve the original client source IP. See Preserve the source IP address of a client when the client accesses services in ASM.
TPROXY mode does not support CentOS. If your Pods run on CentOS nodes, use the default REDIRECT mode.
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Sidecar Traffic Interception Mode.
At the namespace or workload level, select the Sidecar Traffic Interception Mode checkbox.
Select TPROXY and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm that both the istio-proxy environment variable and the istio-init startup argument use TPROXY:

```yaml
# In the istio-proxy container:
env:
  - name: PROXY_CONFIG
    value: >-
      {..."interceptionMode":"TPROXY",...}
# In the istio-init container:
initContainers:
  - name: istio-init
    args:
      - '-m'
      - TPROXY
```

Monitoring
Log level
Set the Envoy proxy log level for the sidecar proxy container. The default level is info.
Available levels: debug, trace, info, warning, error, critical, off.
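For a single Pod, the standard Istio annotation can set the Envoy log level without a namespace- or workload-level configuration; verify that your ASM version supports it:

```yaml
# Hypothetical per-Pod override using the standard Istio annotation;
# accepted values mirror the console's log-level list.
metadata:
  annotations:
    sidecar.istio.io/logLevel: error
```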
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Monitoring Statistics.
At the namespace or workload level, select Log Level.
Select `error` from the drop-down list and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the --proxyLogLevel argument in the istio-proxy container:

```yaml
containers:
  - name: istio-proxy
    args:
      - '--proxyLogLevel=error'
```

Custom Envoy metrics (proxyStatsMatcher)
By default, the sidecar proxy reports only a subset of Envoy metrics to minimize performance overhead. Use proxyStatsMatcher to collect additional metrics by matching metric names with prefixes, suffixes, or regular expressions.
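An equivalent per-Pod sketch uses the standard Istio proxy configuration annotation, which accepts a proxyStatsMatcher block; confirm support in your ASM version:

```yaml
# Hypothetical per-Pod override: collect extra Envoy metrics whose
# names match the given regular expression.
metadata:
  annotations:
    proxy.istio.io/config: |
      proxyStatsMatcher:
        inclusionRegexps:
          - '.*outlier_detection.*'
```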
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab and expand Monitoring Statistics.
Select proxyStatsMatcher and Regular Expression Match.
Enter `.*outlier_detection.*` to collect circuit breaker metrics.
Click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the proxyStatsMatcher in the PROXY_CONFIG environment variable:

```yaml
env:
  - name: PROXY_CONFIG
    value: >-
      {..."proxyStatsMatcher":{"inclusionRegexps":[".*outlier_detection.*"]},...}
```

Envoy runtime parameters
Downstream connection limits
By default, the sidecar proxy does not limit downstream connections, which can be exploited in denial-of-service scenarios. See ISTIO-SECURITY-2020-007 for details.
Set a maximum number of downstream connections to protect your workloads.
Configure in the console
On the Sidecar Proxy Setting page, select a scope tab.
In the Envoy Runtime Parameters section, enter `5000` next to Limits on Downstream Connections and click Update Settings.
Redeploy workloads to apply the change.
Verify
Run the following command:

```shell
kubectl get pod -n <namespace> <pod-name> -o yaml
```

Confirm the runtimeValues field in the PROXY_CONFIG environment variable:

```yaml
env:
  - name: PROXY_CONFIG
    value: >-
      {..."runtimeValues":{"overload.global_downstream_max_connections":"5000"},...}
```

Manage workload-level configurations
At the workload level, multiple sidecar proxy configurations can coexist for different workloads within the same namespace.
Create a workload-level configuration
On the Sidecar Proxy Setting page, click the workload tab and then click Create.
Set the configuration items and click Create.
Update a workload-level configuration
On the workload tab, find the target configuration and click Update in the Actions column.
Modify the settings and click Update.
Delete a workload-level configuration
On the workload tab, find the target configuration and click Delete in the Actions column.
In the confirmation dialog, click OK.
Redeploy workloads
Most sidecar proxy configuration changes require a Pod restart. Redeploy affected workloads to apply the new settings.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Workloads > Deployments.
Redeploy workloads using one of the following methods:
| Scenario | Steps |
|---|---|
| Single workload | Find the target workload and click More > Redeploy in the Actions column. |
| Multiple workloads | Select the target workloads and click Batch Redeploy at the bottom of the page. |
Verify global settings
After updating global-level sidecar proxy settings, verify that the ASM instance has applied the changes:
In the left-side navigation pane, choose ASM Instance > Base Information.
Confirm that the Status is Running.