
Alibaba Cloud Service Mesh:Configure Sidecar proxy

Last Updated:Jun 17, 2025

Injecting a Sidecar proxy into application Pods enhances network security, reliability, and observability for service-to-service calls. This topic describes how to configure the Sidecar proxy.

Before you start

Sidecar proxy configuration levels

Sidecar proxy configurations are applied based on their scope and priority. The configuration levels, ordered from lowest to highest priority, are:

  1. Global

    • Scope: Applies to all Pods in the cluster.

    • Description: Configurations at this level are automatically injected into every Pod during Sidecar proxy injection.

  2. Namespace

    • Scope: Applies to all Pods in a specified namespace.

    • Description: Only Pods within the selected namespace will apply the configuration.

  3. Workload

    • Scope: Applies to specific workloads selected by a label selector.

    • Description: You must define a label selector to target specific workloads. Only the selected workloads will apply the configuration during Sidecar proxy injection.

  4. Pod

    • Scope: Applies to an individual Pod.

    • Description: Configured by adding annotations to the Pod. For more information, see Configure a Sidecar proxy by adding resource annotations.

Configuration Priority Rules

  • Higher-priority configurations override lower-priority ones. For example:

    • If a global configuration and a namespace-level configuration both exist for the default namespace, the namespace-level settings take precedence when deploying a new workload in default.
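The override behavior can be sketched as a layered merge. The following is a minimal illustration only; the level contents and setting names here are hypothetical and do not reflect the actual ASM data model:

```python
# Minimal sketch of layered configuration priority, lowest to highest.
# The concrete settings ("logLevel", "concurrency") are illustrative only.
def effective_config(global_cfg, namespace_cfg=None, workload_cfg=None, pod_cfg=None):
    """Merge configuration levels; later (higher-priority) levels win per item."""
    merged = {}
    for level in (global_cfg, namespace_cfg, workload_cfg, pod_cfg):
        if level:
            merged.update(level)  # higher-priority items override lower ones
    return merged

# A global log level is overridden by the namespace-level setting,
# while the unset item falls back to the global value:
cfg = effective_config({"logLevel": "info", "concurrency": 2},
                       namespace_cfg={"logLevel": "warning"})
# → {"logLevel": "warning", "concurrency": 2}
```

Items not set at a higher level fall back to the lower-level values, which matches the fallback behavior described for the namespace and workload levels below.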

Procedure

The following sections describe how to configure the Sidecar proxy at different levels.

Global level

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Data Plane Component Management > Sidecar Proxy Setting.

  3. On the Global tab, configure the Sidecar proxy and then click Update Settings.

    For more information about the configuration items of Sidecar proxy, see Configuration items of Sidecar proxy.

  4. Check whether the Sidecar proxy configurations take effect.

    1. In the left-side navigation pane, choose ASM Instance > Base Information.

    2. If the Status is Running, the global Sidecar proxy configurations take effect.

Namespace level

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Data Plane Component Management > Sidecar Proxy Setting.

  3. Click the Namespace tab, select a namespace, set the related configuration items, and then click Update Settings. The update takes effect immediately.

    Because the namespace level is not the lowest-priority configuration level, Sidecar proxy configuration items at the namespace level have no default values. For items that you do not select and configure, the global-level configurations take effect. For more information, see Configuration items of Sidecar proxy.

Workload level

In the same namespace, you can create multiple Sidecar proxy configurations for different workloads.

  1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.

  2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Data Plane Component Management > Sidecar Proxy Setting.

  3. Click the workload tab and then click Create.

  4. Set the related configuration items, and then click Create.

Because the workload level is not the lowest-priority level in the Sidecar proxy configuration hierarchy, workload-level configuration items have no default values; items that you do not configure fall back to the global-level configurations. For more information, see Configuration items of Sidecar proxy.

After creation, you can update or delete workload-level Sidecar proxy configurations.

To update a workload-level configuration

  1. Select the workload tab.

  2. In the Actions column, click Update for the target Sidecar proxy configuration.

  3. Modify configurations.

  4. Click Update to apply changes.

To delete a workload-level configuration

  1. Select the workload tab.

  2. In the Actions column, click Delete for the target Sidecar proxy configuration.

  3. In the confirmation dialog, click OK to proceed.

Pod level

To configure the Sidecar proxy at the Pod level, add specific annotations to the Pod. For more information, see Configure a Sidecar proxy by adding resource annotations.

(Optional) Redeploy workloads

Redeploy the Pods to make the Sidecar proxy configurations take effect.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > Deployments.

  3. Perform the following operations to redeploy workloads.

    | Scenario | Procedure |
    | --- | --- |
    | For a single workload | Find the workload that you want to redeploy and click More > Redeploy in the Actions column. |
    | For multiple workloads | Select multiple workloads in the Name column and click Batch Redeploy in the lower part of the page. |

Supported version for Sidecar proxy configuration items

If you cannot find a Sidecar proxy configuration item, refer to the following table to check whether you need to update your ASM instance.

Important

If your ASM version is V1.22 or later and your Kubernetes cluster version is V1.30 or later, Sidecar proxies are deployed as native Sidecar containers. In this case, the Kubernetes cluster manages the lifecycle of the Sidecar proxy containers and overrides all lifecycle management configurations applied to them.

| Category | Configuration item | Global level | Namespace level | Workload level |
| --- | --- | --- | --- | --- |
| Resource settings | Configure resources for Injected Istio proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Resource settings | Configure Sidecar Resources Proportionally | 1.24.6.83 | 1.24.6.83 | 1.24.6.83 |
| Resource settings | Configure Resources for istio-init Container | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Resource settings | Set ACK resources that can be dynamically overcommitted for Sidecar proxy | 1.16.3.47 | 1.16.3.47 | 1.16.3.47 |
| Resource settings | Number of Sidecar proxy threads | 1.15.3.104 | 1.12.4.19 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP Addresses | Configure addresses to which external access is redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP Addresses | Configure addresses to which external access is not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP Addresses | Configure ports on which inbound traffic redirected to Sidecar proxy | 1.15.3.104 | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP Addresses | Configure ports on which outbound traffic redirected to Sidecar proxy | 1.15.3.104 | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP Addresses | Configure ports on which inbound traffic not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| Enable/disable Sidecar proxy by ports or IP Addresses | Configure ports on which outbound traffic not redirected to Sidecar proxy | All versions | 1.10.5.34 | 1.13.4.20 |
| DNS proxy | Enable DNS Proxy | 1.8.3.17 | 1.10.5.34 | 1.13.4.20 |
| Manage environment variables for Sidecar proxy | Sidecar Graceful Shutdown (EXIT_ON_ZERO_ACTIVE_CONNECTIONS) | 1.15.3.104 | 1.15.3.104 | 1.15.3.104 |
| Lifecycle Management | Sidecar Graceful Startup | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Lifecycle Management | Configure Sidecar proxy drain duration at pod termination | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Lifecycle Management | Configure lifecycle of Sidecar proxy | 1.9.7.93 | 1.10.5.34 | 1.13.4.20 |
| Outbound traffic policy | Outbound Traffic Policy | All versions | 1.10.5.34 | 1.13.4.20 |
| Sidecar traffic interception policy | Sidecar Traffic Interception Policy | 1.15.3.25 | 1.15.3.25 | 1.15.3.25 |
| Monitoring | Log Level | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Monitoring | Configure proxyStatsMatcher | 1.15.3.104 | 1.12.4.58 | 1.13.4.20 |
| Envoy runtime parameters | Limits on Downstream Connections | 1.21.6.95 | 1.21.6.95 | 1.21.6.95 |

Configuration items of Sidecar proxy

You can configure the resource usage, traffic interception mode, Domain Name System (DNS) proxy, and lifecycle of Sidecar proxy.

Configure resources for Injected Istio proxy

Expand to learn about the configurations

Configuration reference

| Configuration item | Description |
| --- | --- |
| Resource Limits | The maximum CPU and memory resources that the Sidecar proxy container can use, measured in cores (CPU) and MiB (memory). |
| Required Resources | The minimum CPU and memory resources that the Sidecar proxy container requires at runtime, measured in cores (CPU) and MiB (memory). |

Configuration example

  1. On the Sidecar Proxy Setting page, select the target configuration level tab, and then click Resource Settings.

  2. (Optional) Perform this step if you selected Namespace or workload in step 1. In the Resource Settings section, select Configure Resources for Injected Istio Proxy.

  3. In the Resource Limits section, set CPU to 2 cores and Memory to 1025 MiB. In the Required Resources section, set CPU to 0.1 cores and Memory to 128 MiB.

  4. Click Update Settings.

  5. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  6. Run the following command to view the configured resources of the Sidecar proxy:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
            - proxy
    ...
          name: istio-proxy
    ...
          resources:
            limits:
              cpu: '2'
              memory: 1025Mi
            requests:
              cpu: 100m
              memory: 128Mi
    ...

    The istio-proxy container is the Sidecar proxy container. The resources field of the istio-proxy container is set to the expected resource values. This indicates that the configurations of Configure Resources for Injected Istio Proxy have taken effect.

Configure Sidecar resources by ratio

Expand to learn about the configurations

Configuration reference

This configuration enables automatic resource allocation for the Sidecar proxy based on the workload container's specifications. When enabled, it overrides the default resource settings of the injected Istio proxy. The following strategies are supported:

  1. Allocate resource based on max container resources

    • Iterates through all containers in the Pod and uses the highest CPU or memory limit as the baseline for the Sidecar proxy.

  2. Allocate resource based on a specified container

    • Specify a container by its name in the Pod as the baseline.

      • If the specified container is not found in the Pod, the resources specified by Configure resources for Injected Istio proxy are applied instead.

      • If the Pod includes the annotation scaled-resource.inject.istio.alibabacloud.com/container-ref, this annotation takes priority over manually specified container names.

Important

Regardless of the strategy used:

  • If the baseline container lacks limits: The system assumes no limits for the Sidecar proxy and does not configure limits for it.

  • If the baseline container lacks requests: To ensure Sidecar functionality, the system allocates the minimum feasible resources:

    • CPU: At least 100m.

    • Memory: At least 128Mi.

  • Minimum resource requirements:

    • Both requests and limits for CPU must be ≥ 100m.

    • Both requests and limits for memory must be ≥ 128Mi.

  • Fallback behavior: If calculated resources fall below the minimum requirements, the Sidecar proxy resources are automatically set to the minimum values.
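The rules above can be captured in a short sketch. The function below is illustrative only; it mirrors the documented ratio and minimum-value behavior, not the actual ASM implementation:

```python
# Illustrative sketch of proportional Sidecar resource allocation.
# Mirrors the documented rules: apply the ratio to the baseline container's
# requests/limits, enforce the 100m CPU / 128Mi memory minimums, and skip
# limits entirely when the baseline container has none.
MIN_CPU_M = 100   # millicores
MIN_MEM_MI = 128  # MiB

def allocate(ratio_percent, requests=None, limits=None):
    """requests/limits: (cpu_millicores, memory_mib) of the baseline container."""
    def scale(pair):
        cpu, mem = pair
        return (max(int(cpu * ratio_percent / 100), MIN_CPU_M),
                max(int(mem * ratio_percent / 100), MIN_MEM_MI))
    out = {}
    # If the baseline lacks requests, fall back to the documented minimums.
    out["requests"] = scale(requests) if requests else (MIN_CPU_M, MIN_MEM_MI)
    # If the baseline lacks limits, no limits are configured for the Sidecar.
    if limits:
        out["limits"] = scale(limits)
    return out

# Standard configuration at 50%: requests 300m/512Mi, limits 500m/1Gi (1024Mi).
print(allocate(50, requests=(300, 512), limits=(500, 1024)))
# → {'requests': (150, 256), 'limits': (250, 512)}
```

The same function reproduces the minimum-value case shown later: at a 20 percent ratio, the scaled requests (60m CPU, 102Mi memory) fall below the minimums and are raised to 100m and 128Mi.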

Configuration example

The following examples show application container resource configurations in different scenarios and the corresponding effects.

Standard configuration

resources:
  requests:
    cpu: 300m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi

Requests not configured

resources:
  limits:
    cpu: 500m
    memory: 1Gi

Limits not configured

resources:
  requests:
    cpu: 300m
    memory: 512Mi

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then expand Resource Settings.

  2. Select Configure sidecar resources by ratio.

  3. Configure Proportion of Resources, select Computing Policy (optional), and then click Update Settings.

  4. Redeploy the workloads to make the configurations take effect.

  5. Run the following command to view the configured resources of the Sidecar proxy:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    Standard configuration

    If Proportion of Resources is set to 50 (meaning the Sidecar proxy is allocated 50% of the application container's resources), the final Sidecar resource allocation is as follows:

    ...
    resources:
      requests:
        cpu: 150m
        memory: 256Mi
      limits:
        cpu: 250m
        memory: 512Mi

    Minimum configuration

    If the resource proportion is set to 20, the final Sidecar resource allocation is as follows:

    ...
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 204Mi

    No requests configured

    If the resource proportion is set to 50, the final Sidecar resource allocation is as follows:

    ...
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 512Mi

    No limits configured

    If the resource proportion is set to 50, the final Sidecar resource allocation is as follows:

    ...
    resources:
      requests:
        cpu: 150m
        memory: 256Mi

Configure Resources for istio-init Container

Expand to learn about the configurations

Configuration reference

This configuration item specifies the minimum required and maximum allowable CPU and memory resources for the istio-init container in Pods with injected Sidecar proxy. The istio-init container is an init container executed when the Pod starts, responsible for configuring traffic interception routing rules and other prerequisites for the Sidecar proxy container.

| Configuration item | Description |
| --- | --- |
| Resource Limits | The maximum CPU and memory resources that the istio-init container can use, measured in cores (CPU) and MiB (memory). |
| Required Resources | The minimum CPU and memory resources that the istio-init container requires at runtime, measured in cores (CPU) and MiB (memory). |

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Resource Settings.

  2. (Optional) Perform this step if you selected Namespace or workload in step 1. Then, in the Resource Settings section, select Configure Resources for istio-init Container.

  3. In the Resource Limits section, set CPU to 1 core and Memory to 512 MiB. In the Required Resources section, set CPU to 0.1 cores and Memory to 128 MiB. Then, click Update Settings.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the resource configurations of the istio-init container:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
    ...
          name: istio-init
          resources:
            limits:
              cpu: '1'
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 128Mi
    ...

    The resources field of the istio-init container in the pod is set to the expected resource values. This indicates that the configurations of Configure Resources for istio-init Container take effect.

Set ACK resources that can be dynamically overcommitted for Sidecar proxy

Expand to learn about the configurations

Configuration reference

This configuration specifies how ACK dynamically overcommitted resources are allocated to the injected Istio proxy and the istio-init container. For more information, see Dynamic resource overcommitment.

The configuration items are set in the same way as those of Configure resources for Injected Istio proxy and Configure Resources for istio-init Container. After the configuration is complete, if a pod has the koordinator.sh/qosClass label, which indicates that dynamic overcommitment of ACK resources is enabled, dynamically overcommitted resources instead of regular CPU and memory resources are allocated to the Sidecar proxy and istio-init containers in the pod.

Note

In ACK resources, the unit of CPU resources is millicores.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Resource Settings.

  2. Select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy, configure the settings, and then click Update Settings.

    | Configuration item | Child configuration item | Description |
    | --- | --- | --- |
    | Configure Resources for Injected Sidecar Proxy (ACK Dynamically Overcommitted Resources) | Resource Limits | Set CPU to 2000 millicores and Memory to 2048 MiB. |
    | Configure Resources for Injected Sidecar Proxy (ACK Dynamically Overcommitted Resources) | Required Resources | Set CPU to 200 millicores and Memory to 256 MiB. |
    | Configure Resources for istio-init Container (ACK Dynamically Overcommitted Resources) | Resource Limits | Set CPU to 1000 millicores and Memory to 1024 MiB. |
    | Configure Resources for istio-init Container (ACK Dynamically Overcommitted Resources) | Required Resources | Set CPU to 100 millicores and Memory to 128 MiB. |

  3. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  4. Run the following command to view the resource configurations of the Sidecar proxy and istio-init containers:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    metadata:
    ...
      labels:
        koordinator.sh/qosClass: BE
    spec:
      containers:
        - args:
    ...
          name: istio-proxy
    ...
          resources:
            limits:
              kubernetes.io/batch-cpu: 2k
              kubernetes.io/batch-memory: 2Gi
            requests:
              kubernetes.io/batch-cpu: '200'
              kubernetes.io/batch-memory: 256Mi
    ...
      initContainers:
        - args:
    ...
          name: istio-init
          resources:
            limits:
              kubernetes.io/batch-cpu: 1k
              kubernetes.io/batch-memory: 1Gi
            requests:
              kubernetes.io/batch-cpu: '100'
              kubernetes.io/batch-memory: 128Mi
    ...

    Both the istio-proxy container (Sidecar proxy container) and the istio-init container in the pod contain the resources field and have the configured resource values. This indicates that the configurations of Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy take effect.

Number of Sidecar proxy threads

Expand to learn about the configurations

Configuration reference

This configuration item specifies the number of worker threads for the Sidecar proxy container. The value must be a non-negative integer. When set to 0, the number of worker threads is automatically determined based on the CPU resource requests or limits configured for the Sidecar proxy; limits take precedence over requests.
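As a rough sketch of the fallback rule (assumed behavior for illustration only, not the exact ASM algorithm; the no-resource fallback value here is hypothetical):

```python
import math

# Illustrative sketch: deriving the worker-thread count.
# An explicit non-zero setting wins; otherwise the CPU limit takes
# precedence over the CPU request, rounded up to whole cores.
def worker_threads(concurrency, cpu_request_m=None, cpu_limit_m=None):
    """concurrency: configured value; cpu_*_m: millicores, may be None."""
    if concurrency < 0:
        raise ValueError("concurrency must be a non-negative integer")
    if concurrency > 0:
        return concurrency
    basis = cpu_limit_m if cpu_limit_m is not None else cpu_request_m
    if basis is None:
        return 2  # hypothetical fallback when no CPU resources are set
    return max(1, math.ceil(basis / 1000))  # millicores rounded up to cores

print(worker_threads(3))                                       # → 3
print(worker_threads(0, cpu_request_m=100, cpu_limit_m=2000))  # → 2
```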

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Resource Settings.

  2. (Optional) Perform this step if you selected Namespace or workload in step 1. Select Number of Sidecar Proxy Threads.

  3. Set Number of Sidecar Proxy Threads to 3 and then click Update Settings.

    This configuration indicates that the Sidecar proxy container starts three worker threads at runtime.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration of Number of Sidecar Proxy Threads:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
        - args:
            - proxy
            - sidecar
            - '--domain'
            - $(POD_NAMESPACE).svc.cluster.local
            - '--proxyLogLevel=warning'
            - '--proxyComponentLogLevel=misc:error'
            - '--log_output_level=default:info'
            - '--concurrency'
            - '3'
    ...
          name: istio-proxy
    ...

    The concurrency parameter of the istio-proxy container is set to 3. This indicates that the configuration of Number of Sidecar Proxy Threads takes effect.

Configure addresses to which external access is redirected to Sidecar proxy

Expand to learn about the configuration

Configuration reference

Configure a list of IP address ranges separated by commas (,), where each IP address range is specified in CIDR notation. When a workload with an injected Sidecar proxy accesses other services, only requests with destination IP addresses within the configured ranges are intercepted by the Sidecar proxy container. Requests outside these ranges will bypass the Sidecar proxy and be sent directly to their destinations. The default configuration is set to *, which means the Sidecar proxy container intercepts all outbound traffic from the workload.
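The matching behavior can be illustrated with Python's standard ipaddress module. This is a sketch of the documented decision, not the actual iptables implementation used by istio-init:

```python
import ipaddress

# Sketch: decide whether an outbound destination IP is intercepted,
# given the configured CIDR list (or '*' for everything, the default).
def is_redirected(dest_ip, cidr_list):
    if cidr_list.strip() == "*":
        return True
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in ipaddress.ip_network(cidr)
               for cidr in cidr_list.split(","))

cidrs = "192.168.0.0/16,10.1.0.0/24"
print(is_redirected("192.168.3.7", cidrs))  # → True, inside 192.168.0.0/16
print(is_redirected("10.1.1.7", cidrs))     # → False, outside 10.1.0.0/24
print(is_redirected("8.8.8.8", "*"))        # → True, default intercepts all
```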

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Addresses to Which External Access Is Redirected to Sidecar Proxy.

  3. Set Addresses to Which External Access Is Redirected to Sidecar Proxy to 192.168.0.0/16,10.1.0.0/24 and click Update Settings.

    This configuration indicates that the Sidecar proxy container will intercept requests whose destination IP addresses are within the 192.168.0.0/16 and 10.1.0.0/24 CIDR blocks.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - REDIRECT
            - '-i'
            - '192.168.0.0/16,10.1.0.0/24'
            - '-x'
            - 192.168.0.1/32
            - '-b'
            - '*'
            - '-d'
            - '15090,15021,15081,9191,15020'
            - '--log_output_level=default:info'
    ...
          name: istio-init
    ...

    The runtime parameter -i of the istio-init container is set to 192.168.0.0/16,10.1.0.0/24. This indicates that the configuration of Addresses to Which External Access Is Redirected to Sidecar Proxy takes effect.

Configure addresses to which external access is not redirected to Sidecar proxy

Expand to learn about the configuration

Configuration reference

Configure a list of IP address ranges separated by commas (,), where each range is specified in CIDR notation. When a workload with an injected Sidecar proxy accesses other services, the Sidecar proxy container intercepts outbound traffic. However, if the destination IP address of a request falls within the configured CIDR ranges, the request is not intercepted by the Sidecar proxy container.

Important

If an IP address is specified in both Addresses to Which External Access Is Not Redirected to Sidecar Proxy and Addresses to Which External Access Is Redirected to Sidecar Proxy, the Sidecar proxy container does not intercept requests destined for this IP address. For more information, see Configure addresses to which external access is redirected to Sidecar proxy.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then expand Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Addresses to Which External Access Is Not Redirected to Sidecar Proxy.

  3. Set Addresses to Which External Access Is Not Redirected to Sidecar Proxy to 10.1.0.0/24 and click Update Settings.

    This configuration indicates that the Sidecar proxy container will not intercept the requests whose destination IP addresses are within the 10.1.0.0/24 CIDR block.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - REDIRECT
            - '-i'
            - '*'
            - '-x'
            - '192.168.0.1/32,10.1.0.0/24'
            - '-b'
            - '*'
            - '-d'
            - '15090,15021,15081,9191,15020'
            - '--log_output_level=default:info'
    ...
          name: istio-init
    ...

    The runtime parameter -x of the istio-init container is set to 192.168.0.1/32,10.1.0.0/24. 192.168.0.1/32 is the host CIDR block that is excluded by default. 10.1.0.0/24 matches the IP address range specified in the Sidecar proxy configuration. This indicates that the configuration of Addresses to Which External Access Is Not Redirected to Sidecar Proxy takes effect.

Configure ports on which inbound traffic redirected to Sidecar proxy

Expand to learn about the configuration

Configuration reference

Configure a list of port numbers that are separated by commas (,). The Sidecar proxy container intercepts inbound traffic whose destination ports are in the list. The default value is *, which indicates that the Sidecar proxy container intercepts all inbound traffic of the workload.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. In the Enable/Disable Sidecar Proxy by Ports or IP Addresses section, select Ports on Which Inbound Traffic Redirected to Sidecar Proxy.

  3. Set Ports on Which Inbound Traffic Redirected to Sidecar Proxy to 80,443 and click Update Settings.

    This configuration indicates that the Sidecar proxy container will intercept requests destined for ports 80 and 443 of the corresponding workload.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - REDIRECT
            - '-i'
            - '*'
            - '-x'
            - 192.168.0.1/32
            - '-b'
            - '80,443'
            - '-d'
            - '15090,15021,15081,9191,15020'
            - '--log_output_level=default:info'
    ...
          name: istio-init
    ...

    The runtime parameter -b of the istio-init container is set to 80,443, which is the same as the inbound ports set in the Sidecar proxy configuration. This indicates that the configuration of Ports on Which Inbound Traffic Redirected to Sidecar Proxy takes effect.

Configure ports on which outbound traffic redirected to Sidecar proxy

Expand to learn about the configuration

Configuration reference

Configure a list of port numbers that are separated by commas (,). The Sidecar proxy container intercepts outbound traffic whose destination ports are in the list.

Important

Even if the destination port is included in the specified list, the Sidecar proxy container does not intercept the request when both of the following conditions are met:

  1. Both this configuration item and Addresses to Which External Access Is Not Redirected to Sidecar Proxy or Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy are configured.

  2. The destination IP address of the request is included in Addresses to Which External Access Is Not Redirected to Sidecar Proxy, or the destination service port of the request is included in Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.

For more information, see Configure addresses to which external access is not redirected to Sidecar proxy and Configure ports on which outbound traffic not redirected to Sidecar proxy.
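The precedence described above can be sketched as follows. This is an illustration of the documented rules, with simplified parameter shapes, not the underlying iptables logic:

```python
import ipaddress

# Sketch: exclusion lists take precedence over inclusion lists for
# outbound traffic, per the documented rules.
def outbound_intercepted(dest_ip, dest_port,
                         include_ports, exclude_ports=(), exclude_cidrs=()):
    if dest_port in exclude_ports:
        return False  # excluded ports always bypass the proxy
    addr = ipaddress.ip_address(dest_ip)
    if any(addr in ipaddress.ip_network(c) for c in exclude_cidrs):
        return False  # excluded address ranges always bypass the proxy
    return include_ports == "*" or dest_port in include_ports

# Port 443 is in the include list, but the address is excluded → bypassed.
print(outbound_intercepted("10.1.0.9", 443, {80, 443},
                           exclude_cidrs=["10.1.0.0/24"]))  # → False
print(outbound_intercepted("1.2.3.4", 443, {80, 443}))      # → True
```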

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Ports on Which Outbound Traffic Redirected to Sidecar Proxy.

  3. Set Ports on Which Outbound Traffic Redirected to Sidecar Proxy to 80,443 and click Update Settings.

    This configuration indicates that the Sidecar proxy container intercepts requests destined for ports 80 and 443.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - REDIRECT
            - '-i'
            - '*'
            - '-x'
            - 192.168.0.1/32
            - '-b'
            - '*'
            - '-d'
            - '15090,15021,15081,9191,15020'
            - '-q'
            - '80,443'
            - '--log_output_level=default:info'
    ...
          name: istio-init
    ...

    The runtime parameter -q of the istio-init container is set to 80,443, which is the same as the outbound ports set in the Sidecar proxy configuration. This indicates that the configuration of Ports on Which Outbound Traffic Redirected to Sidecar Proxy takes effect.

Configure ports on which inbound traffic not redirected to Sidecar proxy

Expand to learn about the configuration

Configuration reference

Configure a list of ports that are separated by commas (,). Inbound traffic destined for the ports in the list will not be intercepted by the Sidecar proxy container.

Important

This configuration item takes effect only when Ports on Which Inbound Traffic Redirected to Sidecar Proxy is set to the default value *, which indicates that the Sidecar proxy container intercepts all inbound traffic.
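This condition can be sketched as follows (illustrative only, with simplified parameter shapes):

```python
# Sketch of the documented rule: the inbound exclusion list applies only
# when the inbound inclusion list is the default '*'.
def inbound_intercepted(dest_port, include_ports="*", exclude_ports=()):
    if include_ports == "*":
        return dest_port not in exclude_ports
    return dest_port in include_ports  # exclusion list is ignored here

print(inbound_intercepted(8000, exclude_ports={8000}))  # → False
print(inbound_intercepted(8000, include_ports={80, 8000},
                          exclude_ports={8000}))        # → True, exclusion ignored
```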

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy.

  3. Set Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy to 8000, and then click Update Settings.

    This configuration indicates that the Sidecar proxy container no longer intercepts requests destined for port 8000 of the workload.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - REDIRECT
            - '-i'
            - '*'
            - '-x'
            - 192.168.0.1/32
            - '-b'
            - '*'
            - '-d'
            - '15090,15021,15081,9191,15020,8000'
            - '--log_output_level=default:info'
    ...
          name: istio-init
    ...
                                

    The runtime parameter -d of the istio-init container is set to 15090,15021,15081,9191,15020,8000. Ports 15090, 15021, 15081, 9191, and 15020 are used by the Sidecar proxy itself; by default, the Sidecar proxy container does not intercept inbound traffic destined for these ports. Port 8000 is the same as the inbound port set in the Sidecar proxy configuration. This indicates that the configuration of Ports on Which Inbound Traffic Not Redirected to Sidecar Proxy takes effect.
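For comparison, open-source Istio exposes the same setting as a per-pod annotation. The fragment below is a sketch based on the upstream `traffic.sidecar.istio.io/excludeInboundPorts` annotation, not the ASM console flow described above; it goes in the pod template of a workload:

```yaml
# Sketch (upstream Istio): exclude inbound port 8000 from Sidecar interception.
template:
  metadata:
    annotations:
      traffic.sidecar.istio.io/excludeInboundPorts: "8000"
```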

Configure ports on which outbound traffic not redirected to Sidecar proxy

Expand to learn about the configuration

Configuration reference

Configure a list of ports that are separated by commas (,). Outbound traffic destined for the ports in the list will not be intercepted by the Sidecar proxy, regardless of whether the IP addresses of the destination services are in Addresses to Which External Access Is Redirected to Sidecar Proxy and whether the ports of the destination services are in Ports on Which Outbound Traffic Redirected to Sidecar Proxy.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Enable/Disable Sidecar Proxy by Ports or IP Addresses.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy.

  3. Set Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy to 8000, and then click Update Settings.

    This configuration indicates that the Sidecar proxy container no longer intercepts service requests destined for port 8000.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - REDIRECT
            - '-i'
            - '*'
            - '-x'
            - 192.168.0.1/32
            - '-b'
            - '*'
            - '-d'
            - '15090,15021,15081,9191,15020'
            - '--log_output_level=default:info'
            - '-o'
            - '8000'
    ...
          name: istio-init
    ...
                                

    The runtime parameter -o of the istio-init container is set to 8000, which is the same as the port set in the configuration item Ports on Which Outbound Traffic Not Redirected to Sidecar Proxy. This indicates that the configuration item takes effect.
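In open-source Istio, the equivalent per-pod setting is the `traffic.sidecar.istio.io/excludeOutboundPorts` annotation. The fragment below is a sketch based on upstream Istio, not the ASM console flow described above:

```yaml
# Sketch (upstream Istio): outbound traffic to port 8000 bypasses the Sidecar proxy.
template:
  metadata:
    annotations:
      traffic.sidecar.istio.io/excludeOutboundPorts: "8000"
```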

Enable DNS proxy

Expand to learn about the configuration

Configuration reference

If the DNS proxy is enabled, the Sidecar proxy intercepts DNS requests from the workload to improve the performance and availability of ASM. All DNS requests from the workload are redirected to the Sidecar proxy. Because the Sidecar proxy maintains a local mapping of domain names to IP addresses, in most cases it can return DNS responses to the workload without querying a remote DNS server; it forwards a request upstream only when it cannot resolve the request locally. For more information, see Use the DNS proxy feature in an ASM instance.

Important

Due to network permission restrictions, you cannot enable the DNS proxy feature for the Sidecar proxy in ACK Serverless clusters or for Elastic Container Instance-based pods in ACK clusters.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click DNS Proxy.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Enable DNS Proxy, turn on the switch, and then click Update Settings.

  3. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  4. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    spec:
      containers:
        - args:
            - proxy
            - sidecar
            - '--domain'
            - $(POD_NAMESPACE).svc.cluster.local
            - '--proxyLogLevel=warning'
            - '--proxyComponentLogLevel=misc:error'
            - '--log_output_level=default:info'
            - '--concurrency'
            - '3'
          env:
    ...
            - name: ISTIO_META_DNS_AUTO_ALLOCATE
              value: 'true'
            - name: ISTIO_META_DNS_CAPTURE
              value: 'true'
    ...
          name: istio-proxy
                                

    The ISTIO_META_DNS_AUTO_ALLOCATE and ISTIO_META_DNS_CAPTURE environment variables of the istio-proxy container are set to true, which indicates that the configuration of DNS Proxy takes effect.
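In open-source Istio, the DNS proxy can also be enabled for a single pod through the `proxy.istio.io/config` annotation. The fragment below is a sketch that assumes upstream Istio behavior, matching the two environment variables shown in the expected output:

```yaml
# Sketch (upstream Istio): enable DNS capture and auto allocation for one pod.
template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        proxyMetadata:
          ISTIO_META_DNS_CAPTURE: "true"
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
```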

Manage environment variables for Sidecar proxy

Expand to learn about the configuration

Configuration reference

These configuration items are used to add additional environment variables to the Sidecar proxy container. You can configure the following environment variables for the Sidecar proxy.

Configuration item

Description

Sidecar Graceful Shutdown

If you turn on this switch, the environment variable EXIT_ON_ZERO_ACTIVE_CONNECTIONS: "true" is added to the environment variables of the Sidecar proxy container. This environment variable works in the following way:

When the Sidecar proxy container is terminated, the pilot-agent process in the container first stops the Envoy proxy from listening for inbound traffic and waits for a default period of 5 seconds. The pilot-agent process then polls the number of active connections of the Envoy proxy until the number drops to zero, and finally terminates the Envoy proxy process. Setting EXIT_ON_ZERO_ACTIVE_CONNECTIONS improves the termination process of the Sidecar proxy container in common situations: it reduces the number of requests that are dropped during termination and minimizes the termination time.

Important

If EXIT_ON_ZERO_ACTIVE_CONNECTIONS is set to true, the configuration item Sidecar Proxy Drain Duration at Pod Termination does not take effect. For more information, see Configure Sidecar proxy drain duration at pod termination.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Manage Environment Variables for Sidecar Proxy.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Sidecar Graceful Shutdown.

  3. Turn on the switch and then click Update Settings.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
    ...
          env:
            - name: EXIT_ON_ZERO_ACTIVE_CONNECTIONS
              value: 'true'
          name: istio-proxy
    ...

    The EXIT_ON_ZERO_ACTIVE_CONNECTIONS environment variable is added to the environment variables of the istio-proxy container in the pod. This indicates that the configuration of Manage Environment Variables for Sidecar Proxy takes effect.
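This environment variable is delivered through the proxyMetadata section of the proxy configuration. In open-source Istio, a per-pod sketch of the same setting looks like the fragment below (upstream annotation, not the ASM console flow described above):

```yaml
# Sketch (upstream Istio): add EXIT_ON_ZERO_ACTIVE_CONNECTIONS to the proxy environment.
template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        proxyMetadata:
          EXIT_ON_ZERO_ACTIVE_CONNECTIONS: "true"
```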

Sidecar Graceful Startup

Expand to learn about the configuration

Configuration reference

Sidecar Graceful Startup is used to manage the lifecycle of the Sidecar proxy. By default, it is enabled, which means that in a pod injected with a Sidecar proxy, the Sidecar proxy must start before the application containers. This ensures that traffic destined for the application containers is not lost while the Sidecar proxy is still starting.

If this configuration item is disabled, the Sidecar proxy container and the application containers in the pod are started at the same time. When many pods are deployed in a cluster, the Sidecar proxy containers may be started slowly due to the heavy load on the API server. You can disable this configuration item to speed up deployment.

Configuration example

The following example describes how to disable Sidecar Graceful Startup on the global tab.

  1. On the Sidecar Proxy Setting page, select the global tab, and then click Lifecycle Management.

  2. Turn off the switch next to Sidecar Graceful Startup and click Update Settings.

  3. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  4. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - command:
    ...
          name: sleep
    ...
          env:
            - name: PROXY_CONFIG
              value: >-
                {..."holdApplicationUntilProxyStarts":false,...}
    ...
          name: istio-proxy
    ...

    After Sidecar Graceful Startup is disabled, the istio-proxy container is no longer required to start before the application containers, and the default lifecycle field is not declared.
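The expected output shows that this switch maps to the holdApplicationUntilProxyStarts field of the proxy configuration. In open-source Istio, the same field can be set for a single pod through an annotation; the fragment below is a sketch based on upstream Istio:

```yaml
# Sketch (upstream Istio): do not block application startup on the Sidecar proxy.
template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        holdApplicationUntilProxyStarts: false
```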

Configure Sidecar proxy drain duration at pod termination

Expand to learn about the configuration

Configuration reference

Sidecar Proxy Drain Duration at Pod Termination is used to manage the lifecycle of the Sidecar proxy.

After a pod starts to terminate, services no longer route traffic to it. The Sidecar proxy container continues to process existing inbound traffic for a period of time after it receives the exit signal, but it does not accept new inbound traffic. This period is called the Sidecar Proxy Drain Duration at Pod Termination. The default value is 5s. This configuration item must be set to a value in seconds, for example, 10s.

If the response time of an API provided by a service that is being stopped exceeds the drain duration, existing inbound and outbound connections are terminated even if requests on them have not finished, and some requests are lost. In this case, set the drain duration to a greater value so that in-flight inbound and outbound traffic can be processed completely.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Lifecycle Management.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Sidecar Proxy Drain Duration at Pod Termination.

  3. Set it to 10s, and click Update Settings.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
    ...
          env:
            - name: TERMINATION_DRAIN_DURATION_SECONDS
              value: '10'
    ...
            - name: PROXY_CONFIG
              value: >-
                {..."terminationDrainDuration":"10s"}
    ...
          name: istio-proxy
    ...

    The istio-proxy container in the pod is configured with an environment variable named TERMINATION_DRAIN_DURATION_SECONDS with the value of 10, and terminationDrainDuration is 10s in the PROXY_CONFIG. This indicates that the configuration of Sidecar Proxy Drain Duration at Pod Termination takes effect.
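As the expected output shows, the drain duration is carried in the terminationDrainDuration field of the proxy configuration. In open-source Istio, the same field can be set per pod through an annotation; the fragment below is a sketch based on upstream Istio:

```yaml
# Sketch (upstream Istio): extend the drain duration to 10 seconds for one pod.
template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        terminationDrainDuration: 10s
```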

Configure lifecycle of Sidecar proxy

Expand to learn about the configuration

Configuration reference

To customize the lifecycle hook of the Sidecar proxy container, enter the container lifecycle hook field (lifecycle) in JSON format. This field will replace the default container lifecycle hook field configured for the Sidecar proxy container. For more information, see Container Lifecycle Hooks.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Lifecycle Management.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Lifecycle of Sidecar Proxy.

  3. In the edit box under Lifecycle of Sidecar Proxy, enter the following JSON configuration and then click Update Settings.

    This configuration sets the postStart and preStop hook parameters.

    • postStart: runs pilot-agent wait after the Sidecar proxy container starts. This hook blocks until the pilot-agent and the Envoy proxy have fully started.

    • preStop: indicates that the Sidecar proxy container sleeps for 13 seconds before it is terminated.

    {
      "postStart": {
        "exec": {
          "command": [
            "pilot-agent",
            "wait"
          ]
        }
      },
      "preStop": {
        "exec": {
          "command": [
            "/bin/sh",
            "-c",
            "sleep 13"
          ]
        }
      }
    }
  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
    ...
    ...
          lifecycle:
            postStart:
              exec:
                command:
                  - pilot-agent
                  - wait
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - sleep 13
          name: istio-proxy
    ...

    The lifecycle hook field (lifecycle) of the istio-proxy container in the pod is changed to the expected configuration. This indicates that the configuration of Lifecycle of Sidecar Proxy takes effect.

Configure outbound traffic policy

Expand to learn about the configuration

Configuration reference

This configuration item is used to configure an outbound traffic policy for the Sidecar proxy container. External services are services that are not defined in the service registry of ASM. By default, services in the Kubernetes clusters managed by ASM are registered services. You can manually register external services with ASM by declaring ServiceEntry resources.

This configuration item can be set to one of the following two values:

  • ALLOW_ANY: The default outbound traffic policy. The Sidecar proxy allows access to external services and forwards requests destined for external services.

  • REGISTRY_ONLY: The Sidecar proxy denies access to external services. The workload cannot establish connections to external services.

Note

This configuration item is global-level. To configure an outbound traffic policy at the namespace or workload level, you can log on to the ASM console, find the desired ASM instance, navigate to Traffic Management Center > Sidecar Traffic Configuration, and configure the related parameters.

Configuration example

  1. On the global tab of the Sidecar Proxy Setting page, click Outbound Traffic Policy, select REGISTRY_ONLY, and then click Update Settings.

  2. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  3. Create a sleep.yaml file that contains the following content:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sleep
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sleep
      labels:
        app: sleep
        service: sleep
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app: sleep
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          terminationGracePeriodSeconds: 0
          serviceAccountName: sleep
          containers:
          - name: sleep
            image: curlimages/curl
            command: ["/bin/sleep", "3650d"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
          volumes:
          - name: secret-volume
            secret:
              secretName: sleep-secret
              optional: true
    ---
  4. Run the following command to deploy the sleep application:

    kubectl apply -f sleep.yaml -n default
  5. Run the following command to use the sleep application to access external services:

    kubectl exec -it {Name of the pod for the sleep service} -c sleep -- curl www.aliyun.com -v

    Expected output:

    *   Trying *********...
    * Connected to www.aliyun.com (********) port 80 (#0)
    > GET / HTTP/1.1
    > Host: www.aliyun.com
    > User-Agent: curl/7.87.0-DEV
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 502 Bad Gateway
    < date: Mon,********* 03:25:00 GMT
    < server: envoy
    < content-length: 0
    <
    * Connection #0 to host www.aliyun.com left intact

    The HTTP status code 502 is returned, indicating that the sleep application for which the Sidecar proxy is injected cannot access the external service www.aliyun.com. This indicates that the configuration of Outbound Traffic Policy takes effect.
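Under REGISTRY_ONLY, access to a specific external service can be restored by registering it with a ServiceEntry resource, as mentioned in the configuration reference above. The fragment below is a minimal sketch; the resource name is hypothetical:

```yaml
# Sketch: register www.aliyun.com so the mesh can reach it under REGISTRY_ONLY.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: aliyun-web   # hypothetical name
spec:
  hosts:
    - www.aliyun.com
  ports:
    - number: 80
      name: http
      protocol: HTTP
  location: MESH_EXTERNAL
  resolution: DNS
```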

Configure Sidecar traffic interception mode

Expand to learn about the configuration

Configuration reference

This configuration item sets the inbound traffic interception mode for the Sidecar proxy. By default, the Sidecar proxy container uses the iptables REDIRECT mode to intercept inbound traffic destined for the application workload. With this mode, the application sees only the IP address of the Sidecar proxy as the source IP of a request and cannot identify the original client IP address.

If you change the interception mode to transparent proxy (TPROXY), ASM lets the Sidecar proxy container intercept inbound traffic in the iptables transparent proxy mode. After this configuration, the application can see the original client IP address. For more information, see Preserve the source IP address of a client when the client accesses services in ASM.

Important

The transparent proxy mode is not supported on CentOS. If your pods run on the CentOS operating system, use the default redirect mode.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Sidecar Traffic Interception Mode.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Sidecar Traffic Interception Mode.

  3. On the right side of Sidecar Traffic Interception Mode, select TPROXY, and then click Update Settings.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration of Sidecar Traffic Interception Mode:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
    ...
            - name: PROXY_CONFIG
              value: >-
                {..."interceptionMode":"TPROXY",...}
            - name: ISTIO_META_POD_PORTS
              value: |-
                [
                ]
    ...
          name: istio-proxy
    ...
      initContainers:
        - args:
            - istio-iptables
            - '-p'
            - '15001'
            - '-z'
            - '15006'
            - '-u'
            - '1337'
            - '-m'
            - TPROXY
    ...
          name: istio-init
    ...

    "interceptionMode":"TPROXY" is recorded in the environment variable of the istio-proxy container in the pod. The istio-init container also uses the TPROXY setting to run the initialization commands. This indicates that the configuration of Sidecar Traffic Interception Mode takes effect.

Configure log level

Expand to learn about the configuration

Configuration reference

This configuration item is used to set the log level of the Sidecar proxy container. By default, the log level of the Sidecar proxy is info. You can set the log level to one of the following seven levels, ordered from most to least verbose: trace, debug, info, warning, error, critical, and off.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Monitoring Statistics.

  2. (Optional) Perform this step if you selected workload or Namespace in step 1. Select Log Level.

  3. Select error from the Log Level drop-down list, and then click Update Settings.

    This configuration indicates that the Sidecar proxy outputs logs at the error level or higher.

  4. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  5. Run the following command to view the configuration:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
            - proxy
            - sidecar
            - '--domain'
            - $(POD_NAMESPACE).svc.cluster.local
            - '--proxyLogLevel=error'
    ...
          name: istio-proxy
    ...

    The runtime parameter --proxyLogLevel of the istio-proxy container is set to error, which indicates that the configuration of Log Level takes effect.
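In open-source Istio, the proxy log level can also be set per pod through an annotation. The fragment below is a sketch based on the upstream `sidecar.istio.io/logLevel` annotation:

```yaml
# Sketch (upstream Istio): raise the Sidecar proxy log threshold to error.
template:
  metadata:
    annotations:
      sidecar.istio.io/logLevel: error
```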

Configure proxyStatsMatcher

Expand to learn about the configuration

Description of configuration items

This configuration item defines the custom Envoy statistics metrics reported by the Sidecar proxy. Envoy, as the technical implementation of the Sidecar proxy, can collect and report a wide range of metrics. However, ASM defaults to enabling only a subset of these metrics to minimize performance overhead on the Sidecar proxy.

You can use this configuration item to specify additional metrics that the Sidecar proxy should collect and expose by matching prefixes, suffixes, or regular expressions.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab, and then click Monitoring Statistics.

  2. Select proxyStatsMatcher and Regular Expression Match, set Regular Expression Match to .*outlier_detection.*, and then click Update Settings.

    This configuration indicates that the Sidecar proxy collects the statistics of circuit breaker metrics.

  3. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  4. Run the following command to view the configuration of proxyStatsMatcher:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
     ...
            - name: PROXY_CONFIG
              value: >-
                {..."proxyStatsMatcher":{"inclusionRegexps":[".*outlier_detection.*"]},...}
    ...

    The custom metrics are updated in the environment variables of the istio-proxy container in the pod. This indicates that the configuration of proxyStatsMatcher takes effect.
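As the expected output shows, the matcher is carried in the proxyStatsMatcher field of the proxy configuration. In open-source Istio, the same field can be set per pod through an annotation; the fragment below is a sketch based on upstream Istio:

```yaml
# Sketch (upstream Istio): additionally report circuit breaker (outlier detection) metrics.
template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        proxyStatsMatcher:
          inclusionRegexps:
            - ".*outlier_detection.*"
```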

Configure Envoy runtime parameters

Expand to learn about the configuration

Configuration reference

This configuration item is used to define runtime parameters of Envoy proxy processes in the Sidecar proxy container.

Configuration item

Description

Limits on Downstream Connections

By default, a Sidecar proxy does not limit the number of downstream connections. This may be exploited by malicious activities. For more information, see ISTIO-SECURITY-2020-007. You can configure the maximum number of downstream connections allowed by a Sidecar proxy.

Configuration example

  1. On the Sidecar Proxy Setting page, select a configuration level tab.

  2. In the Envoy Runtime Parameters section, enter 5000 in the input box on the right side of Limits on Downstream Connections and then click Update Settings.

  3. Redeploy the workloads to make the Sidecar proxy configurations take effect.

  4. Run the following command to view the configuration of Envoy Runtime Parameters:

    kubectl get pod -n <Namespace> <Pod name> -o yaml

    Expected output:

    apiVersion: v1
    kind: Pod
    ...
    spec:
      containers:
        - args:
    ...
          env:
            - name: PROXY_CONFIG
              value: >-
                {"concurrency":2,"configPath":"/etc/istio/proxy","discoveryAddress":"istiod-1-22-6.istio-system.svc:15012","holdApplicationUntilProxyStarts":true,"interceptionMode":"REDIRECT","proxyMetadata":{"BOOTSTRAP_XDS_AGENT":"false","DNS_AGENT":"","EXIT_ON_ZERO_ACTIVE_CONNECTIONS":"true"},"runtimeValues":{"overload.global_downstream_max_connections":"5000"},"terminationDrainDuration":"5s","tracing":{"zipkin":{"address":"zipkin.istio-system:9411"}}}
          name: istio-proxy
    ...

    The "runtimeValues":{"overload.global_downstream_max_connections":"5000"} field is added to the PROXY_CONFIG environment variable of the istio-proxy container in the pod. This indicates that the configuration of Envoy Runtime Parameters takes effect.