Container Compute Service: ACS pod overview

Last Updated: Nov 13, 2025

In cloud computing and containerization environments, a pod is the smallest deployable unit in Kubernetes and consists of one or more containers. The compute class and computing power of a pod affect application performance and resource utilization. Container Compute Service (ACS) provides a variety of compute classes and computing power quality of service (QoS) classes to meet diverse business requirements. This topic describes the compute classes, QoS classes, resource specifications, limits, and key features of ACS pods. The key features include security isolation, CPU, memory, and GPU resource configuration, image pulling, storage, networking, and log collection.

Compute classes

ACS provides cost-effective CPU and GPU compute classes. The compute classes may differ in resource supply and are suitable for different business scenarios.

| Compute class | Label | Benefit |
| --- | --- | --- |
| General-purpose (default) | general-purpose | Suitable for most stateful microservices applications, Java web applications, and computing tasks. |
| Performance-enhanced | performance | Suitable for scenarios that require higher performance, such as CPU-based AI and machine learning model training and inference, and high-performance computing (HPC) batch processing. |
| GPU-accelerated | gpu | Suitable for heterogeneous computing scenarios such as AI and HPC. For example, you can use GPU-accelerated pods to perform inference on a one-pod-one-GPU or one-pod-multi-GPU basis, or to run GPU parallel computing tasks. |
| GPU-HPN | gpu-hpn | Suitable for AI, HPC, and other heterogeneous computing scenarios, such as GPU-accelerated distributed training, distributed inference, and GPU-accelerated high-performance computing. |

To specify the compute class of a pod, add the alibabacloud.com/compute-class label to the pod. The following code blocks show sample NGINX Deployments that run pods of the general-purpose, gpu, and gpu-hpn compute classes.

General-purpose

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        alibabacloud.com/compute-class: general-purpose 
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest

GPU-accelerated

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        # Set the compute class to gpu.
        alibabacloud.com/compute-class: "gpu"
        # example-model indicates the GPU model. Replace it with the actual GPU model, such as T4.
        alibabacloud.com/gpu-model-series: "example-model"
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
        resources:
          limits:
            cpu: 4
            memory: "8Gi"
            nvidia.com/gpu: "1" # Specify the number of GPUs. Set the resource label and quantity based on the actual business requirement. 
          requests:
            cpu: 4
            memory: "8Gi"
            nvidia.com/gpu: "1" # Specify the number of GPUs. Set the resource label and quantity based on the actual business requirement.
Note

For more information about the GPU models supported by ACS and their specifications, see Computing-accelerated.

GPU-HPN

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        # Set the compute class to gpu-hpn.
        alibabacloud.com/compute-class: "gpu-hpn"
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
        resources:
          limits:
            cpu: 4
            memory: "8Gi"
            nvidia.com/gpu: "1" # Specify the number of GPUs. Set the resource label and quantity based on the actual business requirement.
          requests:
            cpu: 4
            memory: "8Gi"
            nvidia.com/gpu: "1" # Specify the number of GPUs. Set the resource label and quantity based on the actual business requirement.
Note

To use HPN GPUs in ACS, first create GPU-HPN capacity reservations.

QoS classes

ACS supports two QoS classes. The resource supply for a pod also varies based on the QoS class of the pod.

| QoS class | Label | Description | Scenario |
| --- | --- | --- | --- |
| Default | default | Computing power allocation may be unstable. ACS does not forcefully evict pods assigned the default computing power QoS class. If issues occur in a pod, ACS performs a hot migration to move the workloads to a new pod or notifies you to manually trigger an eviction. | Microservices applications, web applications, and computing tasks |
| BestEffort | best-effort | Computing power allocation may be unstable. ACS forcefully preempts and evicts pods in specific cases, and sends an event notification 5 minutes before it preempts or evicts a pod. | Big data computing, audio and video transcoding, and batch processing |

To assign a computing power QoS class to a pod, add the alibabacloud.com/compute-qos label to the pod. The following code block shows the sample code of an NGINX application that runs in pods assigned the default computing power QoS class.

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        alibabacloud.com/compute-qos: default
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest 
Note
  • The computing power QoS classes defined by ACS are different from the pod QoS classes defined by Kubernetes. The default computing power QoS class in ACS corresponds to the Guaranteed pod QoS class in Kubernetes.

  • BestEffort instances are supplied from a dynamic resource pool. In production environments, we recommend that you prioritize BestEffort instances when they are in stock and automatically fall back to the default QoS class when they are out of stock. For more information, see Custom resource scheduling policies. A sample Deployment that requests the best-effort QoS class is shown below.
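
The following is a minimal sketch of a Deployment that requests the BestEffort computing power QoS class by setting the alibabacloud.com/compute-qos label to best-effort. It reuses the sample NGINX application from the preceding examples; the Deployment name is illustrative.

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: nginx-deployment-besteffort
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        alibabacloud.com/compute-class: general-purpose
        # Request the BestEffort computing power QoS class.
        alibabacloud.com/compute-qos: best-effort
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest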

Mappings between compute classes and computing power QoS classes

| Compute class (label) | Supported QoS classes (label) |
| --- | --- |
| General-purpose (general-purpose) | Default (default) and BestEffort (best-effort) |
| Performance-enhanced (performance) | Default (default) and BestEffort (best-effort) |
| GPU-accelerated (gpu) | Default (default) and BestEffort (best-effort) |
| GPU-HPN (gpu-hpn) | Default (default) |

Specify the CPU vendor

For the general-purpose and performance-enhanced compute classes, you can choose whether instances are launched with Intel or AMD CPUs.

To specify a CPU vendor, add the alibabacloud.com/cpu-vendors annotation to your pod or define it in the pod template of your workload.

Note the following:

  • To use AMD CPUs, submit a ticket to have your account added to the allowlist.

  • This feature is available only for the general-purpose and performance-enhanced compute classes. If you add this annotation to pods of other compute classes, an error message is returned indicating that the CPU vendor is not supported.

The following values are supported for the annotation:

| Key | Value | Description |
| --- | --- | --- |
| alibabacloud.com/cpu-vendors | intel (default) | Specifies an Intel CPU. If the annotation is not specified, the system defaults to intel. |
| alibabacloud.com/cpu-vendors | amd | Specifies an AMD CPU. Requires allowlisting. |
| alibabacloud.com/cpu-vendors | intel,amd | Allows the system to launch the instance with either an Intel or AMD CPU based on current inventory. You cannot specify a preferred order. |

After the instance is created, you can verify which CPU brand is being used by inspecting the value of the alibabacloud.com/cpu-vendor label in the pod's YAML file.

The following is an example of a Deployment for an NGINX application that requests an AMD CPU.

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        alibabacloud.com/compute-class: general-purpose
        alibabacloud.com/compute-qos: default
      annotations:
        alibabacloud.com/cpu-vendors: amd
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest 
Warning

Do not add ACS system labels (such as alibabacloud.com/compute-class, alibabacloud.com/compute-qos, and alibabacloud.com/cpu-vendor) in the matchLabels selector of your workload controller (such as a Deployment). These labels are managed by the system and can be modified at any time. If a label changes, it will no longer match the selector, causing the controller to frequently recreate pods and leading to application instability.

Key features


Security isolation

ACS pods provide a secure and reliable runtime environment for containers by running sandboxes at the underlying layer to isolate pods from each other. In addition, ACS preferentially schedules different pods to different physical machines to ensure the high availability of the pods.

CPU, memory, GPU, and ephemeral storage configurations

  • You can use standard Kubernetes definitions to configure CPU, memory, GPU, and ephemeral storage requests in the resources.requests parameter of each container. The resource request of a pod equals the sum of the resource requests of all containers that run in the pod, and ACS automatically adjusts the pod-level request (see the example after this list).

  • You can use standard Kubernetes definitions to configure CPU, memory, GPU, and ephemeral storage limits in the resources.limits parameter of each container. The default resource limit of a container in a pod equals the sum of the adjusted resource requests of all containers that run in the pod.
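
To illustrate how container requests add up to the pod-level specification, the following minimal sketch defines two containers that each request 1 vCPU and 2 GiB of memory, so the pod-level request is 2 vCPUs and 4 GiB of memory before any automatic adjustment. The pod and container names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-example # Illustrative name.
  labels:
    alibabacloud.com/compute-class: general-purpose
spec:
  containers:
  - name: app # Requests 1 vCPU and 2 GiB of memory.
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    resources:
      requests:
        cpu: 1
        memory: "2Gi"
  - name: sidecar # Requests 1 vCPU and 2 GiB of memory.
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    resources:
      requests:
        cpu: 1
        memory: "2Gi"
# The pod-level request is the sum of the two containers: 2 vCPUs and 4 GiB of memory.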

Images

By default, each time an ACS pod is restarted, it pulls its image from a remote container registry through the virtual private cloud (VPC) where the pod is deployed. If the image is pulled from a registry over the Internet, you must configure a NAT gateway for the VPC. We recommend that you use Container Registry (ACR) to host container images, which accelerates image pulling over the VPC. In addition, ACS allows you to pull private images from Container Registry without using Secrets.

Storage

ACS supports disk volumes, NAS volumes, Object Storage Service (OSS) volumes, and Cloud Parallel File Storage (CPFS) volumes for data persistence.

  • Cloud disk

    • Supports Enterprise SSDs (ESSDs), ESSD AutoPL disks, standard SSDs, and ultra disks. You can select the preceding disk types based on your business requirements. For more information, see Disk volume overview.

    • ACS allows you to dynamically provision disks as persistent volumes (PVs). For more information, see Mount a dynamically provisioned disk volume.

  • NAS

    • You can use statically provisioned NAS volumes to mount Capacity and Extreme NAS file systems as volumes. If you use dynamically provisioned NAS volumes, Capacity NAS file systems are mounted by default. For more information, see Details.

    • ACS allows you to statically and dynamically provision NAS file systems as PVs. For more information, see NAS volume overview.

  • OSS

  • CPFS

Networks

By default, each ACS pod is assigned a separate IP address and a separate elastic network interface (ENI) from a vSwitch.

ACS pods communicate with each other over the VPC network.

Log collection

You can specify pod environment variables to collect stdout or log files from pods to Simple Log Service.
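
As a sketch only: the exact environment variable names depend on the Simple Log Service collection configuration for ACS, and the aliyun_logs_* convention below is an assumption borrowed from other Alibaba Cloud container services. The pod name, Logstore names, and log path are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-log-example # Illustrative name.
  labels:
    alibabacloud.com/compute-class: general-purpose
spec:
  containers:
  - name: nginx
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    env:
    # Assumed convention: collect container stdout into a Logstore named nginx-stdout.
    - name: aliyun_logs_nginx-stdout
      value: stdout
    # Assumed convention: collect log files under /var/log/nginx into a Logstore named nginx-file.
    - name: aliyun_logs_nginx-file
      value: /var/log/nginx/*.log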

Resource specifications

Warning

Pods of the GPU-accelerated and GPU-HPN compute classes are automatically configured with the Guaranteed QoS class (requests equal limits) when they run in an ACS cluster. However, if you use ACS GPU capacity from another environment, such as Container Service for Kubernetes (ACK) or Alibaba Cloud Distributed Cloud Container Platform (ACK One), you must manually ensure that the pod is submitted with the Guaranteed QoS class to prevent potential status update failures.
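
The following minimal sketch shows what a Guaranteed configuration looks like for a single-GPU container: the requests and limits sections are identical. The pod name, image, and resource amounts are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: gpu-guaranteed-example # Illustrative name.
  labels:
    alibabacloud.com/compute-class: "gpu"
spec:
  containers:
  - name: app
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    resources:
      requests:
        cpu: 4
        memory: "8Gi"
        nvidia.com/gpu: "1"
      limits: # Identical to requests, so the pod QoS class is Guaranteed.
        cpu: 4
        memory: "8Gi"
        nvidia.com/gpu: "1"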

General-purpose

| vCPU | Memory (GiB) | Memory step size (GiB) | Bandwidth (bidirectional), Gbit/s |
| --- | --- | --- | --- |
| 0.25 | 0.5, 1, or 2 | N/A | 0.08 |
| 0.5 | 1–4 | 1 | 0.08 |
| 1 | 1–8 | 1 | 0.1 |
| 1.5 | 2–12 | 1 | 0.1 |
| 2 | 2–16 | 1 | 0.1 |
| 2.5 | 3–20 | 1 | 1.5 |
| 3 | 3–24 | 1 | 1.5 |
| 3.5 | 4–28 | 1 | 1.5 |
| 4 | 4–32 | 1 | 1.5 |
| 4.5 | 5–36 | 1 | 1.5 |
| 5 | 5–40 | 1 | 1.5 |
| 5.5 | 6–44 | 1 | 1.5 |
| 6 | 6–48 | 1 | 1.5 |
| 6.5 | 7–52 | 1 | 2.5 |
| 7 | 7–56 | 1 | 2.5 |
| 7.5 | 8–60 | 1 | 2.5 |
| 8 | 8–64 | 1 | 2.5 |
| 8.5 | 9–68 | 1 | 2.5 |
| 9 | 9–72 | 1 | 2.5 |
| 9.5 | 10–76 | 1 | 2.5 |
| 10 | 10–80 | 1 | 2.5 |
| 10.5 | 11–84 | 1 | 2.5 |
| 11 | 11–88 | 1 | 2.5 |
| 11.5 | 12–92 | 1 | 2.5 |
| 12 | 12–96 | 1 | 2.5 |
| 12.5 | 13–100 | 1 | 3 |
| 13 | 13–104 | 1 | 3 |
| 13.5 | 14–108 | 1 | 3 |
| 14 | 14–112 | 1 | 3 |
| 14.5 | 15–116 | 1 | 3 |
| 15 | 15–120 | 1 | 3 |
| 15.5 | 16–124 | 1 | 3 |
| 16 | 16–128 | 1 | 3 |
| 24 | 24, 48, 96, or 192 | N/A | 4.5 |
| 32 | 32, 64, 128, or 256 | N/A | 6 |
| 48 | 48, 96, 192, or 384 | N/A | 12.5 |
| 64 | 64, 128, 256, or 512 | N/A | 20 |

Storage for all of the preceding specifications is 30 GiB by default. 30 GiB or less is free of charge; if the capacity exceeds 30 GiB, you must pay for the overage. The maximum capacity is 512 GiB. You can expand the storage space by mounting NAS volumes.

Performance-enhanced

| vCPU | Memory (GiB) | Memory step size (GiB) | Bandwidth (bidirectional), Gbit/s |
| --- | --- | --- | --- |
| 0.25 | 0.5, 1, or 2 | N/A | 0.1 |
| 0.5 | 1–4 | 1 | 0.5 |
| 1 | 1–8 | 1 | 0.5 |
| 1.5 | 2–12 | 1 | 0.5 |
| 2 | 2–16 | 1 | 1.5 |
| 2.5 | 3–20 | 1 | 1.5 |
| 3 | 3–24 | 1 | 1.5 |
| 3.5 | 4–28 | 1 | 1.5 |
| 4 | 4–32 | 1 | 2 |
| 4.5 | 5–36 | 1 | 2 |
| 5 | 5–40 | 1 | 2 |
| 5.5 | 6–44 | 1 | 2 |
| 6 | 6–48 | 1 | 2.5 |
| 6.5 | 7–52 | 1 | 2.5 |
| 7 | 7–56 | 1 | 2.5 |
| 7.5 | 8–60 | 1 | 2.5 |
| 8 | 8–64 | 1 | 3 |
| 8.5 | 9–68 | 1 | 3 |
| 9 | 9–72 | 1 | 3 |
| 9.5 | 10–76 | 1 | 3 |
| 10 | 10–80 | 1 | 3.5 |
| 10.5 | 11–84 | 1 | 3.5 |
| 11 | 11–88 | 1 | 3.5 |
| 11.5 | 12–92 | 1 | 3.5 |
| 12 | 12–96 | 1 | 4 |
| 12.5 | 13–100 | 1 | 4 |
| 13 | 13–104 | 1 | 4 |
| 13.5 | 14–108 | 1 | 4 |
| 14 | 14–112 | 1 | 4.5 |
| 14.5 | 15–116 | 1 | 4.5 |
| 15 | 15–120 | 1 | 4.5 |
| 15.5 | 16–124 | 1 | 4.5 |
| 16 | 16–128 | 1 | 6 |
| 24 | 24, 48, 96, or 192 | N/A | 8 |
| 32 | 32, 64, 128, or 256 | N/A | 10 |
| 48 | 48, 96, 192, or 384 | N/A | 16 |
| 64 | 64, 128, 256, or 512 | N/A | 25 |

Storage for all of the preceding specifications is 30 GiB by default. 30 GiB or less is free of charge; if the capacity exceeds 30 GiB, you must pay for the overage. The maximum capacity is 512 GiB. You can expand the storage space by mounting NAS volumes.

Important

To create ACS pods that use more than 16 vCPUs or more than 128 GiB of memory, submit a ticket.

If you do not specify .resources.requests or .resources.limits for containers, 2 vCPUs and 4 GiB of memory are allocated to each pod by default.

ACS automatically rounds the pod specification up to the nearest .resources.requests or .resources.limits value listed in the preceding tables based on the step size and exposes the adjusted value through the alibabacloud.com/pod-use-spec annotation. If the pod specification is rounded up, ACS also adjusts .resources.requests or .resources.limits to ensure that all paid resources are in use.

How ACS adjusts pod specifications

For example, if the .resources.requests or .resources.limits values of a pod are 2 vCPUs and 3.5 GiB of memory, ACS automatically sets the pod specification to 2 vCPUs and 4 GiB of memory when it launches the pod. The additional resources are allocated to the first container, and the alibabacloud.com/pod-use-spec=2-4Gi annotation is added to the pod. Resource requests before adjustment:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
    alibabacloud.com/compute-class: general-purpose
    alibabacloud.com/compute-qos: default
spec:
  containers:
  - name: nginx
    resources:
      requests:
        cpu: 2 # Request 2 vCPUs.
        memory: "3.5Gi" #Request 3.5 GiB of memory.
        ephemeral-storage: "30Gi" # Request 30 GiB of storage space.

Resource requests after adjustment:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    alibabacloud.com/pod-use-spec: "2-4Gi"
  labels:
    app: nginx
    alibabacloud.com/compute-class: general-purpose
    alibabacloud.com/compute-qos: default
spec:
  containers:
  - name: nginx
    resources:
      requests:
        cpu: 2 # Request 2 vCPUs.
        memory: "4Gi" # Request 4 GiB of memory.
        ephemeral-storage: "30Gi" # Request 30 GiB of storage space.

Computing-accelerated

ACS supports the following GPU models. The specifications of different GPU models may vary. For more information, submit a ticket.

GU8TF

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (96 GB) | 2 | 2–16 | 1 | 30–256 |
| 1 (96 GB) | 4 | 4–32 | 1 | 30–256 |
| 1 (96 GB) | 6 | 6–48 | 1 | 30–256 |
| 1 (96 GB) | 8 | 8–64 | 1 | 30–256 |
| 1 (96 GB) | 10 | 10–80 | 1 | 30–256 |
| 1 (96 GB) | 12 | 12–96 | 1 | 30–256 |
| 1 (96 GB) | 14 | 14–112 | 1 | 30–256 |
| 1 (96 GB) | 16 | 16–128 | 1 | 30–256 |
| 1 (96 GB) | 22 | 22, 32, 64, 128 | N/A | 30–256 |
| 2 (96 GB) | 16 | 16–128 | 1 | 30–512 |
| 2 (96 GB) | 32 | 32, 64, 128, 230 | N/A | 30–512 |
| 2 (96 GB) | 46 | 64, 128, 230 | N/A | 30–512 |
| 4 (96 GB) | 32 | 32, 64, 128, 256 | N/A | 30–1,024 |
| 4 (96 GB) | 64 | 64, 128, 256, 460 | N/A | 30–1,024 |
| 4 (96 GB) | 92 | 128, 256, 460 | N/A | 30–1,024 |
| 8 (96 GB) | 64 | 64, 128, 256, 512 | N/A | 30–2,048 |
| 8 (96 GB) | 128 | 128, 256, 512, 920 | N/A | 30–2,048 |
| 8 (96 GB) | 184 | 256, 512, 920 | N/A | 30–2,048 |

GU8TEF

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (141 GB) | 2 | 2–16 | 1 | 30–768 |
| 1 (141 GB) | 4 | 4–32 | 1 | 30–768 |
| 1 (141 GB) | 6 | 6–48 | 1 | 30–768 |
| 1 (141 GB) | 8 | 8–64 | 1 | 30–768 |
| 1 (141 GB) | 10 | 10–80 | 1 | 30–768 |
| 1 (141 GB) | 12 | 12–96 | 1 | 30–768 |
| 1 (141 GB) | 14 | 14–112 | 1 | 30–768 |
| 1 (141 GB) | 16 | 16–128 | 1 | 30–768 |
| 1 (141 GB) | 22 | 22, 32, 64, 128, 225 | N/A | 30–768 |
| 2 (141 GB) | 16 | 16–128 | 1 | 30–1,536 |
| 2 (141 GB) | 32 | 32, 64, 128, 256 | N/A | 30–1,536 |
| 2 (141 GB) | 46 | 64, 128, 256, 450 | N/A | 30–1,536 |
| 4 (141 GB) | 32 | 32, 64, 128, 256 | N/A | 30–3,072 |
| 4 (141 GB) | 64 | 64, 128, 256, 512 | N/A | 30–3,072 |
| 4 (141 GB) | 92 | 128, 256, 512, 900 | N/A | 30–3,072 |
| 8 (141 GB) | 64 | 64, 128, 256, 512 | N/A | 30–6,144 |
| 8 (141 GB) | 128 | 128, 256, 512, 1,024 | N/A | 30–6,144 |
| 8 (141 GB) | 184 | 256, 512, 1,024, 1,800 | N/A | 30–6,144 |

L20 (GN8IS)

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (48 GB) | 2 | 2–16 | 1 | 30–256 |
| 1 (48 GB) | 4 | 4–32 | 1 | 30–256 |
| 1 (48 GB) | 6 | 6–48 | 1 | 30–256 |
| 1 (48 GB) | 8 | 8–64 | 1 | 30–256 |
| 1 (48 GB) | 10 | 10–80 | 1 | 30–256 |
| 1 (48 GB) | 12 | 12–96 | 1 | 30–256 |
| 1 (48 GB) | 14 | 14–112 | 1 | 30–256 |
| 1 (48 GB) | 16 | 16–120 | 1 | 30–256 |
| 2 (48 GB) | 16 | 16–128 | 1 | 30–512 |
| 2 (48 GB) | 32 | 32, 64, 128, 230 | N/A | 30–512 |
| 4 (48 GB) | 32 | 32, 64, 128, 256 | N/A | 30–1,024 |
| 4 (48 GB) | 64 | 64, 128, 256, 460 | N/A | 30–1,024 |
| 8 (48 GB) | 64 | 64, 128, 256, 512 | N/A | 30–2,048 |
| 8 (48 GB) | 128 | 128, 256, 512, 920 | N/A | 30–2,048 |

L20X (GX8SF)

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 8 (141 GB) | 184 | 1,800 | N/A | 30–6,144 |

P16EN

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (96 GB) | 2 | 2–16 | 1 | 30–384 |
| 1 (96 GB) | 4 | 4–32 | 1 | 30–384 |
| 1 (96 GB) | 6 | 6–48 | 1 | 30–384 |
| 1 (96 GB) | 8 | 8–64 | 1 | 30–384 |
| 1 (96 GB) | 10 | 10–80 | 1 | 30–384 |
| 2 (96 GB) | 4 | 4–32 | 1 | 30–768 |
| 2 (96 GB) | 6 | 6–48 | 1 | 30–768 |
| 2 (96 GB) | 8 | 8–64 | 1 | 30–768 |
| 2 (96 GB) | 16 | 16–128 | 1 | 30–768 |
| 2 (96 GB) | 22 | 32, 64, 128, 225 | N/A | 30–768 |
| 4 (96 GB) | 8 | 8–64 | 1 | 30–1,536 |
| 4 (96 GB) | 16 | 16–128 | 1 | 30–1,536 |
| 4 (96 GB) | 32 | 32, 64, 128, 256 | N/A | 30–1,536 |
| 4 (96 GB) | 46 | 64, 128, 256, 450 | N/A | 30–1,536 |
| 8 (96 GB) | 16 | 16–128 | 1 | 30–3,072 |
| 8 (96 GB) | 32 | 32, 64, 128, 256 | N/A | 30–3,072 |
| 8 (96 GB) | 64 | 64, 128, 256, 512 | N/A | 30–3,072 |
| 8 (96 GB) | 92 | 128, 256, 512, 900 | N/A | 30–3,072 |
| 16 (96 GB) | 32 | 32, 64, 128, 256 | N/A | 30–6,144 |
| 16 (96 GB) | 64 | 64, 128, 256, 512 | N/A | 30–6,144 |
| 16 (96 GB) | 128 | 128, 256, 512, 1,024 | N/A | 30–6,144 |
| 16 (96 GB) | 184 | 256, 512, 1,024, 1,800 | N/A | 30–6,144 |

G49E

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (48 GB) | 2 | 2–16 | 1 | 30–256 |
| 1 (48 GB) | 4 | 4–32 | 1 | 30–256 |
| 1 (48 GB) | 6 | 6–48 | 1 | 30–256 |
| 1 (48 GB) | 8 | 8–64 | 1 | 30–256 |
| 1 (48 GB) | 10 | 10–80 | 1 | 30–256 |
| 1 (48 GB) | 12 | 12–96 | 1 | 30–256 |
| 1 (48 GB) | 14 | 14–112 | 1 | 30–256 |
| 1 (48 GB) | 16 | 16–120 | 1 | 30–256 |
| 2 (48 GB) | 16 | 16–128 | 1 | 30–512 |
| 2 (48 GB) | 32 | 32, 64, 128, 230 | N/A | 30–512 |
| 4 (48 GB) | 32 | 32, 64, 128, 256 | N/A | 30–1,024 |
| 4 (48 GB) | 64 | 64, 128, 256, 460 | N/A | 30–1,024 |
| 8 (48 GB) | 64 | 64, 128, 256, 512 | N/A | 30–2,048 |
| 8 (48 GB) | 128 | 128, 256, 512, 920 | N/A | 30–2,048 |

T4

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (16 GB) | 2 | 2–8 | 1 | 30–1,536 |
| 1 (16 GB) | 4 | 4–16 | 1 | 30–1,536 |
| 1 (16 GB) | 6 | 6–24 | 1 | 30–1,536 |
| 1 (16 GB) | 8 | 8–32 | 1 | 30–1,536 |
| 1 (16 GB) | 10 | 10–40 | 1 | 30–1,536 |
| 1 (16 GB) | 12 | 12–48 | 1 | 30–1,536 |
| 1 (16 GB) | 14 | 14–56 | 1 | 30–1,536 |
| 1 (16 GB) | 16 | 16–64 | 1 | 30–1,536 |
| 1 (16 GB) | 24 | 24, 48, 90 | N/A | 30–1,536 |
| 2 (16 GB) | 16 | 16–64 | 1 | 30–1,536 |
| 2 (16 GB) | 24 | 24, 48, 96 | N/A | 30–1,536 |
| 2 (16 GB) | 32 | 32, 64, 128 | N/A | 30–1,536 |
| 2 (16 GB) | 48 | 48, 96, 180 | N/A | 30–1,536 |

A10

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) |
| --- | --- | --- | --- | --- |
| 1 (24 GB) | 2 | 2–8 | 1 | 30–256 |
| 1 (24 GB) | 4 | 4–16 | 1 | 30–256 |
| 1 (24 GB) | 6 | 6–24 | 1 | 30–256 |
| 1 (24 GB) | 8 | 8–32 | 1 | 30–256 |
| 1 (24 GB) | 10 | 10–40 | 1 | 30–256 |
| 1 (24 GB) | 12 | 12–48 | 1 | 30–256 |
| 1 (24 GB) | 14 | 14–56 | 1 | 30–256 |
| 1 (24 GB) | 16 | 16–60 | 1 | 30–256 |
| 2 (24 GB) | 16 | 16–64 | 1 | 30–512 |
| 2 (24 GB) | 32 | 32, 64, 120 | N/A | 30–512 |
| 4 (24 GB) | 32 | 32, 64, 128 | N/A | 30–1,024 |
| 4 (24 GB) | 64 | 64, 128, 240 | N/A | 30–1,024 |
| 8 (24 GB) | 64 | 64, 128, 256 | N/A | 30–2,048 |
| 8 (24 GB) | 128 | 128, 256, 480 | N/A | 30–2,048 |

G59

| GPU count (memory per GPU) | vCPUs | Memory options (GiB) | Memory increment (GiB) | Storage range (GiB) | Network bandwidth |
| --- | --- | --- | --- | --- | --- |
| 1 (32 GB) | 2 | 2–16 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 4 | 4–32 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 6 | 6–48 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 8 | 8–64 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 10 | 10–80 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 12 | 12–96 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 14 | 14–112 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 16 | 16–128 | 1 | 30–256 | 1 Gbps per vCPU |
| 1 (32 GB) | 22 | 22, 32, 64, 128 | N/A | 30–256 | 1 Gbps per vCPU |
| 2 (32 GB) | 16 | 16–128 | 1 | 30–512 | 1 Gbps per vCPU |
| 2 (32 GB) | 32 | 32, 64, 128, 256 | N/A | 30–512 | 1 Gbps per vCPU |
| 2 (32 GB) | 46 | 64, 128, 256, 360 | N/A | 30–512 | 1 Gbps per vCPU |
| 4 (32 GB) | 32 | 32, 64, 128, 256 | N/A | 30–1,024 | 1 Gbps per vCPU |
| 4 (32 GB) | 64 | 64, 128, 256, 512 | N/A | 30–1,024 | 1 Gbps per vCPU |
| 4 (32 GB) | 92 | 128, 256, 512, 720 | N/A | 30–1,024 | 1 Gbps per vCPU |
| 8 (32 GB) | 64 | 64, 128, 256, 512 | N/A | 30–2,048 | 1 Gbps per vCPU |
| 8 (32 GB) | 128 | 128, 256, 512, 1,024 | N/A | 30–2,048 | 100 Gbps per vCPU |
| 8 (32 GB) | 184 | 256, 512, 1,024, 1,440 | N/A | 30–2,048 | 100 Gbps per vCPU |

Important

The preceding GPU models share the same specifications in pay-as-you-go, capacity reservation, and BestEffort scenarios. Take note of the following:

  • For specifications with 16 GB of memory or less, the memory overheads are allocated to ACS. For specifications with more than 16 GB of memory, the memory overheads are allocated to the corresponding pods. Make sure that sufficient resources are reserved for the application to ensure its stability.

  • For system disks of 30 GB or less (including the image size), no additional fee is charged. For system disks larger than 30 GB, the overage is billed.

Automatic specification adjustment

The default resource request of a GPU-accelerated pod is 2 vCPUs, 2 GiB of memory, and 1 GPU.

If the requested specification is not included in the preceding tables, ACS automatically adjusts the pod specification. The adjustment does not change the value of the .resources.requests parameter. Instead, the alibabacloud.com/pod-use-spec annotation is added to the pod configuration to indicate the adjusted specification. If the resource limit (.resources.limits) of a container in a pod is greater than the resource request of the pod, ACS resets the resource limit of the container to the value of the resource request of the pod.

Note
  • If a pod requests 2 vCPUs and 3.5 GiB of memory, ACS adjusts the pod specification to 2 vCPUs and 4 GiB of memory. The additional resources are allocated to the first container in the pod, and the alibabacloud.com/pod-use-spec=2-4Gi annotation is added to the pod. In this case, if the resource limit of a container in the pod is 3 vCPUs and 5 GiB of memory, ACS resets the resource limit to 2 vCPUs and 5 GiB of memory.

  • If the GPU resource request of a pod is not included in the preceding table, the pod creation request cannot be submitted.

GPU-HPN

For pods scheduled on High Performance Networking (HPN) GPU instances, ACS automatically adjusts resource specifications to enforce a Guaranteed QoS class. It does this by setting the resource limits to be equal to the requests you define.

However, a pod's resource requests are still constrained by the total capacity of the node. If your pod requests more resources than the node can provide, it will fail to schedule and remain in a pending state due to insufficient resources.

For details on available resources for each instance type, see the node specification documentation.

Limits on Kubernetes

ACS is seamlessly integrated with Kubernetes through virtual nodes. ACS pods are not collectively deployed on a single physical machine; they are spread across Alibaba Cloud resource pools. For public cloud security reasons and due to the limits of virtual nodes, ACS does not support certain Kubernetes features, such as hostPath volumes and DaemonSets. The following table describes the details.

| Limit | Description | Solution to validation failures | Recommended alternative |
| --- | --- | --- | --- |
| DaemonSet | DaemonSets are not supported. | Pods can run but cannot work as expected. | Deploy sidecar containers in the pod. |
| NodePort Services | NodePort Services are not supported. NodePort Services expose containers by using ports on the host. | You cannot submit requests for creating NodePort Services. | Create Services of type=LoadBalancer (see the example after this table). |
| HostNetwork | The hostNetwork mode is not supported. The hostNetwork mode exposes containers by using ports on the host. | Set hostNetwork=false to disable the hostNetwork mode. | None |
| HostIPC | The hostIPC mode is not supported. The hostIPC mode enables processes in containers to communicate with processes on the host. | Set hostIPC=false to disable the hostIPC mode. | None |
| HostPID | The hostPID mode is not supported. The hostPID mode enables processes in containers to access the process ID (PID) namespace on the host. | Set hostPID=false to disable the hostPID mode. | None |
| HostUsers | User namespaces are not supported. | Leave the hostUsers parameter empty. | None |
| DNSPolicy | Only specific DNS policies are supported: None, Default, and ClusterFirst. | If you specify the ClusterFirstWithHostNet policy, ACS automatically replaces it with the ClusterFirst policy. You cannot submit requests for configuring other DNS policies. | Specify a supported DNS policy. |
| Container environment variables | In addition to the standard Kubernetes constraints, ACS requires that environment variable names for the GPU-accelerated and GPU-HPN compute classes consist of letters, digits, underscores (_), periods (.), or hyphens (-), and do not start with a digit. | The pod fails to start. | Use a compliant environment variable name. |
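
Because NodePort Services are not supported, expose workloads through a LoadBalancer Service instead. The following minimal sketch exposes the sample NGINX Deployment used throughout this topic; the Service name is illustrative, and annotations for a specific load balancer configuration are omitted.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service # Illustrative name.
spec:
  type: LoadBalancer # Use a LoadBalancer Service instead of a NodePort Service.
  selector:
    app: nginx # Matches the pods of the sample NGINX Deployment.
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP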

Port usage

The following table describes the ports used by ACS. Do not use the following ports for your applications.

| Port | Description |
| --- | --- |
| 111, 10250, and 10255 | ACS clusters use these ports to perform the exec, logs, and metrics operations. |