As an enterprise-level user, you can select instance configurations by following the recommendations in this topic.



Note

For more information about the instance types, see instance generations and type families.

As an enterprise-level user of ECS, you may have specific requirements. To meet these requirements, Alibaba Cloud provides instance configuration recommendations for the following scenarios (see also the screening sketch after the list):

  • Balanced performance

    A balanced CPU-to-memory ratio is required to meet application resource requirements in most scenarios.

  • Applications with high packet forwarding rate

    A high packet forwarding rate is required. Select an appropriate ratio of computing capacity to memory based on your specific scenario.

  • High performance computing

    A large amount of computing resources is required. Typical workloads in this scenario include GPU-based parallel computing and applications that require a high clock speed.

  • High-performance client games

    Your services require high-frequency processors to support more concurrent users. Therefore, a high clock speed is required in this scenario.

  • Mobile and Web games

    A large amount of computing resources is required for this scenario. A CPU-to-memory ratio of 1:2 offers the best cost-effectiveness for computing resources.

  • Video forwarding

    A large amount of computing resources is required for this scenario. A CPU-to-memory ratio of 1:2 offers the best cost-effectiveness for computing resources.

  • Live streaming with bullet screen comments

    A high packet forwarding rate is required for this scenario. Select an appropriate ratio of computing capacity to memory based on your specific scenario.

  • Relational databases

    In this scenario, SSD cloud disks or higher-performance local NVMe SSDs are required to provide higher IOPS and low read latency. A balanced CPU-to-memory ratio (1:4) or a higher memory proportion (1:8) is required.

  • Distributed cache

    In this scenario, a balanced CPU-to-memory ratio (1:4) or a higher memory proportion (1:8) and stable computing performance are required.

  • NoSQL databases

    In this scenario, SSD cloud disks or higher-performance local NVMe SSDs are required to provide higher IOPS and low read latency. A balanced CPU-to-memory ratio (1:4) or a higher memory proportion (1:8) is required.

  • Elasticsearch

    In this scenario, SSD cloud disks or higher-performance local NVMe SSDs are required to provide higher IOPS and low read latency. A balanced CPU-to-memory ratio (1:4) or a higher memory proportion (1:8) is required.

  • Hadoop

    Data nodes require high disk throughput, high network throughput, and a balanced CPU-to-memory ratio. Computing nodes focus more on computing performance, network bandwidth, and the CPU-to-memory ratio.

  • Spark

    Data nodes require high disk throughput, high network throughput, and a balanced CPU-to-memory ratio. Computing nodes focus more on computing performance, network bandwidth, and the CPU-to-memory ratio.

  • Kafka

    Data nodes require high disk throughput, high network throughput, and a balanced CPU-to-memory ratio. Computing nodes focus more on computing performance, network bandwidth, and the CPU-to-memory ratio.

  • Machine learning

    In this scenario, a high-performance NVIDIA GPU is required, and the instance memory size must be at least twice the GPU memory (video memory).

  • Video encoding

    In this scenario, a high-performance GPU or a high-performance CPU is required for video encoding and decoding.

  • Rendering

    In this scenario, a high-performance GPU is required for rendering.
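
The ratio guidance in this list can be turned into a simple screening check when you shortlist candidate instance types. The following Python sketch is illustrative only: the InstanceSpec class, the fits_scenario function, and all sample names and values are assumptions made for this example and are not part of any ECS SDK or API.

# Illustrative sketch: screen candidate instance specifications against the
# CPU-to-memory ratio guidance above. All names and values here are
# hypothetical examples, not real ECS instance type data or API calls.
from dataclasses import dataclass


@dataclass
class InstanceSpec:
    name: str                    # candidate instance type name (example value)
    vcpus: int                   # number of vCPUs
    memory_gib: float            # memory size in GiB
    gpu_memory_gib: float = 0.0  # GPU memory (video memory) in GiB, if any


def memory_per_vcpu(spec: InstanceSpec) -> float:
    """Return GiB of memory per vCPU, for example 4.0 for a 1:4 CPU-to-memory ratio."""
    return spec.memory_gib / spec.vcpus


def fits_scenario(spec: InstanceSpec, scenario: str) -> bool:
    """Check a candidate spec against the recommendations in this topic."""
    ratio = memory_per_vcpu(spec)
    if scenario in ("mobile and web games", "video forwarding"):
        # A 1:2 CPU-to-memory ratio offers the best cost-effectiveness.
        return ratio == 2.0
    if scenario in ("relational databases", "nosql databases",
                    "elasticsearch", "distributed cache"):
        # Balanced (1:4) or memory-heavy (1:8) ratio.
        return ratio in (4.0, 8.0)
    if scenario == "machine learning":
        # Instance memory must be at least twice the GPU memory.
        return spec.gpu_memory_gib > 0 and spec.memory_gib >= 2 * spec.gpu_memory_gib
    # Other scenarios in the list have no fixed ratio rule; check them manually.
    return True


# Example: a hypothetical 8 vCPU / 32 GiB candidate evaluated for a database workload.
candidate = InstanceSpec(name="example-candidate", vcpus=8, memory_gib=32)
print(fits_scenario(candidate, "relational databases"))  # True, because 32 / 8 = 4 (a 1:4 ratio)

A check such as this only covers the CPU-to-memory and GPU-memory rules stated above; disk type, IOPS, packet forwarding rate, and clock speed requirements still need to be verified against the instance family specifications.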