Elastic Container Instance: Use Auto Scaling to automatically scale elastic container instances

Last Updated: Dec 26, 2022

If you want to use elastic container instances to run containerized applications in scenarios in which Kubernetes is not involved, you can use Alibaba Cloud Auto Scaling to automatically scale instances based on your business requirements. This way, you can reduce resource costs and ensure that your business runs as expected.

Description

Auto Scaling is a service that automatically adjusts computing power by changing the number of instances based on your business requirements and scaling policies. In scenarios in which Kubernetes is not involved, you can use Auto Scaling to automatically scale elastic container instances based on your business requirements. This improves resource utilization and reduces labor and resource costs. For more information, see What is Auto Scaling?

You can add elastic container instances that run the same service to a scaling group, configure the minimum number of instances in the scaling group to ensure daily business operations, and configure the maximum number of instances in the scaling group to prevent excessive costs. You can also run scheduled tasks or event-triggered tasks based on scaling rules to automatically scale the elastic container instances in the scaling group. The following figure shows how Auto Scaling works with elastic container instances.

[Figure: How Auto Scaling works with elastic container instances]

In the following scenarios, you can use Auto Scaling to scale elastic container instances:

  • Changes in data transfers are predictable and instances need to be scaled at specific points in time.

    For example, if your game enterprise has a sharp increase in data transfers from 18:00:00 to 24:00:00 every night, you can create a scheduled task to automatically increase the number of elastic container instances at 18:00:00 and decrease the number of instances at 24:00:00 every day.

  • Changes in data transfers are unpredictable and instances need to be automatically scaled based on metrics.

    For example, if the changes in data transfers of your video streaming enterprise are unpredictable, you can create an event-triggered task to monitor the CPU utilization of elastic container instances in a scaling group. The system automatically scales instances based on the monitoring result to maintain the CPU utilization at 60%.


Procedure

The following figure shows how to use Auto Scaling to automatically scale elastic container instances:

[Figure: Procedure for using Auto Scaling to automatically scale elastic container instances]
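
Each of the following steps can be performed in the Auto Scaling console. If you want to automate the steps, the sketches in this section call the Auto Scaling (ESS) API through the Alibaba Cloud Python SDK core package (aliyun-python-sdk-core). These are minimal sketches, not a complete implementation: the AccessKey pair and region are placeholders, and the ess_action helper is a name introduced here for illustration. Verify the exact operation and parameter names against the current ESS API reference.

    # Shared setup for the API sketches in the following steps.
    # Prerequisite: pip install aliyun-python-sdk-core
    # The AccessKey pair and region ID below are placeholders.
    from aliyunsdkcore.client import AcsClient
    from aliyunsdkcore.request import CommonRequest

    client = AcsClient('<your-access-key-id>', '<your-access-key-secret>', 'cn-hangzhou')

    def ess_action(action, params):
        """Call an Auto Scaling (ESS) API operation and return the raw JSON response."""
        request = CommonRequest()
        request.set_accept_format('json')
        request.set_domain('ess.aliyuncs.com')
        request.set_version('2014-08-28')
        request.set_method('POST')
        request.set_action_name(action)
        for key, value in params.items():
            request.add_query_param(key, value)
        return client.do_action_with_exception(request)
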
  1. Create a scaling group

    A scaling group is used to manage elastic container instances that are designed for the same scenario. In a scaling group, you can specify the minimum and maximum numbers of instances, the instance template that is used to create instances, and the policy that is used to remove instances. This way, the scaling group can manage instances based on your business requirements. For more information, see Create a scaling group.
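
    As a sketch of this step, the following call creates a scaling group of elastic container instances through the CreateScalingGroup operation. It reuses the ess_action helper from the setup sketch above. The group name and vSwitch ID are placeholders, and the assumption is that GroupType=ECI identifies a scaling group of elastic container instances; verify the parameter names against the current API reference.

      # Assumes the `ess_action` helper from the setup sketch above.
      # Creates a scaling group that manages elastic container instances.
      response = ess_action('CreateScalingGroup', {
          'RegionId': 'cn-hangzhou',
          'ScalingGroupName': 'eci-demo-group',   # placeholder name
          'GroupType': 'ECI',                     # scaling group of elastic container instances
          'MinSize': 10,                          # minimum number of instances
          'MaxSize': 20,                          # maximum number of instances
          'VSwitchIds.1': 'vsw-xxxxxxxx',         # placeholder vSwitch ID
          'RemovalPolicy.1': 'OldestInstance',    # policy that is used to remove instances
      })
      print(response)  # the response contains the ScalingGroupId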

  2. Create a scaling configuration

    When Auto Scaling scales out the scaling group, it creates elastic container instances based on the instance configuration source in the scaling configuration and then adds the instances to the scaling group. For more information, see Create a scaling configuration for a scaling group of elastic container instances.

    Note

    In most cases, container images are large. Pulling images when instances start slows down instance creation. We recommend that you enable automatic matching of image caches when you create the scaling configuration so that elastic container instances can be created faster.
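
    The following sketch creates a scaling configuration for the scaling group through the CreateEciScalingConfiguration operation, continuing from the setup sketch above. The container name, image, resource sizes, and security group are placeholders, and AutoMatchImageCache is assumed to correspond to the image cache recommendation in the note; verify the parameter names against the current API reference.

      # Assumes the `ess_action` helper and the ScalingGroupId returned in step 1.
      response = ess_action('CreateEciScalingConfiguration', {
          'RegionId': 'cn-hangzhou',
          'ScalingGroupId': '<scaling-group-id>',         # returned by CreateScalingGroup
          'ScalingConfigurationName': 'eci-demo-config',  # placeholder name
          'ContainerGroupName': 'demo-container-group',   # placeholder name
          'SecurityGroupId': 'sg-xxxxxxxx',               # placeholder security group ID
          'Cpu': 2.0,                                     # vCPUs per instance
          'Memory': 4.0,                                  # memory per instance, in GiB
          'Container.1.Name': 'app',                      # placeholder container name
          'Container.1.Image': '<your-image-address>',    # placeholder container image
          'AutoMatchImageCache': 'true',                  # accelerate creation with image caches
      })
      print(response)  # the response contains the ScalingConfigurationId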

  3. Enable the scaling group

    You can scale elastic container instances only if the scaling group is in the Enabled state. The first time you create a scaling configuration, you are prompted to enable the scaling group. You can also enable a scaling group in the scaling group list. For more information, see Enable a scaling group.
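
    A sketch of enabling the scaling group through the EnableScalingGroup operation, assuming the IDs returned by the previous steps:

      # Assumes the `ess_action` helper and the IDs returned by the previous steps.
      ess_action('EnableScalingGroup', {
          'RegionId': 'cn-hangzhou',
          'ScalingGroupId': '<scaling-group-id>',
          'ActiveScalingConfigurationId': '<scaling-configuration-id>',  # created in step 2
      })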

  4. Create a scaling rule

    Scaling rules are used to trigger scaling activities. Scaling rules are categorized into simple scaling rules, target tracking scaling rules, step scaling rules, and predictive scaling rules. You can create the type of scaling rule that suits your business requirements. For example:

    • Simple scaling rule: Specifies the number of instances that you want to increase or decrease, or specifies the number of instances that you want to maintain in a scaling group.

    • Target tracking scaling rule: Allows you to select a metric and configure a target value for the metric. Then, the system automatically increases or decreases the number of instances to maintain the metric value close to the target value.

    For more information, see Create a scaling rule.
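
    The following sketch creates one rule of each type described above through the CreateScalingRule operation, continuing from the setup sketch. The rule names and values are illustrative; the ScalingRuleType, AdjustmentType, MetricName, and TargetValue parameters follow the ESS API, but verify them against the current API reference.

      # Assumes the `ess_action` helper and the ScalingGroupId from step 1.
      # Simple scaling rule: adjust the scaling group to a fixed total number of instances.
      simple_rule = ess_action('CreateScalingRule', {
          'RegionId': 'cn-hangzhou',
          'ScalingGroupId': '<scaling-group-id>',
          'ScalingRuleName': 'scale-to-15',        # placeholder name
          'ScalingRuleType': 'SimpleScalingRule',
          'AdjustmentType': 'TotalCapacity',       # maintain a fixed number of instances
          'AdjustmentValue': 15,
      })

      # Target tracking scaling rule: keep average CPU utilization close to 60%.
      tracking_rule = ess_action('CreateScalingRule', {
          'RegionId': 'cn-hangzhou',
          'ScalingGroupId': '<scaling-group-id>',
          'ScalingRuleName': 'track-cpu-60',       # placeholder name
          'ScalingRuleType': 'TargetTrackingScalingRule',
          'MetricName': 'CpuUtilization',
          'TargetValue': 60,
      })
      # Each response contains the ScalingRuleAri that is used to execute or schedule the rule.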

  5. Execute a scaling rule

    You can use one of the following methods to execute a scaling rule. This way, you can increase the number of elastic container instances during peak hours to relieve business pressure and decrease the number of instances during off-peak hours to reduce resource costs. A code sketch that covers all three methods follows this list.

    • Manually execute a scaling rule

      If you want to temporarily scale elastic container instances, you can manually execute a scaling rule. For more information, see Execute a scaling rule.

    • Create a scheduled task to automatically execute the scaling rule

      You can configure scheduled tasks to execute scaling rules at the specified time. If the changes in your data transfers are predictable, you can create a scheduled task. For more information, see Create a scheduled task.

    • Create an event-triggered task to automatically execute the scaling rule

      You can configure event-triggered tasks to monitor specific metrics such as CPU utilization, memory usage, and custom metrics and to collect statistics on the metrics in real time. If the statistics meet the specified alert conditions, an alert is triggered and the associated scaling rule is executed. If the changes in your data transfers are unpredictable, you can create an event-triggered task. For more information, see Create an event-triggered task.
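
    The sketch below shows the three methods, assuming the ess_action helper from the setup sketch and a ScalingRuleAri returned by CreateScalingRule. The ExecuteScalingRule, CreateScheduledTask, and CreateAlarm operations correspond to manual execution, scheduled tasks, and event-triggered tasks. LaunchTime is in UTC, and the task names and times are placeholders; verify the parameter names against the current API reference.

      # Assumes the `ess_action` helper and a ScalingRuleAri returned by CreateScalingRule.

      # 1) Manually execute a scaling rule (temporary scaling).
      ess_action('ExecuteScalingRule', {
          'RegionId': 'cn-hangzhou',
          'ScalingRuleAri': '<scaling-rule-ari>',
      })

      # 2) Scheduled task: execute the rule at a fixed time every day (LaunchTime is in UTC).
      ess_action('CreateScheduledTask', {
          'RegionId': 'cn-hangzhou',
          'ScheduledTaskName': 'daily-scale-out',     # placeholder name
          'ScheduledAction': '<scaling-rule-ari>',    # scaling rule to execute
          'LaunchTime': '2022-12-27T09:55Z',          # first execution time, UTC (placeholder)
          'RecurrenceType': 'Daily',
          'RecurrenceValue': '1',                     # repeat every day
          'RecurrenceEndTime': '2023-12-27T09:55Z',   # stop repeating after this time (placeholder)
      })

      # 3) Event-triggered task: execute the rule when average CPU utilization stays above 60%.
      ess_action('CreateAlarm', {
          'RegionId': 'cn-hangzhou',
          'Name': 'cpu-above-60',                     # placeholder name
          'ScalingGroupId': '<scaling-group-id>',
          'MetricName': 'CpuUtilization',
          'MetricType': 'system',
          'Statistics': 'Average',
          'ComparisonOperator': '>=',
          'Threshold': 60,
          'EvaluationCount': 3,                       # consecutive periods that must breach the threshold
          'AlarmAction.1': '<scaling-rule-ari>',      # scaling rule to execute when the alert fires
      })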

Configuration examples

Example 1: Predictable changes in data transfers

Scenario: Your enterprise requires 10 elastic container instances for daily data transfers. However, data transfers increase from 18:00:00 to 23:00:00 every night, so five additional elastic container instances are required during this period.

Configuration:

  1. Create a scaling group. Set the scaling group type to elastic container instance, the minimum number of instances to 10, and the maximum number of instances to 20.

  2. Create a scaling configuration. Specify the instance configuration source when you create the scaling configuration.

  3. Enable the scaling group. The system creates 10 elastic container instances to meet the minimum number of instances.

  4. Create two simple scaling rules:

    • Rule 1: Adjusts the number of instances to 15.

    • Rule 2: Adjusts the number of instances to 10.

  5. Create two scheduled tasks:

    • Task 1: Executes Rule 1 at 17:55:00 every day.

    • Task 2: Executes Rule 2 at 23:05:00 every day.

Result: During off-peak hours, 10 elastic container instances support daily data transfers. During peak hours from 18:00:00 to 23:00:00, 15 elastic container instances are used.
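
As a sketch of this example, the two scheduled tasks could be created as follows, assuming the ess_action helper from the Procedure section and the ScalingRuleAri values of Rule 1 and Rule 2. LaunchTime is in UTC, so convert 17:55:00 and 23:05:00 local time accordingly; the dates and task names are placeholders.

    # Assumes the `ess_action` helper and the ScalingRuleAri values of Rule 1 and Rule 2.
    # LaunchTime is in UTC; convert 17:55:00 and 23:05:00 local time accordingly.
    ess_action('CreateScheduledTask', {
        'RegionId': 'cn-hangzhou',
        'ScheduledTaskName': 'scale-out-before-peak',
        'ScheduledAction': '<rule-1-ari>',          # Rule 1: adjust the group to 15 instances
        'LaunchTime': '2022-12-27T09:55Z',          # placeholder UTC time for 17:55:00 local time
        'RecurrenceType': 'Daily',
        'RecurrenceValue': '1',
        'RecurrenceEndTime': '2023-12-27T09:55Z',   # placeholder end time
    })
    ess_action('CreateScheduledTask', {
        'RegionId': 'cn-hangzhou',
        'ScheduledTaskName': 'scale-in-after-peak',
        'ScheduledAction': '<rule-2-ari>',          # Rule 2: adjust the group to 10 instances
        'LaunchTime': '2022-12-27T15:05Z',          # placeholder UTC time for 23:05:00 local time
        'RecurrenceType': 'Daily',
        'RecurrenceValue': '1',
        'RecurrenceEndTime': '2023-12-27T15:05Z',   # placeholder end time
    })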

Example 2: Unpredictable changes in data transfers

Scenario: Your enterprise requires 10 elastic container instances for daily data transfers. However, your enterprise cannot predict the changes in data transfers or the number of elastic container instances that must be added or removed.

Configuration:

  1. Create a scaling group. Set the scaling group type to elastic container instance, the minimum number of instances to 10, and the maximum number of instances to 30.

  2. Create a scaling configuration. Specify the instance configuration source when you create the scaling configuration.

  3. Enable the scaling group. The system creates 10 elastic container instances to meet the minimum number of instances.

  4. Create a target tracking scaling rule: Set the metric to CPU utilization and the target value to 60%.

  5. The system automatically creates an event-triggered task to monitor the CPU utilization.

Result: During off-peak hours, 10 elastic container instances support daily data transfers. During peak hours, the system monitors the CPU utilization of the elastic container instances in the scaling group and automatically increases the number of instances to maintain the CPU utilization at approximately 60%.

For more information, see Scale elastic container instances.