Use an elastic container instance to run jobs

Last Updated: Apr 27, 2021

You can run short-lived jobs on elastic container instances to avoid wasting resources and to reduce computing costs. This topic describes how to use an elastic container instance to run jobs.

Many Kubernetes clusters must concurrently support a variety of online and offline workloads. The traffic volume of online workloads fluctuates, and the amount of time required to complete offline workloads is unpredictable, which causes resource demands to vary over time. For example, many enterprises perform intensive computing on weekends and in the middle and at the end of each month, and their demand for computing resources increases sharply during these periods.

Typically, a Kubernetes cluster uses an autoscaler to scale out temporary nodes until all pods can be scheduled. Deploying a temporary node takes about 2 minutes, and the temporary nodes are automatically released after the pods finish running. In this scale-out mode, a pod must wait 2 minutes or more before it can be scheduled.

In this scenario, we recommend that you use elastic container instances to run jobs. You can connect elastic container instances to Kubernetes clusters by deploying virtual nodes. Elastic container instances can be started within seconds and scale out on demand, which makes Kubernetes clusters more elastic. You do not need to estimate the traffic volume of your business or reserve idle resources before you use elastic container instances to run jobs. This ensures that your business needs are met and reduces your usage and O&M costs.
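As an illustration, a standard Kubernetes Job can be directed to a virtual node with a node selector and a toleration, as in the sketch below. The label value `virtual-kubelet` and the toleration key `virtual-kubelet.io/provider` are assumptions based on common virtual-kubelet deployments; the exact label and toleration depend on how the virtual nodes in your cluster are configured.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 4      # run the task to completion 4 times in total
  parallelism: 2      # run at most 2 pods concurrently
  backoffLimit: 4
  template:
    spec:
      # Schedule the pods onto virtual nodes so that they run on
      # elastic container instances. The label and toleration below
      # are assumed values; check your virtual node configuration.
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```

When the Job is submitted, the pods are scheduled to the virtual node and the backing elastic container instances are created on demand, so no temporary worker nodes need to be provisioned in advance.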

Note

You can also use preemptible elastic container instances to run jobs at reduced costs. For more information, see Use a preemptible instance to run a job.