You can run short-term jobs on elastic container instances to improve resource utilization and reduce computing costs. This topic describes how to use an elastic container instance to run jobs.
Many Kubernetes clusters must concurrently support a variety of online and offline workloads. The traffic of online workloads fluctuates, and the time required to complete offline workloads is unpredictable, so resource demand varies over time. For example, many enterprises perform intensive computing on weekends and in the middle and at the end of each month, and their demand for computing resources increases sharply during these periods.
Typically, a Kubernetes cluster uses an autoscaler to scale out temporary nodes until all pods are scheduled. Deploying a temporary node takes about 2 minutes. After the pods are scheduled and complete their execution, the temporary nodes are automatically released. In this scale-out mode, a pod must wait 2 or more minutes before it can be scheduled.
In this scenario, we recommend that you use elastic container instances to run jobs. You can connect elastic container instances to Kubernetes clusters by deploying virtual nodes. Elastic container instances can be started within seconds and scaled out on demand, which makes Kubernetes clusters more elastic. You do not need to estimate the traffic of your business or reserve idle resources before you run jobs on elastic container instances. This ensures that your business needs are met and reduces your usage and O&M costs.
If you use Container Service for Kubernetes (ACK) clusters, you must deploy virtual nodes within the clusters. Then, you can create elastic container instances on the virtual nodes to run jobs.
For more information, see Use an elastic container instance to run a job.
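As a minimal sketch, a Job can be steered to virtual nodes with a node selector and a matching toleration. The `type: virtual-kubelet` label and the `virtual-kubelet.io/provider=alicloud` toleration below are typical for ACK virtual nodes but are assumptions here; check the actual labels and taints on the virtual nodes in your cluster before using them.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-on-eci          # hypothetical Job name
spec:
  completions: 4
  parallelism: 4
  backoffLimit: 4
  template:
    spec:
      # Schedule the Job's pods onto the virtual node so that they run
      # as elastic container instances instead of on regular nodes.
      nodeSelector:
        type: virtual-kubelet
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Equal
          value: alicloud
          effect: NoSchedule
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```

Because the virtual node is not backed by a fixed machine, `parallelism` can be raised without pre-provisioning capacity; each pod becomes its own elastic container instance.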
If you use Serverless Kubernetes (ASK) clusters, you can directly use elastic container instances to run jobs.
For more information, see Use ASK to run jobs.
If you use self-managed Kubernetes clusters on the cloud or in data centers, you can deploy virtual nodes within the clusters. Then, you can create elastic container instances on the virtual nodes and schedule jobs to the elastic container instances. For information about how to use elastic container instances in self-managed Kubernetes clusters, see Overview.
You can also use preemptible elastic container instances to run jobs at reduced costs. For more information, see Run jobs on a preemptible instance.
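The preemptible (spot) behavior is requested per pod through annotations on the pod template. The sketch below assumes the `k8s.aliyun.com/eci-spot-strategy` and `k8s.aliyun.com/eci-spot-price-limit` annotations; the price value shown is a hypothetical example, and the exact annotation keys and accepted values should be confirmed against the reference linked above.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: spot-job           # hypothetical Job name
spec:
  template:
    metadata:
      annotations:
        # Bid with a maximum hourly price (example value); alternatively,
        # a strategy such as "SpotAsPriceGo" follows the market price
        # without a price limit.
        k8s.aliyun.com/eci-spot-strategy: "SpotWithPriceLimit"
        k8s.aliyun.com/eci-spot-price-limit: "0.25"
    spec:
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo processing batch && sleep 60"]
      restartPolicy: Never
```

Preemptible instances can be reclaimed when the market price exceeds your bid, so they suit jobs that tolerate interruption and retries, such as batch processing.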