Container Service for Kubernetes (ACK) supports production workloads across four common scenarios: continuous delivery pipelines, microservices applications, hybrid cloud deployments, and traffic-driven auto scaling. This page describes each scenario, the Alibaba Cloud services typically used alongside ACK, and links to hands-on guides.
DevOps and continuous delivery
ACK integrates with Jenkins to automate the CI/CD pipeline from code commit to application deployment. Code advances to deployment only after passing automated tests, eliminating manual handoffs and reducing the risk of releasing untested changes.
Example: A development team pushes a commit to their repository. Jenkins triggers an automated build, runs the test suite, and—on success—builds a new container image and deploys it to the ACK cluster. Failed tests block the deployment automatically.
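The gating behavior described above can be sketched in a few lines. This is an illustrative Python sketch of the pipeline's control flow only, not a real Jenkins or ACK API; the function parameters (`run_tests`, `build_image`, `deploy`) are hypothetical stand-ins for the pipeline's stages.

```python
def run_pipeline(commit, run_tests, build_image, deploy):
    """Build -> test -> deploy, blocking deployment on any test failure."""
    results = run_tests(commit)           # run the project's test suite
    if not all(results):
        return "blocked: tests failed"    # failed tests stop the release
    image = build_image(commit)           # build a new container image
    deploy(image)                         # roll it out to the ACK cluster
    return f"deployed {image}"
```

The key property is that `deploy` is only reachable after every test result is truthy, which mirrors how the pipeline removes manual handoffs.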
Benefits:
DevOps pipeline automation — Automates the entire flow from code updates to code builds, image builds, and application deployments.
Environment consistency — Delivers code and runtime environments using the same architecture across all stages.
Continuous feedback — Provides immediate feedback on every integration and delivery event.
Recommended services: Elastic Compute Service (ECS) and ACK.
Microservices architecture
ACK manages microservice applications through Alibaba Cloud image repositories and handles scheduling, orchestration, deployment, and canary releases—so teams can focus on feature work rather than infrastructure. Microservices on ACK benefit from high cohesion, low coupling, and high fault tolerance by design.
Example: A team running an e-commerce platform splits checkout, inventory, and notification into separate microservices. When the notification service experiences a traffic spike, throttling and peak-load shifting protect the rest of the platform. A configuration change is validated through a canary release before rolling out to all users, with graceful start and shutdown ensuring no in-flight requests are dropped.
Benefits (available without modifying application code or configurations):
Elimination of risks during application updates — Combines configuration management, graceful start and shutdown, and end-to-end canary releases to reduce deployment risk.
Elimination of risks that are caused by occasional issues — Applies throttling protection, peak-load shifting, fault isolation, and degradation protection when traffic spikes or dependent services become unavailable.
Agile development of microservices with low costs — Expands logically isolated development environments without adding physical servers, eliminating environment conflicts and enabling faster iteration.
Recommended services: Microservice Engine (MSE), ECS, ApsaraDB RDS, Object Storage Service (OSS), and ACK.
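MSE provides throttling and peak-load shifting as managed features; to illustrate the underlying idea only, here is a minimal token-bucket sketch in Python. The class and its parameters are assumptions for illustration, not an MSE or ACK API.

```python
import time

class TokenBucket:
    """Minimal token-bucket throttle: admit a request only when a token
    is available, shedding excess load during a traffic spike."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # request throttled
```

A bucket with `capacity=2` admits a burst of two requests, then rejects further calls until tokens refill, which is how throttling isolates the rest of the platform from one service's spike.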
Hybrid cloud architecture
ACK runs on any infrastructure, so the same container images and orchestration templates work for both cloud and on-premises environments. Operations and Maintenance (O&M) teams manage cloud and on-premises resources from a single ACK console.
Example: A financial services team runs its core transaction system on-premises for low-latency processing and mirrors it to the cloud for disaster recovery. During planned maintenance on the on-premises cluster, traffic shifts automatically to cloud nodes. When on-premises capacity is restored, the team releases it back without any application changes.
Benefits:
Application scaling in the cloud — During peak hours, ACK scales out applications in the cloud and forwards traffic to the additional capacity automatically.
Disaster recovery in the cloud — Run primary workloads on-premises and use the cloud as a standby environment for failover.
On-premises development and testing — Build and test applications on-premises, then release them to the cloud using the same images and templates.
Recommended services: ECS, Virtual Private Cloud (VPC), and Express Connect.
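The failover behavior in the example above amounts to a priority-ordered health check: prefer on-premises capacity, fall back to the cloud standby. As a rough sketch (the endpoint list and health flags are hypothetical inputs, e.g. from an external health probe, not an ACK API):

```python
def route(endpoints):
    """Return the first healthy endpoint in priority order:
    on-premises first, cloud standby next."""
    for name, healthy in endpoints:
        if healthy:
            return name
    raise RuntimeError("no healthy endpoint available")
```

Because the same images and templates run in both environments, shifting traffic this way requires no application changes.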
Auto scaling architecture
ACK scales workloads in and out based on network traffic. When traffic reaches a configured threshold, a scale-out event triggers within a few seconds. When traffic drops, containers scale in automatically, freeing resources without manual intervention.
Example: A media platform sees predictable traffic spikes every evening. The Horizontal Pod Autoscaler (HPA) monitors traffic metrics via CloudMonitor and adds pods within seconds as the spike begins. After peak hours, the cluster scales back in, reclaiming idle capacity and reducing compute costs overnight.
Benefits:
Quick response — A scale-out event triggers within a few seconds when network traffic reaches the threshold.
Auto scaling — The scaling process is fully automated, eliminating errors caused by manual intervention.
Low costs — Containers scale in automatically when traffic drops, releasing idle capacity and improving resource utilization across the cluster.
Recommended services: ECS and CloudMonitor.
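The scaling decision the HPA makes follows the standard Kubernetes formula: desired replicas = ceil(current replicas × current metric / target metric), clamped to the configured bounds. A minimal sketch (the min/max defaults here are illustrative, not ACK defaults):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Standard Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the [min_replicas, max_replicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 pods each seeing twice the target load scale out to 8, and at half the target load they scale in to 2, which is the "low costs" behavior described above.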