
Container Service for Kubernetes: Registered clusters

Last Updated: Mar 26, 2026

ACK One allows you to connect external Kubernetes clusters, such as those in your data center or on a third-party public cloud, to Container Service for Kubernetes (ACK) for unified management, enabling you to build and operate a hybrid cloud architecture. This topic outlines the key features and use cases of registered clusters.

Registered cluster console


Features


If you manage Kubernetes clusters across different environments, such as Container Service for Kubernetes (ACK) clusters, self-managed Kubernetes clusters in your data center, or clusters on a third-party public cloud, you can use ACK One to register these clusters for unified management. Consider using registered clusters if you have the following requirements for your hybrid or multi-cloud architecture:

  • Hybrid cloud elasticity: Elastically scale your self-managed Kubernetes clusters by adding cloud-based resources, such as Elastic Compute Service (ECS) instances, physical servers, or serverless resources like Elastic Container Instance (ECI). The ack-co-scheduler provides flexible scaling policies for managing resources across your data center and the cloud: you can prioritize resource scale-out, scale in on demand, distribute pod replicas proportionally, and elastically scale GPU-based node pools.
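A scale-out priority such as "existing nodes first, then cloud ECS, then serverless ECI" is typically expressed declaratively. The following is a minimal sketch assuming the ResourcePolicy API used with ack-co-scheduler; the label values and node pool name are illustrative, and field details may differ across component versions:

```yaml
# Illustrative ResourcePolicy sketch for ack-co-scheduler.
# Labels, node pool names, and ordering are examples only.
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: web-app-scaling-policy
  namespace: default
spec:
  selector:
    app: web-app            # pods with this label follow the policy
  strategy: prefer          # try each unit in order before falling back
  units:
    - resource: ecs         # first: schedule onto cloud ECS nodes
      nodeSelector:
        alibabacloud.com/nodepool: cloud-pool   # hypothetical node pool label
    - resource: eci         # then: overflow to serverless ECI
```

With a policy like this, steady-state traffic stays on existing capacity while peak traffic spills over to ECS and then ECI, and scale-in can drain the serverless units first.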

  • Consistent operational experience with ACK: Manage all your Kubernetes clusters—whether on Alibaba Cloud or in your data center—from a single console for a consistent operational experience and unified security governance. You can manage clusters and applications, centralize logs, monitoring, and alerts, and apply consistent authorization policies using Alibaba Cloud accounts, RAM users, and RAM roles.

  • AI and big data capabilities: Improve computing efficiency by 30% to 40% with topology-aware CPU scheduling and NUMA awareness for mainstream servers. Increase GPU resource utilization by up to 300% through GPU sharing and scheduling. Scale heterogeneous resources flexibly with unified management across cloud and on-premises environments. Use Fluid as a distributed cache that unifies storage access in a hybrid cloud, accelerating data access by up to 10x and reducing bandwidth usage by 90%.
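For the Fluid-based caching mentioned above, a data source is declared as a Dataset backed by a cache runtime. A minimal sketch using the open source Fluid Dataset and AlluxioRuntime CRDs follows; the bucket path and sizing here are hypothetical:

```yaml
# Hypothetical Fluid Dataset caching an object storage bucket
# so that hybrid cloud workloads read from a local cache tier.
apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: training-data
spec:
  mounts:
    - mountPoint: oss://example-bucket/training/   # hypothetical bucket
      name: training
---
# Cache runtime: keeps hot data in memory near the compute nodes.
apiVersion: data.fluid.io/v1alpha1
kind: AlluxioRuntime
metadata:
  name: training-data      # must match the Dataset name
spec:
  replicas: 2
  tieredstore:
    levels:
      - mediumtype: MEM
        path: /dev/shm
        quota: 2Gi         # per-worker cache capacity, illustrative
```

Pods then mount the resulting PersistentVolumeClaim (named after the Dataset) instead of reading the remote storage directly, which is where the bandwidth savings come from.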

  • Backup and disaster recovery: An integrated cloud solution for backup, recovery, and migration provides disaster recovery for both data and applications, significantly improving business continuity.

Use cases

Scenario 1: Build a hybrid cloud


Description

  • Self-managed clusters in a data center: Establish a network connection between your data center and the cloud to enable resource sharing.

  • On-demand scaling for resources and applications: During peak traffic, rapidly scale out resources in the cloud and direct a portion of your business traffic to them.

Scenario 2: Create a consistent experience with cloud services


Description

  • Consistent operational experience: Extend the unified operational capabilities of Container Service for Kubernetes (ACK) to clusters in your data center and on third-party public clouds.

  • Enhanced observability: Collect logs, metrics, and events to achieve the same level of observability as your cloud-native clusters.

  • Improved security capabilities: Enable auditing, security inspections, node risk detection, and policy governance with a single click.

  • Microservice governance: Microservices Engine (MSE) and Service Mesh (ASM) provide advanced microservice governance capabilities.

Scenario 3: Implement disaster recovery


Description

  • Application migration to the cloud: Perform consistent, cross-region application backups with second-level recovery times to accelerate your cloud migration.

  • Data disaster recovery: Perform cross-region and cross-data center backups for stateful applications, with support for custom backup and recovery policies. Continuously back up data to the cloud to enhance protection against threats like ransomware.

  • Business disaster recovery: Enable scheduled and geo-redundant backups for both applications and data across different regions and data centers.

  • Active-active geo-redundancy: Build Kubernetes-native, active-active geo-redundancy systems to ensure high availability.

Scenario 4: Empower AI and big data


Description

  • AI algorithm development: Gain comprehensive management of tasks, quotas, and observability.

  • AI training: Improve training efficiency with topology-aware scheduling and a rich set of task scheduling policies. A storage-compute decoupled architecture significantly accelerates distributed data training. You can also schedule jobs across clusters, with optimized distribution for workloads like TensorFlow, Spark, and CronJobs.

  • AI inference: Increase resource utilization by approximately 300% with GPU sharing. Scale heterogeneous resources elastically with unified scheduling management across cloud and on-premises environments.
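GPU sharing for inference is typically requested through an extended resource on the pod spec rather than a whole GPU. A minimal sketch, assuming the `aliyun.com/gpu-mem` extended resource name exposed by ACK's GPU sharing component (measured in GiB); treat the resource name and image as assumptions that may vary by component version:

```yaml
# Hypothetical inference pod requesting a 4 GiB slice of a shared GPU.
apiVersion: v1
kind: Pod
metadata:
  name: inference-server
spec:
  containers:
    - name: server
      image: example-registry/inference:latest   # hypothetical image
      resources:
        limits:
          aliyun.com/gpu-mem: 4   # GiB of GPU memory, not a whole GPU
```

Because several such pods can share one physical GPU, utilization rises roughly in proportion to how many memory slices fit on each card.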

  • Intelligent CPU scheduling: Provides NUMA-aware intelligent CPU scheduling for bare metal servers.