This page answers common questions about Fleet management in Distributed Cloud Container Platform for Kubernetes (ACK One).
Configuration and quotas
Does Fleet management support multiple Fleet instances?
Yes. The default Fleet instance quota provided by Distributed Cloud Container Platform for Kubernetes (ACK One) is 1. To create additional Fleet instances, go to the Quota Center console and request a quota increase.
Connectivity
What are the connectivity requirements for clusters associated with a Fleet instance?
Two-way connectivity between the Fleet instance and each associated cluster is required:
The virtual private cloud (VPC) of the Fleet instance must be able to reach the API server of the associated cluster.
The VPC of the associated cluster must be able to reach the API server of the Fleet instance.
If the Fleet instance and the associated cluster are in different VPCs, connect the VPCs using Cloud Enterprise Network (CEN). Alternatively, enable the public endpoints on both the Fleet instance and the associated cluster to allow communication over the Internet.
For more information about CEN, see Cloud Enterprise Network.
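To verify both directions, you can probe each API server from a host (for example, an ECS instance) in the peer VPC. This is a sketch: the endpoint addresses and the port are placeholders, not values from this page. Any HTTP response (even 401 or 403) shows the endpoint is reachable over the network; a timeout indicates a connectivity problem.

```shell
# From a host in the Fleet instance's VPC: check that the associated
# cluster's API server answers (placeholder endpoint and port).
curl -k -m 5 https://<CLUSTER_API_SERVER>:6443/healthz

# From a host in the associated cluster's VPC: check that the Fleet
# instance's API server answers (placeholder endpoint and port).
curl -k -m 5 https://<FLEET_API_SERVER>:6443/healthz
```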
Tools
Can I use kubectl to manage Fleet instances?
Yes. Fleet instances are fully compatible with the Kubernetes API server, so any standard Kubernetes tooling works with them:
kubectl — Distribute Kubernetes-native resources directly from a Fleet instance.
Helm — Package an application with Helm and deploy it to a Fleet instance using the Helm CLI.
AMC — ACK One's command-line tool that runs as a kubectl plugin. Use AMC to manage applications and jobs in multi-cluster management scenarios. See Use AMC.
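As a sketch of the tooling above, assuming you have downloaded the Fleet instance's kubeconfig from the ACK One console (the file path, manifest name, chart path, and release name below are illustrative placeholders):

```shell
# Point kubectl and Helm at the Fleet instance (placeholder path).
export KUBECONFIG=$HOME/.kube/fleet-config

# Distribute a Kubernetes-native resource directly with kubectl.
kubectl apply -f deployment.yaml

# Or package the same application as a chart and deploy it with Helm.
helm install my-app ./my-app-chart
```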
Troubleshooting
Error: secrets "sec-c58faedb8a7864d3****-public" not found
This error occurs when you associate a cluster with a Fleet instance, and it indicates that the Fleet instance and the cluster cannot reach each other.
Check network connectivity between the Fleet instance and the cluster. If they are in different VPCs, either connect the VPCs using CEN or enable public endpoints on both sides to allow Internet-based communication.
A namespace is stuck in Terminating after removing an associated cluster
When you remove an associated cluster, some API services in that cluster may already be unavailable. This can leave the ack-multiple-clusters and ack-cluster-gateway namespaces in a Terminating state, blocking re-association.
To force-delete the stuck namespace:
1. Export the namespace manifest. Replace <YOUR_NAMESPACE> with the actual namespace name.
   kubectl get namespace <YOUR_NAMESPACE> -o json > <YOUR_NAMESPACE>.json
2. Open <YOUR_NAMESPACE>.json and delete the array in the finalizers field under spec.
3. Apply the updated manifest to remove the finalizer. Replace <YOUR_NAMESPACE> with the actual namespace name.
   kubectl replace --raw "/api/v1/namespaces/<YOUR_NAMESPACE>/finalize" -f ./<YOUR_NAMESPACE>.json
4. Confirm that the namespace is deleted.
   kubectl get ns
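If jq is available on your machine, the export-edit-replace steps above can be collapsed into a single pipeline. This is a sketch, not part of the official procedure; replace <YOUR_NAMESPACE> with the actual namespace name as before.

```shell
# Fetch the namespace, drop the spec.finalizers field with jq, and pipe
# the result straight to the namespace's finalize subresource.
kubectl get namespace <YOUR_NAMESPACE> -o json \
  | jq 'del(.spec.finalizers)' \
  | kubectl replace --raw "/api/v1/namespaces/<YOUR_NAMESPACE>/finalize" -f -
```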
How do I reassociate a sub-cluster after the Fleet instance is accidentally deleted?
If the Fleet instance is accidentally deleted, or the Server Load Balancer (SLB) instance that exposes the Fleet API server is cleaned up, the associated sub-clusters are not properly disassociated. To reassociate them with a newly created Fleet instance:
1. Log on to the ACK One console. In the left-side navigation pane, choose Fleet > Associated Clusters.
2. On the Associate Cluster page, click the expand button next to the Fleet name, select the new Fleet instance, and click Add Associated Cluster.
3. In the Add Associated Cluster panel, select the cluster and click OK.
4. Find the same cluster in the list and click Disassociate.
5. Click Add Associated Cluster again, select the cluster, and click OK. The cluster is now correctly associated with the new Fleet instance.