Namespaces
In an ACK cluster, namespaces divide cluster resources into isolated groups. When multiple teams share a cluster, create namespaces to classify resources by team or environment, then use resource quotas to limit what each namespace can consume.
By default, pods in the running state can consume CPU and memory resources on nodes without limit. Pods in a single namespace can exhaust the entire cluster's resources. Configure resource quotas per namespace — including CPU, memory, and pod count — to prevent this.
Example allocations: In a cluster with 32 GiB RAM and 16 CPU cores, allocate 20 GiB and 10 cores to team A, 10 GiB and 4 cores to team B, and keep 2 GiB and 2 cores in reserve. Alternatively, give the testing namespace 1 core and 1 GiB RAM, and let production use whatever remains.
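As a sketch, the team A allocation above could be enforced with a ResourceQuota object. The namespace name team-a and the pod count of 50 are illustrative assumptions, not values from this guide:

```shell
# Sketch: enforce the team A allocation (10 CPU cores, 20 GiB memory).
# The namespace name "team-a" and the pod count of 50 are illustrative.
kubectl create namespace team-a
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    cpu: "10"        # total CPU requests allowed in the namespace
    memory: 20Gi     # total memory requests allowed in the namespace
    pods: "50"       # maximum number of pods in the namespace
EOF
kubectl describe resourcequota team-a-quota -n team-a
```

With this quota in place, the API server rejects new pods in team-a once cumulative requests would exceed 10 cores or 20 GiB.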
Create a namespace
Use the ACK console
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, click Namespaces and Quotas.
Click Create. In the dialog box that appears, configure the name and label of the namespace and click OK.
Use kubectl
Create a namespace.
kubectl create namespace test

Verify that the namespace is created.

kubectl get namespaces

Expected output:

NAME              STATUS   AGE
default           Active   46h
kube-node-lease   Active   46h
kube-public       Active   46h
kube-system       Active   46h
test              Active   9s

The namespace test is now active in the list.
Configure resource quotas and limits
After you configure CPU and memory quotas for a namespace, all pods created in that namespace must specify CPU and memory limits. Configure default resource limits for containers in the namespace to avoid pod creation failures.
Configure via the ACK console
On the Namespace page, click Resource Quotas and Limits in the Actions column of the namespace that you want to manage.
In the Resource Quotas and Limits dialog box, configure the resource quotas and the default resource limits. For configuration details, see Resource Quotas and Configure Default Memory Requests and Limits for a Namespace.
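The default resource limits configured in this dialog box correspond to the LimitRange API object. As a rough kubectl equivalent (the test namespace and all values are illustrative assumptions):

```shell
# Sketch: default container requests/limits for the "test" namespace,
# so pods that omit resource limits still pass quota admission checks.
# All values are illustrative.
kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: test
spec:
  limits:
  - type: Container
    default:          # applied as a container's limit when none is set
      cpu: 500m
      memory: 512Mi
    defaultRequest:   # applied as a container's request when none is set
      cpu: 250m
      memory: 256Mi
EOF
```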
Best practices for modifying quotas
Modify quotas during off-peak hours, and check the resource usage of existing workloads before making changes.
Reserve enough resources for Horizontal Pod Autoscaler (HPA) auto scaling.
Monitor the cluster for at least 30 minutes after adjusting quotas to confirm that HPA runs as expected.
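The checks above can be run from the command line. In this sketch, the test namespace is a placeholder, and kubectl top requires a metrics component such as metrics-server to be installed in the cluster:

```shell
# Compare cumulative consumption against the namespace quota.
kubectl describe resourcequota -n test
# Show live per-pod usage (requires metrics-server or an equivalent add-on).
kubectl top pods -n test
# Check HPA status to confirm autoscaling still has headroom after the change.
kubectl get hpa -n test
```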
Delete a namespace
Built-in namespaces (default, kube-system, kube-public, kube-node-lease) cannot be deleted. Clear all resources in the namespace before deleting it. If the namespace stays in the Terminating status for a long time, see What do I do if the namespace is stuck in Terminating status?.
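One way to check whether a namespace is empty before deleting it (the test namespace is a placeholder; note that kubectl get all covers only common resource types):

```shell
# List common resources remaining in the namespace.
kubectl get all -n test
# Enumerate every namespaced resource type, then list any remaining objects.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --ignore-not-found -n test
```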
Use the ACK console
On the Namespace page, find the namespace that you want to delete and click > Delete in the Actions column. To disable deletion protection first, click > Disable Deletion Protection in the Actions column.
In the Confirm dialog box, confirm the associated resources in the namespace and click Confirm Deletion.
Use kubectl
kubectl delete namespace test

FAQ
What do I do if the namespace is stuck in Terminating status?
When you delete a namespace that still contains resources, the namespace stays in the Terminating status indefinitely. The fix is to clear the finalizers array in the namespace spec; Kubernetes then removes the namespace automatically. Note that this may leave orphaned resources in the cluster, so clean up the workloads in the namespace before proceeding.
Use the following steps to force-delete the namespace (the istio-system namespace is used as an example):
Open a terminal and start a reverse proxy for your cluster.
kubectl proxy

Expected output:

Starting to serve on 127.0.0.1:8001

Open a second terminal. Export a token and verify connectivity to the API server.
export TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
curl http://localhost:8001/api/v1/namespaces --header "Authorization: Bearer $TOKEN" --insecure

Export the namespace configuration to a JSON file.
kubectl get namespace istio-system -o json > istio-system.json

Open istio-system.json and clear the finalizers array in the spec section.

"spec": {
    "finalizers": []
}

Apply the updated configuration to remove the finalizers.
curl -X PUT --data-binary @istio-system.json http://localhost:8001/api/v1/namespaces/istio-system/finalize -H "Content-Type: application/json" --header "Authorization: Bearer $TOKEN" --insecure
What's next
For cloud service and cluster configuration limits, individual cluster capacity limits, cluster quotas, and dependent cloud service quotas, see Quotas and limits.
For quota configuration on specific API object types, see Configure Quotas for API Objects.
To implement fine-grained permission management for clusters or namespaces using RAM users, RAM roles, and role-based access control (RBAC), see Use RAM to authorize access to clusters and cloud resources and Use RBAC to manage the operation permissions on resources in a cluster.