This topic shows how to configure inbound and outbound bandwidth limits for pods in a cluster that uses the Flannel network plugin. Use the standard Kubernetes annotations `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` to set these limits. Bandwidth limiting is not enabled by default in Flannel clusters.
Prerequisites
An existing ACK Managed Cluster or ACK Dedicated Cluster that uses the Flannel network plugin. For instructions, see Create an ACK managed cluster or Create an ACK dedicated cluster (discontinued). To learn how to install the Flannel network plugin when you create a cluster, see Work with Flannel.
Your cluster must have Flannel network plugin v0.15.1.4-e02c8f12-aliyun or later. Check the plugin version on the Components page. For information about how to access the Components page, see Manage components.

Note: If you cannot upgrade the Flannel component to the required version, first upgrade the Kubernetes version of your cluster. For instructions, see Manually upgrade a cluster.
Configure bandwidth limits
To enable bandwidth limiting, you must modify the Flannel configuration file. If you have not yet enabled it, see Enable the bandwidth limit feature for an ACK cluster.
The Flannel network plugin lets you control the network bandwidth of a pod. Use the following pod annotations to specify the maximum inbound and outbound bandwidth:
| Annotation | Description |
| --- | --- |
| `kubernetes.io/ingress-bandwidth` | Sets the maximum inbound bandwidth for the pod. The example value 10M limits inbound bandwidth to 10 Mbps. |
| `kubernetes.io/egress-bandwidth` | Sets the maximum outbound bandwidth for the pod. The example value 30M limits outbound bandwidth to 30 Mbps. |
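The annotation values are Kubernetes quantity strings interpreted as bits per second, so `10M` means 10,000,000 bit/s (10 Mbps). The following is a minimal Python sketch of that interpretation — the real CNI bandwidth plugin parses these values with Kubernetes' `resource.Quantity`, so this simplified version handles only the common decimal and binary suffixes:

```python
# Minimal sketch of how bandwidth annotation values can be interpreted.
# The real CNI bandwidth plugin uses Kubernetes' resource.Quantity parsing;
# this simplified version handles only plain decimal (K, M, G) and binary
# (Ki, Mi, Gi) suffixes, returning bits per second.

SUFFIXES = {
    "K": 10**3, "M": 10**6, "G": 10**9,
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,
}

def parse_bandwidth(value: str) -> int:
    """Convert an annotation value such as '10M' to bits per second."""
    # Try two-character suffixes ('Mi') before one-character ones ('M').
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * SUFFIXES[suffix]
    return int(value)  # no suffix: the value is already in bits per second

print(parse_bandwidth("10M"))  # inbound limit from the example: 10000000 bit/s
print(parse_bandwidth("30M"))  # outbound limit from the example: 30000000 bit/s
```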
Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo
  name: demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      annotations:
        # Limit the maximum inbound bandwidth of the pod to 10 Mbps.
        kubernetes.io/ingress-bandwidth: 10M
        # Limit the maximum outbound bandwidth of the pod to 30 Mbps.
        kubernetes.io/egress-bandwidth: 30M
      labels:
        app: demo
    spec:
      containers:
      # The rest of the configuration is omitted.
```

Enable the bandwidth limit feature for an ACK cluster
Run the following command to open the ConfigMap for the Flannel component for editing:

```shell
kubectl edit cm -n kube-system kube-flannel-cfg
```

Under `data.cni-conf.json`, add the `bandwidth` configuration at the end of the `plugins` array. Then, save and exit the file.

Note: Ensure that the content is valid JSON. Use only single-byte characters and verify that there are no missing or extra commas.
```yaml
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cb0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "isDefaultGateway": true,
            "hairpinMode": true
          },
          "dataDir": "/var/run/cni/flannel",
          "ipam": {
            "type": "host-local",
            "dataDir": "/var/run/cni/networks"
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          },
          "externalSetMarkChain": "KUBE-MARK-MASQ"
        },
        {
          "type": "bandwidth",
          "capabilities": {
            "bandwidth": true
          }
        }
      ]
    }
```
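Because a single missing comma in `cni-conf.json` breaks pod networking, it can help to sanity-check the edit programmatically instead of by hand. The following is a hypothetical helper, not an official ACK tool: it parses the JSON (failing loudly if it is malformed) and appends the `bandwidth` plugin entry only if it is not already present.

```python
import json

# Sketch of a sanity check for the cni-conf.json edit described above.
# ensure_bandwidth_plugin is a hypothetical helper, not an ACK or Flannel API.

def ensure_bandwidth_plugin(cni_conf: str) -> str:
    conf = json.loads(cni_conf)  # raises ValueError if the JSON is malformed
    plugins = conf["plugins"]
    if not any(p.get("type") == "bandwidth" for p in plugins):
        # Same entry as in the ConfigMap example above.
        plugins.append({"type": "bandwidth", "capabilities": {"bandwidth": True}})
    return json.dumps(conf, indent=2)

# Abbreviated example input (the flannel delegate details are omitted).
original = """
{
  "name": "cb0",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "flannel", "delegate": {"isDefaultGateway": true, "hairpinMode": true}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
"""

updated = json.loads(ensure_bandwidth_plugin(original))
print([p["type"] for p in updated["plugins"]])  # ['flannel', 'portmap', 'bandwidth']
```

Because the helper checks for an existing `bandwidth` entry first, running it twice does not duplicate the plugin.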
Run the following command to delete all Flannel pods. The system automatically recreates them, and the new configuration takes effect.

This process does not affect running services.

```shell
kubectl -n kube-system delete pod -l app=flannel
```

Run the following command to ensure that all Flannel pods are in the Running state:

```shell
kubectl -n kube-system get pod -o wide -l app=flannel
```

The following output indicates that the Flannel pods are running correctly.
```
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE                        NOMINATED NODE   READINESS GATES
kube-flannel-ds-h45zj   1/1     Running   0          67s   192.XX.XX.118   cn-hangzhou.192.XX.XX.118   <none>           <none>
kube-flannel-ds-mvfcw   1/1     Running   0          67s   192.XX.XX.119   cn-hangzhou.192.XX.XX.119   <none>           <none>
```

Run the following command to check the logs of each Flannel pod and ensure that there are no errors:

```shell
kubectl -n kube-system logs <pod_name>
```

The following output indicates that the Flannel pods are running correctly.

```
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0925 07:20:53.794715       1 main.go:219] CLI flags config: {etcdEndpoints:http://127.XX.XX.1:4001,http://127.XX.XX.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: help:false version:false autoDetectIPv4:false autoDetectIPv6:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env subnetDir: publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 charonExecutablePath: charonViciUri: iptablesResyncSeconds:5 iptablesForwardRules:true ipforwardResyncSeconds:600 netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:false}
W0925 07:20:53.794782       1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0925 07:20:53.897149       1 kube.go:121] Node controller skips sync
I0925 07:20:53.897176       1 main.go:239] Created subnet manager: Kubernetes Subnet Manager - cn-hangzhou.192.XX.XX.118
```
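The Running-state check above can also be scripted, for example as part of a post-upgrade health check. The following is a minimal sketch that inspects the JSON output of `kubectl -n kube-system get pod -l app=flannel -o json`; the sample data below is abbreviated, illustrative output, not captured from a real cluster.

```python
import json

# Sketch: verify that every Flannel pod is Running, given the JSON output of
# `kubectl -n kube-system get pod -l app=flannel -o json`.
# The sample pod list below is illustrative, not real cluster output.

sample = json.dumps({
    "items": [
        {"metadata": {"name": "kube-flannel-ds-h45zj"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "kube-flannel-ds-mvfcw"}, "status": {"phase": "Running"}},
    ]
})

def all_running(pod_list_json: str) -> bool:
    """Return True only if the list is non-empty and every pod phase is Running."""
    pods = json.loads(pod_list_json)["items"]
    return bool(pods) and all(p["status"]["phase"] == "Running" for p in pods)

print(all_running(sample))  # True
```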