eci-profile allows you to configure elastic container instances within a cluster and orchestrate pods based on selectors. This topic describes how to deploy and use the eci-profile component.
Features of eci-profile
eci-profile can filter pods by using the labels of pods and namespaces and implement the following features:
Add annotations and labels.
Execute scheduling policies.
eci-profile can execute the following scheduling policies.
Policy | Description |
fair | This policy specifies fair scheduling. kube-scheduler determines whether to schedule a pod to a real node or a VNode. |
normalNodePrefer | Pods are preferentially scheduled to real nodes. If real nodes are insufficient, pods can be scheduled to VNodes. |
virtualNodeOnly | Pods are scheduled only to VNodes. |
In this topic, eci-profile uses selector CRDs (custom resource definitions) to automatically schedule pods. If you have deployed the legacy eci-profile that uses a ConfigMap to schedule pods, you can continue to use it. However, we recommend that you update your eci-profile from the ConfigMap mode to the selector CRD mode, because new features will no longer be added to the ConfigMap mode. For more information, see Update eci-profile.
Deploy eci-profile
Use VNodectl to deploy eci-profile
If you have installed and configured the VNodectl tool, you can run the following commands to deploy eci-profile.
Deploy eci-profile.
vnode addon enable eci-profile --kubeconfig /path/to/kubeconfig
View the deployment status of eci-profile.
vnode addon list
The following command output is returned. The status of eci-profile is enabled.
|----------------|------------|------------|-------------------------------------------------|
| ADDON NAME     | STATUS     | MAINTAINER | REPOSITORY                                      |
|----------------|------------|------------|-------------------------------------------------|
| eci-profile    | enabled ✅ | ECI Group  | https://github.com/aliyuneci/eci-profile.git    |
| vnode-approver | enabled ✅ | ECI Group  | https://github.com/aliyuneci/vnode-approver.git |
|----------------|------------|------------|-------------------------------------------------|
Manually deploy eci-profile
Create a YAML file named eci-profile.yaml and copy the following content to the file.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eci-profile
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - resourcequotas
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - patch
- apiGroups:
  - "admissionregistration.k8s.io"
  resources:
  - mutatingwebhookconfigurations
  verbs:
  - get
  - patch
  - create
  - delete
- apiGroups:
  - "eci.aliyun.com"
  resources:
  - selectors
  verbs:
  - get
  - watch
  - list
---
apiVersion: apiextensions.k8s.io/v1beta1
#apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: selectors.eci.aliyun.com
spec:
  group: eci.aliyun.com
  version: v1beta1
  names:
    kind: Selector
    plural: selectors
    shortNames:
    - selectors
    categories:
    - all
  scope: Cluster
  validation:
    openAPIV3Schema:
      type: object
      required:
      - metadata
      - spec
      properties:
        apiVersion:
          type: string
        kind:
          type: string
        metadata:
          type: object
        spec:
          type: object
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eci-profile
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eci-profile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eci-profile
subjects:
- kind: ServiceAccount
  name: eci-profile
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    role: eci-profile
  name: eci-profile
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: eci-profile
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eci-profile
  namespace: kube-system
  labels:
    app: eci-profile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eci-profile
  template:
    metadata:
      labels:
        app: eci-profile
    spec:
      serviceAccount: eci-profile
      containers:
      - name: eci-profile
        image: registry.cn-beijing.aliyuncs.com/eci-release/eci-profile:2.0.0-477875b-aliyun
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 2
            memory: 4Gi
          limits:
            cpu: 4
            memory: 8Gi
        env:
        - name: KUBERNETES_MASTER
          value: https://kubernetes:443
Deploy eci-profile.
kubectl create -f eci-profile.yaml
View the deployment result.
kubectl -n kube-system get pods
The following command output is returned. The pod corresponding to eci-profile is in the Running state.
NAME                           READY   STATUS    RESTARTS   AGE
eci-profile-6454756cb8-8xlz8   1/1     Running   0          76s
Configuration description and sample configurations
After you deploy eci-profile, you can create selectors to configure a pod scheduling policy and the annotations and labels that you want to add. The following sample YAML file shows a selector:
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
name: test-fair
spec:
objectLabels:
matchLabels:
app: nginx
namespaceLabels:
matchLabels:
app: test
effect:
annotations:
k8s.aliyun.com/eci-auto-imc: "true"
labels:
eci-schedulable: "true"
policy:
fair: {}
priority: 3
The following table describes the parameters in the spec section:
Parameter | Description |
objectLabels.matchLabels | The pod labels to match. |
namespaceLabels.matchLabels | The namespace labels to match. |
effect.annotations | The annotations that you want to add. |
effect.labels | The labels that you want to add. |
policy | The scheduling policy. The following policies are supported: fair, normalNodePrefer, and virtualNodeOnly. |
priority | The priority of the selector. If you configure multiple conflicting selectors, the selector that is assigned a higher priority takes effect. A larger value of the parameter indicates a higher priority. |
You must specify at least one of the objectLabels and namespaceLabels parameters. If you specify both parameters, a pod must match both for the selector to take effect.
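For example, the following sketch shows how the priority parameter resolves a conflict. The selector names are hypothetical; both selectors match pods that carry the app: nginx label, so only the one with the larger priority value is applied:

```yaml
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
  name: prefer-normal        # hypothetical name
spec:
  objectLabels:
    matchLabels:
      app: nginx
  policy:
    normalNodePrefer: {}
  priority: 1
---
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
  name: vnode-only           # hypothetical name
spec:
  objectLabels:
    matchLabels:
      app: nginx
  policy:
    virtualNodeOnly: {}
  priority: 5                # larger value wins, so virtualNodeOnly applies
```

Because both selectors match the same pods, the vnode-only selector (priority 5) takes effect and the pods are scheduled only to VNodes.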
Example 1: set the scheduling policy to fair
Create the following selector. By using the selector, eci-profile adds VNode tolerations to the pods that have the app: nginx label. kube-scheduler then determines whether to schedule the pods to real nodes or VNodes. eci-profile also adds the annotations and labels that are defined in the effect section to the pods.
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
name: test-fair
spec:
objectLabels:
matchLabels:
app: nginx
effect:
annotations:
k8s.aliyun.com/eci-auto-imc: "true"
labels:
eci-schedulable: "true"
policy:
fair: {}
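As a quick check, the following minimal pod matches the selector above. The pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # hypothetical name
  labels:
    app: nginx              # matches objectLabels.matchLabels in the selector
spec:
  containers:
  - name: nginx
    image: nginx:latest
```

After admission, the pod carries the k8s.aliyun.com/eci-auto-imc: "true" annotation and the eci-schedulable: "true" label from the effect section, and kube-scheduler is free to place it on a real node or a VNode.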
Example 2: set the scheduling policy to normalNodePrefer
Create the following selector. By using the selector, eci-profile preferentially schedules the pods that have the app: nginx label to real nodes, and schedules them to VNodes only when real nodes are insufficient. eci-profile also adds the annotations and labels that are defined in the effect section to the pods.
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
name: test-normal-node-prefer
spec:
objectLabels:
matchLabels:
app: nginx
effect:
annotations:
k8s.aliyun.com/eci-auto-imc: "true"
labels:
eci-schedulable: "true"
policy:
normalNodePrefer: {}
Example 3: set the scheduling policy to virtualNodeOnly
Create the following selector. By using the selector, eci-profile adds VNode tolerations and VNode nodeSelectors to the pods that have the app: nginx label. eci-profile also adds the annotations and labels that are defined in the effect section to the pods.
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
name: test-virtual-node-only
spec:
objectLabels:
matchLabels:
app: nginx
effect:
annotations:
k8s.aliyun.com/eci-auto-imc: "true"
labels:
eci-schedulable: "true"
policy:
virtualNodeOnly: {}
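The three examples above all match pods by pod labels. As described in the parameter table, a selector can instead match by namespace labels alone. The following sketch is hypothetical (the selector name and the vnode: "true" namespace label are illustrative); it schedules every pod in matching namespaces only to VNodes:

```yaml
apiVersion: eci.aliyun.com/v1beta1
kind: Selector
metadata:
  name: test-namespace-only   # hypothetical name
spec:
  namespaceLabels:
    matchLabels:
      vnode: "true"           # hypothetical namespace label
  policy:
    virtualNodeOnly: {}
```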
Update eci-profile
If you have deployed the legacy eci-profile that uses a ConfigMap to schedule pods, we recommend that you update it to the eci-profile that uses selector CRDs to schedule pods. To update eci-profile, perform the following operations:
Record the selector configurations that the legacy eci-profile stores in the kube-system namespace.
Delete the legacy eci-profile.
Deploy the new eci-profile.
Create new selectors based on the original selectors.
If you have any questions or encounter problems when you use eci-profile, you can join the DingTalk group with the ID 44666389 to obtain assistance.