FAQ related to Virtual Kubelet deployment in an on-premises Kubernetes cluster

Last Updated: Nov 11, 2021

To use elastic container instances together with an on-premises Kubernetes cluster, you must deploy Virtual Kubelet in the Kubernetes cluster. Then, you can connect elastic container instances to the cluster through Virtual Kubelet. This topic describes frequently asked questions about Virtual Kubelet deployment in an on-premises Kubernetes cluster.

How do cloud services access the IP addresses of on-premises pods?

If you use leased lines of Express Connect to connect your cloud and on-premises networks, the cloud and on-premises services can learn routing rules from each other by using Border Gateway Protocol (BGP). The on-premises gateway can then advertise the pod IP addresses to the cloud over BGP, so that cloud services can access the IP addresses of the on-premises pods. For more information, see Configure BGP.

How do on-premises services access the IP addresses of cloud pods?

If you use leased lines of Express Connect to connect your cloud and on-premises networks, the cloud and on-premises services can learn routing rules from each other by using BGP. You can deploy a cloud controller manager (CCM) to automatically synchronize pod IP addresses on the cloud to the virtual private cloud (VPC) route table. For more information about a CCM, see Cloud Controller Manager.

After you deploy a CCM in a self-managed or on-premises cluster, the routes of the Kubernetes pods are synchronized to the VPC route table. When you deploy the CCM, take note of the following items:

  • Change the value of providerID of the Kubernetes cluster nodes to the <region-id>.<ecs-id> format, for example, cn-shanghai.i-ankb8zjh2nzchf*******.

  • Ensure that all pod IP addresses on each cluster node fall within the pod CIDR block of that node. For example, you can configure Calico IPAM to use the host-local type, which obtains the pod CIDR of each node from the Kubernetes API. This ensures that all pod IP addresses on a node are allocated from the pod CIDR block of the node. A Calico configuration sketch follows the node spec example below.

    You can check the pod CIDR blocks in the spec data of the nodes.

    spec:
      podCIDR: 172.23.XX.0/26
      podCIDRs:
      - 172.23.XX.0/26
      providerID: cn-shanghai.i-ankb8zjh2nzchfxxxxxxx
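
    The following is a minimal sketch of the Calico CNI IPAM section that uses host-local IPAM with the standard Calico usePodCidr option. The exact file name and the surrounding fields depend on how Calico is installed in your cluster:

    "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
    }

    With this setting, the host-local IPAM plugin allocates pod IP addresses from the podCIDR value that the Kubernetes API assigns to each node.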

What do I do if an internal network domain name cannot be resolved?

Problem description

Cloud and on-premises services cannot invoke each other because the internal network domain names of the services cannot be resolved. Failures to resolve internal network domain names include the following:

  • Cloud services cannot resolve the internal network domain names of on-premises networks.

  • On-premises services cannot resolve cloud PrivateZone domain names.

Solution

On-premises networks and Alibaba Cloud VPCs are deployed in different network environments. If cloud and on-premises services can communicate with each other only after the internal network domain names are resolved by Alibaba Cloud DNS, you can configure Alibaba Cloud DNS PrivateZone to resolve the internal network domain names. For more information, see Use Alibaba Cloud DNS PrivateZone and VPN Gateway to allow ECS instances in a VPC to access an on-premises DNS.

Why can't on-premises services access cloud services?

Problem description

On-premises services cannot access Alibaba Cloud services such as ApsaraDB RDS, Object Storage Service (OSS), and Log Service.

Solutions

You can use one of the following solutions to solve the problem. We recommend that you use Solution 1.

  • Solution 1: Configure the domain name of the cloud service on the cloud. Then, the virtual border router (VBR) advertises the route to the on-premises network over BGP. For more information, see Access cloud services.

  • Solution 2: Add a static route to the on-premises network that routes 100.64.0.0/10 to the leased line.

What do I do if I cannot pull images from a self-managed container image repository?

Problem description

When I try to pull images from a self-managed container image repository, the following error is reported:

(Screenshot of the error message)

Solution

This problem occurs because the image repository uses a certificate that you issued yourself. Self-issued certificates cannot pass certificate-based authentication, so the image pull fails. When you create a pod, you can add the following annotation to skip certificate-based authentication:

"k8s.aliyun.com/insecure-registry": "<host-name>"

For example, if the endpoint of the NGINX image in the private image repository is test.example.com/test/nginx:alpine, you can add the "k8s.aliyun.com/insecure-registry": "test.example.com" annotation to skip certificate-based authentication.
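
For reference, the following is a minimal sketch of a pod that carries this annotation. The pod and container names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-test                 # hypothetical pod name
  annotations:
    # Skip certificate-based authentication for this registry host
    k8s.aliyun.com/insecure-registry: "test.example.com"
spec:
  containers:
  - name: nginx
    image: test.example.com/test/nginx:alpine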

How do I schedule pods to Virtual Kubelet?

You can select one of the following methods to schedule pods to Virtual Kubelet based on your business requirements. Pods that are scheduled to Virtual Kubelet run on elastic container instances.

  • Manually schedule pods

    You can manually schedule pods to run on elastic container instances by configuring namespace labels, pod labels, or the elastic scheduling feature of Elastic Container Instance. For more information, see Schedule pods to elastic container instances. For a minimal example, see the sketch after this list.

  • Automatically schedule pods

    Elastic Container Instance Profile provides the Elastic Container Instance Scheduler feature, which implements a scheduling mechanism based on mutating webhooks. You can declare the namespace labels or pod labels to be matched in the Elastic Container Instance Profile. Pods that have the declared labels are automatically scheduled to elastic container instances. For more information, see Configure Elastic Container Instance Profile.
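
The following is a minimal sketch of manual scheduling that pins a pod to the virtual node. The sketch assumes that the virtual node carries the type=virtual-kubelet label that is also used in the anti-affinity example later in this topic and the standard virtual-kubelet.io/provider taint. Your deployment may use different labels and taints:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-eci               # hypothetical pod name
spec:
  containers:
  - name: nginx
    image: nginx:alpine
  nodeSelector:
    type: virtual-kubelet          # assumption: label on the virtual node
  tolerations:
  - key: virtual-kubelet.io/provider   # assumption: taint added by Virtual Kubelet
    operator: Exists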

Why do DaemonSet pods remain in the Pending state after they are scheduled to Virtual Kubelet?

Virtual Kubelet creates a virtual node and does not support DaemonSets. When you create a DaemonSet, you must configure an anti-affinity scheduling policy to prevent DaemonSet pods from being scheduled to Virtual Kubelet. The following sample code shows how to configure an anti-affinity scheduling policy.

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Exclude nodes whose type label is virtual-kubelet
          - key: type
            operator: NotIn
            values:
            - virtual-kubelet

Why does the scheduling fail when I attempt to schedule pods to Virtual Kubelet by configuring pod labels?

This problem occurs when the version of your Kubernetes cluster is earlier than v1.16. To resolve the issue, upgrade your Kubernetes cluster to v1.16 or later, or use one of the other scheduling methods described in this topic.