Install Kubernetes on Alibaba Cloud ECS

Provision three Ubuntu virtual machines on Alibaba Cloud ECS, install a Containerd-based Kubernetes cluster on them, and configure one master node and two worker nodes.

1. Preparation


Register an Alibaba Cloud account on the Alibaba Cloud home console. Note that the access key file can only be exported once, so keep it in a safe place. The key file used in this exercise is Alibaba Cloud-root.

Refer to the configuration below to register and apply for three ECS (Elastic Compute Service) instances:

Host: 2vCPU+4GiB
OS: Ubuntu 20.04 x86_64
Instance type: ecs.sn1.medium
Instance names: cka001, cka002, cka003
Network configuration: both public IPs and private IPs
Maximum network bandwidth: 100Mbps (Peak Value)
Cloud Disk: 40GiB
Payment method: preemptible instance
Open a terminal window locally and access the remote ECS node cka001 with the key file Alibaba Cloud-root.
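For example, assuming the key file is in the current directory and <cka001-public-ip> is a placeholder for the instance's public IP, the login might look like this:

chmod 400 "Alibaba Cloud-root"
ssh -i "Alibaba Cloud-root" root@<cka001-public-ip>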

Create a regular user for installing Kubernetes. In this exercise, create the user vagrant, set the user's primary group to sudo, and add root as a secondary group.
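A minimal sketch of this step (the home directory and shell options are typical choices, not mandated above):

sudo useradd -m -s /bin/bash -g sudo -G root vagrant
sudo passwd vagrant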

Open a new local terminal window and create a key for the user vagrant.
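One way to generate the key pair, assuming the file name used in this exercise (the key type and size are illustrative):

ssh-keygen -t rsa -b 4096 -f "Alibaba Cloud-vagrant" -C "vagrant"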

The above command generates two files; in this exercise they are Alibaba Cloud-vagrant (the private key) and Alibaba Cloud-vagrant.pub (the public key).

Upload the public key file Alibaba Cloud-vagrant.pub to the remote node cka001 through the sftp command.
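A possible sftp session, with <cka001-public-ip> again used as a placeholder:

sftp -i "Alibaba Cloud-root" root@<cka001-public-ip>
sftp> put "Alibaba Cloud-vagrant.pub"
sftp> bye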

Open a new terminal window and log in to the cka001 node with the root key. Copy the key file Alibaba Cloud-vagrant.pub uploaded in the previous step from the /root directory to /home/vagrant/.ssh/. Rename the public key file Alibaba Cloud-vagrant.pub to authorized_keys. Change the owner of the file authorized_keys to vagrant. Change the primary group of the file authorized_keys to sudo.
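A sketch of these steps, run as root on cka001 (the mkdir and chmod commands are typical extras to make sure the .ssh directory exists with safe permissions):

mkdir -p /home/vagrant/.ssh
cp "/root/Alibaba Cloud-vagrant.pub" /home/vagrant/.ssh/authorized_keys
chown vagrant:sudo /home/vagrant/.ssh /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys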

Check the file /etc/ssh/sshd_config and make sure the password login parameter PasswordAuthentication is set to no, so that remote login is possible only with the certificate.

Open a new terminal window, use the user vagrant to log in to the remote node cka001, and verify that the user vagrant can log in to the node cka001 through the certificate created earlier.
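For example:

ssh -i "Alibaba Cloud-vagrant" vagrant@<cka001-public-ip>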

Repeat the above steps, upload the public key file Alibaba Cloud-vagrant.pub to the remote nodes cka002 and cka003 respectively through the sftp command, and complete the same configuration, so that the user vagrant can also log in to the remote nodes cka002 and cka003 through the key file.

So far, the user vagrant can log in to the remote nodes cka001, cka002 and cka003 from the local terminal window through the key file Alibaba Cloud-vagrant.

All steps below are done by user vagrant.

• Initialize ECS nodes

• Configuration file /etc/hosts

Update the file /etc/hosts on all ECS nodes and add the private IPs of the other nodes.
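For example, assuming the (hypothetical) private IPs below, each node's /etc/hosts would contain entries like:

192.168.0.11 cka001
192.168.0.12 cka002
192.168.0.13 cka003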

• Disable the firewall

Disable the firewall on all nodes.
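For example, with ufw:

sudo ufw disable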

• Check firewall status.

sudo ufw status verbose

• Disable swap

Turn off swap on all nodes.

sudo swapoff -a
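The command above only disables swap until the next reboot. To keep swap disabled permanently, the swap entry in /etc/fstab can also be commented out, for example:

sudo sed -i '/ swap / s/^/#/' /etc/fstab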

• Set time zone and region

Set the time zone and locale on all nodes. This configuration is normally completed when the ECS instance is initialized; it can also be set manually, and then checked, with the commands shown below.
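A possible set of commands, assuming the Asia/Shanghai time zone and the en_US.UTF-8 locale:

sudo timedatectl set-timezone Asia/Shanghai
sudo localectl set-locale LANG=en_US.UTF-8
timedatectl
localectl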

• Kernel settings

Execute the command below on all nodes to configure the kernel.

Use the module overlay:

Create the Containerd service configuration file /etc/modules-load.d/containerd.conf , skip this step if it already exists. The purpose of configuring this file is to load the modules overlay and br_netfilter into the kernel.
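A common way to create this file and load both modules immediately:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter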

The Containerd service relies on the overlay module to provide the overlay filesystem (overlayfs) functionality.

The overlay module in Linux provides the ability to create a merged view of two directories, called layers. It is often used to implement union mounts, a way of mounting two or more directories together as if they were one (union-filesystems).

The overlay module is widely used in container technologies, such as Docker, because it allows multiple containers to share a base image while maintaining their own filesystem.

To use the overlay module, two directories are required: a lower directory and an upper directory. Lower directories are usually read-only and contain original files, while upper directories are read-write and contain changes to files. When a file is requested, the overlay module first looks in the upper directory, and if not found, it looks in the lower directory.

Use the module br_netfilter:

br_netfilter is a module in the Linux kernel that provides a mechanism to filter network traffic for bridges. This module allows administrators to configure rules to allow or deny specific network traffic through the bridge.

A bridge is a network device that connects multiple network segments and forwards traffic to enable communication between different network segments. The br_netfilter module can be used to restrict or filter this traffic.

When the br_netfilter module is enabled, it automatically enables a feature called bridge-nf which will apply rules as network traffic passes through the bridge. Administrators can use tools such as iptables to configure these rules. For example, we can allow traffic from one network segment to another, or deny traffic from specific IP addresses or ports.

In Kubernetes, the br_netfilter module is mainly used to enable traffic forwarding and load balancing of Kubernetes services. These services use iptables rules in the Linux kernel to manage traffic, and these rules are implemented through the br_netfilter module.

Specifically, when we create a Service in a Kubernetes cluster, the Service is assigned a virtual IP address that represents it. iptables rules then map this virtual IP address to the IP addresses of one or more backend Pods, so that traffic can be routed to those Pods when needed.

In this process, the br_netfilter module is responsible for monitoring the traffic of the service, and forwarding and load balancing according to the iptables rules. This includes filtering traffic from untrusted sources and restricting access to services.

It should be noted that in order to enable traffic forwarding and load balancing of Kubernetes services, the br_netfilter module must be enabled on all nodes, and the correct iptables rules must be configured.

Due to the critical role of the br_netfilter module, special attention needs to be paid to its configuration and status when upgrading or changing the system.

IP forwarding is also known as routing. In Linux, it is also called kernel IP forwarding because it uses kernel variable net.ipv4.ip_forward to enable or disable IP forwarding feature. The default is ip_forward=0. Therefore, Linux's IP forwarding feature is disabled by default.

Via the sysctl -w net.ipv4.ip_forward=1 command, the changes take effect immediately, but not permanently. After a system restart, the default values will be loaded. To set parameters permanently, the settings need to be written to /etc/sysctl.conf or another configuration file in the /etc/sysctl.d directory:
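For example, the parameters Kubernetes typically requires can be written to a drop-in file (the file name 99-kubernetes-cri.conf is illustrative):

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

Then reload all sysctl configuration files: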

sudo sysctl --system

Verify that the parameters are valid.

sysctl net.ipv4.ip_forward

• Install Containerd

Install Containerd service on all nodes.

Back up the Ubuntu installation source configuration file before installation.

Add the appropriate installation source. Ubuntu 20.04 images on Alibaba Cloud ECS come pre-configured with Alibaba's internal apt mirror, so in this step you only need to verify that the mirror is configured.
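With the source in place, one way to install Containerd is from the Ubuntu package repository:

sudo apt-get update
sudo apt-get install -y containerd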

• Configuring Containerd
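A typical configuration sequence is to generate the default configuration file and switch the runc cgroup driver to systemd, which is commonly recommended on systemd-based hosts (the sed edit assumes the default config layout):

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd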

• Install nerdctl

Install nerdctl on all nodes.

nerdctl is a command-line client for Containerd that exposes Containerd's containerization features, including some newer features that Docker does not have.

The binary installation package can be obtained from the containerd/nerdctl releases page on GitHub.
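A possible installation sequence (the version number 1.4.0 is only an example; pick the latest release from the page above):

wget https://github.com/containerd/nerdctl/releases/download/v1.4.0/nerdctl-1.4.0-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local/bin nerdctl-1.4.0-linux-amd64.tar.gz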

• Verify the nerdctl service.

List the containers present at this stage of the Kubernetes installation.
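Kubernetes places its containers in the k8s.io containerd namespace, so a check like the following can be used:

sudo nerdctl --namespace k8s.io ps -a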

• Install kubeadm

Install kubeadm, kubelet, and kubectl on all nodes.

Install and upgrade the Ubuntu dependency packages apt-transport-https, ca-certificates, and curl.
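For example:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl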

• Install the GPG key for the Kubernetes installation source.

Add the Kubernetes installation source.
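A sketch of these two steps, assuming the Aliyun Kubernetes apt mirror (the upstream Google source can be used in the same way):

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list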

• Install and upgrade Ubuntu system dependencies.

• Check the currently available kubeadm version.
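For example (the apt-mark hold step is an optional but common way to prevent unplanned upgrades):

sudo apt-get update
apt-cache madison kubeadm
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl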

• Configure the master node

2. kubeadm initialization


Configure the control plane (Control Plane) on the virtual machine that acts as the master node.

Check kubeadm's current default configuration parameters.
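The defaults can be printed with:

kubeadm config print init-defaults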

The output will be similar to the following. Save the default configuration; it will be used as a reference later.

Perform a simulated (dry-run) installation, then the formal installation.
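A sketch of both steps, assuming the Flannel-compatible Pod CIDR discussed below (additional flags such as --apiserver-advertise-address may be added as needed):

sudo kubeadm init --dry-run --pod-network-cidr=10.244.0.0/16
sudo kubeadm init --pod-network-cidr=10.244.0.0/16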

The master node is initialized through the command kubeadm init. The following is a description of the main parameters of this command, especially the three choices of network parameters.

--pod-network-cidr specifies the range of IP addresses used by Pods. If this parameter is specified, the control plane automatically assigns a CIDR within this range to each node.

The IP address range 10.244.0.0/16 is the default address range for Flannel network components. If you need to modify the IP address segment of Flannel, you need to specify it here, and keep a consistent IP segment when deploying Flannel.

Kube-proxy is a component in the Kubernetes cluster, which is responsible for providing proxy services for the Service, and is also one of the important components of the Kubernetes network model.

kube-proxy will start a proxy process on each node, and maintain a local cache of Service and Endpoint by monitoring the changes of Service and Endpoint of Kubernetes API Server. When a request arrives at a Service, kube-proxy will generate corresponding iptables rules based on the Service type (ClusterIP, NodePort, LoadBalancer, ExternalName) and port number, and forward the request to the backend Pod that the Service proxies.

iptables is an important network tool in the Linux system. It can set rules for filtering, forwarding, and modifying IP packets, and can implement functions such as firewalls and NAT at the network layer. In a Kubernetes cluster, kube-proxy generates and updates iptables rules to implement forwarding and proxying between Services and Endpoints. Specifically, kube-proxy creates several iptables rule chains for each Service (such as the KUBE-SERVICES, KUBE-NODEPORTS, and KUBE-SVC-XXXXX chains in the nat table), and through these rule chains the request is forwarded to the corresponding Pod or Service.

Therefore, kube-proxy and iptables are two closely related components, and forwarding and proxying between Service and Pod are realized through iptables rules. This implementation has scalability and high availability, and also provides a flexible network model, which can easily implement functions such as service discovery and load balancing.

• kubeconfig file

Configure the kubeconfig file for the current installing user (user vagrant in the current example).

kubectl controls the Kubernetes cluster manager.

For configuration, kubectl looks for a file named config in the $HOME/.kube directory, which is a copy of the file /etc/kubernetes/admin.conf generated by kubeadm init.
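The copy is typically made with the commands that kubeadm init prints at the end of a successful run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config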

We can specify additional kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag. If the KUBECONFIG environment variable does not exist, kubectl will use the default kubeconfig file $HOME/.kube/config.

The context element in the kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool communicates with the cluster using parameters from the current context.

• Configure worker nodes

Use kubeadm token to generate a token and hash value for joining the cluster.
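On the control plane, the full join command (token plus CA certificate hash) can be printed with:

kubeadm token create --print-join-command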

Execute the following command on all worker nodes to add them to the Kubernetes cluster.
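The command has the following general form, where the address, token, and hash are placeholders to be replaced with the values generated in the previous step:

sudo kubeadm join <control-plane-private-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>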

Execute the following command to check the status of all nodes. At this point the status of every node is NotReady. Nothing needs to be done yet; after the network add-on (Calico or Flannel) is installed later, the status of each node will change to Ready.
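On the control plane (or any node with a configured kubeconfig):

kubectl get nodes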

• Install Calico or Flannel

Install Calico or Flannel on the control plane. Choose Calico if you need to configure network policies.

• Install Flannel

Flannel is a simple and easy-to-use way to configure a layer 3 network fabric designed for Kubernetes.

• Deploy Flannel

In kube-flannel.yml we can get Flannel's default network settings, which are the same as the parameter --pod-network-cidr=10.244.0.0/16 we specified when initializing the cluster with kubeadm.

• Create a Flannel service.
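A sketch of the deployment, assuming the manifest from the Flannel GitHub releases page (the URL may change between releases; the file can be inspected before applying it, as noted above):

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl apply -f kube-flannel.yml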

• Install Calico

• Download and install the Calico service.
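A sketch, assuming the manifest published in the Calico documentation (check the Calico docs for the current URL and version):

wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml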

• Check the network status of the cluster.

• Check cluster status

• View the status of Pods.
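For example:

kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide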

• Update the installation

• Bash autocompletion

• Configure Bash autocompletion on each node.
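A common way to do this, assuming the Bash shell used throughout this exercise:

sudo apt-get install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'source <(kubeadm completion bash)' >> ~/.bashrc
source ~/.bashrc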

• Troubleshooting
