Docker and Kubernetes: A Whirlwind Tutorial for Programmers - Alibaba Cloud Developer Community

2018-12-18

Introduction: A few months after Docker was first released, LeanCloud began using it on a large scale in production, and over the past few years the Docker stack has supported our main back-end architecture. This is a Docker and Kubernetes tutorial for programmers. Its goal is to give technically experienced readers a basic understanding of Docker and Kubernetes in as little time as possible, by deploying and rolling back a service hands-on to experience the principles and benefits of a containerized production environment.
This article assumes that readers are developers familiar with the Mac/Linux environment, so basic technical concepts will not be introduced. The command-line examples use a Mac; on Linux, simply adjust for your own distribution and package manager.

A quick look at Docker

First, a quick introduction to Docker. As an example, we will start the Docker daemon locally and run a simple HTTP service in a container. Start with the installation:

$ brew cask install docker
The preceding command uses Homebrew to install Docker for Mac, which includes the Docker daemon and command-line tools. The Docker daemon is installed in /Applications as a Mac app and needs to be started manually. After starting the Docker app, you can confirm the version of the command-line tool in the Terminal:
$ docker --version
Docker version 18.03.1-ce, build 9ee9f40
The Docker version shown above may differ from mine; that is fine as long as it is not too old. Create a separate directory to hold the files required by the example. To keep things simple, the service we will deploy uses Nginx to serve a single HTML file, html/index.html.
$ mkdir docker-demo
$ cd docker-demo
$ mkdir html
$ echo '<h1>Hello Docker!</h1>' > html/index.html
Next, create a new file named Dockerfile in the current directory, which contains the following content:
FROM nginx
COPY html/* /usr/share/nginx/html
Each Dockerfile starts with a FROM instruction. FROM nginx means building the image on top of the image provided by Nginx. Docker searches for and downloads required images from Docker Hub, which plays the same role for Docker images that GitHub plays for code: a service for hosting and sharing images. Downloaded and built images are cached locally. The second line copies the static files into the image's /usr/share/nginx/html directory, which is where Nginx looks for static files. A Dockerfile contains the instructions for building an image; see the Dockerfile reference for more information. Now you can build the image:
$ docker build -t docker-demo:0.1 .
Following the preceding steps, create a new directory for this experiment and run docker build inside it. If you run it in a directory containing many files, such as your home directory or /tmp, Docker sends all files in the current directory to the background build process as the build context. In this command, the name docker-demo can be understood as the application or service name for the image, and 0.1 is its tag. Docker identifies an image by the combination of name and tag. Run the following command to view the image you just created:
$ docker image ls
REPOSITORY    TAG    IMAGE ID       CREATED         SIZE
docker-demo   0.1    efb8ca048d5a   5 minutes ago   109MB
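For reference, a slightly fuller Dockerfile for the same service might look like the following. This is a hypothetical variant, not something the tutorial requires; it only illustrates a few other common instructions:

```dockerfile
# Hypothetical expanded variant of the two-line Dockerfile above.
FROM nginx
# RUN executes a command at build time, e.g. installing extra packages.
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
# COPY copies files from the build context into the image.
COPY html/* /usr/share/nginx/html
# EXPOSE documents the port the service listens on (metadata only;
# it does not publish the port by itself).
EXPOSE 80
```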
Now run the image. Nginx listens on port 80 by default, so we map port 8080 of the host to port 80 of the container:
$ docker run --name docker-demo -d -p 8080:80 docker-demo:0.1
run the following command to view the running container:
$ docker container ps
CONTAINER ID IMAGE ... PORTS NAMES
c495a7ccf1c7 docker-demo:0.1 ... 0.0.0.0:8080->80/tcp docker-demo
If you visit http://localhost:8080 in a browser, you will see the "Hello Docker!" page we just created. In a real production environment, Docker itself is a relatively low-level container engine; in a cluster with many servers, managing tasks and resources by hand this way is impractical, so we need a system such as Kubernetes to orchestrate and schedule tasks. Before moving on, do not forget to clean up the containers used in the experiment:
$ docker container stop docker-demo
$ docker container rm docker-demo

Installing Kubernetes

With Docker introduced, we can finally try Kubernetes. We need to install three things: kubectl, the Kubernetes command-line client; Minikube, a Kubernetes environment that runs locally; and xhyve, the virtualization engine Minikube will use.
$ brew install kubectl
$ brew cask install minikube
$ brew install docker-machine-driver-xhyve
Minikube's default virtualization engine is VirtualBox; xhyve is a lighter and better-performing replacement. It needs to run with root permission, so after installation, change its owner to root:wheel and enable the setuid bit:
$ sudo chown root:wheel /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
$ sudo chmod u+s /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
Then you can start Minikube:
$ minikube start --vm-driver xhyve
You will most likely see a warning that xhyve will be replaced by hyperkit in a future version, with a recommendation to use hyperkit instead. However, when I wrote this tutorial, docker-machine-driver-hyperkit was not yet in Homebrew and had to be compiled and installed manually, so I lazily stuck with xhyve. In the future, you should only need to change xhyve to hyperkit in the installation and startup commands. If starting Minikube for the first time fails or is interrupted and retrying still fails, you can run minikube delete to delete the cluster and start over. When Minikube starts, it automatically configures kubectl to point at the Kubernetes API service provided by Minikube. Run the following command to confirm:
$ kubectl config current-context
minikube

Kubernetes architecture

A typical Kubernetes cluster contains one master and many nodes. The master is the control center of the cluster; nodes provide the CPU, memory, and storage resources. Several processes run on the master, including the user-facing API service, the Controller Manager responsible for maintaining cluster state, and the Scheduler responsible for scheduling tasks. Each node runs a kubelet, which maintains node state and communicates with the master, and a kube-proxy, which implements cluster networking services. As a development and testing environment, Minikube creates a single-node cluster. View it with the following command:
$ kubectl get nodes
NAME STATUS AGE VERSION
minikube Ready 1h v1.10.0

Deploying a single-instance service

First, as with the Docker introduction at the beginning of the article, we will try deploying a simple service. The smallest deployable unit in Kubernetes is a pod, not a Docker container. Kubernetes does not actually depend on Docker: other container engines can be used in its place in a Kubernetes-managed cluster. When used with Docker, a pod can contain one or more Docker containers; except in tightly coupled cases, however, a pod usually has only one container, which makes it easy to scale different services independently. Minikube runs its own Docker engine, so we need to reconfigure the client so that the docker command line communicates with the Docker process inside Minikube:
$ eval $(minikube docker-env)
After running the preceding command, docker image ls shows only the images that come with Minikube; the docker-demo:0.1 image we built earlier is no longer visible. Before proceeding, we need to rebuild our image. While we are at it, let's rename it k8s-demo:0.1:
$ docker build -t k8s-demo:0.1 .
Create a definition file named pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: k8s-demo
spec:
  containers:
    - name: k8s-demo
      image: k8s-demo:0.1
      ports:
        - containerPort: 80
This defines a pod named k8s-demo that uses the k8s-demo:0.1 image we just built. The file also tells Kubernetes that the process in the container listens on port 80. Run it:
$ kubectl create -f pod.yml
pod "k8s-demo" created
kubectl submits the file to the Kubernetes API service, and the Kubernetes master then assigns the pod to a node. Run the following command to view the created pod:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
k8s-demo 1/1 Running 0 5s
Because our image is local and the service is simple, the STATUS is already Running by the time we run kubectl get pods. If you use a remote image (for example, one on Docker Hub), the status may not yet be Running and you will need to wait a while. Although the pod is running, we cannot visit its service from a browser as we did when testing Docker: all pods run on an internal network, and we cannot access them directly from the outside. To expose the service, we need to create a Service. A Service acts like a reverse proxy and load balancer, distributing requests to the pods behind it. Create a Service definition file named svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-svc
  labels:
    app: k8s-demo
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30050
  selector:
    app: k8s-demo
This service exposes port 80 of the container on port 30050 of the node. Note the selector section in the last two lines of the file: it determines which pods requests are routed to, namely all pods carrying the app: k8s-demo label. However, the pod we deployed earlier has no labels:
$ kubectl describe pods | grep Labels
Labels: <none>
Therefore, we first need to update pod.yml and add the label (note that the labels section goes under metadata:):
apiVersion: v1
kind: Pod
metadata:
  name: k8s-demo
  labels:
    app: k8s-demo
spec:
  containers:
    - name: k8s-demo
      image: k8s-demo:0.1
      ports:
        - containerPort: 80
Then update the pod and confirm that the label has been added:
$ kubectl apply -f pod.yml
pod "k8s-demo" configured
$ kubectl describe pods | grep Labels
Labels: app=k8s-demo
Now you can create the service:
$ kubectl create -f svc.yml
service "k8s-demo-svc" created
Run the following command to obtain the exposed URL, then visit the page in a browser:
$ minikube service k8s-demo-svc --url
http://192.168.64.4:30050

Scale-out, rolling update, and version rollback

In this section, let's experiment with some operations commonly used for high-availability services in a production environment. Before proceeding, delete the pod we just deployed (but keep the service; it will be used shortly):
$ kubectl delete pod k8s-demo
pod "k8s-demo" deleted
In a production environment, we need to keep a service available despite single-node failures and dynamically adjust the number of replicas as load changes, so managing pods one by one as above is impractical. Kubernetes users usually manage services with a Deployment. A deployment keeps a specified number of pods running across the nodes and can perform operations such as updates and rollbacks. First, create a definition file, deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-demo-deployment
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      containers:
        - name: k8s-demo-pod
          image: k8s-demo:0.1
          ports:
            - containerPort: 80
Note that the apiVersion at the top differs from before, because the Deployment API is not included in v1. replicas: 10 specifies that this deployment should have 10 pods; the template that follows is similar to the earlier pod definition. Submit this file to create the deployment:
$ kubectl create -f deployment.yml
deployment "k8s-demo-deployment" created
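As an aside: on newer Kubernetes versions (1.9 and later), the Deployment API has moved to the stable apps/v1 group, which additionally requires an explicit selector. A hypothetical equivalent header for this deployment, not needed for the Minikube version used here, would look like:

```yaml
# Assumed apps/v1 form (Kubernetes 1.9+); shown for reference only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      app: k8s-demo
  # the template section is the same as in the extensions/v1beta1 version
```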
Run the following command to view the deployment's replica set; it shows 10 pods running:
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
k8s-demo-deployment-774878f86f 10 10 10 19s
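Scaling out (or in) is simply a matter of changing replicas in deployment.yml and re-applying the file with kubectl apply -f deployment.yml. For example, with a hypothetical target of 15 pods:

```yaml
# deployment.yml: change only the replica count, e.g. from 10 to 15 (hypothetical value)
spec:
  replicas: 15
```

Kubernetes then creates or terminates pods until the actual count matches the desired one. Running kubectl scale deployment k8s-demo-deployment --replicas=15 achieves the same thing without editing the file.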
Assume that we have made some changes to the project and want to release a new version. As an example, we only modify the content of the HTML file, and then build a new image k8s-demo:0.2:
$ echo '<h1>Hello Kubernetes!</h1>' > html/index.html
$ docker build -t k8s-demo:0.2 .
Then update deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-demo-deployment
spec:
  replicas: 10
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      containers:
        - name: k8s-demo-pod
          image: k8s-demo:0.2
          ports:
            - containerPort: 80
There are two changes: the image version number is updated to image: k8s-demo:0.2, and the minReadySeconds and strategy sections are added. The new section defines the update policy: minReadySeconds: 10 means that after a pod is updated, Kubernetes waits until it has been in the ready state for 10 seconds before updating the next one; maxUnavailable: 1 means that at most one pod may be unavailable at any time during the update; maxSurge: 1 means that at most one pod beyond the desired replica count may exist at any time. With these settings, Kubernetes replaces the pods behind the service one by one. Run the following command to start the update:
$ kubectl apply -f deployment.yml --record=true
deployment "k8s-demo-deployment" configured
Here --record=true makes Kubernetes record this command in the release history for later reference. Run the following command to view the status of each pod:
$ kubectl get pods
NAME READY STATUS ... AGE
k8s-demo-deployment-774878f86f-5wnf4 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-6kgjp 0/1 Terminating ... 7m
k8s-demo-deployment-774878f86f-8wpd8 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-hpmc5 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-rd5xw 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-wsztw 1/1 Running ... 7m
k8s-demo-deployment-86dbd79ff6-7xcxg 1/1 Running ... 14s
k8s-demo-deployment-86dbd79ff6-bmvd7 1/1 Running ... 1s
k8s-demo-deployment-86dbd79ff6-hsjx5 1/1 Running ... 26s
k8s-demo-deployment-86dbd79ff6-mkn27 1/1 Running ... 14s
k8s-demo-deployment-86dbd79ff6-pkmlt 1/1 Running ... 1s
k8s-demo-deployment-86dbd79ff6-thh66 1/1 Running ... 26s
From the AGE column you can see that some pods are new while others are still old. The following command displays the real-time status of the rollout:
$ kubectl rollout status deployment k8s-demo-deployment
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
deployment "k8s-demo-deployment" successfully rolled out
Because I typed the command late, the rollout was almost finished, so there are only three lines of output. The following command shows the release history; because --record=true was used for the second release, you can see the command used to publish it:
$ kubectl rollout history deployment k8s-demo-deployment
deployments "k8s-demo-deployment"
REVISION CHANGE-CAUSE
1 <none>
2 kubectl apply --filename=deployment.yml --record=true
If you refresh the browser, you will see the updated content, "Hello Kubernetes!". Suppose that after releasing the new version, we find a serious bug and need to roll back to the previous version immediately. That takes one simple command:
$ kubectl rollout undo deployment k8s-demo-deployment --to-revision=1
deployment "k8s-demo-deployment" rolled back
Kubernetes replaces each pod according to the same policy as a new release, except this time it replaces the new version with the old one:
$ kubectl rollout status deployment k8s-demo-deployment
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for rollout to finish: 1 old replicas are pending termination...
deployment "k8s-demo-deployment" successfully rolled out
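Both rollouts above respect the bounds set by the update policy. As a quick sanity check, this plain-shell sketch computes those bounds from the values in deployment.yml:

```shell
# Rolling-update bounds for the deployment above.
replicas=10
max_surge=1        # extra pods allowed beyond the desired count
max_unavailable=1  # pods allowed to be unavailable at once
upper=$((replicas + max_surge))        # at most 11 pods exist at any time
lower=$((replicas - max_unavailable))  # at least 9 pods stay available
echo "pod count stays between $lower and $upper"
```

During both the update and the rollback, the pod count therefore never drops below 9 or rises above 11.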
After the rollback completes, refresh the browser to confirm that the page content has changed back to "Hello Docker!".

Conclusion

We practiced image building and container deployment at several levels, and used a 10-container deployment to walk through rolling updates and rollbacks. Kubernetes offers far more functionality than this; this article is only a fast-paced walkthrough that skims the surface and omits many details. You cannot yet add "proficient in Kubernetes" to your resume, but you should be able to test your front-end and back-end projects in a local Kubernetes environment, turning to Google and the official documentation when you hit specific problems. Building on that, with more familiarity, you should be able to deploy your own services to a Kubernetes production environment provided by others.

Most of LeanCloud's services run on Docker-based infrastructure, including API services, middleware, and back-end tasks. Most developers who use LeanCloud work mainly on the front end, but our Cloud Engine product is where container technology comes closest to users. It offers the advantages of containers, namely good isolation and easy scale-out, while directly supporting native dependency management in each language, sparing users the burden of image building, monitoring, and recovery. It is well suited to users who want to focus their energy on development.

This article is reposted from DockOne: "Docker and Kubernetes, from hearing about them to understanding them: a whirlwind tutorial for programmers".
