10 minutes to understand Docker and K8S
Abstract: What is "open source"? Open source means opening the source code: releasing a program's previously internal, confidential source code to everyone, so that anyone can participate and contribute code and ideas.
In 2010, a few young IT engineers founded a company called "dotCloud" in San Francisco, USA.
The company mainly provided PaaS cloud computing services — specifically, container technology based on LXC.
LXC (Linux Containers) is an operating-system-level container virtualization technology for Linux.
Later, dotCloud simplified and standardized its container technology and named it Docker.
When Docker was born, it attracted little attention from the industry, and dotCloud, a small startup, was struggling in a fiercely competitive market.
Just as they were about to give up, the idea of "open source" came to mind.
What is "open source"? Open source means opening the source code: releasing a program's previously internal, secret source code to everyone, so that anyone can participate and contribute code and ideas.
Some software is open source from day one. Other software cannot survive commercially, but its creators do not want to give it up, so they choose to open source it: if you cannot feed yourself, let the community feed you.
In March 2013, 28-year-old Solomon Hykes, one of the founders of dotCloud and the father of Docker, officially decided to open source the Docker project.
Solomon Hykes (who left Docker in 2018)
And once it was opened up, the result was astonishing.
More and more IT engineers discovered Docker's advantages and flocked to join the Docker open source community.
Docker's popularity grew at an astonishing pace.
In the very month it was open-sourced, Docker 0.1 was released, and a new version followed every month after that. On June 9, 2014, Docker 1.0 was officially released.
By then, Docker had become one of the most popular open source technologies in the industry. Even giants such as Google, Microsoft, Amazon, and VMware favored it and pledged their full support.
After Docker took off, dotCloud simply renamed the company Docker Inc.
Why are Docker and container technology so popular? To put it bluntly: because they are "light".
Before container technology, the star of the industry was the virtual machine, represented by technologies such as VMware and OpenStack.
Many people have used a virtual machine: you install a piece of software on your operating system, and that software simulates one or more "sub-computers".
A virtual machine, similar to a "child computer"
In a "sub-computer" you can run programs just as on a normal computer, such as opening QQ. If you want, you can conjure up several "sub-computers", each running QQ. The "sub-computers" are isolated from one another and do not affect each other.
A virtual machine is a virtualization technology. Container technology such as Docker is also a virtualization technology, which belongs to lightweight virtualization.
Although virtual machines can isolate many "sub-computers", they take up more space, start more slowly, and virtual machine software may cost money (e.g., VMware).
Container technology avoids these drawbacks. It does not need to virtualize an entire operating system, only a small, isolated environment (similar to a "sandbox").
Containers start in seconds and use resources very efficiently (a single host can run thousands of Docker containers simultaneously). They also occupy far less space: virtual machines generally require several to dozens of GB, while a container may need only MBs or even KBs.
Containers vs Virtual Machines
Because of this, container technology has been warmly welcomed and sought after and developed rapidly.
Let's take a look at Docker in detail.
Note that Docker itself is not a container; it is a tool for creating containers — an application container engine.
If you want to understand Docker, you can actually read its two slogans.
The first sentence is "Build, Ship and Run".
That is: build, ship, and run — three steps in one.
I came to a vacant lot and wanted to build a house, so I moved stones, chopped wood, drew blueprints, and finally built the house.
As a result, I lived for a while and wanted to move to another vacant lot. At this time, according to the previous method, I can only move stones, chop wood, draw blueprints, and build houses again.
But an old witch came and taught me a magic trick.
This magic can make a copy of the house I built — an "image" — and put it in my backpack.
When I reach another open lot, I use this "image" to duplicate the house, set it down, and move right in.
How about it? Isn't it amazing?
Hence Docker's second slogan: "Build once, Run anywhere."
The three core concepts of Docker are: Image, Repository, and Container.
In my example just now, the house copy in the backpack is the Docker image, my backpack is the Docker repository, and the house I conjure up on the open lot is a Docker container.
To put it bluntly, this Docker image is a special file system. In addition to providing the programs, libraries, resources, configuration and other files required by the container runtime, it also includes some configuration parameters (such as environment variables) prepared for the runtime. The image does not contain any dynamic data and its contents will not be changed after construction.
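As a concrete sketch of how such an image is described in practice, here is a minimal hypothetical Dockerfile — the base image tag, the `app.py` file, and the environment variable are all assumptions for illustration, not anything from the article:

```dockerfile
# Start from an official base image (each instruction adds a read-only layer)
FROM python:3.12-slim
# Copy the program and its resources into the image's file system
COPY app.py /app/app.py
# Bake in a configuration parameter (an environment variable) for the runtime
ENV APP_ENV=production
# Declare what to run when a container is started from this image
CMD ["python", "/app/app.py"]
```

Building it (e.g., `docker build -t myapp .`) produces an immutable image; running it (e.g., `docker run myapp`) creates a container — a "house" — from that image.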
That is, every house produced from the same image is identical; the daily necessities are not included — whoever moves in supplies those.
Each image produces one kind of house. So I can have multiple images!
Say I built a European-style villa and created an image of it. One buddy built a Chinese courtyard and generated an image too. Another built an African thatched hut and generated yet another image...
Now we can exchange images — you use mine, I use yours. Isn't that cool?
Over time, this exchange grew into a large public warehouse of images.
The Docker Registry service (like a warehouse administrator) is responsible for managing Docker images.
Not every image built by just anyone is trustworthy — what if someone builds a problematic house?
Therefore, the Docker Registry service manages images strictly.
The most commonly used public Registry is the official Docker Hub, which is also the default Registry and hosts a large number of high-quality official images.
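By default, `docker pull` fetches images from Docker Hub. If you want the Docker daemon to go through a registry mirror instead, that is configured in `/etc/docker/daemon.json` — a minimal sketch, where the mirror URL is a placeholder assumption:

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

After editing this file, the daemon must be restarted (e.g., `systemctl restart docker`) for the setting to take effect.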
Well, after talking about Docker, let's turn our attention to K8S.
Just as Docker container technology was in full swing, people discovered that applying Docker to real business was difficult: orchestration, management, and scheduling were all hard problems. A more advanced, more flexible management system for Docker and containers was urgently needed.
At this time, K8S appeared.
K8S is a container-based cluster management platform; its full name is Kubernetes.
The word Kubernetes comes from Greek, meaning helmsman or pilot. K8S is an abbreviation formed by replacing the eight letters "ubernete" with the digit "8".
Unlike Docker, the creator of K8S is the well-known industry giant - Google.
However, K8S is not a completely new invention. Its predecessor is Borg, a system Google had been refining internally for more than a decade.
K8S was officially announced by Google in June 2014 and announced as open source.
In July of the same year, companies such as Microsoft, Red Hat, IBM, Docker, CoreOS, Mesosphere and Saltstack joined K8S one after another.
In the following year, VMware, HP, Intel and other companies also joined in one after another.
In July 2015, Google officially joined the OpenStack Foundation, and at the same time Kubernetes v1.0 was officially released.
As of this writing, Kubernetes has reached v1.13.
The architecture of K8S is a little complicated, so let's take a brief look at it.
A K8S system is usually called a K8S cluster (Cluster).
This cluster mainly consists of two parts:
A Master node (the control node)
A group of Node nodes (compute nodes)
As the names suggest, the Master node is responsible for management and control, while Node nodes are the workload nodes where the actual containers run.
Take a closer look at these two nodes.
The first is the Master node.
The Master node includes the API Server, Scheduler, Controller Manager, and etcd.
The API Server is the external interface of the whole system, called by clients and by the other components — the "service counter".
The Scheduler is responsible for scheduling resources within the cluster — the "dispatch room".
The Controller Manager runs the controllers that keep the cluster in its desired state — the "general manager".
etcd is a distributed key-value store that holds the entire cluster's configuration and state — the "records archive".
Then there is the Node node.
Node nodes include Docker, kubelet, kube-proxy, Fluentd, kube-dns (optional), and Pod.
The Pod is the most basic operational unit in Kubernetes. A Pod represents a process running in the cluster and encapsulates one or more closely related containers. Besides the Pod, K8S also has the concept of a Service: a Service can be seen as the external access interface for a group of Pods that provide the same service. If this paragraph is hard to follow, feel free to skip it.
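The Pod/Service relationship above can be sketched in Kubernetes YAML. All names here (`my-app`, the `nginx` image, port 80) are assumptions for illustration only:

```yaml
# A Pod wrapping a single container (hypothetical names)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app        # the Service selects Pods by this label
spec:
  containers:
  - name: web
    image: nginx:1.25  # the container image to run
    ports:
    - containerPort: 80
---
# A Service exposing all Pods labeled app=my-app as one access point
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
```

The Service forwards traffic to whichever matching Pods exist at the moment, which is what lets Pods be created and destroyed freely behind a stable interface.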
Docker, needless to say, creates containers.
Kubelet is mainly responsible for monitoring Pods assigned to its Node, including creation, modification, monitoring, deletion, etc.
Kube-proxy is mainly responsible for providing a proxy for Pod objects.
Fluentd is mainly responsible for log collection, storage and query.
Are you a little confused? Alas, it's really hard to explain in a few words, so let's skip it.
Docker and K8S have been introduced, but the article is not over.
The next part is written for core network engineers and even all communication engineers.
From 1G decades ago, to 4G today, to 5G in the future, mobile communications have undergone earth-shaking changes — and so have core networks.
Yet if you look closely at these changes, you will find that the core network has not changed in essence: it is still, at bottom, a lot of servers. The different core network elements are simply different servers, different computing nodes.
What has changed is the form and the interfaces of these "servers": in form, from boards in a cabinet, to blades in a cabinet, to general-purpose x86 blade servers; in interfaces, from trunk cables, to network cables, to optical fiber.
However much it changes, it is still a server, a computing node, a CPU.
Since it is a server, it is bound to embark on the road of virtualization like IT cloud computing. After all, virtualization has too many advantages, such as the aforementioned low cost, high utilization, full flexibility, dynamic scheduling, and so on.
A few years ago, everyone thought virtual machines were the final form of the core network. Now, containerization looks more likely. NFV (Network Functions Virtualization), much discussed in recent years, may well evolve into "NFC" (network function containerization).
Taking VoLTE as an example, if the previous 2G/3G method is used, a large number of dedicated devices are required to act as different network elements of EPC and IMS respectively.
VoLTE-related network elements
After the container is adopted, it is likely that only one server is needed, more than a dozen containers are created, and different containers are used to run the service programs of different network elements respectively.
These containers can be created and destroyed at any time. It can also become larger, smaller, stronger, and weaker at will without downtime, dynamically balancing performance and power consumption.
In the 5G era, the core network adopts a microservice architecture, a perfect match for containers: the monolithic architecture is decomposed into microservices, and each specialized function is assigned its own isolated container, granting maximum flexibility.
Fine division of labor
According to this trend, everything in a mobile communication system except the antenna may eventually be virtualized. The core network is the first, but it will not be the last. Once virtualized, the core network belongs more to IT than to communications: its functions become just ordinary software running in containers.
As for the core network engineers here, congratulations, you are about to successfully transform!
Knowledge Base Team