
Cloud Native: From Docker to Kubernetes and to Service Meshes

In this blog, an Alibaba engineer shares his thoughts on Docker, Kubernetes, and service meshes like Istio, and what they all mean for cloud native.

By Lv Renqi.

Around this time last year, while researching service meshes, I read a great deal about Kubernetes, mostly through presentations I found online. Around the same time, I also deployed and experimented with OpenShift, a Kubernetes application platform. Afterwards, I wrote two articles discussing my experiences and thoughts about Service Mesh and Kubernetes.

Looking back on those articles now, though, I think my understanding at the time was somewhat limited and one-sided, as is true of most information you'll find online about these topics. Now that my team is considering Dubbo for a cloud-native architecture, I have spent some time brushing up on the basics, reading practical guides to Docker and Kubernetes, in the hope of building a better and more comprehensive understanding of the relevant technology.

To begin, I'd like to consider what exactly Docker, Kubernetes, and Service Mesh are and what they do, so as to make better sense of what cloud native is and means. At the end of the blog, we'll also tackle the concept of cloud native and take a stab at defining what exactly "cloud nativeness" is.

Thoughts on Docker

In this section, I'm going to present my thoughts after reading the practical guide to Docker.

Docker is a Linux container toolset designed for building, shipping, and running distributed applications. Docker drives the process of containerization, which ensures application isolation and reusability, much as virtualization isolates workloads to protect the underlying physical machines.

In many ways, Docker was one of the first mainstream pieces of what would become the cloud native puzzle.

Below is a graphic that shows how Docker can be used to streamline things:


The Core of What Docker Is

The points below sum up what exactly Docker is and what it does.

  • Docker can be described as a sort of contract that concatenates the entire lifecycle of an application. Its core strengths are that it speeds up application shipping and improves overall productivity.
  • Docker is fundamentally different from virtual machines. Docker, and the concept of containerization itself, is not virtualization: it does not simulate the hardware of a machine. Rather, Docker is built around applications, and it encapsulates an application, including its configuration and dependencies, using the open Dockerfile format. Virtual machines, by contrast, work at the operating-system layer with a focus on resource management.
  • Docker can also be tightly integrated with Chef and Jenkins.
  • Docker Hub is the center of container image management and best reflects the features of Docker applications. Each image can be named and tagged and has a unique ID.
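As a concrete illustration of the Dockerfile "contract" described above, here is a minimal sketch for a hypothetical Python web application (the file names, base image, and start command are assumptions for illustration, not from the original article):

```dockerfile
# The image is built around the application, not the machine:
# base image, dependencies, code, and start command in one file.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# The command the container runs; app.py is a placeholder name.
CMD ["python", "app.py"]
```

Such an image would typically be built with `docker build -t myapp:1.0 .` and run with `docker run myapp:1.0`, producing the same environment on a laptop, a CI server, or a production host.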

Docker is also well integrated with all the stages of the application lifecycle:

  • Development: Docker ensures the consistency of the development environment, since the container carries its environment with it, and it can work with traditional configuration tools such as Make, Chef, and Puppet. It also features small images and is developer friendly. Dockerfiles can be used to enable source-to-image automation.
  • DevOps: Docker enables automated workflows from Git to Docker Hub to Jenkins. It adopts a comprehensive workflow of continuous check-in, continuous integration, continuous deployment, and continuous delivery.
  • Production environment: production is also where Platform-as-a-Service (PaaS) products built on Docker, such as Swarm and Kubernetes, shine.

Docker is responsible for the following:

  1. The runtime status and configuration management for multi-host containers.
  2. The security, logging, and troubleshooting management in a multi-host and multi-container orchestration environment.

Thoughts on Kubernetes

As I hinted at earlier, Kubernetes emerged after containerized applications and Docker became popular and platforms began to be needed to manage containers at scale. In this section, as in the last, I'm going to present my reflections after reading a practical guide to Kubernetes.

Why Kubernetes

The popularity of Kubernetes is closely connected with Docker. The wide adoption of Docker made Platform-as-a-Service (PaaS) viable. Kubernetes is the product of Google's years of practical experience managing massive data centers, and it was carried along by the explosion of containerized applications happening at the time. Google's goal was to establish a new industry standard, and they undoubtedly succeeded, with Kubernetes becoming as wildly popular as it has.

Kubernetes is really a big piece of the cloud native revolution.

Google started using containers in 2004 and released Control groups (typically referred to as cgroups) in 2006. At the same time, they were using cluster management platforms such as Borg and Omega internally in their cloud and IT infrastructure. Kubernetes was inspired by Borg and draws on the experience and lessons of container managers, including Omega.

You can read more about this fascinating bit of history on this blog.

One question worth considering is how Kubernetes beat other early contenders like Docker Compose, Docker Swarm, and even Mesos. The answer, in short, is Kubernetes's superior abstraction model; Kubernetes is different at the core of its design. To understand Kubernetes, we'll need to go over the concepts and terms involved in its underlying architecture.

The Major Concepts of Kubernetes

Below are some of the concepts that are more or less unique to Kubernetes. From understanding the concepts behind Kubernetes, you can gain a better general understanding of how Kubernetes works:

  • Pod: A Pod is the basic execution unit of a Kubernetes application. A pod is a combination of one or more containers, which all run on the same host and share the same network namespace, IP address, and ports. These containers use localhost to communicate with and discover each other, and they can share the same storage volumes. Pods, not containers, are the smallest units that Kubernetes creates, schedules, and manages. Pods provide a higher level of abstraction and more flexible deployment and management models.
  • Controller: A Controller is the logic unit for task execution and is managed by the Controller Manager for load scheduling and execution. The ReplicationController (RC) is one specific implementation of a controller, responsible for elastic scaling and rolling upgrades. Each object model has a corresponding controller, such as Node, Service, Endpoint, ServiceAccount (SA), PersistentVolume (PV), Deployment, and Job. You can also implement your own controllers through extensions.
  • Service: A Service in Kubernetes is an abstraction over actual application services. It defines a logical set of pods and the policy for accessing them. A Service acts as the single point of access (or proxy) for its pods, which enables access to applications without any need to know how the backend pods work. This provides a simplified mechanism for service proxying and discovery and makes expansion and maintenance much easier. A Service can also proxy backends other than pods, such as a service external to the Kubernetes cluster; in that case no selector is needed, but an endpoint with the same name as the Service must be defined manually.
  • Label: A Label is a key/value pair attached to an object, such as a pod. Labels are used to loosely couple pods with services and replication controllers.
  • Node: A Node in Kubernetes terms is simply a host or machine. In Kubernetes, a node is defined as a worker machine, which may be virtual or physical.
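To make the Pod, Label, and Service concepts above concrete, here is a minimal sketch of a Pod and a Service that selects it by label (the names, image, and ports are illustrative assumptions, not from the original article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # illustrative name
  labels:
    app: web               # the label the Service will match on
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image works here
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web               # selects every Pod carrying this label
  ports:
    - port: 80
      targetPort: 80
```

The Service gives clients a single stable access point, and Kubernetes keeps the set of matching pods behind it up to date as pods come and go.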

If you'd like to learn more, you can go straight to the source and read the official documentation, which you can find here.

Kubernetes and the Mainstream CI/CD Models

Continuous integration and deployment are arguably the most distinctive features of Platform-as-a-Service (PaaS) models, and of cloud native as well, and most cloud vendors provide PaaS offerings based on Kubernetes. Therefore, the general user experience and the specific continuous integration and continuous delivery (CI/CD) pipelines used in these offerings are what distinguish different services. Generally speaking, CI/CD pipelines start when a project is created and permeate every stage of the application lifecycle. This is their core advantage: they can be integrated into every step of a development workflow and provide an all-in-one application service experience.

Below are some mainstream CI/CD tools that can be used with Kubernetes:

  • Jenkins: Jenkins came out of Hudson and is implemented in Java. It is perhaps the most widely used CI/CD tool nowadays.
  • TeamCity: TeamCity has a lot of powerful features, including a public issue tracker and forum, 100 build configurations, and several other features.
  • CircleCI: CircleCI is great if you want to easily automate your development process safely and at large scale. Its clients include Facebook, Kickstarter, and Spotify.
  • Travis CI: Travis CI can help you test and deploy with confidence. In fact, Apache projects use Travis CI for integration testing. Each pull request gets a Travis CI task, which runs the unit tests and makes sure the code is up to spec.
  • Drone CI: Drone is a self-service Continuous Delivery platform that is convenient for busy development teams. Their clients include Cisco, Ebay, Gannett, VMWare, and Capital One.

API Objects and Metadata Connected with Kubernetes

Now, the last important thing to consider when it comes to Kubernetes are the API objects and metadata connected with Kubernetes:

  • kubectl: kubectl is the command-line tool of Kubernetes.
  • Direct access to the Kubernetes API: the Kubernetes API Server is the access point of Kubernetes. It serves RESTful operations and integrates Swagger to define and describe its APIs. You can read more about it here.
  • Kubebuilder SDK: Kubebuilder is the main SDK for extending Kubernetes with custom APIs and controllers.

In reality, all metadata really does is define the basic information of an API object. It is represented by the metadata field and has the following attributes:

  • namespace: The namespace field specifies the namespace of the API object. Kubernetes supports multiple virtual clusters backed by the same physical cluster; these virtual clusters are called namespaces. Different projects, teams, and users can use different namespaces for management, customized access control, and other policies. All API objects, except nodes, belong to a namespace, defined by metadata.namespace. If this field is not set, the namespace called default is used.
  • name: The name field specifies the name of the API object and is an important attribute, defined by metadata.name. Within the same namespace, objects are identified by their names, so the name of an API object must be unique within its namespace. Node and Namespace names must be unique across the whole system.
  • labels: The labels field specifies the labels of the API object. Labels are key/value pairs attached to API objects. They are intended to specify identifying attributes of objects that are meaningful and relevant to users but do not directly imply semantics to the core system. Labels can be used to organize and select subsets of objects. ReplicationControllers and Services use labels to associate themselves with pods, and pods can likewise use labels to select nodes.
  • annotations: The annotations field specifies the annotations of the API object. Annotations attach arbitrary non-identifying metadata to objects and cannot be used to select objects. An annotation is also a key/value pair, and its value can be structured or unstructured data of arbitrary length.
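The subset-match semantics of label selection described above can be sketched in a few lines of Python. This is only an illustration of the matching rule, not Kubernetes's actual implementation:

```python
def selector_matches(selector, labels):
    """Return True when every key/value pair in the selector
    also appears in the object's labels (subset match)."""
    return all(labels.get(key) == value for key, value in selector.items())

pod_labels = {"app": "web", "tier": "frontend", "version": "v2"}

# A Service selecting app=web matches this pod...
print(selector_matches({"app": "web"}, pod_labels))                   # True
# ...but a selector asking for a different version does not.
print(selector_matches({"app": "web", "version": "v1"}, pod_labels))  # False
```

Extra labels on the pod never prevent a match; only the keys named in the selector are compared.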

Thoughts on Service Mesh

In this section, I'm going to take a quick look at Service Mesh, which is really the next big thing after Kubernetes and Docker. To me, the core strength of a service mesh is its control capabilities, and that is one area where Istio in particular shines.

In fact, I'll go as far as to say that, if the Istio model continues to be standardized and expanded, it could easily become the de facto PaaS product for containerized applications. That said, I think the service mesh offerings around Apache's Dubbo, Envoy, NGINX, and even Conduit are also viable integration choices.

Since I think that Istio really is a stellar option, let's focus on it first.

Istio, What's It All About?

To understand service meshes, you really need to understand the design principle behind them, so let's look at the design principle behind Istio. In a nutshell, an abstract service model comes first, before implementation on any container scheduling platform. Consider the figure below.


If you're interested in learning all the specifics about what exactly Istio is, you can check out its official explanation here.

Generally speaking, the Istio service model is an abstract model of services and their instances. Istio is independent of the underlying platform: platform-specific adapters populate the model objects with fields from the metadata found in each platform. In this model, a service is a unit of an application with a unique name that other services refer to, and service instances are the pods, virtual machines, and containers that implement the service. There can be multiple versions of a service.

Next, there's the service model itself. Each service has a fully qualified domain name (FQDN) and one or more ports on which it listens for connections. A service can have a single load balancer and virtual IP address associated with it. Each service also has one or more instances, which are the actual manifestations of the service; an instance represents an entity such as a pod, each with its own network endpoint.

Also part of Istio's design are service versions, with each version of a service differentiated by a unique set of labels. Labels are simple key/value pairs assigned to the instances of a particular service version, and all instances of the same version must carry the same labels. Istio expects the underlying platform to provide a service registry and a service discovery mechanism.
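As a sketch of how Istio expresses label-defined service versions, a DestinationRule might look like the following (the service and version names here are assumptions for illustration):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews            # resolved to an FQDN via the platform's registry
  subsets:
    - name: v1
      labels:
        version: v1        # every v1 instance carries this label
    - name: v2
      labels:
        version: v2
```

Routing rules can then target the `v1` or `v2` subset by name, which is how Istio layers traffic control on top of the version labels already present on the instances.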

Some More Things to Know About Cloud Native

With the above discussion of Docker, Kubernetes, and service meshes like Istio behind us, there is one major thing left to cover: "cloud native" itself. So, what is "cloud native"? There are different interpretations of what it means or what it takes to be cloud native, but according to the Cloud Native Computing Foundation (CNCF), cloud native can be understood as follows:

Cloud-native technologies and solutions can be defined as technologies that let you build and run scalable applications in the cloud. Cloud native grew up alongside container technology, as we saw with Docker and then Kubernetes, and generally involves the characteristics of being stateless, practicing continuous delivery, and adopting microservices.

Cloud-native technologies and solutions ensure good fault tolerance and easy management, and when combined with robust automation, thanks to CI/CD pipelines among other things, they can deliver a strong impact without too much hassle.

Nowadays, the concept of cloud native, even according to the CNCF, involves using Kubernetes, but Kubernetes is not the only piece of the puzzle; rather, it is only the beginning. With traditional microservice solutions like Dubbo now part of the CNCF landscape, more and more people are attracted to cloud native by the unique features and capabilities these solutions have to offer.

In other words, in today's landscape, cloud native may start with Kubernetes but go on to embrace service mesh solutions like Istio. It's just like the progression we saw throughout this blog: first there was Docker, then Kubernetes, and now there are also service meshes.
