Four Basic Principles of Kubernetes Design and Development

About the author: Saad Ali is a senior software engineer at Google working on the open source Kubernetes project. He joined the project in December 2014 and is responsible for the development of the Kubernetes storage and volume subsystem. He also serves as lead of the Kubernetes Storage Special Interest Group (SIG Storage) and is a co-author and maintainer of CSI (Container Storage Interface). Before joining Google, he led IMAP protocol development at Microsoft.
Kubernetes is fast becoming the de facto standard for deploying workloads on distributed systems. In this post, I will help you gain a deeper understanding of Kubernetes by revealing some of the principles of its design.
1. Declarative Rather Than Imperative
Once you learn to deploy your first workload (a pod) on the Kubernetes open source orchestration engine, you will encounter the first principle of Kubernetes: the Kubernetes API is declarative rather than imperative.
With an imperative API, you issue commands for the server to execute directly, for example "run this container" or "stop this container". With a declarative API, you declare the state you want the system to be in, and the system continuously works to drive itself toward that state.
Think of the difference between manually driving a car and an autonomous driving system: you specify the destination, not every turn.
So, in Kubernetes, you create an API object (using the CLI or REST API) to represent what you want the system to do, and all components in the system work to drive toward that state until the object is deleted.
For example, if you want to schedule a containerized workload, instead of issuing a "run container" command, create an API object, pod, that describes the desired state:
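A minimal pod manifest might look like this (the names and image are illustrative, not from the original article):

```yaml
# Hypothetical pod manifest describing the desired state:
# "a container running this image should exist".
apiVersion: v1
kind: Pod
metadata:
  name: my-app        # illustrative name
spec:
  containers:
  - name: my-app
    image: nginx:1.25 # illustrative image
    ports:
    - containerPort: 80
```

Creating this object (for example, with `kubectl apply -f pod.yaml`) records the desired state; the system then works to make reality match it.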
After creation, this object persists on the API server until it is deleted.
If the container crashes for some reason, the system will restart the container.
To terminate the container, delete the pod object (for example, with `kubectl delete pod <name>`).

Why Declarative Rather Than Imperative?

A declarative API makes the system more robust.
In a distributed system, any component can fail at any time. When the component comes back up, it needs to figure out what to do.
With an imperative API, a crashed component may miss calls issued while it was down, and needs some external component to "catch it up" when it comes back. But with a declarative API, a component can simply look at the current state of the API server to determine what it needs to do ("Ah, I need to make sure this container is running").
This is also described as level triggering rather than edge triggering. In an edge-triggered system, if the system misses an "event" (an "edge"), the event must be replayed for the system to recover. In a level-triggered system, even if the system misses an event (perhaps because it was down), when it comes back up it can look at the current state of the signal and respond accordingly.
Therefore, the declarative API makes the Kubernetes system more robust to component failures.
2. No Hidden Internal APIs
If you understand how the various Kubernetes components work, you will encounter the next principle of Kubernetes: the control plane is transparent because there is no hidden internal API.
This means that the API that Kubernetes components use to interact is the same API that you use to interact with Kubernetes. Combined with the first principle (Kubernetes API is declarative), this means that Kubernetes components can only interact by monitoring and modifying the Kubernetes API.
Let's illustrate this with a simple example. To start containerized workloads, you can create a pod object on the Kubernetes API server as shown above.
The Kubernetes scheduler determines the best node to run a pod on based on available resources. It does this by watching the Kubernetes API server for new pod objects. When a new, unscheduled pod is created, the scheduler runs its algorithm to find the best node for it. Once a node has been selected, the scheduler does not go and tell that node to start the pod. Remember, the Kubernetes API is declarative and internal components use the same API. So instead, the scheduler updates the NodeName field in the pod object to indicate that the pod has been scheduled.
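As a sketch, after scheduling, the pod object on the API server might look like this (the pod and node names are illustrative):

```yaml
# Pod object after scheduling: the only change the scheduler made
# is filling in spec.nodeName.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  nodeName: node-1    # set by the scheduler; illustrative node name
  containers:
  - name: my-app
    image: nginx:1.25 # illustrative image
```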
The kubelet (the Kubernetes agent running on each node) watches the Kubernetes API just like other Kubernetes components. When the kubelet sees a pod whose NodeName field corresponds to itself, it knows the pod has been scheduled to it and must be started. Once the kubelet starts the pod, it continues to monitor the pod's containers and keeps them running as long as the corresponding pod object exists in the API server.
When the pod object is deleted, the kubelet understands that the container is no longer needed and terminates it.

Why No Hidden Internal APIs?

Having Kubernetes components use the same external API makes Kubernetes composable and extensible.
If, for some reason, a default Kubernetes component (e.g., the scheduler) does not meet your needs, you can turn it off and replace it with your own implementation that uses the same API.
Additionally, if the functionality you need is not yet available, you can easily write components to extend Kubernetes functionality using the public API.
3. Meet Users Where They Are
The Kubernetes API allows storing information that workloads may need. For example, it can store secrets and config maps. Secrets hold any sensitive data you don't want baked into a container image, such as passwords and certificates. A config map holds configuration that should stay independent of the container image, such as application startup parameters.
Because of the second principle defined above (no hidden internal APIs), applications running on Kubernetes could be modified to fetch secrets or config map data directly from the Kubernetes API server. But that would mean modifying applications to be aware that they are running on Kubernetes.
This is where the third principle of Kubernetes comes in: meet users where they are. That is, Kubernetes should not require applications to be rewritten in order to run on Kubernetes.
For example, many applications accept secrets and configuration as files or environment variables. Therefore, Kubernetes supports injecting secrets and config maps into pods as files or environment variables.
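As a sketch, a pod can consume a Secret as an environment variable and a ConfigMap as mounted files, with no Kubernetes-specific code in the application (all names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.25            # illustrative image
    env:
    - name: DB_PASSWORD          # the app just reads an ordinary env var
      valueFrom:
        secretKeyRef:
          name: db-secret        # illustrative Secret name
          key: password
    volumeMounts:
    - name: app-config
      mountPath: /etc/app        # config appears as ordinary files here
  volumes:
  - name: app-config
    configMap:
      name: app-config           # illustrative ConfigMap name
```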
Why Meet Users Where They Are?
By making design choices that minimize barriers to deploying workloads on Kubernetes, Kubernetes can easily run existing workloads without rewriting or drastically changing them.
4. Workload Portability
Once you are running stateless workloads on Kubernetes, the natural next step is to try stateful workloads. Kubernetes provides a powerful volume plugin system that makes many different types of persistent storage systems available to Kubernetes workloads.
For example, a user can easily request that a Google Cloud Persistent Disk (GCE PD) be mounted into a pod at a specific path:
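A sketch of such a pod, using the in-tree GCE PD volume type (disk name, image, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.25    # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data   # path inside the container
  volumes:
  - name: data
    gcePersistentDisk:
      pdName: my-disk    # illustrative disk name; the PD must already exist
      fsType: ext4
```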
When this pod is created, Kubernetes automatically attaches the specified GCE PD to the node the pod is scheduled to and mounts it into the specified container. The container can then write to the path where the GCE PD is mounted, persisting data beyond the lifetime of the container or pod.
The problem with this approach is that the pod definition (the pod YAML) directly references a Google Cloud Persistent Disk. If this pod were deployed on a non-Google-Cloud Kubernetes cluster, it would fail to start (because GCE PDs are not available there).
This is where another Kubernetes principle comes in: workload definitions should be portable across clusters. Users should be able to use the same workload definition files (e.g., the same pod YAML) to deploy a workload on different clusters.
Ideally, the pod specified above should run even on clusters without GCE PDs. To achieve this, Kubernetes introduced the PersistentVolumeClaim (PVC) and PersistentVolume (PV) API objects. These objects decouple storage implementation from storage usage.
A PersistentVolumeClaim object lets a user request storage in an implementation-independent way. For example, instead of requesting a specific GCE PD, the user creates a PVC object asking for 100 GB of ReadWrite storage:
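A sketch of such a claim (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim       # illustrative name
spec:
  accessModes:
  - ReadWriteOnce      # read/write storage, mountable by a single node
  resources:
    requests:
      storage: 100Gi   # the 100 GB request from the text
```

The pod then references the claim by name (via a `persistentVolumeClaim` volume) instead of naming a specific disk type, so the same pod YAML works on any cluster that can satisfy the claim.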
The Kubernetes system matches this request against a volume from the pool of available PersistentVolume objects, or automatically provisions a new volume to satisfy it. Either way, the objects used to deploy the workload are portable across clusters.
Why Workload Portability?
This workload portability principle highlights a core advantage of Kubernetes: just as an operating system frees application developers from worrying about the details of the underlying hardware, Kubernetes frees distributed systems developers from the details of the underlying cluster. With Kubernetes, developers of distributed applications are not locked into a specific cluster environment. Applications deployed against Kubernetes can easily be deployed to a variety of on-premises and cloud clusters without environment-specific changes to the application or deployment scripts (other than the Kubernetes endpoint).
