
Fundamentals of Declarative Application Management in Kubernetes

Zhang Lei, Staff Engineer at Alibaba Cloud and a maintainer of Kubernetes, shares his thoughts on what makes Kubernetes similar to a database.

By Zhang Lei, Staff Engineer at Alibaba Cloud, CNCF Ambassador, Co-chair of CNCF SIG App Delivery, and maintainer of Kubernetes.

Recently, an argument claiming that "Kubernetes is the new database" has attracted a lot of attention in the Kubernetes community. To be precise, the claim is that Kubernetes itself works like a database, not that Kubernetes should be used as one.


At first glance, it might seem strange to compare Kubernetes to a database. After all, the working principles of Kubernetes, such as the controller pattern and declarative APIs, seem to have no direct relationship with databases. Yet this argument points to something essential that can be traced back to the most basic theory behind Kubernetes.

Fundamentals of Declarative Application Management in Kubernetes

The concept of "declarative application management" comes up in almost any discussion about Kubernetes. It is the design that distinguishes Kubernetes from all other infrastructure projects, and a capability unique to it. So what exactly is declarative application management in Kubernetes?

Declarative Application Management Is More than Just "Declarative APIs"

According to the core working principle of Kubernetes, most functions in Kubernetes, such as kubelet running containers, kube-proxy maintaining iptables rules, kube-scheduler placing pods, and Deployment managing ReplicaSets, follow the controller pattern we have emphasized in the past. That is, the user declares a desired state in a YAML file, and Kubernetes components drive the actual state of the entire cluster toward, and ultimately into agreement with, that desired state. The process by which the actual state gradually approaches the desired state is called reconciliation. This principle also applies to Kubernetes Operators and custom controllers.

The way controllers are driven by declarative descriptions to reconcile the two states is the most intuitive embodiment of declarative application management. Note that this process has two parts:

A declaratively described desired state, where the description must be the final state the user wants. If the description expresses an intermediate state, or the desired state is adjusted dynamically, the declarative semantics cannot be executed accurately;

Reconciliation-based state convergence, where the reconciliation process ensures the actual state of the system stays consistent with the desired state. Specifically, reconciliation keeps running a "check -> diff -> execute" loop, so the system can always act on differences between its actual state and the desired state. In other words, the declarative description alone is not enough: the system may reach the desired state when you first submit the description, but nothing guarantees it is still in that state an hour later. Many people confuse declarative application management with declarative APIs because they fail to appreciate the necessity of reconciliation.
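The "check -> diff -> execute" loop above can be sketched in a few lines of illustrative Python. This is a toy model over plain dictionaries, not real controller code (which would use informers and work queues), but it shows the shape of the idea:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One pass of the "check -> diff -> execute" loop."""
    # check: observe the actual state; diff: compare it with the desired state
    diff = {key: value for key, value in desired.items() if actual.get(key) != value}
    # execute: apply only the differences, driving actual toward desired
    actual.update(diff)
    return actual


def control_loop(desired: dict, actual: dict, passes: int = 3) -> dict:
    # A real controller runs forever, woken by watch events and periodic
    # resyncs; a fixed number of passes is enough for illustration.
    for _ in range(passes):
        actual = reconcile(desired, actual)
    return actual


desired_state = {"replicas": 3, "image": "nginx:1.25"}
actual_state = {"replicas": 1}
print(control_loop(desired_state, actual_state))
# -> {'replicas': 3, 'image': 'nginx:1.25'}
```

Note that the loop never executes a fixed script of steps; it only ever acts on the *difference* between the two states, which is why it tolerates drift that happens after the first submission.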

You may be curious about what benefits this declarative application management system has brought to Kubernetes.

Infrastructure as Data Is the Essence of Declarative Application Management

The theoretical basis behind declarative application management is a concept known as Infrastructure as Data (IaD). According to this concept, infrastructure management should not be coupled to a specific programming language or configuration approach, but should instead rely on pure, formatted, machine-readable data that completely represents the system state the user expects.

Note: Infrastructure as Data is also called Configuration as Data.

Its advantage is that any operation on the infrastructure is ultimately equivalent to a create, read, update, or delete (CRUD) operation on this data. More importantly, the way you create, read, update, and delete the data has nothing to do with the infrastructure itself. Therefore, your interaction with an infrastructure is not bound to a programming language, a remote call protocol, or a software development kit (SDK). As long as you can generate data in the corresponding format, you can perform whatever operations you desire on the infrastructure.

In the case of Kubernetes, to perform an operation you only need to submit a YAML file and then create, update, or delete that file, rather than program against the RESTful APIs or SDKs of Kubernetes. The content of the YAML file is the "data" of this IaD system that Kubernetes embodies.
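As a minimal illustration (the application name here is invented), the following Deployment manifest is nothing but formatted data declaring a final state: three replicas of an nginx container. Submitting it with `kubectl apply -f` and later editing and re-applying it are exactly the "create" and "update" of CRUD on this data:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web          # a hypothetical application name
spec:
  replicas: 3        # the desired final state, not a step toward it
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Nothing in this file says *how* to reach three replicas; that is the controllers' job.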

In this sense, Kubernetes has defined all its functions as "API objects", i.e., pieces of data, since its inception. Kubernetes users can do what they like by applying CRUD operations to such data, without being bound to a specific programming language or SDK. More importantly, compared with proprietary imperative APIs or SDKs, YAML-based declarative data shields underlying implementations more easily, making it simpler to connect to and integrate existing infrastructure capabilities. This is also a secret weapon that has enabled the Kubernetes ecosystem to thrive at an amazing speed: the declarative APIs and controller pattern brought by IaD make the whole community more willing to write plug-ins and contribute capabilities to Kubernetes, and these plug-ins and capabilities are highly universal and portable. These benefits are beyond the reach of projects such as Mesos and OpenStack. It is safe to say that IaD is the core competitiveness that lets Kubernetes achieve the goal of being "The Platform for Platforms".

Now you may understand that the "data" in this IaD design is the declarative API object in Kubernetes, and that the control loops in Kubernetes ensure the system always stays consistent with the state described by this data. From this point of view, Kubernetes is essentially a reconciliation system that expresses the setpoint of the system as data and keeps the system consistent with that setpoint through controller actions.

Does the idea of "keeping the system consistent with a setpoint" sound familiar?

If you have an engineering background, you may recognize the fundamentals of Kubernetes from a course called Control Theory.


After learning the essence of Kubernetes, you may find it easier to understand how Kubernetes works.

For example, the reason we write so many YAML files in Kubernetes is that we need a way to submit data to Kubernetes, the control system. In this process, YAML is merely a carrier for the formatted data. By analogy, YAML is the grid exercise book, and the words written in its grids are the data that matters to Kubernetes, the core of the entire system.

Now you may be wondering whether there is a fixed "format" for this data so that Kubernetes can easily parse and process it. Such a format is what Kubernetes calls the API object schema. If you often write custom controllers, you have probably dealt with this schema in depth. A CustomResourceDefinition (CRD) is a special API object used to define new schemas.
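As a sketch of what defining a new schema looks like (the group, kind, and field names below are invented for illustration), a CRD adds a new "table" whose rows Kubernetes will validate against the declared schema:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backuppolicies.example.com   # must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: BackupPolicy
    plural: backuppolicies
    singular: backuppolicy
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:               # the schema used to validate submitted data
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:              # e.g. a cron expression
                type: string
```

Once this CRD is applied, `BackupPolicy` objects can be created, read, updated, and deleted like any built-in API object, and a custom controller can reconcile them.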

YAML vs. Database

IaD determines that Kubernetes works like a database rather than a traditional distributed system. This is one of the fundamental reasons for the high learning curve of Kubernetes.

From this perspective, the API objects exposed by Kubernetes are tables with predefined schemas, and we write YAML files to CRUD the data in these tables. Like SQL, YAML is a tool and a carrier that helps you operate on the data in the database. The key difference between Kubernetes and a traditional database is that Kubernetes does not merely persist the data it receives (in etcd); it uses the data to drive controllers that perform operations, gradually reconciling the actual state of the system with the final state declared in the data. This brings us back to the "control theory" from earlier.

In Kubernetes, the whole system revolves around "data". This is why writing and running YAML files is almost the entire daily routine of a Kubernetes engineer. However, after learning about IaD in this article, you can call yourself a "database engineer", a title more fitting than "YAML engineer".

"View Layer" of Kubernetes

As mentioned earlier, if you re-examine the design of Kubernetes from the perspective of a "database", you will gain better insight into the many ingenious ideas behind it. For example:

  • Data model: API objects and the custom resource definition (CRD) mechanism in Kubernetes
  • Data interception, validation, and modification mechanism: Kubernetes admission hooks
  • Data-driven execution mechanism: Kubernetes controllers and operators
  • Data change monitoring and indexing mechanism: Kubernetes informers

As the infrastructure beneath Kubernetes becomes increasingly complex and the number of third-party plug-ins and capabilities grows, the Kubernetes community began to notice the explosive growth, in both scale and complexity, of the data tables embedded in this database-like system. Therefore, the community has long been discussing how to design a "data view" for Kubernetes:

CREATE VIEW <View-layer object of Kubernetes> AS <Native Kubernetes objects>;

The benefits of such a "view layer" built on top of Kubernetes API resources are quite similar to those of "views" in the database. For example:

1) Simplify and change the data format and representation

The view layer of Kubernetes can expose simplified, abstracted application-layer API objects to developers and operators instead of the original infrastructure-layer API objects. Users are free to define these view-layer objects without being restricted by the schemas of the underlying Kubernetes objects.

2) Simplify complex data operations (simplify SQL)

The abstracted view-layer objects provide a simpler user interface while still being able to define and manage complex API resource topologies in Kubernetes. This reduces the complexity and burden of managing Kubernetes applications.

3) Protect underlying data tables

Developers and operators operate directly on the view-layer objects, so the original underlying Kubernetes objects are protected. This allows the original objects to change and be upgraded without users noticing.

4) Reuse data operations (SQL reuse)

As the view-layer objects are completely decoupled from the underlying infrastructure, an application or operational capability declared by a view-layer object can be reused across Kubernetes clusters, regardless of the specific capabilities those clusters support.

5) Support standard table operations in the view

The view-layer objects in Kubernetes must themselves be standard Kubernetes objects, so that all standard operations and primitives for API objects also apply to them. This avoids introducing extra mental burden for users of the Kubernetes API model.

Although the concept of a view layer is not implemented natively in Kubernetes, it has become common practice among large-scale users in the community. For example, Pinterest developed a PinterestService CRD in Kubernetes to describe and define Pinterest applications. This CRD is, in essence, a view-layer object. However, such a practice is still too rudimentary for most enterprises: a data view is much more than data abstraction and translation. To use a view layer extensively in real production environments, the following key issues must be solved:

  1. How can mapping relationships between view-layer objects and underlying Kubernetes objects be defined and managed? Note that this is not a simple one-to-one mapping. One view-layer object may correspond to multiple Kubernetes objects.
  2. How is "operational capability" modeled and abstracted? A real application is more than a simple Deployment or Kubernetes Operator. It is usually a combination of a program and its operational capabilities, for example, a containerized application plus its scale-out policy. How are these operational capabilities defined in the application? Is it feasible to define all of them as annotations?
  3. How are the binding relationships between operational capabilities and programs managed? How is the binding relationship converted into a real execution relationship in Kubernetes?
  4. How are standard cloud resources, such as an Alibaba Cloud RDS instance, defined through view-layer objects?
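To make issues 2 and 3 above concrete, here is a sketch in the style of the OAM v1alpha2 API, in which a program is modeled as a Component and an operational capability as a trait bound to it in an ApplicationConfiguration. The names are illustrative and the specs are abbreviated; treat this as a sketch of the model, not a complete manifest:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: web-server                   # the "program" half of the application
spec:
  workload:
    apiVersion: apps/v1
    kind: Deployment
    # ... workload spec omitted for brevity
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app
spec:
  components:
  - componentName: web-server
    traits:                          # operational capabilities bound to the program
    - trait:
        apiVersion: core.oam.dev/v1alpha2
        kind: ManualScalerTrait
        spec:
          replicaCount: 5
```

The binding between the program and its operational capability is explicit data here, rather than an annotation, which is precisely the modeling question the list above raises.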

The preceding problems are some of the reasons why Kubernetes has not implemented a native "view layer", and they are also major concerns of open-source projects such as the Open Application Model (OAM), which works at the application layer of Kubernetes. It is worth noting that the OAM specification alone is not enough to solve all these problems: standard view-layer dependency libraries are needed at the implementation level so that users can really enjoy the advantages and convenience of a "data view" in Kubernetes. At present, a quite capable view-layer dependency library in the community is the OAM Kubernetes runtime project.


On the one hand, the IaD-based, database-like design of Kubernetes underpins the prosperity and development of its community. On the other hand, IaD has also led to countless "independent" controllers and operators, and to highly complex Kubernetes clusters assembled from them. Production clusters of such complexity are far from the cloud-native application management platform that developers and operators are willing to embrace.

The great success of Kubernetes over the past five years is attributed to gradual standardization and unification of infrastructure capabilities (such as networks, storage, and containers) with the help of declarative APIs. With the popularization of application-layer technologies of Kubernetes such as OAM, a standard application-layer ecosystem is emerging. More and more teams are trying to open APIs to end-users through the more user-friendly data view layer while providing more powerful modular and horizontally connected platform capabilities to infrastructure engineers.

Meanwhile, the areas where the database-like Kubernetes still falls short will surely become future focus areas for the community. For example, the rapidly maturing Open Policy Agent (OPA) project can be seen as an evolution of the "data interception, validation, and modification mechanism". Likewise, the theory and practice of control-link performance optimization that Alibaba carried out in its ten-thousand-node clusters closely resemble database performance optimization.

We welcome you to share your thoughts about IaD with us in the comments.
