
Privatized Business Delivery Practice Based on Sealer from Government Procurement Cloud

This article discusses the Sealer open-source project, its implementation, and its future outlook.

By Wang Xun from Government Procurement Cloud

The Internet has developed rapidly in recent years, and new technologies have emerged quickly to keep pace with the rapid growth of businesses. Among the many promising technologies in the industry, cloud-native technologies centered on containers are growing the fastest, and Kubernetes, the de facto standard for container orchestration, is undoubtedly the most notable among them.

Kubernetes solves the problems of large-scale application deployment, resource management, and scheduling, but it is not convenient for business delivery, and deploying Kubernetes itself is relatively complex. Among the emerging tools in the Kubernetes ecosystem, there is still a lack of ones that can package business applications, middleware, and the cluster itself for integrated delivery.

The Sealer open-source project was initiated by the Alibaba Cloud Intelligent Cloud-Native Application Platform Team and jointly built by Government Procurement Cloud and Harmony Cloud. The project remedies Kubernetes' deficiency in integrated delivery: Sealer handles the overall delivery of clusters and distributed applications with a very elegant design. As a representative of the government procurement sector, Government Procurement Cloud has used Sealer to complete the overall privatized delivery of large-scale distributed applications. This delivery practice proves that Sealer has flexible and powerful capabilities for integrated delivery.

Background

The customers of Government Procurement Cloud's privatized delivery are governments and enterprises. We need to deliver a large-scale business with 300-plus business components and 20-plus middleware services. The infrastructure of delivery targets is heterogeneous and uncontrollable, and network restrictions are strict. In some sensitive scenarios, where the environment is completely isolated from the external network, the biggest pain points of business delivery are handling deployment dependencies and ensuring delivery consistency. Unified delivery of the business on Kubernetes achieves consistency of the running environment, but problems remain to be solved, such as the unified handling of all images and packages the deployment depends on, and the consistency of the delivery system itself.

[Figure 1: The six-step localization delivery process of Government Procurement Cloud]

As shown in the preceding figure, the process of localization delivery of Government Procurement Cloud is divided into six steps:

Confirm user requirements → send resource requirements to users → obtain the resource list provided by users → generate preparation configurations based on the resource list → prepare deployment scripts and dependencies → complete delivery

Both the preparation and delivery stages require substantial manpower and time.

Problems of Privatized Delivery

In the cloud-native era, the emergence of Docker solved the environment-consistency and packaging problems of a single application, so business delivery no longer spends large amounts of time on environment dependencies the way traditional delivery did. The advent of container orchestration systems such as Kubernetes then solved the unified scheduling of underlying resources and the unified orchestration of application runtimes. However, delivering a complex business as a whole remains a huge undertaking. For example, Government Procurement Cloud needs to deploy and configure various resource objects, such as Helm charts, RBAC, Istio gateways, CNI plugins, and middleware, and deliver more than 300 business components. Each privatized delivery incurs significant labor and time costs.

Government Procurement Cloud is in a period of rapid business development, so the demand for privatized deployment projects keeps increasing, and it is increasingly difficult to meet actual needs with such a high-cost delivery method. Reducing delivery costs while ensuring delivery consistency is the most urgent problem for the O&M team to solve.

The Discovery of Sealer

In the early days, Government Procurement Cloud used Ansible to deliver the business. The Ansible solution achieves automation and reduces delivery costs to some extent. However, it has the following problems:

  1. Ansible only solves problems during deployment; the dependencies required for deployment must be prepared separately, and preparing them and verifying their availability incur extra costs. In addition, the localization scenarios of Government Procurement Cloud strictly restrict external network access, so fetching dependencies directly from the Internet is not feasible.
  2. Coping with differentiated requirements with Ansible is exhausting. Users have different requirements and business dependencies in the privatized delivery scenarios of Government Procurement Cloud, and re-editing and debugging the Ansible playbooks takes a lot of time during each delivery.
  3. Ansible's declarative language is weak when complex control logic is involved.
  4. The Ansible runtime environment must be prepared before deployment and delivery, so zero-dependency delivery cannot be achieved.

Ansible is better suited to glue and O&M work with simple logic. As localization projects kept being added, the disadvantages of Ansible-based delivery began to show: each localization project required a large time investment. The Government Procurement Cloud O&M team therefore began to explore directions for optimizing the delivery system. We analyzed many technical solutions. Current Kubernetes delivery tools focus on delivering the cluster itself rather than the business layer. We could build an encapsulation on top of cluster deployment tools, but that is not fundamentally different from using Ansible to deploy upper-layer services after the cluster is up.

Fortunately, we discovered the Sealer project. Sealer is a packaging and delivery solution for distributed applications. It solves the delivery problem of complex applications by packaging distributed applications together with their dependencies. Its design concept is very elegant: it manages the packaging and delivery of an entire cluster using the same model as container images.

[Figure 2: Analogy between Docker single-application delivery and Sealer cluster delivery]

When using Docker, we use a Dockerfile to define the runtime environment and packaging of a single application. Sealer's technical principle can be explained by analogy with Docker: the entire cluster is regarded as a machine, Kubernetes as the operating system, and the applications running on this operating system are defined in a Kubefile and packaged into a cluster image. Then, sealer run delivers the entire cluster along with its applications, just as docker run delivers a single application.
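As a minimal sketch of this analogy (the base image tag, chart name, and image tag below are illustrative, not taken from the actual Government Procurement Cloud delivery):

```dockerfile
# Kubefile: defines the whole cluster plus the applications on it,
# in the same spirit as a Dockerfile defines a single application.
FROM kubernetes:v1.19.8                      # base cluster image (illustrative tag)
COPY mysql-chart ./charts                    # ship a Helm chart inside the cluster image
CMD helm install mysql ./charts/mysql-chart  # executed after the cluster is up
```

Building and running then mirror Docker: `sealer build -t my-app:v1 .` packages the cluster and applications into one cluster image, and `sealer run my-app:v1 --masters <ip> --nodes <ip>` brings both up on the target hosts (exact flags may vary by Sealer version).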

[Figure 3]

We invited community partners to communicate with us. Since Sealer was a new project, we encountered problems and pitfalls, and found many issues where it did not yet meet our needs. However, we did not give up, because we had great expectations for and confidence in Sealer's design model. We chose to grow together with the community, and the final successful implementation proved that our choice was correct.

Community Collaboration

When we decided to cooperate with the community, we conducted a comprehensive evaluation of Sealer. Considering our requirements, the main problems were:

  1. The cost of building the image cache was too high. At first, Sealer only provided cloud build, which packages a Sealer cluster image by first pulling up a cluster on cloud resources. This method costs too much. Therefore, we proposed the lite build method, which analyzes and caches images directly by parsing Helm charts, resource-definition YAML files, and image lists. Lite build is the lowest-cost build method: instead of pulling up a cluster, you only need a host that can run Sealer to complete the build.
  2. After the business was delivered, there was no check mechanism, so the status of each component in the Kubernetes cluster had to be checked manually. Therefore, we contributed a feature for checking the status of clusters and components.
  3. Some configurations of early Sealer were fixed in rootfs. For example, the registry was fixed on the first master node. Because we need to customize the registry configuration in actual scenarios, we contributed support for doing so.
  4. After deploying a cluster with Sealer, we still needed to scale it by adding nodes, so we contributed the sealer join feature.
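The registry customization in point 3 can be sketched as a Clusterfile fragment. The field names below are illustrative and differ across Sealer versions, so consult the documentation for your release rather than copying this verbatim:

```yaml
# Clusterfile fragment: pin the built-in registry to a chosen node
# instead of the default first master (illustrative field names).
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: my-app:v1
  registry:
    domain: sea.hub          # registry domain used inside the cluster
    port: 5000
    deployHost: 192.168.0.10 # node that hosts the registry
```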

In addition, several practical and powerful Sealer features from our implementation are worth mentioning:

  1. Cluster images generated by Sealer can be pushed directly to a private Docker image registry such as Harbor. Existing cluster images can then be extended and rebuilt, just like Docker images.
  2. The Sealer community has optimized the registry and Docker to support multi-source, multi-domain proxy caching, which is a very useful feature. Normally, when handling image dependencies, we have to rewrite an image's address to cache it: to cache a public image in a private registry, the image address referenced by the corresponding resource object must also be changed to the private registry's address. Sealer's built-in registry, however, is optimized to match the cache without modifying image addresses. In addition, when used as a proxy, the built-in registry can front multiple private registries, which is very practical in scenarios with multiple private repositories.
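For context, the stock Docker registry (distribution) supports only a single upstream when run as a pull-through cache, configured as below; the multi-source behavior described above is an extension in the Sealer community's modified registry, so the multi-domain mapping sketched in the comments is illustrative, not stock configuration:

```yaml
# Stock registry config: one upstream mirror only (real distribution schema).
proxy:
  remoteurl: https://registry-1.docker.io

# The Sealer-modified registry extends this to map multiple domains to
# upstreams; the schema below is a hypothetical illustration:
# proxy:
#   remotes:
#     docker.io: https://registry-1.docker.io
#     harbor.internal: https://harbor.corp.example.com
```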

Implementation Practice

We redefined the delivery process around Sealer. The delivery of business components, containerized middleware, image caches, and other components is completed with Sealer through Kubefiles, and the Sealer lite build mode automates the parsing and built-in caching of dependent images.
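The redefined flow can be sketched as a short command sequence. The `--mode lite` flag, image names, and IP addresses here are illustrative, and the save/load subcommands mirror their Docker counterparts; check the flags supported by your Sealer version:

```shell
# Build: parse the Kubefile, resolve Helm charts / YAML / image lists,
# and cache all dependent images inside the cluster image (lite build).
sealer build --mode lite -t gpc/business-suite:v1 -f Kubefile .

# Export the cluster image as a tarball for the air-gapped site.
sealer save -o business-suite.tar gpc/business-suite:v1

# On the customer side: load and run with zero external dependencies.
sealer load -i business-suite.tar
sealer run gpc/business-suite:v1 --masters 10.0.0.1 --nodes 10.0.0.2,10.0.0.3
```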

Sealer encapsulates the complex process logic and dependency handling of a great deal of application delivery, which simplifies implementation, and the continuous simplification of the implementation logic makes large-scale delivery possible. Using the new delivery system in real scenarios, the delivery period was shortened from 15 person-days to 2 person-days. In addition, we delivered a cluster containing a 20 GB business image cache, more than 2,000 GB of memory, and 800-plus CPU cores. Next, we plan to keep simplifying the delivery process so that a novice can complete the delivery of an entire project with simple training.

[Figure 4: The new Sealer-based delivery process]

Future Outlook

The successful implementation of Sealer is the result of both the delivery system and the power of open source. Moreover, we have explored a new model of cooperation with the community. In the future, Government Procurement Cloud will continue to support and participate in the construction of the Sealer community and contribute more according to its actual business scenarios.

As a new open-source project, Sealer is not yet perfect: it has problems to solve and features to optimize, and more requirements and business scenarios remain to be realized. We hope Sealer can serve more user scenarios through our continued contributions, and we also hope more partners will participate in community construction to make Sealer even more promising.
