
The Development Trends of Six Major Container Technologies in 2021

Technical Experts from the Alibaba Cloud Container Service for Kubernetes Team present six key interpretations of container technology trends for 2021.


By Alibaba Cloud Container Service Team

Enterprise survival and growth were fraught with uncertainty throughout 2020. Faced with ever-changing challenges, digital innovation capabilities are more important to enterprises than ever before.

During the pandemic, more enterprises strengthened their resolve and quickened their pace in cloud migration and digital transformation, actively exploring cloud-native architecture transformation. During the 2020 Double 11 Global Shopping Festival, Alibaba achieved a major breakthrough by running its core systems on cloud-native technologies. With a cloud-native architecture, enterprises can maximize the value of the cloud and concentrate on business development, while developers can build on cloud-native technologies and products to improve development efficiency and focus on business logic. Cloud-native technologies, represented by containers, have become the easiest way to realize the value of the cloud.

As the cornerstone of cloud-native development, the new trends and challenges of container technology have attracted a lot of attention. At the beginning of 2021, Technical Experts from the Alibaba Cloud Container Service for Kubernetes Team present six key interpretations of container technology trends for this year.

Trend 1: Container Technologies Represented by Kubernetes Become a New Interface for Cloud Computing

By Tang Zhimin, Senior Technical Expert of Alibaba Cloud Container Service for Kubernetes


According to the newly released 2020 CNCF China Cloud-Native Survey, 72% of respondents use Kubernetes in production. Over the past year, the booming cloud-native ecosystem of Alibaba Cloud has also proved that cloud-native technologies are becoming the easiest way to realize the value of the cloud. Beyond the early stateless applications, AI, big data, and storage applications are now adopting container technologies as well. Container technologies such as Kubernetes have become a new interface for cloud computing and will continue to deliver more value.

From Cloud Migration to Distributed Cloud Management: Enterprises Accelerate through Cloud-Native

  • For enterprises, containers continue to encapsulate infrastructure downward, shielding the differences between underlying architectures.
  • The new Kubernetes interface will further align basic capabilities across cloud and edge and promote richer, more standardized edge products, accelerating the implementation of container applications in edge, IoT, and 5G scenarios.

High-Density and High-Frequency Challenges of Container Applications and Continuous Refactoring of the Cloud Computing Architecture

  • Driven by high-density and high-frequency container application scenarios, technologies such as container-optimized operating systems, bare metal collaboration, and hardware acceleration continue to evolve. They further advance the full-stack optimization and hardware-software integration of cloud computing architectures, bringing extreme agility and elasticity to cloud computing users.
  • Above the new container interface, Serverless, next-generation middleware, and next-generation application PaaS are still on the rise.

Containers Are Applied at Large Scale, Bringing New Challenges in Automated O&M, Enterprise IT Governance, and End-to-End Security

  • As more workloads are containerized, including AI, big data, and databases, a key requirement for large-scale container adoption is unifying containers and infrastructure resources into a single IT governance capability covering personnel, budgets, assets, and permissions.
  • With more customized controllers and diversified cloud-native product formats, there is a strong demand for ensuring the stability of large-scale Kubernetes clusters, which urgently requires data-based and intelligent Kubernetes automation cluster O&M and fine-grained SLO capabilities.
  • DevSecOps practices continue to build an end-to-end container security network, such as zero-trust security, container identity authentication, lifecycle management of cloud-native products, secure containers, and confidential computing.

Trend 2: High Automation of Cloud-Native Applications

By Wang Siyu, Technical Expert of Alibaba Cloud, Author of OpenKruise


Thanks to Kubernetes' final-state-oriented design, cloud-native architectures are naturally highly automated. When building cloud-native applications, the advantages of automation can be fully exploited, including replica count maintenance, version consistency, error retry, and asynchronous event-driven processing. Compared with the previous process-oriented O&M mode, this represents a new concept and technology. Building a more automated, cloud-native-oriented infrastructure for applications is one of the key areas to explore in 2021:

  • More Automated Application Deployment and O&M: Cloud-native business types are diverse. Whether in traditional IT, the Internet, or niche fields such as web services, search, games, AI, and edge computing, each type has its own special scenarios. The common core deployment and O&M requirements must be abstracted from these scenarios and converted into more automated capabilities to deepen cloud-native adoption.
  • More Automated Risk Prevention and Control: Final-state-oriented automation is a double-edged sword: it provides declarative deployment capability but can also amplify misoperations. For example, when an operation fails, mechanisms like replica count maintenance, version consistency, and cascading deletion are likely to magnify the damage. Therefore, the defects and side effects of other automation capabilities must be restrained by prevention and control automation, such as protection, interception, traffic limiting, and circuit breaking, so that the risks of rapid cloud-native expansion are mitigated.
  • More Automated Operator Runtime: Kubernetes has become the de facto standard scheduling and management engine for container clusters, and its powerful, flexible extensibility plays an important role. An Operator is both a special kind of application and an automated manager for many stateful applications. In the past, however, Operators in Kubernetes mostly grew in raw numbers, while the surrounding runtime mechanisms made little progress. In 2021, the Operator runtime will be fully enhanced by automation in terms of horizontal scaling, phased upgrades, tenant isolation, security protection, and observability.
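One concrete building block for such prevention and control already exists in upstream Kubernetes: a PodDisruptionBudget caps how far automation may voluntarily disrupt a workload. A minimal sketch (the name and label are hypothetical):

```yaml
# Protect a critical workload from excessive voluntary disruption
# (evictions, node drains) triggered by automated operations.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb            # hypothetical name
spec:
  minAvailable: 2          # never voluntarily evict below 2 ready replicas
  selector:
    matchLabels:
      app: web             # hypothetical label selecting the workload's pods
```

With this in place, a mistaken mass-drain is intercepted by the API server rather than propagated by the final-state controllers.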

Trend 3: Application-Centered Highly Scalable Upper-Layer Platforms

By Sun Jianbo, Technical Expert of Alibaba Cloud and Director of the open-source project, Open Application Model (OAM)


As container technology develops, more enterprises focus on how to improve the business performance of container technology. The cloud-native ecosystem that uses Kubernetes as a delivery interface is growing. More teams add more expansion capabilities based on Kubernetes to build a highly scalable cloud-native platform centered on "applications."

  • An easy-to-use, scalable upper-layer platform based on Kubernetes and a standard application model will replace traditional PaaS and become mainstream. Despite the increasing variety of software in the cloud-native ecosystem, it is still not easy to learn and use, so ease of use will become the primary breakthrough point. Beyond guaranteed scalability, an important feature of this type of application management platform is that open-source software using Kubernetes as its access point can be integrated with little or no modification.
  • A standardized application construction method with separation of concerns is becoming more popular. Building an application delivery platform centered on Kubernetes has gradually become a consensus, and no PaaS platform wants to shield Kubernetes. However, this does not mean exposing all of Kubernetes' details to users; PaaS builders are eager to give users an optimal experience. A standardized application model with separated concerns solves this problem: platform builders focus on Kubernetes interfaces, including Custom Resource Definitions (CRDs) and Operators, while application developers (the users) focus on a standardized, abstract application model.
  • Application middleware capabilities are further integrated, gradually decoupling application logic from middleware logic. As the cloud-native ecosystem develops, the middleware field is shifting from the centralized Enterprise Service Bus (ESB) to Service Mesh in Sidecar mode. Instead of providing capabilities through a thick client, application middleware becomes a standard access layer supplied by the application management platform via Sidecars at runtime. Sidecars will be applied in more middleware scenarios beyond traffic management, routing policies, and access control, keeping the application at the center and letting businesses stay focused.
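The separation of concerns described above is visible in the shape of an OAM application: the platform team defines component types and traits, while the developer fills in only an abstract model. A hedged sketch in the style of the OAM `Application` resource (the `webservice` component type and `scaler` trait are assumed to be platform-provided definitions, as in common KubeVela setups; all names are hypothetical):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app             # hypothetical application name
spec:
  components:
    - name: frontend
      type: webservice          # component type defined by the platform builder
      properties:
        image: nginx:1.21       # developer-facing abstraction: image and port only
        port: 80
      traits:
        - type: scaler          # operational capability attached declaratively
          properties:
            replicas: 3
```

The developer never touches the CRDs and Operators that realize `webservice` and `scaler`; those remain the platform builder's concern.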

Trend 4: Rapid Cloud-Edge Integration

By Huang Yuqi, Senior Technical Expert of Alibaba Cloud and Director of OpenYurt (the open-source cloud-native project for edge computing)


With the development of 5G, IoT, live streaming, and CDN, more computing power and business workloads are moving closer to data sources or end users to achieve better response times and lower costs. This computing mode, distinct from the traditional centralized model, is edge computing. In the future, it will demonstrate three trends:

  • With the integration of AI, IoT, and edge computing, more types of businesses will be involved with larger scales and higher complexities.
  • As an extension of cloud computing, edge computing will be widely used in hybrid cloud scenarios, which requires future infrastructure to enable decentralization, autonomous edge facilities, and edge cloud hosting.
  • The development of infrastructures, such as 5G and IoT, will induce the growth of edge computing.

The scale and complexity of edge computing are increasing daily, and traditional O&M methods and capabilities fall far short of what is needed. As a result, cloud-edge integrated O&M and collaboration have become an architectural consensus. Supported by cloud-native technologies, cloud-edge integration is accelerating rapidly:

  • The "cloud" layer retains the original cloud-native management and rich product capabilities and sinks them to the edge through the cloud-edge management channel. This transforms massive amounts of edge nodes and edge businesses into the workloads of the cloud-native system.
  • The "edge" side can interact better with end devices through traffic management and service governance while gaining an O&M experience consistent with the cloud. It also obtains better isolation, security, and efficiency, completing the integration of business, O&M, and ecosystem.
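In OpenYurt, for example, edge autonomy is enabled by annotating a node so that its workloads keep running even when the cloud-edge connection drops. The annotation key below is taken from the OpenYurt documentation and should be verified against the version in use; the node name is hypothetical:

```yaml
# Mark an edge node as autonomous: its pods are not evicted and are
# restarted locally while the node is disconnected from the cloud.
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1                          # hypothetical node name
  annotations:
    node.beta.openyurt.io/autonomy: "true"
```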

Cloud-native edge computing is the new border of cloud-native and the future of edge computing.

Trend 5: Data Transformation Driven by Cloud-Native Is the New Theme

By Zhang Kai, Senior Technical Expert of Alibaba Cloud, responsible for Alibaba Cloud Container Service for Kubernetes and the Cloud-Native AI solution R&D

By Che Yang, Senior Technical Expert of Alibaba Cloud and Co-Sponsor of the open-source project, Fluid


Data is the core asset of an enterprise. Over the next few years, cloud-native technologies will drive more data-driven applications to support enterprises' digital and intelligent IT transformation. Smoothly migrating traditional big data and HPC applications to Kubernetes platforms remains a problem for the cloud-native community, in contrast to cloud-native AI, which grew up on Docker and Kubernetes. New trends are emerging, including learning from traditional task schedulers, fine-grained scheduling of containerized resources, new scenarios for elastic data tasks, and a unified cloud-native base for AI and big data.

  • Learning from Traditional Task Schedulers: Kubernetes focuses on resource scheduling, but compared with traditional offline schedulers like YARN, its scheduling capabilities for big data and HPC still need improvement. Recently, under the flexible Scheduler Plugin Framework of Kubernetes, capacity scheduling and batch scheduling adapted to big data and HPC scenarios have gradually been implemented.
  • Fine-Grained Scheduling of Containerized Resources: Kubernetes clusters use containerized, plug-in-based scheduling strategies to support GPU resource sharing, scheduling, and isolation. NVIDIA Ampere's Multi-Instance GPU (MIG) is also gaining native scheduling support in Kubernetes. These are unique capabilities of Kubernetes, and resource sharing is not limited to GPUs; it is equally essential for RDMA, NPUs, and storage devices.
  • New Scenarios of Elastic Data Tasks: Once the elasticity of big data and AI applications catches on, it's also important to make data elastic (like fluids) to flexibly and efficiently move, replicate, evict, transform, and manage between storage sources, such as HDFS, OSS, Ceph, and Kubernetes upper-layer cloud-native applications. By doing so, big data and AI applications in diverse cloud service scenarios can be implemented.
  • A Unified Cloud-Native Base for AI and Big Data: Based on atomic capabilities such as job scheduling, resource utilization optimization, and data orchestration, more AI, machine learning, and big data analysis platforms are being built in container clusters. AI and big data share many similarities in their dependence on data, their demands on computing, network, and storage resources, their workload characteristics and operation strategies, their importance to online services, and their impact on IT costs. Therefore, building a unified cloud-native base to support both AI and big data operations is a question CTOs and CIOs will have to think through.
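At the coarsest granularity, Kubernetes already schedules whole GPUs through its extended-resource and device-plugin mechanism; GPU sharing and MIG scheduling build on the same resource model. A minimal sketch (the pod name and image are hypothetical):

```yaml
# Request one whole GPU via the NVIDIA device plugin's extended resource.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training                 # hypothetical name
spec:
  containers:
    - name: trainer
      image: pytorch/pytorch:latest  # hypothetical training image
      resources:
        limits:
          nvidia.com/gpu: 1          # scheduler places the pod on a node with a free GPU
```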

Trend 6: Container Security Becomes a Top Priority

By Yang Yubing, Senior Technical Expert of Alibaba Cloud Container Service for Kubernetes


Containers have become the standard for application delivery and, in the cloud-native era, the unit in which computing resources and supporting facilities are delivered. Container runtimes based on Linux containers, such as runC, offer excellent features: they are lightweight, efficient, and self-contained, and can be packaged once and run anywhere. They are very popular among developers and users.

Although increasingly popular container technology and applications have become a new interface for cloud computing, container technology still faces new challenges in the cloud computing environment. Multiple containers share the same kernel, resulting in inherent disadvantages in isolation and security. This limits the application scenarios and development of containers, which can only be used in single-tenant scenarios, such as an enterprise's internal environment. When cloud-native products are delivered to containers of different tenants, however, strong isolation is a must, even on the same host. In the cloud-native era, the container runtime needs to ensure good security isolation on top of the features above, making container security a top priority. Containers implemented with lightweight virtualization, such as Kata Containers, are gradually becoming the standard container runtime for multi-tenant scenarios.
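In Kubernetes, selecting a sandboxed runtime like Kata Containers per workload is typically done through the RuntimeClass API. A sketch, assuming the node's containerd or CRI-O is configured with a matching `kata` handler (the pod name is hypothetical):

```yaml
# Register a Kata-based runtime once per cluster...
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata                  # must match the handler configured in the CRI runtime
---
# ...then opt untrusted workloads into the VM sandbox per pod.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload     # hypothetical name
spec:
  runtimeClassName: kata       # run this pod inside a lightweight VM
  containers:
    - name: app
      image: nginx:1.21
```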

In addition to the runtime, security isolation at the network, disk, image, and Kubernetes API levels must also be resolved. Because multiple tenants and untrusted code are involved, all resources available to users must be isolated, including network access targets, storage resources, and image contents that can be downloaded or accessed locally. Security protection requires multi-level defense in depth to prevent vulnerabilities in any one isolation layer from being exploited. Network protection needs fine-grained isolation via network policies in addition to VPC isolation. Computing isolation needs namespaces, system call filtering, and virtualization-based isolation. Storage isolation needs DiskQuota isolation on the host in addition to virtualization-based isolation. Image isolation requires local image reference isolation on top of network isolation. Together, these measures implement strong isolation with multi-layer defense in depth.
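As one example of the fine-grained network isolation mentioned above, a default-deny NetworkPolicy ensures pods in a tenant namespace accept no ingress traffic unless a later policy explicitly allows it (the namespace name is hypothetical):

```yaml
# Default-deny ingress for a tenant namespace: all pods are selected,
# and no ingress rules are defined, so all inbound traffic is blocked
# until further policies open specific paths.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a          # hypothetical tenant namespace
spec:
  podSelector: {}              # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
```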

Container security technology also faces other new challenges. Once virtualization is introduced, container technology is no longer implemented in a purely lightweight manner, so making the virtualization layer lightweight and efficient becomes a problem that must be solved. The industry offers lightweight virtualization technologies, such as gVisor and crosvm from Google and Firecracker from Amazon. Alibaba also provides its own lightweight virtualization container technology, Daishu, to solve this problem.
