Cloud Native Architecture | Main Cloud Native Technologies
Container Technology
1: Background and Value of Container Technology
As a standardized unit of software, a container packages an application together with all of its dependencies, so that the application is no longer tied to a particular environment and can run quickly and reliably across different computing environments.
As early as 2008, Linux provided the cgroups resource management mechanism and the Linux namespace view isolation scheme, allowing applications to run in independent sandbox environments and avoid interfering with one another. However, it was not until the Docker container engine was open-sourced that the complexity of using container technology dropped dramatically and its adoption accelerated. Docker containers are based on operating-system-level virtualization: they share the operating system kernel, are lightweight, incur little resource overhead, and start in seconds, which greatly improves a system's application deployment density and flexibility. More importantly, Docker introduced an innovative application packaging specification, the Docker image, which decouples the application from its running environment so that the application runs consistently and reliably across different computing environments. Container technology thus strikes an elegant balance between the flexibility and openness that development requires and the standardization and automation that operations focuses on. Container images quickly became the industry standard for application distribution.
Subsequently, the open-source Kubernetes, with its openness, scalability, and active developer community, stood out in the container orchestration battle and became the de facto standard for distributed resource scheduling and automated operations. Kubernetes shields applications from the differences of the underlying IaaS infrastructure and, with excellent portability, helps them run consistently across environments including data centers, clouds, and the edge. Enterprises can use Kubernetes to design their own cloud architecture around their business characteristics, better supporting multi-cloud and hybrid cloud while avoiding vendor lock-in. As container technology has gradually standardized, the division of labor and collaboration in the container ecosystem has advanced further: on top of Kubernetes, the community has begun to build higher-level abstractions such as the service mesh Istio, the machine learning platform Kubeflow, and the serverless application framework Knative.
In the past few years, as container technology has been widely adopted, users have cared most about three core values:
Agility
Container technology improves the agility of an enterprise's IT architecture and speeds up business iteration, providing a solid technical guarantee for innovation and exploration. For example, during the epidemic, online demand in industries such as education, video, and public health grew explosively, and many enterprises seized these sudden business growth opportunities in time with the help of container technology. According to statistics, container technology can improve delivery efficiency by 3 to 10 times, which means enterprises can iterate products faster and run business experiments at lower cost.
Elasticity
In the Internet era, enterprise IT systems often face both expected and unexpected bursts of traffic, such as promotional campaigns and emergencies. Container technology lets enterprises take full advantage of the elasticity of cloud computing and reduce operation and maintenance costs; in general, it can cut computing costs by 50% through higher deployment density and elasticity. Taking the online education industry as an example, in the face of exponential traffic growth during the epidemic, seewo, a provider of educational informatization tools, used Alibaba Cloud Container Service for Kubernetes (ACK) and Elastic Container Instance (ECI) to meet its urgent need for rapid capacity expansion, providing a good online teaching environment for 100,000 teachers and helping millions of students learn online.
Portability
Containers have become the standard technology for application distribution and delivery, decoupling applications from the underlying operating environment; Kubernetes has become the standard for resource scheduling and orchestration, shielding applications from underlying architecture differences and helping them run smoothly on different infrastructures. The Cloud Native Computing Foundation (CNCF) launched the Kubernetes conformance certification program, which further ensures compatibility across different Kubernetes implementations and makes enterprises more willing to adopt container technology to build their application infrastructure in the cloud era.
2: Container Orchestration
Kubernetes has become the de facto standard for container orchestration and is widely used to automatically deploy, scale and manage containerized applications. Kubernetes provides core capabilities for distributed application management:
Resource scheduling: based on the resources an application requests, such as CPU, memory, GPU, or other devices, Kubernetes selects an appropriate node in the cluster to run the application (see the sketch after this list);
Application deployment and management: Kubernetes supports automated application rollout and rollback and manages application-related configuration; it can also orchestrate storage volumes so that their life cycle is tied to that of the containerized application;
Automatic repair: Kubernetes monitors all hosts in the cluster; when a host or its OS fails, node health checks automatically migrate the affected applications. Kubernetes also supports application self-healing, which greatly simplifies operations and maintenance;
Service discovery and load balancing: application services are exposed through Service resources and, combined with DNS and various load-balancing mechanisms, containerized applications can communicate with one another;
Elastic scaling: Kubernetes can monitor workload metrics; if CPU utilization is too high or response time is too long, it automatically scales the workload out.
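As a concrete illustration of resource scheduling and declarative deployment, here is a minimal sketch using the Go client library client-go: it creates a Deployment whose container declares CPU and memory requests, which the scheduler uses to pick a suitable node. The namespace, image name, replica count, and resource values are illustrative assumptions rather than details from this article.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load cluster credentials from the local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Declare the desired state: 2 replicas of an nginx container with explicit
	// CPU/memory requests that the scheduler uses when selecting a node.
	labels := map[string]string{"app": "web"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx:1.25",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("250m"),
								corev1.ResourceMemory: resource.MustParse("256Mi"),
							},
						},
					}},
				},
			},
		},
	}

	// Submit the desired state; Kubernetes schedules, deploys, and repairs it.
	result, err := clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q\n", result.Name)
}
```

Applying an equivalent YAML manifest with kubectl achieves the same result; in both cases the operator only declares the desired state, and Kubernetes schedules the pods, rolls out the application, and repairs it when nodes fail.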
Kubernetes has several key design concepts in container orchestration:
Declarative API: developers can focus on the application itself rather than on system implementation details. Resource types such as Deployment (stateless applications), StatefulSet (stateful applications), and Job (batch tasks) provide abstractions for different kinds of workloads. Internally, Kubernetes uses a "level-triggered" approach, which yields a more robust distributed system than an "edge-triggered" one (see the sketch after this list).
Extensible architecture: all Kubernetes components are implemented and interact through a consistent, open API; third-party developers can provide domain-specific extensions via mechanisms such as CRDs (Custom Resource Definitions) and Operators, which greatly extends the capabilities of Kubernetes.
Portability: through a series of abstractions such as the load-balancing Service, CNI (Container Network Interface), and CSI (Container Storage Interface), Kubernetes shields business applications from the implementation differences of the underlying infrastructure and achieves the design goal of flexible container migration.
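To make the level-triggered design concrete, the following simplified Go sketch shows the reconcile-loop pattern that Kubernetes controllers and Operators follow. The types and functions here (DesiredState, ObservedState, fetchDesired, observeCurrent, reconcile) are hypothetical placeholders rather than real Kubernetes APIs; an actual controller would typically be built around a CRD using a framework such as controller-runtime.

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical desired/observed state for a toy "WebApp" resource.
// In a real controller these would come from a CRD's spec and status.
type DesiredState struct{ Replicas int }
type ObservedState struct{ Replicas int }

// fetchDesired would read the declared spec from the API server.
func fetchDesired() DesiredState { return DesiredState{Replicas: 3} }

// observeCurrent would list the replicas that actually exist right now.
func observeCurrent() ObservedState { return ObservedState{Replicas: 1} }

// reconcile compares the full current state against the full desired state
// and corrects the difference. Because it acts on state levels rather than
// on individual change events, a missed or duplicated event cannot leave the
// system permanently wrong: the next pass converges it again.
func reconcile(desired DesiredState, observed ObservedState) {
	switch {
	case observed.Replicas < desired.Replicas:
		fmt.Printf("scaling up: %d -> %d\n", observed.Replicas, desired.Replicas)
		// create the missing replicas here
	case observed.Replicas > desired.Replicas:
		fmt.Printf("scaling down: %d -> %d\n", observed.Replicas, desired.Replicas)
		// delete the surplus replicas here
	default:
		fmt.Println("in sync, nothing to do")
	}
}

func main() {
	// Level-triggered control loop: re-evaluate the whole state periodically
	// (real controllers combine watches with periodic resyncs).
	for i := 0; i < 3; i++ {
		reconcile(fetchDesired(), observeCurrent())
		time.Sleep(1 * time.Second)
	}
}
```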