Cloud Native Architecture Principles
Cloud native architecture is itself an architecture, governed by a set of architectural principles that act as the core control plane of the application architecture. By following these principles, technical directors and architects can avoid major missteps when making technology choices.
1: The Principle of Service Orientation
When the code base grows beyond what a small team can collaborate on, it should be split into services, whether a microservice or mini-service architecture, so that modules with different life cycles are separated and can iterate on the business independently. This prevents fast-iterating modules from being held back by slow ones, improving overall velocity and stability. A service-oriented architecture is also based on interface-oriented programming: functions within a service are highly cohesive, and extracting common functionality into shared modules increases software reuse.
Rate limiting, degradation, circuit breaking, bulkheading, gray (canary) release, back pressure, and zero-trust security in a distributed environment are all essentially control strategies based on service traffic rather than network traffic. Cloud-native architecture therefore emphasizes service orientation: it abstracts the relationships between business modules at the architectural level and standardizes how service traffic flows, so that policy control and governance can be applied to that traffic regardless of the language in which each service is written.
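One of the traffic-based control strategies named above, circuit breaking, can be illustrated with a minimal sketch (the class and thresholds here are illustrative, not from the original text): after a run of consecutive failures the breaker "opens" and fails fast, then after a cooldown it allows a trial call to decide whether to close again.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    failures the circuit opens and calls fail fast until
    `reset_timeout` seconds pass; then one trial call ("half-open")
    decides whether the circuit closes again."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit fully
        return result
```

Because the breaker wraps the call site rather than the network, the same policy applies no matter what language or protocol the downstream service uses, which is the point the principle makes about service traffic versus network traffic.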
2: The Principle of Elasticity
Most systems go live on a machine fleet sized from estimated business volume. From procurement request through vendor negotiation, hardware installation and power-on, software deployment, and performance stress testing, the process often takes months or even a year; and if the business changes during that period, readjusting is very difficult. Elasticity means the deployed scale of a system can expand and contract automatically with changes in business volume, with no need to prepare fixed hardware and software resources through up-front capacity planning. Good elasticity not only shortens the time from procurement to launch, frees enterprises from the cost of idle hardware and software resources, and lowers IT spend; more importantly, when business volume surges suddenly and massively, the company no longer has to "say no" for lack of resources, protecting its revenue.
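The scaling decision behind elasticity can be sketched with the proportional rule used by autoscalers such as the Kubernetes Horizontal Pod Autoscaler (the function name and bounds here are illustrative): scale the replica count so the observed metric converges on its target.

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=100):
    """Proportional autoscaling rule: if average utilization is 50%
    above target, run 50% more replicas (rounded up), clamped to
    the configured bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 replicas averaging 90% CPU against a 60% target scale out to 6, and the same rule scales back in when traffic subsides, which is what removes the need for fixed up-front capacity.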
3: The Principle of Observability
Today, software in most enterprises keeps growing in scale. It used to be possible to debug an entire application on a single machine, but in a distributed environment, information from many hosts must be correlated to answer clearly why a service is down, which services violate their defined SLOs, which users are affected by the current failure, which service metrics were affected by a recent change, and so on; answering these questions demands stronger observability. Observability goes beyond what monitoring, business probing, and APM systems provide: in a distributed system such as the cloud, it proactively uses logs, distributed tracing, and metrics so that a single click in an app can be followed through the multiple services behind it, with each call's latency, return value, and parameters clearly visible, drilling down as far as every third-party call, SQL request, node topology, and network response. This capability lets operations, development, and business staff grasp the running state of the software in real time and, combined with metrics across multiple dimensions, gain unprecedented correlation-analysis ability to continuously measure and optimize business health and user experience in digital terms.
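Following one click across multiple services relies on all the resulting spans sharing a trace ID. A minimal sketch of that mechanism (span fields, service names, and the in-memory export list are all illustrative, not a real tracing API):

```python
import time
import uuid
from contextlib import contextmanager

SPANS = []  # a real system would export spans to a tracing backend

@contextmanager
def span(name, trace_id=None, parent_id=None):
    """Record one timed span; child spans inherit the trace_id so a
    single user request can be correlated across services."""
    s = {"name": name,
         "trace_id": trace_id or uuid.uuid4().hex,
         "span_id": uuid.uuid4().hex,
         "parent_id": parent_id,
         "start": time.monotonic()}
    try:
        yield s
    finally:
        s["duration_ms"] = (time.monotonic() - s["start"]) * 1000
        SPANS.append(s)

# Usage: one inbound request fans out to two downstream calls.
with span("checkout") as root:
    with span("inventory", root["trace_id"], root["span_id"]):
        pass  # call the inventory service here
    with span("payment", root["trace_id"], root["span_id"]):
        pass  # call the payment service here
```

Querying the collected spans by trace ID is what lets an operator see latency and call structure for exactly the request a user complained about.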
4: The Principle of Resilience
Once a service is live, the least acceptable outcome is unavailability: users cannot use the software normally, harming both experience and revenue. Resilience is the software's ability to withstand exceptions in the hardware and software components it depends on. Such exceptions typically include hardware faults, hardware resource bottlenecks (such as CPU or NIC bandwidth exhaustion), business traffic exceeding the software's design capacity, failures and disasters affecting the data center, software bugs, hacker attacks, and other factors that can be fatal to business availability.
Resilience describes, across multiple dimensions, the software's ability to keep providing business services; the core goal is to increase the software's MTBF (Mean Time Between Failures). Architecturally, resilience includes service asynchronization; retry, rate limiting, degradation, circuit breaking, and back pressure; master-slave and cluster modes; high availability within an AZ (availability zone); unitization; cross-region disaster recovery; and geo-distributed active-active deployment.
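Of the mechanisms listed, retry is the simplest to sketch. A common form (the function and parameters here are illustrative) is exponential backoff with full jitter, so transient downstream failures are absorbed without retry storms:

```python
import random
import time

def retry(fn, attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry a flaky call with exponential backoff and full jitter:
    the wait ceiling doubles each attempt, and the actual wait is a
    random point below it to avoid synchronized retry storms."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the failure
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # full jitter
```

In practice retry is combined with the other mechanisms above (rate limiting, circuit breaking) so that retrying clients do not themselves overwhelm a recovering service.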
5: The Principle of All-Process Automation
Technology is often a "double-edged sword". Containers, microservices, DevOps, and large numbers of third-party components reduce distributed complexity and speed up iteration, but because they increase the overall complexity and component count of the software stack, they inevitably complicate software delivery; if this is not properly controlled, applications cannot realize the advantages of cloud-native technology. Practices such as IaC (Infrastructure as Code), GitOps, OAM (Open Application Model), Kubernetes operators, and the many automated delivery tools in CI/CD pipelines standardize the software delivery process within the enterprise and, on top of that standardization, use self-describing configuration data and a final-state-oriented delivery process so that automation tools can understand delivery goals and environment differences, automating software delivery and operations end to end.
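The final-state-oriented delivery the paragraph describes rests on a reconciliation loop, the pattern behind Kubernetes operators and GitOps. A minimal single-pass sketch (the data shapes here are illustrative): compare declared state against observed state and emit the actions needed to converge.

```python
def reconcile(desired, actual):
    """One pass of a desired-state reconciliation loop.
    `desired` and `actual` map resource name -> spec (a dict);
    the return value lists (action, name, spec) steps to converge."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions
```

Because the operator only ever declares the end state, the same loop handles first-time delivery, drift repair, and rollback alike, which is what makes the process automatable.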
6: The Principle of Zero Trust
Zero-trust security re-examines the traditional perimeter security architecture and proposes a new approach. The core idea is that no person, device, or system inside or outside the network should be trusted by default; the basis of trust for access control must be rebuilt on authentication and authorization. Attributes such as IP address, host, geographic location, and network cannot serve as trusted credentials. Zero trust overturns the access-control paradigm, steering security architecture from "network-centric" to "identity-centric"; its essential demand is identity-centered access control.
The first core problem of zero trust is identity: giving each entity its own identity answers the question of who is accessing which resource in which environment. In the development, testing, and operations scenarios of microservices, identity and its associated policies are not only the foundation of security but also of many isolation mechanisms (for resources, services, and environments), and those policies provide a flexible mechanism for granting access anytime, anywhere.
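Identity-centered access control can be sketched minimally (the signing key, service names, and policy shape below are all hypothetical): every request carries a signed token binding it to an identity, and authorization checks that identity against per-resource policy, with the caller's network location playing no role.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # hypothetical signing key for this sketch

def issue_token(identity: str) -> str:
    """Issue a signed token binding a request to an identity."""
    sig = hmac.new(SECRET, identity.encode(), hashlib.sha256).hexdigest()
    return f"{identity}.{sig}"

def authorize(token: str, resource: str, policy: dict) -> bool:
    """Verify the token's signature, then check the identity against
    a per-resource allow-list. Source IP is never consulted."""
    identity, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, identity.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    return identity in policy.get(resource, set())
```

Real systems use standards such as mTLS, OAuth 2.0, or SPIFFE identities rather than a shared secret, but the decision structure — authenticate the identity, then authorize it per resource — is the same.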
7: The Principle of Continuous Architecture Evolution
Technology and business both change rapidly today. Few architectures are fully defined at the outset and remain applicable throughout the software life cycle; rather, an architecture usually needs to be refactored within some scope. The cloud-native architecture itself should and must therefore be a continuously evolving architecture, not a closed one. Beyond incremental iteration and target selection, architecture governance and risk control at the organizational level (such as an architecture review board) must also be considered, especially the balance among architecture, business, and implementation under rapid business iteration. Cloud-native architecture is relatively easy to adopt for new applications (usually chosen along the dimensions of elasticity, agility, and cost), but migrating existing applications requires weighing, at the architectural level, the cost and risk of legacy-application migration and of moving to the cloud, and technically achieving fine-grained control of applications and traffic through microservice/application gateways, application integration, adapters, service meshes, data migration, online gray release, and so on.