Main architectural patterns of cloud-native applications
Cloud-native architecture encompasses many architectural patterns. This article discusses some of the main patterns that bring the most benefit to applications.
1: Service-Oriented Architecture Pattern
Service-oriented architecture is the standard architectural model for building cloud-native applications. It requires that software be divided into application modules, that business relationships between modules be defined through interface contracts (such as an IDL), and that standard protocols (HTTP, gRPC, etc.) be used to ensure interoperability. Combined with DDD (Domain-Driven Design), TDD (Test-Driven Development), and containerized deployment, this improves the code quality and iteration speed of each service. The typical forms of service-oriented architecture are the microservice and mini-service modes. A mini-service can be regarded as a group of very closely related services that share data; the mini-service mode is usually suitable for very large software systems, where it avoids the excessive call overhead (especially inter-service calls and data-consistency processing) and governance complexity caused by overly fine-grained interfaces.
A service-oriented architecture separates code-module relationships from deployment relationships: each service can be deployed with a different number of instances and scaled independently, making overall deployment more economical. In addition, because modules are separated at the process level, each service can be upgraded individually, improving overall iteration efficiency. Note, however, that splitting into services increases the number of modules to maintain; if automation and service-governance capabilities are lacking, or module management does not match the organization's skills, development and operations efficiency will decline.
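The idea of depending on an interface contract rather than an implementation can be sketched as follows. This is a minimal, hypothetical example: in a real service-oriented system the contract would be an IDL file (e.g. a .proto definition) consumed over gRPC or HTTP, and each implementation would be a separately deployed, independently scaled process.

```python
from abc import ABC, abstractmethod

# Hypothetical interface contract for an "order" service. Consumers
# depend only on this contract, never on a concrete implementation.
class OrderService(ABC):
    @abstractmethod
    def get_order(self, order_id: str) -> dict: ...

# One deployable implementation behind the contract; it can be
# replaced, upgraded, or scaled without touching its consumers.
class InMemoryOrderService(OrderService):
    def __init__(self):
        self._orders = {"o-1": {"id": "o-1", "status": "PAID"}}

    def get_order(self, order_id: str) -> dict:
        return self._orders.get(order_id, {"id": order_id, "status": "NOT_FOUND"})

svc = InMemoryOrderService()
print(svc.get_order("o-1")["status"])  # PAID
```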
2: Mesh Architecture Pattern
A mesh-based architecture separates middleware frameworks (such as RPC, caching, and asynchronous messaging) from the business process, further decoupling the middleware SDK from the business code. Middleware upgrades then have no effect on the business process, and even migrating to another platform's middleware is transparent to the business. After the separation, only a "thin" client remains in the business process; this client rarely changes and is responsible only for communicating with the mesh process. Flow control, security, and other logic originally handled in the SDK are performed by the mesh process.
Once a mesh architecture is in place, a large number of distributed-architecture patterns (circuit breaking, rate limiting, degradation, retry, back pressure, bulkheading, and so on) are handled by the mesh process, even if the business code does not use any of these third-party packages. The mesh also enables better security (such as zero-trust architecture capabilities), traffic-based dynamic environment isolation, and traffic-based smoke/regression testing.
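As one concrete example of the patterns a mesh process takes on, here is a minimal circuit-breaker sketch. The class name, thresholds, and state handling are illustrative, not taken from any specific mesh implementation: after enough consecutive failures the circuit "opens" and calls fail fast without reaching the downstream service, then a trial call is allowed after a cooldown.

```python
import time

# Minimal circuit-breaker sketch (illustrative names and thresholds).
class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the circuit
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

The key benefit of the mesh model is that this logic lives in the sidecar process, so the business code above it never imports or configures it.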
3: Serverless Pattern
Unlike most computing modes, serverless "takes away" the act of deployment from operations, so developers do not need to care where the application runs, let alone which OS to install, how to configure the network, or how many CPUs are needed. In architectural terms, when business traffic arrives or a business event occurs, the cloud starts a business process (or routes the work to one already started) to handle it; after processing completes, the cloud automatically shuts down or deschedules the process and waits for the next trigger. In other words, the application's entire runtime is delegated to the cloud.
Serverless has not yet reached the point where every type of application is suitable, so architecture decision-makers need to consider whether an application fits serverless computing. If the application is stateful, scheduling by the cloud may cause context loss, since serverless scheduling does not synchronize application state. If the application is a compute-intensive task that runs in the background for a long time, it gains little from serverless. If the application performs frequent external I/O (network or storage access, or inter-service calls), it is also a poor fit because of the heavy I/O burden and high latency. Serverless is well suited to event-driven data-computation tasks, request/response applications with short computation times, and long-period tasks without complex mutual calls.
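A well-suited serverless workload looks like the sketch below: a short, stateless function invoked per event. The (event, context) signature resembles the handler convention used by FaaS platforms such as AWS Lambda, but the event shape here is hypothetical.

```python
# Hypothetical serverless handler: short, stateless computation
# triggered per event; the platform decides when and where it runs.
def handler(event, context=None):
    order_total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": order_total}

print(handler({"order_id": "o-1",
               "items": [{"price": 5, "qty": 2}, {"price": 3, "qty": 1}]}))
# {'order_id': 'o-1', 'total': 13}
```

Because the handler keeps no state between invocations, the cloud can freely start, stop, and scale instances without any context being lost.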
4: Storage-Compute Separation Pattern
In a distributed environment, the CAP trade-off mainly concerns stateful applications: stateless applications have no C (consistency) dimension, so they can achieve good A (availability) and P (partition tolerance), and thus better elasticity. In a cloud environment, it is recommended to use cloud services to store all kinds of transient state (such as sessions) as well as structured and unstructured persistent data, thereby separating storage from computation. Some state, however, would suffer a significant drop in transaction performance if kept in a remote cache, for example when transaction session data is large and must be repeatedly re-fetched per request. In such cases, consider an Event Log + Snapshot (or checkpoint) approach, so that the service can be restored quickly and incrementally after a restart, reducing how long unavailability affects the business.
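The Event Log + Snapshot idea can be sketched as follows (all names are hypothetical): state is rebuilt from the latest snapshot plus only the events logged after it, so a restarted instance recovers incrementally instead of replaying its whole history.

```python
# Event Log + Snapshot sketch: snapshot periodically, replay only the tail.
class Account:
    def __init__(self):
        self.balance = 0
        self.event_log = []     # events since the last snapshot
        self.snapshot = (0, 0)  # (balance, number of events applied)

    def apply(self, amount):
        self.balance += amount
        self.event_log.append(amount)
        if len(self.event_log) >= 3:  # snapshot every 3 events (illustrative)
            self.snapshot = (self.balance, self.snapshot[1] + len(self.event_log))
            self.event_log = []

    def recover(self):
        balance, _ = self.snapshot
        for amount in self.event_log:  # replay only events after the snapshot
            balance += amount
        return balance

acct = Account()
for amt in (10, -3, 5, 7):
    acct.apply(amt)
print(acct.recover())  # 19
```

In a real system the log and snapshot would live in durable cloud storage, so a freshly scheduled compute instance can rebuild the state it needs.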
5: Distributed Transaction Pattern
The microservice model advocates that each service use a private data source rather than sharing one as a monolith does. Large-scale business operations, however, often need to span multiple microservices, which inevitably raises distributed-transaction problems; otherwise the data will become inconsistent. Architects need to choose the appropriate distributed-transaction mode for each scenario.
The traditional XA mode offers strong consistency but poor performance.
Message-based eventual consistency (BASE) usually performs well, but its applicability is limited: the consumer side can only succeed eventually and cannot trigger a transaction rollback on the message producer side.
The TCC (Try-Confirm-Cancel) mode is fully controlled at the application layer: transaction isolation is controllable and it can be quite efficient, but it is highly intrusive to the business, with high design, development, and maintenance costs.
The SAGA mode has advantages and disadvantages similar to TCC, but it has no Try phase; instead, each forward transaction has a corresponding compensating transaction, which is also costly to develop and maintain.
The AT mode of the open-source project Seata performs very well, requires no extra code, and can roll back automatically, but it has some usage-scenario limitations.
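The SAGA idea above can be sketched in a few lines (the steps here are hypothetical): each forward step is paired with a compensating step, and on failure the compensations for the already-completed steps run in reverse order.

```python
# Minimal saga sketch: forward steps paired with compensating steps.
def run_saga(steps):
    """steps: list of (forward, compensate) callables."""
    done = []
    try:
        for forward, compensate in steps:
            forward()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):  # undo completed steps in reverse
            compensate()
        return "rolled back"
    return "committed"

def fail_payment():
    raise RuntimeError("payment failed")

log = []
steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (fail_payment, lambda: log.append("refund")),
]
print(run_saga(steps), log)  # rolled back ['reserve stock', 'release stock']
```

Note that the payment step's compensation never runs, because its forward step never completed; only the stock reservation is undone.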
6: Observable Architecture
An observable architecture covers three aspects: Logging, Tracing, and Metrics. Logging provides detailed information tracking at multiple levels (verbose/debug/warning/error/fatal) and is supplied actively by application developers. Tracing provides the complete call-link trace of a request from the front end to the back end, which is especially useful in distributed scenarios. Metrics provides multi-dimensional measurements for quantifying the system.
Architecture decision-makers need to choose a suitable open-source observability framework (such as OpenTracing or OpenTelemetry), standardize the observable context data (method name, user information, geographic location, request parameters, etc.), and plan which services and technical components this data propagates through, using the span ID/trace ID in logs and tracing information to ensure there is enough information for quick correlation when performing distributed link analysis.
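The correlation role of the trace ID can be sketched as follows. The services and log format here are hypothetical and no real tracing backend is used; the point is that every log line across services carries the same trace_id generated at the entry point, so they can be joined during distributed link analysis.

```python
import uuid

# Hypothetical structured log line carrying the propagated trace_id.
def log(trace_id, service, message, sink):
    sink.append(f"trace_id={trace_id} service={service} msg={message}")

def inventory_service(trace_id, sink):
    log(trace_id, "inventory", "stock checked", sink)

def order_service(sink):
    trace_id = uuid.uuid4().hex        # generated at the entry point
    log(trace_id, "order", "order received", sink)
    inventory_service(trace_id, sink)  # propagated to downstream calls
    return trace_id

lines = []
tid = order_service(lines)
print(all(f"trace_id={tid}" in line for line in lines))  # True
```

Frameworks like OpenTelemetry automate exactly this propagation (plus span IDs and timing) across process and network boundaries.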
Since the main goal of observability is to measure the service SLO (Service Level Objective) and thereby optimize the SLA, the architecture design needs to define a clear SLO for each component, covering concurrency, latency, available time, capacity, and so on.
7: Event-Driven Architecture
Event-driven architecture (EDA) is essentially an integration architecture pattern between applications or components.
Events differ from traditional messages: an event has a schema, so its validity can be verified. At the same time, EDA provides a QoS guarantee mechanism and can respond to event-processing failures. Event-driven architecture is used not only for decoupling (micro)services but also in the following scenarios:
Enhanced service resilience: because services are integrated asynchronously, a downstream processing failure or even downtime is not perceived upstream and naturally does not affect the upstream;
CQRS (Command Query Responsibility Segregation): commands that affect service state are initiated through events, while queries that do not affect state use synchronously called APIs. Combined with Event Sourcing, EDA can be used to maintain data-change consistency: when the service state needs to be rebuilt, the events in the EDA can be replayed;
Data change notification: under a service architecture, other services are often interested when the data in one service changes. For example, after a user's order completes, the points service and credit service need to be notified by events so they can update the user's points and credit level;
Building open interfaces: under EDA, an event provider does not have to care who the subscribers are, unlike service invocation, where the data producer must know where the consumer is and call it; the interface thus stays open;
Event stream processing: applied to data-analysis scenarios involving large volumes of event streams (rather than discrete events); a typical application is Kafka-based log processing;
Event-triggered response: in the IoT era, the data generated by large numbers of sensors does not need to wait for processing results the way human-computer interaction does, making it a natural fit for EDA-based data-processing applications.
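The decoupling and data-change-notification scenarios above can be sketched with a tiny in-process event bus (the API and event names are hypothetical; a production system would use a broker such as Kafka): the publisher never knows who its subscribers are.

```python
from collections import defaultdict

# Tiny in-process event bus sketch: publishers and subscribers are
# decoupled, which is what keeps the interface open.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
updates = []
# Points and credit services react to order completion independently;
# the order service that publishes the event knows nothing about them.
bus.subscribe("order.completed", lambda e: updates.append(("points", e["user"])))
bus.subscribe("order.completed", lambda e: updates.append(("credit", e["user"])))
bus.publish("order.completed", {"user": "u-42", "order": "o-1"})
print(updates)  # [('points', 'u-42'), ('credit', 'u-42')]
```

Adding a new subscriber (say, a fraud-check service) requires no change to the publisher, which is the openness property described above.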
Knowledge Base Team