Assistant Engineer

Get started with software architecture

Posted time: Jan 6, 2017 11:18 AM
Software architecture refers to the basic structure of the software.
Proper architecture is one of the most important factors in a software product's success. Large software companies usually have dedicated architect positions that are open only to senior programmers.
O'Reilly once published a free brochure titled Software Architecture Patterns (PDF) that introduces five common software architectures. It is a very good primer, and I benefited a lot from reading it. Below are my reading notes.
I. Layered architecture
Layered architecture is common in software and is the de facto standard architecture. It is the recommended option when you are unsure which architecture to use.
This architecture divides the software into several horizontal layers. Each layer has clear roles and responsibilities and does not need to know the details of the other layers. Layers communicate through interfaces.
Although there is no explicit convention, a four-layer structure is the most common.
 Presentation: the user interface, responsible for visual display and user interaction.
 Business: implements the business logic.
 Persistence: provides the data. SQL statements are placed on this layer.
 Database: stores the data.
Some software adds a service layer between the business layer and the persistence layer to provide universal interfaces shared by different business logic.
Users' requests will be processed by the four layers in succession. None of the four layers can be skipped.
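As a minimal sketch of this structure (all class and method names here are hypothetical, chosen only for illustration), each layer talks only to the layer directly below it, so a request passes through all four layers in order:

```python
class Database:                      # Database layer: stores the data
    def __init__(self):
        self._rows = {1: "Alice"}    # stand-in for real tables
    def query(self, user_id):
        return self._rows.get(user_id)

class Persistence:                   # Persistence layer: data access (SQL would live here)
    def __init__(self, db):
        self._db = db
    def find_user(self, user_id):
        return self._db.query(user_id)

class Business:                      # Business layer: business logic
    def __init__(self, persistence):
        self._persistence = persistence
    def greeting_for(self, user_id):
        name = self._persistence.find_user(user_id)
        return f"Hello, {name}!" if name else "User not found"

class Presentation:                  # Presentation layer: user interaction
    def __init__(self, business):
        self._business = business
    def show(self, user_id):
        print(self._business.greeting_for(user_id))

# A request flows Presentation -> Business -> Persistence -> Database.
Presentation(Business(Persistence(Database()))).show(1)  # prints "Hello, Alice!"
```

Because the Presentation class never touches the Database class directly, swapping out the storage only requires changing the Persistence layer.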
Advantages:
 The structure is simple, easy to understand and develop.
 Programmers with different skills can divide the work and take charge of different layers. It is a natural fit for the organizational structure of most software companies.
 Each layer can be tested independently, and the interfaces of other layers can be mocked.
Disadvantages:
 Adapting the code to environment changes or adding a new feature is usually troublesome and time-consuming.
 Deployment is comparatively cumbersome. You often need to redeploy the whole software just to make a small change, which is inconvenient for continuous delivery.
 When the software is upgraded, you may need to suspend the entire service.
 Poor scalability. When users' requests surge, you must expand each layer in turn, but since the layers are internally coupled, expansion may be very difficult.
II. Event-driven architecture
An event is a notification issued by the software when its state changes.
Event-driven architecture is a software architecture in which components communicate through events. It can be divided into four parts.
 Event queue: the entrance for receiving events.
 Event mediator: distributes different events to different business logic units.
 Event channel: the connecting channel between the mediator and the processors.
 Event processor: implements the business logic. After the processing is completed, it will issue the event to trigger the next operation.
For simple projects, the event queue, mediator and event channels can be merged into one, so the entire software reduces to an event broker and event processors.
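The simplified broker-plus-processors form can be sketched as follows (the EventBroker API and the order events are hypothetical): processors subscribe to event types, and after handling an event they may publish a follow-up event to trigger the next operation.

```python
from collections import defaultdict, deque

class EventBroker:
    """Event queue, mediator and channels merged into one, as in a simple project."""
    def __init__(self):
        self._queue = deque()                 # event queue: entrance for events
        self._processors = defaultdict(list)  # event type -> subscribed processors

    def subscribe(self, event_type, processor):
        self._processors[event_type].append(processor)

    def publish(self, event_type, payload):
        self._queue.append((event_type, payload))

    def run(self):
        # Dispatch queued events; processors may publish follow-up events.
        while self._queue:
            event_type, payload = self._queue.popleft()
            for processor in self._processors[event_type]:
                processor(self, payload)

log = []

def on_order_placed(broker, order):           # event processor: business logic
    log.append(f"charging {order}")
    broker.publish("order_paid", order)       # issue the next event

def on_order_paid(broker, order):
    log.append(f"shipping {order}")

broker = EventBroker()
broker.subscribe("order_placed", on_order_placed)
broker.subscribe("order_paid", on_order_paid)
broker.publish("order_placed", "order-42")
broker.run()
print(log)  # ['charging order-42', 'shipping order-42']
```

Note that the two processors never call each other directly; they are coupled only to the event types, which is what makes them easy to add, remove or replace.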
Advantages:
 A distributed, asynchronous architecture: event processors are highly decoupled and the software scales well.
 Widely applicable to various types of projects.
 Good performance. Thanks to its asynchronous nature, the software rarely suffers from congestion.
 Event processors can be loaded and unloaded independently, which eases deployment.
Disadvantages:
 Development is comparatively difficult because it involves asynchronous programming (remote communication and lost responses must be taken into account).
 Hard to support atomic operations, because an event may span multiple processors and is hard to roll back.
 The distributed, asynchronous nature makes this architecture hard to test.
III. Microkernel architecture
Microkernel architecture is also called “plug-in architecture”. In it, the kernel of the software is kept comparatively small, and the major features and business logic are implemented through plug-ins.
The core usually contains only the minimum functionality needed to run the system. Plug-ins are independent of each other, and communication between plug-ins should be minimized to avoid mutual dependencies.
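A minimal sketch of the idea (the Kernel class and the two sample plug-ins are hypothetical): the core only knows how to register and invoke plug-ins, while all actual features live outside it.

```python
class Kernel:
    """Minimal core: it can register plug-ins and dispatch calls to them, nothing more."""
    def __init__(self):
        self._plugins = {}            # plug-in registration mechanism

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def run(self, name, *args):
        if name not in self._plugins:
            raise KeyError(f"no plug-in registered for {name!r}")
        return self._plugins[name](*args)

# Features are implemented as independent plug-ins, not inside the core.
kernel = Kernel()
kernel.register("upper", str.upper)
kernel.register("word_count", lambda text: len(text.split()))

print(kernel.run("upper", "hello"))        # HELLO
print(kernel.run("word_count", "a b c"))   # 3
```

Adding a feature means registering one more plug-in; removing one means unregistering it, with no change to the core.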
Advantages:
 Good extensibility: adding a function only requires developing a plug-in.
 Functions are isolated from each other. Plug-ins can be loaded and unloaded independently, making deployment easy.
 Highly customizable to fit different needs.
 Supports progressive development: functionality can be enhanced step by step.
Disadvantages:
 Poor scalability. The core is usually an independent unit that is hard to distribute.
 Development is relatively difficult because it involves communication between the plug-ins and the core, as well as the plug-in registration mechanism.
IV. Microservices architecture
Microservices architecture is an upgraded version of the service-oriented architecture (SOA).
Every service is a separately deployed unit. These units are distributed and decoupled, and they communicate through remote protocols (such as REST and SOAP).
Microservices architecture has three implementation modes.
 RESTful API mode: Services are provided through APIs. Cloud services fall into this category.
 RESTful application mode: Services are provided through traditional network protocols or application protocols. Behind the services is usually a multi-functional application. This mode is often seen inside a company.
 Centralized message mode: a message broker is used to provide message queuing, load balancing, and unified logging and exception handling. The disadvantage is that the broker may become a single point of failure, so it may need to be clustered.
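A toy sketch of the RESTful API mode is possible with Python's standard library alone (the user-service endpoint, port choice and data are all hypothetical): one service exposes data over HTTP, and another component consumes it through a remote call instead of a local import.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {"1": {"id": "1", "name": "Alice"}}   # the service's own data store

class UserService(BaseHTTPRequestHandler):
    """A separately deployable unit exposing GET /users/<id>."""
    def do_GET(self):
        user = USERS.get(self.path.rsplit("/", 1)[-1])
        body = json.dumps(user if user else {"error": "not found"}).encode()
        self.send_response(200 if user else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):              # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), UserService)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service consumes it over REST rather than linking to its code.
url = f"http://127.0.0.1:{server.server_port}/users/1"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))  # {'id': '1', 'name': 'Alice'}
server.shutdown()
```

The point of the exercise: the consumer knows only the URL and the JSON contract, so the user service can be rewritten, redeployed or scaled out without touching the consumer.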
Advantages:
 Good scalability and low coupling between services.
 Easy to deploy. Instead of a single deployable unit, the software is split into multiple services, each of which is a deployable unit.
 Easy to develop. Each component can be developed with continuous integration, enabling real-time deployment and constant upgrades.
 Easy to test: every service can be tested separately.
Disadvantages:
 Because of the stress on mutual independence and low coupling, services may be split too finely. A system that depends on a large number of microservices becomes messy and heavy, and performance suffers.
 Once services need to communicate with each other (that is, one service uses another), the overall architecture becomes complicated. A typical example is shared utility classes; one solution is to copy them into every service, trading redundancy for architectural simplicity.
 The distributed nature makes atomic operations hard to achieve, and transaction rollback is comparatively difficult.
V. Cloud architecture
Cloud architecture mainly addresses scalability and concurrency issues and is the easiest architecture to scale.
Its high scalability comes primarily from the absence of a central database. Instead, all data is replicated into memory as replicable in-memory data units, and the business processing capacity is encapsulated into processing units. When traffic increases, new processing units are started; when traffic decreases, processing units are shut down. With no central database, the biggest bottleneck to scalability is gone. Because each processing unit keeps its data in memory, the data should be persisted at appropriate points.
This mode is primarily divided into two parts: processing unit and virtualized middleware.
 Processing unit: implements the business logic.
 Virtualized middleware: takes charge of communications, session persistence, data replication, distributed processing, and processing unit deployment.
The virtualized middleware contains four components.
 Messaging grid: manages user requests and sessions. When a request arrives, it decides which processing unit the request is allocated to.
 Data grid: replicates data to every processing unit (data synchronization), ensuring every processing unit gets the same data.
 Processing grid: optional. If a request involves different types of processing units, this grid is in charge of coordinating the processing units.
 Deployment manager: takes charge of activating and deactivating processing units. It monitors the load and response time. When the load increases, it activates processing units; when the load goes down, it deactivates processing units.
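These parts can be sketched in a few classes (everything here is hypothetical, including the scaling policy of one unit per 100 concurrent requests): processing units hold replicated in-memory data, the deployment manager activates and deactivates them with the load, and the messaging grid routes requests to a unit.

```python
class ProcessingUnit:
    """Holds a full replica of the data in memory (no central database)."""
    def __init__(self, data):
        self.data = dict(data)            # data grid: each unit gets the same copy
    def handle(self, key):
        return self.data.get(key)

class DeploymentManager:
    """Activates/deactivates units as load changes (policy: 1 unit per 100 requests)."""
    def __init__(self, shared_data):
        self._shared_data = shared_data
        self.units = [ProcessingUnit(shared_data)]   # always at least one unit
    def adjust(self, concurrent_requests):
        target = max(1, (concurrent_requests + 99) // 100)
        while len(self.units) < target:              # load up: activate units
            self.units.append(ProcessingUnit(self._shared_data))
        del self.units[target:]                      # load down: deactivate units
        return len(self.units)

class MessagingGrid:
    """Allocates each incoming request to a processing unit (round robin here)."""
    def __init__(self, manager):
        self._manager = manager
        self._next = 0
    def route(self, key):
        units = self._manager.units
        unit = units[self._next % len(units)]
        self._next += 1
        return unit.handle(key)

manager = DeploymentManager({"answer": 42})
grid = MessagingGrid(manager)
print(manager.adjust(250))     # 3  (units activated for the load)
print(grid.route("answer"))    # 42
print(manager.adjust(50))      # 1  (units deactivated again)
```

Because every unit carries the same in-memory data, any unit can serve any request, which is exactly what lets the manager add and remove units freely.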
Advantages:
 Handles high load and scales well.
 Supports dynamic deployment.
Disadvantages:
 Complicated to implement and costly.
 Mainly suitable for website applications; not suitable for large database applications with massive data throughput.
 Hard to test.