Traditional database architectures rely on high-end hardware and a simple topology that contains only a small number of servers. However, traditional database systems cannot meet the scaling requirements of growing businesses. The core logic of a cloud computing architecture is to provide pooled resources by using virtualization. Cloud native databases use distributed database architectures that allow you to scale out at a large scale. Each cloud native database system is deployed across multiple servers and virtual machines, which presents new challenges for system management. The core challenge is how to implement scalability and high availability so that users can consume resources on demand and improve resource utilization.

The next-generation architecture for enterprise-grade databases is expected to combine a cloud native architecture, a distributed architecture, and hybrid transaction/analytical processing (HTAP). In this architecture, the upper layer uses a shared-nothing architecture based on sharding, and the underlying layer uses a cloud native architecture in which compute and storage are decoupled. This way, your databases can be scaled out on demand and are highly available. In scenarios that process highly concurrent requests, this architecture greatly decreases the number of shards that each query needs to scan, which helps simplify distributed transactions. The following list describes the development trends of databases:
  • Scalability and high availability: A distributed and cloud native architecture needs to be developed, in which distributed shared storage is used and compute is decoupled from storage. This architecture incorporates the features of both a cloud native architecture and a shared-nothing architecture to ensure high scalability and high availability. This way, you can scale out your databases based on your business requirements.
  • Heterogeneous data sources: Databases are processing an increasing amount of multi-model data, both structured and unstructured. This creates technology challenges such as how to integrate and process data from heterogeneous data sources, how to process high-dimensional vectors, and how to use vector processing engines (VPEs) to convert unstructured data into structured data.
  • Online real-time interactive analysis: Large amounts of data need to be processed and analyzed online. Therefore, you need to handle challenges such as how to perform online real-time interactive analysis on large amounts of data, how to perform parallel processing by using models such as DSP and MPP, and how to optimize scheduling for parallel processing.
  • Ease of use, high reliability, and simplified O&M: Databases must be intelligent and secure. For example, you need to develop a management and control platform for intelligent scheduling, intelligent monitoring, and automatic fixing, and you need to strengthen data security, privacy protection, and data encryption. This makes databases easier to use, simplifies database O&M, and improves database reliability.
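To make the shard-pruning claim above concrete, the following is a minimal sketch of hash-based sharding, assuming a table sharded on a `user_id` key across eight shards. All names, the shard count, and the hashing scheme are illustrative assumptions, not the method of any specific product.

```python
# Sketch of hash-based shard routing and pruning. A query whose
# predicate includes the sharding key can be routed to a small set
# of shards; a query without such a predicate must scan every shard.
import hashlib

NUM_SHARDS = 8  # illustrative shard count

def shard_for(user_id: str) -> int:
    """Map a sharding-key value to a shard by hashing (a common scheme)."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def shards_to_scan(predicate_user_ids):
    """Return the set of shards a query must touch."""
    if not predicate_user_ids:
        # No sharding-key predicate: fan out to all shards.
        return set(range(NUM_SHARDS))
    # Sharding-key predicate: prune to only the matching shards.
    return {shard_for(uid) for uid in predicate_user_ids}

# A point query on one user touches a single shard instead of all eight.
print(len(shards_to_scan(["user-42"])))  # 1
print(len(shards_to_scan(None)))         # 8
```

This is why a well-chosen sharding key matters: point queries stay on one shard and can often avoid distributed transactions entirely, while key-less queries pay the full fan-out cost.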
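The parallel-processing trend can be sketched as a scatter-gather aggregation in the spirit of MPP execution: each "node" aggregates its own partition locally, and a coordinator merges the partial results. This is purely an illustrative single-process sketch (threads stand in for nodes); real MPP engines plan, schedule, and execute such operators across machines.

```python
# Scatter-gather sum over partitioned data, MPP-style in miniature:
# local aggregation per partition, then a final merge at the coordinator.
from concurrent.futures import ThreadPoolExecutor

def local_sum(partition):
    """Local aggregation on one partition ("one node's work")."""
    return sum(partition)

def parallel_sum(partitions):
    """Coordinator: scatter partitions to workers, gather partial sums."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        partials = pool.map(local_sum, partitions)
    return sum(partials)

# Four disjoint partitions that together cover 0..999.
partitions = [list(range(i, 1000, 4)) for i in range(4)]
print(parallel_sum(partitions))  # 499500, same as summing 0..999 directly
```

The design point is that only small partial results cross the network, not the raw data, which is what makes online interactive analysis over large datasets feasible.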