How to balance scalability and stability in cloud computing

In cloud computing, enterprises and individuals can purchase software features, computing resources, and storage space on demand and pay only for what they use. As a business grows, it requires an increasing amount of computing and storage resources. Therefore, scalability is a key performance metric of cloud services.

PolarDB uses a shared storage architecture that provides excellent scalability. If your computing resources are insufficient, you can dynamically add compute nodes or upgrade the specifications of existing compute nodes without affecting your business. A newly added node can directly access the shared data replicas, so no data needs to be copied, and a node can be added within 1 minute regardless of the amount of existing data. PolarDB provides a built-in proxy named PolarProxy to balance loads among nodes, which makes increases or decreases in the number of nodes transparent to your business. At the storage layer, all users share one large-scale storage cluster, and new storage resources can be dynamically added to this cluster. Therefore, in theory, the storage space that PolarDB provides can always be expanded.
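The read/write splitting that a proxy of this kind performs can be sketched as follows. This is a minimal illustration of the routing idea, not the actual PolarProxy implementation; the class, node names, and routing policy are all assumptions for the example.

```python
# Illustrative sketch of proxy-based read/write splitting: writes go to the
# primary node, reads are balanced round-robin across read-only nodes, and a
# new node can serve reads immediately because it attaches to shared storage.

class ProxySketch:
    def __init__(self, primary, readers):
        self.primary = primary          # node that handles writes
        self.readers = list(readers)    # read-only nodes sharing storage
        self._next = 0

    def add_reader(self, node):
        # The new node reads the same shared data replicas; no data copy
        # is needed before it starts serving traffic.
        self.readers.append(node)

    def route(self, statement):
        # Writes are sent to the primary; reads are balanced round-robin.
        if statement.strip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.primary
        node = self.readers[self._next % len(self.readers)]
        self._next += 1
        return node

proxy = ProxySketch("primary", ["ro-1", "ro-2"])
print(proxy.route("SELECT * FROM t"))   # ro-1
print(proxy.route("SELECT * FROM t"))   # ro-2
print(proxy.route("UPDATE t SET x=1"))  # primary
```

Because the routing table is the only state that changes when a node is added or removed, applications keep a single connection endpoint and never see the topology change.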

In addition to scalability, stability is another key performance metric of cloud services. Stability can be measured by metrics such as the recovery point objective (RPO) and the recovery time objective (RTO). To ensure high stability, PolarDB isolates user resources at the computing layer where possible, so that users have exclusive access to their resources, and implements multitenancy at the network layer and the storage layer. Among the three layers, the computing layer receives and processes business data. To improve stability and performance, PolarDB implements its storage links in user space. At the storage layer, PolarDB distributes the data of each user across multiple storage nodes. This minimizes the waste of resources on each node and ensures low latency and sufficient bandwidth for I/O operations. If the resources on a node become insufficient or some data on a node is frequently accessed, PolarDB automatically migrates that data to other nodes. PolarDB uses its ParallelRaft technology to ensure that each data record has three replicas and that each I/O operation is persisted on at least two replicas before it completes. This minimizes the RPO. The shared storage architecture ensures that data is almost fully synchronized among nodes, so if a node fails, its workload can be immediately switched to other nodes. This minimizes the RTO. PolarProxy works with the shared storage architecture to make these node switchovers transparent to applications.
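The durability rule above is a majority-quorum write: with three replicas, an I/O operation is acknowledged once at least two replicas have persisted it. The toy function below illustrates only this quorum arithmetic; it is not PolarDB's actual replication code.

```python
# Toy model of a majority-quorum write over 3 replicas: a write is durable
# once a majority (2 of 3) of replicas have persisted it, so losing any one
# replica never loses acknowledged data. Illustration only.

def quorum_write(acks: int, replicas: int = 3) -> bool:
    """Return True if enough replica acknowledgments arrived for durability."""
    majority = replicas // 2 + 1
    return acks >= majority

assert quorum_write(2) is True   # 2 of 3 replicas on disk: durable
assert quorum_write(1) is False  # only 1 replica: not yet durable
```

Because any two replicas form a majority, any surviving pair of replicas is guaranteed to contain every acknowledged write, which is what keeps the RPO at zero for acknowledged transactions.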

Comparison between a traditional distributed architecture and a storage and compute separation architecture

Distributed databases have a long history of development. Early distributed databases used two major architectures: shared-nothing and shared-disk.

To increase the storage space and IOPS of a database in a shared-disk architecture, you must expand the underlying storage area network (SAN). However, SANs offer poor scalability and are expensive. Therefore, the shared-disk architecture is typically used to ensure high availability in multi-active scenarios rather than to scale out.

For a long time, the shared-nothing architecture was the preferred architecture for distributed databases and was widely used by Internet companies.

In a shared-nothing distributed database, nothing is shared among nodes: each node has its own computing and storage resources and its own compute and storage engines. Data is sharded so that it can be accessed and processed in parallel. Local computing and storage maximize bandwidth and minimize latency for data reads and writes, which was a clear advantage over other architectures. In the early days, when network I/O was the bottleneck, a major principle in designing and optimizing a distributed system was to keep computation and storage local to reduce network overhead. However, the shared-nothing architecture is not ideal when cross-shard access is required. For example, implementing distributed transactions and global indexes in this architecture requires complex operations and is inefficient. Because compute and storage resources are coupled, data must be distributed nearly evenly among nodes, and to expand a cluster you must add storage and computing resources at the same time, which can waste resources. During expansion, data must be redistributed, and this record-by-record migration consumes large amounts of computing resources and significantly degrades service performance. In addition, because data cannot be shared, data silos appear, which makes cross-database and cross-application computing difficult.
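The migration cost of expanding a shared-nothing cluster can be seen with a small sketch. Assuming the common hash-modulo placement scheme (an illustrative assumption, not a specific product's algorithm), changing the node count relocates the majority of records:

```python
# Sketch of why expanding a shared-nothing cluster forces data migration:
# with hash(key) % node_count placement, changing node_count relocates most
# records. A deterministic hash (crc32) is used so runs are reproducible.

import zlib

def placement(keys, node_count):
    # Map each record key to the node that owns its shard.
    return {k: zlib.crc32(k.encode()) % node_count for k in keys}

keys = [f"row-{i}" for i in range(1000)]
before = placement(keys, 4)   # 4-node cluster
after = placement(keys, 5)    # expand to 5 nodes
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} records must migrate")
```

With this scheme, roughly four out of five records change owners when a fifth node is added, which is the record-by-record migration cost described above. (Consistent hashing reduces, but does not eliminate, this movement.)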

In recent years, separating compute and storage has become common in distributed database architectures. In 2001, Google File System (GFS) started to use commodity x86 servers and hard disks to provide large-scale storage. However, due to the limits on network transmission rates and inter-machine bandwidth at the time, GFS still used some nodes in which compute and storage were coupled. As networks and underlying hardware have developed, storage and compute separation has become more popular. With data centers adopting 25G, 50G, and 100G networks, the bottleneck encountered when compute nodes access data over the network has shifted from I/O to CPU, so the key system concern becomes resource utilization. As cloud computing has grown, public cloud providers have gradually replaced standalone local storage with network-based block storage. On this infrastructure, the benefits of the storage and compute separation architecture are clear. High-density, low-power servers are used for distributed storage so that storage can be easily scaled out, while servers with large memory and multiple high-clock-speed CPUs are deployed in compute clusters so that computing resources can be easily scaled out. Modules are decoupled from each other, and resources can be scaled out on demand, which ensures high scalability and overall resource utilization. In this architecture, data no longer needs to be migrated record by record when storage is scaled, so issues such as transaction integrity and data consistency do not arise during scaling, and the computing resources consumed are far smaller than in the shared-nothing architecture.
In the cloud era, the storage and compute separation architecture has become the mainstream architecture for both online transaction processing (OLTP) and online analytical processing (OLAP) database systems, and the number of database systems deployed with this architecture continues to increase.
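The contrast with the shared-nothing expansion cost can be sketched in the same style. In a storage and compute separation architecture, data placement lives in the shared storage layer, so adding a compute node only updates routing metadata and moves no records. The classes and names below are illustrative assumptions, not any product's design.

```python
# Sketch of scaling under storage/compute separation: compute nodes are
# stateless consumers of shared storage chunks, so adding a compute node
# migrates zero records. Contrast with hash-modulo re-sharding in a
# shared-nothing cluster.

class SeparatedCluster:
    def __init__(self, compute, storage_chunks):
        self.compute = list(compute)        # stateless compute nodes
        self.chunks = dict(storage_chunks)  # chunk_id -> storage location

    def add_compute_node(self, node):
        # The new node reads the same shared chunks over the network;
        # nothing is copied or re-sharded.
        self.compute.append(node)
        return 0                            # records migrated

cluster = SeparatedCluster(["c1"], {0: "s1", 1: "s2", 2: "s3"})
migrated = cluster.add_compute_node("c2")
print(migrated)  # 0
```

Storage capacity scales the same way on its own axis: new storage nodes join the shared pool without touching the compute tier.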

Advantages and disadvantages of distributed transaction processing and centralized transaction processing

Transaction processing is a core database capability that ensures atomicity, consistency, isolation, and durability (ACID). Database systems must process large numbers of concurrent transactions. To let concurrent transactions execute efficiently and independently in parallel, technologies such as multiversion concurrency control (MVCC) and optimistic concurrency control (OCC) have been developed. The key purpose of these technologies is to order concurrent transactions that access the same data records so that each transaction produces the expected result. For example, a transaction can be blocked or rolled back to ensure that the expected results are returned.
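The core MVCC idea can be shown in a few lines: each write creates a new version stamped with a commit timestamp, and a reader sees the newest version committed at or before its snapshot timestamp, so readers never block writers. This is a minimal sketch of the concept, not the versioning code of any specific engine.

```python
# Minimal MVCC sketch: writes append timestamped versions; a read at a
# snapshot timestamp returns the latest version visible to that snapshot,
# so concurrent transactions do not block each other.

import itertools

class MVCCStore:
    def __init__(self):
        self.versions = {}               # key -> [(commit_ts, value), ...]
        self.clock = itertools.count(1)  # monotonically increasing timestamps

    def write(self, key, value):
        ts = next(self.clock)
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def read(self, key, snapshot_ts):
        # Return the newest version committed at or before the snapshot.
        visible = [(ts, v) for ts, v in self.versions.get(key, [])
                   if ts <= snapshot_ts]
        return max(visible)[1] if visible else None

store = MVCCStore()
snapshot = store.write("x", "v1")  # commits at ts=1; a reader snapshots here
store.write("x", "v2")             # commits at ts=2, after the snapshot
print(store.read("x", snapshot))   # v1: the later write is invisible
```

A real engine adds garbage collection of old versions and conflict checks at commit time; the visibility rule shown here is the part that lets reads proceed without locks.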

If transactions are scheduled for centralized processing on one node, the order in which the transactions are committed is the order in which they are executed. This makes strict external consistency easy to guarantee, which is the benefit of centralized transaction processing. However, if centralized transaction processing is used in a distributed database, the scalability of transaction processing is restricted because all transactions are processed on one node: the range of data on which the system can perform transactions is limited to the data that the single node can access, and the transaction throughput of the entire system is limited by the processing capability of that node.
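Both the benefit and the limitation can be seen in a toy centralized sequencer: one node hands out commit sequence numbers, so the commit order is trivially a total order, but every transaction in the system must pass through that one node's lock. This is an illustration of the concept, not the design of any particular system.

```python
# Toy centralized transaction sequencer: a single node assigns monotonically
# increasing commit sequence numbers, giving a trivial total order (strict
# external consistency) at the cost of funneling all commits through one
# lock, which bounds system-wide throughput.

import itertools
import threading

class CentralSequencer:
    def __init__(self):
        self._seq = itertools.count(1)
        self._lock = threading.Lock()
        self.log = []                 # (sequence_number, txn_id) in commit order

    def commit(self, txn_id):
        # Every transaction in the system serializes through this one lock.
        with self._lock:
            seq = next(self._seq)
            self.log.append((seq, txn_id))
            return seq

seq = CentralSequencer()
print(seq.commit("T-a"))  # 1
print(seq.commit("T-b"))  # 2
```

Distributed alternatives replace the single sequencer with per-shard or clock-based ordering, trading some ordering simplicity for scalability.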