
Lindorm: Multi-zone deployment

Last Updated: Mar 22, 2024

In a multi-zone deployment, a single Lindorm instance is deployed across multiple zones. Multi-zone instances improve disaster recovery capabilities, and a Lindorm instance can ensure strong data consistency across zones. If your data requires only eventual consistency, the instance can respond to your requests at low latency. This improves the service quality of your online business.

Overview of traditional primary/secondary disaster recovery

In a traditional primary/secondary disaster recovery solution, you purchase two instances, such as Primary Instance 1 in Zone A and Secondary Instance 2 in Zone B, to ensure high availability, and use Lindorm Tunnel Service (LTS) to perform two-way synchronization between them. When Primary Instance 1 fails or Zone A becomes unavailable, you switch the connection to Secondary Instance 2 in Zone B. The following figure shows the high availability architecture of primary/secondary disaster recovery.

[Figure: High availability architecture of primary/secondary disaster recovery]

The primary/secondary disaster recovery solution meets the high availability requirements of most users, but it is not suitable for all business scenarios. It has the following disadvantages:

  • Data is synchronized between the primary instance and the secondary instance with a latency, so strong data consistency is not ensured.

    Data synchronization between the primary instance and the secondary instance is asynchronous: business data is synchronized to Secondary Instance 2 only after it is written to Primary Instance 1. When a failover occurs, business applications cannot immediately read the latest data, even though many applications require it. If your business uses atomic interfaces such as Increment and CheckAndPut, which read data before writing it, a failure to read the latest data can cause out-of-order data or data rollback. If the network in Zone A fails, the data that has not been synchronized remains missing in Zone B until the network in Zone A recovers.

  • The resource utilization of the secondary instance is low.

    In primary/secondary disaster recovery scenarios, the resources of the secondary instance are idle most of the time and are accessed only during a failover.

  • The database administrator must perform the failover.

    Failover in the primary/secondary solution must be handled on the business side. The database administrator of the business application must monitor the status of the primary instance and, when the primary instance fails, decide whether to switch workloads over to the secondary instance. You must implement custom failover logic and procedures, and they affect your business.

The multi-zone deployment feature provided by Lindorm resolves these issues of traditional primary/secondary disaster recovery.

Architecture of multi-zone deployment

[Figure: Architecture of multi-zone deployment]

Lindorm supports multi-zone deployment. Each partition of a wide table in a Lindorm instance has an independent replica in each zone, and write-ahead logging (WAL) logs are stored in the underlying LindormDFS in Zone C. If Zone A becomes unavailable, the data in Zone C can be used to restore the data in Zone B. Data is synchronized between replicas by using the replica consensus protocol provided by Lindorm, which requires at least two replicas. In multi-zone deployment mode, Lindorm allows you to configure the consistency level of each table to meet different business requirements. The available consistency levels are strong consistency and eventual consistency.

  • Strong consistency: If you set the consistency level of a table to strong consistency, data can be read from and written to only the primary partition of the table. The secondary partitions receive data from the primary partition only through the replica consensus protocol. If all servers in the primary zone are terminated or the zone becomes unavailable, Lindorm automatically promotes a secondary zone to the primary zone, which takes a period of time. In strong consistency mode, you can always read the most recent data that is written.

  • Eventual consistency: Eventual consistency is also called weak consistency. If you set the consistency level of a table to eventual consistency, data can be read from and written to both the primary and secondary partitions of the table. The replica consensus protocol asynchronously synchronizes data that is written in Zone A to Zone B, so the data of the replicas may be temporarily inconsistent. In most cases, the synchronization latency is within 100 ms. If a read or write request to Zone A does not return a result within the specified period of time because of faults such as glitches, server failures, or zone unavailability, Lindorm automatically sends the request to Zone B. You do not need to wait, and the secondary zone does not need to be promoted to the primary zone. This ensures high availability and reduces glitches of read and write operations.

Features

Lindorm uses a distributed architecture that provides high availability. However, some businesses have higher availability requirements and need Lindorm to handle not only server failures but also extreme problems such as network unavailability and city-level disasters. Multi-zone deployment provides the following features, which ensure that databases run as expected in various unexpected conditions:

  • Multi-zone deployment provides disaster recovery capabilities at the data center level or city level.

  • Both strong data consistency requirements and eventual data consistency requirements of Lindorm instances can be met.

  • Fault identification and failover are automatically performed by Lindorm, which makes this feature easy to use.

The following table compares multi-zone deployment, primary/secondary disaster recovery, and solutions that are based on the Paxos or Raft consistency protocol.

| Feature | Multi-zone deployment (strong consistency) | Multi-zone deployment (eventual consistency) | Primary/secondary disaster recovery | Paxos-based or Raft-based consistency protocol |
| --- | --- | --- | --- | --- |
| Data loss (recovery point objective, RPO) | 0 | Less than 100 ms | Less than 1 s | 0 |
| Service recovery (recovery time objective, RTO) | 1 minute | 10 to 30 seconds | Determined by the period of time that is required to perform a failover | 30 seconds to 3 minutes |
| Access response time | Data is accessed in the primary zone and may be read and written across zones. | Data is accessed in the nearest zone and can be read from and written to multiple zones, which reduces glitches. | Data is accessed in the primary zone and may be read and written across zones. | Data is accessed in the primary zone and may be read and written across zones. |
| Ease of use | Does not affect business. | Does not affect business. | Business must be transformed. An external synchronization link is provided, and you switch links based on your business requirements. | Does not affect business. |
| Minimum number of zones that are required to store logs | 3 | 2 | 2 | 3 |
| Minimum number of zones that are required to store data | 2 | 2 | 2 | 3 |

Note

For multi-zone deployment of Lindorm, fault identification and failover for wide tables are handled by the Lindorm instance itself, regardless of whether strong consistency or eventual consistency is required. The entire process is transparent to users: you access a single Lindorm instance to read and write data, and you do not need to develop middleware to connect multiple instances and switch between them.

  • If your business does not require strong consistency, we recommend that you select a multi-zone Lindorm instance and set the consistency level of your table to eventual consistency.

  • If your business requires strong consistency, we recommend that you select a multi-zone Lindorm instance and set the consistency level of your table to strong consistency.

Limits

  • Multi-zone deployment is applicable only to LindormTable. Therefore, multi-zone Lindorm instances do not support features that depend on other engines, such as search indexes and columnar indexes. Features that are native to LindormTable, such as secondary indexes, dynamic columns, and wildcard columns, are supported.

  • The cold storage and hot and cold data separation features are not supported by multi-zone Lindorm instances.

Purchase a multi-zone Lindorm instance

You can purchase multi-zone Lindorm instances in the Lindorm console. For more information, see Create a multi-zone instance.

Select the consistency level

By default, the consistency level of tables that are created in Lindorm instances is set to eventual consistency. Eventual consistency meets the consistency requirements of most users while ensuring high availability and reducing glitches, so we recommend the eventual consistency model. In most cases, data that was written more than 100 ms ago can be read. Lindorm clients read and write data in the nearest zone, so if your read clients and write clients are in the same zone, the clients read the latest data. Partitions in the current zone remain available and do not time out even if issues such as glitches or server shutdowns occur. If your business has any of the following requirements, set the consistency level of your table to strong consistency, as shown in the sketch after this list:

  • The latest data must be read.

  • Atomic semantic interfaces such as Increment and CheckAndPut need to be used.

  • A secondary index for the table must be created.
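
For example, the following statements create a table with the strong consistency level and then create a secondary index on it. This is a minimal sketch: the table name user_orders and the index name idx_amount are hypothetical, and the CREATE INDEX statement is assumed to follow the secondary index syntax of Lindorm SQL.

    -- Atomic operations and secondary indexes require strong consistency,
    -- so the table is created with 'CONSISTENCY'='strong'.
    CREATE TABLE user_orders (user_id varchar, order_id varchar, amount bigint,
      constraint pk primary key(user_id, order_id)) 'CONSISTENCY'='strong';

    -- Create a secondary index on the strong consistency table.
    CREATE INDEX idx_amount ON user_orders(amount);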

Note

In strong consistency mode, Lindorm cannot reduce jitters and glitches by reading from multiple replicas. If the primary zone fails, a period of time is required to promote a secondary zone to the primary zone.

Configure the consistency level of a table

Note

The HBase API and the HBase shell cannot be used to specify the data consistency level of a table. If you use the HBase API to access LindormTable, create the table by using the HBase API or the HBase shell. By default, the consistency level of the created table is set to eventual consistency. You can then perform the following steps to modify the CONSISTENCY attribute of the table.

  1. Use Lindorm-cli to connect to LindormTable so that you can execute SQL statements. For more information, see Use Lindorm-cli to connect to and use LindormTable.

  2. Configure the CONSISTENCY attribute of the table.

    • When you create a table, execute the following statement to set the CONSISTENCY attribute of the table to strong consistency:

      CREATE TABLE dt (p1 integer, p2 integer, c1 varchar, c2 bigint,
        constraint pk primary key(p1 desc)) 'CONSISTENCY'='strong';

      When you create a table, execute the following statement to set the CONSISTENCY attribute of the table to eventual consistency:

      CREATE TABLE dt2 (p1 integer, p2 integer, c1 varchar, c2 bigint,
        constraint pk primary key(p1 desc)) 'CONSISTENCY'='eventual';
    • After the table is created, execute the following statement to change the value of the CONSISTENCY attribute for the table to eventual consistency:

      ALTER TABLE dt SET 'CONSISTENCY'='eventual'; 

      After the table is created, execute the following statement to change the value of the CONSISTENCY attribute for the table to strong consistency:

      ALTER TABLE dt2 SET 'CONSISTENCY'='strong';
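
After you change the CONSISTENCY attribute, you can check the table definition to confirm the change. This is a minimal sketch that assumes your Lindorm SQL version supports the SHOW CREATE TABLE statement; if it does not, check the table attributes by using the method available in your version.

      -- Confirm that the CONSISTENCY attribute of each table has the expected value.
      SHOW CREATE TABLE dt;
      SHOW CREATE TABLE dt2;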

Write data to a wide table

Multi-zone Lindorm instances are used in the same manner as single-zone instances: connect to LindormTable and write data to a wide table, as shown in the sketch below. For more information, see Use Lindorm-cli to connect to and use LindormTable.
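
For example, the following statements write a row to the table dt that is created in the previous section and then read it back. This is a minimal sketch and assumes the UPSERT write syntax of Lindorm SQL.

    -- Write one row to the wide table.
    UPSERT INTO dt (p1, p2, c1, c2) VALUES (1, 2, 'foo', 1024);

    -- Read the row back. In strong consistency mode, the read returns the most
    -- recent write. In eventual consistency mode, the read may briefly lag.
    SELECT * FROM dt WHERE p1 = 1;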

Import data to a wide table

  • You can call an API operation to import data to a multi-zone or single-zone Lindorm instance. You only need to obtain an endpoint of the Lindorm instance in the Lindorm console and use the endpoint to import data.

  • If you use BulkLoad to import data to a multi-zone Lindorm instance, import a copy of the data in each zone. If you have questions when you import data, contact technical support.