Comparative analysis on Redis 4.0, Codis and Alibaba Cloud ApsaraDB for Redis clusters

Posted: Mar 2, 2017 14:29
This article presents a comparative analysis of the Redis 4.0 cluster, Codis, and Alibaba Cloud ApsaraDB for Redis clusters.
1. Architecture
1.1. Redis 4.0 cluster
The Redis 4.0 cluster adopts a decentralized structure: the metadata of the cluster is distributed across the nodes, and master-slave switchover relies on master election negotiated among multiple nodes.
Redis provides the redis-trib tool for cluster deployment and O&M operations.
Access from the client to the hashed database nodes depends on a smart client; that is, the client must evaluate the routing information returned by Redis and select the node accordingly. For example, if a client sends a request to a node and the requested key is not located on that node, the client must interpret the returned MOVED or ASK response and redirect the request to the corresponding node.
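As an illustration, here is a minimal sketch of the redirection handling a smart client must perform, written with the Python redis-py package; the starting node address is a placeholder, and a real client would also cache the slot-to-node map rather than chase every redirection:

```python
# Minimal sketch of smart-client redirection handling with redis-py 4.x,
# which raises MovedError/AskError for cluster redirections (the starting
# node address is a placeholder; a real client would cache the slot map).
import redis
from redis.exceptions import AskError, MovedError

def cluster_get(host, port, key, max_redirects=5):
    asking = False
    for _ in range(max_redirects):
        conn = redis.Redis(host=host, port=port)
        try:
            if asking:
                conn.execute_command("ASKING")  # must precede an ASK retry
                asking = False
            return conn.get(key)
        except MovedError as e:   # slot has moved for good: retry there
            host, port = e.host, e.port
        except AskError as e:     # slot is migrating: one-shot redirect
            host, port, asking = e.host, e.port, True
    raise RuntimeError("too many redirections")

print(cluster_get("127.0.0.1", 7000, "foo"))  # placeholder address
```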

1.2. Codis
Codis is composed of three major components:
• Codis-server: a Redis database with the source code modified. It supports slots, resizing and migration.
• Codis-proxy: multi-threaded, with its core written in Go.
• Codis Dashboard: the cluster manager.
A web-based graphical interface is provided for managing the cluster.
The metadata of the cluster is stored in ZooKeeper or etcd.
An independent component codis-ha is provided to take charge of the master-slave switchover of Redis nodes.
Because Codis is proxy-based, the client is unaware of route table changes. The client calls the list proxy command on the Codis dashboard to get the list of all proxies, and decides which proxy node to access according to its own round-robin policy to achieve load balancing.
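A minimal sketch of that client-side policy, assuming the proxy list has already been fetched from the dashboard (the addresses below are placeholders):

```python
# Round-robin load balancing across Codis proxies; the address list is a
# placeholder that would come from the dashboard's proxy list in practice.
import itertools
import redis

proxies = [("10.0.0.1", 19000), ("10.0.0.2", 19000), ("10.0.0.3", 19000)]
next_proxy = itertools.cycle(proxies)

def get_connection():
    host, port = next(next_proxy)   # pick the next proxy in rotation
    return redis.Redis(host=host, port=port)

get_connection().set("foo", "bar")  # any proxy can serve any key
```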

1.3. Alibaba Cloud ApsaraDB for Redis
The cluster version of Alibaba Cloud ApsaraDB for Redis is composed of three major components:
• Redis-config: the cluster manager.
• Redis-server: a Redis database with optimized source code. It supports slots, resizing and migration.
• Redis-proxy: single-threaded, with its core written in C++14.
Redis-proxy is stateless. A cluster can mount multiple proxy nodes based on the cluster type.
Redis-config is dual-node structured and supports disaster recovery.
The metadata of the cluster is stored in RDS databases.
An independent HA component is provided to take charge of the master-slave switchover of the cluster.
ApsaraDB for Redis clusters are also proxy-based. Users are unaware of the route information; a VIP is provided to the client for access, so the client needs only one connection address and does not need to care about load balancing across proxies.
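By contrast, connecting to an ApsaraDB for Redis cluster looks the same as connecting to a standalone instance; a minimal sketch with redis-py (the endpoint and password are placeholders):

```python
# One VIP endpoint; the proxy layer hides all routing and load balancing.
# The endpoint and password below are placeholders.
import redis

r = redis.Redis(host="r-example.redis.rds.aliyuncs.com", port=6379,
                password="placeholder")
r.set("foo", "bar")
print(r.get("foo"))
```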

2. Performance comparison
2.1. Stress testing environment
The three Redis clusters above are each set up on three physical machines; every machine is configured with a Gigabit NIC, a 24-core CPU, and 189 GB of memory. The memtier_benchmark stress testing tool, the Codis proxy or Alibaba Cloud proxy, and the Redis server run on the three physical machines respectively. Each Redis server uses the Redis kernel shipped with its own cluster.
The key size is fixed at 32 bytes and the set/get ratio at 1:10. Each thread has 16 clients. The stress testing lasts five minutes in each of the 8-thread, 16-thread, 32-thread, 48-thread, and 64-thread scenarios.
The Redis 4.0 cluster requires a cluster-aware client that connects to additional nodes, which memtier_benchmark does not support; hash tags are therefore used for the Redis 4.0 stress test so that all benchmark keys land on a single node (a sketch of the slot-mapping rule follows).
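For reference, the Redis cluster maps each key to one of 16384 slots using CRC16, and when the key contains a hash tag in braces, only the tagged substring is hashed. That is why tagged benchmark keys all land on one node:

```python
# Redis cluster key-to-slot mapping: CRC16(key) mod 16384, hashing only
# the substring inside a non-empty {...} hash tag when one is present.
def crc16(data: bytes) -> int:
    # CRC-16/XMODEM, the variant used by the Redis cluster
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty tag only
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a tag map to the same slot, hence to the same node:
assert key_slot("{user1}:a") == key_slot("{user1}:b")
```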
Each cluster has eight master databases and eight slave databases, with AOF enabled; the minimum AOF rewrite buffer is 64 MB.
The stress testing targets are, respectively, a single Redis 4.0 node, a single Alibaba Cloud Redis-proxy, a single-core Codis-proxy, and an 8-core Codis-proxy.
Codis is built with Go 1.7.4.
The stress testing result is as follows:
[Figure: stress testing results for each cluster under the thread-count scenarios above]
We can see that the single-core Codis-proxy delivers the poorest performance. The stress test of the 8-core Codis-proxy did not use hash tags for the keys, which scatters requests across the eight backend database nodes, roughly equivalent to running eight Alibaba Cloud Redis-proxies, so its performance figures are naturally higher.
The performance of the single-core Alibaba Cloud Redis-proxy approaches that of a native Redis database node under sufficiently heavy load.
In a real production environment, using the native Redis cluster requires the client to implement the cluster protocol, parsing MOVED, ASK, and other responses and redirecting requests to the right node. A random key access may take two round trips, so the performance will not match that of a single node.

3. Comparison of supported features
3.1. Comparison of major protocols supported

For more commands, refer to the documentation of the specific cluster version.

3.2. Comparison of horizontal scaling
The Redis 4.0, Codis, and Alibaba Cloud Redis distributed clusters all implement slot-based management; the smallest scaling unit is the slot.
The essence of horizontal scaling in a distributed cluster is managing the route information of the cluster nodes and migrating data. For all three clusters, the smallest unit of data migration is the key.

3.2.1 Principle of horizontal scaling of Redis cluster
The Redis 4.0 cluster supports moving a specified slot between nodes and automatically redistributing the existing slots after an empty node is added to the cluster. Taking redis-trib's move_slot as an example, the slot moving process is as follows (a client-side sketch follows the list):
• Step 1): Call the setslot command to modify the slot status on the source and target nodes.
• Step 2): Get the slot key list on the source node.
• Step 3): Call the migrate command to migrate the keys. During migration, Redis remains blocked; a result is returned only after the keys are successfully restored on the target node.
• Step 4): Call the setslot command again to assign the slot to the target node on both the source and target nodes.
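A minimal sketch of these four steps using raw cluster commands through redis-py (the node handles, node IDs, and batch size of 100 are placeholders; error handling is omitted):

```python
# Sketch of the four move_slot steps using raw cluster commands via
# redis-py (node handles, IDs, and the batch size are placeholders).
import redis

def move_slot(src, dst, slot, dst_host, dst_port, src_id, dst_id):
    # Step 1: mark the slot importing on the target, migrating on the source.
    dst.execute_command("CLUSTER", "SETSLOT", slot, "IMPORTING", src_id)
    src.execute_command("CLUSTER", "SETSLOT", slot, "MIGRATING", dst_id)
    while True:
        # Step 2: fetch a batch of keys in the slot from the source node.
        keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", slot, 100)
        if not keys:
            break
        # Step 3: MIGRATE blocks until the target has restored the keys.
        src.execute_command("MIGRATE", dst_host, dst_port, "", 0, 5000,
                            "KEYS", *keys)
    # Step 4: assign the slot to the target node on both nodes.
    for node in (src, dst):
        node.execute_command("CLUSTER", "SETSLOT", slot, "NODE", dst_id)
```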
How can we ensure data consistency during the migration process?
The Redis cluster provides a redirection mechanism for slots in the migrating state: it returns ASK to the client, which sends the ASKING command to the target node upon receiving ASK and then reissues the request there. The redirection occurs when the accessed key meets all of the following conditions:
• The slot for the key is located on this node; if not, MOVED is returned.
• The slot is in the migration status.
• The key does not exist.
As mentioned, migrate is a synchronous, blocking operation; as long as the key still exists on the source node, it can be read and written there even while the slot is migrating, which ensures data consistency. The serving node's decision logic is sketched below.
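The following is a hypothetical sketch of that decision (the structure and method names are invented for illustration; this is not Redis source code):

```python
# Hypothetical sketch of the serving node's decision for a key access
# while its slot is migrating (all names are invented for illustration).
def handle_key_access(node, slot, key):
    if slot not in node.owned_slots:
        return "MOVED"                  # slot does not live on this node
    if node.slot_state(slot) == "MIGRATING" and not node.key_exists(key):
        return "ASK"                    # key already moved: redirect
    return node.execute(key)            # key still here: serve it locally
```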

3.2.2 Principle of horizontal scaling of Codis
Codis implements the same slot re-distribution policy as the Redis cluster. The Codis-server kernel stores no slot routing information and does not parse which slot a key belongs to; during dbadd and similar operations, it simply records the key in a dict keyed by slot. If the key has a tag, it computes crc32 over the tag and inserts the key into a skiplist keyed by the crc32 value.
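A simplified sketch of that bookkeeping, using the fact that Codis maps keys to 1024 slots via crc32 (the dict and skiplist are reduced to Python containers for illustration):

```python
# Simplified sketch of Codis-server bookkeeping on dbadd: keys map to
# 1024 slots via crc32; dict and skiplist become Python containers.
import zlib
from collections import defaultdict

NUM_SLOTS = 1024
slot_keys = defaultdict(set)   # per-slot dict: slot -> keys in that slot
tag_keys = defaultdict(set)    # skiplist stand-in: crc32(tag) -> keys

def extract_tag(key: str):
    start, end = key.find("{"), key.find("}")
    return key[start + 1:end] if -1 < start < end - 1 else None

def on_dbadd(key: str):
    tag = extract_tag(key)
    hashed = tag if tag is not None else key
    slot = zlib.crc32(hashed.encode()) % NUM_SLOTS
    slot_keys[slot].add(key)                     # record key under its slot
    if tag is not None:
        tag_keys[zlib.crc32(tag.encode())].add(key)  # group tagged keys

on_dbadd("{user1}:profile")
on_dbadd("{user1}:cart")   # same tag -> same slot, grouped for migration
```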
The Codis dashboard starts the migration state machine program in the background. It ensures that all proxies are notified before the migration starts, that is, the prepare stage; if any proxy fails to acknowledge, the migration fails. The migration steps are similar to those of the Redis cluster, except for the following:
• The slot state information is stored in ZooKeeper or etcd.
• The slotsmgrttagslot command is sent instead of the migrate command. When executed, slotsmgrttagslot picks a key in the slot at random for migration; if the key has a tag, it fetches all the keys in the skiplist mentioned above and migrates them in bulk.
Codis migration is also a synchronous, blocking operation.
In terms of data consistency, since the Codis-server kernel does not maintain slot state, the job falls to the proxy component. When Codis-proxy processes a request, it first checks the state of the slot where the key is located; if the slot is migrating, it issues a migration command for that specific key to the Codis-server, and only after the key has been migrated does it forward the request to the target Codis-server (sketched below). This approach is simple and requires few changes to the Redis kernel, but it makes migration slow and may leave the client stuck for a long time.
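A hedged sketch of that proxy-side flow, assuming the codis-server exposes the slotsmgrttagone command (which migrates a single key together with any keys sharing its tag); the routing structures are invented placeholders:

```python
# Hedged sketch of Codis-proxy's per-key consistency guarantee during
# migration (proxy.route and proxy.slot_state are invented placeholders;
# hash-tag extraction is omitted for brevity).
import zlib

NUM_SLOTS = 1024

def handle_request(proxy, command, key, *args):
    slot = zlib.crc32(key.encode()) % NUM_SLOTS
    src, dst, dst_host, dst_port = proxy.route[slot]
    if proxy.slot_state[slot] == "MIGRATING":
        # Force this key (plus any keys sharing its tag) to the target
        # before serving it: slotsmgrttagone host port timeout key.
        src.execute_command("SLOTSMGRTTAGONE", dst_host, dst_port, 5000, key)
        return dst.execute_command(command, key, *args)
    return src.execute_command(command, key, *args)
```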

3.2.3 Principle of horizontal scaling of Alibaba Cloud ApsaraDB for Redis
Apart from supporting migration of a specified slot from a specified source node, Alibaba Cloud ApsaraDB for Redis can also allocate slots dynamically based on factors such as node capacity and slot size, with minimizing the impact on cluster availability as its allocation principle. The migration roughly follows the steps below:
• Step 1): The Redis-config calculates the source and target nodes and slots.
• Step 2): The Redis-config sends the slot migration command to the Redis-server.
• Step 3): The Redis-server starts the state machine and migrates keys in batches.
• Step 4): The Redis-config periodically checks the Redis-server and updates the slot status.
Unlike Codis, Alibaba Cloud ApsaraDB for Redis maintains the slot information in the kernel. It abandons both the Codis practice of migrating an entire slot and the Redis cluster practice of migrating a single key, and instead supports bulk migration in the kernel to accelerate migration.
The data migration process in Alibaba Cloud ApsaraDB for Redis is asynchronous: the source does not block waiting for the target node to restore each key; instead, the target node notifies the source, which periodically checks whether restoration succeeded. This reduces the impact of synchronous blocking on access to other slots.
Meanwhile, because the migration is asynchronous, ApsaraDB for Redis ensures data consistency by processing a write request normally when it determines that the key exists but is not in the list of keys currently being migrated. Its other data consistency mechanisms are the same as those of the Redis 4.0 cluster. A hypothetical sketch of this write path follows.
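Since the ApsaraDB for Redis kernel is not public, the following only sketches the write-path rule described above; all structure and method names are invented for illustration:

```python
# Hypothetical sketch of the write path during asynchronous migration;
# slot_state, key_exists, migrating_keys and redirect are invented names
# (the actual ApsaraDB for Redis kernel is not public).
def handle_write(node, slot, key, apply_write):
    if node.slot_state(slot) == "MIGRATING":
        if node.key_exists(key) and key not in node.migrating_keys(slot):
            return apply_write(key)       # key not in flight: write locally
        # Key absent or currently being migrated: fall back to the same
        # redirection rules as the Redis 4.0 cluster.
        return node.redirect(slot, key)
    return apply_write(key)               # slot stable: normal write path
```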

3.3. Others

Feature  | Redis 4.0 cluster                                      | Codis                                                          | ApsaraDB for Redis
Password | Not supported; the redis-trib script must be modified | Supported; the password of all components must be consistent | Supported
Hot upgrades of the Alibaba Cloud ApsaraDB for Redis kernel and proxy require no connection interruption and have no impact on the client.

4. Conclusion
ApsaraDB for Redis is a stable, reliable, and scalable database service with superb performance. It is built on the Apsara distributed file system and full-SSD high-performance storage, and supports both master-slave and cluster-based high-availability architectures. It offers a full range of database solutions, including disaster switchover, failover, online expansion, and performance optimization. You should give ApsaraDB for Redis a try.