ApsaraDB for Cassandra - Deprecated: Performance overview

Last Updated: Dec 31, 2021

Background information

This topic describes how to perform a benchmark test on ApsaraDB for Cassandra and provides sample results. The results may not reflect the optimal performance of ApsaraDB for Cassandra because they vary with the kernel version and the cloud environment. You can use the tests described in this topic to assess the instance size that best suits your business. However, the most accurate method is to run simulated workloads of your own business on an ApsaraDB for Cassandra instance; this yields results that are more accurate than those of a generic benchmarking tool.

Test tool

The benchmark test for ApsaraDB for Cassandra is performed by using Yahoo! Cloud Serving Benchmark (YCSB) 0.15.0, a standard benchmarking tool. For more information, visit https://github.com/brianfrankcooper/YCSB/tree/0.15.0/cassandra.

Test environment

Purchase an ApsaraDB for Cassandra instance for testing.

  • Network: virtual private cloud (VPC). You must deploy the client and the server in the same region and zone.

  • Instance architecture: one cloud data center that consists of three nodes.

  • Instance storage: a 400 GB standard SSD for each node. The storage capacity affects instance performance.

  • Stress testing client: ecs.c6.2xlarge (8 vCPUs, 16 GB of memory).

  • Instance specifications: all specifications that are supported by ApsaraDB for Cassandra.

Workload description

The throughput and latency of ApsaraDB for Cassandra vary with the workload, for example, with the number of fields per row and the size of each row. In this example, the default workloada of YCSB is used. You can modify YCSB parameters based on your business requirements, but keep the default values for most of the parameters used to test ApsaraDB for Cassandra. For more information, visit https://github.com/brianfrankcooper/YCSB/tree/0.15.0/cassandra.

Key parameters

  • 10 fields per row (default).

  • 1 KB per row (default).

  • Read/write operation ratio: 95:5.

  • Read/write consistency level: ONE (default).

  • Number of replicas: 2. Two replicas are specified because cloud disks are used for storage.

  • Stress testing threads: adjust the number of threads based on the instance specifications. For more information, see the test results.

  • recordcount: the number of rows to import. Adjust this parameter based on the instance specifications. For more information, see the test results.

  • operationcount: the number of operations to perform during the stress test. Set this parameter to the same value as recordcount.

The consistency level setting may affect performance. Specify a consistency level based on your business requirements.
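
For reference, the following is a minimal sketch of how these parameters can be written into workloads/workloada. The recordcount and operationcount values are placeholders that correspond to the 16 million rows (1,600 × 10,000 rows) used in the sample results, the 95:5 ratio overrides the 50:50 default of workloada, and the consistency level property names are those of the YCSB Cassandra binding. Adjust all values based on your instance specifications.

# Placeholders that correspond to the 16 million rows used in the sample results.
recordcount=16000000
operationcount=16000000
workload=com.yahoo.ycsb.workloads.CoreWorkload

# 10 fields of 100 bytes each, about 1 KB per row (YCSB defaults).
fieldcount=10
fieldlength=100

# 95:5 read/write operation ratio (workloada ships with a 50:50 ratio).
readproportion=0.95
updateproportion=0.05
scanproportion=0
insertproportion=0

# Consistency level ONE for reads and writes.
cassandra.readconsistencylevel=ONE
cassandra.writeconsistencylevel=ONE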

Procedure

Step 1. Create a test table

-- Replace cn-shanghai-g with the cloud data center ID of the instance that you purchased. You can view the Data Center Name parameter in the ApsaraDB for Cassandra console.
create keyspace ycsb WITH replication = {'class': 'NetworkTopologyStrategy', 'cn-shanghai-g': 2};
create table ycsb.usertable (y_id varchar primary key, field0 varchar, field1 varchar, field2 varchar, field3 varchar, field4 varchar, field5 varchar, field6 varchar, field7 varchar, field8 varchar, field9 varchar);
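
The statements above can be executed in cqlsh. The following is a hedged example that assumes the statements are saved in a file named create_ycsb.cql (a file name used only for this example); replace the host, user name, and password with the values of your own instance, as described in Step 3:

cqlsh cds-xxxxxxxx-core-003.cassandra.rds.aliyuncs.com 9042 -u cassandra -p 123456 -f create_ycsb.cql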
                        

Step 2. Install the benchmarking tool

wget https://github.com/brianfrankcooper/YCSB/releases/download/0.15.0/ycsb-cassandra-binding-0.15.0.tar.gz
tar -zxf ycsb-cassandra-binding-0.15.0.tar.gz
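# The archive typically extracts to a directory named after it; the commands in the
# following steps assume that they are run from that directory.
cd ycsb-cassandra-binding-0.15.0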
                        

Step 3. Modify the workloads/workloada file

Add the following content:

# The endpoint of the database. You can view the endpoint in the ApsaraDB for Cassandra console.
hosts=cds-xxxxxxxx-core-003.cassandra.rds.aliyuncs.com
# The account must be granted the read and write permissions on the ycsb keyspace.
cassandra.username=cassandra
# If you forget the password, you can change it in the console.
cassandra.password=123456
                        

Step 4. Prepare data (write test)

nohup ./bin/ycsb load cassandra2-cql -threads $THREAD_COUNT -P workloads/workloada -s > $LOG_FILE 2>&1 &
                        

The result of this test shows the maximum write throughput. To find the maximum throughput, increase the value of $THREAD_COUNT and check whether the throughput still increases; stop when it no longer does. Use a stress testing client with medium or high specifications so that the client does not become the bottleneck.
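
A minimal sketch of such a sweep is shown below; the thread counts and log file names are illustrative, and the throughput line is taken from the standard YCSB summary output:

for THREAD_COUNT in 50 100 200 400; do
    LOG_FILE=load_${THREAD_COUNT}threads.log
    # Rerun the load phase with an increasing number of threads.
    ./bin/ycsb load cassandra2-cql -threads ${THREAD_COUNT} -P workloads/workloada -s > ${LOG_FILE} 2>&1
    # Print the overall throughput (OPS) that YCSB reports for this run.
    grep "\[OVERALL\], Throughput" ${LOG_FILE}
done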

Step 5. Perform stress testing (read and write test)

nohup ./bin/ycsb run cassandra2-cql -threads $THREAD_COUNT -P workloads/workloada -s > $LOG_FILE 2>&1 &
                        

The result of this test shows the read and write performance.
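
The metrics listed in the test results below can be read from the YCSB summary in $LOG_FILE. A minimal sketch, assuming the standard YCSB 0.15.0 output format (the 99.9th percentile is not printed by default and typically requires the hdrhistogram.percentiles measurement property, for example hdrhistogram.percentiles=95,99,99.9):

# Overall throughput (OPS).
grep "\[OVERALL\], Throughput" ${LOG_FILE}
# Average and percentile read latencies in microseconds (RAVG, RP95, RP99).
grep -E "\[READ\], (AverageLatency|95thPercentileLatency|99thPercentileLatency)" ${LOG_FILE}
# Average write latency in microseconds (WAVG); the write portion of workloada consists of UPDATE operations.
grep "\[UPDATE\], AverageLatency" ${LOG_FILE}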

Test result

The test results in this example are provided only for reference. The throughput and latency vary with the workload. You can use different parameters and workloads, or increase the volume of test data for a longer test duration, to obtain results that better match your business. The client specifications may affect the test results, so do not use shared instance types for the stress testing client.

Test result description

  • Load: data preparation (write test)

  • Run: stress testing (read and write test)

  • OPS: operations per second, which indicates the overall throughput.

  • WAVG: the average write latency. Unit: microseconds.

  • RAVG: the average read latency. Unit: microseconds.

  • RP95: the 95th percentile read latency. Unit: microseconds.

  • RP99: the 99th percentile read latency. Unit: microseconds.

  • RP999: the 99.9th percentile read latency. Unit: microseconds.

  • Thread: the number of testing threads in the data preparation (Load) phase and in the stress testing (Run) phase. For example, 100/100 indicates 100 threads in each phase.

Two tests are performed during the stress testing phase: a full load test (about 80% CPU load) and a regular test (about 60% CPU load).

80% CPU load

Specification | Thread  | Data volume (ten thousand rows) | Load OPS | Load WAVG | Run OPS | Run WAVG | Run RAVG | Run RP95 | Run RP99 | Run RP999
4 vCPUs 8 GB  | 100/100 | 1600                            | 32277    | 3071      | 29745   | 2846     | 3363     | 7795     | 23039    | 43999

60% CPU load

Specification | Thread  | Data volume (ten thousand rows) | Load OPS | Load WAVG | Run OPS | Run WAVG | Run RAVG | Run RP95 | Run RP99 | Run RP999
4 vCPUs 8 GB  | 100/16  | 1600                            | 32063    | 3093      | 16721   | 514      | 974      | 1879     | 3047     | 28063

Note

This topic lists the test results of an instance that uses standard SSDs. Ultra disks also provide high IOPS. When the data volume and the instance specifications are small, ultra disks deliver performance close to that of SSDs because storage is no longer the performance bottleneck. Therefore, ultra disks are not tested separately. To obtain more accurate results, run simulated workloads that match your business scenario, and take the impact of your application into account. For example, the garbage collection mechanism of Java clients may increase latency.