The data structures used in ApsaraDB for Redis have a significant impact on service performance. A large number of big keys can degrade performance and even destabilize the service. Regular memory analysis and optimization help keep the service stable and efficient. To avoid affecting online services, you can run the BGSAVE command to generate an RDB file, and then use redis-rdb-tools and SQLite to analyze the file offline.
Prerequisites
- A Linux-based Elastic Compute Service (ECS) instance is created.
- The SQLite database is installed on the ECS instance.
Create an RDB file
- For an ApsaraDB for Redis instance, back up the instance and download the backup data as an RDB file in the ApsaraDB for Redis console. For more information, see Manual backup (instant backup).
- For an on-premises Redis database, run the BGSAVE command on the client to generate an RDB file.
Introduction to redis-rdb-tools
Use redis-rdb-tools to generate a memory snapshot from the obtained RDB file. redis-rdb-tools is a Python-based tool for parsing RDB files. It supports the following features:
- Generate a memory snapshot.
- Convert data in an RDB file to JSON format.
- Compare two RDB files to find their differences.
You can install redis-rdb-tools in either of the following ways:
- Install it from Python Package Index (PyPI) on the ECS instance.
pip install rdbtools
- Install it from source code on the ECS instance.
git clone https://github.com/sripathikrishnan/redis-rdb-tools
cd redis-rdb-tools
sudo python setup.py install
Use redis-rdb-tools to generate a memory snapshot
Run the following command on the ECS instance to generate a memory snapshot in CSV format:
rdb -c memory dump.rdb > memory.csv
The memory snapshot contains the following columns:
- Database ID
- Data type
- Key name
- Memory usage in bytes, including the memory occupied by the key, the value, and related overhead
- Encoding
- Number of elements
- Length of the largest element
Note: The memory usage is a theoretical approximation and is generally slightly lower than the actual value.
A sample CSV file is as follows:
$ head memory.csv
database,type,key,size_in_bytes,encoding,num_elements,len_largest_element
0,string,"orderAt:377671748",96,string,8,8,
0,string,"orderAt:413052773",96,string,8,8,
0,sortedset,"Artical:Comments:7386",81740,skiplist,479,41,
0,sortedset,"pay:id:18029",2443,ziplist,84,16,
0,string,"orderAt:452389458",96,string,8,8
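Before moving to SQLite, you can also inspect a snapshot directly with a short script. The following is a minimal sketch that parses a few sample rows (taken from the output above, so the data is illustrative only) with Python's built-in csv module and ranks keys by reported memory usage:

```python
import csv
import io

# Sample rows modeled on the memory snapshot above (illustrative data only).
SNAPSHOT = (
    "database,type,key,size_in_bytes,encoding,num_elements,len_largest_element\n"
    '0,string,"orderAt:377671748",96,string,8,8\n'
    '0,sortedset,"Artical:Comments:7386",81740,skiplist,479,41\n'
    '0,sortedset,"pay:id:18029",2443,ziplist,84,16\n'
)

rows = list(csv.DictReader(io.StringIO(SNAPSHOT)))
# Sort keys by reported memory usage, largest first.
rows.sort(key=lambda r: int(r["size_in_bytes"]), reverse=True)
for r in rows:
    print(r["key"], r["size_in_bytes"])
```

For a full snapshot with millions of keys, importing the CSV file into SQLite, as described below, scales better than ad hoc scripting.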
Import the CSV file to the SQLite database
SQLite is a lightweight relational database. After importing the CSV file to the SQLite database, you can use SQL statements to analyze the data in the CSV file.
- The SQLite version must be 3.16.0 or later.
- Before importing the CSV file, delete the trailing comma (,) at the end of each data line. Otherwise, SQLite interprets the trailing comma as an extra empty column.
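The trailing commas can be removed with any text tool; the following is a minimal Python sketch that rewrites a file in place, demonstrated on a temporary file standing in for memory.csv (the file name and sample row are assumptions for illustration):

```python
import os
import tempfile

def strip_trailing_commas(path):
    """Remove the trailing comma that ends each data line of the snapshot."""
    with open(path) as f:
        lines = [line.rstrip("\n").rstrip(",") + "\n" for line in f]
    with open(path, "w") as f:
        f.writelines(lines)

# Demonstrate on a small temporary file standing in for memory.csv.
sample = '0,string,"orderAt:377671748",96,string,8,8,\n'
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write(sample)
    path = tmp.name
strip_trailing_commas(path)
with open(path) as f:
    cleaned = f.read()
print(cleaned)  # the trailing comma is gone
os.remove(path)
```

The header line has no trailing comma, so it passes through unchanged.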
Run the following commands to import the CSV file:
sqlite3 memory.db
sqlite> create table memory(database int,type varchar(128),key varchar(128),size_in_bytes int,encoding varchar(128),num_elements int,len_largest_element varchar(128));
sqlite> .mode csv memory
sqlite> .import memory.csv memory
Analyze the memory snapshot generated by redis-rdb-tools
After importing the CSV file to the SQLite database, you can use SQL statements to analyze the data in the CSV file. For example:
- Query the number of keys in memory.
sqlite> select count(*) from memory;
- Query the total memory usage.
sqlite> select sum(size_in_bytes) from memory;
- Query the top 10 keys by memory usage.
sqlite> select * from memory order by size_in_bytes desc limit 10;
- Query lists with more than 1,000 elements.
sqlite> select * from memory where type='list' and num_elements > 1000;
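If you prefer to automate the analysis, the same queries can be scripted with Python's built-in sqlite3 module. The following sketch builds an in-memory table with the same schema and a few hypothetical rows in place of the imported memory.csv, then runs the queries above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same schema as the table created in the sqlite3 shell above.
conn.execute(
    "create table memory(database int, type varchar(128), key varchar(128),"
    " size_in_bytes int, encoding varchar(128), num_elements int,"
    " len_largest_element varchar(128))"
)
# Hypothetical rows standing in for the imported memory.csv.
rows = [
    (0, "string", "orderAt:377671748", 96, "string", 8, "8"),
    (0, "sortedset", "Artical:Comments:7386", 81740, "skiplist", 479, "41"),
    (0, "list", "queue:jobs", 52000, "linkedlist", 1500, "64"),
]
conn.executemany("insert into memory values(?,?,?,?,?,?,?)", rows)

# Number of keys in memory.
total_keys = conn.execute("select count(*) from memory").fetchone()[0]
# Total memory usage in bytes.
total_bytes = conn.execute("select sum(size_in_bytes) from memory").fetchone()[0]
# Top 10 keys by memory usage.
top = conn.execute(
    "select key, size_in_bytes from memory order by size_in_bytes desc limit 10"
).fetchall()
# Lists with more than 1,000 elements.
big_lists = conn.execute(
    "select key from memory where type='list' and num_elements > 1000"
).fetchall()
print(total_keys, total_bytes, top[0], big_lists)
```

Running the script against the real memory.db (replace `:memory:` with the database file path) produces the same results as the interactive shell.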