
How to check big keys?

Last Updated: Jul 27, 2017

Background

ApsaraDB for Redis provides complex data structures such as list, hash, and zset. Improper key design can produce excessively large keys. Because ApsaraDB for Redis uses a single-threaded model, reading or deleting a large key blocks other requests and degrades performance. In cluster mode, a large key may also consume most of the memory of a single shard node. A scan tool is therefore needed to locate excessively large keys.

To scan for large keys, run the SCAN command on an instance of the master-slave version or the ISCAN command on an instance of the cluster version. The ISCAN syntax is shown below; the number of nodes can be obtained with the INFO command.

  ISCAN idx cursor [MATCH pattern] [COUNT count]

idx is the node ID, starting from 0. For an eight-node cluster instance (16–64 GB), idx ranges from 0 to 7. A 128 GB or 256 GB cluster instance has 16 nodes.
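For reference, the following is a minimal redis-py sketch that reads the node count from INFO; the host and password below are placeholders. ApsaraDB for Redis cluster instances report a nodecount field in INFO, which the script in the procedure below also relies on.

  import redis

  # Placeholders: replace with the connection address and password of your instance.
  r = redis.StrictRedis(host='r-example.redis.rds.aliyuncs.com', port=6379, password='password')
  info = r.info()
  # Cluster instances report 'nodecount' in INFO; treat a missing field as a single node.
  nodecount = info.get('nodecount', 1)
  print 'nodecount:', nodecount  # greater than 1 means cluster version, scan each node with ISCAN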

Procedure

  1. Run the following command to download the Python client:

    wget "https://pypi.python.org/packages/68/44/5efe9e98ad83ef5b742ce62a15bea609ed5a0d1caf35b79257ddb324031a/redis-2.10.5.tar.gz#md5=3b26c2b9703b4b56b30a1ad508e31083"
  2. Decompress and install the Python client.

    tar -xvf redis-2.10.5.tar.gz
    cd redis-2.10.5
    sudo python setup.py install
  3. Create the following scan script and save it as find_bigkey.py:

    # Scan an ApsaraDB for Redis instance for big keys (Python 2, redis-py 2.10.5).
    import sys
    import redis

    def check_big_key(r, k):
        # A key is reported as "big" when its value length or element count exceeds 10240.
        bigKey = False
        length = 0
        try:
            type = r.type(k)
            if type == "string":
                length = r.strlen(k)
            elif type == "hash":
                length = r.hlen(k)
            elif type == "list":
                length = r.llen(k)
            elif type == "set":
                length = r.scard(k)
            elif type == "zset":
                length = r.zcard(k)
        except:
            # The key may have expired or been deleted during the scan.
            return
        if length > 10240:
            bigKey = True
        if bigKey:
            print db, k, type, length  # db is the database currently being scanned (set in the main loop)

    def find_big_key_normal(db_host, db_port, db_password, db_num):
        # Master-slave version: iterate over all keys with SCAN, 1000 keys per iteration.
        r = redis.StrictRedis(host=db_host, port=db_port, password=db_password, db=db_num)
        for k in r.scan_iter(count=1000):
            check_big_key(r, k)

    def find_big_key_sharding(db_host, db_port, db_password, db_num, nodecount):
        # Cluster version: run ISCAN against every node, 1000 keys per iteration.
        r = redis.StrictRedis(host=db_host, port=db_port, password=db_password, db=db_num)
        for node in range(0, nodecount):
            cursor = 0
            while True:
                iscan = r.execute_command("iscan", str(node), str(cursor), "count", "1000")
                for k in iscan[1]:
                    check_big_key(r, k)
                cursor = iscan[0]
                print cursor, db, node, len(iscan[1])
                if cursor == "0":
                    break

    if __name__ == '__main__':
        if len(sys.argv) != 4:
            print 'Usage: python', sys.argv[0], 'host port password'
            exit(1)
        db_host = sys.argv[1]
        db_port = sys.argv[2]
        db_password = sys.argv[3]
        r = redis.StrictRedis(host=db_host, port=int(db_port), password=db_password)
        nodecount = r.info()['nodecount']
        keyspace_info = r.info("keyspace")
        # Check every logical database listed in the keyspace section of INFO.
        for db in keyspace_info:
            print 'check', db, keyspace_info[db]
            if nodecount > 1:
                find_big_key_sharding(db_host, db_port, db_password, db.replace("db", ""), nodecount)
            else:
                find_big_key_normal(db_host, db_port, db_password, db.replace("db", ""))
  4. Run the python find_bigkey.py host 6379 password command to search for large keys, replacing host and password with the connection address and password of your instance.

    Note: The command returns the large keys found in ApsaraDB for Redis instances of both the master-slave version and the cluster version. The default size threshold is 10,240: for example, string-type keys whose value is longer than 10,240 bytes, list-type keys with more than 10,240 elements, and hash-type keys with more than 10,240 fields are reported. The threshold is hard-coded in the script; an optional way to make it configurable is sketched after this procedure.

    The script scans 1,000 keys per iteration by default, so its impact on the service is minor. However, we recommend that you run it during off-peak hours to avoid any impact from the scan operations.
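If the default threshold of 10,240 does not suit your data, the check in check_big_key can be made configurable. The following is a minimal sketch of that variant; the threshold parameter is an illustrative addition and is not part of the original script:

    # Sketch (assumption): a configurable threshold instead of the hard-coded 10240.
    def check_big_key(r, k, threshold=10240):
        length = 0
        try:
            key_type = r.type(k)
            if key_type == "string":
                length = r.strlen(k)
            elif key_type == "hash":
                length = r.hlen(k)
            elif key_type == "list":
                length = r.llen(k)
            elif key_type == "set":
                length = r.scard(k)
            elif key_type == "zset":
                length = r.zcard(k)
        except redis.RedisError:
            return  # the key may have been deleted or expired during the scan
        if length > threshold:
            print k, key_type, length

The callers in find_big_key_normal and find_big_key_sharding would then pass the desired threshold, for example check_big_key(r, k, 50000).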
