This topic provides answers to some frequently asked questions about the historical versions of PolarDB-X 1.0.
Errors occur on a PolarDB-X 1.0 instance for a period of time after a primary/secondary switchover is performed on the ApsaraDB RDS instance that is associated with the instance
Problem description
Connections between the PolarDB-X 1.0 instance and the ApsaraDB RDS instance are established by using a connection pool, which caches a number of connections. In most cases, old connections are terminated during a primary/secondary switchover of the ApsaraDB RDS instance. However, the PolarDB-X 1.0 instance, which serves as the client, detects that an old connection is terminated only when an SQL statement is executed on that connection for the first time after the switchover. Therefore, requests from the PolarDB-X 1.0 instance may fail for a period of time after the switchover. Each failed request eliminates one dirty connection.
If only a small number of requests are sent to the instance, it takes a long period of time to eliminate all dirty connections, and errors are reported throughout this period.
Suggestion
If you want to fix the preceding issue at the earliest opportunity, you can increase the value of the socket_timeout parameter by 1 in the PolarDB-X console. This change causes the PolarDB-X 1.0 instance to rebuild its connection pools.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance. After the upgrade, the PolarDB-X 2.0 instance can detect the primary/secondary switchovers of data nodes and automatically rebuild connection pools.
Supported versions
All versions of PolarDB-X 1.0.
SQL statements executed on the PolarDB-X 1.0 instance may hang after a primary/secondary switchover that is triggered by a failure of the ApsaraDB RDS instance
Problem description
During a primary/secondary switchover of the ApsaraDB RDS instance, old connections are terminated, and errors are immediately reported if the PolarDB-X 1.0 instance uses the terminated connections. However, if the ApsaraDB RDS instance fails, the old connections may not be terminated, and the PolarDB-X 1.0 instance cannot detect that the connections are closed. By default, the socket_timeout parameter at the TCP layer is set to 900s. Therefore, you may need to wait at least 900 seconds before errors are reported for SQL statements that the PolarDB-X 1.0 instance sends to the ApsaraDB RDS instance.
Suggestion
If you want to fix the preceding issue at the earliest opportunity, you can increase the value of the socket_timeout parameter by 1 in the PolarDB-X console. This change causes the PolarDB-X 1.0 instance to rebuild its connection pools.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance. After the upgrade, the PolarDB-X 2.0 instance can detect the primary/secondary switchovers of data nodes and automatically rebuild connection pools.
Supported versions
All versions of PolarDB-X 1.0.
The PolarDB-X 1.0 instance cannot connect to the ApsaraDB RDS instance during the cross-zone migration or VPC switching of the ApsaraDB RDS instance
Problem description
When network changes such as cross-zone migration or Virtual Private Cloud (VPC) switching are performed on the ApsaraDB RDS instance, the PolarDB-X 1.0 instance cannot detect these changes. This causes connection failures between the two instances.
Suggestion
Use the connection repair feature on the PolarDB-X 1.0 tab in the PolarDB-X console to rebuild the network path between the PolarDB-X 1.0 instance and the ApsaraDB RDS instance.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance.
Supported versions
All versions of PolarDB-X 1.0.
The PolarDB-X 1.0 instance stops responding because its connection pools are exhausted by high-concurrency KILL operations
Problem description
If a large number of KILL operations are performed at the same time, the connection pools of the PolarDB-X 1.0 instance may be exhausted. In this case, other SQL requests receive no responses because resources are exhausted.
Suggestion
We recommend that you do not send a large number of KILL requests at a high frequency to the PolarDB-X 1.0 instance.
Supported versions
All versions of PolarDB-X 1.0.
Incorrect SQL query results are returned because the results of the NOW function are cached by transaction connections
Problem description
The value returned by the NOW function in an SQL statement remains unchanged within a transaction or across consecutive transactions. The value is refreshed or recalculated only after the front-end connection is reset or a non-transactional request is completed.
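The following minimal sketch illustrates the behavior on an affected version. The timestamps and the SLEEP interval are illustrative only:
BEGIN;
SELECT NOW();      -- returns, for example, 2021-01-01 00:00:00
SELECT SLEEP(5);   -- wait 5 seconds on the same connection
SELECT NOW();      -- may still return the cached 2021-01-01 00:00:00
COMMIT;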
Suggestion
Upgrade the PolarDB-X 1.0 instance to the latest version of 5.4.12.
Supported versions
All versions of 5.4.9.
SQL statements fail to be executed because a large number of threads in the PolarDB-X 1.0 instance are blocked when the threads obtain metadata
Problem description
If the PolarDB-X 1.0 instance has multiple logical databases, the worker thread pool that manages the metadata of the instance may be destroyed when you manually delete a database in the PolarDB-X console. As a result, a large number of threads that query data from other logical databases are blocked for a long period of time when the threads obtain metadata. In this case, queries do not respond and timeout errors occur.
Suggestion
Upgrade the PolarDB-X 1.0 instance to the latest version of 5.4.12.
Supported versions
5.2.x-* to 5.2.8-15738106 (exclusive)
Incorrect SQL query results are returned because an invalid value of the LIMIT parameter in an SQL statement is cached
Problem description
An invalid value of the LIMIT parameter in an SQL statement is cached. As a result, the physical SQL statements that are delivered to the shards return incorrect results.
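The following hypothetical sketch illustrates the symptom. It assumes a sharded table t1:
SELECT * FROM t1 ORDER BY id LIMIT 10;   -- the first execution caches the plan together with the LIMIT value
SELECT * FROM t1 ORDER BY id LIMIT 20;   -- on affected versions, the delivered physical SQL may still carry LIMIT 10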
Suggestion
Upgrade the PolarDB-X 1.0 instance to the latest version of 5.4.12.
Supported versions
5.4.11-* to 5.4.12-*
Garbled characters may appear when the value of a BLOB field is updated
Problem description
UPDATE statements that cannot be pushed down improperly process BLOB data in SET clauses. When such a statement writes BLOB data, the data is converted from the BLOB type to the CHAR type and unexpected results are returned.
Suggestion
Write BLOB values as hexadecimal literals. Example: INSERT INTO t1 VALUES(0xFFFFFFFF).
Supported versions
All versions of PolarDB-X 1.0.
Sharding parameters are polluted in the execution plans of the PolarDB-X 1.0 instance, which causes physical SQL statements to be delivered to unexpected shards
Problem description
A cached execution plan may be polluted in high-concurrency scenarios: the sharding calculation expression in the plan is modified to a constant value. In this case, subsequent requests are all delivered to one fixed shard and invalid results are returned.
Suggestion
Temporary solution: Clear the baselines or the plan cache, or remove the weights of the problematic nodes (see the sketch after this list).
Long-term solution: Upgrade the PolarDB-X 1.0 instance to the latest version of 5.4.12.
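For the temporary solution, the following sketch shows plan-management statements. Whether these statements are available depends on your instance version, so treat them as assumptions and check the documentation for your version:
CLEAR PLANCACHE;        -- clears cached execution plans (assumed syntax)
BASELINE LIST;          -- lists plan baselines (assumed syntax)
BASELINE DELETE <id>;   -- deletes a problematic baseline by its ID (assumed syntax; <id> is a placeholder)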
Supported versions
5.4.1-* to 5.4.12-16444832 (exclusive)
Shards that you want to query may be missed, or the data that is returned is incomplete
Problem description
In range queries whose conditions involve negative integers, such as id < -2, some sharded tables are not included in the routing results when the sharding columns are of an integer type. This compromises the completeness of the scanned data.
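The following sketch shows a query that can trigger the issue. It assumes a hypothetical table t1 that is hash-sharded on the integer column id:
SELECT * FROM t1 WHERE id <= -2;   -- on affected versions, routing may skip some shards, so the result set is incomplete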
Suggestion
When sharded tables in the PolarDB-X 1.0 instance use integer sharding columns and HASH partitioning, we recommend that you do not run range queries that involve negative numbers.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance and use databases in AUTO mode.
Supported versions
All versions of PolarDB-X 1.0.
Precautions for shard keys of data types such as strings
The hash routing calculation for shard keys of the string type in the PolarDB-X 1.0 instance is case-sensitive. For example, the 'ABC' and 'abc' strings are routed to different shards. By default, MySQL is case-insensitive when it matches strings, which means that the 'abc' string is equal to the 'ABC' string in MySQL. Take note that if the 'ABC' string is written to the instance, a query based on the 'abc' string returns no results because these strings are routed to different shards (see the sketch after this list).
Trailing spaces in strings are retained during hash routing calculations, but MySQL ignores trailing spaces when it matches strings. For example, the 'ABC' and 'ABC ' (with a trailing space) strings are routed to different shards even though they are equal in MySQL. As a result, queries based on these strings return different results in the PolarDB-X 1.0 instance.
MySQL truncates a string based on the data type definition of a column, which causes the stored string to be different from the string that is used for routing. For example, you execute the INSERT INTO t1 VALUES ('abc') statement on a column that is defined as varchar(1). The PolarDB-X 1.0 instance uses the 'abc' string for routing, but MySQL truncates the string to 'a' and stores 'a'. As a result, the newly written data cannot be queried regardless of whether the WHERE column="abc" or WHERE column="a" condition is used, because the routing result of the 'a' string is different from that of the 'abc' string.
The PolarDB-X 1.0 instance does not support the collations of MySQL. Sorting results across the shards of the instance may be inconsistent with the collation behavior of MySQL. Therefore, incorrect results may be returned for aggregation and sorting operations.
You must also take note of the preceding items when you use data types such as numbers and timestamps as shard keys.
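The following sketch illustrates the case-sensitivity pitfall. It assumes a hypothetical table t1 that is hash-sharded on the string column name:
INSERT INTO t1 (name) VALUES ('ABC');   -- routed by the hash of 'ABC'
SELECT * FROM t1 WHERE name = 'abc';    -- routed by the hash of 'abc'; may target a different shard and return no rows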
Suggestion
Keep the casing of a string shard key consistent between writes and queries.
Make sure that shard key values contain no leading or trailing spaces.
Make sure that the length of a shard key value does not exceed the storage length that is defined for the column in your database.
We recommend that you do not sort or aggregate columns that contain non-ASCII characters in the PolarDB-X 1.0 instance.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance and use databases in AUTO mode.
Supported versions
All versions of PolarDB-X 1.0 and databases in DRDS mode in PolarDB-X 2.0.
Shards may be missed when routing is performed for some time-based sharding functions
Problem description
If a column in a sharded table is of the DATE, DATETIME, or TIMESTAMP type, and sharding functions such as YYYYMM, YYYYWEEK, YYYYDD, MMDD, MM, and DD are used to partition the sharded table, some shards may be missed when routing is performed for a range query such as col > time1 AND col < time2. This causes incorrect results to be returned.
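The following sketch shows an affected query. It assumes a hypothetical table t1 that is sharded by the YYYYMM function on the DATETIME column gmt_create:
SELECT * FROM t1 WHERE gmt_create > '2021-01-01 00:00:00' AND gmt_create < '2021-06-01 00:00:00';
-- on affected versions, routing may omit some monthly shards in the range, so rows are missed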
Suggestion
Upgrade the PolarDB-X 1.0 instance to the latest version of 5.4.12.
Supported versions
5.2.x to 5.3.11-15622313
Exceptions, such as persistent table locks, occur due to the execution of the LOCK TABLE statement
Problem description
If the LOCK TABLE statement is executed, the locks cannot be released on the same connection. The locks persist on the physical connections, which has the following impacts:
The table on which the LOCK TABLE statement is executed cannot be accessed by other connections.
The physical connection on which the LOCK TABLE statement is executed cannot access other tables.
Suggestion
We recommend that you do not execute the LOCK TABLE statement on the PolarDB-X 1.0 instance. By default, scripts generated by tools such as mysqldump contain the LOCK TABLE statement. In this case, you must configure the --skip-lock-tables parameter to prevent the LOCK TABLE statement from being generated.
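The following sketch shows such a mysqldump invocation. The connection values and the mydb database name are placeholders:
mysqldump -h <host> -P <port> -u <user> -p --skip-lock-tables mydb > mydb.sql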
Supported versions
5.1.x to 5.3.x
Data inconsistency or latency occurs on broadcast tables
Problem description
The broadcast tables of the PolarDB-X 1.0 instance are suitable for the following scenarios:
Scenarios where consistency requirements are relatively low.
Scenarios where tables with minimal changes are involved.
Data synchronization for broadcast tables is implemented by using one of the following methods: asynchronous replication of binary logs or the XA protocol. Neither method ensures strong data consistency, and data latency may occur.
Suggestion
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance.
The broadcast tables of the PolarDB-X 1.0 instance provide only weak consistency guarantees. Applications that use broadcast tables must tolerate scenarios in which the data of these tables is inconsistent.
Minimize create, update, and delete operations on broadcast tables and maximize read operations.
Supported versions
All versions of PolarDB-X 1.0.
How the whitelists of the PolarDB-X 1.0 instance take effect
Problem description
The whitelists of the PolarDB-X 1.0 instance are implemented at the instance level. Each database management page displays an option that allows you to modify the whitelists of a database in the instance. However, after the whitelists of one database are modified, the whitelists of all databases in the instance are also modified.
Suggestion
Proceed with caution.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance that provides more permission management capabilities.
Supported versions
All versions of PolarDB-X 1.0.
The system tables of the PolarDB-X 1.0 instance cannot be modified
Problem description
The PolarDB-X 1.0 instance creates some system tables on the ApsaraDB RDS instance. The system tables are non-business tables, including but not limited to the tddl_rule and tddl_sequence tables. Do not modify the system tables of the PolarDB-X 1.0 instance. Otherwise, exceptions, such as table loss, disordered data, and primary key conflicts, occur.
For example, if you use Data Transmission Service (DTS) to synchronize data between ApsaraDB RDS instances that are associated with the PolarDB-X 1.0 instance, you must filter out the system tables of the PolarDB-X 1.0 instance.
Suggestion
We recommend that you do not modify the system tables of the PolarDB-X 1.0 instance.
Supported versions
All versions of PolarDB-X 1.0.
Errors occur when HASH or UNI_HASH is used to route bigint unsigned data
Problem description
In earlier versions of PolarDB-X 1.0, no limits are imposed on using the BIGINT UNSIGNED type for partition key columns. When the hash code of a BIGINT UNSIGNED value is calculated and the actual magnitude of the value exceeds the range of the Java long type, an overflow occurs in the hash calculation process and routing errors occur. Because the hash calculation of PolarDB-X 1.0 uses the raw integer value, no results are returned for a query based on the partition key after the data is inserted.
Therefore, you cannot use HASH or UNI_HASH to route data of the BIGINT UNSIGNED type in partition key columns of the PolarDB-X 1.0 instance.
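The following sketch shows the symptom. It assumes a hypothetical table t1 that is hash-sharded on the BIGINT UNSIGNED column id:
INSERT INTO t1 (id) VALUES (18446744073709551615);   -- the maximum BIGINT UNSIGNED value exceeds the Java long range
SELECT * FROM t1 WHERE id = 18446744073709551615;    -- on affected versions, no rows are returned because the hash calculation overflows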
Suggestion
For PolarDB-X 1.0 instances, you must change the data type of all partition key columns in the instance from BIGINT UNSIGNED to BIGINT SIGNED. Alternatively, you can upgrade the instances to PolarDB-X 2.0 instances and use databases in AUTO mode.
For PolarDB-X 2.0 instances, we recommend that you use databases in AUTO mode for partitioning.
Supported versions
All versions of PolarDB-X 1.0 and databases in DRDS mode in PolarDB-X 2.0 instances.
Use of UNI_HASH, RANGE_HASH, STR_HASH, or RIGHT_SHIFT results in uneven distribution of data across partitions in some scenarios
Problem description
If you use the UNI_HASH, RANGE_HASH, STR_HASH, or RIGHT_SHIFT hash algorithm to shard a column in the PolarDB-X 1.0 instance, some physical table shards may contain no data. This is expected behavior that depends on the number of database shards and table shards, the hash code calculation in Java, and the routing algorithms that these sharding functions use.
Hash code calculation in Java
The hash code algorithms of various data types in Java, such as Integer, Long, and String, mix bits poorly. When a partition key column has a relatively narrow range of valid values, more hash conflicts occur. For example, hash conflicts occur when the original values are consecutive integers smaller than 1,000.
Calculation rules for shard subscripts when UNI_HASH, RANGE_HASH, STR_HASH, or RIGHT_SHIFT is used on a single column
dbIndex = hashCode % dbCount (dbCount is the number of physical database shards)
tbIndex = hashCode % tbCount (tbCount is the number of table shards in each physical database shard)
For example, if the hash value used for sharding is hashCode(xxx), the number of physical database shards is 2 (dbCount = 2), and the number of table shards in each physical database shard is 2 (tbCount = 2), the following scenarios exist:
If hashCode(xxx) % 2 = 1 (an odd hash code), data is routed only to table shard 1 in database shard 1.
If hashCode(xxx) % 2 = 0 (an even hash code), data is routed only to table shard 0 in database shard 0.
In this example, table shard 1 in database shard 0 and table shard 0 in database shard 1 never receive data. This leads to an uneven distribution of data across table shards.
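The coupling between the two subscripts can be verified with plain modulo arithmetic. The hash codes 7 and 8 in the following sketch are arbitrary examples:
SELECT 7 % 2 AS dbIndex, 7 % 2 AS tbIndex;   -- odd hash code: database shard 1, table shard 1
SELECT 8 % 2 AS dbIndex, 8 % 2 AS tbIndex;   -- even hash code: database shard 0, table shard 0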
Therefore, when you use the preceding hash functions to shard a single column in the PolarDB-X 1.0 instance, make sure that the partition key column has a wide range of valid values. This means that the values of the column are easily distinguished, and the number of distinct values is preferably more than 500,000.
Suggestion
If you want to shard a single column, we recommend that you use hash functions whose calculation logic differs from that of the preceding algorithms. This minimizes the occurrence of this issue.
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance and use hash partitions in AUTO mode.
Supported versions
All versions of PolarDB-X 1.0 and databases in DRDS mode in PolarDB-X 2.0 instances.
A performance bottleneck exists for simple sequences in the PolarDB-X 1.0 instance
Problem description
To ensure the global continuity, orderliness, and uniqueness of simple sequences in the PolarDB-X 1.0 instance, each request that obtains a sequence value, such as an INSERT request, must update the sequence table by using persistent storage and locking mechanisms, for example, by executing the update seq set val=val+1 where seq=xxx statement. In the database engine, this statement is executed as a global single point under mutually exclusive lock contention.
If a large number of INSERT requests are sent within a short period of time, simple sequences are prone to intense exclusive lock contention due to the high-concurrency updates of the sequence table. As a result, a large number of threads that request sequence values are queued while they wait for locks, and a performance bottleneck occurs when the INSERT requests are processed.
Suggestion
Use group sequences (see the sketch after this list).
Upgrade the PolarDB-X 1.0 instance to a PolarDB-X 2.0 instance and use NEW sequences.
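The following sketch shows how a group sequence might be created and used. The seq_order name is a placeholder, and the syntax should be checked against the documentation for your version:
CREATE GROUP SEQUENCE seq_order;   -- assumed syntax; group sequences allocate values in cached ranges instead of updating one row per request
SELECT seq_order.NEXTVAL;          -- assumed syntax; obtains the next value without global single-point lock contention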
Supported versions
All versions of PolarDB-X 1.0.