
How Does PolarDB Optimize Performance in the AUTO_INC Scenario

This article focuses on how PolarDB optimizes performance in concurrent INSERT scenarios.

By Zongzhi Chen

Concurrent data insertion and bulk import are common in database workloads. In these scenarios, the auto-increment ID is not specified by the client; it is generated automatically by the database during insertion. This is commonly referred to as the AUTO_INC data insertion scenario.

Typical business scenarios in this category include many users logging in and registering simultaneously at a game launch, or a backend pushing orders to vendors during e-commerce promotions.

This article focuses on how PolarDB optimizes performance in concurrent INSERT scenarios like these.

Background

In concurrent insertion scenarios, auto-increment IDs are monotonically increasing, so one might assume that records are always inserted into the rightmost page of the Btree.

However, this is not always the case.

In this scenario, however, records do not necessarily arrive at the Btree in ID order. The value of a record to be inserted may be smaller than the minimum value of the rightmost page, in which case it is inserted into the second page from the right. Consequently, insertions may be spread across several of the rightmost pages.

The reason for this behavior is that no lock is held from the moment a thread obtains its AUTO_INC value to the moment it actually inserts the record. The thread that obtains the AUTO_INC value first may be scheduled later, allowing a thread that obtained its value later to insert its record first.

Therefore, when records are inserted concurrently, a row whose value is smaller than the current AUTO_INC value may be inserted after rows with larger values.

[Figure 1]
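This race is easy to reproduce outside the engine. Below is a minimal C++ sketch (illustrative only, not InnoDB code): each thread atomically fetches an ID and then inserts it under a separate lock, and the arrival order at the "tree" is frequently not sorted.

```cpp
// Minimal sketch of the AUTO_INC race: fetching the ID and inserting the
// row are two separate steps with no lock held across them.
#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::atomic<long> auto_inc{1};    // hypothetical AUTO_INC counter
std::vector<long> arrival_order;  // order in which rows reach the "tree"
std::mutex tree_mutex;            // stands in for the leaf page latch

void insert_row() {
  long id = auto_inc.fetch_add(1);  // step 1: take an auto-increment ID
  std::this_thread::yield();        // the scheduler may intervene here
  std::lock_guard<std::mutex> guard(tree_mutex);
  arrival_order.push_back(id);      // step 2: insert the row
}

int main() {
  std::vector<std::thread> workers;
  for (int i = 0; i < 8; ++i) workers.emplace_back(insert_row);
  for (auto& t : workers) t.join();
  // The IDs are unique and increasing, but the arrival order is often
  // unsorted, so an ID smaller than the current maximum can arrive late.
  for (long id : arrival_order) std::cout << id << ' ';
  std::cout << '\n';
}
```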

To optimize performance, PolarDB allows records to be inserted into multiple Level 0 (leaf) pages simultaneously. This leaves room for performance improvements: if records were always appended to the single rightmost page, like a queue, there would be no opportunity for optimization.

[Figure 2]

In the scenario shown in the figure, three threads are concurrently performing optimistic insertions with values 14, 25, and 36. Each thread holds an X lock on its target page, so insertions can proceed on three pages concurrently.

If records could only ever be inserted into the rightmost page, as with a single thread, performance would undoubtedly be inferior to concurrent insertion into three pages. In theory, the more leaf pages that can accept insertions simultaneously, the higher the concurrency and the better the performance.

To further optimize performance in concurrent INSERT scenarios, the implementation performs Structure Modification Operations (SMO) as early as possible. This involves splitting the rightmost page early on, creating more unfilled pages available for simultaneous insertions.

Therefore, the next step in our optimization is to perform SMO as early as possible.

What then is the actual scenario in InnoDB for concurrent INSERT operations?

[Figure 3]

During testing, we found that, due to thread scheduling, only about three to four pages can accept insertions simultaneously in the AUTO_INC scenario, as shown in the figure above.

  • Phase 1: Three threads are performing optimistic insertions into three different Level 0 leaf pages concurrently, holding X locks on their respective leaf pages. Meanwhile, N threads hold S locks on the Level 1 page and wait for X locks on the Level 0 leaf pages; they must wait for the three inserting threads to finish before proceeding (the sketch after this list illustrates this latch order).
  • Phase 2: The SMO thread is performing a pessimistic insertion. It holds the index SX lock and the X lock on the Level 2 page, and waits for the X lock on the Level 1 page. That lock, however, is blocked by the N optimistic-insert threads holding S locks on the Level 1 page.
  • Phase 3: The SMO thread must therefore wait for the N optimistic insertions to complete. Only then can it obtain the X lock on the Level 1 page, wait for the X lock on the Level 0 leaf page, and finally carry out the SMO.
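The crux of Phases 1 through 3 is latch coupling on the descent. The following minimal C++ sketch (hypothetical types, not InnoDB source) shows the optimistic-insert path: the thread keeps an S latch on the Level 1 page while waiting for the leaf's X latch, and that S latch is exactly what stalls the SMO thread.

```cpp
#include <mutex>
#include <shared_mutex>
#include <vector>

struct Page {
  std::shared_mutex latch;
  std::vector<long> keys;
  bool full() const { return keys.size() >= 4; }  // toy page capacity
};

// Returns false when the leaf is full, i.e. the caller must fall back to a
// pessimistic insertion that may trigger an SMO.
bool optimistic_insert(Page& level1, Page& leaf, long key) {
  std::shared_lock<std::shared_mutex> parent(level1.latch);  // S latch, Level 1
  std::unique_lock<std::shared_mutex> child(leaf.latch);     // X latch, Level 0
  // While blocked on the leaf's X latch above, this thread still holds the
  // parent's S latch, which is what keeps the SMO thread from acquiring its
  // X lock on the Level 1 page (Phase 2).
  parent.unlock();                 // latch coupling: release the parent
  if (leaf.full()) return false;   // retry pessimistically
  leaf.keys.push_back(key);
  return true;
}
```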

It is evident that the SMO thread needs to wait for the N threads to finish their optimistic insertions before it can carry out the SMO operation. The greater the concurrency and the more threads engaged in optimistic insertions, the longer the SMO thread waits, making early execution of SMO difficult and preventing performance enhancement.

Why then does performance improve after limiting Innodb_thread_concurrency?

As analyzed above, in the AUTO_INC scenario, there are only a few pages (about three to four) that allow concurrent insertions. An excess of threads will cause the SMO thread to be held up, waiting for these optimistic insertions to be attempted and completed. The more threads there are, the longer the wait. Ideally, if there are no threads currently waiting to perform optimistic insertions into the rightmost page, the SMO thread does not need to wait, thus fulfilling the goal of early SMO execution. Limiting Innodb_thread_concurrency effectively restricts the number of threads performing optimistic insertions, thereby achieving better performance.

In practice, setting Innodb_thread_concurrency to 8 (SET GLOBAL innodb_thread_concurrency = 8;) comes close to the best performance.

[Figure 4]
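Conceptually, innodb_thread_concurrency acts as an admission throttle on how many threads run inside the storage engine at once. Below is a minimal C++ sketch of that idea (illustrative only; InnoDB's actual ticket-based mechanism is more elaborate):

```cpp
#include <condition_variable>
#include <mutex>

// At most `limit` threads run the insert path at a time, so fewer
// optimistic inserters pile up in front of the SMO thread.
class ConcurrencyGate {
 public:
  explicit ConcurrencyGate(int limit) : slots_(limit) {}
  void enter() {
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [&] { return slots_ > 0; });
    --slots_;
  }
  void leave() {
    { std::lock_guard<std::mutex> lk(mu_); ++slots_; }
    cv_.notify_one();
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  int slots_;
};

// Usage: one gate shared by all insert threads.
//   ConcurrencyGate gate(8);  // innodb_thread_concurrency = 8
//   gate.enter(); /* perform the INSERT */ gate.leave();
```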

How does Blink-tree improve this situation?

When analyzing the issues in the existing InnoDB Btree, we proposed reducing the number of concurrent threads by setting Innodb_thread_concurrency = 8 to maintain high insertion performance. However, a limit of 8 is quite low, and real workloads also include queries, so such a setting is rarely practical. How, then, does Blink-tree achieve high-performance AUTO_INC insertion while still allowing multi-threaded concurrency?

  1. Blink-tree naturally allows concurrent Structure Modification Operations (SMOs), whereas the existing InnoDB Btree allows only one SMO at a time. Allowing concurrent SMOs amounts to performing SMO as early as possible.
  2. Allowing concurrent SMOs alone is not enough, however. If SMO threads still have to wait for optimistic insertions to complete, the behavior resembles the previous InnoDB Btree implementation: multiple SMOs are still executed serially, merely slightly earlier.

Therefore, we implement priority scheduling for locks in Blink-tree to perform SMO as early as possible.

In Phase 3 above, the SMO thread competes with the optimistic-insert threads for execution, which makes SMO inefficient. With priority scheduling for locks, SMO threads are given the highest priority: SMO threads waiting for page X locks are woken first, followed by optimistic-insert threads waiting for address locks. This ensures that SMO is performed as early as possible.
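A minimal C++ sketch of such lock priority scheduling (illustrative, not PolarDB source): each waiter enqueues itself with a priority, and on every release the lock admits the highest-priority waiter, so an SMO thread always jumps ahead of optimistic inserters.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

enum class Priority { kOptimisticInsert = 0, kSmo = 1 };

class PriorityLock {
 public:
  void lock(Priority prio) {
    std::unique_lock<std::mutex> lk(mu_);
    long ticket = next_ticket_++;
    waiters_.push(Waiter{prio, ticket});
    // Proceed only when the lock is free and we are the top waiter.
    cv_.wait(lk, [&] { return !held_ && waiters_.top().ticket == ticket; });
    waiters_.pop();
    held_ = true;
  }
  void unlock() {
    { std::lock_guard<std::mutex> lk(mu_); held_ = false; }
    cv_.notify_all();  // every waiter re-checks; only the top one proceeds
  }

 private:
  struct Waiter {
    Priority prio;
    long ticket;
    bool operator<(const Waiter& other) const {
      // std::priority_queue is a max-heap: higher priority wins; among
      // equals, the earlier ticket (smaller number) wins.
      if (prio != other.prio) return prio < other.prio;
      return ticket > other.ticket;
    }
  };
  std::mutex mu_;
  std::condition_variable cv_;
  std::priority_queue<Waiter> waiters_;
  bool held_ = false;
  long next_ticket_ = 0;
};
```

Under this model, an SMO thread would call lock(Priority::kSmo) while optimistic inserters call lock(Priority::kOptimisticInsert), mirroring the wake-up order described above.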

The specific implementation is shown in the following figure:

[Figure 5]

Blink-tree acquires latches with lock coupling; even during pessimistic insertion, Level 1 pages are held only with S locks.

  • Phase 1: Three SMO threads perform SMO operations concurrently. As in the Btree implementation, Blink-tree needs the X lock on the rightmost page for SMO, so only SMO 1 can proceed, while SMO 2 and SMO 3 wait for the X lock on the rightmost page.
  • Phase 2: N threads attempt optimistic insertions. They observe that Pages 1, 2, and 3 are undergoing SMO while Page 4 is not. Threads targeting Pages 1, 2, and 3 therefore release their S locks on the current page and wait on the address locks of Pages 1, 2, and 3 instead; threads targeting Page 4 wait for the X lock on Page 4.
  • Phase 3: When SMO 1 completes, it notifies SMO 2 to proceed and wakes the threads waiting on Page 1's address lock. Since SMO 2 is still in progress and holds the X lock on Page 1, those optimistic insertions can run only after SMO 2 completes. When SMO 2 finishes, it notifies SMO 3 and wakes the threads waiting on Page 2's address lock; as SMO 3 holds the X lock on Page 2, insertion into Page 2 must wait, but insertion into Page 1 can now proceed. Finally, when SMO 3 completes, optimistic insertions into Pages 2 and 3 can begin.

It can be seen that Blink-tree performs SMO as early as possible by adding concurrent SMO threads while introducing lock priority scheduling. In this way, Blink-tree achieves higher performance than InnoDB Btree.

In fact, there is another difference between the Blink-tree and InnoDB Btree implementations: in Blink-tree, threads woken from an address-lock wait retry an optimistic insertion, whereas in the InnoDB Btree, threads woken from a page-lock wait proceed with a pessimistic insertion. Pessimistic locking covers a wider range and carries more overhead, degrading performance further.
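In control-flow terms, the difference looks roughly like this (hypothetical helper names, not engine source):

```cpp
// Stand-ins for the real page operations.
bool optimistic_insert()  { return false; }  // latch one leaf, try to insert
bool pessimistic_insert() { return true;  }  // index SX lock, split if needed

// Blink-tree: a thread woken from an address-lock wait retries the cheap
// optimistic path first.
void blink_tree_after_wakeup() {
  if (!optimistic_insert()) {
    pessimistic_insert();  // escalate only if the page is still full
  }
}

// InnoDB Btree: a thread woken from a page-lock wait goes straight to the
// pessimistic path, which locks a wider range and costs more.
void innodb_btree_after_wakeup() {
  pessimistic_insert();
}
```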

The specific data from the tests is as follows:

[Figure 6]

Summary

The results show that Blink-tree achieves twice the performance of the official Btree version in the concurrent AUTO_INC scenario, and outperforms the configuration that limits Innodb_thread_concurrency by approximately 13%.
