Global exclusive write lock implemented by Seata to achieve write isolation at the Read Uncommitted isolation level
Seata, formerly known as Fescar, is an easy-to-use, high-performance distributed transaction solution for microservices architectures.
Generally, the isolation level of database transactions is set to Read Committed to meet business requirements, so the isolation level of branch (local) transactions in Seata is Read Committed. What, then, is the isolation level of global transactions in Seata? Seata defines the default isolation level of global transactions as Read Uncommitted. The impact of the Read Uncommitted isolation level on a business is well known: dirty data can be read. A classic example is a bank transfer that leaves data inconsistent. For Seata, if no other technical measures were taken, this would cause serious problems. For example:
As shown in the preceding figure, to what state should global transaction A eventually roll back resource R1? Obviously, if the UndoLog is used for the rollback, a serious problem occurs: the change made to resource R1 by global transaction B is overwritten. How does Seata solve this problem? The answer is Seata's global exclusive write lock: while global transaction A is executing, global transaction B waits because it cannot obtain the global lock.
For the isolation level of Seata, please refer to the following official passage:
The isolation of global transactions is based on the local isolation level of branch transactions.
On the premise that the local isolation level of the database is Read Committed or above, Seata designs a global exclusive write lock maintained by the transaction coordinator to ensure the write isolation between transactions, and defines the global transaction at the Read Uncommitted isolation level by default.
Our consensus on the isolation level is that the vast majority of applications can normally work under the Read Committed isolation level. In fact, the vast majority of these applications can also work normally under the Read Uncommitted isolation level.
In extreme scenarios, if the application needs to reach the global Read Committed level, Seata also provides corresponding mechanisms to achieve the goal. By default, Seata works under the isolation level of Read Uncommitted to ensure the efficiency of most scenarios.
Below, this article dives into the source code to explain the implementation of the Seata global exclusive write lock. The lock is maintained in the TC (Transaction Coordinator) module, and the RM (Resource Manager) module requests the global lock from the TC module wherever it must be obtained, ensuring write isolation between transactions. The explanation is split into two parts: TC - the global exclusive write lock implementation, and RM - how the global exclusive write lock is used.
First, take a look at the entry point through which the TC module interacts with the outside world. The following figure shows the main entry of the TC module:
The figure above shows that RpcServer handles the communication-protocol logic, while the real processor of the TC module is DefaultCoordinator, which exposes all of the module's functions, for example: doGlobalBegin (global transaction creation), doGlobalCommit (global transaction commit), doGlobalRollback (global transaction rollback), doBranchReport (branch transaction status report), doBranchRegister (branch transaction registration), and doLockCheck (global exclusive write lock check). Among them, doBranchRegister, doLockCheck, and doGlobalCommit are the entry points of the global exclusive write lock implementation.
/**
 * When a branch transaction is registered, the system obtains the global lock resources of the branch transaction.
 */
@Override
protected void doBranchRegister(BranchRegisterRequest request, BranchRegisterResponse response,
    RpcContext rpcContext) throws TransactionException {
    response.setTransactionId(request.getTransactionId());
    response.setBranchId(core.branchRegister(request.getBranchType(), request.getResourceId(),
        rpcContext.getClientId(), XID.generateXID(request.getTransactionId()), request.getLockKey()));
}

/**
 * Check whether the global lock can be obtained.
 */
@Override
protected void doLockCheck(GlobalLockQueryRequest request, GlobalLockQueryResponse response, RpcContext rpcContext)
    throws TransactionException {
    response.setLockable(core.lockQuery(request.getBranchType(), request.getResourceId(),
        XID.generateXID(request.getTransactionId()), request.getLockKey()));
}

/**
 * When a global transaction is committed, records occupied by locks of all branch transactions under the global transaction are released.
 */
@Override
protected void doGlobalCommit(GlobalCommitRequest request, GlobalCommitResponse response, RpcContext rpcContext)
    throws TransactionException {
    response.setGlobalStatus(core.commit(XID.generateXID(request.getTransactionId())));
}
The above code logic is eventually delegated to DefaultCore for execution.
As shown in the figure above, the logic for obtaining a lock or checking lock status is eventually taken over by LockManager, whose logic is implemented by DefaultLockManagerImpl; the entire design of the global exclusive write lock is maintained in DefaultLockManagerImpl.
First, let's take a look at the structure of the global exclusive write lock:
private static final ConcurrentHashMap<String, ConcurrentHashMap<String, ConcurrentHashMap<Integer, Map<String, Long>>>> LOCK_MAP
    = new ConcurrentHashMap<>();
On the whole, the lock structure is designed as nested maps: the first half uses ConcurrentHashMap, and the second half uses HashMap. In the end, it builds a lock occupation marker indicating which global transaction holds the global exclusive write lock on the row identified by a primary key, in a given table, on a given resourceId (data source ID). The following is the specific source code for obtaining the lock:
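As an illustration, here is a minimal, hypothetical sketch (invented class and method names, not Seata's actual code) of how such a nested map can record which global transaction holds the lock on each row, with all-or-nothing acquisition:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GlobalLockSketch {
    private static final int BUCKETS = 128;

    // resourceId -> tableName -> bucket -> (primaryKey -> holding global tx id)
    private static final ConcurrentHashMap<String,
            ConcurrentHashMap<String,
                ConcurrentHashMap<Integer, Map<String, Long>>>> LOCK_MAP =
            new ConcurrentHashMap<>();

    // All-or-nothing: either every primary key gets locked by txId, or none do.
    public static boolean acquire(String resourceId, String table, long txId, String... pks) {
        ConcurrentHashMap<Integer, Map<String, Long>> tableLocks = LOCK_MAP
                .computeIfAbsent(resourceId, k -> new ConcurrentHashMap<>())
                .computeIfAbsent(table, k -> new ConcurrentHashMap<>());
        List<String> acquired = new ArrayList<>();
        for (String pk : pks) {
            Map<String, Long> bucket = tableLocks
                    .computeIfAbsent(Math.floorMod(pk.hashCode(), BUCKETS), k -> new HashMap<>());
            synchronized (bucket) { // composite check-then-put needs a lock anyway
                Long holder = bucket.get(pk);
                if (holder == null) {
                    bucket.put(pk, txId);
                    acquired.add(pk);
                } else if (holder != txId) { // held by another global transaction
                    release(resourceId, table, txId, acquired.toArray(new String[0]));
                    return false; // undo the partial acquisition: all or nothing
                }
            }
        }
        return true;
    }

    public static void release(String resourceId, String table, long txId, String... pks) {
        Map<String, ConcurrentHashMap<Integer, Map<String, Long>>> byTable = LOCK_MAP.get(resourceId);
        if (byTable == null) return;
        Map<Integer, Map<String, Long>> tableLocks = byTable.get(table);
        if (tableLocks == null) return;
        for (String pk : pks) {
            Map<String, Long> bucket = tableLocks.get(Math.floorMod(pk.hashCode(), BUCKETS));
            if (bucket == null) continue;
            synchronized (bucket) {
                bucket.remove(pk, txId); // remove only if held by this transaction
            }
        }
    }
}
```

Note how the innermost HashMap is only ever touched inside a synchronized block on that bucket, which is exactly the point discussed below about why the second half of the structure does not need to be a ConcurrentHashMap.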
As noted in the preceding figure, the overall acquireLock logic is quite clear: for the lock resources required by a branch transaction, either all of them are obtained at once or none of them are; partial success is impossible. Two questions may arise from the above explanation:
1. ConcurrentHashMap is used in the first half to support better concurrency. Why, then, is HashMap used in the second half instead of ConcurrentHashMap? The likely reason is that the second half of the structure must determine whether the current global transaction already occupies the lock for a primary key, which is a composite check-then-act operation. Even with ConcurrentHashMap, a synchronized lock would still be needed for that check, so it is simpler to use the more lightweight HashMap directly.
2. The lock structure itself does not record which branch transaction holds which lock. How, then, can the locks occupied by a branch transaction be released when the global transaction is committed? The answer is relatively simple: the lock resources occupied by a branch transaction are also stored in its BranchSession.
The following figure shows the logic for checking whether a global lock resource can be obtained:
The following figure shows the logic for a branch transaction to release a global lock resource:
The above is the implementation principle of the global exclusive write lock in the TC module: when registering a branch transaction, the RM sends along the lock resources required by that branch transaction, and the TC module obtains the global lock resources (all of them at once or none at all; partial success is impossible). When a global transaction is committed, the TC module automatically releases the lock resources held by all of its branch transactions. In addition, to reduce the probability of failing to acquire the global exclusive write lock, the TC module exposes an interface for checking whether the lock resources can be obtained, so the RM module can perform this check at the appropriate point and reduce the probability of branch transaction registration failing.
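The register-then-release-on-commit flow described above can be sketched as follows. This is a simplified, hypothetical model: `registerBranch` and `globalCommit` are invented names standing in for the TC's real logic, and a flat map replaces the nested lock structure for brevity.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GlobalCommitSketch {
    // lockKey -> holding global transaction id
    static final Map<String, Long> GLOBAL_LOCKS = new HashMap<>();
    // global transaction id -> lock keys recorded by its branch sessions
    static final Map<Long, List<String>> BRANCH_LOCK_KEYS = new HashMap<>();

    // Branch registration: take the lock and remember it under the global tx,
    // mirroring how a BranchSession keeps its own lock keys.
    static boolean registerBranch(long globalTxId, String lockKey) {
        synchronized (GLOBAL_LOCKS) {
            Long holder = GLOBAL_LOCKS.get(lockKey);
            if (holder != null && holder != globalTxId) {
                return false; // occupied by another global transaction
            }
            GLOBAL_LOCKS.put(lockKey, globalTxId);
            BRANCH_LOCK_KEYS.computeIfAbsent(globalTxId, k -> new ArrayList<>()).add(lockKey);
            return true;
        }
    }

    // Global commit: free every lock key recorded by the transaction's branches.
    static void globalCommit(long globalTxId) {
        synchronized (GLOBAL_LOCKS) {
            List<String> keys = BRANCH_LOCK_KEYS.remove(globalTxId);
            if (keys == null) return;
            for (String key : keys) {
                GLOBAL_LOCKS.remove(key, globalTxId);
            }
        }
    }
}
```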
In the RM module, two global-lock functions of the TC module are used: checking whether the global lock can be obtained, and registering a branch transaction to occupy the global lock. Releasing the global lock does not involve the RM; the TC module releases it automatically when the global transaction is committed. Before registering a branch transaction, the RM runs the global lock status check to ensure that no lock conflict occurs during branch registration.
When executing Update, Insert, and Delete statements, data snapshots are generated before and after SQL execution and organized into the UndoLog; the snapshots are generally produced with Select...For Update. The logic by which the RM checks whether the global lock can be obtained lives in the executor for that statement, SelectForUpdateExecutor. The details are as follows:
The basic logic is as follows:
Note: Careful readers may notice that UpdateExecutor and DeleteExecutor, which handle Update and Delete statements, execute a Select...For Update statement to obtain the beforeImage and then check the global lock status, while InsertExecutor, which handles Insert statements, has no such check. This is likely because an Insert creates a new row whose primary key cannot yet be held by any global lock, so the corresponding lock resource is guaranteed to be available when the branch transaction is registered before the local transaction commits.
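The check-and-retry idea in the spirit of SelectForUpdateExecutor can be sketched as follows. This is a hedged simplification: `TcLockQuery` and `checkGlobalLock` are hypothetical stand-ins for the real RPC and executor, and the real executor also rolls back to a savepoint (releasing the local row lock) before each retry.

```java
public class GlobalLockCheckSketch {
    // Stand-in for the TC's lock-query RPC (hypothetical interface, not Seata's API).
    interface TcLockQuery {
        boolean lockable(String resourceId, String lockKeys);
    }

    // At this point the local SELECT ... FOR UPDATE has already taken the DB row
    // lock; if the global lock is still held by another transaction, wait and
    // ask again, up to maxRetries times.
    static boolean checkGlobalLock(TcLockQuery tc, String resourceId, String lockKeys,
                                   int maxRetries, long backoffMillis) throws InterruptedException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (tc.lockable(resourceId, lockKeys)) {
                return true; // no conflict: safe to register the branch later
            }
            // The real executor rolls back to a savepoint here before sleeping.
            Thread.sleep(backoffMillis);
        }
        return false; // give up after maxRetries conflicting checks
    }
}
```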
Next, let's look at how branch transactions are committed, and how the global lock resources required by a branch transaction are generated and stored. After the SQL statement is executed, the UndoLog is generated from the beforeImage and afterImage; at the same time, the global lock resource ID required by the current local transaction is generated and saved in the ConnectionContext of the ConnectionProxy, as shown in the following figure.
In ConnectionProxy.commit, when the branch transaction is registered, the global lock IDs stored in the ConnectionProxy's context are sent to the TC to obtain the global lock.
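For illustration, a branch transaction's lock keys are built from the table name and the primary keys of the changed rows, roughly in the form tableName:pk1,pk2. The helper below is a hypothetical sketch of that encoding, not Seata's actual implementation:

```java
import java.util.List;

public class LockKeySketch {
    // Join the changed rows' primary keys under the table name; multiple tables
    // would be concatenated with ';' in a similar fashion (illustrative format).
    static String buildLockKey(String tableName, List<String> primaryKeys) {
        return tableName + ":" + String.join(",", primaryKeys);
    }
}
```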
The above is the logic of using the global exclusive write lock in the RM module. Before obtaining the global lock resources, the RM cyclically checks the lock status to ensure that a lock conflict will not cause the acquisition to fail. The disadvantage is also obvious: when lock contention is heavy, the local database locks held by the transaction are held for longer, which costs the business interface some performance.
This article detailed the global exclusive write lock that Seata implements to achieve write isolation at the Read Uncommitted isolation level, covering both its implementation in the TC module and its use in the RM module. Two issues remain from reading the source code:
Question 1 requires further research. Question 2 has an answer, although Seata had not implemented it at the time of writing: an error is reported when global transaction A rolls back. Specifically, when branch transaction A1 under global transaction A rolls back, its afterImage is compared with the current data of the corresponding row in the table. If they match, the rollback proceeds; otherwise, the rollback fails and an alert notifies the business side, which must handle the situation itself.
You can learn more about Seata here: https://github.com/seata/seata