14,000+ GitHub stars: Alibaba's open-source SEATA

Date: Oct 26, 2022


Abstract: Alibaba has open-sourced FESCAR, which was renamed SEATA after open-sourcing. It currently has more than 13,000 stars on GitHub.

Unfortunately, after searching the entire web, the author found no practical, production-level instructions. At the same time, the documentation on the official GitHub site is incomplete, and the bundled Sample is too much of a HelloWorld to be used in a real production environment.

Therefore, drawing on experience solving distributed transactions in the MQ era six or seven years ago, and combining it with the current SEATA (latest commit at the end of December 2019), the author will discuss how today's trendiest stack solves distributed transactions while balancing eventual data consistency against performance, efficiency, and throughput, and how Alibaba's open-source combination pushes all of this to the extreme.

Two examples run through this article:

The first uses products and inventory to simulate how a distributed transaction is implemented in SEATA's AT mode;

The second uses an inter-bank transfer between two banks to simulate how a distributed transaction is implemented in SEATA's TCC mode.

In particular, every explanation of SEATA's TCC mode on the Internet is based on one and the same official example, SEATA-tcc. The example it originally shipped with has the following shortcomings:



Several providers are mixed together;

The provider and consumer are mixed in one project;

Connecting to Nacos is not supported;

Annotations are not supported.
All the blogs online then revolve around this HelloWorld-level example. In fact, many are plagiarized from one another; none incorporate the writer's own understanding and ideas, and none separate the original example into production-grade modules. This obviously misleads many readers.



Therefore, this time we will make production-level enhancements on top of the original official Alibaba example, so that it can fully simulate the production environment you are about to build.

The author does not want to ramble on or pad the word count by copy-pasting piles of so-called source code, as all the other SEATA articles online do. Instead, we will first cover several important concepts of distributed transactions, and then move on to production-level code, application, and analysis.

A brief overview of distributed transactions


At what scale does a system actually need to consider distributed transactions? It does not need to be that large. Consider the following: suppose your production DB has 100 tables, and each table holds more than 100 million rows. At that point, clustering, master-standby replication, and read-write separation are no longer enough.

Clustering, master-standby, and read-write separation can still improve the front end's response speed and throughput. But if you have really worked at this scale, you will be familiar with one thing: once each main single table exceeds 100 million rows, no matter how many masters, slaves, or separated reads you have, every modification (delete or update) on the master database triggers an action called "master-slave synchronization".

You will then find that this synchronization causes frequent "master-slave delays" in your production database. Here I am referring to MySQL with six masters and six slaves on SSDs, 64-core CPUs, 128 GB of RAM, and so on. If you say, "We are absurdly rich; ours are all 512 GB, 128-core minicomputers," then the problem will simply appear a few days later, but it will appear sooner or later.

What happens when a master-slave delay occurs? Say you want to delete a row: the master deletes it and then tries to synchronize the slaves, locking databases and tables along the way. Suppose you have 1,000 tables (one subsystem with 1,000 tables, and 30-40 subsystems, is normal for a large website), each holding hundreds of millions of rows. Any JOIN triggered by the business while the master-slave sync is still running can push the cost of that sync to "6-8 hours or more". If the synchronization fails, data consistency deteriorates further. To put it bluntly, your business data will be severely affected.

Worse, when your database disk is filling up and you want to delete some historical data, you dare not do it: one such operation and your master-slave synchronization will not finish from the small hours until around 8:00 a.m., with the database still locked, affecting both online and offline business. All of this is because the database is too large and heavy to even purge history records and logs. At that point, IT is in a very passive position.

Therefore, as an optimal design, we keep the data in a single table under 20 million rows. To get there, we split tables vertically by business, which is where microservices come in, and the system is divided into the following structure: every database and service is split, and even the same member data is split, so that roughly every 10 million rows gets its own database backing its own microservice instance.
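The vertical split just described can be illustrated with a toy routing rule (all names here are hypothetical, not part of SEATA or of any real sharding middleware): a stable hash of the business key decides which database shard, and hence which microservice instance, owns a row.

```java
// Hypothetical sketch: route a member id to one of N database shards,
// so that no single table has to grow past the ~20 million row ceiling.
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // The same member id always maps to the same database / service instance.
    public int shardFor(long memberId) {
        int h = Long.hashCode(memberId);
        return ((h % shardCount) + shardCount) % shardCount; // keep non-negative
    }
}
```

Real deployments would use a sharding layer for this; the point is only that the routing is deterministic, which is what makes the later transaction coordination possible.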



After the split, the situation improves greatly: performance, responsiveness, and throughput all go up. But then we run into scenarios like the following, and this is where the distributed-transaction problem appears:

Scenario 1



From the example above, once product master data and inventory master data are split apart, a data-consistency problem appears. Suppose you add or update product master data in the back office; the system requires the corresponding inventory data to stay consistent with it. But once you split into microservices, master data and inventory become two different systems, each with its own independent DB. The problem to solve is: if any update between the two systems fails, both related "services" must roll back the preceding operations to keep the data consistent.
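As a toy illustration of this requirement (deliberately plain Java with in-memory maps, not SEATA; every name is hypothetical): if the second store's update fails, the first store's update is compensated, so the two stores never diverge.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the consistency requirement itself: two "services"
// with separate stores; a failure in step 2 compensates step 1.
public class ProductInventoryDemo {
    static final Map<String, String> productDb = new HashMap<>();
    static final Map<String, Integer> inventoryDb = new HashMap<>();

    static boolean saveProduct(String sku, String name, int stock, boolean inventoryFails) {
        String oldName = productDb.put(sku, name);   // step 1: product master data
        try {
            if (inventoryFails) {
                throw new IllegalStateException("inventory service unavailable");
            }
            inventoryDb.put(sku, stock);             // step 2: inventory data
            return true;
        } catch (RuntimeException e) {
            // compensate step 1: restore the previous product state
            if (oldName == null) {
                productDb.remove(sku);
            } else {
                productDb.put(sku, oldName);
            }
            return false;
        }
    }
}
```

SEATA's AT mode automates exactly this compensation (via undo logs) so business code does not have to write it by hand.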



Scenario 2



For an inter-bank transfer, suppose account A is at ICBC and transfers funds through ICBC to B's China Merchants Bank account. The transfer is a distributed transaction that must either fully succeed or fully fail; "partial success" is impossible. This too is a distributed-transaction scenario requiring eventual data consistency.

Whether it is Scenario 1 or Scenario 2, the concern is eventual consistency of the data. Discussion of this problem began more than 20 years ago, and solutions already exist.

The earliest solution used MQ's acknowledge mode: the initiator notifies the relevant participants when the transaction starts; only when all participants have committed successfully does the initiator report success; if any party fails, every participant is notified and the transaction is rolled back step by step. Modern distributed transactions and cross-table transactions were born to solve exactly this class of problem.

However, under heavy traffic and high transaction volume, a step-by-step notification scheme like the early MQ approach seriously hurts system performance for the duration of the transaction and caps throughput.

Yet any scenario that uses distributed transactions and demands eventual consistency of the data inevitably involves locking databases, tables, and business segments. So for nearly 20 years, the industry has been trying to strike a balance between data consistency and performance.

Several core solutions were born as a result: 2PC (two-phase commit), 3PC (three-phase commit, which inserts an extra pre-commit phase), and the TCC (transaction compensation) mechanism.

The CAP and Paxos theory underlying these solutions is not discussed in this article; there are more than enough papers about it online, and if you only need to survive a PowerPoint-architect interview, you can memorize those. If you want code that runs in production, read on. Here we only briefly describe the core mechanics of 2PC and TCC.

2PC (two-phase) commit


1) Phase one: the preparation phase (prepare)

The coordinator notifies the participants to prepare to commit, and the participants start voting.

The participants complete their preparation and each respond Yes to the coordinator.

2) Phase two: the commit/rollback phase

The coordinator issues the final commit instruction based on the participants' votes.

If any participant is not ready, a rollback instruction is issued instead.



Through the transaction coordinator, the application sends a prepare to both databases. Each database receives it and executes its local transaction (writing the log) but does not commit; if execution succeeds, it replies yes, otherwise no.

The coordinator collects the replies; if even one party replied no, it sends a rollback to the participants, and they roll the transaction back.

If all replies are yes, the coordinator sends a commit to the participants. If any participant then fails to commit, the coordinator initiates a rollback of the whole transaction.
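The flow above can be sketched in a few lines of plain Java (a toy model of 2PC, not SEATA's implementation; the Participant interface and the voting helper are my own):

```java
import java.util.List;

// Minimal 2PC sketch: commit only when every participant voted yes
// in the prepare phase; a single "no" rolls everyone back.
public class TwoPhaseDemo {
    interface Participant {
        boolean prepare();   // phase 1: run the local transaction, do not commit
        void commit();       // phase 2a
        void rollback();     // phase 2b
    }

    static boolean run(List<Participant> participants) {
        boolean allYes = true;
        for (Participant p : participants) {
            if (!p.prepare()) {     // collect votes
                allYes = false;
                break;
            }
        }
        for (Participant p : participants) {
            if (allYes) p.commit(); else p.rollback();
        }
        return allYes;
    }

    // helper: a participant with a fixed vote, for demonstration
    static Participant voting(boolean yes) {
        return new Participant() {
            public boolean prepare() { return yes; }
            public void commit() { }
            public void rollback() { }
        };
    }
}
```

Real 2PC also has to handle a coordinator crash between the two phases, which is precisely why 3PC and TCC exist.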


TCC transaction-compensation commit


TCC transaction compensation is a business-layer transaction-control scheme built on the ideas of 2PC. The name is the first letters of Try (prepare), Confirm (commit), and Cancel (roll back), with the following meanings:

1) Try: check and reserve business resources. All checks are completed before the commit, and the resources are reserved.

2) Confirm: execute the business operation, formally consuming the resources reserved in the Try phase.

3) Cancel: cancel the business operation and release the resources reserved in the Try phase.



1. Try

During the transfer, the from side and the to side each check the account information and balance, freeze the transfer amount, and lock the resources.

2. Confirm

The from account deducts the transfer amount from its frozen funds, locking the record during the operation; the to account adds the transfer amount to its balance (balance + transfer amount = new balance), likewise locking its records during the operation.

3. Cancel

If any error or failure occurs in the business on either side, both from and to must cancel their respective operations:

On the from side, balance + frozen amount = the original balance, and the frozen amount returns to 0;

On the to side, frozen amount - transfer amount = the original frozen amount, and balance - transfer amount = the original balance.
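To make that arithmetic concrete, here is a minimal, self-contained sketch (my own toy code, not SEATA's TCC API), simplified so that only the from side freezes funds; amounts are in cents:

```java
// Toy TCC transfer following the Try / Confirm / Cancel arithmetic above.
// Simplification (mine): only the from account uses the frozen field.
public class TccTransferDemo {
    static class Account {
        long balance, frozen;
        Account(long balance) { this.balance = balance; }
    }

    // Try: check and reserve - move the transfer amount into "frozen".
    static boolean tryFreeze(Account from, long amount) {
        if (from.balance < amount) return false;
        from.balance -= amount;
        from.frozen += amount;
        return true;
    }

    // Confirm: the frozen funds are actually spent; the to side is credited.
    static void confirm(Account from, Account to, long amount) {
        from.frozen -= amount;
        to.balance += amount;
    }

    // Cancel: release the reservation - balance + frozen returns to the original balance.
    static void cancel(Account from, long amount) {
        from.frozen -= amount;
        from.balance += amount;
    }
}
```

Note the invariant: at every point, balance + frozen on the from side equals the original balance minus whatever has been confirmed, which is what lets Cancel restore the starting state.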

Every one of the steps above must achieve "business idempotence". What is "business idempotence"?

Business idempotence

No matter how many times the steps above are executed or retried, the business outcome must remain the same. For example:

The from account originally holds 100 yuan with a frozen field of 0, and 10 yuan is to be transferred out;

The to account originally holds 100 yuan with a frozen field of 0, and is to receive the 10 yuan from from;

If any one step in that cycle fails, the system must be able to return to the starting state. This requires the application to preserve intermediate state and to pre-embed "business compensation" logic, also called "anti-transaction" logic, in the code.
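A minimal sketch of the idempotence idea (hypothetical names, not a SEATA API): record each transaction id that has been applied, so replaying the same phase on retry cannot change the balance twice.

```java
import java.util.HashSet;
import java.util.Set;

// Business idempotence sketch: each TCC phase carries a transaction id,
// and a phase that was already applied becomes a no-op on redelivery.
public class IdempotentDemo {
    private final Set<String> applied = new HashSet<>();
    long balance;

    IdempotentDemo(long balance) { this.balance = balance; }

    boolean debit(String txId, long amount) {
        if (!applied.add(txId)) {
            return false;   // already executed: retry changes nothing
        }
        balance -= amount;
        return true;
    }
}
```

In production, the applied-id set would live in the database alongside the business row (often as a transaction-log table), so the dedup check and the balance change commit in one local transaction.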

That covers the core logic; we will not dig further into the underlying principles, which would lead into algorithms and theory. We are not here to help you pass interviews but to get you into a real "production environment". So, show me the code. The overall "architecture design" was already reserved in the diagrams drawn above; we will use Spring Boot + Dubbo + Nacos + SEATA to implement the two scenarios.




The combination of SEATA + Nacos


Here we need to introduce SEATA and Nacos.

What is SEATA?

SEATA's predecessor is FESCAR, from Alibaba / Ant Financial. Its goal is to achieve eventual consistency of transactions while keeping the overall system's performance and throughput high, and to do so without the excessive "intrusive damage" that implementing 2PC or TCC normally inflicts on business code that has already been written.

At the time of writing, its latest release is 1.0.0 GA, and the last GitHub commit was two days ago, on January 19; people are still submitting patches and fixing bugs. Meanwhile, the samples available online either do not run or are unusable: all HelloWorld-level toys that only work on a single machine, cannot run in production, and are not combined with Nacos. They are simply not practical, and the documentation is incomplete, so I am sharing my actual production experience with you directly.


What is Nacos?
Nacos is a fairly mature service registration and discovery system plus configuration/resource manager; its latest version is 1.1.4. It was built to replace ZooKeeper, and in practice it is doing exactly that. It is more mature than SEATA, which is natural, since it appeared earlier.

We all know that Spring Cloud and Dubbo each have their own ZooKeeper-based service management centers, right? That approach is dated, simplistic, and hard to operate and maintain. Dubbo has shipped a nacos-registry since version 2.6, so more and more remote service registration and discovery now uses Nacos.

Notice that when I drew the TCC transaction, I deliberately placed a "Configuration Management Center" in the diagram; that is where Nacos comes in.


How do SEATA and Nacos work together?
SEATA is the transaction management center, the Transaction Manager, TM for short. Each Dubbo microservice is registered as a resource in SEATA's "resource manager", which we call the RM for short.

So where does Nacos come into this?

Recall what was said above: what we need to solve is "achieving eventual consistency of transactions while keeping the overall system's performance and throughput high, without the 2PC or TCC implementation doing too much intrusive damage to business code that has already been written". Note in particular the words "without doing too much intrusive damage to the written business code".

How do we avoid intrusive damage? Class reflection + callbacks, that is, Spring + callbacks. Yeah!

At the same time, each microservice is an independent system. How do two, or more, remote systems invoke one another?

Our approach traces back to the design of EJB Session Beans in the oldest of old J2EE specifications, v1.2.

The principle: each EJB (a microservice, called SOA back then) registers the full path of its interface name into the J2EE container, addressed via JNDI (you could not "do J2EE" with Tomcat; Tomcat has only ever been a web container. Real J2EE required WebSphere, WebLogic, or the open-source JBoss; spring + mybatis is not J2EE). The different participating "service components" then use JNDI to look each other up and call one another.

The author of Dubbo (and his development team) once put forward this idea: transaction management should not belong to the Dubbo framework; Dubbo only needs to be manageable by transactions. Just as JDBC and JMS are distributed resources that transactions can manage, Dubbo only needs to implement the behaviors a transaction can manage, such as rollback; the scheduling of the transaction itself should be handled by a dedicated transaction manager. FESCAR was built on exactly that premise.

That is the bond that ties SEATA, Nacos, and Dubbo together:



Dubbo is the core service provider of the microservices;

Dubbo-to-Dubbo communication goes through remote interfaces, which require an automatic remote service discovery and registration center, hence Nacos;

SEATA acts as the TM: it looks up each RM's registered address in the registry, then coordinates and manages the related transactions across the RMs through remote messages and asynchronous callbacks.
