Development history of database transaction isolation

Goals

Transaction isolation is a direct requirement arising from transaction concurrency. The most intuitive way to guarantee correctness is to make concurrent transactions execute serially, or at least appear to execute serially. In real scenarios, however, such a strong guarantee is not always needed, so we would like to trade some correctness for better overall performance. By distinguishing isolation levels of different strengths, users can freely balance correctness against performance. As the number of database products and usage scenarios grew, the various isolation levels became confusing, and database designers and users urgently needed a consensus on how to divide them; that is the significance of a standard. A good isolation level definition has the following two important goals:

Correct: the definition of each level should exclude every case that violates the correctness the level is meant to guarantee. In other words, as long as an implementation satisfies a given isolation level definition, it obtains the corresponding correctness guarantee.

Implementation independent: common concurrency control implementations include locking, OCC, and multi-versioning. A good standard should not constrain the implementation.

ANSI SQL standard (1992): based on anomalies

In 1992, ANSI made the first attempt to specify a unified isolation level standard. It defined anomalies of different severities and graded the isolation levels by how many of these anomalies they avoid. The anomalies are as follows (a small sketch reproducing one of them appears after the list):

Dirty Read: reading data that has not yet been committed by another transaction;

Non-Repeatable/Fuzzy Read: two reads of the same data item return different results because another transaction has modified or deleted it in between;

Phantom Read: the result of a range query (such as a WHERE-condition query) becomes invalid because another transaction has modified, inserted, or deleted rows.
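To make one of these concrete, here is a minimal sketch in plain Python (not tied to any real DBMS) that treats the database as a single-versioned dictionary and applies two transactions' operations in one particular interleaving; the values and variable names are made up for illustration.

```
# Toy single-versioned "database": every write is immediately visible,
# so anomalies can be reproduced just by choosing the interleaving.
db = {"x": 0}

# Non-Repeatable/Fuzzy Read: T1 reads x twice, T2 updates x and commits in between.
r1_first = db["x"]      # T1: r1[x] -> 0
db["x"] = 42            # T2: w2[x]
                        # T2: c2 (commit is a no-op in this toy model)
r1_second = db["x"]     # T1: r1[x] -> 42
assert r1_first != r1_second   # the same item read twice gives different results
```

Dirty Read and Phantom Read can be reproduced in the same way, by reading an uncommitted write or by re-running a range query after a concurrent insert.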

A Critique of ANSI (1995): based on locks

A few years later, researchers from Microsoft and elsewhere criticized the ANSI standard in A Critique of ANSI SQL Isolation Levels, pointing out two fatal problems:

1. Incompleteness: Dirty Write is not excluded

None of the isolation levels in the ANSI SQL standard excludes Dirty Write. Dirty Write means that two uncommitted transactions modify the same object one after the other. Dirty Write is an anomaly mainly because it causes consistency problems like the following:

H0: w1[x] w2[x] w2[y] c2 w1[y] c1

In this history, suppose there is an integrity constraint x=y, T1 attempts to set both to 1, and T2 attempts to set both to 2. Any serial execution would leave both equal to 1 or both equal to 2. Because of the Dirty Write, however, the final result is x=2, y=1, which violates the constraint.
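H0 can be replayed directly against the same kind of toy store; this is only a sketch of the interleaving described above, assuming the constraint x = y and the values 1 and 2 from the text.

```
db = {"x": 0, "y": 0}   # integrity constraint: x == y

db["x"] = 1             # w1[x]  T1 wants x = y = 1
db["x"] = 2             # w2[x]  T2 overwrites T1's uncommitted write (Dirty Write)
db["y"] = 2             # w2[y]
                        # c2
db["y"] = 1             # w1[y]
                        # c1

print(db)               # {'x': 2, 'y': 1}
assert db["x"] != db["y"]   # both transactions committed, yet x == y is violated
```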

2. Ambiguity

The English wording of ANSI SQL is ambiguous. Take Phantom as an example, as shown in the following history H3:

H3: r1[P] w2[insert y to P] r2[z] w2[z] c2 r1[z] c1

Suppose T1 queries the list of all employees satisfying condition P. T2 then inserts a new employee and increments the employee count z, after which T1 reads z. In the end the number of employees in T1's list is smaller than z, which is inconsistent. Yet T1 never reads the predicate P again after T2's modification, so does this history fall under ANSI's definition of Phantom at all? This ambiguity leads to two readings of ANSI: a strict one and a loose one. The same problem applies to Dirty Read and Non-Repeatable/Fuzzy Read.
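The same kind of replay makes H3 concrete; the employee names and the counter variable are hypothetical, only the shape of the history comes from the text.

```
employees = ["alice", "bob"]     # rows satisfying predicate P
z = len(employees)               # separately stored employee count

t1_list = list(employees)        # r1[P]: T1 materializes the employee list
employees.append("carol")        # w2[insert y to P]: T2 adds an employee
z = z + 1                        # r2[z] w2[z]: T2 updates the count
                                 # c2
t1_count = z                     # r1[z]: T1 reads the count
                                 # c1

assert len(t1_list) < t1_count   # T1's list has 2 entries but the count says 3
```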

So how can these two problems be solved? The answer given by A Critique of ANSI is to err on the side of over-restriction rather than let a single anomaly slip through, that is, to adopt the strictest possible reading of the anomalies in the ANSI standard. The Critique redefines them as the following phenomena (a small checker for the item-level patterns is sketched after the list):

P0: w1[x]…w2[x]…(c1 or a1) (Dirty Write)

P1: w1[x]…r2[x]…(c1 or a1) (Dirty Read)

P2: r1[x]…w2[x]…(c1 or a1) (Fuzzy or Non-Repeatable Read)

P3: r1[P]…w2[y in P]…(c1 or a1) (Phantom)
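Because P0-P2 are pure patterns over read/write/commit events on single items, they can be detected mechanically. The following is a rough sketch of such a checker; the encoding of a history as (op, txn, item) tuples and the function name are my own, and P3 is omitted because it would need predicate information.

```
def phenomena(history):
    """Return which of the item-level phenomena P0-P2 occur in a history,
    given as a list of (op, txn, item) tuples; commits/aborts use item=None."""
    end = {}                                   # txn -> index of its commit/abort
    for i, (op, txn, _) in enumerate(history):
        if op in ("c", "a"):
            end[txn] = i
    found = set()
    for i, (op1, t1, x1) in enumerate(history):
        if x1 is None:
            continue
        # look at later operations on the same item before T1 commits/aborts
        for op2, t2, x2 in history[i + 1:end.get(t1, len(history))]:
            if t2 == t1 or x2 != x1:
                continue
            if op1 == "w" and op2 == "w":
                found.add("P0")                # w1[x] ... w2[x] ... (c1 or a1)
            elif op1 == "w" and op2 == "r":
                found.add("P1")                # w1[x] ... r2[x] ... (c1 or a1)
            elif op1 == "r" and op2 == "w":
                found.add("P2")                # r1[x] ... w2[x] ... (c1 or a1)
    return found

# H0 from above: w1[x] w2[x] w2[y] c2 w1[y] c1
h0 = [("w", 1, "x"), ("w", 2, "x"), ("w", 2, "y"),
      ("c", 2, None), ("w", 1, "y"), ("c", 1, None)]
print(phenomena(h0))                           # {'P0'}
```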

By this point the definitions are very strict: the offending read/write interleavings are simply forbidden outright. As the details below show, the resulting definitions are essentially lock-based (a toy lock-table sketch follows the list):

Read Uncommitted, prevents P0: a long write lock on x, held for the whole transaction

Read Committed, prevents P0, P1: short read locks + long write locks

Repeatable Read, prevents P0, P1, P2: long read locks + short predicate locks + long write locks

Serializable, prevents P0, P1, P2, P3: long read locks + long predicate locks + long write locks
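A toy lock table is enough to see why these definitions are lock-shaped: a long write lock, released only at commit, makes the H0 interleaving impossible. This is only a sketch under the simplest possible model (no waiting queue, no read or predicate locks), not any real engine's locking protocol.

```
class LockTable:
    """Only long write locks are modeled; 'long' means released at commit."""
    def __init__(self):
        self.writers = {}                     # item -> txn holding the write lock

    def write(self, txn, item):
        holder = self.writers.get(item)
        if holder is not None and holder != txn:
            raise RuntimeError(f"T{txn} must wait: T{holder} holds the write lock on {item}")
        self.writers[item] = txn              # acquire (or re-enter) the lock

    def commit(self, txn):
        # long locks are released only here
        self.writers = {k: v for k, v in self.writers.items() if v != txn}

locks = LockTable()
locks.write(1, "x")                           # w1[x]: T1 takes a long write lock on x
try:
    locks.write(2, "x")                       # w2[x]: cannot proceed before c1 or a1
except RuntimeError as e:
    print(e)                                  # T2 must wait: T1 holds the write lock on x
```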

The nature of the problem

As we can see, defining isolation this way guarantees correctness, but it ties the standard to an implementation: definitions this strict forbid schedules that are perfectly normal under optimistic or multi-version concurrency control (see the sketch after the list):


For P0: an optimistic implementation may allow multiple transactions to write to their own local copies; at commit time all of them can succeed as long as the commit order is suitable, and a transaction is aborted only when truly necessary. This is ruled out by P0;

For P2: as long as T1 does not read x again, has no later operations that depend on x, and commits before T2, the schedule is acceptable under an optimistic implementation, but it is ruled out by P2.
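The second point can be illustrated with a rough sketch of backward OCC validation (the structure here is hypothetical, not any particular engine): a transaction commits if its read set does not intersect the write set of any transaction that committed during its lifetime. The P2-shaped history r1[x] … w2[x] … c1 c2 passes validation and is serializable as T1 before T2, even though phenomenon P2 forbids the interleaving outright.

```
committed = []          # (start_no, commit_no, write_set) of committed transactions
clock = 0

def validate_and_commit(start_no, read_set, write_set):
    """Backward validation: fail if we read an item written by a
    transaction that committed after our start."""
    global clock
    for _, commit_no, other_writes in committed:
        if commit_no > start_no and read_set & other_writes:
            return False
    clock += 1
    committed.append((start_no, clock, write_set))
    return True

# r1[x] ... w2[x] ... c1 c2: both transactions start at clock 0.
t1_start, t1_reads, t1_writes = clock, {"x"}, set()     # T1 only reads x
t2_start, t2_reads, t2_writes = clock, set(), {"x"}     # T2 writes a private copy of x

print(validate_and_commit(t1_start, t1_reads, t1_writes))   # c1 -> True
print(validate_and_commit(t2_start, t2_reads, t2_writes))   # c2 -> True
```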

Recall that the problems of the ANSI standard pointed out in A Critique of ANSI, namely Dirty Write and ambiguity, are in fact caused by constraints among multiple objects, as shown in the figure below. In the figure, the black region represents the anomaly that ANSI describes for a particular phenomenon, and the gray region represents the anomalies arising from constraints among multiple objects. Because this gray part cannot be expressed with the traditional single-object anomaly definitions, the only option is to widen the prohibition to the yellow region, which also rules out perfectly normal schedules.
