DDIA: Chp 7. Transactions (Part 2): Serializability

We are continuing from the first part of our Chapter 7 review. Serializable isolation guarantees that the final result of running concurrent transactions is equivalent to some serial execution of them, one at a time, without any concurrency. This rules out concurrency anomalies by construction, since the transactions behave as they would in a sequential environment.

Databases offering serializability typically use one of three methods:

  • Executing transactions literally in a serial order 
  • Two-phase locking (2PL)
  • Optimistic concurrency control, like serializable snapshot isolation (SSI)

For now, we will focus on single-node databases. The book discusses how these methods apply to distributed systems in Chapter 9.


Actual Serial Execution

Serial transaction execution is used in systems like VoltDB/H-Store, Redis, and Datomic. Limiting the database to a single thread can actually boost performance by eliminating coordination overhead, but it caps throughput at the capacity of one CPU core. Unlike traditional databases, these systems don't allow interactive multi-statement transactions: the entire transaction logic must be submitted up front as a stored procedure (see Figure 7-9 in the book). With all data in memory, the procedure can run to completion quickly, without waiting on network or disk I/O.
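Redis makes this pattern concrete: the whole transaction is shipped to the server as a Lua script, which runs atomically on Redis's single-threaded execution loop with no interactive round-trips in the middle. Here is a minimal sketch using the redis-py client; the key names and the transfer logic are my own illustrative choices, not from the book:

    import redis  # pip install redis

    # The whole transaction is submitted up front; Redis runs the script
    # atomically on its single thread, so no other command interleaves.
    TRANSFER = """
    local balance = tonumber(redis.call('GET', KEYS[1]) or '0')
    local amount  = tonumber(ARGV[1])
    if balance < amount then return 0 end
    redis.call('DECRBY', KEYS[1], amount)
    redis.call('INCRBY', KEYS[2], amount)
    return 1
    """

    r = redis.Redis()
    # 2 = number of KEYS; the remaining argument becomes ARGV[1]
    ok = r.eval(TRANSFER, 2, "acct:alice", "acct:bob", 50)

Because the server never waits on the client mid-transaction, serial execution stays fast even though it uses just one core.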

Stored procedures, though part of the SQL standard since 1999, have a bad reputation due to:

  • Vendor-specific languages (PL/SQL for Oracle, T-SQL for SQL Server, etc.), which lag behind modern programming languages
  • Difficulty in managing database-resident code compared to application servers: it's harder to debug, version control, deploy, test, and monitor.
  • Greater performance and security sensitivity: a poorly written stored procedure can degrade overall database performance more than poor application server code.

Modern systems have moved away from PL/SQL and adopted general-purpose languages: VoltDB uses Java or Groovy, Datomic uses Java or Clojure, and Redis uses Lua. But the other two concerns remain largely unaddressed. Moreover, giving up the generality of interactive transactions is a big NO for many users.

There has been consistent academic follow-up work on deterministic database systems, but these systems have not gained traction in practical industry use.

Actually, single-shot transactions, which are limited in scope, can also be seen as an instance of this serial execution approach. Consider DynamoDB transactions, which I reviewed here. They use a timestamp-ordering (TSO) algorithm with optimistic concurrency control to achieve serializability. The reason this is possible to implement without headaches is that DynamoDB serves only limited-scope, single-shot transactions rather than generalized interactive transactions.
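To make the TSO idea concrete, here is a toy timestamp-ordering check in Python. This is my simplified sketch of the general technique, not DynamoDB's actual implementation: a single-shot transaction is validated, and its effects applied, in one step against per-item read/write timestamps:

    class Item:
        def __init__(self):
            self.read_ts = 0   # highest timestamp that has read this item
            self.write_ts = 0  # timestamp of the last committed write

    def try_commit(ts, reads, writes, store):
        # Reject if we would read data newer than our timestamp, or
        # overwrite something a later-timestamped transaction touched.
        for k in reads:
            if store[k].write_ts > ts:
                return False
        for k in writes:
            if store[k].read_ts > ts or store[k].write_ts > ts:
                return False
        # Accept: record our reads and writes in timestamp order.
        for k in reads:
            store[k].read_ts = max(store[k].read_ts, ts)
        for k in writes:
            store[k].write_ts = ts
        return True

    store = {"x": Item(), "y": Item()}
    committed = try_commit(ts=42, reads={"x"}, writes={"y"}, store=store)

Because the transaction arrives in one shot, the check and the apply happen together; there is no window where a client holds the transaction open.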


Two-Phase Locking (2PL)

For decades, 2PL was the primary method for enforcing serializability. Here, several transactions can read the same object using a shared lock, but any write operation requires exclusive access:

  • If Transaction A reads an object and Transaction B wants to write to it, B must wait until A commits or aborts.
  • If Transaction A writes to an object and Transaction B wants to read it, B must wait until A finishes.
  • A transaction must hold onto its locks until it commits or aborts -- hence the "two-phase" name: a growing phase in which locks are acquired during execution, and a shrinking phase in which they are all released at the end.

In 2PL, writers block readers and vice versa. Contrast this with snapshot isolation, whose mantra is "readers never block writers, and writers never block readers".
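Here is a minimal sketch of such a shared/exclusive lock manager in Python. It is an illustration of the rules above, not any real database's implementation; notably it has no deadlock detection, which a real system would need (see below):

    import threading
    from collections import defaultdict

    class LockManager:
        """Strict 2PL: locks are held until release_all at commit/abort."""

        def __init__(self):
            self.mu = threading.Condition()
            self.readers = defaultdict(set)  # obj -> txn ids holding shared locks
            self.writer = {}                 # obj -> txn id holding exclusive lock

        def acquire_shared(self, txn, obj):
            with self.mu:
                # Block while another transaction holds the exclusive lock.
                while self.writer.get(obj) not in (None, txn):
                    self.mu.wait()
                self.readers[obj].add(txn)

        def acquire_exclusive(self, txn, obj):
            with self.mu:
                # Block while any other transaction reads or writes the object.
                while (self.writer.get(obj) not in (None, txn)
                       or self.readers[obj] - {txn}):
                    self.mu.wait()
                self.writer[obj] = txn

        def release_all(self, txn):
            # Phase two: release everything at once, then wake waiters.
            with self.mu:
                for obj in list(self.readers):
                    self.readers[obj].discard(txn)
                for obj, holder in list(self.writer.items()):
                    if holder == txn:
                        del self.writer[obj]
                self.mu.notify_all()

The unbounded while-loops are exactly the blocking behavior discussed next: a waiter has no limit on how long it may wait.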

The major downside of 2PL is its performance hit. Transactions frequently block each other, reducing concurrency, which can cause queues and long wait times. Traditional relational databases don’t limit the duration of a transaction, because they are designed for interactive applications that wait for human input. Consequently, when one transaction has to wait on another, there is no limit on how long it may have to wait. Deadlocks are also more frequent under 2PL, especially with complex access patterns, leading to transaction aborts and retries, wasting resources.

To ensure serializability, 2PL must also handle phantom reads. This requires predicate locks, which lock all objects matching a search condition, including objects that are not yet in the database. However, predicate locks perform poorly, so most databases use index-range locking (also called next-key locking), which over-approximates the predicate lock at a much lower cost: an approximation of the search condition is attached to one of the indexes. If there is no suitable index to attach a range lock to, the database can fall back to a shared lock on the entire table.
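A toy sketch of the over-approximation idea: instead of locking a precise predicate, lock a contiguous range of index keys that covers it. Overlapping ranges conflict even when the exact predicates would not, so the scheme may over-lock but never under-locks. The key-range framing here is my simplification:

    class RangeLockTable:
        """Approximates predicate locks with locks on ranges of index keys."""

        def __init__(self):
            self.ranges = []  # (lo, hi, txn), held until commit/abort

        def lock(self, txn, lo, hi):
            # Conflict if any other transaction holds an overlapping range;
            # this can be a false positive relative to the true predicates,
            # but it also covers rows that do not exist yet (phantoms).
            for (l, h, holder) in self.ranges:
                if holder != txn and lo <= h and l <= hi:
                    raise RuntimeError(f"range [{lo},{hi}] conflicts with txn {holder}")
            self.ranges.append((lo, hi, txn))

        def release_all(self, txn):
            self.ranges = [r for r in self.ranges if r[2] != txn]

For example, a booking query for room 123 would lock the range (123, 123) on the room_id index, blocking inserts of new bookings for that room even though those rows don't exist yet.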

Today, 2PL is used by the serializable isolation level in MySQL (InnoDB) and SQL Server, and by the repeatable read isolation level in DB2.


Serializable Snapshot Isolation (SSI)

Achieving both serializability and good performance is hard. Serial execution is limited and not general. 2PL is slow and blocking. Serializable Snapshot Isolation (SSI) offers another option: full serializability at only a small performance cost compared to snapshot isolation. SSI was first described in 2008 and is the subject of Michael Cahill's PhD thesis; it is available in PostgreSQL (since version 9.1) and in FoundationDB.

SSI uses optimistic concurrency control (OCC): transactions proceed without blocking, and at commit time, the database checks for isolation violations. If a conflict is detected, the transaction is aborted and retried. Optimistic approaches often outperform pessimistic ones when contention is low.
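In practice this means the application must be prepared to retry aborted transactions. Here is a minimal retry loop against PostgreSQL's serializable level using psycopg2; the DSN, the accounts table, and the transfer logic are illustrative assumptions on my part:

    import psycopg2
    from psycopg2 import errors
    from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

    def transfer(dsn, src, dst, amount, max_retries=5):
        conn = psycopg2.connect(dsn)
        conn.set_session(isolation_level=ISOLATION_LEVEL_SERIALIZABLE)
        try:
            for _ in range(max_retries):
                try:
                    with conn:  # commits on success, rolls back on error
                        with conn.cursor() as cur:
                            cur.execute("UPDATE accounts SET balance = balance - %s"
                                        " WHERE id = %s", (amount, src))
                            cur.execute("UPDATE accounts SET balance = balance + %s"
                                        " WHERE id = %s", (amount, dst))
                    return True
                except errors.SerializationFailure:
                    continue  # SSI aborted this transaction; just retry
            return False  # gave up under heavy contention
        finally:
            conn.close()

The retry loop is where the "optimistic" bet pays off or not: under low contention retries are rare, but a hot spot can make transactions abort repeatedly.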

Like snapshot isolation, SSI gives each transaction a consistent snapshot of the database for reads. But SSI adds a mechanism to detect serialization conflicts and abort problematic transactions. Here is my high-level understanding of how SSI works. SI has two anchor/projection points per transaction: reads are performed at the transaction's start point, and updates take effect at its commit point. SSI collapses these two projection points into one, as serializability demands. It does this by checking whether the reads can be moved from the start point to the commit point of the transaction; if this doesn't work, the transaction is aborted.

But that is not enough! To qualify as serializable isolation, the database must also handle phantom reads. For this, SSI tracks when a transaction ignores other transactions' writes due to MVCC visibility rules, and at commit time it checks whether any of those ignored writes have since been committed. If so, the transaction is aborted.

Dually, when a transaction writes, SSI checks whether the write affects recent reads by other transactions. Unlike 2PL, these checks don't block the writing transaction; they act as a notification, or "tripwire," warning the readers that their earlier reads may now be stale.
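Here is a toy version of these two checks in Python. It is deliberately coarser than real SSI, which only aborts when rw-antidependencies chain into a dangerous structure; this sketch simply flags any concurrent reader whose reads a committing writer overwrote, and aborts a flagged transaction at its own commit:

    class Txn:
        def __init__(self, tid):
            self.tid = tid
            self.read_set = set()
            self.write_set = set()
            self.flagged = False  # set when an earlier read becomes stale

    class ConflictDetector:
        def __init__(self):
            self.active = {}  # tid -> in-flight Txn

        def begin(self, tid):
            txn = Txn(tid)
            self.active[tid] = txn
            return txn

        def on_read(self, txn, key):
            txn.read_set.add(key)

        def on_write(self, txn, key):
            txn.write_set.add(key)

        def commit(self, txn):
            del self.active[txn.tid]
            if txn.flagged:  # our snapshot reads were invalidated: abort
                raise RuntimeError(f"txn {txn.tid}: serialization failure, retry")
            # Tripwire: warn concurrent readers of the keys we just wrote.
            for other in self.active.values():
                if other.read_set & txn.write_set:
                    other.flagged = True

Note that on_read and on_write never block; the only consequence of a conflict is an abort at commit time, which is exactly the optimistic trade-off.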

Handling phantom reads is costly, but SSI’s big advantage over two-phase locking is that it avoids blocking transactions. This results in more predictable query latencies, particularly for read-heavy workloads, and makes SSI less sensitive to slow transactions.
