Use of Time in Distributed Databases (part 3): Synchronized clocks in databases

This is part 3 of our "Use of Time in Distributed Databases" series. In this post, we explore how synchronized physical clocks enhance database systems, focusing on research and prototype databases. Discussion of time's role in production databases will follow in our next post.

To begin, let's revisit the utility of synchronized clocks in distributed systems. As highlighted in Part 1, synchronized clocks provide a shared time reference across distributed nodes and partitions. For simple, single-key replication tasks, such precision is often unnecessary, and leader-based approaches such as MultiPaxos or Raft are much more appropriate. Even WPaxos might be considered if you need a WAN deployment. Of course, if you want to go very fancy with a leaderless design, such as those in the EPaxos family (Tempo, Accord), then dependency graphs and time synchronization re-enter the picture.

The true value of synchronized clocks becomes apparent in distributed multi-key operations. By aligning nodes to a shared reference frame, these clocks eliminate the need for some coordination message exchanges across nodes/partitions, cutting latency and boosting throughput.

There are two key enhancements that facilitate the use of synchronized time in databases.

The first is Multiversion Concurrency Control. MVCC allows each key to maintain multiple versions over time, enabling operations like reading "at a timestamp" or writing "at a timestamp." This simplifies transactional reads by offering consistent snapshots of the database at a specific moment. While MVCC enhances efficiency, it is not strictly required. Bernstein & Goodman's groundbreaking basic Timestamp Ordering (TSO) algorithm (VLDB'80) operated without MVCC, relying instead on single-version storage and timestamping. MVCC, however, reduces contention and improves performance, making it a valuable enhancement employed by several of the systems (Clock-SI, GentleRain, Scalable OLTP) surveyed in this post.
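To make the "read at a timestamp" idea concrete, here is a minimal MVCC sketch (my illustration, not taken from any one of these papers): each key keeps a timestamp-sorted version chain, and a snapshot read returns the latest version at or below the snapshot timestamp.

```python
import bisect

class MVCCStore:
    def __init__(self):
        self.ts = {}    # key -> sorted list of commit timestamps
        self.vals = {}  # key -> list of values, aligned with self.ts

    def write(self, key, value, commit_ts):
        i = bisect.bisect_right(self.ts.setdefault(key, []), commit_ts)
        self.ts[key].insert(i, commit_ts)
        self.vals.setdefault(key, []).insert(i, value)

    def read(self, key, snapshot_ts):
        # Latest version with commit_ts <= snapshot_ts, else None.
        i = bisect.bisect_right(self.ts.get(key, []), snapshot_ts)
        return self.vals[key][i - 1] if i > 0 else None

store = MVCCStore()
store.write("x", "v1", commit_ts=10)
store.write("x", "v2", commit_ts=20)
assert store.read("x", snapshot_ts=15) == "v1"  # snapshot at t=15 sees v1
```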

The second is more tightly synchronized clocks. Tighter bounds on clock precision mean less time spent waiting to account for potential skew. Of course, if you have tightly synchronized clocks, as Spanner does, you can choose to provide strictly serializable transactions (which we will discuss in our next post). But tightly synchronized clocks were not publicly available before the 2020s, so most of the systems we discuss today make do with loosely synchronized clocks, and in order not to impose too much wait time, they go with snapshot isolation (SI). This is a very smart tradeoff to make, because despite the prevalence of serializability in academia, read-committed, repeatable-read, and snapshot isolation dominate in practice/industry.
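The cost of clock uncertainty shows up as a wait-out-the-skew pattern that recurs throughout these systems. Here is a minimal sketch (the EPSILON value and function name are my own illustration): before exposing a commit, a node waits until every clock in the system has surely passed the commit timestamp, so a tighter epsilon directly shrinks the wait.

```python
import time

EPSILON = 0.007  # assumed worst-case clock skew bound, in seconds

def expose_after_commit_wait(commit_ts):
    # Wait until every node's clock has surely passed commit_ts. With
    # loosely synchronized clocks EPSILON is large and this wait hurts;
    # with tightly synchronized clocks it becomes negligible.
    while time.time() < commit_ts + EPSILON:
        time.sleep(0.0005)
```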

In this post, we explore research and prototype systems that employ synchronized clocks, in chronological order. Early systems leveraged synchronized clocks primarily for read-only transactions and snapshots, reaping low-hanging fruit. Over time, these systems evolved to tackle read-write transactions and employ more advanced techniques. As we progress through this timeline, you'll see how synchronized clocks take on increasingly critical roles in database design.

We cover the following:

  • Granola: Low overhead distributed transaction coordination (ATC'12)
  • Clock-SI: Snapshot Isolation for Partitioned Data Stores Using Loosely Synchronized Clocks (SRDS'13)
  • GentleRain: Cheap and Scalable Causal Consistency with Physical Clocks (SOCC'14)
  • Scalable Causal Consistency with No Slowdown Cascades (NSDI'17)
  • Nezha: Deployable and High-Performance Consensus Using Synchronized Clocks (VLDB'23)
  • Scalable OLTP in the Cloud: What’s the BIG DEAL? (CIDR'24)

As the titles hint, we'll see below that synchronized clocks have been employed to reduce coordination and achieve scalability in these distributed databases.


Granola: Low overhead distributed transaction coordination (ATC'12)

The Granola paper aimed to provide a low-overhead approach to distributed transaction coordination, tailored for one-shot (non-interactive) transactions. The system uses loosely synchronized clocks to enhance throughput without relying on them for correctness. (After all, Barbara Liskov is an author on this paper; recall the advice in her PODC 1991 paper to use synchronized clocks for performance, never for correctness.)

Granola operates in two distinct modes, Timestamp Mode and Locking Mode, switching between them on-the-fly based on the characteristics of the transactions being processed.

In Timestamp Mode, the system eschews locking to enable timestamp-based serializability, excelling at handling single-repository and independent (local-read) distributed transactions with high throughput and minimal overhead.

However, when coordinated transactions requiring remote reads or cross-node dependencies arrive, Granola transitions the affected repositories to Locking Mode. This ensures serializability through traditional locking mechanisms. Once these coordinated transactions are completed, repositories can revert to Timestamp Mode, restoring efficiency.
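To make Timestamp Mode concrete, here is a rough sketch under my own naming (not Granola's actual code): for an independent distributed transaction, each participant repository proposes a timestamp from its loosely synchronized local clock, the maximum proposal becomes the transaction's timestamp, and every repository executes it in timestamp order, with no locks or remote reads.

```python
import time

class Repository:
    def __init__(self, name):
        self.name = name
        self.schedule = []  # (final_ts, txn) pairs, executed in ts order

    def propose_ts(self):
        # Loosely synchronized local clock: a skewed clock only delays a
        # transaction behind a too-large timestamp; it cannot break
        # serializability.
        return time.time()

    def commit(self, txn, final_ts):
        self.schedule.append((final_ts, txn))
        self.schedule.sort()  # execute strictly in timestamp order

def run_independent_txn(txn, participants):
    # One round of timestamp votes replaces lock acquisition.
    final_ts = max(r.propose_ts() for r in participants)
    for r in participants:
        r.commit(txn, final_ts)
    return final_ts
```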


Clock-SI: Snapshot Isolation for Partitioned Data Stores Using Loosely Synchronized Clocks (SRDS'13)

The Clock-SI paper implements snapshot isolation (SI) in partitioned multi-version data stores using loosely synchronized clocks. It ensures that read-only transactions always observe consistent snapshots by leveraging local physical clocks to assign snapshot and commit timestamps. It compensates for clock skew by introducing response delays that wait out the clock-uncertainty bounds and account for the pending commits of update transactions.


For read operations, transactions observe the latest version whose timestamp is smaller than their snapshot timestamp. This ensures consistent reads while allowing read-only transactions to commit unconditionally. Clock-SI also delays reads to account for pending updates from concurrent transactions.
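Here is a sketch of the two delay rules (the helper methods on `partition` are hypothetical stand-ins for the paper's mechanisms): a read waits if its snapshot timestamp is ahead of the partition's clock, and again if the version it would return belongs to a transaction that is still committing.

```python
import time

def clock_si_read(partition, key, snapshot_ts):
    # Rule 1: if the snapshot timestamp (assigned at another partition)
    # is ahead of this partition's clock, wait out the skew.
    while partition.clock() < snapshot_ts:
        time.sleep(0.0005)
    version = partition.latest_version_before(key, snapshot_ts)
    # Rule 2: if that version's transaction is still committing with a
    # commit timestamp below our snapshot, wait for it to finish.
    while version.state == "COMMITTING" and version.commit_ts < snapshot_ts:
        time.sleep(0.0005)
        version = partition.latest_version_before(key, snapshot_ts)
    return version.value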

Employing Hybrid Logical Clocks (HLC) would help avoid the clock-skew read delay (rule 1 above), because HLC encodes happened-before information in addition to physical clock time.
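For reference, here is a compact HLC sketch following the Kulkarni et al. HLC algorithm: `l` tracks the largest physical time seen, and `c` is a logical counter that captures happened-before among events with the same `l`. Because a message's HLC timestamp already dominates its causal past, a reader positioned by HLC need not wait out physical skew.

```python
class HLC:
    def __init__(self, physical_clock):
        self.pt = physical_clock  # callable returning physical time
        self.l, self.c = 0, 0

    def send_or_local(self):
        pt = self.pt()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def receive(self, l_msg, c_msg):
        pt = self.pt()
        l_new = max(self.l, l_msg, pt)
        if l_new == self.l == l_msg:
            self.c = max(self.c, c_msg) + 1
        elif l_new == self.l:
            self.c += 1
        elif l_new == l_msg:
            self.c = c_msg + 1
        else:
            self.c = 0
        self.l = l_new
        return (self.l, self.c)
```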


GentleRain: Cheap and Scalable Causal Consistency with Physical Clocks (SOCC'14)  

GentleRain is a follow-up to the ORBE (SOCC'13) multi-version database we reviewed in Part 2. ORBE used a matrix of vector clocks for dependency checking. GentleRain aims to reduce the metadata piggybacked on update propagation and to eliminate complex dependency-checking procedures for causal consistency. It does this by employing synchronized physical clocks to replace explicit dependency tracking: instead of ORBE's matrix of vector clocks, GentleRain attaches a single physical clock timestamp to each update. The tradeoff is that an update's visibility is delayed until all partitions in the data center have seen all previous updates (updates with smaller timestamps), but this ensures causality without explicit dependency checks or extra metadata.
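The mechanism boils down to a single watermark. A sketch (variable names are my own): the Global Stable Time (GST) is the minimum update timestamp that every partition has applied, and a remote update becomes visible only once GST passes its timestamp; since all of its causal dependencies carry smaller timestamps, they are guaranteed to be visible by then too.

```python
def gst(partition_watermarks):
    # Each partition reports the latest update timestamp it has applied;
    # GST is the minimum across all partitions in the data center.
    return min(partition_watermarks.values())

def is_visible(update_ts, partition_watermarks):
    return update_ts <= gst(partition_watermarks)

watermarks = {"p1": 105, "p2": 98, "p3": 112}
assert is_visible(95, watermarks)        # every partition is past t=95
assert not is_visible(100, watermarks)   # p2 has not reached t=100 yet
```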

The delay in PUT operations that GentleRain requires can hurt the write throughput of the key-value store. CausalSpartan solves this problem by replacing GentleRain's physical clocks with Hybrid Logical Clocks (HLC).


Scalable Causal Consistency with No Slowdown Cascades (NSDI'17)

Occult (Observable Causal Consistency Using Lossy Timestamps) introduces a novel approach to implementing causal consistency in geo-replicated data stores by shifting enforcement to the client side. Rather than attempting to enforce causal consistency within the data store itself, Occult ensures that clients observe a causally consistent view of the system. This strategic shift to a client-centric specification of causal consistency must have seeded the later, more general treatment of client-centric isolation levels.

Another key innovation of Occult is its relaxation of the Parallel Snapshot Isolation (PSI) requirements. While PSI demands a total ordering of transactions committed at the same replica, PC-PSI (Per-Client Parallel Snapshot Isolation) only requires a total ordering per client session. This relaxation, implemented through a combination of loosely synchronized clocks and hybrid logical clocks (HLC), enables lightweight dependency tracking without sacrificing consistency guarantees. Combined with the client-centric specification of causal consistency, PC-PSI enables Occult to avoid slowdown cascades, removing a significant barrier to deploying causal consistency at scale.

Occult also provides comprehensive support for read-write transactions, moving beyond the limited read-only and write-only transactions common in earlier approaches. Occult guarantees that all transactions read from causally consistent snapshots without requiring coordination during asynchronous replication; instead, the client either retries the read locally or reads from the master. Occult achieves atomicity by making writes causally dependent on each other, leveraging causality to enforce stronger consistency properties.
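A sketch of the client-side check (the replica/master API here is hypothetical): the client carries per-shard "shardstamps" summarizing what it has already observed; if a replica's shardstamp is behind, the read is stale, and the client retries locally a few times before falling back to the shard's master.

```python
def causal_read(client_shardstamps, replica, master, key, retries=3):
    shard = replica.shard_of(key)
    needed = client_shardstamps.get(shard, 0)
    for _ in range(retries):
        value, replica_stamp = replica.get(key)
        if replica_stamp >= needed:   # replica reflects the client's past
            return value
    return master.get(key)[0]         # fall back to the up-to-date master
```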


Nezha: Deployable and High-Performance Consensus Using Synchronized Clocks (VLDB'23)

Nezha is not a database per se, but its approach to using time synchronization for consensus and state-machine replication is noteworthy and could be useful in distributed database systems. The protocol leverages synchronized clocks to decrease latency and increase throughput by offloading traditional leader or sequencer-based ordering to synchronized clocks. This enables decentralized coordination without relying on network routers or sequencers, while using time synchronization on a best-effort basis without impacting correctness.

At the core of Nezha is the Deadline-Ordered Multicast (DOM) primitive, which assigns deadline timestamps to requests using synchronized clocks and only delivers them after the deadline is reached, in timestamp order. This creates a buffer that helps maintain consistent ordering across receivers. The system operates with a dedicated stable leader involved in both fast and slow path operations, where each replica follows the leader's log rather than attempting to piece together logs across multiple leaderless nodes. In the fast path, when time synchronization and message delivery work well, Nezha achieves one-RTT consensus.
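A minimal DOM sketch (DELTA is an assumed latency-bound knob, and the class and function names are my own): senders stamp requests with deadline = now + DELTA; each receiver buffers requests in a deadline-ordered heap and releases one only after its own synchronized clock passes the deadline, so all receivers that get the messages in time deliver them in the same timestamp order.

```python
import heapq, itertools, time

DELTA = 0.005  # estimated one-way latency bound, in seconds

def dom_multicast(request, receivers):
    deadline = time.time() + DELTA
    for r in receivers:
        r.buffer(deadline, request)

class DOMReceiver:
    def __init__(self):
        self.heap = []
        self.seq = itertools.count()  # tiebreaker for equal deadlines

    def buffer(self, deadline, request):
        heapq.heappush(self.heap, (deadline, next(self.seq), request))

    def deliver_ready(self):
        # Release requests whose deadline has passed, in timestamp order.
        ready = []
        while self.heap and self.heap[0][0] <= time.time():
            ready.append(heapq.heappop(self.heap)[2])
        return ready
```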

The protocol's design allows for high scalability, as multiple proxies can send their DOM requests using local clocks without inter-proxy communication, with time synchronization ensuring consistent request ordering at the replicas. The leader executes requests speculatively while replicas initially just acknowledge message delivery, executing requests later after confirming the leader's order. If the fast path conditions aren't met (when a super-majority quorum doesn't have the same value as the leader), the system falls back to a more traditional asynchronous slow path where replicas stream the log from the leader. The evaluation suggests that Nezha significantly outperforms previous protocols, including achieving order-of-magnitude improvements in throughput.


Scalable OLTP in the Cloud: What’s the BIG DEAL? (CIDR'24)

Pat Helland's prototype database architecture aims to show how we can build scalable OLTP systems by leveraging time as a primary organizing principle. The design moves away from traditional multi-version concurrency control (MVCC) databases where reads and writes contend for access to a "current" value at a home location, and instead organizes data primarily by creation time to achieve better scaling. This temporal-first approach eliminates the need for pre-assigned record homes, allowing the database to seamlessly adapt to workload changes.

The system uses a combination of worker servers and owner servers to manage transactions. Workers execute transactions and maintain their own transaction logs, while owners handle concurrency control by verifying that concurrent transactions don't create conflicting updates. The architecture uses time extensively in its operation: workers guess future commit times for transactions, owner servers align commit times for records, and all record versions are organized first by time and then by key in the LSM (log structured merge tree) storage.

Time also plays a crucial role in providing external consistency and snapshot isolation in this architecture. By using current time (T-now) as the snapshot time, the system ensures that new incoming requests see all previously exposed data, even across different database connections. Everything in the database is versioned by record-version commit time, with reads accessing old record versions as of a past snapshot and row-locks ensuring locked records remain unchanged until commit time. This time-based organization allows the database to scale without requiring coordination across disjoint transactions that are reading and updating different records.  
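A toy sketch of the time-first layout (my illustration, not the paper's code): record versions are sorted by (commit_time, key) instead of (key, commit_time), so writes append to the recent end of the LSM regardless of which record they touch, and a snapshot read at T-now simply ignores everything past T.

```python
import bisect

log = []  # versions sorted by (commit_time, key): time-first, not key-first

def write(commit_time, key, value):
    bisect.insort(log, (commit_time, key, value))

def snapshot_read(key, t_now):
    # A real system would index this scan; the point is that the layout
    # requires no pre-assigned per-record "home" partition.
    i = bisect.bisect_right(log, (t_now, "\U0010ffff"))
    for commit_time, k, value in reversed(log[:i]):
        if k == key:
            return value
    return None

write(10, "a", "a1"); write(12, "b", "b1"); write(15, "a", "a2")
assert snapshot_read("a", 14) == "a1"  # snapshot at T=14 misses the t=15 write
```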
