Linearizability: A Correctness Condition for Concurrent Objects

This paper, by Herlihy and Wing, appeared in ACM Transactions on Programming Languages and Systems in 1990. It is the canonical reference for the definition of linearizability.

I had not read this paper in detail before, so I thought it would be good to go to the source to see if there are additional delightful surprises in the original text. Hence, this post.

I will dive into a technical analysis of the paper first, and then discuss some of my takes toward the end.

I had written an accessible explanation of linearizability earlier; you may want to read that first. I will assume an understanding of linearizability to keep this review at a reasonable length.


Introduction

I love how the old papers just barge in with the model, without being bothered by pleasantries such as motivating the problem. These are the first two sentences of the introduction.

"A concurrent system consists of a collection of sequential processes that communicate through shared typed objects. This model encompasses both message-passing architectures in which the shared objects are message queues, and shared-memory architectures in which the shared objects are data structures in memory." 

The major thing that pops up is the emphasis on the "object" model. C++ appeared in 1985, and object-oriented programming was all the hype at the time. And this is a programming languages journal, so the emphasis on objects is warranted.

A friend of mine is very impressed that this linearizability paper did not just consider reads/writes on values for illustrating linearizability, but instead used a queue object (with proper state and methods). He likes the generality this definition affords (compared to a read/write register model), and thinks this could have applications for improving availability. After considering the context of the paper, though, I am not sure if this was a conscious choice by the authors or just a historical accident.

The queue object has two methods, enqueue and dequeue. The paper uses a weird notation, but after correcting for it, you understand what is going on in these examples. E(x) A represents process A enqueuing item x: the invocation is the start of the interval, and the receipt of the response marks the end of the interval. Similarly, D(y) A means that A dequeued an item from the queue, and the returned item turned out to be item y.


Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. The history in Fig 1.b is not acceptable because clearly x was enqueued first, so the first dequeue operation should return x, not y--which is enqueued behind x. The history in Fig 1.d is not acceptable because you cannot dequeue y twice. It would have been acceptable if the second dequeue operation returned x, because the enqueues of x and y overlapped, and it is possible that y overtook x in the queue's linearization order. (Alex Miller created a tool that produces diagrams similar to Figure 1 from text annotations as input. Check it out.)
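These acceptability judgments can be made mechanical. Below is a minimal brute-force checker sketch; the representation and names are mine, not the paper's. An operation is a (process, op, value, invocation-time, response-time) tuple, and a history is linearizable if some total order of its operations respects real-time precedence and forms a legal sequential queue history.

from itertools import permutations

def legal_queue_history(seq):
    # A sequence of (op, value) pairs is legal iff it obeys FIFO semantics.
    q = []
    for op, value in seq:
        if op == "enq":
            q.append(value)
        elif not q or q.pop(0) != value:   # a deq must return the current head
            return False
    return True

def linearizable(ops):
    # ops: list of (process, op, value, inv_time, res_time) tuples.
    for perm in permutations(ops):
        # Real-time precedence: if an op responded before another was even
        # invoked, it must come earlier in the linearization order.
        respects_time = not any(perm[j][4] < perm[i][3]
                                for i in range(len(perm))
                                for j in range(i + 1, len(perm)))
        if respects_time and legal_queue_history([(o[1], o[2]) for o in perm]):
            return True
    return False

# Overlapping enqueues (like Fig 1.a): dequeuing y first is acceptable.
ok = [("A", "enq", "x", 0, 2), ("B", "enq", "y", 1, 3), ("A", "deq", "y", 4, 5)]
# E(x) completed before E(y) began (like Fig 1.b): D(y) is not acceptable.
bad = [("A", "enq", "x", 0, 1), ("B", "enq", "y", 2, 3), ("A", "deq", "y", 4, 5)]
print(linearizable(ok), linearizable(bad))   # True False

With overlapping enqueues the dequeue is free to return y, but once E(x) finishes before E(y) starts, no legal linearization lets the first dequeue return y. (The factorial search is for illustration only; checking a history for linearizability is NP-complete in general.)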

Behind the curtain, linearizability still admits a lot of concurrency/interleaving. I loved Fig 5, as it provides a great illustration of how an observation by the client leads to a collapse of multiple possible worlds behind the curtain. There is one reality, but there are multiple worlds/possibilities that fit the current observation, and a new observation leads to a collapse of the multiple worlds.


Here are some things I didn't like in the paper's explanations. In the queue implementation, the atomic blocks are unclear. Is the entire Enqueue method atomic? How about Deq: is each iteration of the while loop atomic, or the entire loop? The paper tries to be very precise in other places but drops the ball big time in this discussion. I am also bothered that they provided an inefficient, naive implementation of the queue.
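For what it's worth, here is my best reconstruction of the intended reading, as a runnable sketch: only the primitive steps (INC, STORE, READ, SWAP) are atomic, simulated here with a single lock, and neither Enq nor Deq is atomic as a whole. The Python rendering is mine, not the paper's code.

import threading

class HWQueue:
    # The paper's array-based queue, under the reading that only the
    # primitives INC, STORE, READ, and SWAP are atomic; the methods are not.
    def __init__(self, capacity=1024):
        self.items = [None] * capacity   # array assumed "big enough"
        self.back = 0                    # index of the next free slot
        self._atomic = threading.Lock()  # stands in for hardware atomics

    def _inc(self):                      # INC: atomic fetch-and-increment
        with self._atomic:
            i = self.back
            self.back += 1
            return i

    def _swap(self, i):                  # SWAP: atomically take a slot's item
        with self._atomic:
            x, self.items[i] = self.items[i], None
            return x

    def enq(self, x):
        i = self._inc()                  # reserve a slot (atomic step 1)
        self.items[i] = x                # STORE into it (atomic step 2)

    def deq(self):
        while True:                      # the loop is not atomic; each SWAP is
            limit = self.back            # READ(back) once per scan
            for i in range(limit):
                x = self._swap(i)
                if x is not None:
                    return x
            # on an empty queue, Deq just keeps scanning (a partial operation)

Under this reading, the inefficiency is plain: slots are never reused and every Deq rescans from the front. But it also hints at why the paper picked this example: there is no single statement you can point to as the linearization point of Enq.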


Theorems about Linearizability

This is the big theorem in the paper, and it is also supposed to be the big selling point of linearizability.

A property P of a concurrent system is said to be local if the system as a whole satisfies P whenever each individual object satisfies P. Linearizability is a local property:

THEOREM 1. H is linearizable if and only if, for each object x, H|x is linearizable.
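The practical payoff of locality is modular verification: a multi-object history can be checked one object at a time. Here is a sketch on top of the toy checker from above (the object-tagging representation is mine):

def local_check(history):
    # history: list of (object, process, op, value, inv_time, res_time).
    # By Theorem 1, checking each object's subhistory separately is both
    # sound and complete for the whole history.
    objects = {ev[0] for ev in history}
    return all(linearizable([ev[1:] for ev in history if ev[0] == obj])
               for obj in objects)

# Two independent queues p and q, each used FIFO-correctly.
h = [("p", "A", "enq", "x", 0, 1), ("q", "B", "enq", "y", 0, 1),
     ("p", "A", "deq", "x", 2, 3), ("q", "B", "deq", "y", 2, 3)]
print(local_check(h))   # True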


The theorem seems a bit tautological and straightforward to me. I found Theorem 2 much more impressive.

Linearizability is a nonblocking property: a pending invocation of a totally-defined operation is never required to wait for another pending invocation to complete.

THEOREM 2. Let inv be an invocation of a total operation. If <x inv P> is a pending invocation in a linearizable history H, then there exists a response <x res P> such that H · <x res P> is linearizable.
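Concretely, with the toy checker from above: B's pending dequeue of a non-empty queue (a total operation here, since something can legally be returned) can always be completed with some response.

# A finished E(x); B's dequeue is pending. Theorem 2 promises that some
# response completes the history while keeping it linearizable.
base = [("A", "enq", "x", 0, 1)]
completions = [v for v in ("x", "y")
               if linearizable(base + [("B", "deq", v, 2, 3)])]
print(completions)   # ['x'] -- a completing response exists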


Serializability and linearizability

The paper then says that serializability lacks these nice properties of linearizability. I mean, yeah, because it is not a single-object property. It is an isolation property, not a consistency property like linearizability.

https://jepsen.io/consistency


The paper says this: "Linearizability can be viewed as a special case of strict serializability where transactions are restricted to consist of a single operation applied to a single object. Nevertheless, this single-operation restriction has far-reaching practical and formal consequences, giving linearizable computations a different flavor from their serializable counterparts. An immediate practical consequence is that concurrency control mechanisms appropriate for serializability are typically inappropriate for linearizability because they introduce unnecessary overhead and place unnecessary restrictions on concurrency. For example, the queue implementation given below in Section 4 is much more efficient and much more concurrent than an analogous implementation using conventional serializability-oriented techniques such as two-phase locking or multi-version timestamping."

The comparison is somewhat unwarranted, and this whole thing is a bit confusing to me. When this paper was written, people already knew about strict serializability, which subsumes per-object linearizability. I find it weird that strict serializability came to be before a formal definition of linearizability. Isn't that backwards?


Swept under the rug

The model sweeps an important behavior under the rug: you don't always get a response. You may get a timeout, or the response message may be lost. In these cases you don't know whether the operation succeeded or not! Then what do you do? The paper concerns itself only with complete histories. In the presence of missing responses, things get much harder.

In practice, the differentiated-histories trick is used to resolve this problem. "We prove that reasoning about causal consistency w.r.t. the RWM abstraction becomes tractable under the natural assumption of data independence (i.e., read and write instructions is insensitive to the actual data values that are read or written)." The idea is that if you use a monotonic sequencer (or TrueTime) as the identifier of requests, you can simplify things and manage the complexity. Furthermore, operations that overwrite a previous value completely, like write/update operations in key-value stores (e.g., S3), make the linearizability reasoning even simpler, because you get a clean slate when you get a notification of a later-timestamped operation, and can collapse all the doubts about lesser-timestamped unacknowledged operations. Either they took effect, and it is now irrelevant because this recent update overwrote them, or they were not yet processed, and they will never be processed by the system because a higher-timestamped operation already has been. In either case, you have collapsed the uncertainty.
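Here is a toy sketch of that collapse; the failure model and all names are made up by me. The client stamps writes from a monotonic sequencer, the store applies last-writer-wins, and retrying a timed-out write with a fresh, higher timestamp makes the fate of every earlier attempt irrelevant.

import itertools
import random

class LWWRegister:
    # Server side: apply a write only if its timestamp is higher
    # (last-writer-wins), so late-arriving older writes are ignored.
    def __init__(self):
        self.ts, self.value = 0, None

    def write(self, ts, value):
        if ts > self.ts:
            self.ts, self.value = ts, value

def flaky_write(reg, ts, value):
    # Stand-in for a lossy network: the write may or may not be applied,
    # and even an applied write's acknowledgment may be lost.
    applied = random.random() < 0.7
    if applied:
        reg.write(ts, value)
    return applied and random.random() < 0.7    # ack received?

reg = LWWRegister()
clock = itertools.count(1)          # monotonic sequencer for timestamps

def certain_write(value):
    # Retry with fresh, higher timestamps until acknowledged. Every earlier
    # unacknowledged attempt is now harmless: it either took effect and was
    # overwritten, or it will be ignored for carrying a lower timestamp.
    while not flaky_write(reg, next(clock), value):
        pass

certain_write("v1")
certain_write("v2")
print(reg.value)    # always 'v2': doubts about the lost writes are collapsed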

