Fault Tolerance via Idempotence (paper summary)


This paper, by Ramalingam and Vaswani, proposes a generic, automatic way to use idempotence---the property that a system tolerates duplicate requests---to handle communication and process failures in a software system efficiently.

Using idempotence to deal with process and communication failures was also investigated in "Idempotence is not a medical condition", but that paper did not provide a generic solution for achieving idempotence.

Automating idempotence

To ensure idempotence automatically, the paper builds on the monad idea and designs and implements an idempotence monad.

The idea underlying the idempotence monad is simple: "Given a unique identifier associated with a computation, the monad adds logging and checking to each effectful step in the workflow to ensure idempotence". Armed with this idempotence monad, the paper shows that idempotence, when coupled with a simple retry mechanism, provides a solution to process and communication failures.

Consider a bank example, where a requestId parameter serves to distinguish different transfer requests and to identify duplicates. This service can be made idempotent and failure-free by (1) using this identifier to log the debit and credit operations, and (2) modifying the debit and credit steps to first check, using the log, whether they have already been performed. This strategy ensures that multiple (potentially partial and concurrent) invocations of transfer with the same identifier have the same effect as a single invocation.
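In Python-like code, the hand-made-idempotent transfer might look as follows (a hypothetical sketch, not the paper's C#/F# code; the in-memory log, balances dictionary, and function names are all illustrative):

```python
# Hypothetical sketch of the hand-made-idempotent bank transfer.
# A real service would keep the log in durable storage; a dict stands in here.

log = {}  # persistent log of completed steps, keyed by (request_id, step name)

def debit(balances, account, amount, request_id):
    key = (request_id, "debit")
    if key in log:                 # duplicate request: skip the effect
        return log[key]
    balances[account] -= amount    # the effectful step
    log[key] = balances[account]   # record that it ran, and its result
    return log[key]

def credit(balances, account, amount, request_id):
    key = (request_id, "credit")
    if key in log:
        return log[key]
    balances[account] += amount
    log[key] = balances[account]
    return log[key]

def transfer(balances, src, dst, amount, request_id):
    debit(balances, src, amount, request_id)
    credit(balances, dst, amount, request_id)
```

Retrying transfer with the same request_id, even after a partial failure between the debit and the credit, completes only the remaining steps without re-applying the ones already logged.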

But manually ensuring idempotence is tedious, error-prone, and makes the implementation less comprehensible. So the paper describes a monad-based library that realizes idempotence and failure-freedom in a generic way. "A type with a monad structure defines what it means to chain operations, or nest functions of that type together. This allows the programmer to build pipelines that process data in steps, in which each action is decorated with additional processing rules provided by the monad. As such, monads have been described as programmable semicolons." This is also reminiscent of the aspect-oriented programming approach.
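As a toy illustration of "programmable semicolons" (a Python sketch of my own, unrelated to the paper's actual library): a trivial monad whose bind decorates each chained step with a trace entry, the same way the idempotence monad decorates each step with logging and checking:

```python
# Toy "monad" whose bind decorates every chained step with a trace entry,
# analogous to how the idempotence monad decorates steps with log/check.

class Logged:
    def __init__(self, value, trace=()):
        self.value = value
        self.trace = tuple(trace)

    def bind(self, f):
        nxt = f(self.value)                       # run the next step
        return Logged(nxt.value, self.trace + nxt.trace)

def unit(value, msg):
    return Logged(value, (msg,))

r = unit(2, "start").bind(lambda x: unit(x + 3, "add")) \
                    .bind(lambda x: unit(x * 2, "double"))
# r.value is 10; r.trace records every step that ran
```

The chaining code reads like a plain sequence of steps, while the monad's bind transparently threads the extra bookkeeping between the "semicolons".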

The Idempotence Monad:

  • Associate every computation instance (that we wish to make idempotent) with a unique identifier.
  • Associate every step in an idempotent computation with a unique number.
  • Whenever an effectful step is executed, persistently record the fact that this step has executed and save the value produced by this step.
  • Every effectful step is modified to first check if this step has already been executed. If it has, then the previously saved value (for this step) is used instead of executing the step again.
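The four steps above can be sketched generically as follows (a minimal in-memory Python sketch, assuming the workflow's control flow is deterministic across retries so step numbers line up; the paper's actual monad is a C#/F# library):

```python
# Minimal sketch of the idempotence recipe: number each effectful step,
# persistently log its result, and replay the logged value on re-execution.

class IdempotentWorkflow:
    def __init__(self, request_id, store):
        self.request_id = request_id  # unique id of this computation instance
        self.store = store            # stands in for a persistent key-value log
        self.step = 0                 # unique step number within the computation

    def run(self, effectful_fn, *args):
        self.step += 1
        key = (self.request_id, self.step)
        if key in self.store:             # step already executed?
            return self.store[key]        # reuse the previously saved value
        result = effectful_fn(*args)      # execute the effect
        self.store[key] = result          # persistently record that it ran
        return result
```

A retry simply re-creates the workflow with the same request_id; since steps are renumbered deterministically, already-logged steps are skipped and only the unfinished suffix of the computation executes.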

Decentralized Idempotence

The implementation of the idempotence monad is designed to work with key-value data stores, and does not assume dedicated log storage that can be accessed atomically with each transaction. Instead, it uses the key-value datastore itself to simulate a distinct address space for logging.
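One way to simulate such a logging address space (an assumed key-prefixing scheme for illustration, not necessarily the paper's exact encoding) is to reserve a key prefix inside the same store:

```python
# Illustrative: carve a "logging address space" out of the application's own
# key-value store via a reserved key prefix (assumed scheme, for illustration).

def log_key(request_id, step):
    return f"__idemlog__/{request_id}/{step}"

def idempotent_put(store, request_id, step, key, value):
    lk = log_key(request_id, step)
    if lk in store:                # step already ran; skip the effect
        return store[lk]
    # In a real deployment, these two writes would be issued as one
    # transaction on the same store, so the effect and its log record
    # commit together -- no separate log storage or coordination needed.
    store[key] = value
    store[lk] = value
    return value
```

Because each log record lives in the same store as the data it guards, a single-store transaction covers both, and no cross-store coordination is required.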

This leads to a decentralized implementation of idempotence that does not require any centralized storage or any (distributed) coordination between different stores. Thus, this implementation of idempotence preserves the decentralized nature of the underlying computation.

Evaluation and critique

The idempotence monad has been implemented in C# and F# targeting Windows Azure, which provides a key-value store. The evaluation in the paper shows that the performance overheads of the idempotence monad over hand-coded implementations of generic idempotence are not significant.

But this is an incomplete evaluation: the hand-coded baseline is constrained to add logging and checking to the operations, just as the monad does. So the only overheads of the monad solution over the hand-coded one are that compiler-generated monad code "tend[s] to capture a lot more state than hand-coded implementations, and may also add unnecessary logging to transactions that are already idempotent". It would be interesting to repeat the evaluation with an unconstrained developer who can implement a custom fault-tolerance solution that leverages more of the application logic and implementation knowledge.

Future evaluations could also compare against alternative generic fault-tolerance approaches: e.g., replicated state machines (via Paxos, or primary/secondary replication), or CRDTs (for replicating a node, where applicable).

Actually, taking a step back to re-examine the problem, we can notice that leveraging TCP gets us most of the way: with TCP we won't lose messages in transit, and message loss would occur only because of process crashes. So we would only need a way to tolerate process crashes, say via replication, checkpointing, or a fast-restartable system.

Finally, the trend toward providing better plumbing in datacenters also alleviates communication/process failure problems. (For example, Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed components of their applications that perform different tasks, without losing messages or requiring each component to be always available.)

Comments

Anonymous said…
I find this idea quite useful as you don't need a centralised transaction coordinator. It kind of makes sense that TCP already does this at the packet level, but at the application-logic level you need some sort of fault tolerance, which results in you having to somewhat recreate the TCP protocol for application data "packets". Do you think this is better than other fault-tolerance protocols?
