Posts

Showing posts from 2015

What technological advancements will the next decade bring?

There has been amazing progress in technology in recent years. Just to name a few, I would mention deep learning, big data, ubiquitous tablets and smartphones, advances in medicine, Bitcoin, 3D printing, accurate voice recognition, Arduino, Raspberry Pi, the maker movement, drones, etc. I can't help but wonder what we will see next, in the coming decade. What discoveries will we be able to make? How will the world change? I am very curious and excited about the future. To extrapolate to 2025, indulge me as I first go back to the year 2005. Ten years ago, the iPhone had not yet been introduced (the first iPhone came out on June 29, 2007). Smartphones were rare. The Internet was not a household item. The cloud was not there. Tablets were not there. In fact, none of the technologies mentioned above were there. In ten years we have come a long way. Let me add an anecdote about how I witnessed this ride over the last 10 years. In 2005, I joined University at Buffalo as an

My Distributed Systems Seminar's reading list for Spring 2016

Below is the list of papers I plan to discuss in my distributed systems seminar in the Spring'16 semester. These are all very exciting papers, and I am looking forward to the Spring semester. If you have suggestions on other good/recent papers to cover, please let me know in the comments.

Distributed coordination
- No compromises: distributed transactions with consistency, availability, and performance, SOSP 15
- Implementing Linearizability at Large Scale and Low Latency, SOSP 15
- High-Performance ACID via Modular Concurrency Control, SOSP 15
- Existential Consistency: Measuring and Understanding Consistency at Facebook, SOSP 15
- Holistic Configuration Management at Facebook, SOSP 15
- Building Consistent Transactions with Inconsistent Replication, SOSP 15
- Bolt-on Causal Consistency, Sigmod 13
- The Design and Implementation of the Wave Transactional Filesystem

Big data
- Arabesque: A System for Distributed Graph Mining, SOSP 15
- Petuum: A New Platform for Distri

Paper summary: Detecting global predicates in distributed systems with clocks

This is a 2000 paper by Scott Stoller. The paper is about detecting global predicates in distributed systems. There has been a lot of previous work on predicate detection (e.g., Marzullo & Neiger WDAG 1991, Verissimo 1993), but those works considered vector clock (VC) timestamped events sorted via the happened-before (hb) relationship. This paper proposes a framework for predicate detection over events timestamped with approximately-synchronized (think NTP) physical-time (PT) clocks. This was a tough/deep paper to read, and a rewarding one as well. I think this paper should receive more interest from distributed systems developers, as it has applications to cloud computing monitoring services. As you can see in the Facebook stack and Google stack, monitoring services are an indispensable component of large-scale cloud computing systems. Motivation: Using PT for timestamping (and predicate detection) has several advantages over using VC. VC is O(N), whereas PT is a scalar. For c
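To make that size contrast concrete, here is a minimal sketch in Python (the node names, helper functions, and values are illustrative assumptions, not from the paper): a vector-clock timestamp carries one counter per node, so it grows with system size, while an NTP-style physical-time timestamp is a single scalar.

```python
# Minimal sketch: vector-clock timestamps are O(N), physical-time timestamps are scalars.
# Node names and values are hypothetical, for illustration only.
import time

N_NODES = 5  # number of processes in the system

# A vector clock keeps one logical counter per node.
vector_clock = {f"node{i}": 0 for i in range(N_NODES)}

def vc_tick(vc, node):
    """Advance the local component on a local or send event."""
    vc[node] += 1

def vc_merge(local_vc, received_vc, node):
    """On receive: take the component-wise max, then tick locally."""
    for n in local_vc:
        local_vc[n] = max(local_vc[n], received_vc[n])
    vc_tick(local_vc, node)

# A physical-time timestamp is a single scalar (e.g., from an NTP-synchronized clock).
pt_timestamp = time.time()

vc_tick(vector_clock, "node0")
print("VC:", vector_clock)   # N entries, grows with the number of nodes
print("PT:", pt_timestamp)   # one scalar, regardless of system size
```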

Energy Approach to Life

I had blogged about this before. Now I have turned the highlights into a short presentation, and I included some book recommendations in the slides as well. Here is the link to the presentation.

HPTS trip report (day 2)

This is a belated wrap-up of my HPTS trip report. Read the first part of the HPTS trip report here, if you haven't already. The first day of HPTS was a very busy, productive, and tiring day. Since my body clock was still on East Coast time, I was wide awake at 6am again. I went for a longer run this time. I ran for 5 miles along the beach on the aptly named Sunset Drive. Monterey Bay, Asilomar is an eerily beautiful piece of earth. After breakfast, HPTS started with the data analytics session. That night it was announced that Chris Re (from Stanford) had won a MacArthur genius award. Chris was scheduled to talk in this first session of HPTS day 2. It was a very engaging talk. Chris talked about the DeepDive macroscopic data system they have been working on. DeepDive is a system that reads texts and research papers and collects macroscopic knowledge about a topic with a quality that exceeds paid human annotators and volunteers. This was surprising news to me; another victory for

HPTS trip report (days 0 and 1)

Last week, from Sunday to Tuesday night, I attended the 16th International Workshop on High Performance Transaction Systems (HPTS). HPTS is an unconventional workshop. "Every two years, HPTS brings together a lively and opinionated group of participants to discuss and debate the pressing topics that affect today's systems and their design and implementation, especially where scalability is concerned. The workshop includes position paper presentations, panels, moderated discussions, and significant time for casual interaction. The only publications are slide decks by presenters who choose to post them." HPTS is by invitation only and keeps attendance under 100 participants. The workshop brings together experts from both industry and academia so they mix and interact. Looking at the program committee, you can see names of entrepreneurs and venture capitalists (David Cheriton, Sequoia Capital), large web companies (Google, Facebook, Salesforce, Cloudera), and academics. HPTS is le

Consensus in the wild

The consensus problem has been studied extensively in the theory of distributed systems literature. Consensus is a fundamental problem in distributed systems. It requires that n nodes eventually agree on the same decision. The Consistency part of the specification says that no two nodes decide differently. Termination states that all nodes eventually decide. And NonTriviality says that the decision cannot be static (you need to decide a value among the inputs/proposals to the system; you can't just keep deciding 0 and discard the inputs/proposals). This is not a hard problem if you have reliable and bounded-delay channels and processes, but it becomes impossible in the absence of either. And with even a temporary violation of the reliability and timing/synchronicity assumptions, a consensus system can easily spawn multiple corner cases where consistency or termination is violated. E.g., 2-phase commit is blocking (this violates termination), and 3-phase commit is unproven and has many corner cases involvin
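To pin down the three properties, here is a minimal sketch in Python (with a made-up single-run trace, not from any particular protocol) that checks them after the fact: Consistency as "no two decided values differ", Termination as "every node has decided", and NonTriviality as "every decided value was actually proposed".

```python
# Sketch: checking the consensus properties over a finished (hypothetical) run.
# proposals[i] is the value node i proposed; decisions[i] is what it decided (or None).
proposals = {"n1": 5, "n2": 7, "n3": 5}
decisions = {"n1": 5, "n2": 5, "n3": 5}

decided_values = [v for v in decisions.values() if v is not None]

# Consistency: no two nodes decide differently.
consistency = len(set(decided_values)) <= 1

# Termination: every node has decided by the end of the run.
termination = all(v is not None for v in decisions.values())

# NonTriviality: any decided value must be one of the proposed values.
non_triviality = all(v in proposals.values() for v in decided_values)

print(consistency, termination, non_triviality)  # -> True True True
```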

Analysis of Bounds on Hybrid Vector Clocks

This work is in collaboration with Sorrachai Yingchareonthawornchai and Sandeep Kulkarni at Michigan State University and is currently under submission. The practice of distributed systems employs loosely synchronized clocks, mostly using NTP. Unfortunately, perfect synchronization is unachievable due to messaging with uncertain latency, clock skew, and failures. These sync errors lead to anomalies. For example, a send event at Branch1 may be assigned a timestamp greater than the corresponding receive event at Branch2, because Branch1's clock is slightly ahead of Branch2's clock. This leads to inconsistent state snapshots because, at time T=12:00, a money transfer is recorded as received at Branch2, whereas it is not recorded as sent at Branch1. The theory of distributed systems shrugs and doesn't even try. Theory abstracts away from the physical clock and uses logical clocks for ordering events. These are basically counters, as in Lamport's clocks and vector cloc
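Here is a minimal sketch in Python of the anomaly described above (the clock offsets and times are made up for illustration): because Branch1's clock runs slightly ahead, the send event gets a later timestamp than the matching receive event, so a snapshot cut at T=12:00 includes the receive but not the send.

```python
# Illustrative sketch of the clock-skew anomaly (made-up offsets, not from the paper).
from datetime import datetime, timedelta

true_send_time = datetime(2015, 1, 1, 11, 59, 59)      # real time of the send at Branch1
network_delay  = timedelta(milliseconds=500)
true_recv_time = true_send_time + network_delay         # real time of the receive at Branch2

branch1_skew = timedelta(seconds=2)   # Branch1's clock runs 2 seconds ahead
branch2_skew = timedelta(seconds=0)   # Branch2's clock is accurate

send_ts = true_send_time + branch1_skew   # timestamp recorded at Branch1
recv_ts = true_recv_time + branch2_skew   # timestamp recorded at Branch2

assert send_ts > recv_ts  # anomaly: the send appears to happen *after* the receive

snapshot_cut = datetime(2015, 1, 1, 12, 0, 0)
sent_in_snapshot     = send_ts <= snapshot_cut   # False: not recorded as sent at Branch1
received_in_snapshot = recv_ts <= snapshot_cut   # True: recorded as received at Branch2
print(sent_in_snapshot, received_in_snapshot)    # -> False True (inconsistent snapshot)
```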

Serving at NSF panels and what it teaches about how to pitch the perfect proposal

NSF is one of the largest funding sources for academic research. It accounts for about one-fourth of federal support to academic institutions for basic research. NSF accepts thousands of proposals from researchers, and organizes peer-review panels to decide which ones to fund. Serving at NSF panels is fun. They are also very useful for understanding the proposal review dynamics. NSF funding rates are around 10% for computer science and engineering research proposals, so understanding the dynamics of the panel is useful when applying to NSF to secure some funding. How do you get invited as a panelist? You get invited to serve at an NSF panel by the program director of that panel. (Program directors are researchers generally recruited from academia to serve at NSF for a couple of years to run panels and help make funding decisions.) If you have been around and have successfully secured NSF funding, you will get panel invitations. They will have your name and will contact you. But, if you are

paper summary: One Trillion Edges, Graph Processing at Facebook-Scale

This paper was recently presented at VLDB15. A. Ching, S. Edunov, M. Kabiljo, D. Logothetis, S. Muthukrishnan, "One Trillion Edges: Graph Processing at Facebook-Scale." Proceedings of the VLDB Endowment 8.12 (2015). This paper is about graph processing. Graphs provide a general, flexible abstraction to model relations between entities, and find a lot of demand in the field of big data analysis (e.g., social networks, web-page linking, coauthorship relations, etc.). You may think the graphs in Table 1 are big, but Frank's laptop begs to differ. These graphs also fail to impress Facebook. At Facebook, they work with graphs of a trillion edges, 3 orders of magnitude larger than these. How would Frank's laptop fare on this? @franks_laptop may step up to answer that question soon. This paper presents how Facebook deals with these huge graphs of one trillion edges. Apache Giraph and Facebook: In order to analyze social network data more efficiently, Facebook considered som

New directions for distributed systems research in cloud computing

This post is a continuation of my earlier post on "a distributed systems research agenda for cloud computing". Here are some directions I think are useful for distributed systems research in cloud computing. Data-driven/data-aware algorithms: Please check the Facebook and Google software architecture diagrams in these two links: Facebook Software Stack, Google Software Stack. You will notice that the architecture is all about data: almost all components are about either data processing or data storage. This trend may indicate that distributed algorithms should adapt to the data they operate on to improve performance. So, we may see the adoption of machine learning as input/feedback to the algorithms, and the algorithms becoming data-driven and data-aware. (For example, this could be a good way to attack the tail-latency problem discussed here.) Similarly, driven by the demand from large-scale cloud computing services, we may see power-man

How to go for 10X

I think the 10X term originated from this book. (Correct me if I am wrong; I didn't check this.) It seems like Larry and Sergey are fans of this concept (and so should you be!). Actually, reading this January 2013 piece, you can sense that the Alphabet transition was in the works by then. 10X doesn't just mean go fast, get quick results, and get 10X more done in the same time. If you think about it, that is actually a pretty incremental mode of operation. And that is how you incur technical debt. It means it was just a matter of time for others to do the same thing, and probably much better and more completely. Trading off quality for time is often not a good deal (at least in the academic research domain). 10X means transformative rather than incremental improvement. Peter Thiel explains this well in his book Zero to One, 2014. The main theme in the book is: don't do incremental business, invent a new transformational product/approach. Technology is 0-1, globalization is

A distributed systems research agenda for cloud computing

Distributed systems (a.k.a. distributed algorithms) is an old field, almost 40 years old. It gave us impossibility proofs on the theory side, and also algorithms like Paxos, logical/vector clocks, 2/3-phase commit, leader election, dining philosophers, graph coloring, and spanning tree construction, which are widely adopted in practice. Cloud computing, in contrast, is a relatively new field. It provides new opportunities as well as new challenges for the distributed systems/algorithms area. Below I briefly discuss some of these opportunities and challenges. Opportunities: Cloud computing provides abundance. Nodes are replaceable, even hot-swappable. You can dedicate several nodes to running customized support services, such as monitoring, logging, and storage/recovery services. These opportunities are likely to have an impact on how fault-tolerance is considered in distributed systems/algorithms work. Programmatic interfaces and service-oriented architectures are also hallmarks of cloud compu

Book Review: "Rework", 2010, Jason Fried & David Heinemeier Hansson

This is a book from the founders of 37signals. The book is full of simple commonsense advice for business and work. Yet this advice is also very fresh. How come? Since we have been trying/inventing/pushing complex, complicated rules & approaches for business and work in lieu of simple straightforward approaches, the simple commonsense advice in the book does come across as surprisingly fresh. Just to give one example, on culture, the book says: "You can't build it, it occurs. Whatever you value and do (actions speak louder than words), it will become the culture of your company/workers over time." Commonsense? Yes. But only after you take a step back and think do you say "Huh. That is obvious." Another after-the-fact obvious piece of advice: try to underdo your competition; just do fewer things, but better. You should stand for something. A strong stand is how you attract superfans. Covering everything means you don't have focus. Provide the main functionalit

How to run effective paper reading groups

Every year, I offer a distributed systems reading group seminar, where we discuss recent interesting research papers. (Here is the list of papers we covered this Spring.) Reading group seminars are a lot of fun when everything clicks. But they can easily turn into soul-draining boring meetings when a couple of things go wrong. Here are some common bad meeting patterns: (1) the presenter goes on and on with a dry presentation, (2) without a common background, the participants bombard the presenter with a lot of questions just to get the context of the work, and a lot of time is wasted just getting started on the paper, (3) the audience drifts away (some fall into their laptop screens, some start to fiddle with their phones), and (4) in the discussion phase an awkward silence sets in and crickets chirp. I have been trying to tinker with the format of my reading group meetings to avoid these problems and improve the odds that everything clicks together. And over time I have been learn
