Showing posts from January, 2016

Fool yourself

"The first principle is that you must not fool yourself --and you are the easiest person to fool."     ---Richard Feynman I agree with the second part. I am the easiest person to be fooled by myself, because I like to believe what is convenient, comfortable, less scary to me. For example, I am fooled to procrastinate, because I am scared of facing the task and not having a perfect complete product. Sometimes I am fooled to be overly optimistic, because I am scared of the alternative: failure and hassle. And sometimes I am fooled to be overly pessimistic, because I am scared of the long arduous fight to achieve something, so I say "No way, I won't be able to achieve it anyways".

However, I would like to propose a change to the first part of Feynman's quote. If it is so easy to fool yourself, you should exploit that: fool yourself in a beneficial manner to avoid fooling yourself in the usual, default, harmful manner.

For example, when you catch yourself pro…

Paper review: TensorFlow, Large-Scale Machine Learning on Heterogeneous Distributed Systems

The paper is available here.

TensorFlow is Google's new framework for implementing machine learning algorithms using dataflow graphs. Nodes/vertices in the graph represent operations (i.e., mathematical operations, machine learning functions), and the edges represent the tensors (i.e., multidimensional data arrays, vectors/matrices) communicated between the nodes. Special edges, called control dependencies, can also exist in the graph to denote that the source node must finish executing before the destination node starts executing. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.
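The execution model above can be illustrated with a toy sketch. This is not TensorFlow's actual API; the `Node` class, the `run` scheduler, and the node names are hypothetical stand-ins that just demonstrate the rule "a node fires once all its incoming tensors, and its control dependencies, are satisfied":

```python
# Hypothetical sketch of the dataflow model described above (not TensorFlow's
# real API): nodes fire once all incoming tensors are available; control-
# dependency edges carry no data but gate execution order.

class Node:
    def __init__(self, name, op, inputs=(), control_deps=()):
        self.name = name                    # node label
        self.op = op                        # operation: callable over inputs
        self.inputs = inputs                # data edges: producer node names
        self.control_deps = control_deps    # control edges: must finish first

def run(nodes, feeds):
    """Repeatedly fire any node whose inputs and control dependencies are
    satisfied -- a sequential stand-in for the asynchronous, parallel
    execution the paper describes."""
    done = dict(feeds)                      # node name -> produced tensor
    pending = {n.name: n for n in nodes}
    while pending:
        ready = [n for n in pending.values()
                 if all(i in done for i in n.inputs)
                 and all(c in done for c in n.control_deps)]
        if not ready:
            raise RuntimeError("graph has a cycle or a missing input")
        for n in ready:
            done[n.name] = n.op(*[done[i] for i in n.inputs])
            del pending[n.name]
    return done

# y = (a + b) * 2, with a control edge forcing "add" to run before "scale"
graph = [
    Node("add", lambda a, b: a + b, inputs=("a", "b")),
    Node("scale", lambda s: s * 2, inputs=("add",), control_deps=("add",)),
]
result = run(graph, feeds={"a": 3, "b": 4})
print(result["scale"])  # → 14
```

In the real system each ready node would be dispatched to its assigned device rather than executed in a loop, but the readiness condition is the same.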

It seems like the dataflow model is getting a lot of attention recently and is emerging as a useful abstraction for large-scale distributed systems programming. I reviewed the Naiad dataflow framework earlier. Adopting the dataflow model provides flexibility to TensorFlow, and as a result, TensorFlow f…

Book Review: "Zero to One", Peter Thiel, 2014

I liked this book a lot. It inspired me to write about "How to go for 10X". That blog post and the "Zero to One" book I mentioned there were covered better than I could by Todd Hoff at his High Scalability blog. I am including my brief and unstructured notes on the Zero to One book here just for the record.

The main theme in the book is: Don't do incremental business; invent a new transformational product/approach. Technology is 0-to-1, globalization is 1-to-n. Most people think the future of the world will be defined by globalization, but the book argues that technology matters more. The book says: Globalization (copying and incrementalism, as China has been doing) doesn't scale; it is unsustainable. That's a hard argument to make, but a softer version of it is: "technology creates more value than globalization".

A related theme is that you should aim to become a monopoly with your transformational product/technology. If you compete, everybo…

My new pomodoro workflow

Pomodoro is a timeboxing technique. You set a Pomodoro timer for 25 minutes to get a task done. Then you take a 5-minute break, after which you can start another Pomodoro. Since Pomodoro imposes a limit on your work time, it adds a gaming element to the task.
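The 25-on/5-off cycle above can be sketched as a tiny schedule generator. The function name and structure here are illustrative, not from any Pomodoro tool:

```python
# Toy generator for the Pomodoro cycle described above: 25-minute work
# blocks separated by 5-minute breaks (illustrative, not a real tool).

def pomodoro_schedule(pomodoros, work_min=25, break_min=5):
    """Return (label, start, end) tuples in minutes from the session start."""
    schedule, t = [], 0
    for i in range(pomodoros):
        schedule.append((f"work {i + 1}", t, t + work_min))
        t += work_min
        if i < pomodoros - 1:       # no break needed after the last Pomodoro
            schedule.append(("break", t, t + break_min))
            t += break_min
    return schedule

for label, start, end in pomodoro_schedule(2):
    print(f"{label}: {start}-{end} min")
# work 1: 0-25 min
# break: 25-30 min
# work 2: 30-55 min
```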

I have been using the Pomodoro technique for 3-4 years now and I had written about that before. Recently I changed my Pomodoro usage significantly. Now I use Pomodoro more deliberately to achieve intense mental workouts and to force myself to get more creative and obtain transformative results.

Deep work versus shallow work

The problem I want to attack with this new deliberate Pomodoro practice is the shallow work problem. The human brain is programmed to save energy and go easy on itself, so it prefers shallow tasks (such as watching TV, web browsing, and answering email) and tries to avoid the intense thinking sessions required for many creative tasks, such as writing, working on a proof, or designing an algorithm. As a result, we ac…
