Thursday, January 14, 2016

Fool yourself

"The first principle is that you must not fool yourself --and you are the easiest person to fool."     ---Richard Feynman
I agree with the second part. I am the easiest person for me to fool, because I like to believe whatever is convenient, comfortable, and less scary to me. For example, I fool myself into procrastinating, because I am scared of facing the task and not having a perfect, complete product. Sometimes I fool myself into being overly optimistic, because I am scared of the alternative: failure and hassle. And sometimes I fool myself into being overly pessimistic, because I am scared of the long, arduous fight to achieve something, so I say "No way, I won't be able to achieve it anyway."

However, I would like to propose a change to the first part of Feynman's quote. If it is so easy to fool yourself, you should exploit it: fool yourself in a beneficial manner to avoid fooling yourself in the usual/default harmful manner.

For example, when you catch yourself procrastinating, you should fool yourself into getting started. When you are frozen by the prospect of writing an article or blog post, you can say: "This is the zeroth draft; I can throw it away anyway. This is just a brain dump, let's see what it will look like."

Insisting on a one-shot completion, waiting for inspiration to hit you so that you get it perfect and finished in one sitting, raises the bar too high. Instead, you should think of any project as iterations of attempts. You should forget about shipping, and just cajole yourself into making progress. And when you get something good enough, then you can convince yourself to overcome your fear and ship it.

Similarly, when you catch yourself being overly optimistic, you can fool yourself into considering alternatives: "Just for fun, let me take 15 minutes to write about what alternatives I can pursue if things go wrong with this. Let's brainstorm hypothetical failure scenarios."

Or, when you catch yourself being overly pessimistic, you can fool yourself into being hopeful: "Why not just give this a try? Let's pretend this will work out, and not worry if it doesn't."


And how do you catch and intercept yourself when you are fooling yourself in a harmful manner?

By always being on the watch. (Step 1, raise your gaze from your smartphone screen. Step 2, observe yourself and reflect.) You should always suspect that you are fooling yourself: which you are, one way or another. You can fool yourself into engaging in unproductive, harmful behavior, or fool yourself into engaging in productive behavior. You get to decide what stories you tell yourself.

I will finish with this other quote I like a lot:
"Give up on yourself. Begin taking action now, while being neurotic or imperfect, or a procrastinator or unhealthy or lazy or any other label by which you inaccurately describe yourself. Go ahead and be the best imperfect person you can be and get started on those things you want to accomplish before you die." -- Shoma Morita, MD

Monday, January 11, 2016

Paper review: TensorFlow, Large-Scale Machine Learning on Heterogeneous Distributed Systems

The paper is available here.

TensorFlow is Google's new framework for implementing machine learning algorithms using dataflow graphs. Nodes/vertices in the graph represent operations (i.e., mathematical operations, machine learning functions), and the edges represent the tensors (i.e., multidimensional data arrays, vectors/matrices) communicated between the nodes. Special edges, called control dependencies, can also exist in the graph to denote that the source node must finish executing before the destination node starts executing. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.
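To make the nodes/edges terminology concrete, here is a minimal sketch using the TF 1.x-style Python graph API (the later open-sourced interface); the op names and the toy matmul computation are mine, chosen only for illustration.

import numpy as np
import tensorflow as tf  # assumes the TF 1.x graph-mode API

g = tf.Graph()
with g.as_default():
    # Nodes are operations; the edges between them carry tensors.
    x = tf.placeholder(tf.float32, shape=[2, 2], name="x")
    w = tf.constant([[1.0, 0.0], [0.0, 1.0]], name="w")
    y = tf.matmul(x, w, name="matmul")  # a node with two incoming tensor edges
    counter = tf.Variable(0, name="step")
    bump = tf.assign_add(counter, 1, name="bump")

    # Control-dependency edge: z may not execute before bump finishes,
    # even though no tensor flows from bump to z.
    with tf.control_dependencies([bump]):
        z = tf.identity(y, name="z")

    init = tf.global_variables_initializer()

with tf.Session(graph=g) as sess:
    sess.run(init)
    print(sess.run(z, feed_dict={x: np.ones((2, 2), np.float32)}))
    print(sess.run(counter))  # -> 1, because the control edge forced bump to run

Each sess.run call triggers only the nodes needed to produce the fetched tensors, and a node fires once the tensors on its incoming edges (and its control dependencies) are satisfied.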

It seems like the dataflow model is getting a lot of attention recently and is emerging as a useful abstraction for large-scale distributed systems programming. I had reviewed the Naiad dataflow framework earlier. Adopting the dataflow model gives TensorFlow flexibility, and as a result, the TensorFlow framework can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models.

TensorFlow's heterogeneous device support

The paper makes a big deal about TensorFlow's heterogeneous device support; it is even right there in the paper title. The paper says: "A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards."

Wait, what? Why does TensorFlow need to run on wimpy phones?!

The paper says the point is just portability: "Having a single system that can span such a broad range of platforms significantly simplifies the real-world use of machine learning system, as we have found that having separate systems for large-scale training and small-scale deployment leads to significant maintenance burdens and leaky abstractions. TensorFlow computations are expressed as stateful dataflow graphs."

I understand this; yes, portability is important for development. But I don't buy this as the explanation. Again, why does TensorFlow, such a powerhouse framework, need to be shoehorned into running on a single wimpy phone?!

I think Google has designed and developed TensorFlow as a Maui-style integrated code-offloading framework for machine learning. What is Maui, you ask? (Damn, I don't have a Maui summary on my blog?)

Maui is a system for offloading smartphone code execution onto backend servers at method granularity. The system relies on the ability of a managed code environment (.NET CLR) to run on different platforms. By introducing this automatic offloading framework, Maui enables applications that exceed memory/computation limits to run on smartphones in a battery- and bandwidth-efficient manner.

TensorFlow provides cloud backend support for the private/device-level machine learning going on in your smartphone. It doesn't make sense for an entire power-hungry TensorFlow program to run on your wimpy smartphone. Your smartphone will be running only certain TensorFlow nodes and modules; the rest of the TensorFlow graph will be running on the Google cloud backend. Such a setup is also great for preserving the privacy of your phone while still enabling machine-learned insights on your Android.
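To sketch what such a split could look like in code: the open-sourced API lets you pin individual nodes to devices with tf.device. The job/device names and the toy two-layer computation below are hypothetical, purely to illustrate the phone-side versus cloud-side placement I am speculating about; actually executing a remotely placed subgraph requires a distributed TensorFlow runtime, not the single-device reference implementation.

import tensorflow as tf  # TF 1.x graph API; device and job names are hypothetical

g = tf.Graph()
with g.as_default():
    # Lightweight feature-extraction nodes pinned to the local (on-phone) CPU:
    with tf.device("/cpu:0"):
        features = tf.placeholder(tf.float32, shape=[None, 128], name="features")
        w_local = tf.Variable(tf.random_normal([128, 64]), name="w_local")
        local_out = tf.matmul(features, w_local, name="local_projection")

    # Heavy nodes pinned to a (hypothetical) cloud worker's GPU:
    with tf.device("/job:cloud_worker/task:0/device:GPU:0"):
        w_big = tf.Variable(tf.random_normal([64, 1024]), name="w_big")
        cloud_out = tf.nn.relu(tf.matmul(local_out, w_big), name="cloud_features")

# Only graph construction is shown; the tensor crossing the two device scopes
# (local_out here) is the edge that would travel between phone and cloud.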

I will talk about applications of this, but first let me mention another development about TensorFlow that supports my guess.

Google open-sourced the TensorFlow API in Nov 2015

Google open-sourced the TensorFlow API along with a limited reference implementation (which runs on a single device only) under the Apache 2.0 license in November 2015. This implementation is available at www.tensorflow.org, and has attracted a lot of attention.

Why did Google opensource this project relatively early rather than keeping it proprietary for longer? This is their answer: "We believe that machine learning is a key ingredient to the innovative products and technologies of the future. Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products."

This supports my guess. TensorFlow's emphasis on heterogeneity is not just about portability. Google is thinking of TensorFlow as an ecosystem. They want developers to adopt TensorFlow, so TensorFlow is used for developing machine learning modules on Android phones and tablets. And then Google will support/enrich (and find ways to benefit from) these modules by providing backends that run TensorFlow. This is a nice strategy for Google, a machine learning company, to percolate into machine learning in the Internet of Things domain in general, and the mobile apps market in particular. Google can become the monopoly Deep Learning As A Service (DAAS) provider, leveraging the TensorFlow platform.

How can Google benefit from such integration? Take a look at this list of applications: "TensorFlow has been used in Google for deploying many machine learning systems into production: including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery."

With a mobile-integrated TensorFlow machine-learning system, Google can provide a better personal assistant on your smartphone. Watch out, Siri: better speech recognition, calendar/activity integration, face recognition, and computer vision are coming. Robotics applications can enable Google to penetrate the self-driving car OS and drone OS markets. And after that can come more transformative, globe-spanning physical-world sensing & collaboration applications.

With this I rest my case. (I got carried away; I just intended to do a paper review.)

Related links

In the next decade we will see advances in machine learning coupled with advances in Internet Of Things.

In this talk, Jeff Dean gives a very nice motivation and introduction for Tensorflow.

Here is an annotated summary of the TensorFlow paper.

This post explains why TensorFlow framework is good news for deep learning.

Monday, January 4, 2016

Book Review: "Zero to One", Peter Thiel, 2014

I liked this book a lot. It inspired me to write about "How to go for 10X". That blog post and the "Zero to One" book I mentioned there were covered better than I could by Todd Hoff at his High Scalability blog. I am including my brief and unstructured notes on the Zero to One book here just for the record.

The main theme in the book is: don't do incremental business; invent a new transformational product/approach. Technology is 0-to-1, globalization is 1-to-n. Most people think the future of the world will be defined by globalization, but the book argues that technology matters more. The book says: globalization (copying and incrementalism as China has been doing) doesn't scale; it is unsustainable. That's a hard argument to make, but a softer version of that argument is: "technology creates more value than globalization".

A related theme is that you should aim to become a monopoly with your transformational product/technology. If you compete, everybody loses: competitive markets destroy profits. (That is why a lot of restaurants fail.) A smarter approach is to start small and monopolize your area. Of course, over time monopolies also fade and become outdated, so you should strive to reinvent your business.

This book also has a lot of interesting counterintuitive ideas. When the book told me about these items, I thought they made sense:

  • make incremental advances
  • stay lean and flexible
  • improve on the competition
  • focus on product not sales

NO! Peter Thiel says that these lessons have become dogma in the startup world, and yet the opposite principles are probably more correct.

  • it is better to risk boldness than triviality
  • a bad plan is better than no plan
  • competitive markets destroy profits
  • sales matter just as much as product


It is easy to fool yourself into thinking you have an innovative thing. If you are defining your business in terms of intersections of existing markets, it may not be that innovative, and maybe it shouldn't exist in the first place. It is hard to find something truly innovative/transformative, and you will know it when you find it. It will open a new market and will monopolize that market. If you are a monopoly, you try to downplay it and define yourself as a union of many markets. Google is a monopoly in search, so it lies to underplay this by casting itself as an IT company.

Friday, January 1, 2016

My new pomodoro workflow

Pomodoro is a timeboxing technique. You set a Pomodoro timer for 25 minutes to get a task done. Then you take a break for 5 minutes, after which you can do another Pomodoro. Since Pomodoro imposes a limit on your work time, this adds a gaming element to the task.

I have been using the Pomodoro technique for 3-4 years now, and I had written about it before. Recently I changed my Pomodoro usage significantly. Now I use Pomodoro more deliberately to achieve intense mental workouts and to force myself to get more creative and obtain transformative results.

Deep work versus Shallow work

The problem I want to attack with this new deliberate Pomodoro practice is the shallow work problem. The human brain is programmed to save energy and go easy on itself, so it prefers shallow tasks (such as watching TV, web browsing, answering emails) and tries to avoid the intense thinking sessions required for many creative tasks, such as writing, thinking through a proof, or designing an algorithm. As a result, we accumulate a lot of bad studying habits: we seem to be working, but we take it slow, get distracted by other thoughts, and break our concentration. In other words, unless we are deliberate about it, it is easy to slip into a superficial shallow work mode rather than an effective deep work mode.

(If you would like to learn more about deep work versus shallow work, and improve your study habits to achieve deep work, I recommend my friend/colleague Cal Newport's new book: Deep Work.)

By using Pomodoro in a more deliberate/mindful way, I aim to prevent shallow work and achieve better focus and intense concentration. Why is intense concentration so important? I had made this observation before: intense concentration sessions followed by a rest period (a meal, walking, playing with the kids) are helpful for cultivating creative ideas. The more intense the study session, the better the chance that you will have an epiphany/insight in your resting time. (A Coursera course titled "Learning How To Learn" also mentions this finding.)

Intense concentration builds tension about the problem in your brain. So in your resting time, your brain spawns several background processes that try to resolve this tension, and, voila, you get epiphanies about solutions. In order to preserve this tension and keep your brain obsessed with the problem, it is also helpful to focus on one problem/paper on any given day. My postdoc advisor Nancy Lynch would work on only one paper in any given week. "The way she worked with students was that she would dedicate herself solely to one student/paper for the duration of an entire week. That week, she would avoid thinking about or listening to other works/students. This is because she wanted to immerse herself in the paper she was working on, keep every parameter of it in her mind, and grok it."

My new Pomodoro cycle

I now have a three-phase Pomodoro cycle: 4 minutes of prePomodoro, 22 minutes of Pomodoro, and 4 minutes of postPomodoro. In the prePomodoro, I plan what I will do, how I can best go about it, and how I can take an alternative, more productive route. In other words, I go meta about the task. By raising several questions, I warm up my brain into an active attention state. In the Pomodoro, I do my best to get as much done as I can in 22 minutes with intense concentration. In the postPomodoro, I first do a celebration routine; I stretch and walk around. This helps associate positive feelings of accomplishment with a hard, focused studying session. I then review my performance and critique myself. What did I accomplish? What could I have done better? I write a tweet about this on a protected Twitter account, @YazarKata, that only I follow. This way, when reading my own Twitter feed, @muratdemirbas, I can see/review my Pomodoro tweets as well. The postPomodoro is extensible: with a key press, I can always add another 4 minutes to work on completing the task in case it needs a little bit more work.

I also changed my Pomodoro software. I used to have a timer in the menubar, but I replaced it with an Emacs script. I figured it makes more sense to have my Pomodoro workflow incorporated into my Emacs setup, because I live most of my productive life in Emacs and use org-mode to organize my tasks. I adopted the pomodoro.el script, and modified it to use the Mac OS X "say" command to announce special motivational messages at the beginning and end of my pomodoros. I am not disabling wifi during a Pomodoro anymore. My pomodoros are already very intense, so I don't have any urges or distractions to browse the web.
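For anyone who doesn't live in Emacs, here is a rough stand-alone Python sketch of the same prePomodoro/Pomodoro/postPomodoro cycle, using the macOS "say" command for the spoken cues. This is not my pomodoro.el setup; the phase lengths and messages just mirror what I described above.

#!/usr/bin/env python3
# A rough sketch of the 4 + 22 + 4 minute cycle described above, with spoken
# cues via the macOS "say" command. (My actual setup is the modified
# pomodoro.el inside Emacs; this is only an illustration.)
import subprocess
import time

def announce(message):
    subprocess.call(["say", message])  # macOS text-to-speech

def phase(name, minutes, start_message):
    announce(start_message)
    print("%s: %d minutes" % (name, minutes))
    time.sleep(minutes * 60)

def pomodoro_cycle():
    phase("prePomodoro", 4, "Go meta. What will you do, and how can you do it better?")
    phase("Pomodoro", 22, "Start. Intense concentration for twenty two minutes.")
    phase("postPomodoro", 4, "Stop. Stretch, celebrate, then review and critique.")
    # Extensible postPomodoro: a key press buys another 4 minutes on the task.
    while input("Type e then Enter to extend 4 minutes, or just Enter to finish: ") == "e":
        phase("extension", 4, "Four more minutes on the task.")

if __name__ == "__main__":
    pomodoro_cycle()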

Maybe in a couple of years' time, I will have another update to report on this.

Related links: 

How I read a research paper
How to go for 10X
Research is a capricious mistress