Thursday, April 27, 2017

DARPA: Delivering Disruption

What do DARPA and Facebook have in common?
Move fast and break things!

A couple of weeks ago Stefanie Tompkins, DARPA's acting deputy director, visited the University at Buffalo to give a talk and meet researchers. She talked about DARPA and its funding opportunities. I had the mental image of DARPA as a slow-moving, hard-to-maneuver aircraft carrier, but her talk painted a very different picture and partially changed my perception of DARPA.

DARPA, then ARPA (Advanced Research Projects Agency), was created in 1958 as a response to Sputnik 1. DARPA's mission is to prevent strategic surprise, to avoid another Sputnik moment. And as you already know, the best way to predict the future is to invent it. So DARPA is perpetually pushing the envelope to invent the next strategic surprise and disruptive technology itself.

DARPA has a large number of successes under its belt. The Internet has its roots in ARPANET. GPS grew out of basic research on atomic clocks funded by DARPA. DARPA is also credited with several inventions in stealth, radar arrays, UAVs, IR night vision, microelectronics, and materials science. DARPA funded several high-impact grand challenges on autonomous vehicles starting back in 2004, which helped kick-start today's self-driving car technologies. DARPA also funded other grand challenges, including the Network Challenge (2009) and the Cyber Grand Challenge (2016).

To drive disruption and off-scale impact, DARPA recruits program managers from universities (20% of PMs) and industry. The program managers rotate quickly, serving 3-5 years. Within that time, they use the budget allocated to them to create and fund DARPA programs on high-impact research and development. DARPA programs may range from $10 million to $400 million over 2 to 5 years.

DARPA has 6 technical offices.

The Heilmeier questions (the "Heilmeier Catechism") have a significant place in DARPA culture. Even the program managers need to answer these questions when creating new programs.

Another part of DARPA culture is the infamous quadchart. "Quadcharts which cram in figures, assertions, and required logos/branding/acronyms are a proud and sacred tradition" (RT @tedherman).

While Stefanie's presentation managed to convince me that DARPA is an agile institution, the quadcharts make me a skeptic :-)

Monday, April 24, 2017

Book Review: The Story of Your Life (by Ted Chiang)

"Story of Your Life" is the science-fiction story on which the film Arrival was based. It was a very good read. Back in November, before the movie was released, I read it in less than 2 hours, in one breath.

[Spoilers ahead.]

The first couple of pages of the book had some weird/mixed usage of past and future tenses. Right on that first page, you get the idea that something is not right and this is going to be an interesting read. And as you keep reading, the story drops more clues on you, and you feel both smart and awed when you first piece together that Gary was indeed the first husband of the linguist (Louise), and that the daughter has not yet died in the climbing accident.

Ted Chiang gives a lot of credit to the readers' intelligence. I liked that a lot. I also liked that I had to google and learn several things while reading the story. I googled to learn about "Fermat's principle", "teleology", "ideograms", and some related linguistic terms.

I was pretty shaken after finishing the story. It contributes to the free will and fate debate in philosophy and theology from a science-fiction perspective. (Time is an illusion, and with an altered perception, you can experience time along an orthogonal axis, and free will becomes irrelevant/pointless.) The story is at the same time very emotional, because the parenthood thread is very well woven into it.

Ted Chiang writes a very tight story. No, actually, he calculates, computes, and weaves the story. Impressed by this first story, I read a couple of other stories by him. I also liked "The Merchant and the Alchemist's Gate", which was written like a Sufi story, again exploring the concepts of fate and free will. Ted seems to be obsessed with these concepts, and must have thought very deeply about them. This story also made subtle and thoughtful "what if" cases about these concepts, and it too had a strong technical side interwoven with a moving emotional side.

Wikipedia says Ted Chiang has been working as a technical writer in the software industry. I would be interested in reading the manuals he writes.

Related links:
Hacker News Discussion 
How I wrote Arrival 
Wolfram's blog on the symbolic language created for the movie 
Timequake by Kurt Vonnegut was also an excellent book playing with the free will and fate ideas.

Friday, April 21, 2017

Paper summary: Federated Learning

On Thursday, April 6, Google announced Federated Learning. The announcement didn't make quite a splash, but I think this is potentially transformative. Instead of uploading all the data from the smartphones to the cloud to train the model in the datacenter, federated learning enables in-situ training on the smartphones themselves. The datacenter is still involved, but only to aggregate the smartphone-updated local models into the new/improved global model.

This is a win-win-win situation. As a smartphone user, your privacy is preserved since your data remains on your device, but you still get the benefits of machine learning on your smartphone. Google gets what it needs: first, it perpetually learns from cumulative user experience and improves its software/applications. Google collects insights without collecting data (and some of these insights may still be transferable to advertising income). Second, Google also outsources the training to the users' smartphones: it makes the smartphones work on updating the model, rather than using servers in its datacenters. This may seem like penny-pinching, but if you consider the sheer number of Android smartphones out there (more than 1.5 billion by the end of 2015), the computation savings are huge. Notice that Google wins twice, while you win only once. :-)

After skimming the Google announcement, I was excited, because I had predicted this in January 2016. When I read the TensorFlow whitepaper, I was struck by its peculiar emphasis on heterogeneous device support, especially for smartphones. I predicted that Google was up to something more than just inference/serving-layer support on smartphones. I got the mechanism wrong, since I didn't have machine learning experience then.

The Google announcement links to some papers, and I read the most relevant paper carefully. Below is my summary of the paper: Communication-Efficient Learning of Deep Networks from Decentralized Data.

Applications that are an ideal fit for federated learning

The paper pitches smartphone applications as the ideal domain for federated learning.
For smartphones, upload transfer is the bottleneck. Instead of uploading the data to the cloud for training, let the smartphone train and update the model and transmit that back. This makes sense when the model update is smaller than the data. There is another Google paper that provides tricks for further optimizing/compressing the size of the learned model on the smartphone before transmitting it back to the cloud. Since the upload rate is about 1/3rd of the download rate, such techniques are beneficial.

A second big benefit in the smartphone applications domain is preserving the private data of the smartphone user. Finally, in this domain the labels on the data are available or inferable from user interaction (albeit in an application-dependent way).

Concrete examples of these applications are "image classification: predicting which images are most likely to be viewed and shared", and "language modeling: improving voice recognition and text entry on smartphone keyboards." Google is already using federated learning in the GBoard keyboard on Android to improve text entry. On-device training uses a miniature version of TensorFlow.

Related work

There has been work on iteratively averaging locally trained models, but inside the datacenter, not at the edge. See: Sixin Zhang, Anna E. Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In NIPS, 2015; and also: Ryan McDonald, Keith Hall, and Gideon Mann. Distributed training strategies for the structured perceptron. In NAACL HLT, 2010.

There has been work motivated from the privacy perspective, but with limited empirical results. And there has been work in the convex setting, on the distributed consensus algorithm (not the distributed systems version of the problem but the machine learning version).

The contributions of the paper

The paper introduces the FedAvg algorithm for federated learning. The algorithm is not an intellectually challenging innovation, as it prescribes a small variation to the traditional SGD training. So the paper is careful not to claim too much credit for the algorithm. Instead the paper distributes its contributions to items 1 and 3 below, and downplays 2 in comparison: "1) the identification of the problem of training on decentralized data from mobile devices as an important research direction; 2) the selection of a straightforward and practical algorithm that can be applied to this setting; and 3) an extensive empirical evaluation of the proposed approach."

The algorithm

The algorithm uses a synchronous update scheme that proceeds in rounds of communication. There is a fixed set of K clients, each with a fixed local dataset. At the beginning of each round, a random fraction C of workers is selected, and the server sends the current global algorithm state to each of these workers (e.g., the current model parameters). Only a fraction of workers are selected for efficiency, as the experiments show diminishing returns for adding more workers beyond a certain point. Each selected worker then performs local computation based on the global state and its local dataset, and sends an update to the server. The server then applies these updates to its global state, and the process repeats.

So here are the high-level steps in the algorithm:

  1. Workers are sent the model by the server
  2. Workers compute an updated model based on their local data
  3. The updated models are sent from the workers to the server
  4. The server aggregates these models (by averaging) to construct the new global model
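The steps above can be sketched end-to-end. Below is a toy FedAvg on a linear model with synthetic data; the model, dataset, learning rate, and parameter values are my own illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, epochs, batch_size, lr=0.1):
    """E epochs of minibatch SGD on one worker's local data; returns the model."""
    w = w.copy()
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch_size):
            b = idx[s:s + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)  # least-squares gradient
            w -= lr * grad
    return w

# K workers, each with its own local dataset (non-iid in general).
K, d = 20, 5
w_true = rng.normal(size=d)
data = []
for k in range(K):
    Xk = rng.normal(size=(50, d))
    data.append((Xk, Xk @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(d)          # common initialization shared by all workers
C, E, B = 0.5, 5, 10     # fraction of workers, local epochs, minibatch size
for _ in range(30):      # rounds of communication
    chosen = rng.choice(K, size=max(1, int(C * K)), replace=False)
    updates = [local_update(w, *data[k], E, B) for k in chosen]
    sizes = np.array([len(data[k][0]) for k in chosen], dtype=float)
    w = np.average(updates, axis=0, weights=sizes)  # server-side averaging
```

Note that the workers return whole models, and the server averages them weighted by local dataset size, matching steps 3 and 4.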

These steps are almost the same as in traditional ML/DL training with a parameter server and workers, but there are some differences. Difference in step 1: not all workers, but a subset of workers, are chosen. Differences in step 2: workers are also the producers of the data; they work on the data they produce. Workers may also do multiple iterations of the local model update. Difference in step 3: the model, not the gradients, is transmitted back.

For step 3, I don't know why the model is sent rather than the gradients. The paper presents a derivation to argue that the two are equivalent, but then does not provide any justification for transmitting the model instead of the gradients. There is no explanation or experiment comparing the two approaches.
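For a single local step from a shared model, the equivalence is easy to see: averaging the returned models equals applying the averaged gradient. Here is a toy check (my own illustration, not from the paper):

```python
import numpy as np

# Two workers start from the same shared model w and each take one SGD step.
w = np.array([1.0, 2.0])
lr = 0.1
grads = [np.array([0.5, -1.0]), np.array([1.5, 3.0])]

# Averaging the locally updated models...
avg_of_models = np.mean([w - lr * g for g in grads], axis=0)
# ...equals one step with the averaged gradient.
model_from_avg_grad = w - lr * np.mean(grads, axis=0)

assert np.allclose(avg_of_models, model_from_avg_grad)
```

The equivalence breaks once workers take multiple local steps (the gradients are then evaluated at diverging local models), which is exactly the regime FedAvg operates in.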

Here is the algorithm. (The paper calls the workers clients.)

The amount of computation is controlled by three key parameters: C, the fraction of workers that perform computation in each round; E, the number of training passes each worker makes over its local dataset in each round; and B, the local minibatch size used for the worker updates. B = infinity indicates that the full local dataset is treated as a single minibatch.

And this is the biggest difference of this algorithm from traditional synchronous SGD:
In each synchronous round, the workers can perform multiple rounds of local model updates before uploading the model back to the cloud. Since the local data is small, computing/iterating over it is fast. And since communication is already high-latency, the workers may as well do many local computation rounds to make the round worthwhile.

The paper makes a big deal of the unbalanced data distribution and especially the non-iid (non-independent & identically distributed) data at the workers. However, there is nothing special in the FedAvg algorithm to address the non-iid data challenge. It works because SGD tolerates noise. And most likely, having many workers participate in each round helps a lot. Even with C=0.1, the number of smartphones used for training can be amazingly high. In the experimental results, 100 workers are used for the MNIST image classification application, and 1146 workers are used for the Shakespeare dataset language modeling application. This is way more workers than used in traditional ML, certainly for the MNIST and Shakespeare datasets. So the sheer number of workers helps compensate for the non-iid challenge.


Can Federated Learning bootstrap from zero-trained model?
Before reading the paper, I thought "maybe the smartphones are not initialized from random parameters, but with partially or mostly trained parameters". The experiments suggest otherwise: the smartphones start from an untrained model. However, Figure 1 in the paper shows that the smartphones (i.e., workers) should start with the same common initialization, because independent initializations do not converge.

How long is a round?
The experiments investigate how many rounds are needed to converge, but there is no explanation in the paper of the length of a round. In my estimation, the length of a round should be around 5 minutes or so. In practical use, Google employs as workers the smartphones that are plugged into a wall outlet and connected to Wi-Fi. (Android phones provide this information to Google, and a subset among them is randomly selected as workers. In our PhoneLab project, we also uploaded data from phones only when they were plugged in.) Since this is synchronous SGD, a round progresses at the rate of the slowest worker, so they must be using backup workers (maybe 5% or 10%) to combat the straggler problem.

FedAvg uses synchronous SGD, but would it also work with asynchronous rounds?
Unfortunately there are no experiments that investigate this question.

Experimental results

The experiments use the MNIST dataset for image classification, CIFAR-10 for image classification, and the Shakespeare dataset for language modeling, i.e., prediction of the next character (alphabet character, that is).

The paper says this: "Our goal is to use additional computation in order to decrease the number of rounds of communication needed to train a model. The speedups we achieve are due primarily to adding more computation on each client, once a minimum level of parallelism over clients is used." They achieve this by increasing E (or decreasing B), once C is saturated (which tends to happen at C=0.1 in their setup).

Even the pathologically biased distribution of local datasets works well, because the number of participating workers is high and provides smoothing. "Both standard SGD and FedAvg with only one client per round (C = 0), demonstrate significant oscillations in accuracy, whereas averaging over more clients smooths this out."

Tuesday, February 21, 2017

1 million pageviews

My blog recently reached 1 million pageviews. This warrants a short retrospective.

I started posting regularly in September 2010. I wanted to get into the cloud computing domain, so I needed to accumulate background on cloud computing work. I decided that as I read papers on cloud computing, I would post a summary to this blog. I thought that if I could explain what I learned from the papers in my own words, I would internalize those lessons better. And if others read those summaries and benefit, that is an extra plus.

"Writing is nature's way of telling you how sloppy your thinking is." In fact, I learned a lot writing those paper reviews. Writing the reviews gave me a deeper understanding of the work, beyond what I could achieve by passively reading. Putting them on the web was also a nice choice, because I could refer my students to some of these summaries when needed. And it turned out that I referred to those summaries myself very frequently to jog my memory. Since I had encoded the writing with my understanding, reading through my summary would refresh me on the important lessons from that work. Since my summaries were available on the web, all I needed was a Google search for muratbuffalo and the paper name.

(A side remark about my research journey: my research area in the first couple of years of my PhD was distributed algorithms and self-stabilization. Then, starting in 2002, wireless sensor networks became my research area. I applied stabilizing distributed algorithms to in-network querying and tracking in wireless sensor networks. Around 2009 I started transitioning to crowdsourced sensing and collaboration using smartphones. And starting in 2010, I transitioned to large-scale cloud computing systems. Distributed systems has been the common theme throughout. Here is a link to my research statement as of the end of 2016.)

Over time I included posts about my conference trips, book reviews, rants, and research advice for students. Posting research advice (on reading, writing, presenting) is also beneficial because I can refer my students to it. And occasionally I receive emails from remote corners of the world about how some of these posts helped or inspired people, and that makes me happy for an entire day.

Some of the big hits 

The bursty traffic all came from Hacker News. The regular traffic came from many sources: Google searches, blog posts, twitter links.

Google tells me I can earn up to $18.50 per month by placing ads on my blog using AdSense. No thanks, for now.

Here are the top 10 posts in my blog as of now. Looks like anything mentioning Facebook is a big hit. Deep learning is also very hot. Glad to see our hybrid logical clocks work also up there. And glad to see interest for TLA+.

Saturday, February 18, 2017

Bowling your way to the top

"Oh, this is very American!" I said, when I finally understood how Bowling scoring works.

Bowling scoring is nonlinear

In a bowling game, there are 10 rounds. There are 10 pins, and you get 2 shots in each round to knock down as many as you can.

Even if you are a novice, if you are eager and put effort into it, in each round you can knock down 6 pins. So that gives you a score of 6*10=60.

If you knock down 7 pins at each round, you get a score of 70.
8 pins, you get a score of 80.
9 pins, you get a score of 90.

Here is where things start to go nonlinear and you get accelerated returns. If you knock down all 10 pins with your two shots, it is called a spare. Your score for that round is not just 10: the pins you knock down with your next shot are also added to it. So if you had a spare in round k, and then rolled 7 and 2 in round k+1, you get 10+7=17 for round k and 9 for round k+1, a total of 26 points from these two rounds. With linear scoring, you would only get 10+9=19.

If you knock down all 10 pins with your first shot of a round, it is called a strike. Your score for that round is not just 10: the pins you knock down with your next *two* shots are also added to it. If you had a strike in round k, and then rolled 7 and 2 in round k+1, you get 10+7+2=19 points for round k and 9 for round k+1, a total of 28 points from these two rounds. With linear scoring, you would only get 10+9=19.

In the first game I played, I was knocking down about 7 pins each round, so I thought I should be in pretty good shape. Wrong. I got 4th place. The first two guys were experienced; they managed to hit sequences of strikes and spares, and their scores grew very fast. My third-place friend had managed to hit a series of spares toward the end of the game, and beat my score before I even understood it was being beaten. I had thought I was comfortably ahead.

It is more important to hit occasional strikes and spares than hitting a constantly comfortable 7 average.

So let's get back to where we left, the transition from linear to nonlinear scoring.
All 8 pins at every round, you get a total score of 80.
All 9s, you get a total score of 90.
All spares (say, 5 and 5 every round, plus the bonus shot at the end), you get a total score of 150, instead of 100.
All strikes, you get a total score of 300, instead of 100.

And that last one is called a perfect game. 
Here is a short video of a perfect game.
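The scoring rules above fit in a short function. Here is a sketch of a scorer (it assumes a complete, valid list of per-shot pin counts, including the bonus shots of the 10th round):

```python
def bowling_score(rolls):
    """Score a complete game from the list of pins knocked down per shot."""
    score, i = 0, 0
    for _ in range(10):
        if rolls[i] == 10:                   # strike: 10 + next two shots
            score += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + next shot
            score += 10 + rolls[i + 2]
            i += 2
        else:                                # open round: just the pins
            score += rolls[i] + rolls[i + 1]
            i += 2
    return score

print(bowling_score([10] * 12))        # the perfect game: 300
print(bowling_score([4, 3] * 10))      # a steady 7 per round: 70
print(bowling_score([5, 5] * 10 + [5]))  # all 5-5 spares: 150
```

The nonlinearity is visible right in the code: only the strike and spare branches reach forward into future shots.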

OK so what?

Why am I wasting my time and your time telling you about bowling scoring?

If you were born and raised in the US, you might have yawned reading through the text above. You might take the scoring for granted. In fact, when I realized the scoring works in a "funny" way, I asked my American friends to explain it. They didn't have much previous practice explaining the scoring. One of them said, after stalling for some time, "Hmm, I realize this is the first time I am explaining bowling scoring to someone." And this guy had played in a bowling league for a couple of years :-)

After a couple more rounds of explaining/questioning with a second friend, when I finally understood what was going on, I blurted: "Oh, this is very American!", which surprised my friend.

If you take this scoring for granted, all I will tell you is this: "three points for a win" is a relatively recent adoption in soccer scoring. Before that, it was 0 points for a loss, 1 point for a draw, and 2 points for a win. And the games were so boring.

Teams would go for a draw, because the prospect of gaining one extra point by putting effort into attacking was not worth risking your defensive stance, which could make you lose the game and get no points. The transition to three points for a win started only after 1980, taking until 2000 in some countries. And this led to a significant increase in the average number of goals scored per game.

This is not about bowling, is it?

Yes, you see, free markets are inherently nonlinear scoring markets. Nonlinear scoring applies especially for the current information technology markets, where "the best performers are able to capture a very large share of the rewards, and the remaining competitors are left with very little". In such a winner-take-all economy, you run the risk of being overlooked if your products are mediocre. You need to hit some strikes.

This is also true in academia. Yes, you need to show that you are publishing productively, and there is some pebble counting. But in order for those pebbles to count, you need some occasional gems in between. You need to hit some strikes.

It is more important to hit occasional strikes and spares than hitting a constantly comfortable 7 average.

You need to think big, aim big, and go for a strike, so you can achieve nonlinear returns occasionally. 

Other related links

1. Wait a minute? Didn't I tell you a couple days ago "worse is better"?
Yes, I did. But this is how I concluded that post: "Worse is better takes a simplistic/minimalist approach. Simple and minimal can be powerful, if it is not done too ugly. Is worse always better? No. As I said earlier systems design is all about tradeoffs. It is important to analyze and decide in advance what the priorities are."

In fact, a worse-is-better system hits a strike in a priority dimension, such as being minimalist and going viral. On the other hand, a do-the-right-thing system may get stuck hitting a constantly comfortable 7 average in all dimensions.

2. This nonlinear return idea also reminds me of the high-intensity interval training (HIIT) idea. Tim Ferriss had a very interesting interview with Prof. Martin Gibala on this. The idea in HIIT is that you get accelerated returns for the short, intense effort you put into your training.

Thursday, February 16, 2017

Mesos: A platform for fine-grained resource sharing in the data center

This paper appeared in NSDI'11 and introduced the Mesos job management and scheduling platform, which proved to be very influential in the big data processing ecosystem. Mesos has seen a large following because it is simple and minimalist. This reminds me of the "worse is better" approach to system design. This is an important point, and I will ruminate on it after I explain the Mesos platform.

The problem 

We need to make multiple frameworks coexist and share the computing resources in a cluster. Yes, we have submachine scheduling abstractions: first virtual machines and then containers. But we still need a coordinator/arbiter to manage/schedule the jobs submitted from these frameworks, to make sure that we don't underutilize or overload/overtax the resources in the cluster.

Offer-based scheduling

Earlier, I have talked about Borg which addressed this cluster management problem. While Borg (and later Kubernetes) takes a request-based scheduling approach, Mesos chooses to provide an offer-based scheduling approach.

In request-based scheduling, the frameworks provide their scheduling needs to the scheduler/controller, and the scheduler/controller decides where to place the tasks and launches them. This can arguably make the design of the controller overcomplicated: the scheduler/controller may need to understand too many details about multiple frameworks in order to serve their requests adequately. This may not scale well as the number of frameworks to support grows. (And we have an abundance of big data processing frameworks.)

In stark contrast, Mesos delegates the control over scheduling to the frameworks. The Mesos master (i.e., the controller) provides resource offers to the frameworks,  and the frameworks decide which resources to accept and which tasks to run on them.

In other words, Mesos takes a small-government, libertarian approach to cluster management :-)

Since Mesos is minimalist, it is simple and nicely decoupled from the various frameworks it serves. This made Mesos go viral and achieve high-adoption. But systems design is an exercise in choosing which tradeoffs you make. Let's study the drawbacks. (I am putting on my critical hat, I will talk about the benefits of Mesos again toward the end of this post.)

The long-running tasks and the big tasks strain this offer-based scheduling model. Some frameworks may schedule tasks that overstay their welcome and take advantage of the too-trusting and hands-off Mesos. This would be unfair to the other client frameworks. (Of course, the Mesos master may take into account "organizational policies such as fair sharing" when extending offers to the frameworks, and can even kill long-running tasks.)

Moreover, since Mesos is hands-off, it does not provide fault-tolerance support for long-running tasks, which are more likely to experience failure in their lifetimes as they run longer. Mesos punts the ball to the client frameworks which will need to carry the burden. And doing this for each client framework may lead to redundant/wasted effort. Fortunately other helper systems like Marathon emerged to address this issue and provide support for long running tasks.

Even assuming that the client frameworks are not-greedy and on their best cooperating behavior, they may not have enough information about other tasks/clients of Mesos to make optimal scheduling decisions. The paper mentions that: "While this decentralized scheduling model may not always lead to globally optimal scheduling, we have found that it performs surprisingly well in practice, allowing frameworks to meet goals such as data locality nearly perfectly."

Related to this problem, another very simple idea makes a cameo appearance in the paper: "We used delay scheduling to achieve data locality by waiting for slots on the nodes that contain task input data. In addition, our approach allowed us to reuse Hadoop's existing logic for re-scheduling of failed tasks and for speculative execution (straggler mitigation)."

Is that too simple a technique? Well, it is hard to argue with results. The delay scheduling paper has received 1000+ citations since 2010: Delay scheduling: A simple technique for achieving locality and fairness in cluster scheduling. In EuroSys'10, 2010.
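The idea really is small enough to fit in a few lines: decline a few offers that lack data locality before settling for a non-local node. Here is a hedged sketch (the function and parameter names are hypothetical, not from the Hadoop or Mesos code):

```python
def choose_node(offered_nodes, local_nodes, skipped, max_skips=3):
    """Delay scheduling: prefer a node holding the task's input data.

    Returns (chosen_node, new_skip_count). chosen_node is None when we
    decline this round's offers and keep waiting for a local slot.
    """
    for node in offered_nodes:
        if node in local_nodes:       # data-local slot available: take it
            return node, 0
    if skipped >= max_skips:          # waited long enough: run non-locally
        return offered_nodes[0], 0
    return None, skipped + 1          # decline and wait for a better offer

# Example: no local node offered yet, so we skip; after enough skips we settle.
print(choose_node(["a", "b"], local_nodes={"b"}, skipped=0))  # ("b", 0)
print(choose_node(["a"], local_nodes={"b"}, skipped=0))       # (None, 1)
print(choose_node(["a"], local_nodes={"b"}, skipped=3))       # ("a", 0)
```

Because tasks are short and slots free up frequently, waiting a few offers usually buys near-perfect locality at a tiny latency cost.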

Mesos architecture

Mesos means middle or intermediate, from the Greek mesos. Nice name.

The Mesos master is ZooKeeper-guarded, so a hot standby can step in and take over if the Mesos master fails. The Mesos master manages the resources by talking to Mesos slaves/workers on the machines in the cluster. This is similar to how the BorgMaster manages resources by talking to the Borglets on the machines.

So where is the scheduler in this architecture? This responsibility is punted to the client frameworks. As we mentioned above, the Mesos master provides offers to the client frameworks, and it is up to the client framework to accept an offer.

Here is how things work from the client framework's perspective. Each framework intending to use Mesos needs to implement two components: a scheduler that registers with the Mesos master to be offered resources, and an executor that is launched on Mesos worker nodes to run the framework’s tasks.

This table shows the callbacks and actions to implement in order to write the scheduler and the executor components. (For Python users, there is pyMesos to help you write these components in Python.)

In the resourceOffer callback, the scheduler should implement the functionality to select which of the offered resources to reject and which to use, along with how to pass Mesos a description of the tasks it wants to launch on them. What if an offer becomes unavailable in the meanwhile? The Mesos master will then warn the scheduler via the offerRescinded callback that the offer has been rescinded, and it is the client framework's responsibility to handle this and reschedule the job using the next offers from the Mesos master.

The implementation of the scheduler gets more and more involved if the framework would like to keep track of tasks for a submitted job and provide the users of the framework this information. The scheduler gets callbacks on statusUpdate of the tasks, but it needs to piece together and track which job these tasks correspond to. For example, the scheduler gets a callback when a task is finished, and then it is the responsibility of the scheduler to check and mark a job as completed when all its tasks are finished.
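To illustrate the shape of this callback-driven bookkeeping, here is a toy scheduler against a simulated master. The class and method names merely mirror the paper's callback vocabulary; this is not the actual Mesos or pyMesos API:

```python
# Toy illustration of the offer-based split: the master pushes resource
# offers, the framework scheduler decides what to launch, and per-task
# status updates are pieced together into job-level progress.

class ToyScheduler:
    def __init__(self, cpus_per_task):
        self.cpus_per_task = cpus_per_task
        self.launched = []      # task ids we asked the master to run
        self.finished = set()   # task ids reported finished

    def resource_offer(self, offer_id, cpus):
        """Called when the master extends an offer; returns tasks to launch.
        Capacity we don't use is implicitly declined back to the master."""
        tasks = []
        while cpus >= self.cpus_per_task:
            task_id = len(self.launched)
            self.launched.append(task_id)
            tasks.append(task_id)
            cpus -= self.cpus_per_task
        return tasks

    def status_update(self, task_id, state):
        """Called per task; the framework itself must track job completion."""
        if state == "FINISHED":
            self.finished.add(task_id)

    def job_done(self):
        return len(self.finished) == len(self.launched)

# Simulated round: the master offers 5 CPUs; each task needs 2.
sched = ToyScheduler(cpus_per_task=2)
launched = sched.resource_offer(offer_id=0, cpus=5)   # fits 2 tasks
for t in launched:
    sched.status_update(t, "FINISHED")
print(sched.job_done())
```

Even in this toy version you can see the burden the paper describes: marking the job complete is the framework's responsibility, stitched together from individual statusUpdate-style callbacks.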

This scheduler/executor abstraction can also get leaky. The paper mentions this about the Hadoop port, which came to a total of 1500 lines of code: "We also needed to change how map output data is served to reduce tasks. Hadoop normally writes map output files to the local filesystem, then serves these to reduce tasks using an HTTP server included in the TaskTracker. However, the TaskTracker within Mesos runs as an executor, which may be terminated if it is not running tasks. This would make map output files unavailable to reduce tasks. We solved this problem by providing a shared file server on each node in the cluster to serve local files. Such a service is useful beyond Hadoop, to other frameworks that write data locally on each node."

If you are thin like Mesos, you can (should?) add on weight later when it is warranted.

A side remark: What is it with the "fine-grained" in the title?

The title is a humble and conservative title: "Mesos: A platform for fine-grained resource sharing in the data center". The paper seems insecure about this issue, and keeps referring back to this to emphasize that Mesos works best with fine-grained short tasks. This gets peculiar for a careful reader.

Well, I think I know the reason for this peculiarity. The authors have probably been burned about this before by an over-critical reviewer (it is always Reviewer 2!), so they are trying to preemptively dismantle the same criticism before it is aired again. This is a very solid and important paper, but I wouldn't be surprised if even a paper of this caliber had been rejected earlier, and this version may be their second (or even third) submission. Reviewer 2 might have told the authors, not too subtly, that the paper was claiming too much credit (which is always a big pet peeve of Reviewer 2), and the current version of the paper is written defensively to guard against this criticism.

Oh, the joys of academia. I wouldn't be surprised if Reviewer 2 also found the paper low on novelty and more suitable for industrial research than for academic research.


Yes, Mesos is too eager to punt the ball to the clients. But this is not necessarily a bug; it can be a feature. Mesos is thin, and it doesn't get in your way: it gives your frameworks low-level control over scheduling and management decisions.

Mesos reminds me of the "worse-is-better" approach to system design. (Ok, read that link, it is important. I will wait.) Since Mesos is minimalist and simple it is a viral platform. (Much like MapReduce and Hadoop were.)

Borg/Kubernetes aim to do "the-right-thing". They provide a blackbox cluster manager with a lot of features: optimized scheduling, fault-tolerance, etc. This is great if you fit the workloads they cater to, which covers most web-services workloads. But this approach may actually get in your way if you'd like low-level control over scheduling/management decisions.

I first read "worse is better" in 1999, as a fresh graduate student working on the theory side of distributed algorithms and self-stabilization. I was a Dijkstra fan, and this article was a real eye opener for me. It made me question my faith :-)

Worse is better takes a simplistic/minimalist approach. Simple and minimal can be powerful, provided it is not done too ugly. Is worse always better? No. As I said earlier, systems design is all about tradeoffs. It is important to analyze and decide in advance what the priorities are.

I feel like I will pick up on this thread at another time.

Saturday, February 11, 2017

Large-scale cluster management at Google with Borg

This paper from Google appeared at EuroSys'15. The paper presents Borg, the cluster management system Google has used since 2005. The paper includes a section at the end about the good and bad lessons learned from using Borg, and how these led to the development of the Kubernetes container-management system, which powers the Google Cloud Platform and App Engine.

Borg architecture

This is the Borg. Resistance is futile.

A median Borg cell is 10K machines, and all the machines in a cell are served by a logically centralized controller: the Borgmaster.

Where is the bottleneck in the centralized Borg architecture? The paper says it is still unclear whether this architecture would hit a practical scalability limit. Anytime Borg was given a scalability target, they managed to achieve it by applying basic techniques: caching, loose synchronization, and aggregation.

What helped the most for achieving scalability was decoupling the scheduler component from the Borgmaster. The scheduler is loosely synchronized with the Borgmaster: it operates on a cached copy of the cell state and acts as a counsel/advisor to the Borgmaster. If the scheduler makes a decision that is not feasible (because it is based on an outdated state: a machine failed, the resources are gone, etc.), the Borgmaster will not take that advice and will ask the scheduler to reschedule the job, this time hopefully with a better, up-to-date state.

To provide high availability, the Borgmaster is Paxos-replicated over 5 machines. Replicas serve read-only RPC calls to reduce the load on the Borgmaster leader. In addition to the Paxos log, there are also periodic checkpoints/snapshots to restore the Borgmaster's state to an arbitrary point in the past. A fauxmaster can also use this functionality to debug the Borgmaster and its scheduling performance.

A Borglet is the local Borg agent on every machine in a cell. (In Mesos this corresponds to the Mesos slave, or in the new terminology, the Mesos agent.) Each Borgmaster replica runs a stateless link shard to handle the communication with a subset of Borglets. The link shard aggregates and compresses the Borglet state updates and reports only diffs, to reduce the update load at the elected master.

Jobs and tasks

A job consists of many tasks (which all run the same binary program). 50% of machines run 9+ tasks; the 90%ile machine has ~25 tasks and runs ~4500 threads.

Google's Borg workload consists of 2 main categories. Production jobs are long-running services that serve short user requests and require low latency. Batch jobs, on the other hand, are less sensitive to performance fluctuations. The workload has dynamic surges: batch jobs come and go, and production jobs have a diurnal pattern. (A representative Borg workload trace is publicly available.) Borg needs to handle this dynamic demand while keeping utilization of the cluster machines as high as possible.

It turns out tight-packing scheduling is not optimal for high utilization, because it is too strict and fails to accommodate bursty loads and misestimations from Borg clients. Instead a hybrid packing is used, which provides 5% better packing efficiency than the tight-packing/best-fit policy. Borg uses priorities for tasks: if a machine runs out of resources to accommodate its assigned tasks (e.g., due to a burst in demand), lower-priority tasks on that machine are killed and added to the scheduler's pending queue for re-placement.
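A toy sketch of this priority-based eviction (the names, numbers, and the greedy policy are made up for illustration; real Borg packing is far more sophisticated):

```python
# Toy priority preemption on a single machine: higher-priority tasks keep
# their resources; the rest are evicted back to the pending queue.
def preempt(tasks, capacity):
    """tasks: list of (name, priority, usage) tuples."""
    kept, evicted, used = [], [], 0
    for name, prio, usage in sorted(tasks, key=lambda t: -t[1]):
        if used + usage <= capacity:
            kept.append(name)
            used += usage
        else:
            evicted.append(name)   # re-queued for placement elsewhere
    return kept, evicted

# A demand burst shrinks effective capacity; the batch task gets evicted.
kept, evicted = preempt([("prod", 200, 6), ("batch", 100, 5)], capacity=8)
print(kept, evicted)   # ['prod'] ['batch']
```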

Users operate on jobs by issuing remote procedure calls (RPCs) to Borg, most commonly from a command-line tool or from other Borg jobs. To help users manage their jobs, Borg provides a declarative job specification language and job monitoring/management tools. Borg uses the concept of an allocation set for a job, which corresponds to the concept of a pod in Kubernetes.

Task startup latency at a machine is about 25 seconds, 20 seconds of which is package installation time. To reduce the latency from package installation, Borg tries to schedule tasks on machines where the packages are already available. In addition, Borg employs tree- and torrent-like protocols to distribute packages to machines in parallel. Finally, Borg also tries to place tasks so as to reduce the correlation of failures for a given job.

Almost every task contains a builtin HTTP server that publishes health and performance info. Borg monitors the health-check URL and restarts tasks that fail to respond.

Wednesday, January 18, 2017

Deep Learning With Dynamic Computation Graphs (ICLR 2017)

This is a paper by Google that is under submission to ICLR 2017. Here is the OpenReview link for the paper. The paper pdf as well as paper reviews are openly available there. What a concept!

This paper was of interest to me because I wanted to learn about dynamic computation graphs. Unfortunately, almost all machine learning/deep learning (ML/DL) frameworks operate on static computation graphs and can't handle dynamic computation graphs. (Dynet and Chainer are exceptions.)

Using dynamic computation graphs allows dealing with recurrent neural networks (RNNs) better, among other use cases. (Here is a great article about RNNs and LSTMs. Another good writeup on RNNs is here.) TensorFlow already supports RNNs, but by adding padding to ensure that all input data are of the same size, i.e., the maximum size in the dataset/domain. Even then, this support is good only for linear RNNs, not for tree-RNNs, which are suitable for more advanced natural language processing.

This was a very tough paper to read. It was definitely above my level as a beginner. The paper assumed a lot of background from the reader: familiarity with TensorFlow execution and operators, some programming language background, and familiarity with RNNs. The dynamic batching idea introduced in the paper is a complex idea, but it is explained briefly (and maybe a bit poorly?) in one page. Even when I gave the paper all my attention and tried to form several hypotheses about the dynamic batching idea, I was unable to make progress. In the end, I got help from a friend who is an expert at deep learning.

I skipped reading the second part of the paper, which introduced a combinator library for NNs. The library is relevant because it was instrumental in implementing the dynamic batching idea introduced in the first part of the paper. This second part looked interesting, but the functional programming language concepts discussed were hard for me to follow.

The dynamic batching idea

This paper introduces the dynamic batching idea to emulate dynamic computation graphs (DCGs) of arbitrary shapes and sizes on top of TensorFlow, which only supports static computation graphs.

Batching is important because GPUs crave batching, especially when dealing with text data where each item is small. (Images are already large enough to keep the GPU busy, but that is not so for text data.)

However, the challenge for batching when using DCGs is that the graph of operations is not static and can be different for every input. The dynamic batching algorithm fixes batching for DCGs. Given a set of computation graphs as input, each of which has a different size and topology, the dynamic batching algorithm rewrites the graphs by batching together all instances of the same operation that occur at the same depth in the graph. (Google is really into graph rewriting.)

The dynamic batching algorithm takes as input a batch of multiple input graphs and treats them as a single disconnected graph. Source nodes are constant tensors, and non-source nodes are operations. Scheduling is performed using a greedy algorithm (I omit some of the more detailed steps in the paper):
  • Assign a depth, d, to each node in the graph. Nodes with no dependencies (constants) are assigned depth zero. Nodes with only dependencies of depth zero, are assigned depth one, and so on.
  • Batch together all nodes invoking the same operation at the same depth into a single node.
  • Concatenate all outputs which have the same depth and tensor type. The order of concatenation corresponds to the order in which the dynamic batching operations were enumerated.
Each dynamic operation is instantiated once in the static dataflow graph. The inputs to each operation are tf.gather ops, and the outputs are fed into tf.concat ops. These TensorFlow ops are then placed within a tf.while_loop. Each iteration of the loop will evaluate all of the operations at a particular depth. The loop maintains state variables for each tensor type t, and feeds the output of concat for tensor type t and iteration d into the input of the gathers at tensor type t and iteration d+1.
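The depth-assignment and grouping steps above can be sketched in plain Python. This is a toy emulation of the scheduling logic only, not the paper's TensorFlow implementation (no gather/concat/while_loop machinery):

```python
from collections import defaultdict

# Graphs are dicts: node -> (op, [input nodes]); constants have op "const".
def assign_depths(graph):
    depth = {}
    def d(node):
        if node not in depth:
            op, inputs = graph[node]
            depth[node] = 0 if not inputs else 1 + max(d(i) for i in inputs)
        return depth[node]
    for n in graph:
        d(n)
    return depth

def batch_by_depth_and_op(graph):
    depth = assign_depths(graph)
    groups = defaultdict(list)
    for node, (op, _) in graph.items():
        if op != "const":
            groups[(depth[node], op)].append(node)  # one batched op per group
    return groups

# Two small "trees" treated as a single disconnected graph.
g = {"a": ("const", []), "b": ("const", []), "c": ("const", []),
     "m1": ("mul", ["a", "b"]), "m2": ("mul", ["b", "c"]),
     "s": ("add", ["m1", "m2"])}
groups = batch_by_depth_and_op(g)
print(sorted(groups[(1, "mul")]))  # ['m1', 'm2'] batched into one invocation
```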

Experimental results

The test results emphasize the importance of batching, especially on GPUs, where it can enable speedups of up to 120x. The speedup ratio denotes the ratio between the per-tree time for dynamic batching on random shapes ("full dynamic") versus manual batching with a batch size of 1.

Dynamic batching instantiates each operation only once, and invokes it once for each depth, so the number of kernel invocations is log(n), rather than n, where n is tree size. Dynamic batching thus achieves substantial speedups even at batch size 1, because it batches operations at the same depth within a single tree.


Dynamic batching works on a single machine; it is not distributed. Dynamic batching requires an all-to-all broadcast, so it doesn't scale to distributed machines.

This Google paper doesn't cite or discuss Dynet and Chainer, but Dynet and Chainer are single-machine ML/DL frameworks that support dynamic computation graphs. On one hand, Dynet & Chainer are most likely not good at batching, so the dynamic batching method here is a contribution. On the other hand, since Dynet & Chainer support dynamic computation graphs natively (rather than by emulating them on static computation graphs as dynamic batching does), they are most likely more expressive than what dynamic batching can achieve. In fact, another limitation of the dynamic batching approach is that it requires all operations that might be used to be specified in advance. Each input/output may have a different type, but all types must be fixed and fully specified in advance.

Tuesday, January 17, 2017

My first impressions after a week of using TensorFlow

Last week I went through the TensorFlow (TF) Tutorials here. I found that I hadn't understood some important points about TensorFlow execution when I read the TensorFlow paper. I am noting them here fresh, to capture my experience as a beginner. (As one gathers more experience with a platform, the baffling introductory concepts start to seem obvious and trivial.)

The biggest realization I had was to see a dichotomy in TensorFlow between two phases. The first phase defines a computation graph (e.g., a neural network to be trained and the operations for doing so). The second phase executes the computation/dataflow graph defined in Phase1 on a set of available devices. This deferred execution model enables optimizations in the execution phase by using global information about the computation graph: graph rewriting can be done to remove redundancies, better scheduling decisions can be made, etc. Another big benefit is in enabling flexibility and the ability to explore/experiment in the execution phase through the use of partial executions of subgraphs of the defined computation graph.
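The two-phase model can be caricatured in a few lines of plain Python. This is a toy illustration of "build a symbolic graph first, execute it later", not the TensorFlow API:

```python
# Phase 1 builds a blueprint of Node objects; Phase 2 evaluates on demand.
class Node:
    def __init__(self, fn=None, inputs=()):
        self.fn, self.inputs = fn, inputs

def placeholder():
    return Node()                      # gets its value from the feed later

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

def run(fetch, feed):
    # Phase 2: evaluate only what the fetched node actually needs.
    if fetch in feed:
        return feed[fetch]
    args = [run(i, feed) for i in fetch.inputs]
    return fetch.fn(*args)

# Phase 1: blueprint only, nothing is computed yet.
x, y = placeholder(), placeholder()
z = add(mul(x, x), y)                  # z = x*x + y
# Phase 2: execute with concrete values.
print(run(z, {x: 3, y: 4}))            # 13
```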

In the rest of this post, I first talk about Phase1: Graph construction, Phase2: Graph execution, and then I give a very brief overview of TensorFlow distributed execution, and conclude with a discussion on visualizing and debugging in TensorFlow.

Phase1: Graph construction

This first phase, where you design the computation graph, is where most of your effort is spent. Essentially the computation graph consists of the neural network (NN) to be trained and the operations to train it. Here you lay out the computation/dataflow graph brick by brick using TensorFlow operations and tensors. But what you are designing is just a blueprint; nothing gets built yet.

Since you are designing the computation graph, you use placeholders for input and output. Placeholders denote what type of input is expected. For example, x may correspond to your training data, and y_ may be your training labels, and you may define them as follows using the placeholders.
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

This says that x will later get instantiated with an unspecified number of rows (you use 'None' to tell this to TensorFlow) of 784-element float32 vectors. This setup enables us to feed the training data to the NN in batches, and gives you flexibility in the graph execution phase to instantiate multiple workers in parallel with the computation graph/NN and train them in parallel by feeding them different batches of your input data.

As a more advanced topic in graph construction, heads up for the variable scopes and sharing of the variables. You can learn more about them here.

Phase2: Graph execution using sessions 

After you get the computation graph designed to perfection, you switch to the second phase where the graph execution is done. Graph/subgraph execution is done using sessions. A session encapsulates the runtime environment in which graphs/subgraphs instantiate and execute.

When you open a session, you first initialize the variables by calling "tf.global_variables_initializer().run()". Surprise! In Phase1 you had assigned variables initial values, but those did not get assigned/initialized until you got to Phase2 and called "tf.global_variables_initializer". For example, let's say you asked b to be initialized as a vector of size 10 with all zeros "b = tf.Variable(tf.zeros([10]))" in Phase1. That didn't take effect until you opened a session, and called "tf.global_variables_initializer". If you had typed in "print( b.eval() )" in the first part after you wrote "b = tf.Variable(tf.zeros([10]))", you get an error: " ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`  ".

This is because b.eval() maps to tf.get_default_session().run(b), and you don't have any session in Phase1. On the other hand, if you try print(b.eval()) in Phase2 after you call "tf.global_variables_initializer", the initialization takes effect and you get the output [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.].

Each invocation of the Session API is called a step, and TensorFlow supports multiple concurrent steps on the same graph. In other words, the Session API allows multiple run() calls in parallel to improve throughput. This is basically performing dataflow programming over the symbolic computation graph built in Phase1.

In Phase2, you can open sessions and close sessions to your heart's content. Tearing down a session and reopening a session has several benefits. This way you instruct TensorFlow runtime to forget about the previous values assigned to the variables in the computation graph, and start again with a new slate (which can be useful for hyperparameter tuning). When you close a session you release that state, and when you open a session you initialize the graph again and start from scratch. You can even have multiple sessions open concurrently in theory, and that may even be useful for avoiding variable naming clashes.

An important concept for Phase2 is partial graph execution. When I read the TensorFlow paper first time, I hadn't understood the importance of partial graph execution, but turns out it is important and useful. The API for executing a graph allows the client to specify the subgraph that should be executed. The client selects zero or more edges to feed input tensors into the dataflow, and one or more edges to fetch output tensors from the dataflow. Then the runtime prunes the graph to contain the necessary set of operations.
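The pruning step can be sketched as a reverse reachability computation over the dependency graph. This is a toy model of the idea, not TensorFlow's actual pruning code:

```python
# Keep only the ops the fetched nodes depend on; fed nodes cut off their
# upstream dependencies. Graph is a dict: node -> list of input nodes.
def prune(graph, fetches, feeds):
    needed, stack = set(), list(fetches)
    while stack:
        n = stack.pop()
        if n in needed or n in feeds:
            continue                   # fed values need no upstream ops
        needed.add(n)
        stack.extend(graph[n])
    return needed

# Fetching "loss" with "x" fed skips the "summary" overlay entirely.
g = {"x": [], "w": [], "h": ["x", "w"], "loss": ["h"], "summary": ["loss"]}
print(sorted(prune(g, fetches=["loss"], feeds=["x"])))  # ['h', 'loss', 'w']
```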

Partial graph execution is useful in training parts of the NN at a time. However, it is commonly exercised in a more mundane way in basic training of NNs. When you are training the NN, every K iterations you may like to test with the validation/test set. You had defined those in Phase1 when you defined the computation graph, but these validation/test evaluation subgraphs are only included and executed every K iterations, when you ask to evaluate them. This reduces the overhead in execution. Another example is the tf.summary operators, which I will talk about in visualizing and debugging. The tf.summary operators are defined as peripheral operations to collect logs from computation graph operations. You can think of them as an overlay graph. If you like to execute tf.summary operations, you explicitly include them in the fetches of the run call. When you leave them out, the tf.summary operations (that overlay graph) are pruned out and don't get executed. Mundane it is, but it provides a lot of computation optimization as well as flexibility in execution.

This deferred execution model in TensorFlow is very different from the traditional instant-gratification, instant-evaluation execution model. But it serves a purpose. The main idea of Phase2 is that, after you have painstakingly constructed the computation graph in Phase1, this is where you try to get as much mileage as you can out of that computation graph.

Brief overview of TensorFlow distributed execution

A TensorFlow cluster is a set of tasks (named processes that can communicate over a network) that each contain one or more devices (such as CPUs or GPUs). Typically a subset of those tasks is assigned as parameter-server (PS) tasks, and others as worker tasks.

Tasks are run as (Docker) containers in jobs managed by a cluster scheduling system (Kubernetes). After device placement, a subgraph is created per device. Send/Receive node pairs that communicate across worker processes use remote communication mechanisms such as TCP or RDMA to move data across machine boundaries.

Since the TensorFlow computation graph is flexible, it is possible to easily allocate subgraphs to devices and machines. Distributed execution is therefore mostly a matter of computation subgraph placement and scheduling. Of course there are many complicating factors: heterogeneity of devices, communication overheads, just-in-time scheduling (to reduce overhead), etc. The Google TensorFlow papers mention that they perform graph rewriting and infer just-in-time scheduling from the computation graphs.

I haven't started delving into TensorFlow distributed, and haven't experimented with it yet. After I experiment with it, I will provide a longer write up.

Visualizing and debugging

The tf.summary operation provides a way to collect and visualize TensorFlow execution information. tf.summary operators are peripheral operators; they attach to other variables/tensors in the computation graph and capture their values. Again, remember the two-phase dichotomy in TensorFlow. In Phase1, you define and describe these tf.summary operators for the computation graph, but they don't get executed. They only get executed in Phase2, where you create a session, execute the graph, and explicitly ask for the tf.summary graph to be executed as well.

If you use the tf.summary.FileWriter, you can write the values the tf.summary operations collected during a run into a log file. Then you can point the TensorBoard tool at the log file to visualize the computation graph, as well as how the values evolved over time.

I didn't get much use from the TensorBoard visualization. Maybe it is because I am a beginner; I don't find the graphs useful even after gaining a basic understanding of how to read them. Maybe they become useful for very, very large computation graphs.

The Google TensorFlow whitepaper says that there is also a performance tracing tool called EEG but that is not included in the opensource release.

Related links 

By clicking on label "mldl" at the end of the post, you can reach all my posts about machine learning / deep learning (ML/DL).

In particular the series below reviews the introductory concepts in ML/DL.
Learning Machine Learning: A beginner's journey 
Linear Regression
Logistic Regression
Multinomial Logistic Regression

Thursday, January 12, 2017

Google DistBelief paper: Large Scale Distributed Deep Networks

This paper introduced the DistBelief deep neural network architecture. The paper is from NIPS 2012. Considering the pace of progress in deep learning, that is old, and it shows. DistBelief doesn't support distributed GPU training, which most modern deep networks (including TensorFlow) employ. The scalability and performance of DistBelief have long since been surpassed.

On the other hand, the paper is a must read if you are interested in distributed deep network platforms. This is the paper that applied the distributed parameter-server idea to deep learning. The parameter-server idea is still going strong, as it suits the iterative-convergent nature of machine learning and deep learning tasks. The DistBelief architecture has been used by the Microsoft Adam project, Baidu Deep Image, Apache Hama, and Petuum's Bosen. Google, though, has since switched from the DistBelief parameter-server to TensorFlow's hybrid dataflow architecture, citing the difficulty of customizing/optimizing DistBelief for different machine learning tasks. And of course TensorFlow also brought support for distributed GPU execution for deep learning, which improves performance significantly.

I think another significance of this paper is that it established connections between deep-learning and distributed graph processing systems. After understanding the model-parallelism architecture in DistBelief, it is possible to transfer some distributed graph processing expertise (e.g., locality-optimized graph partitioning) to address performance optimization of deep NN platforms.

The DistBelief architecture

DistBelief supports both data and model parallelism. I will use the Stochastic Gradient Descent (SGD) application as the example to explain both cases. Let's talk about the simple case, data parallelism first.

Data parallelism in DistBelief

In the figure there are 3 model replicas. (You can have 10s, even 100s, of model replicas, as the evaluation section of the paper shows in Figures 4 and 5.) Each model replica has a copy of the entire neural network (NN), i.e., the model. The model replicas execute in a data-parallel manner, meaning that each replica works on one shard of the training data, going through its shard in mini-batches to perform SGD. Before processing a mini-batch, each model replica asynchronously fetches from the parameter-server service an updated copy of its model parameters $w$. And after processing a mini-batch and computing parameter gradients, $\Delta w$, each model replica asynchronously pushes these gradients to the parameter-server, upon which the parameter-server applies these gradients to the current value of the model parameters.

It is OK for the model replicas to work concurrently in an asynchronous fashion because the $\Delta$ gradients are commutative and additive with respect to each other. It is even acceptable for the model replicas to slack a bit in fetching an updated copy of the model parameters $w$. It is possible to reduce the communication overhead of SGD by limiting each model replica to request updated parameters only every nfetch steps and send updated gradient values only every npush steps (where nfetch might not be equal to npush). This slack may even be advantageous at the beginning of training, when the gradients are steep; however, toward convergence to an optimum, when the gradients become subtle, this much slack may cause dithering. Fortunately, this is where the Adagrad adaptive learning rate procedure helps. Rather than using a single fixed learning rate on the parameter server, Adagrad uses a separate adaptive learning rate $\eta$ for each parameter. In Figure 2 the parameter-server update rule is $w' := w - \eta \Delta w$. An adaptive learning rate that is large early in training and small closer to convergence is most suitable.
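A minimal numpy sketch of this loop, combining the update rule $w' := w - \eta \Delta w$ with Adagrad's per-parameter learning rate. This is a single-process illustration, so the asynchrony of real replicas is only mimicked by independent push calls:

```python
import numpy as np

# Toy parameter server with Adagrad: each parameter gets its own effective
# learning rate eta / sqrt(sum of squared gradients).
class ParamServer:
    def __init__(self, w, eta=0.1, eps=1e-8):
        self.w, self.eta, self.eps = w, eta, eps
        self.g2 = np.zeros_like(w)          # accumulated squared gradients

    def fetch(self):
        return self.w.copy()                # replica pulls current parameters

    def push(self, dw):                     # replica pushes its gradients
        self.g2 += dw ** 2
        self.w -= self.eta * dw / np.sqrt(self.g2 + self.eps)

ps = ParamServer(np.zeros(3))
# Two replicas push the same gradient direction asynchronously.
for dw in (np.array([1.0, 0.5, 0.0]), np.array([1.0, 0.5, 0.0])):
    ps.push(dw)
# Adagrad normalization: the frequently-large coordinate 0 and the smaller
# coordinate 1 end up taking steps of the same magnitude.
print(ps.w)
```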

Although the parameter-server is drawn as a single logical entity, it is itself implemented in a distributed fashion, akin to how distributed key value stores are implemented. In fact the parameter server may even be partitioned over the model replicas so each model replica becomes the primary server of one partition of the parameter-server.

Model parallelism in DistBelief

OK, now to explain model-parallelism, we need to zoom in on a model replica. As shown in the figure, a model replica does not need to be a single machine. A five-layer deep neural network with local connectivity is shown here, partitioned across four machines called model-workers (blue rectangles). Only those nodes with edges that cross partition boundaries (thick lines) will need to have their state transmitted between machines. Even in cases where a node has multiple edges crossing a partition boundary, its state is only sent to the machine on the other side of that boundary once. Within each partition, computation for individual nodes is parallelized across all available CPU cores.

When the model replica is sharded over multiple machines as in the figure, this is called *model-parallelism*. Typically the model replica, i.e. the NN, is sharded over up to 8 model-worker machines. Scalability suffers when we try to partition the model replica among more than 8 model-workers. While we could tolerate slack between the model-replicas and the parameter-server, inside the model-replica the model-workers need to act consistently with respect to each other as they perform forward activation propagation and backward gradient propagation.

For this reason, proper partitioning of the model-replica over the model-workers is critical for performance. How is the model, i.e., the NN, partitioned over the model-workers? This is where the connection to distributed graph processing occurs. The performance benefit of distributing the model, i.e., the deep NN, across multiple model-worker machines depends on the connectivity structure and computational needs of the model. Obviously, models with local connectivity structures tend to be more amenable to extensive distribution than fully-connected structures, given their lower communication requirements.

The final question that remains is the interaction of the model-workers with the parameter-server. How do the model-workers, which constitute a model-replica, update the parameter-server? Since the parameter-server itself is also implemented distributedly (often over the model replicas), each model-worker needs to communicate with just the subset of parameter-server shards that hold the model parameters relevant to its partition. For fetching the model from the parameter-server, I presume the model-workers need to coordinate with each other and do this in a somewhat synchronized manner before starting a new mini-batch.

[Remark: Unfortunately the presentation of the paper was unclear. For example, there wasn't a clean distinction made between the terms "model-replica" and "model-worker". Because of these ambiguities and the complicated design ideas involved, I spent a good portion of a day being confused and irritated with the paper. I initially thought that each model-replica has all the model (correct!), but that each model-replica is responsible for updating only part of the model in the parameter-server (incorrect!).]


The paper evaluated DistBelief on a speech recognition application and an ImageNet classification application.
The speech recognition task used a deep network with five layers: four hidden layers with sigmoidal activations and 2560 nodes each, and a softmax output layer with 8192 nodes. The network was fully-connected layer-to-layer, for a total of approximately 42 million model parameters. This lack of locality in the connectivity structure is the reason the speech recognition application did not scale beyond 8 model-worker machines inside a model-replica: when partitioning the model over more than 8 model-workers, the network overhead starts to dominate in the fully-connected network structure, and there is less work for each machine to perform with more partitions.

For visual object recognition, DistBelief was used to train a larger neural network with locally-connected receptive fields on the ImageNet data set of 16 million images, each scaled to 100x100 pixels. The network had three stages, each composed of filtering, pooling, and local contrast normalization, where each node in the filtering layer was connected to a 10x10 patch in the layer below. (I guess this is a similar setup to convolutional NNs, which have become an established method for image recognition more recently. Convolutional NNs have good locality, especially in the earlier convolutional layers.) Due to locality in the model, i.e., the deep NN, this task scales better, to partitioning across up to 128 model-workers inside a model replica; however, the speedup efficiency is pretty poor: a 12x speedup using 81 model-workers.

Using data-parallelism by running multiple model-replicas concurrently, DistBelief was shown to be deployed over 1000s of machines in total.

Wednesday, January 11, 2017

Learning Machine Learning: Deep Neural Networks

This post is part of the ML/DL learning series. Earlier in the series, we covered these:
+ Learning Machine Learning: A beginner's journey 
+ Linear Regression
+ Logistic Regression
+ Multinomial Logistic Regression

In this part, we are going to add hidden layers to our neural network, learn how backpropagation works for gradient descent in a deep NN, and finally talk about regularization techniques for avoiding overfitting.

For this post also, I follow the course notes from the Udacity Deep Learning Class by Vincent Vanhoucke at Google. Go, take the course. It is a great course to learn about deep learning and TensorFlow.

Linear models are limited 

We constructed a single-layer NN for multinomial regression in our last post. How many parameters did that NN have? For an input vector X of size N and K output classes, you have (N+1)*K parameters to use: N*K is the size of W, and K is the size of b.
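A quick worked example with MNIST-sized numbers (N=784 inputs, K=10 classes, as in the earlier posts):

```python
# Parameter count of a single-layer classifier: W is N*K, b is K.
N, K = 784, 10
params = N * K + K        # = (N+1)*K
print(params)             # 7850
```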

You will need many more parameters in practice. Deep learning craves big models as well as big data. Adding more layers to our NN gives us more model parameters, and enables our deep NN to capture more complex functions that fit the data better.

However, adding another layer that performs a linear matrix multiplication does not help much: a composition of linear layers collapses into a single linear layer, $Y = W_1 W_2 W_3 X = WX$, so the NN is still unable to capture nonlinear functions that fit the data. The solution is to introduce non-linearities between the layers via rectified linear units (ReLUs). With ReLU layers r interleaved, we get a layering of the form $Y = W_1\, r(W_2\, r(W_3 X))$, which no longer collapses. This lets us use big weight matrix multiplications, putting our GPUs to good use and enjoying numerically stable, easily differentiable linear functions, while sprinkling in some nonlinearities.

If you would like a more intuitive, neuron-centric understanding of NNs, you may find this free book helpful.

Rectified Linear Units (ReLUs)

ReLUs are probably the simplest non-linear functions: they are linear for x greater than 0, and 0 everywhere else. ReLUs have nice derivatives as well. When x is less than zero, the value is 0, so the derivative is 0 as well. When x is greater than 0, the value is equal to x, so the derivative is equal to 1.
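The ReLU and its derivative translate directly to a couple of lines of numpy; a minimal sketch:

```python
import numpy as np

def relu(x):
    # linear for x > 0, zero everywhere else
    return np.maximum(0, x)

def relu_grad(x):
    # derivative: 1 where x > 0, 0 elsewhere
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```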

We had constructed a NN with just the output layer for classification in the previous post. Now let's insert a layer of ReLUs to make it non-linear. This layer in the middle is called a hidden layer. We now have two matrices. One going from the inputs to the ReLUs, and another one connecting the ReLUs to the classifier.


If you have two functions where one is applied to the output of the other, then the chain rule tells you that you can compute the derivatives of that function simply by taking the product of the derivatives of the components. $[g(f(x))]' = g'(f(x))*f'(x)$. There is a way to write this chain rule that is very computationally efficient.

When you apply your data to some input x, data flows through the stack up to your predictions y. To compute the derivatives, you create another graph that flows backwards through the network, gets combined using the chain rule that we saw before, and produces gradients. That backward graph can be derived completely automatically from the individual operations in your network; deep learning frameworks do this backpropagation for you.
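To make the chain rule concrete, here is a tiny hand-worked forward/backward pass (a toy example of my own, not from the course): take $g(f(x))$ with $f(x)=x^2$ and $g(u)=3u$, and multiply the local derivatives going backwards.

```python
# forward pass: y = g(f(x)) with f(x) = x**2 and g(u) = 3*u
x = 2.0
u = x ** 2        # f(x) = 4.0
y = 3 * u         # g(u) = 12.0

# backward pass: multiply the local derivatives, flowing from y back to x
dy_du = 3.0             # g'(u)
du_dx = 2 * x           # f'(x)
dy_dx = dy_du * du_dx   # chain rule: g'(f(x)) * f'(x) = 3 * 4 = 12.0
print(dy_dx)
```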

This backpropagation idea is explained beautifully (I mean it) here.

Training a Deep NN 

So to run stochastic gradient descent (SGD), for every single little batch of data in your training set, the deep NN

  • runs the forward prop, and then the back prop and obtains the gradients for each of the weights in the model,
  • then applies those gradients to the original weights and updates them,
  • and repeats that over and over again until convergence.
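The loop above can be sketched end-to-end on a toy problem. Everything below (the data, layer sizes, learning rate, step count) is my own illustration, not DistBelief or course code; it trains one hidden ReLU layer plus a softmax output with hand-rolled forward prop, back prop, and weight updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data (illustrative): the label depends on the first two features
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# one hidden ReLU layer (8 units) and a 2-class softmax output
W1 = rng.normal(0, 0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, size=(8, 2)); b2 = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def avg_loss():
    # average cross-entropy over the whole training set
    h = np.maximum(0, X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    return -np.log(p[np.arange(len(y)), y]).mean()

loss_before = avg_loss()
lr, batch = 0.1, 32
for step in range(300):
    idx = rng.integers(0, len(X), batch)     # little batch of data
    xb, yb = X[idx], y[idx]
    # forward prop
    h = np.maximum(0, xb @ W1 + b1)          # ReLU hidden layer
    p = softmax(h @ W2 + b2)
    # back prop: gradients of the cross-entropy loss for each weight
    d2 = p.copy(); d2[np.arange(batch), yb] -= 1; d2 /= batch
    gW2, gb2 = h.T @ d2, d2.sum(axis=0)
    dh = (d2 @ W2.T) * (h > 0)               # chain rule through the ReLU
    gW1, gb1 = xb.T @ dh, dh.sum(axis=0)
    # apply the gradients to the weights, and repeat
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

loss_after = avg_loss()
print(loss_before, loss_after)   # the loss should drop as training proceeds
```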

You can add more hidden ReLU layers to make your model deeper and more powerful. The above backpropagation and SGD optimization apply the same way to deeper NNs. Deep NNs are good at capturing hierarchical structure.


In practice, it is better to overestimate the number of layers (and thus model parameters) needed for a problem, and then apply techniques to prevent overfitting. The first such technique is early stopping: monitor performance on a validation set, and stop training as soon as the validation performance stops improving.

Another way to prevent overfitting is to apply regularization. For example, in L2 regularization the idea is to add another term to the loss that penalizes large weights.

Another important regularization technique that emerged recently is dropout. It works very well and is widely used. At any given training round, dropout randomly drops half of the activations flowing through the network. (The values that go from one layer to the next are called activations.) This forces the deep NN to learn a redundant representation for everything, making sure that at least some of the information remains, and prevents overfitting.
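A sketch of dropout applied to a layer's activations. The scaling by 1/keep_prob is the common "inverted dropout" variant, my own addition rather than something the post specifies:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, keep_prob=0.5, train=True):
    if not train:
        return activations          # no dropout at evaluation time
    mask = rng.random(activations.shape) < keep_prob
    # scale the survivors by 1/keep_prob so the expected value is unchanged
    return activations * mask / keep_prob

a = np.ones((4, 6))
out = dropout(a)
print(out)   # roughly half the entries are zeroed, the rest scaled to 2.0
```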

Related links

Here are the links to the introductory ML/DL concepts series:

Learning Machine Learning: Multinomial Logistic Classification

In the previous post, we got started on classification. Classification is the task of taking an input and giving it a label that says, this is an "A". In the previous post, we covered logistic regression, which made the decision for a single label "A". In this post, we will generalize that to multinomial logistic classification where your job is to figure out which of the K classes a given input belongs to.

For this post I follow the course notes from the Udacity Deep Learning Class by Vincent Vanhoucke at Google. I really liked his presentation of the course: very practical and to the point. This must have required a lot of preparation. Going through the video transcript file, I can see that the material has been prepared meticulously to be clear and concise. I strongly recommend the course.

The course uses TensorFlow to teach you about Deep Neural Networks in a hands-on manner, and follows the MNIST letter recognition example in the first three lessons. Don't get stressed about TensorFlow installation and getting the tutorial environment setup. It is as easy as downloading a Docker container, and going to your browser to start filling in Jupyter Notebooks. I enjoyed programming in the Jupyter Notebooks a lot. Jupyter Notebooks is literate programming, literally.

Multinomial logistic classification 

The logistic classifier takes an input vector X (for example, the pixels in an image), and applies a linear function to it to generate its predictions. The linear function is just a giant matrix multiply: it multiplies X with the weights matrix, W, and adds biases, b, to generate scores for the output classes.

Of course, we first need to train our model (that is, W and b) using the training data and the corresponding training labels, to figure out the optimal W and b that fit the training data. Each image that we have as an input can have one and only one label. So we're going to turn the scores (aka logits) the model outputs into probabilities. While doing so, we want the probability of the correct class to be very close to one and the probability of every other class to be close to zero.

This is how multinomial logistic classification generalizes logistic regression.

  • We use a softmax function to turn the scores the model outputs into probabilities.  
  • We then use the cross-entropy function as our loss function to compare those probabilities to the one-hot encoded labels.

Softmax function and one-hot encoding

A softmax function, S, is of the form $S(y_i)=\frac{e^{y_i}}{\sum e^{y_j}}$. This way S can take any kind of scores and turn them into proper probabilities which sum to 1.
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    # subtract the max for numerical stability; softmax is shift-invariant
    e = np.exp(x - np.max(x, axis=0))
    return e / np.sum(e, axis=0)

Compare the softmax with the logistic function $g(z)= \frac{1}{(1 + e^{-z})}$ in the logistic regression post. The logistic function was concerned with deciding if the output is label "A" or not (less than 0.5 and it is not A, and more than 0.5 it is A), whereas the softmax function is giving/distributing probabilities for the output being in each of the output class "A", "B", "C", etc., the sum of which adds up to 1.

One-hot encoding is a way to represent the labels mathematically: each label is represented by a vector whose size equals the number of output classes, with the value 1.0 for the correct class and 0 everywhere else.
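One-hot encoding takes only a couple of lines of numpy; a minimal sketch (the label and class count below are my own example):

```python
import numpy as np

def one_hot(label, num_classes):
    # vector of zeros with a single 1.0 at the correct class index
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

# label index 2 ("C") out of 4 classes
print(one_hot(2, 4))   # [0. 0. 1. 0.]
```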

Cross entropy 

We can now measure the accuracy of the model by simply comparing two vectors: one is the softmax vector that comes out of the classifiers and contains the probabilities of the classes, and the other one is the one-hot encoded vector that corresponds to the label.

To measure the distance between those two probability vectors, *cross-entropy* is used. Denoting distance with D, Softmax(Y) with S, Label with L, the formula for cross-entropy is: $D(S,L)= -\sum_i L_i \log(S_i)$.

When the $i$th entry corresponds to the correct class, $L_i=1$, and the cost (i.e., distance) becomes $-\log(S_i)$. If $S_i$ has a large probability close to 1, the cost becomes lower, and if $S_i$ has a low probability close to 0, the cost becomes larger. In other words, the cross entropy function penalizes $S_i$ for the false-negatives. When the $i$th entry corresponds to one of the incorrect classes, $L_i=0$ and the entry in $S_i$ becomes irrelevant for the cost. So the cross entropy function does not penalize $S_i$ for the false positives.
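To make this concrete, here is the cross-entropy distance for a confident-correct and a confident-wrong prediction (the probability vectors are my own illustration):

```python
import numpy as np

def cross_entropy(S, L):
    # D(S, L) = -sum_i L_i * log(S_i)
    return -np.sum(L * np.log(S))

L = np.array([0.0, 1.0, 0.0])             # one-hot label: class 1 is correct

confident_right = np.array([0.05, 0.9, 0.05])
confident_wrong = np.array([0.9, 0.05, 0.05])

print(cross_entropy(confident_right, L))  # small cost: -log(0.9)  ~ 0.105
print(cross_entropy(confident_wrong, L))  # large cost: -log(0.05) ~ 3.0
```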

Compare the cross-entropy with the cost function in logistic regression:

It looks like the cross-entropy does not take into account false-positives, whereas the earlier $J$ cost function took both into account and penalized both the false-positives and the false-negatives. On the other hand, cross-entropy does consider false-positives in an indirect fashion: since the softmax probabilities sum to 1, improving the probability of the correct class (reducing false-negatives) necessarily reduces the probabilities of the incorrect classes (the false-positives).

Minimizing Cross Entropy via Gradient Descent 

To transform the multinomial classification problem into a proper optimization problem, we define the training loss as the cross-entropy averaged over the entire training set, for all the training inputs and the corresponding training labels: $\mathcal{L} = \frac{1}{N} \sum_i D(S(Wx_i+b), L_i)$

We want to minimize this training loss function, and we know that a simple way to do that is via gradient descent. Take the derivative of your loss, with respect to your parameters, and follow that derivative by taking a step downwards and repeat until you get to the bottom.

As we discussed before, in order to speed up gradient descent, normalization is important. Normalization is simple if you are dealing with images: the pixel values are typically between 0 and 255, so simply subtract 128 and divide by 128. W and b should also be initialized for gradient descent to proceed: draw the weights randomly from a Gaussian distribution with mean zero and a small standard deviation sigma.
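A sketch of the normalization and initialization just described (the image size, number of classes, and sigma value are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# pixel values in [0, 255] -> roughly [-1, 1] with mean near zero
pixels = rng.integers(0, 256, size=(100, 100)).astype(float)
normalized = (pixels - 128.0) / 128.0

# initialize W from a zero-mean Gaussian with a small sigma; b starts at zero
sigma = 0.01
W = rng.normal(0.0, sigma, size=(10, 100 * 100))
b = np.zeros(10)

print(normalized.min(), normalized.max())   # within [-1, 1]
```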

Stochastic Gradient Descent

Computing gradient descent using every single element in your training set can involve a lot of computation if your data set is big. And since gradient descent is iterative, this needs to get repeated until convergence. It is possible to improve performance by simply computing the average loss for a very small random fraction of the training data. This technique is called stochastic gradient descent, SGD. SGD is used a lot for deep learning because it scales well with both data and model size.

How small should an SGD step (aka "learning rate") be? This is an involved question: a large learning rate does not make learning faster; instead, large steps may miss the optima valley and may even cause divergence. To set a suitable value for the learning rate, we can try a range of values 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, and plot convergence. After you settle on a suitable step size to start with, another useful thing is to make the step smaller and smaller as the training progresses during a training run, for example by applying an exponential decay. AdaGrad helps here. AdaGrad is a modification of SGD that makes learning less sensitive to hyperparameters (such as learning rate, momentum, decay).
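An exponential-decay schedule can be sketched as follows (the decay rate and step counts are my own illustration, not values from the course):

```python
def decayed_learning_rate(initial_lr, step, decay_rate=0.96, decay_steps=1000):
    # shrink the learning rate by decay_rate every decay_steps training steps
    return initial_lr * decay_rate ** (step / decay_steps)

lr0 = 0.1
print(decayed_learning_rate(lr0, 0))      # 0.1
print(decayed_learning_rate(lr0, 1000))   # 0.096
print(decayed_learning_rate(lr0, 10000))  # ~0.0665
```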

How do we go deep? 

We devised a neural network (NN) with just 1-layer, the output layer. Our
1-layer NN works like this:

  • It multiplies training data by W matrix and adds b 
  • It applies the softmax and then cross entropy loss to calculate the average of this loss over the entire training data. 
  • It uses SGD to compute the derivative of this loss with respect to W and b, and applies the $\delta$ adjustment to W and b (i.e., takes a step downwards in the gradient field)
  • It keeps repeating the process until it converges to a minimum of the loss function.

In the next post, we will learn about adding hidden layers via rectified linear units (ReLUs) to build deeper NNs.  Deeper NNs are able to capture more complex functions to fit the data better. For training the deep NN we will learn about how to backpropagate the gradient descent adjustments to the corresponding layers in the NN using the chain rule of derivation.


Friday, January 6, 2017

Learning Machine Learning: Logistic Regression

This is part 2 of learning machine learning introductory concepts. Recall that supervised learning had two basic examples, regression and classification. We covered linear regression in part 1, and now in part 2 we look at classification. Although the name of the technique used here, logistic regression, includes the word "regression", this is in fact a classification algorithm. It builds on a similar gradient descent approach as we discussed in part 1 in the context of linear regression.

(In this post, again I follow/summarize from Andrew Ng's machine learning course at Coursera. Here is Ng's course material for CS 229 at Stanford. There are also good course notes here, and I will summarize even more briefly than those notes to highlight only the big ideas.)

Hypothesis representation

The goal of the logistic regression algorithm is to determine what class a new input should fall into. Here is an example application. Line fitting does not make sense for this application; we need discrete classification into yes or no categories.

For linear regression, our hypothesis representation was of the form $h_\theta(x) = (\theta x)$. For classification, our hypothesis representation is of the form $h_\theta(x) = g((\theta x))$, where we define $g(z)= \frac{1}{(1 + e^{-z})}$. This is known as the sigmoid function, or the logistic function. For a real value $z$, the logistic function has the following plot.

If $z$ is positive, $g(z)$ is greater than 0.5. In our logistic regression hypothesis, we take $z = (\theta x)$, so when  $\theta x \geq 0$, then $h_\theta \geq 0.5$ and the hypothesis predicts $y=1$. When $\theta x \leq 0$ then the hypothesis predicts $y=0$.
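The logistic function and the 0.5 decision threshold translate to a small numpy sketch (the example $\theta$ and inputs are my own illustration):

```python
import numpy as np

def g(z):
    # logistic (sigmoid) function
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, x):
    # predict y=1 when g(theta . x) >= 0.5, i.e. exactly when theta . x >= 0
    return int(g(np.dot(theta, x)) >= 0.5)

theta = np.array([-1.0, 2.0])                 # illustrative parameters
print(predict(theta, np.array([1.0, 1.0])))   # theta.x = 1  -> predicts y=1
print(predict(theta, np.array([1.0, 0.0])))   # theta.x = -1 -> predicts y=0
```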

In other words, $\theta x \geq 0$  is the decision boundary. When our hypothesis $h_\theta(x)$ outputs a number, we treat that value as the estimated probability that y=1 on input x.

If our hypothesis is linear, of the form $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$, the decision boundary would be a line. For example:

If our hypothesis is polynomial, $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_2^2)$ , the decision boundary can be a circle. (By using higher order polynomial terms, you can get even more complex decision boundaries.) For example:

OK, assuming we had decided on our hypothesis, how does the logistic regression algorithm learn values for fitting $\theta$ to the data to capture the data nicely in the decision boundary? We again use gradient descent, but this time a little differently as follows.

Cost function for logistic regression

Since $h_\theta(x) = \frac{1}{1 + e^{-\theta x}}$ is a sigmoid/nonlinear function, when we plug it into the squared-error cost function, the resulting cost function is not guaranteed to be convex. However, the cost function should be convex for gradient descent to work. So we use a trick: we define our cost function carefully, to make sure that when $h_\theta(x)$ is plugged in, the cost function is still convex.

We define our cost function as: $cost(h_\theta(x), y) = -\log(h_\theta(x))$ if $y=1$, and $cost(h_\theta(x), y) = -\log(1-h_\theta(x))$ if $y=0$.

Note that:
when $y=1$: the cost is 0 if $h_\theta(x)=1$, and it goes to infinity as $h_\theta(x) \to 0$
when $y=0$: the cost is 0 if $h_\theta(x)=0$, and it goes to infinity as $h_\theta(x) \to 1$

In other words, this cost function harshly penalizes and thus aims to rule out very confident mislabels; mislabels can still have lukewarm 0.6 confidence because the penalty is less there.

The above is the cost for a single example. For binary classification problems y is always 0 or 1, and using this, we can write the cost function in a simpler way and compress it into one equation: $cost(h_\theta(x), y) = -y \log(h_\theta(x)) - (1-y) \log(1-h_\theta(x))$.

Gradient descent for logistic regression

We use gradient descent to minimize the logistic regression cost function. As described before the gradient descent algorithm repeatedly does the following update $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$,

where $\frac{\partial}{\partial \theta_j} J(\theta)= \frac{1}{m} \sum_{i=1}^m (h_\theta (x^i)-y^i) x_j^i$.
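The update rule above, run on a small synthetic data set (the data, learning rate, and step count below are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic binary-labeled data: y = 1 exactly when the first feature is positive
X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 2))])  # bias column of 1s
y = (X[:, 1] > 0).astype(float)

def g(z):
    # logistic (sigmoid) function
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(3)
alpha = 0.1
m = len(y)
for _ in range(500):
    grad = (1.0 / m) * X.T @ (g(X @ theta) - y)   # dJ/dtheta for all j at once
    theta -= alpha * grad                          # theta_j := theta_j - alpha * ...

accuracy = ((g(X @ theta) >= 0.5) == (y == 1)).mean()
print(accuracy)   # should classify this separable toy data well
```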

Multiclass classification problems

We can adapt this binary logistic regression idea to solve a multiclass classification problem using the one-vs-all approach: to do K classifications, split the training set into K separate binary classification problems.
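A one-vs-all sketch: train K binary logistic classifiers, then label a new input with the class whose classifier is most confident. The toy data and training details below are my own construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary(X, y01, alpha=0.1, steps=500):
    # plain gradient descent on the logistic regression cost
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        theta -= alpha / len(y01) * X.T @ (g(X @ theta) - y01)
    return theta

# toy 3-class data: the class is the index of the largest feature
X = rng.normal(size=(300, 3))
y = X.argmax(axis=1)

# one binary problem per class: "is it class k, or anything else?"
thetas = [train_binary(X, (y == k).astype(float)) for k in range(3)]

# predict with the most confident of the K classifiers
scores = np.stack([g(X @ t) for t in thetas], axis=1)
pred = scores.argmax(axis=1)
print((pred == y).mean())   # training accuracy of the one-vs-all ensemble
```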
