Monday, November 5, 2018

Judoing the Dunning-Kruger effect: the "surprisingly-popular option" strategy for crowdsourcing

Let's say you want to crowdsource the answer to the question:
(Q1) What is the capital of Brazil?

The surprisingly-popular option strategy for crowdsourcing suggests piggybacking a control question onto the real question Q1:
(Q2) What do you think the majority of other people will respond to this question?

Many people (non-Brazilians and non-geography-nerds) will answer Rio for both Q1 and Q2. But there will be some people who answer Brasilia for Q1 and yet Rio for Q2.

The first set of people, who replied Rio to both questions, did not know much about Brazil and went with what they knew as the most prominent city in Brazil. The second set of people not only knew the correct answer, Brasilia, but also anticipated that the majority of participants would go wrong by answering Rio.

Rio is the popular option for Q1, but Brasilia is the surprisingly popular option because the respondents for Brasilia had anticipated that Rio would be the popular option.

The lesson is: "People who expect to be in the minority opinion deserve some extra attention."
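The aggregation rule can be sketched in a few lines. This is a simplified version: in Prelec et al.'s full algorithm respondents predict the *percentage* endorsing each answer, whereas here Q2 is simplified to predicting the single majority answer, as in the example above. The function name and the sample vote counts are illustrative, not from the paper.

```python
from collections import Counter

def surprisingly_popular(responses):
    """Pick the answer whose actual vote share most exceeds its
    predicted vote share.

    responses: list of (own_answer, predicted_majority_answer) pairs,
    i.e. each respondent's Q1 and Q2 answers.
    """
    n = len(responses)
    actual = Counter(ans for ans, _ in responses)       # Q1 tallies
    predicted = Counter(pred for _, pred in responses)  # Q2 tallies
    answers = set(actual) | set(predicted)
    # "Surprise" of an answer = actual share minus predicted share.
    # The surprisingly popular answer maximizes this gap.
    return max(answers, key=lambda a: actual[a] / n - predicted[a] / n)

# Hypothetical poll: 6 people say Rio (and predict Rio will win),
# 4 people say Brasilia but also predict Rio will win.
votes = [("Rio", "Rio")] * 6 + [("Brasilia", "Rio")] * 4
print(surprisingly_popular(votes))  # -> Brasilia
```

Rio gets 60% of the actual votes but 100% of the predictions, so its surprise is negative; Brasilia gets 40% of the votes against 0% of the predictions, making it the surprisingly popular option even though it lost the raw vote.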

Here is an article about this algorithm, and here is the Nature paper.
(I wish this would have occurred to us 4 years ago, when we were collecting crowdsourced data from our "who wants to be a millionaire" app.)

MAD questions

1. What are similar judoing techniques?
I am fascinated with this technique, because it is like applying a judo throw to the Dunning-Kruger effect and reversing it.

It takes a weakness observed in the Dunning-Kruger effect (that people of low ability suffer from illusory superiority and mistakenly assess their cognitive ability as greater than it is) and transforms it into a strength: identifying the expert voices among the participants, namely the people who knew the answer to Q2 but provided a minority answer for Q1.

What are similar judo moves in computer science/technology and in general?

2. What about the conspiracy nuts?
This surprisingly-popular option technique has a conspiracy-nuts problem, doesn't it? There is inevitably a strongly opinionated group that will be in the minority and will correctly anticipate being in the minority. This technique does not filter for them, right?

So, let's take anti-vaxxers. They will say "no" to (Q1) "Should I vaccinate my kids?" and they will correctly guess the popular option "yes" to (Q2) "What do you think the majority of other people will respond to this question?"

How do we fix this issue with the method? Maybe we can add a (Q3) "What do you think the minority choice is, and why is it wrong?" Of course we are moving away from the automatic aggregation in the original method, but the essence of the idea still holds: Are you aware of the other opinions on this question, and can you explain and refute them?

Can you not only strawman, but also steelman the other arguments and address them?

3. How does this apply to me as I assess my views/positions?
For the hygiene of the mind, it is important to reflect on your beliefs/behaviors/positions and reassess them occasionally. Am I able to explain why I am on the majority or minority side for my positions? Do I understand the other positions, and can I not only refute them but also appreciate some valuable points in them?

Joseph said...

Counterexample: Trump was surprisingly popular in 2016.

Murat said...

I think this falls under the scope of discussion in MAD question #2. Can you not only strawman, but also steelman their arguments and address them?
