The success of Daniel Kahneman’s Thinking, Fast and Slow has popularized Dual Process Theory, and more precisely the idea that heuristics and logic are two different facets, or “systems”, of the human mind. On this view, logic is confined to “system 2”, a slow module requiring conscious activation and mental effort, while our default “system 1” relies on “rules of thumb”, or heuristics, which allow us to make quick and largely unconscious judgments and decisions.
But what do all of those “rules of thumb”, such as “anchoring” (relying on the first pieces of information), “availability” (relying on the ease with which information comes to mind), or “aversion to losses” (weighing potential losses more heavily than potential gains), have in common? What is their general principle?
Kahneman did explain why heuristics evolved: they are energy-saving devices, which would have been sorely needed as we adapted to our prehistoric environment. Although inaccurate, and therefore unreliable, these mental shortcuts were nevertheless useful, not only because of their practicality but also because of their efficiency: they yielded, more often than not, workable solutions. This is an important point: we often insist on the fact that cognitive biases are irrational and lead to errors in judgment and decision-making. Yet if their conclusions were not, in most cases, acceptable approximations, they would not have been selected and become part of our universal makeup. At the very least, they had to be “good enough” at the time.
What Kahneman did not explain, precisely, is how these rules of thumb came to be selected. He did formulate a general rule for system 1: “WYSIATI, What You See Is All There Is”. But this is vague, and more descriptive than explanatory. Moreover, while it might shed light on the connection between the “anchoring” and “availability” biases, for instance, it does not really tell us why “losses loom larger than gains”.
Yet a simple and general model can be proposed to explain how heuristics came to be. This model also explains how these rules of thumb can be “good enough” despite being fundamentally wrong. Indeed, it is this very fact that points to the origin of cognitive biases. Simply put, all of them are instances of a common logical fallacy: affirming the consequent. The basic rule of logical reasoning is known as the modus ponens: ((p → q) & p) → q, which reads “if p is true, q is true; and p is true, therefore q is true.” An example would be: “if someone is born in the U.S. (p), they are an American citizen (q); and this person was born in the U.S. (p), therefore this person is an American citizen (q).” A common mistake is to misuse this rule in the following way: ((p → q) & q) → p. This is “affirming the consequent.” In the case above, it would lead to this logical error: “if someone is born in the U.S. (p), they are an American citizen (q); and this person is an American citizen (q), therefore they were born in the U.S.” This is obviously wrong: people can be American citizens based on their parents’ citizenship, or through naturalization, even if they were born abroad. Still, if you had to guess someone’s birthplace, and you knew that they were an American citizen, the “rule of thumb” would work more often than not.
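The contrast between the valid rule and the fallacy can be checked mechanically by enumerating every truth assignment for p and q; here is a minimal sketch in Python, where the helper `implies` encodes the material conditional:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "a -> b" is false only when a is true and b is false.
    return (not a) or b

# All four truth assignments for (p, q).
assignments = list(product([False, True], repeat=2))

# Modus ponens: ((p -> q) & p) -> q holds in every case.
modus_ponens_valid = all(
    implies(implies(p, q) and p, q) for p, q in assignments
)

# Affirming the consequent: ((p -> q) & q) -> p fails when p is false and q is true
# (the naturalized citizen born abroad).
affirming_consequent_valid = all(
    implies(implies(p, q) and q, p) for p, q in assignments
)

print(modus_ponens_valid)          # True
print(affirming_consequent_valid)  # False
```

The single failing assignment (p false, q true) is exactly the case the rule of thumb gets wrong; whether the shortcut is “good enough” then depends on how rare that case is in practice.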
Here, we have a logical “system 2” rule, the modus ponens, and a mental shortcut that basically states that, since p → q, q can always be used as a proxy for p. This explains all heuristics and cognitive biases. Take the availability bias, for instance. Generally speaking, the most frequent events are also those whose instances come most easily to mind. This doesn’t mean that it is rational, or always reliable, to judge frequency (p) based on availability (q): frequent events can be unremarkable, while infrequent ones might be striking and easily recalled. Still, as a rule of thumb for quick estimates, it is usually “good enough.” The same is true of anchoring. If you are a teacher, you know that good essays usually have good introductions. This doesn’t mean that an essay cannot start very well and then go off-topic, or start poorly but then get much better. Still, if you were to grade papers on their introductions alone, the rankings would probably not differ much from those based on the full essays.
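The grading example can be simulated with toy data. The scores below are purely hypothetical, chosen so that introductions track overall quality imperfectly: one essay ("F") starts well and then falls apart.

```python
# Hypothetical (intro score, full-essay score) pairs, for illustration only.
essays = {
    "A": (8, 15),
    "B": (5, 9),
    "C": (9, 17),
    "D": (3, 6),
    "E": (6, 12),
    "F": (7, 8),   # strong introduction, weak essay overall
}

def ranking(scores):
    """Return essay labels ordered from best to worst score."""
    return sorted(scores, key=scores.get, reverse=True)

intro_scores = {name: intro for name, (intro, full) in essays.items()}
full_scores = {name: full for name, (intro, full) in essays.items()}

rank_by_intro = ranking(intro_scores)   # the anchoring shortcut
rank_by_full = ranking(full_scores)     # the "system 2" full grading

# Positions where both rankings agree.
agreement = sum(a == b for a, b in zip(rank_by_intro, rank_by_full))
print(rank_by_intro)  # ['C', 'A', 'F', 'E', 'B', 'D']
print(rank_by_full)   # ['C', 'A', 'E', 'B', 'F', 'D']
print(agreement)      # 3
```

With these assumed numbers, the shortcut places the best, second-best, and worst essays correctly and only misorders the middle of the pack, which is the sense in which the proxy is “good enough” without being right.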
Contrary to the WYSIATI rule, this simple, general model also accounts for our aversion to losses. If losses and gains are equally visible, the WYSIATI rule does not explain why we give more weight to the former in our decisions. And if losses are more visible than gains, the rule does not explain why. Yet the reason is quite simple. Aversion to losses takes the logical principle of diminishing marginal utility, which states that additional units of an item carry less and less value, and concludes that losing an item matters more than gaining one, regardless of the item's underlying value.
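The derivation can be made concrete with any concave utility curve, since concavity is just diminishing marginal utility. The square-root function below is an assumed, illustrative choice, not a claim about actual human utility:

```python
import math

def utility(wealth):
    # Any concave function exhibits diminishing marginal utility;
    # the square root is just one illustrative example.
    return math.sqrt(wealth)

wealth = 100
pain_of_loss = utility(wealth) - utility(wealth - 10)
pleasure_of_gain = utility(wealth + 10) - utility(wealth)

# Under a concave curve, losing 10 units hurts more than gaining 10 helps.
print(pain_of_loss > pleasure_of_gain)  # True
```

The heuristic then generalizes this “losses weigh more” conclusion to all situations, including those where marginal utility is not actually diminishing, which is exactly the q-as-proxy-for-p pattern described above.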
More generally, heuristics take criteria that are logically derived from rational principles and use them as proxies for those more complex rules, even though such conversions and generalizations are neither perfectly accurate nor entirely reliable. Such shortcuts amount to assuming that an implication (p → q) can simply be reversed (q → p). Thus, a better way to describe heuristics would be: YSAAAT! Your Simplifying Assumptions Are Always True!