When the human brain is faced with too much information, it compensates for its limited processing capacity using mental shortcuts.
Introduction
Despite its wondrous complexity, the human brain can struggle to keep up with the sheer abundance of stimuli we encounter at any given moment. The brain suffers from information overload when forced to operate beyond capacity – an estimated 120 bits of data per second for the conscious mind. After all, processing data requires attention, which, in turn, requires mental energy.
We live in the so-called information age, and it is important to be able to pick out what warrants attention amid the flotsam of distraction around us. Take, for example, the breadth and depth of information we find on social media. It takes effort to sift through low-quality ‘information’ and not fall prey to fake news. As a way of dealing with this, the human brain employs filters that direct our attention away from trivial matters. For the most part, these shortcuts in information processing and decision-making serve us well, but they can also lead to errors in logic when we focus on irrelevant information or overlook key pieces of data.
The anchoring bias
One prevalent and well-researched cognitive bias is the **anchoring bias**. This refers to our tendency to ‘anchor’ judgments and decisions on the first piece of information we receive on a specific matter. Even when we recognize an anchor as inaccurate or arbitrary, our instinct is to interpret subsequent information with the anchor as a frame of reference. This distorts our perception and keeps us from assessing alternatives objectively, on their own merits.
The concept of anchoring first came about in the field of psychophysics. In 1958, researchers Muzafer Sherif, Daniel Taub, and Carl Hovland examined how individuals perceived the physical characteristics of objects. They observed that, when estimating the weights of objects, subjects adjusted their estimates based on the presence of outliers in the set, thereby exhibiting an anchoring effect. Subsequent research has found the anchoring effect in consumer purchasing behavior, in the courtroom, and in negotiation scenarios, among other settings.
Strikethrough pricing and the anchoring effect
When you’re out shopping and see a pair of nice-looking pants, how do you decide whether it’s priced reasonably? Do you take into account its brand, the material, the quality of its stitching? Which matters more, fit or design? Translating these variables into one number is tricky because there are so many things to consider. The equation is complex and can trigger information overload.
Going back to those pants, you check the price tag – $200. Too expensive. Hmm.
Wait, though. It says underneath that it’s on sale for $100. That seems completely reasonable, especially compared to its original price.
You walk out of the shop $100 poorer but ecstatic with your bargain find. Had the pants been priced at $100 from the start, though, you probably wouldn’t have felt the same way. But you saw the $200 price tag first, so the sale price felt reasonable by comparison. That’s **strikethrough pricing** in action, a common retail practice that takes advantage of our propensity to use anchors in decision-making.
Examples of anchoring bias
Marketing and pricing strategies lean heavily on anchoring. In addition to strikethrough pricing, vendors use decoy pricing to nudge customers toward a favored product variant. The premium plan in a product subscription, for instance, seems excessive, and the basic plan feels restrictive. But, as in Goldilocks and the three bears, the standard plan is *just right* – which is exactly where the seller wants you to land.
The anchoring effect figures into negotiation tactics too. Negotiations start with one party making an opening offer that sets the tone. Subsequent counteroffers are assessed against this initial offer, the anchor around which a deal is eventually struck.
Even courtroom decisions are not exempt from bias. In one study, judges rolled a pair of dice to determine the prosecutor’s sentencing demand. Researchers had loaded the dice to favor either high or low rolls. Despite knowing that the demand was arbitrary, the judges handed down sentences influenced by their rolls: the high-anchor group imposed an average sentence of eight months, the low-anchor group an average of five. The study raises the question – to what extent do irrelevant factors sway courtroom decisions?
Two hypotheses behind anchoring
Two leading theories seek to explain anchoring bias. Tversky and Kahneman’s anchoring-and-adjustment hypothesis suggests that, when we make estimates, we first set a starting point – an anchor – and then adjust from it. These adjustments, however, are usually insufficient, leaving a final estimate that sits closer to the anchor than to the true value.
Meanwhile, the selective accessibility hypothesis explains anchoring as the result of a priming effect. When judging whether an anchor is plausible, we selectively retrieve knowledge that is consistent with it. That anchor-consistent information stays at the top of our mind and serves as the benchmark for the final judgment, even if we ultimately reject the anchor itself.
Whichever mechanism is at work, studies find that anchoring bias is difficult to avoid, even when people are given incentives to avoid it. The best way to overcome this bias, according to researchers Thomas Mussweiler, Fritz Strack, and Tim Pfeiffer, is to generate counterarguments against the anchor, much like playing devil’s advocate.
Base rate fallacy
In contrast to anchoring, which overweights the first information we receive, the base rate fallacy refers to our tendency to overlook prior, general knowledge and place more weight on recent, individuating details. Faced with a base rate – statistics about a phenomenon in general – we are prone to rely instead on specific or anecdotal evidence.
Maya Bar-Hillel’s cab problem is a prime example of base rate neglect. Her study poses the following problem: a town has cabs of only two colors; 85% of the town’s cabs are blue and 15% are green. A witness to a hit-and-run claims that the cab involved was green. If witnesses correctly identify cab colors 80% of the time, what are the chances that the cab was actually green?
Most participants said 80%. In doing so, they ignored the base rate: only 15 in 100 cabs in town are green. If both pieces of information are taken into account, the actual probability that the cab was green, given the witness’s testimony, is only 41%.
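For the curious, here is a minimal sketch of that calculation in Python; the variable names are ours, and the numbers come straight from the problem.

```python
# Bayes' theorem for the cab problem:
# P(green | "green") = P("green" | green) * P(green)
#                      ------------------------------------------------------------
#                      P("green" | green) * P(green) + P("green" | blue) * P(blue)

p_green = 0.15       # base rate: 15% of the town's cabs are green
p_blue = 0.85        # base rate: 85% are blue
p_correct = 0.80     # witnesses identify cab colors correctly 80% of the time

says_green_if_green = p_correct        # correctly calls a green cab green
says_green_if_blue = 1 - p_correct     # mistakes a blue cab for a green one

posterior = (says_green_if_green * p_green) / (
    says_green_if_green * p_green + says_green_if_blue * p_blue
)
print(f"P(cab was green | witness said green) = {posterior:.0%}")  # ~41%
```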
Bayesian probabilities
One thing that becomes apparent when we talk about the base rate fallacy is how **most people misinterpret statistics**. Whether this has more to do with our statistical literacy or with the potentially misleading nature of some statistical statements is up for debate. Some researchers argue that it’s a matter of how we phrase statistical questions – some formats are more intuitive than others. All the same, let’s have a look at the concepts at play.
The term ‘base rate’ refers to a prior probability – how common something is in general, before any specific evidence is considered. That means we are dealing with at least two sets of probabilities: the prior and the reliability of the new evidence. According to the base rate fallacy, when faced with both, we tend to favor the specific details at the expense of the general. What we should be doing is assessing each piece of information for relevance, and then integrating the relevant pieces to arrive at a better prediction. This is where Bayesian probabilities come in.
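The workhorse here is Bayes’ theorem. In standard notation (the choice of symbols is ours), with H the hypothesis – say, ‘the cab was green’ – and E the evidence, such as the witness’s testimony:

$$
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
$$

The base rate enters as the prior P(H); neglecting it effectively treats P(H | E) as if it were P(E | H) – which is exactly the mistake participants make in the cab problem.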
False positives in healthcare
In healthcare, no test is 100% accurate. Most medical tests occasionally produce false positives, where a healthy individual is incorrectly flagged as ill. When the condition being tested for is rare, these false positives can easily outnumber the true ones. And although a false positive is not as dangerous as a false negative – which deprives patients of the treatment they need – it causes unwarranted anxiety and burden.
Take a medical test that detects cancer with 95% accuracy – it correctly identifies 95% of people who have the condition and incorrectly flags 5% of those who don’t. The actual prevalence of the condition is five in every thousand, or 0.5%. Say a patient tests positive. We know the test isn’t 100% accurate. How likely is the patient to be ill?
Most of us fall prey to base rate neglect and say 95% – after all, that’s how accurate the test is. Applying Bayes’ theorem, however, we integrate the two pieces of information: (a) the test is wrong 5% of the time, and (b) any given person has only a 0.5% chance of having the condition. The actual probability of cancer, given a positive result, is **8.7%** – a stark difference from 95%.
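Here is a minimal sketch of that arithmetic, assuming ‘95% accurate’ means the test both catches 95% of real cases and falsely flags 5% of healthy people – the interpretation used above.

```python
# Bayes' theorem for the cancer screening example
prevalence = 0.005            # 0.5% of people actually have the condition
sensitivity = 0.95            # P(positive | ill): the test catches 95% of real cases
false_positive_rate = 0.05    # P(positive | healthy): 5% of healthy people test positive

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_ill_given_positive = sensitivity * prevalence / p_positive

print(f"P(ill | positive test) = {p_ill_given_positive:.1%}")  # ~8.7%
```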
Impacts of base rate neglect
Misunderstanding statistics due to base rate neglect can cause undue panic and, subsequently, faulty decision-making. Upon seeing a negative earnings report, an investor may pull out their investments prematurely – even if it’s a company’s first quarter in the red following years of steady growth. This dip may just be a blip in the greater scheme of things, but, as the saying goes, sometimes we miss the forest for the trees.
In July 2021, eyebrows were raised at Iceland – and at COVID-19 vaccines. Despite a 71% vaccination rate, Iceland saw a surge in COVID-19 cases, with 67% of infections detected in fully vaccinated individuals. Pundits weaponized this as proof of vaccine ineffectiveness, but they ignored the broader context.
If most of the population is vaccinated, the pool of unvaccinated people shrinks. It follows that, among the infected, the percentage of vaccinated people will be high. If the entire population were vaccinated, every new infection would, by definition, occur in a vaccinated person. This serves as a potent reminder that statistics are prone to misinterpretation when taken out of context.
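A toy calculation makes the point. The effectiveness figure below is purely illustrative – it is not Iceland’s actual number – and is only meant to show how the vaccinated share of infections climbs as coverage rises, even when a vaccine works well.

```python
# Toy model: what share of infections occur in vaccinated people as coverage rises?
effectiveness = 0.80   # hypothetical: vaccination cuts infection risk by 80%
base_risk = 1.0        # arbitrary infection risk for an unvaccinated person

for coverage in (0.10, 0.50, 0.71, 0.90, 1.00):
    infections_vaccinated = coverage * base_risk * (1 - effectiveness)
    infections_unvaccinated = (1 - coverage) * base_risk
    share = infections_vaccinated / (infections_vaccinated + infections_unvaccinated)
    print(f"coverage {coverage:.0%}: {share:.0%} of infections are in vaccinated people")
```

Even with risk cut by 80% in this toy model, vaccinated people account for the majority of infections once coverage gets high enough – and for all of them at 100% coverage.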
Framing effect
When trying to win people over, it’s not just what we say that matters but how we say it. One element of effective communication is presenting messages so they resonate with one’s audience. Indeed, marketers are constantly reminded to step inside the consumer’s mind and to reflect the aspirations and frustrations of their target market in their messaging. This is consistent with the **framing effect**, which states that, when making decisions, we tend to focus on the way information is presented rather than on the information itself. Hence, we have to ‘frame’ our message in a way that directs people’s attention where we want them to focus.
Unfortunately, this cognitive bias may lead to suboptimal decisions when inferior options are deliberately presented in a positive light. We see this often in product packaging and advertising. Two products may be identical, but the one that frames its benefits more effectively will end up being more successful than its counterpart. Likewise, a subpar product’s copy may be worded to understate or obscure its flaws.
How framing works
To help explain the framing effect, Tversky and Kahneman developed prospect theory, which suggests that we do not perceive potential gains and losses symmetrically. As humans, we fear a potential loss more than we value an equivalent potential gain, and when outcomes are framed as gains, we lean toward risk aversion. Thus, when faced with two options – a guaranteed $50 versus a 50% chance of receiving $100 – most of us choose the first, even though both options have the same expected value of $50.
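To make the asymmetry concrete, here is a small sketch using the value function from Tversky and Kahneman’s 1992 formulation of prospect theory, with their commonly cited parameter estimates; treat the exact numbers as illustrative.

```python
# Prospect-theory-style value function (Tversky & Kahneman's 1992 parameterization).
ALPHA = 0.88     # diminishing sensitivity: gains grow less valuable at the margin
LAMBDA = 2.25    # loss aversion: losses loom roughly 2.25x larger than gains

def value(x: float) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

sure_thing = value(50)                        # a guaranteed $50
gamble = 0.5 * value(100) + 0.5 * value(0)    # 50% chance of $100, else nothing

print(f"value of a sure $50:  {sure_thing:.1f}")  # ~31.3
print(f"value of the gamble:  {gamble:.1f}")      # ~28.8
```

Because the value curve is concave for gains, half the value of $100 is worth less than the full value of $50, so the sure thing wins – the gamble would need a noticeably bigger upside to be tempting.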
In tandem with prospect theory, our brain relies on two shortcuts in particular that contribute to the framing effect. The **availability heuristic** refers to our tendency to favor information that is easier to recall – say, simple explanations that require minimal cognitive load. In addition, we prefer information that appeals to our emotions – the **affect heuristic**.
In sum, when making decisions, the options we lean toward are often those framed to highlight potential benefits, downplay risks, stay at the top of our mind, and evoke an emotional response.
Frames in action
Framing is widely used in consumer marketing. When we walk down the aisles of a supermarket and see signs like ‘save $50’ or ‘buy one, get one,’ that’s positive framing – emphasizing what the buyer stands to gain. Conversely, when marketing emails land in our inboxes with headlines like *”Don’t miss out on this year’s biggest sale!”* or *”Stop wasting your time on x, y, z,”* that’s negative framing, which plays on our fears and frustrations.
Research suggests that positive framing produces higher conversion rates, but it’s not a hard and fast rule. Testing to ensure that messaging resonates with a target audience is still best practice.
So, how do we avoid letting our biases nudge us into potentially suboptimal decisions? One way is to **slow down our decision-making and seek alternative information** that may be framed differently. We can also examine the choices we make thoroughly, picking apart our rationales for any possible bias.