Our ability to survive and thrive relies on overcoming obstacles and solving problems in everyday life.
Responding to our environment
**Decision-making, reasoning, and judgment** are crucial to responding to our environment in ways that help us overcome obstacles and work toward our goals. All three involve **conscious awareness** – typically not of the thought processes themselves, but of the product or object of our thinking.
After all, choosing socks doesn’t typically involve ‘metacognition,’ or thinking about thinking, but we are aware when we can’t find a matching pair.
So, what is a thought? It can refer to a mental event – bringing to mind something that has happened. Yet, it is also associated with a mental faculty, the capacity to think, and being engaged in the activity of thinking itself.
While all aspects of such cognition are equally fascinating to the cognitive researcher, underpinning each is our ability to coherently organize our trains of thought to make decisions, reason, and judge.
Problem-solving
Whether rushing to a meeting when our train is delayed or trying to finish an assignment and finding the library is shut, problem-solving is a vital aspect of the human experience.
**‘Knowledge-rich’** problems, such as chess, require highly specific knowledge, while in ‘knowledge-lean’ problems, such as math questions, we are often given all the required information.
The German **Gestalt psychologists** also differentiated between problem-solving involving ‘**reproductive thinking**,’ reusing prior experiences, and ‘**productive thinking**,’ involving moments of considerable insight – known as the ‘aha experience.’
The **two-string problem** famously illustrates the latter. Participants are challenged to tie together the ends of two strings hanging from the ceiling, but the strings are too far apart for a person to hold one and reach the other. The room also contains several seemingly random objects, including a pair of pliers.
Researchers found that if they ‘accidentally’ brushed against one of the strings, setting it swinging, the subject often experienced an ‘aha’ moment: they realized they could tie the pliers to one string to act as a pendulum weight, then hold the other string and catch the weighted one on its upswing.
Past experiences
Past experiences aren’t always helpful in overcoming the obstacles we face – sometimes, they can even become a ‘blocker.’
**‘Functional fixedness’ describes our tendency to see an object as having only a limited number of familiar uses.**
In his classic 1945 experiment, Karl Duncker gave participants matches, a candle, and a box of tacks, amongst other objects. When asked to attach the candle to the wall above a table so that it would not drip onto it, they often tried, unsuccessfully, to nail the candle to the wall using the tacks.
They ‘fixated’ on the box the tacks were in as a container rather than a potential platform. The best solution was to tack the empty box to the wall, then use the inside as a candle holder.
Similarly, we often become stuck in a ‘**mental set**.’ After all, as Albert Einstein apparently said, doing the same thing over and over again and expecting different results is the definition of insanity. And yet, at times, most of us are guilty of repeating past problem-solving techniques even when they don’t work – but it’s never too late to change our approach.
Becoming an expert
**Expertise is often the result of years of gaining appropriate skills and knowledge within a specific field or environment**. The expert becomes highly efficient at solving problems within their domain and can see patterns and insights that others would miss.
Medical expertise is quite literally a matter of life and death. When doctors’ eye movements were tracked while looking at medical scans, those with the most expertise almost immediately fixated on the cancer, suggesting the use of holistic or global processes.
In fact, an early study of radiographers found that even when images were shown for only 200 milliseconds, experts were able to make a correct assessment 70% of the time.
Similar studies in other areas, including chess, suggest that experts use fast, seemingly automatic processes not available to non-experts. And yet, it remains unclear exactly how such methods work or even how they are acquired – though thousands of hours of practice are vital.
Errors in judgment
**Judgment** and **decision-making** overlap – the former influences the latter and is revised as new information arrives. We may be 90% sure all swans are white until someone tells us otherwise, at which point our confidence drops to 50%. Then, on holiday, we see a swan that is not white, and we are 100% sure that not all swans are white.
Because much of our knowledge is incomplete, **we often turn to rules of thumb, or ‘heuristics,’ to help with judgment tasks**. So, if someone or something appears representative of a category we already know, we base our judgment on what we believe about that category. Ideally, we then factor in ‘base-rate’ information – the general likelihood of something happening or being true.
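To see how base rates should enter such a judgment, here is a minimal sketch in Python – using entirely made-up numbers – of a representative-seeming description weighed against base rates via Bayes’ rule:

```python
# Hypothetical illustration: a quiet, bookish stranger - librarian or farmer?
# All numbers below are invented purely for illustration.
base_rate_librarian = 0.002        # assumed share of librarians in the population
base_rate_farmer = 0.02            # assumed share of farmers (ten times more common)
p_bookish_given_librarian = 0.9    # "bookish" seems highly representative of librarians
p_bookish_given_farmer = 0.1       # ...and far less representative of farmers

# Bayes' rule: weight how representative the evidence is by how common each category is
joint_librarian = base_rate_librarian * p_bookish_given_librarian
joint_farmer = base_rate_farmer * p_bookish_given_farmer
total = joint_librarian + joint_farmer

print(f"P(librarian | bookish) = {joint_librarian / total:.2f}")  # ~0.47
print(f"P(farmer | bookish)    = {joint_farmer / total:.2f}")     # ~0.53
```

Despite the description seeming highly representative of a librarian, the larger base rate makes ‘farmer’ the slightly better bet – exactly the kind of information the representativeness heuristic tends to neglect.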
And yet, we are notoriously wrong. The ‘**conjunction fallacy**’ describes how we often judge the combination of two events to be more likely than either event on its own.
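The underlying rule is simple: the probability that two things are both true can never exceed the probability of either one alone. A minimal sketch, with invented figures in the spirit of Tversky and Kahneman’s famous ‘Linda problem’:

```python
# Conjunction rule: P(A and B) can never exceed min(P(A), P(B)).
# The probabilities below are invented purely for illustration.
p_bank_teller = 0.05   # P(A): Linda is a bank teller
p_feminist = 0.60      # P(B): Linda is active in the feminist movement

# Assuming independence just for the sake of the example:
p_both = p_bank_teller * p_feminist
print(f"{p_both:.2f}")  # 0.03 - less likely than either statement on its own
```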
Making the right decision
**Making the right decision typically involves choosing from among several possibilities**, often without having all the information to hand or even being clear on the consequences.
Participants in a study were asked to choose between a sure win of $800 or an 85% chance of winning $1000. Two-thirds of people chose the sure $800, even though the gamble’s average gain is higher, at $850.
When the study was repeated with losses, participants were asked whether they would opt for a guaranteed loss of $800 or an 85% chance of losing $1000. Despite the gamble’s average loss being larger, at $850, two-thirds of people chose to risk losing $1000.
According to ‘**prospect theory**,’ people experience ‘loss aversion’ – losses loom larger than equivalent gains – so they take greater risks to avoid a loss than they would to secure a gain.
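The arithmetic behind the two gambles makes the asymmetry clear – a short sketch using the figures above:

```python
# Expected (average) values of the two choices described above
sure_gain = 800
gamble_gain = 0.85 * 1000    # 850: on average, the risky option pays more
print(f"Sure gain ${sure_gain} vs gamble's average gain ${gamble_gain:.0f}")

sure_loss = 800
gamble_loss = 0.85 * 1000    # 850: on average, the risky option loses more
print(f"Sure loss ${sure_loss} vs gamble's average loss ${gamble_loss:.0f}")

# Most people take the sure $800 gain (risk averse for gains) yet gamble to
# avoid a sure $800 loss (risk seeking for losses), even though the averages
# point the other way in both cases.
```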
Other studies have uncovered something equally fascinating, including the ‘**sunk cost effect**.’ If we have invested resources in something, we are likely to continue even when it proves unfavorable. However, expertise in the area can override the effect – for example, doctors stopping a treatment when it is clearly not working.
Poor deductive reasoning
All spiders have 8 legs. A tarantula is a spider. Therefore, tarantulas have 8 legs. **‘Deductive reasoning’ refers to the logical conclusions we can draw with certainty from a set of premises**. And yet, one form of it, ‘**conditional reasoning**,’ can become confusing, leading us to incorrect deductions.
For example, when presented with the following 2 premises, “If Mary is angry, then I am upset” and “I am upset,” many conclude that “Mary is angry.” And yet, this needn’t be the case – I could be upset because I have lost my new phone.
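A simple truth-table check shows why the conclusion does not follow – a minimal sketch of the conditional reasoning above:

```python
from itertools import product

# P: "Mary is angry"; Q: "I am upset". Premise 1 is "if P then Q"; premise 2 is Q.
for mary_angry, i_am_upset in product([True, False], repeat=2):
    premise_1 = (not mary_angry) or i_am_upset   # material reading of "if P then Q"
    premise_2 = i_am_upset
    if premise_1 and premise_2:
        print(f"Mary angry: {mary_angry}, I am upset: {i_am_upset}")

# The output includes a case where "I am upset" is True but Mary is NOT angry,
# so "Mary is angry" cannot be concluded with certainty.
```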
Research suggests that the inferences we draw are often based on a poor understanding of probability and on our failure to think of counterexamples that fit the situation.
It seems that, far from being logical thinkers, humans frequently rely on their general knowledge of causal factors and people’s goals and preferences. While it can seem illogical, it often makes sense in the real world.
Creating mental models
Phil Johnson-Laird, Professor at Princeton University’s Department of Psychology, suggested that **we reason using ‘mental models’ to help us understand the logical outcomes of a set of premises**.
When given a set of premises, such as “Joanne is older than Ben” and “Mary is younger than Ben,” we draw up a mental model consistent with both. **If we find counterexamples, the model is rejected; if not, the conclusion is considered valid**.
Such thought processes may involve visual imagery and preserving spatial relationships but are constrained by our limited working memory.
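As an illustration, this kind of model building and counterexample search can be sketched in code – here testing a hypothetical conclusion, ‘Joanne is older than Mary,’ against the two premises above:

```python
from itertools import permutations

people = ["Joanne", "Ben", "Mary"]

# A 'model' here is simply an age ordering (oldest first). Keep only the
# orderings consistent with both premises.
def fits_premises(order):
    return (order.index("Joanne") < order.index("Ben")     # Joanne is older than Ben
            and order.index("Mary") > order.index("Ben"))   # Mary is younger than Ben

models = [order for order in permutations(people) if fits_premises(order)]
print(models)  # [('Joanne', 'Ben', 'Mary')] - only one model survives

# No surviving model is a counterexample to "Joanne is older than Mary",
# so that conclusion is considered valid.
assert all(m.index("Joanne") < m.index("Mary") for m in models)
```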
While many studies have found evidence supporting our use of mental models, the approach assumes people engage in deductive reasoning more often than they actually do – most people find such logical thinking incredibly difficult. The theory also fails to consider our background knowledge or how we approach more ambiguous reasoning problems.
Inductive reasoning
‘Inductive reasoning’ travels in only one direction, from the more specific to the more general – the opposite of deductive reasoning.
We use such reasoning every day. Each time I turn on the tap, water comes out, so I infer that water will come out the next time too. Such inferences are incredibly helpful, if not 100% reliable – there could be a burst pipe and no water at all.
The more observations that feed our inductive reasoning, statistically speaking, the more likely our conclusions are to be true. And yet, as philosopher of science **Karl Popper** recognized, when testing a hypothesis, instead of seeking additional confirmation, we must pursue ‘**falsification**.’
After all, “all frogs outside Brazil are green” – until you stumble across a purple one. Humans make such assumptions because we cannot yet know what we have not encountered. And yet, despite often being incorrect or incomplete, this approach has helped keep individuals and groups safe throughout our long evolutionary journey.
Informal reasoning
**Human reasoning is limited** and fails easily. Anecdotally, we know this when we try to perform complex arithmetic in our heads and get it wrong, or mistakenly take the wrong exit while trying to avoid traffic on the way home. While some of these limitations are likely down to our mental ‘hardware,’ they also result from judgment biases and from failing to understand the problem.
We can also be conservative when using our cognition – choosing to reason informally even when we have the skills and knowledge to reason deductively.
Not only that, but according to what’s known as the **‘Dunning-Kruger effect,’ we are often unaware of our incompetence**, failing to recognize our errors, meaning that we continue to make the same mistakes repeatedly.
We are also more likely to be persuaded by the arguments given by someone who appears to be an expert – even when they are not.
**Informal reasoning may be easier but carries greater risks of coming up with the wrong answer.**