
Post No.: 0890

 

Furrywisepuppy says:

 

Inside our brains are various motives and impulses trying to drive us one way or another, and it all only seems coherent to our conscious mind (if we don’t have split-brain syndrome) because our mind is constantly making up or rationalising stories so that it appears as if ‘only one voice is speaking’.

 

Basic logic can even be overridden for the sake of the representativeness heuristic (see Post No.: 0588) and the coherency of a story. For instance, thinking that a woman known to be a feminist will more likely be a banker who is also active in the feminist movement than merely a banker, when logically an intersection in a Venn diagram can never be larger than any of the conjoining sets i.e. something is always at least as likely to possess one attribute as to possess that attribute plus others (‘the conjunction fallacy’). Presenting the two statements directly one after the other makes the fallacy slightly easier to notice, but even then we can still fall for it.
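
To see the Venn-diagram logic with some rough numbers – a minimal sketch in Python, where every figure is entirely made up purely for illustration:

```python
# Entirely made-up probabilities for a randomly chosen woman fitting the description.
p_banker = 0.05                  # P(she is a banker) -- assumed figure
p_feminist_given_banker = 0.20   # P(active feminist, given she is a banker) -- assumed figure

# The conjunction is a product of probabilities, so it can never exceed either term.
p_banker_and_feminist = p_banker * p_feminist_given_banker   # 0.01

assert p_banker_and_feminist <= p_banker   # the intersection is never the larger set
```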

 

You’re more likely to be a human with hair than a human with dark hair. Tossing HHHT with a fair coin is more likely than tossing THHHT. Whenever you specify a possible event in ever-greater detail, you can only lower its probability of being true. Yet adding detail to stories can make them more persuasive. So suspicions that our partner is cheating on us will sound more plausible the more potential details and motivations we can add to the conjectured story, even though this actually makes that particular story less statistically probable.
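
To put rough numbers on the coin example (the figures follow directly from independent fair tosses):

```python
# The probability of any specific sequence of independent fair coin tosses is (1/2) ** length.
p_hhht = 0.5 ** 4    # 0.0625   -- the four-toss sequence H, H, H, T
p_thhht = 0.5 ** 5   # 0.03125  -- the same sequence with one extra specified toss at the start

# Specifying the event in greater detail can only lower its probability.
assert p_thhht < p_hhht
```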

 

The most coherent stories are therefore not always the most probable – yet they sound the most plausible. Plausibility gets substituted for probability, which can be a problem when forecasting scenarios (e.g. piecing together a coherent story of motive, opportunity and behaviour is what Crown prosecutors attempt to do). The richer and more detailed a constructed story is, the more plausible it might sound, but the less probable it’ll be due to the conjunction fallacy.

 

Finding hard evidence to confirm the individual pieces of an elaborate story won’t make that story more probable per se – it’ll simply make alternative hypotheses less likely. And without hard, unambiguous evidence – an increasingly elaborate conjectured story should certainly become less rather than more persuasive. Conspiracy theories technically become less, not more, probable the more elaborate they become.

 

So it’s plausibility and coherency (a better story), rather than merely more detail, that cause this cognitive error – plausibility is sufficient to achieve an endorsement from our ‘system two’. A tidy, coherent story makes us feel good because of cognitive ease, and we’re more ready to accept and trust answers that are cognitively easy. (Religions, myths, superstitions, and beliefs in karma and the supernatural propagate through their stories too.)

 

System two isn’t terribly alert – people may understand the logic of Venn diagrams yet not apply it reliably even when all the relevant information is staring them in the face i.e. intuition can even block the application of basic logic! We may be entitled to our own opinions but some opinions can be logically incorrect – yet logical fallacies can remain attractive (to our ‘system one’) even when we recognise (with system two) that a logical fallacy has been committed (similar to not being able to stop falling for a visual illusion despite knowing it’s just an illusion).

 

People seem to be less prone to the conjunction fallacy if asked ‘how many of a group of 100 are x?’ (a natural frequency representation) instead of ‘what percentage of people are x?’ – it helps people to see that one group/set is wholly included in another, and to think of individuals rather than an abstract percentage.

 

Conjunctive events decrease in probability the more events there are (e.g. there’s a 50% chance of flipping a head with one fair coin toss, but a 25% chance of flipping two heads in a row), whilst it’s the opposite with disjunctive events (e.g. there’s a 50% chance of flipping a head with one fair coin toss, but a 75% chance of flipping at least one head in two tosses). And conjunctive events tend to be overestimated while disjunctive events tend to be underestimated because we anchor onto the probability of a single (usually the first) event occurring.
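
The coin figures above work out like this (a minimal sketch of the two calculations):

```python
p_head = 0.5   # probability of a head on one fair toss

# Conjunctive: both of two independent tosses must come up heads.
p_two_heads_in_a_row = p_head * p_head        # 0.25

# Disjunctive: at least one head in two tosses = 1 - P(no heads at all).
p_at_least_one_head = 1 - (1 - p_head) ** 2   # 0.75
```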

 

In the context of planning and projects, where there’s typically a conjunctive element (i.e. a series of events must each succeed, and succeed on time, one after another, for the entire furry project to succeed) – this contributes to the tendency to be over-optimistic about the success of most projects. Even when the success of each individual event may be likely, the overall probability of success can be quite low if the number of independent events is large. Conversely, the evaluation of risks has a disjunctive character (e.g. any major part of a financial institution only has to fail once for there to be a catastrophe, even though the risk of any single part failing on any particular day is tiny) – hence the tendency to underestimate the risks of failure in complex systems.
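
As a back-of-the-envelope sketch of both tendencies, with entirely hypothetical figures (not taken from any real project or institution):

```python
# Conjunctive: every step must succeed for the project to succeed.
p_step_succeeds = 0.95   # each individual step looks very likely to go to plan (assumed)
n_steps = 20             # assumed number of independent steps
p_project_succeeds = p_step_succeeds ** n_steps   # ~0.36 -- far lower than 95%

# Disjunctive: a failure on any single day is enough for a catastrophe.
p_part_fails_today = 0.001   # tiny assumed daily failure risk for one critical part
n_days = 365
p_failure_within_a_year = 1 - (1 - p_part_fails_today) ** n_days   # ~0.31
```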

 

When faced with a never-seen-before case – only hearing one side of the evidence can make us feel more confident in our opinions than if we had heard both sides (we can of course try to generate counterviews of our own but we tend not to because that’s a particularly effortful system two process). This is what you’d expect if confidence is determined by the coherency of the story we can generate from the information that’s available to us. Consistency is what makes a story more believable, not completeness, unfortunately. Understanding multiple sides of a story casts relatively more doubt on any single view, or at least nuances and softens extreme views – this is why wiser people hold fewer black-or-white or extreme views. Woof.

 

As alluded to several times before in this blog – a little bit of knowledge can therefore be a very dangerous thing, because only understanding a portion of an issue makes it easier to fit everything you know into a coherent pattern and fluent belief, which in turn increases certainty in that belief – hence those with the strongest opinions about complex issues tend to be those who know too little.

 

This can mean we make confident decisions based on limited information, and rapidly too. We logically don’t (can’t) allow for or mitigate against information that doesn’t come to mind, even though that missing information could be relevant. We even usually fail to allow for the possibility that vital information exists out there that we don’t know about – yet we’ll nearly always be able to form and hold an opinion on everything!

 

So our system one excels at constructing the best possible story that incorporates the ideas that are currently activated, but it doesn’t (can’t) account for any information it doesn’t currently have to mind. How coherent a story one can construct from the available information is what determines one’s confidence, whilst the quality and quantity of that information is largely irrelevant – even if it’s one-sided, or especially when it’s one-sided, because one-sided arguments are easier to make a coherent story out of.

 

And then what tends to happen is that once we’ve settled on a stance, we’d rather not hear any disconfirming information that spoils our own neat and coherent story because this decreases cognitive ease. Questioning our beliefs, and potentially having to un-believe what we believed before in order to possibly believe in something we even previously opposed, is like having to rewrite the textbook on a subject in our own minds, or like denouncing one religion for another in some cases. We’ll want to resist this arduous effort and upheaval, especially if we’ve sunk a lot of costs (e.g. time, energy, money, reputation) into our current worldview.

 

We prefer simple, coherent rules and explanations. We’d rather believe in a nice, neat, coherent world that makes complete sense to us (like it’s always crystal clear who the good and bad guys are) than one that’s chaotic, complicated or messy; even though the world is quite chaotic, complicated and messy.

 

‘What you see is all there is’ also means that even when we know a fact – if that fact doesn’t come to mind when we need it, it’s as good as us not knowing it at all. Our system one utilises only currently activated information. Information that’s neither in front of us nor retrieved from memory might as well not exist when it comes to how it affects our judgements. That’s why it’s best to ask for a favour or pay rise immediately after doing something impressive for the person we’re asking, before their memory of that event fades.

 

‘What you see is all there is’ is one reason why magic tricks, cons and sales spiels work on us – only the information we currently have to mind is taken into consideration by uncritical system one whenever we make a judgement. It also means we’re not great at noticing very gradual changes (e.g. becoming overweight). WYSIATI means that unsure swing voters will be heavily swayed by scandalous news that’s strategically released just before the day of voting (and then this result will have a material bearing for up to 5 years for elections, or longer for referenda!). In sports, how we feel about a team depends highly on their last result. When fixated on the fear of a certain threat, we can forget that other threats exist. It’s why people are more accepting of increased use of stop and search straight after a terrorist attack, but after a while this attitude wanes again and people will wonder why they’re being constantly searched.

 

Making sense of partial information in a complex world works well enough most of the time, but it can lead to overconfidence; we can be manipulated by ‘framing effects’ (i.e. different ways of presenting the same information can evoke different emotions/feelings e.g. seemingly inconsequential variations in the wording of a choice problem can lead to large shifts in preferences, like ‘90% fat-free’ sounding better than ‘10% fat’, even though we can figure out, if we employ some critical thinking, that they mean identical things); and we can fall foul of numerous statistical logic errors (e.g. base rate neglect).
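
To illustrate the base rate neglect mentioned in that last bracket – a quick Bayes’ theorem sketch with entirely hypothetical numbers (a generic screening-test scenario, not a figure from this post):

```python
# Hypothetical numbers purely to illustrate base rate neglect.
base_rate = 0.01        # 1% of people actually have the condition
sensitivity = 0.90      # P(test positive | has the condition)
false_positive = 0.10   # P(test positive | does not have the condition)

# Bayes' theorem: P(has the condition | tested positive)
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
p_condition_given_positive = (base_rate * sensitivity) / p_positive   # ~0.083

# A seemingly '90% accurate' test still only means ~8% odds after a positive result,
# because the low base rate dominates -- the part our intuition tends to neglect.
```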

 

The less we know (but we know a little), the easier things can seem (e.g. assuming it must be straightforward to find a massive plane that has crashed in the Indian Ocean, or to shoot drones down with rifles at airports). And so everybody’s an armchair critic! Anybody who thinks that a major issue, which has presented challenges to humankind and civilisation for many decades without a satisfactory solution, is easy to solve, is most probably naïve, especially if their ideas aren’t original and have already been discussed by other thinkers.

 

The less we know, the more we tend to over-simplify and over-generalise – ‘some’ becomes taken to mean ‘all’ i.e. stereotypes that lead to traps or poor decisions (e.g. ‘natural’ always means ‘healthy’, ‘expensive’ always means ‘better’, ‘Muslims’ are all ‘terrorists’, ‘blondes’ are all ‘dumb’). If people still desperately want a rule of thumb, a better one would be ‘it’s likely to be more complicated than that!’

 

When information is scarce (which is common), system one operates by jumping to conclusions, and unless we spend the effort to question a conclusion, lazy system two won’t be engaged to ask for more information that might override that opinion – i.e. an opinion will be formed by system one, one way or another (although it can change if/when more information is presented), but it will not wait for more information and it cannot avoid forming a preliminary opinion. There’s no waiting and no subjective discomfort with our preliminary opinion. System two can be employed to make more systematic analyses, but system one still influences even our most careful decisions because it cannot be switched off!

 

So a good habit is to put in the deliberate effort to ponder whether, and what types of, information we could be missing.

 

Woof!

 
