Post No.: 0404
Weirdly, if you ask people to list only 6 instances of when they behaved assertively, they’ll generally deem themselves quite assertive people. But if you instead ask them to list 12 instances and they struggle to complete the list, they’ll deem themselves less assertive – even if they’ve listed more than 6 instances in this latter case! The reverse is also true – if one struggles to complete a list of the times they were not assertive then they’ll deem themselves quite assertive.
This confirms that the ease with which instances of something come to mind matters far more to our perceptions and judgements than the number of instances we can retrieve, and this ease or strain can arise in any way too i.e. not just from struggling to come up with examples of something but from feeling unhappy during the task or being under time pressure, for instance. Any method of increasing or decreasing cognitive strain (e.g. frowning or smiling) will make a task feel more or less difficult and so will affect our related judgements accordingly.
It’s also got to do with how judgements are always made via comparisons between things currently brought to mind – so 6 will seem high if the suggested standard is coming up with 6 instances, but low if the suggested standard is coming up with 12 instances. High or low, assertive or not assertive, and other such judgements, can only be made relatively, not absolutely or objectively.
Regardless, this means that people who can only think of a few examples of something happening can feel more confident that this something is common than people who can think of many more examples – depending on whether those examples came to mind easily (i.e. were more cognitively available) or not. An example is the perception of how frequently civilian planes crash or get shot down after one or two such incidents are reported in the news in quick succession.
People can therefore potentially feel less confident in a choice after they’ve been asked to produce more arguments to support it, feel they’ve under-performed at an activity even after recalling numerous times they’ve performed that activity, feel less confident that an event was avoidable after listing more ways it could’ve been avoided, or rate something higher after being asked to list more ways in which that something could’ve been improved… if they’ve struggled to come up with the seventh, eighth or later instance, example or reason.
But then sometimes people do go by the number of examples generated rather than the ease with which examples were generated – this is because it’s not just about the simple lack of ease in coming up with more examples (we all expect a bit of a drop in ease when coming up with the twelfth compared to the sixth example of something) but about the lack of ease in coming up with more examples compared to the ease one expected. So if someone assumed that it’d be easy to list 12 reasons why pugs are better than poodles but then struggles to think of a ninth reason, their confidence in their stance will be weaker than that of someone who thought it’d be easy to list maybe 6 but not necessarily 12 reasons. (All dog breeds are equally cool and deserve equal love and care I’d say – woof woof!) The inference is that if one is struggling more than expected to justify a stance then maybe that stance isn’t that solid after all. It’s the surprise of fluency being worse than expected – or the ‘unexplained unavailability heuristic’.
If, however, people are given a reason why their ease in coming up with further examples may be adversely affected (even a spurious one, such as the colour of the paper used!) then they will tend to explain away their struggle and not factor their lack of ease into their judgements. So people who’ve been asked to come up with either 6 or 12 examples of when they’ve been assertive will on average feel equally assertive if told that the background music may be distracting their thinking. Any surprise concerning their struggle to think of examples is explained away, and the ease or fluency no longer factors into their judgements of their own assertiveness.
Now if a person is more personally or seriously (as opposed to casually) involved in a judgement though, then they’ll more likely go by the number of instances retrieved than the ease with which they come to mind. For example, those with a family history of cardiovascular disease will feel safer if they can think of more examples of protective behaviours and feel more in danger if they can think of more examples of risky behaviours i.e. they’ll more likely be vigilant and use more critical thinking if it’s a really serious, life-or-death decision. (They’ll also more likely feel that their future behaviour will be affected by the experience of evaluating their risk today.) In contrast, when we approach a judgement only casually and so are relying on our intuitions, when critical thinking ‘system two’ is otherwise mentally preoccupied or overloaded, when there’s cognitive ease (e.g. one is made to feel temporarily in a good mood or powerful), when we know just enough about something to be overconfident about it, or when we simply prefer to trust in our intuitions – we will tend to rely on the availability heuristic of ‘system one’. The ‘availability heuristic’ was discussed more deeply in Post No.: 0157.
Estimates of causes of deaths are oftentimes massively warped by media coverage. The media itself is biased towards stories of novelty and poignancy, and it shapes, as well as is shaped by, what the public is interested in, thus creating a runaway feedback effect (as opposed to a self-balancing corrective effect). The world in our heads is not an accurate 1:1 representation of reality because, for just a start, our expectations about the frequency or risk of events are distorted by the prevalence and emotional intensity of the messages to which we’re exposed. This is the ‘affect heuristic’ – people make judgements by consulting their emotions. It substitutes the question of ‘what do I think about it?’ with the easier question of ‘how do I feel about it?’ The emotional tail wags the rational dog! People are habitually more guided by emotion than reason, swayed by trivial details, and inadequate when it comes to intuiting probabilities.
Frightening messages and images that evoke fear are more affecting, salient, dramatic and vivid, and are therefore particularly easy to think of and recall, so they feed the availability heuristic and exacerbate people’s emotional fears. And thoughts of danger that are cognitively fluent and vivid exacerbate fear in a self-reinforcing way too. This is why fear is such a powerful persuasive tool, and why the media, politicians and advertisers/salespeople all utilise or exaggerate it.
The affect heuristic simplifies our lives by making the world seem tidier and decisions seem easier than they really are. For example, if something is perceived to be highly beneficial then (without further relevant information provided) it’ll also be perceived as low risk, and vice-versa; whereas in reality, we often face difficult tradeoffs between benefits and costs. System one coherently assumes that ‘high risks’ and ‘low benefits’ go together, and ‘high benefits’ and ‘low risks’ go together, whereas according to market norms, high benefits should – and do tend to – come with high risks, and low benefits with low risks!
But then risk is arguably subjective – some deaths can be seen as better than others. Deaths that occur when doing voluntary activities can be seen as better than random or accidental deaths, for example. Some deaths appear costlier than others i.e. there’s no objective determination of risk – risk isn’t (just) measurable by the number of lives or life-years lost. The utility of something depends on the measure chosen to operationalise it too (e.g. deaths per million people, or deaths per million dollars of product produced – where choosing which method can be a matter of which serves one’s agenda better).
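As a toy sketch of how the choice of measure can flip which of two things looks riskier – every name and number below is invented purely for illustration, not a real statistic:

```python
# Hypothetical figures only - invented to illustrate the point, not real data.
industries = {
    # name: (annual deaths, workers employed, dollars of product produced)
    "industry_a": (200, 1_000_000, 1_000_000_000),
    "industry_b": (50, 1_000_000, 100_000_000),
}

for name, (deaths, workers, dollars) in industries.items():
    deaths_per_million_people = deaths / (workers / 1_000_000)
    deaths_per_million_dollars = deaths / (dollars / 1_000_000)
    print(name, deaths_per_million_people, deaths_per_million_dollars)

# industry_a 200.0 0.2
# industry_b 50.0 0.5
```

By deaths per million workers, industry_a looks 4 times deadlier; by deaths per million dollars of product, industry_b looks 2.5 times deadlier – so whichever measure gets quoted can be the one that serves the quoter’s agenda.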
Yet this subjectivity doesn’t make all judgements of risk equally valid. For example, we struggle when evaluating small risks – we tend to either ignore them altogether or severely overweight them, hence the amount of concern we hold about something doesn’t always match the statistically known probability of harm from it (e.g. a parent knows that the risk of his/her daughter getting into trouble during a night out is low yet he/she cannot stop the images of horror that come to mind from disproportionately affecting his/her judgement); never mind the problem of trying to subjectively guess the unknown probability of a danger. We imagine the numerator (the tragic story in the media) without thinking about the denominator (the total population) i.e. we neglect the base rate, or how often this actually happens in the population as a whole. Possibility is not the same as probability.
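The numerator-versus-denominator point can be sketched with made-up numbers (neither figure below is a real aviation statistic):

```python
# Hypothetical numbers for illustration - not real aviation statistics.
crashes_in_the_news = 2          # the vivid numerator our imagination fixates on
flights_per_year = 40_000_000    # the dull denominator we tend to neglect

probability_per_flight = crashes_in_the_news / flights_per_year
print(probability_per_flight)    # 5e-08, i.e. roughly 1 in 20 million flights
```

Two vivid stories dominate the imagination, yet once the denominator is counted the implied per-flight risk is minuscule.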
It’s this neglect of probability – or ‘probability neglect’ – which, in combination with the social mechanisms of the affect heuristic, availability heuristic and availability cascade (which all influence system one), leads to a gross exaggeration of minor threats (e.g. terrorists kill far fewer people than traffic accidents do, yet anti-terrorism gets far more media attention and public resources than is proportional. The same mismatch of priorities occurs with saving furry pandas versus saving wild or native bees and other threatened animals and plants).
Both views may be required though – one may be more objectively correct, but it can be pointless if one cannot motivate action by listening to the emotions of the public. We are trying to help humans after all, and the human animal thinks and behaves in certain human animal ways, hence we must be sympathetic to this rather than rest solely on arguing that humans ought to think and behave in ways that are unnatural for the species. Irrational or not, fear is debilitating, and the public must be protected from fear, as well as from genuine dangers. It’s not easy trying to reason people into a state of rational calm, although maybe more education can help put things into more rational perspective.
So even if scientists or experts claim that humans would be better off if they behaved in more rational ways – they’d be naïve to expect them to, and they’d be naïve if they didn’t account for the real-world behaviours of real humans. Real humans should be listened to at times, and expert opinions shouldn’t be beyond questioning by laypeople either. On the other paw, irrationality can lead to erratic and misplaced priorities, and a waste of lives and money. People sometimes don’t choose what’s best for themselves, particularly in the long-term (e.g. by either neglecting small risks altogether or severely overweighting them), and such irrational perceptions will influence government policies and spending.
A broad-frame view that takes into account all risks and resources may be more rational than the emotional sentiments that tend to be borne from public pressures, but without the public’s emotional support in a democracy, such policies will be rejected. Thus, in a compromise between idealism and realism, expert knowledge must combine with the public’s emotions and intuitions. A politician must appeal to the public’s emotions because the public will generally weight their own emotions highly when it comes to how they’ll vote.
Woof! Well anyway, if you can give 3 reasons why you think this blog is awesome, or 300 reasons why you think this blog is pants, then please list them via the Twitter comment button below(!)