
Post No.: 0588

 

Furrywisepuppy says:

 

One of the most basic heuristics of ‘system one’ is to represent things in categories, each with a conception of norms and prototypical exemplars that members of that category are expected to possess or emulate (e.g. we hold in our associative memories a representation of one or more ‘normal’ or ‘typical’ examples of horses, tables, rock stars or terrorists). When these categories are social, they’re called stereotypes, which are sometimes useful but at other times unreliable. Stereotyping, rightly or wrongly, is how system one thinks of categories, and while many stereotypes are broadly accurate, many others are erroneous or uncritically applied, which can lead to assumptions that are harmful or unjust.

 

Stereotypes are statements about a group that are (at least tentatively) believed to be ‘facts’ about every member of that group. The similarity of an individual to the perceived stereotype of a group isn’t affected by the size or even diversity of that group. And the less one knows about a group, the more one tends to over-generalise about it. Descriptions of individuals are often not trustworthy or complete enough to make a fair judgement anyway, yet most of us seldom question the quality or quantity of the information we receive about another person’s personality or life before judging them.

 

In fact, a lot of us take pride in being able to judge others within a few seconds or with scant information about them i.e. we like to trust our intuitions. But that only means we’re relying on potentially fallible system one heuristics such as stereotypes or substitution, rather than more critical ‘system two’ assessments, especially if we’re judging mainly on superficial appearances.

 

There are also numerous line-drawing problems when it comes to trying to categorise things (e.g. what’s the smallest size an orbiting object can be and still be called a moon? Or where do the sideburns end and the fuzzy beard start?) The notion of ‘I’ll know it when I see it’ can be too subjective, especially if we’re seeking consistent definitions.

 

The ‘representativeness heuristic’ pertains to the similarity of a description to a stereotype, or the degree to which someone/something is representative of or resembles someone/something else. We tend to think that ‘like goes with like’ or that causes and effects should resemble each other. This bias can make people believe that two similar people/objects/events are more closely correlated than they really are. The representativeness heuristic often plays a greater role in our judgements than objective base rate frequencies or prior probabilities of outcomes do.

 

For example, knowing nothing about a person other than that they’re (allegedly) ‘nerdy’, one would guess that this person is more likely to be a computer scientist than a social worker – even though one may understand that there are far more social workers in the world than computer scientists. Or even if every single librarian were quiet and introverted in personality – there’ll still be many more quiet and introverted people in the much larger set of business students. In other words, we tend to neglect relevant statistical facts and rely on resemblance, or representativeness, as a cue to the probability of an outcome.
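
To make the arithmetic concrete, here’s a minimal sketch of the librarian example (in Python, with made-up numbers that are purely illustrative): even if the stereotype fits every librarian perfectly, the sheer size of the larger group dominates.

```python
# Hypothetical, made-up numbers purely to illustrate why base rates beat resemblance.
librarians = 20                 # a small group...
business_students = 2_000       # ...versus a much larger one

p_quiet_given_librarian = 1.00  # suppose *every* librarian fits the quiet, introverted stereotype
p_quiet_given_student = 0.30    # and only some business students do

quiet_librarians = librarians * p_quiet_given_librarian     # 20
quiet_students = business_students * p_quiet_given_student  # 600

# Probability that a randomly chosen quiet, introverted person from this pool
# is actually a librarian:
p_librarian_given_quiet = quiet_librarians / (quiet_librarians + quiet_students)
print(f"{p_librarian_given_quiet:.1%}")  # ~3.2% - resemblance loses to the base rate
```

With these illustrative figures, a quiet, introverted person is still roughly thirty times more likely to be a business student than a librarian, despite fitting the librarian stereotype perfectly.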

 

The representativeness heuristic basically substitutes the question of ‘what is the probability of this person fitting that category?’ with a much simpler question of ‘how similar is this person to the stereotype of that category?’ System one automatically generates an impression of similarity without intending to do so.

 

The representativeness heuristic is in action when we say, “His voice doesn’t sound the part for being a leader” or, “She doesn’t look like an academic” or, “The furniture represents a well-managed company” – as if the timbre, appearance or chairs are what make a good boss, scholar or well-run firm respectively(!) The lack of logic in such judgements is apparent when they’re pointed out like this, yet such judgements feel quite intuitive to make. And if we don’t question them with some critical thinking then we’ll believe them to be sound judgements.

 

Our stereotypes bias judgements of fit into categories, even though factors like voice or looks are essentially irrelevant or non-predictive in such evaluations. Yet people will express greater confidence in their predictions when the selected outcome (e.g. librarian) better represents the input (e.g. he/she is quiet and loves to read books), and when there’s little regard for the factors that limit predictive accuracy (e.g. lots of non-librarians are quiet and love to read books too). This unwarranted confidence stems from the ‘illusion of validity’ – erroneously assuming predictive power in cues where there is little or none.

 

People can fall for this fallacy even when they know that the predictive accuracy of the information they base their predictions on is low. For instance, people will still overrate their own ability to pick out the top employees of the future from interviews, even when they’re aware of the vast literature showing that typical selection interviews are highly fallible.

 

Stereotypes can be accurate most of the time when they rely on the general data (e.g. that young men are more likely than elderly women to drive aggressively). But each individual is an individual, and often a stereotype isn’t based on the general data at all but on skewed or biased impressions, small samples or unreliable, prejudiced data (e.g. selectively biased media representations of low-skilled immigrants). We should know that worthless or unreliable information should really be treated the same as no information i.e. disregarded, yet most of us will still be swayed by it.

 

If you don’t have any case-specific information then you should anchor on the baseline prediction for the general category to which that person/object/event belongs (the population average for getting a speeding ticket, in this example). But if you do have some case-specific information (e.g. Larry is a young male) then you can adjust this anchor towards the baseline prediction for that more specific category (the relevant subset average i.e. the average rate at which young male drivers get speeding tickets). We should however only draw tentative conclusions about an individual from the statistics of a group. If we want to draw more confident conclusions, the most reliable kind of information would be even more individually case-specific (e.g. Larry is known to drive over the speed limit regularly, or most reliable of all – Larry is known to have accrued three speeding tickets already).
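
As a toy illustration of that ordering (this isn’t a statistical model, and every figure below is invented for demonstration only), a prediction could move from the population base rate, to the subset base rate, to the individual’s own record as more case-specific information becomes available:

```python
# Toy illustration only - every number here is hypothetical.
P_TICKET_POPULATION = 0.10   # assumed population base rate for getting a speeding ticket
P_TICKET_YOUNG_MALE = 0.25   # assumed base rate for the 'young male driver' subset

def predicted_ticket_probability(is_young_male: bool = False,
                                 known_tickets: int = 0) -> float:
    """Anchor on the broadest base rate, then move to the most specific
    reference class that the available case information supports."""
    if known_tickets >= 3:
        return 0.60          # assumed: a strong individual track record dominates
    if is_young_male:
        return P_TICKET_YOUNG_MALE   # subset base rate
    return P_TICKET_POPULATION       # population base rate

print(predicted_ticket_probability())                                     # 0.10 - no case info
print(predicted_ticket_probability(is_young_male=True))                   # 0.25 - subset average
print(predicted_ticket_probability(is_young_male=True, known_tickets=3))  # 0.60 - individual record
```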

 

People will rely on base rates if no other information is presented, but once a mere hint of a specific personality trait is offered – even worthless or irrelevant information (e.g. Larry has a blue mohawk haircut) – people will tend to weight that specific information far more heavily than the base rate information, even if the base rate isn’t completely neglected.

 

But base rates still matter, especially for rare events, even when you’re presented with specific and valid information about a particular person or instance. So anchor your judgement of the probability of an outcome on a sensible base rate, then question the diagnosticity of your evidence i.e. don’t exaggerate the degree to which the specific information favours your hypothesis over the alternative. It takes a lot of strong and reliable idiosyncratic evidence to counter a base rate, so when the evidence for predicting a rare event is weak, one should stick essentially to the statistical base rates. Even a conclusion based on solid preliminary evidence should still carry whatever uncertainty is due. And make sure even your subjective beliefs are constrained by the logic of probabilities (e.g. the probabilities across a full set of mutually exclusive outcomes should add up to 100%), and update your predictions in light of this logic and any verified evidence.
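
As a rough sketch of that logic (again with invented numbers), a Bayes’ rule calculation shows how weak, barely diagnostic evidence should hardly budge a low base rate:

```python
# Invented numbers, purely to show how little weak evidence should move a low base rate.
prior = 0.02                 # assumed base rate of the rare outcome: 2%
p_evidence_if_true = 0.80    # the evidence is only slightly more likely...
p_evidence_if_false = 0.70   # ...when the rare outcome holds than when it doesn't

# Bayes' rule: P(outcome | evidence)
posterior = (p_evidence_if_true * prior) / (
    p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
)
print(f"{posterior:.1%}")    # ~2.3% - barely above the 2% base rate

# The logic of probabilities still constrains the beliefs: the full set sums to 100%.
assert abs(posterior + (1 - posterior) - 1.0) < 1e-9
```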

 

A major problem with representativeness is overestimating the likelihood of unlikely, rare or low-base-rate events. Descriptions of personalities are also over-weighted compared to situational factors (e.g. if asked to guess whether a person who is reading a financial broadsheet on the subway has ever directly invested in stocks, most people will guess that this person likely has – even though most people who ride the subway have never directly invested in stocks).

 

Even when people are told that a description of a person might be unreliable, the representativeness heuristic still has a tendency to make people automatically pigeonhole whom they’re judging according to such descriptions – thus representativeness is insensitive to the quality of evidence presented.

 

Unless you immediately decide to reject some piece of purported evidence, your system one will automatically process the information available as if it were true. Apparently, priming people to ‘think like a statistician’ enhances the use of base rates, but priming people to ‘think like a clinician’ has the opposite effect. Activating system two, such as via increasing cognitive strain, helps, but sometimes people will still consciously ignore the base rates because they believe these become irrelevant once individual information is presented.

 

Of course, just because system one is sometimes wrong, it doesn’t mean system two is always correct and will always correct system one if called into action – even with all the deliberation and effort in the world, no one knows everything and our conscious selves can still be ignorant (e.g. regarding understanding Bayes’ theorem). Systems one and two will both be indicted whenever an incorrect intuitive judgement is declared, because system one will have suggested the incorrect intuition and system two will have endorsed it and expressed it as a judgement.

 

For a primer on priming, please see Post No.: 0194. Priming works because we cannot hold everything that we’ve learnt and know in the front of our minds simultaneously, so whatever’s primed in our minds will become the most salient thing we’ll think of at a particular moment. If two groups of otherwise randomised Asian-American women in America were to take a maths test in a lab experiment, and one group was primed to think about their Asian identity and the other group was primed to think about their female identity – the former group would, on average, perform better than the latter, because of the pernicious stereotype, in much of America at least, that women are worse at maths than men, even though this isn’t empirically true.

 

Summarising the main point – the representativeness heuristic is when we categorise something based on the idea of a ‘typical’ prototype that we have in our mind for that category of thing, and this bias sometimes leads to errors of judgement that we don’t feel are errors at all.

 

Woof.

 
