Post No.: 0052
Summary statistics – such as the mean (the average), the median (the ‘middle’ value) and the mode (the most frequently occurring value) – based on a sample group represent that sample group as a whole only, and do not necessarily represent any individual within that sample group or any individual in the population that the sample group is supposed to represent. (An example of a ‘population’ is everyone in your country, which could be millions of people, and an example of a ‘sample’ is the few thousand people surveyed who are used to try to gauge the population of your country as a whole.)
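As a quick illustrative sketch (the ages below are made-up numbers, purely for demonstration), Python’s standard library can compute all three summary statistics – and note that none of them need match any one individual:

```python
from statistics import mean, median, mode

# A made-up sample of ages – invented figures for illustration only
ages = [21, 25, 25, 30, 36, 41, 60]

print(mean(ages))    # 34 – the arithmetic average (no one here is 34!)
print(median(ages))  # 30 – the 'middle' value when sorted
print(mode(ages))    # 25 – the most frequently occurring value
```

Notice that the mean (34) doesn’t describe any individual in this sample at all – three different ‘summaries’ of the same group, and none of them is anyone in particular.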
So these statistics only apply to the sample or population as a whole, but once we start to talk about or deal with specific individuals (whether people included in a studied sample or inferred from a studied sample) we must view and treat them as individuals (e.g. just because the average weight of people in America is x kg – it doesn’t mean everyone in America is around x kg in weight).
The average person may have a, b and c traits but this doesn’t mean everyone, or even anyone (especially if lots of variables were measured), in that sampled group or population necessarily has all those traits in one person (e.g. if the average height of a tall person and a short person is a medium height – it doesn’t mean either of those two people is of medium height). You’ll know all this when you feel like you don’t personally fit a statistic or generalisation about, say, your own demographic group – if you don’t fit the generalisation as an individual then you don’t, and it’d be wrong for others to assume you do or should.
Statements of statistics like the ‘average’ or ‘the highest figures since year z’ always need to be read and understood with caution and in their full context. One-off outliers can dramatically affect these figures and mask or over-accentuate rising or falling patterns or trends. A higher ‘average’ is not the same thing as a higher likelihood. The median is often far more robust than the mean because the impact of outliers is reduced. And the sample average is not necessarily the same as the population average either (although randomisation and a large enough sample size can, according to probability, bring them close).
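A tiny sketch of the outlier point, using made-up income figures – watch how one extreme value drags the mean but barely nudges the median:

```python
from statistics import mean, median

# Hypothetical annual incomes in £k – invented figures for illustration
incomes = [30, 32, 35, 36, 38]
print(mean(incomes), median(incomes))   # 34.2 and 35 – close together

# One billionaire outlier joins the sample
incomes_with_outlier = incomes + [10_000]
print(mean(incomes_with_outlier))    # ~1695 – dragged far upwards
print(median(incomes_with_outlier))  # 35.5 – barely moves
```

This is why the median is often the more robust summary when a distribution is skewed or contains extreme values.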
‘Ecological correlations’ are not necessarily the same as individual correlations (e.g. a group’s average number of hours spent exercising may correlate with the group’s average body mass index, but this doesn’t necessarily mean an individual’s hours spent exercising will correlate with their own body mass index in the same way). ‘Simpson’s Paradox’ shows that a positive (or negative) trend that appears within each of two separate groups can reverse to a negative (or positive) trend when these very same groups are combined.
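Here’s a minimal sketch of Simpson’s Paradox with invented success/attempt counts (chosen purely so the arithmetic is easy to check) – strategy A wins within every case type, yet B wins overall because the two strategies faced very different mixes of easy and hard cases:

```python
# Made-up (successes, attempts) counts to illustrate Simpson's Paradox
data = {
    'A': {'easy': (18, 20), 'hard': (30, 100)},
    'B': {'easy': (80, 100), 'hard': (4, 20)},
}

def rate(successes, attempts):
    return successes / attempts

for case in ('easy', 'hard'):
    a = rate(*data['A'][case])
    b = rate(*data['B'][case])
    print(case, a, b)   # A wins within each case: 0.90 > 0.80 and 0.30 > 0.20

overall_a = rate(18 + 30, 20 + 100)    # 48/120 = 0.40
overall_b = rate(80 + 4, 100 + 20)     # 84/120 = 0.70
print(overall_a, overall_b)            # ...yet B wins when the groups are combined
```

The reversal happens because A mostly faced hard cases and B mostly faced easy ones – which is exactly why combined (or separated) figures can be cherry-picked to support a chosen conclusion.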
This highlights that statistical data can be open to manipulative presentations and therefore one-sided interpretations (e.g. depending on one’s bias or agenda, one may find that presenting only the average or presenting only the range between the minimum and maximum values supports their conclusion better – these aren’t lies but they aren’t the fullest picture either i.e. we can mislead without lies via biased presentations of data). And that’s why it’s often better to wait until one has access to the entire histogram of data before forming one’s own conclusions i.e. look at the entire set of data for yourself and not just the summary statistics provided by somebody else, then use this dataset to form your own furry conclusions. Woof!
In any study, one must also be aware of the chosen operational definitions (e.g. how did they define ‘athletic’ or ‘youthful appearance’?), how representative the study was of its intended purpose and of real life (its ecological validity), any incomplete or ambiguous information, the assumptions made, and whether any inferences to the general population were made with a sample or samples that were random and large enough; and be aware of any conflicts of interest among the contributors (e.g. a study on smoking funded by cigarette companies, or a study on climate change funded by fossil fuel firms), for instance.
Facts are facts, but to make sweeping inferences or stereotypes does not constitute fact at all. Statistics about most crimes of a certain kind being disproportionately represented by a particular racial subset of a population may be true according to summary statistics – but again, statistics about a population or sample do not necessarily reveal anything reliable about any particular individual within or inferred from that population or sample i.e. the statistics of a population must only be used to talk about the statistics of a population, not about any particular individual in that population (unless the statistic applies to 100% or 0% of that population (i.e. zero variance) – which crime statistics don’t). Not even half of all members of any ethnicity commit serious crimes, so why stereotype most people of any ethnicity as potential criminals? Since statistically far fewer than half of them commit crimes, why not logically generalise them, if one should generalise them at all, as non-criminals, since this would be a more accurate and useful rule of thumb?! A terrorist who follows a certain religion is a terrorist for committing, planning and/or inciting a terror attack, not because of his/her religion – evidently, terrorists and mass killers come from many different religious backgrounds, and from none at all (including atheists).
Or even if there were a true statistic that says e.g. ‘9 out of 10 people from country x like y’, and even if you met 10 people from country x, you can’t presume for certain that any particular one of them will like y. The odds may on the face of it be very high, but you shouldn’t prejudge these individuals without first knowing their specific facts (e.g. this group of 10 people may hang around with each other precisely because they all don’t like y! You just don’t know, so you shouldn’t jump to conclusions). Quite simply, stereotypes based on crude inferences are highly prone to being unreliable when one is faced with a bunch of individuals – because everyone is an individual!
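As a rough sketch of how weak that ‘9 out of 10’ statistic really is for a group of 10 (assuming, unrealistically, that the 10 people are random and independent draws from country x – real friend groups are anything but):

```python
# If 90% of people from country x like y, and you meet 10 of them as
# random, independent draws, the chance that *all* 10 like y is:
p_all = 0.9 ** 10
print(round(p_all, 3))   # ~0.349 – most of the time at least one won't like y

# And any specific individual still has a 10% chance of not liking y.
# Real groups (e.g. friends who share tastes) aren't independent draws
# at all, so even these figures can be far off in practice.
```

So even under idealised assumptions, ‘9 out of 10’ fails to guarantee anything about all 10 people nearly two-thirds of the time – and the independence assumption itself is exactly what a self-selected group of friends breaks.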
Sample or population statistics are useful if you have no detailed information about your own individual case (e.g. if you’re 55 years old and have no further information about your own individual case, such as a heart problem or dementia, then you could take the projected average life expectancy for your age group as your own estimate). They can also be used to make decisions at the population level (e.g. a particular county has more people in need of a new hospital than any other county, so let’s build a new hospital there, even though we’re not saying that everyone in that county needs it). Governments can therefore often make use of them, as can firms deciding which area to focus their marketing efforts on, for instance. Generalisations can therefore be useful for brevity if we’re talking about a group in general terms.
They can improve efficiencies by following the probabilities – but at the cost of individual rights and liberties, such as when used for criminal profiling. Stereotypes are especially fallible when there is huge overlapping variance between individuals within and across arbitrary category bins. This is why profiling (e.g. by gender, ethnicity or religion) is problematic, even though such strategies are sometimes rationally efficient overall. But even if a heuristic is correct most of the time, is one injustice (i.e. one false positive or false negative) already too many from a moral point of view? Plus real criminal organisations could adapt after learning what the stereotyped criminal profile is ‘supposed to look like’.
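A sketch of why a ‘mostly correct’ profiling heuristic can still mislabel far more innocent people than offenders when the behaviour being screened for is rare (all figures below are invented purely for illustration):

```python
# Hypothetical numbers: suppose 1 person in 1,000 is an offender, and a
# profiling heuristic flags offenders with 90% sensitivity and a 5%
# false-positive rate. Out of 100,000 people screened:
population = 100_000
offenders = population // 1000              # 100 offenders
innocents = population - offenders          # 99,900 innocent people

true_positives = int(offenders * 0.90)      # 90 offenders correctly flagged
false_positives = int(innocents * 0.05)     # 4,995 innocents wrongly flagged

flagged = true_positives + false_positives
precision = true_positives / flagged
print(flagged, round(precision, 3))   # 5085 people flagged; only ~1.8% are offenders
```

Because the base rate is so low, the heuristic generates over fifty false accusations for every genuine catch – ‘correct most of the time’ at the individual-test level, yet overwhelmingly unjust in its flagged group.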
If we are to hold useful stereotypes, we should at least base them on the 99%, or at least the majority of a group, rather than on the 1% or minority. Some stereotypes work fine as a heuristic e.g. that most dogs moult hair or drool, because most dogs do (although even in this case not all, hence we must still treat each breed, or even sometimes each individual dog, on a case-by-case basis – woof woof). But some stereotypes are plain illogical e.g. a squad of about 23 people wins the World Cup, therefore all x million people from that nation are expected to be good at football(!) This is not judging the ‘all’ according to the ‘most’ (which can still be fallible) but judging the ‘all’ according to evidence gathered from just a tiny percentage of that group (which will logically produce a very high rate of error). Stereotypes (although always subject to some error) are only effective and useful if they are based on the evidence of a majority of the population in question, not if they are based on the evidence of a minority of the population in question. But simple intuitions rely on simple rules and biases, such as crude stereotypes and the availability heuristic (described in Post No.: 0024).
Furrywisepuppy hopes that it has been made super clear that stereotypes are frequently fallible and that everyone should be more sophisticated and judge individuals on an individual case-by-case basis, even though this requires a bit more cognitive effort than relying on crude generalisations. It’ll make for a better world.