
Post No.: 0090crowd


Furrywisepuppy says:


The ‘wisdom of the crowd’ or ‘collective wisdom’ works when individuals may greatly over-estimate or greatly under-estimate answers to a question, such as ‘how heavy is this cow?’ If the sample size is large enough (enough people are asked) and the samples are independent of each other (these people’s answers or guesses are all formed independently of each other), the errors should average out to zero and therefore the average answer should be close to the correct answer. (This method of error reduction is the main reason why randomised controlled trials are the gold standard in scientific experiments – all other variables will have their errors probabilistically de-correlated, leaving only the intervention under investigation to likely explain any differences between the experimental and control groups.)
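This averaging-out of independent errors can be illustrated with a small simulation. (A hedged sketch: the true weight of 600 kg, the 150 kg error spread and the crowd sizes below are all made-up figures for illustration, not data from any actual experiment.)

```python
import random

random.seed(42)

TRUE_WEIGHT = 600  # hypothetical cow weight in kg (made-up figure)

def independent_guess():
    # Each guesser errs independently: some guess high, some low,
    # with a spread of 150 kg around the true weight.
    return TRUE_WEIGHT + random.gauss(0, 150)

for n in (10, 100, 10_000):
    crowd_average = sum(independent_guess() for _ in range(n)) / n
    print(f"{n:>6} guessers: average = {crowd_average:.0f} kg")
```

The larger the crowd, the more the high and low guesses cancel, so the average homes in on the true weight – exactly the error reduction the paragraph above describes.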


The wisdom of the crowd works well for tasks or questions where people don’t have a preconception or prior agenda (and therefore no motivational bias), and where each person’s judgement is independent of the others’ so that any systematic errors/biases cancel out. So the principle works well for group tasks like guessing the weight of a cow or the number of marbles in a jar, and it’s these types of tasks that experiments have used to come to the conclusion that there is such a thing as the ‘wisdom of the crowd’.


But the conclusions of such experiments have frequently been over-extrapolated. In many real-world scenarios and tasks, ‘collective wisdom’ is often a fallacy because, most of the time, relatively few people have any real competence to answer the matter at hand. Everyone can take a fair stab at e.g. guessing the weight or quantity of something right in front of them, but not everyone can take a fair stab at e.g. guessing the right results of legal cases. So what tends to happen is that most people just follow the most popular view (meaning that people’s views aren’t independent from each other but are in fact directly influenced by each other) – they follow each other’s ill-researched views and simply amplify the bias to create a collective systematic error.


So the average of a crowd’s estimates might be able to get close to the correct answer of such trivial tasks as guessing the weight or quantity of something presented right in front of them – tasks that no one needs a qualification, specific education or complex expertise for (one just needs to have experienced Earth’s gravity or seen objects in containers for a while) – but it doesn’t mean the average of a crowd’s opinions will necessarily be better at producing the correct or best medical diagnoses, optimal economic policy decisions or answers to other non-trivial problems. Many of these real-world problems aren’t issues of ‘estimating a quantity that everyone surveyed is allowed to more-or-less directly see in complete form’ but have complex and hidden variables. And even regarding simply guessing a quantity – if people have no basis of experience or expertise regarding the question at hand (e.g. guessing the distance to Andromeda or how many cells humans have on average) then even the collective average might be orders of magnitude out from the correct answer.


Therefore collective wisdom, or the wisdom of the crowd, only works well if it is the wisdom of a diverse crowd of independent minds, and if everyone has an adequate ability to make a judgement. Without an adequate ability to make a judgement, it’ll be less like ‘guessing the number of marbles in a jar’ and more like ‘guessing the number of marbles in a jar having only seen a small portion of the jar, or even having the jar completely obscured’! This case of the ‘wisdom of the crowd’ also highlights that many scientific results have been over-extrapolated to form conclusions that the original studies did not actually reveal (for they lacked ecological validity, or the extent to which the findings of a study can be generalised to appropriate real-life settings). And that too many fuzzy journalists and laypeople fail to scrutinise ‘science news’ or ‘fun facts’ enough when they latch onto an interesting conclusion to share.


Too many people (often with a political bias and agenda) took these experiments to infer that every situation to do with crowds will lead to efficient, optimal or correct results when averaged out or left to the crowd’s own self-regulation – but note e.g. traffic jams, sales rushes in stores or under-regulated banking sectors in real-life scenarios. These are typically scenarios where each participant is acting selfishly and individualistically and one person’s gain is another person’s loss, which then results in bigger-picture losses for almost everyone in the system overall (e.g. individualistic drivers constantly changing road lanes left and right to try to find a faster lane for themselves, when the entire system would travel faster overall if everyone simply cooperated or followed the temporary road signs and drove straight forwards unless taking a junction); and/or scenarios with gross inequalities or disparities between a few powerful players and the rest (e.g. markets that have only a few dominant players, such as monopolies and monopsonies). It’s not to say that self-regulation and free markets cannot or do not work in some situations – it’s to say that it’s evidently wrong to over-extrapolate and zealously or over-simplistically generalise a conclusion based on one’s political biases.


Collective wisdom sometimes (catastrophically) fails in areas that are affected by people’s prior and ingrained beliefs, stereotypes and peer influences (e.g. popular myths, echo chambers) – a crowd can sometimes get things very wrong. Often the problems in these cases are that group members are influencing each other within their own self-selected social groups (which are often based upon sharing similar interests, values and beliefs hence they share a bias) and therefore are not being independent when they make their own judgements, thus generating systematic errors (this is why eyewitnesses must not discuss their versions of accounts with each other before giving their testimonies) and/or the groups are not large and diverse enough where it matters to eliminate sampling errors (the errors are not being cancelled out/de-correlated). In these real-world scenarios, we might call it ‘following the herd’ or a ‘hive mind’, for instance.
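The failure mode above – correlated judgements generating a systematic error that no amount of averaging removes – can also be sketched in a toy simulation. (Again, every figure here is invented for illustration: each ‘herded’ guesser copies 90% of the running public average and mixes in only 10% of their own independent estimate, after one confident early voice has anchored everyone high.)

```python
import random

random.seed(7)

TRUE_WEIGHT = 600       # hypothetical cow weight in kg (made-up figure)
LOUD_FIRST_GUESS = 800  # a confident early voice that everyone hears

def herded_guess(public_average):
    # Each person mostly copies the running public average and mixes in
    # only a little of their own independent (noisy) estimate.
    own_estimate = TRUE_WEIGHT + random.gauss(0, 150)
    return 0.9 * public_average + 0.1 * own_estimate

guesses = [LOUD_FIRST_GUESS]
running_total = LOUD_FIRST_GUESS
for _ in range(9_999):
    g = herded_guess(running_total / len(guesses))
    guesses.append(g)
    running_total += g

herd_average = running_total / len(guesses)
# The crowd is huge, yet the average stays pulled well above the true
# weight – correlated errors don't cancel, however many guessers you add.
print(f"herded average over {len(guesses)} guesses: {herd_average:.0f} kg")
```

Despite ten thousand guessers, the collective average remains dragged towards the loud first guess – because the guesses influence each other, the errors are no longer independent and the cancellation that made the first simulation work never happens.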


Layperson juries are typically only twelve members in size so can lack sufficient diversity for a given case, either because (depending on the area of jurisdiction) chance happens to ‘roll too many threes’ when they’re selected (e.g. many more males than females are selected) or they’re deliberately chosen by the prosecution and defence hence they’re two sets of biased selections. There are many more jobs, roles in society and life experiences than just twelve. And in real-life discussions, even if one member happens to understand e.g. what it’s like to work in a pressurised entertainment industry in a case that relates to this issue, his/her voice can still be crowded out by the rest who don’t have that understanding and choose not to listen to that single person (e.g. because they’d rather maintain their stereotypes that all people in showbiz are unreasonable divas). But of course, real-world practicality means the sizes of juries must be limited, and it is considered that most of the time they come to the appropriate verdict.


To help gather more diverse and independent input during meetings – ask everyone to individually and independently write a very brief summary of their position first, before getting together to discuss their views. Typical open discussion settings give too much weight to the opinions of those who speak early and assertively – potentially dragging others into giving the same views and reducing the diversity of views expressed. It’s important to collect each person’s judgements confidentially, and not initially in a public group discussion, because the first person can anchor subsequent judgements. The problem with open group brainstorming sessions is that they don’t make full use of each individual’s knowledge, ideas and points of view, which may be crowded out or anchored away by other, louder or more strongly-opinionated, people in the group. Mainstream media and social media influence is also more about who shouts the loudest, hypes the most or utilises the most fear or sex, rather than who knows the most or considers the nuances; and this can create and reinforce the systematic errors we’re trying to avoid in order to come to sound collective wisdoms.


In summary, the wisdom of the crowd only works in some situations and not all – and even where it works, we must remember that it’s about the wisdom of a diverse crowd.


Woof! We can work together well as long as everyone is included, everyone gets sufficiently clued-up regarding an issue at hand, and a situation is not dominated by a small number of unequally powerful and influential players or voices.



