
Post No.: 0505

 

Fluffystealthkitten says:

 

Most real science is incremental, i.e. genuine revolutions and paradigm shifts are incredibly rare – science can therefore seem boring to some. Meanwhile, the media, and we as their audience, crave to discover what the next ‘miracle cure’ or hidden ‘scare story’ will be.

 

So the media always wants novel stuff to report, but scientific discovery is usually slow, bumpy and very gradual. If an unexpected experimental result seems newsworthy, it’s far more likely to be an over-hyped false call than a genuinely novel and reliable finding. One example was the 2011 experiment that apparently detected neutrinos travelling faster than the speed of light in a vacuum – a result later traced to faulty equipment.

 

To be a genuinely novel and important finding, it must contradict a large body of pre-existing experimental evidence or research about something. It can and sometimes does happen, but such an extraordinary claim would require extraordinary evidence. One example where this has happened is the discovery that Neanderthals interbred with Homo sapiens and weren’t as unsophisticated as previously believed.

 

Certain science stories receive media attention precisely because they’re novel, counterintuitive and contradictory to current conventional wisdom – but extraordinary claims demand extraordinary evidence, hence these novel or contradictory findings and their news stories need to be met with proportionately high levels of scepticism. It’d be foolish to ignore a forest of evidence to trust in a lonely contradictory tree. It’d be unwise to stake your life on a single piece of news that goes against the grain – no matter how much you wish it to be true, such as chocolate being classed as a ‘health food’. Cocoa beans might have a chance, but they’re not the same thing as chocolate – a prime example of how the media spins things to fashion a more attractive story. An atypical event may stand out only because it’s rare, and thus unlikely, too (see Post No.: 0428).

 

Because these novel and contradictory ‘breakthrough’ stories (which are mostly false calls) get over-reported in the mass media, they can give the impression that science, or an empirical worldview, is nothing but a series of constantly disputed, contradictory, never-certain, continually self-revising, transient fads. If we keep coming across these false calls – then coming across other research that contradicts them, or seeing those false calls debunked or corrected directly – it can make scientific findings seem perpetually subjective, unreliable and dismissible.

 

The way leading theories can appear to ‘shift from one moment to the next’ may be somewhat true of issues at the cutting edges of science (young or thinly-researched areas), but it’s far less true of long-established, well-researched areas (e.g. research concerning what constitutes a healthy overall diet, or the benefits of exercise), because there’s a far greater mountain of prevailing evidence that a genuinely novel and contradictory finding would need to overturn.

 

Reasons why most brand-new research with unexpected findings will turn out to be false calls include: results that were down to chance or a fluffy fluke rather than a real-world pattern; the misrepresentation, misreporting or misunderstanding of statistics; the over-extrapolation of findings (e.g. between IQ test results and practical intelligence); or even intentional hoaxes (e.g. the supposed link between reusing plastic bottles and cancer).
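
To make the ‘down to chance’ point concrete – here’s a minimal sketch in Python (with entirely made-up numbers, not from any real study) showing how, if enough experiments are run on pure noise, around 5% of them will still look ‘statistically significant’ at the conventional p < 0.05 threshold:

```python
import random

random.seed(42)

def fake_experiment(n=30):
    """Compare two groups drawn from the SAME distribution,
    i.e. there's genuinely no real-world effect to find."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(group_a) / n, sum(group_b) / n
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n - 1)
    se = ((var_a + var_b) / n) ** 0.5  # standard error of the difference
    return abs(mean_a - mean_b) / se   # z-like test statistic

# Run 1,000 experiments where the true effect is exactly zero.
trials = 1000
false_positives = sum(1 for _ in range(trials) if fake_experiment() > 1.96)
print(f"'Significant' results from pure noise: {false_positives}/{trials}")
# Expect roughly 50 (~5%) -- each one a potential over-hyped headline.
```

Each of those flukes is a candidate for a ‘breakthrough’ news story, even though there was genuinely nothing to find.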

 

When science appears to produce two contradictory findings, they mightn’t actually be contradictory – both could be correct, because the different studies could’ve used different test groups (e.g. people aged 16-25 versus 36-45, or people predominantly from two different cultures) or different measures of success, for instance. This is why you must always read, question and take note of each study’s particular methodology and results, and not just read the authors’ likely over-extrapolated and over-generalised conclusions – or draw your own over-extrapolated and over-generalised conclusions by reading the headlines you want to read rather than what’s really written. For example, an article might say ‘although not essential, wine is fine for your health in moderation when accompanied by a healthy, balanced diet’ but one just reads it as ‘wine is great for your health full-stop’.
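
As a toy illustration of ‘contradictory yet both correct’ – a hypothetical sketch (all numbers invented) where a treatment genuinely helps one age group and genuinely hinders another, so two honest studies sampling different groups report opposite headline effects:

```python
import random

random.seed(7)

# Invented ground truth: the treatment's effect differs by age group.
TRUE_EFFECT = {"16-25": +2.0, "36-45": -2.0}

def run_study(age_group, n=200):
    """Average measured outcome for one sampled age group."""
    effect = TRUE_EFFECT[age_group]
    outcomes = [effect + random.gauss(0, 1) for _ in range(n)]
    return sum(outcomes) / n

print("Study A (ages 16-25):", round(run_study("16-25"), 2))  # ~ +2.0
print("Study B (ages 36-45):", round(run_study("36-45"), 2))  # ~ -2.0
# Opposite signs make for 'contradictory' headlines -- yet both studies
# are correct about the test group they actually sampled.
```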

 

Different methodologies or algorithms can produce different results – especially in the social sciences, where umpteen variables interact with each other in highly complex ways. For example, romantic date-matching algorithms are sometimes described as ‘matched by science’, but this can still mean a multitude of different methodologies or algorithms – any of which will almost certainly be an incomplete test of compatibility, and arbitrary in how it weights different factors, such as one person’s values over another’s. Likewise, tuning in for two minutes might count as a ‘view’ on a video streaming service, but you might disagree with that definition when measuring a show’s viewership numbers.
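
A minimal sketch of the date-matching point – two hypothetical ‘matched by science’ algorithms scoring the same candidates, differing only in how arbitrarily they weight each factor (all names and numbers invented):

```python
# Two hypothetical 'matched by science' algorithms scoring the same
# candidates -- they differ only in how they weight each factor.
candidates = {
    "Alex":  {"values": 9, "interests": 4, "lifestyle": 6},
    "Brook": {"values": 5, "interests": 9, "lifestyle": 7},
}

def score(person, weights):
    return sum(person[factor] * w for factor, w in weights.items())

algo_1 = {"values": 0.6, "interests": 0.2, "lifestyle": 0.2}  # values-heavy
algo_2 = {"values": 0.2, "interests": 0.6, "lifestyle": 0.2}  # interests-heavy

for name, weights in [("Algorithm 1", algo_1), ("Algorithm 2", algo_2)]:
    best = max(candidates, key=lambda c: score(candidates[c], weights))
    print(f"{name} recommends: {best}")
# Algorithm 1 recommends Alex; Algorithm 2 recommends Brook.
# Both are 'science', yet the arbitrary weightings decide the answer.
```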

 

The best practice is perhaps not to say ‘science says x’ but ‘this specific scientific study, conducted with this specific methodology, says x’. Or, if possible, ‘all of these independent, scientifically conducted studies, with their respective methodologies, say x’ if the aggregate evidence is strongly united in its findings.

 

As an example – although not a scientific study – Usain Bolt, as of posting, holds the records for being the fastest human on Earth over 100m and 200m on an Olympic running track (it may be safe to assume the same for the distances in-between and on other surfaces, yet this would still be an assumption). We shouldn’t over-extrapolate him to be ‘the fastest human on Earth full-stop’ because he’s not over, say, 1500m or a marathon distance, or even necessarily over 10m or 1m. ‘100m’ is an arbitrary number, unit and, in turn, distance for finding the ‘fastest human’ runner, or even sprinter.

 

Being number one in the music charts could’ve been measured per day or per month rather than per week. In reality TV competitions, eliminating the worst contestant each week encourages different strategies compared to simply promoting the best contestant – i.e. ‘try not to be the worst’ or ‘throw someone under the bus’ rather than ‘try to be the best’. Think hard enough and you’ll realise that a lot of competition rules could have been different and could’ve produced different winners – hence those rules are by no means objective or the only way they could’ve been, such as when the handball rules in football change!
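
A toy sketch of the charts example, with invented sales figures – the same raw data crowns a different number one depending on whether the chart period is a day or a week:

```python
# Invented daily sales for two songs over one week.
sales = {
    "Song A": [100, 100, 100, 100, 100, 100, 100],  # steady seller
    "Song B": [300, 20, 20, 20, 20, 20, 20],        # big launch day
}

day1_winner = max(sales, key=lambda s: sales[s][0])    # daily chart, day 1
week_winner = max(sales, key=lambda s: sum(sales[s]))  # weekly chart

print("Daily number one: ", day1_winner)   # Song B (300 vs 100)
print("Weekly number one:", week_winner)   # Song A (700 vs 420)
# Same raw data; the arbitrary chart period decides the 'winner'.
```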

 

Science is not a centralised bible of dogmatic or sacred texts, so there can legitimately be reasoned dispute and differences between scientists regarding their chosen methodologies for conducting their research, their chosen operational definitions or measures of success, their interpretations of the data or results, as well as their individual levels of conscientiousness in avoiding unintentional errors, their individual research budgets, etc. – even when they’re all behaving as honestly and as impartially as possible.

 

So not all experiments testing for – or supposedly testing for – the same things are necessarily the same. There isn’t always just one way of trying to test for something. An analogy is a set of experiments trying to find ‘the fittest person there is’ – one group tests swimming, and the best swimmer is concluded by this group to be ‘the fittest person there is’; another group tests burpees, and the person who can do the most burpees within a certain time limit is concluded by this group to be ‘the fittest person there is’. They’re really different tests, so they can present different findings – but both groups conclude with the same over-extrapolation, that they’ve each found ‘the fittest person there is’, even though they’ve awarded that title to different people.
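
Here’s that analogy as a minimal sketch (hypothetical athletes and scores) – two operational definitions of ‘fittest’, two different champions:

```python
# Hypothetical athletes and their results on two different 'fitness' tests.
athletes = {
    "Sam": {"swim_100m_secs": 55.0, "burpees_per_min": 20},
    "Kim": {"swim_100m_secs": 70.0, "burpees_per_min": 35},
}

# Group 1's operational definition: fastest swimmer (lower time wins).
fittest_by_swim = min(athletes, key=lambda a: athletes[a]["swim_100m_secs"])
# Group 2's operational definition: most burpees (higher count wins).
fittest_by_burpees = max(athletes, key=lambda a: athletes[a]["burpees_per_min"])

print("Group 1 crowns:", fittest_by_swim)     # Sam
print("Group 2 crowns:", fittest_by_burpees)  # Kim
# The same over-extrapolated title, 'the fittest person there is',
# gets awarded to different people by different definitions.
```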

 

Even if both groups tested ‘fitness’ via swimming, burpees and running (or any other incomplete combination of physical tests), they can still conduct these tests in different ways (e.g. a non-stop race versus a series of shorter stop-start races, even if they both total the same overall distance). This is why, once again, we must take note of an experiment’s methodology, because a seemingly trivial difference between it and another experiment might be important.

 

How we define and calculate ‘coronavirus-related deaths’ (e.g. excess deaths compared to the previous year, or maybe the average of the past three years; or the number of people who tested positive then died within a month, or maybe within two months) makes different measures incomparable with each other. This illustrates the benefit of standardisation over competing measures/standards, and again the importance of reading the methodologies, not just the results or given conclusions, of research.
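
A minimal sketch of this, using the ‘died within a month versus within two months of a positive test’ definitions from above (the records are invented):

```python
from datetime import date, timedelta

# Invented records: (date of positive test, date of death or None).
records = [
    (date(2020, 4, 1), date(2020, 4, 20)),  # died 19 days after testing
    (date(2020, 4, 5), date(2020, 5, 20)),  # died 45 days after testing
    (date(2020, 4, 10), None),              # survived
]

def deaths_within(days):
    """Count deaths occurring within `days` of a positive test."""
    return sum(
        1 for tested, died in records
        if died is not None and (died - tested) <= timedelta(days=days)
    )

print("Deaths within 30 days of a positive test:", deaths_within(30))  # 1
print("Deaths within 60 days of a positive test:", deaths_within(60))  # 2
# Same underlying data; different definitions; different headline figures.
```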

 

There’s a theme here – read articles, papers or posts fully and carefully. Don’t just rely on headlines, titles or abridged news. The other message is to be careful not to over-extrapolate conclusions that might only apply under highly specific conditions/criteria and only to the set of subjects/objects that were actually tested. For example: in a lab versus a field; with one type of experimenter over another; after a full or empty stomach; on a sunny or rainy day; between this subset of cars tested compared to all cars in the world ever; etc. – real-world variables can be endless and may be unexpectedly relevant to what’s being tested for. Even if we randomise the subjects/objects in a study, we seldom randomise starting from a complete set (e.g. randomising from every animal from Australia doesn’t mean randomising from every animal in the entire world, ever – there might be something systemically peculiar about Australian animals?! Jimbo the Tasmanian devil definitely agrees in his own opinion…). And we seldom test under a full variety of test conditions – if such an approach is even practical within a single study, or if a field of research can ever be categorically completed.
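
A hedged sketch of that randomisation point (all values invented) – sampling perfectly randomly from an incomplete pool still won’t tell you about the complete set:

```python
import random

random.seed(3)

# Invented trait values: the 'Australian' pool is systematically
# different from the much larger worldwide pool.
australia = [random.gauss(10, 2) for _ in range(1_000)]
world = australia + [random.gauss(5, 2) for _ in range(99_000)]

# Perfectly random sampling WITHIN the Australian pool...
sample = random.sample(australia, 100)
sample_mean = sum(sample) / len(sample)
world_mean = sum(world) / len(world)

print(f"Random sample (Australia only): {sample_mean:.2f}")  # ~10
print(f"True worldwide average:         {world_mean:.2f}")   # ~5
# Randomisation removes bias within the pool you drew from -- it can't
# remove the bias in how that pool was chosen in the first place.
```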

 

This explains why some statistics might declare that crime, poverty or unemployment is up whilst other sources simultaneously declare they’re down – it can depend on how each study has defined and measured crime, poverty or unemployment.

 

Yet absent stronger countering evidence or theories, such studies can (and arguably should) produce conclusions that can (or should) be tentatively inferred to apply to other conditions – i.e. we should still follow wherever the preponderance of evidence that we currently know of points. But bear in mind that future research into the same area may produce contradictory findings for good reasons – reasons that don’t make science a farce because ‘scientists cannot seem to agree on anything’. Every well-conducted experiment ultimately adds towards eventually painting the fullest, most reliable and most accurate picture of the universe. It’s not the fault of science, and it’s not always the fault of scientists – who may only be presenting a tentative conclusion themselves – but of the media who exaggerate it! Our duty is to fully read science news articles, or even the original academic papers, and to not over-generalise their findings, or to only do so tentatively. Meow.

 

Looking back at a lot of the scientific or ‘scientific’ claims made about ‘wonder ingredients’, ‘miracle treatments’ or ‘scare stories’, and seeing which ones have stuck to become accepted mainstream understanding and which have faded away into obscurity, is broadly a good indicator of which scientific claims were sound and which were not. (It’s only broadly a good indicator though because lots of claims persist as myths that just won’t disappear e.g. that microwaves cause cancer.)

 

It reminds us not to get too hyped up whenever a novel piece of research that’s yet to be scaled up or independently replicated suggests some novel ‘wonder ingredient’, ‘miracle treatment’ or ‘scare story’. And it also reminds us not to automatically forget the tried-and-tested-to-the-nth-degree and consistent findings, such as that a varied diet full of fruit and vegetables of many colours is good for humans. ‘Findings that are so boringly consistent with what’s been said so many times before that they’re hardly newsworthy’ are actually the most robust and reliable findings possible!

 

So we mustn’t be gullible, misled or a slave to the media or marketing whenever they over-hype or over-sell novel or contradictory findings. Extraordinary claims require extraordinary evidence.

 

Meow.

 


 
