Post No.: 0017
There are many different types and qualities of scientific research, and potentially different ways to interpret results. So whenever you read or hear that ‘science says x’ – you must still scrutinise it, question it and not just take the given conclusion at face value as if ‘it’s an unchallengeable fact because it’s scientific evidence’. This questioning of given evidence or answers (even long-held ones), rather than a reliance on faith or on the supposed unimpeachability of given evidence or answers, is what makes science unlike a religion.
Just about all experiments involve a compromise between the ideal and the practical – considerations of ethics, current technology and budget (e.g. lab versus field studies) – so one must review every scientific paper or news story and decide for oneself whether these compromises are significant enough to affect the usefulness of the results or not.
If human subjects are used, the randomisation of the participants in a study is important to minimise the chance of systematic or sampling biases. For example, suppose a study is testing a new fitness app, and the treatment group (the group that’ll use the app during the study) happens to contain a lot of already fit people, while the control group (the group that’ll use some other thing or nothing at all, and whose results will be compared with the treatment group’s) contains a lot of quite unfit people – if any differences are then discovered between these groups, one cannot be confident whether they’re down to the app or down to the pre-existing systematic differences between the two groups. Randomisation probabilistically ensures an even mix of types of people between study groups (thus any differences are likely to cancel out) – particularly when the sample size (the number of participants in each group) is large.
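The balancing effect of randomisation can be seen in a small simulation. This is a minimal sketch with made-up numbers – the fitness scores, group sizes and seeds are all hypothetical illustrations, not data from any real study:

```python
import random
import statistics

def randomise(participants, seed=0):
    """Shuffle participants and split them evenly into two study groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical fitness scores for 200 participants: a mix of fitter and
# less fit people drawn from a normal distribution.
rng = random.Random(42)
fitness_scores = [rng.gauss(50, 15) for _ in range(200)]

treatment, control = randomise(fitness_scores)

# With random assignment, any pre-existing difference between the group
# means tends to be small relative to the spread of the scores themselves,
# and it shrinks further as the sample size grows.
gap = abs(statistics.mean(treatment) - statistics.mean(control))
print(f"mean fitness gap between groups: {gap:.2f}")
```

Contrast this with the fitness-app example above: if assignment were based on who volunteered first or who seemed keenest, that gap could be large and systematic rather than small and random.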
Also, if participants aren’t randomly selected from the population as a whole (before anyone even gets placed into the treatment or control groups) then it can limit the conclusions that can be drawn from the study. For example, a study that involved only teenage males from London who volunteered to participate would reduce how confidently one can generalise any results to people who aren’t teenage males from London, or aren’t the type to volunteer to take part in such experiments. This can be a problem with using volunteers – but on the other hand it can be highly unethical to force random people to participate in experiments or to secretly test them without their knowledge or consent, hence a compromise. Focusing on a narrow demographic group may or may not be the desired approach for a particular study, but either way, this factor must be accounted for when attempting to generalise or extrapolate any conclusions to other demographic groups or the broader population.
The blinding of the administration of a treatment or control is also important, to prevent conscious and unconscious biases in how people react when giving or taking a treatment or control. For instance, if you knew you were receiving a placebo rather than the real drug then you might react with relief or disappointment, and it could be this physiological reaction that affected your results rather than the placebo or drug itself. Or the scientist administering the placebo or drug could behave subtly differently depending on which one they’re giving, thereby indirectly and unintentionally giving the game away to the participants via e.g. their body language or tone. If neither the participants nor the administrators know which treatment or control they’re getting or giving then it’s called ‘double-blind’. ‘Triple-blinding’ goes even further by making sure the participants remain anonymous to those conducting the statistical analyses of the results, in case e.g. reading people’s names or nationalities could introduce biases.
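Operationally, double-blinding is often achieved by having an independent third party pre-label identical-looking treatment kits with opaque codes, keeping the code-to-arm key sealed until analysis. The sketch below is a hypothetical illustration of that idea – the kit-code format and function names are invented for this example:

```python
import random

def blind_allocation(n_participants, seed=7):
    """A third party assigns each participant a coded kit.

    Returns (labels, key): 'labels' is all that participants and
    administrators ever see (one opaque kit code each), while 'key' maps
    codes back to 'drug'/'placebo' and stays sealed until unblinding.
    """
    rng = random.Random(seed)
    # Half the kits contain the drug, half the placebo, in shuffled order,
    # so a kit's position or number reveals nothing about its contents.
    arms = ['drug'] * (n_participants // 2) + \
           ['placebo'] * (n_participants - n_participants // 2)
    rng.shuffle(arms)
    key = {}
    labels = []
    for i, arm in enumerate(arms):
        code = f"KIT-{i:04d}"
        key[code] = arm
        labels.append(code)
    return labels, key

labels, key = blind_allocation(10)
# Administrators only handle codes like 'KIT-0003', so their body language
# or tone can't leak which arm a participant is in.
print(labels)
```

Because neither the giver nor the receiver can infer the arm from the code, the behavioural leaks described above (relief, disappointment, a tell in the administrator’s manner) have nothing to latch onto.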
So query the methods of randomisation and blinding. People also often drop out of studies, and there may be a systematic pattern to those who drop out that’ll skew the results (e.g. the drug made them feel so sick that they didn’t want to carry on to the end of the study – these are obviously important data points too), so note the dropout rate and the reasons for each and every dropout too.
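Tracking attrition is straightforward in principle: record every dropout with its reason, per group, so that lopsided patterns become visible instead of silently vanishing from the results. A minimal sketch with entirely fictional records:

```python
from collections import Counter

# Hypothetical trial records: each participant's group, whether they
# completed the study, and (if not) the stated reason for dropping out.
records = [
    {"group": "treatment", "completed": False, "reason": "nausea"},
    {"group": "treatment", "completed": True,  "reason": None},
    {"group": "treatment", "completed": False, "reason": "nausea"},
    {"group": "control",   "completed": True,  "reason": None},
    {"group": "control",   "completed": False, "reason": "moved away"},
    {"group": "control",   "completed": True,  "reason": None},
]

def dropout_report(records):
    """Tally dropout rates and reasons per group. A lopsided pattern
    (e.g. nausea appearing only in the treatment arm) is itself an
    important finding, not noise to be discarded."""
    report = {}
    for group in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == group]
        dropped = [r for r in rows if not r["completed"]]
        report[group] = {
            "dropout_rate": len(dropped) / len(rows),
            "reasons": Counter(r["reason"] for r in dropped),
        }
    return report

report = dropout_report(records)
# In this toy data, 2 of 3 treatment participants dropped out, both
# citing nausea -- exactly the kind of pattern a reader should look for.
print(report["treatment"])
```

A study that only analyses those who finished (ignoring why the others left) can make a drug look far better tolerated than it really was.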
These are just a few of the many things one must query whenever scientific experiments are done. As you should hopefully be beginning to see, there are a lot of potential nuances and variations between different scientific studies, and these nuances and differences affect how confident we can be in any conclusions we draw from them – i.e. just because it’s all ‘science’, not all studies offer the same level of confidence in their results or therefore their conclusions. Science is about this scrutiny.
If any part of the method is not declared in full then this is a warning sign too – be wary of any incomplete methodology or results, because this lack of full disclosure or transparency prevents us from properly scrutinising a study, prompting questions like ‘what are they trying to hide?’ or ‘are they consciously fudging the results to give the answers they want rather than the unvarnished, honest results?’
So different studies/experiments/research can vary in their accuracy and reliability, practicality, budget, time and ethics. All types of scientific studies therefore have their pros and (innocent) cons or limitations, and virtually all can have some justification for existing – but the key point is that everything in the methodology and every single result must be declared, plus any conflicts of interest too (e.g. scientists who are on the payroll of a company that sells the product being tested have at least a subconscious vested interest in trying to paint it in a favourable light) – so that you, the reader, can make up your own mind about it all (hopefully with a mind that understands the pros and cons of the different types of research, and ideally with a mind that can be bothered to read the original scientific paper sources from start to finish and not just the conclusions sections).
Therefore just because you hear someone say ‘science revealed x’ – understand that not all scientific studies (and therefore interpreted scientific conclusions) are equal. Some studies are well conducted and some are flawed or may even contain wilful fraud, and any interpretations of any results, i.e. the conclusions, can be potentially subjective too (e.g. is something that is dense in calories good or bad? Does that number indicate a high risk or a merely moderate risk, all things considered? Is this country ‘the healthiest of the poor’ or ‘the poorest of the healthy’?)
Lots of people currently believe in a lot of rubbish because they’ve naïvely assumed for a long time that if someone said ‘science said x’ then it must be true without question (and that anyone who even merely begins to question any scientific results must be an idiot who cannot accept ‘objective conclusions’(!)) They’ve ironically treated science as a religion that reveals only sacred, unquestionable truths(!) But scientific claims can and should be questioned, so scrutinise the details. And don’t look at studies in isolation either, because the more robust truths tend to be the ones that have been cross-confirmed by many other independent parties and independent studies too (bearing in mind the size of each study).
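Why does the size of each study matter when weighing up several of them? A common approach is to weight each study’s estimate by how much evidence it carries, rather than counting all studies equally. This toy sketch uses a simplified size-weighted average (real meta-analyses weight by inverse variance and model between-study differences); the study numbers are invented:

```python
# Hypothetical effect estimates from three independent studies of the
# same question: one small study with a dramatic result, two large
# studies with modest results.
studies = [
    {"n": 30,  "effect": 0.90},  # small study, big effect
    {"n": 500, "effect": 0.10},  # large study, small effect
    {"n": 400, "effect": 0.15},  # large study, small effect
]

# Size-weighted pooling: each study's effect counts in proportion to its
# number of participants, so the small dramatic study barely moves the
# overall estimate.
pooled = sum(s["n"] * s["effect"] for s in studies) / sum(s["n"] for s in studies)
print(round(pooled, 3))  # -> 0.147
```

A headline built on the small study alone (‘effect of 0.9!’) would be badly misleading next to the pooled picture – which is exactly why robust conclusions rest on many independent, adequately sized studies.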
Of course one must apply sound logic and/or provide (stronger) counter-evidential support for one’s disagreements, rather than disagree with something just because one doesn’t want to believe in it – but the act of scepticism itself should be applauded rather than ridiculed.
If a piece of research is full of potential confounds or possible alternative explanations for its results then one might question the point of it – and question such a study’s agenda too (e.g. to boost the business of a particular product or the promotion of something, be it from a manufacturer, marketer or whoever). The motive for extrinsic rewards such as money can lead to distortions of the truth – either outright lies and bull****, or at least over-hyped exaggeration, over-extrapolation, cherry-picking and confirmation bias (but do note that biases can occur even when they’re not conscious or intentional).
(Reputable) scientific journals will try to sort the well-conducted, well-documented, original, interesting and outstanding pieces of research from those that are not, but sometimes poor or fraudulent pieces do slip through (and criteria like ‘interesting’ or ‘outstanding’ are subjective and therefore subject to the journals’ own biases – e.g. boring or negative results are important to know too because they contribute to the total picture). Journals also have limited page space, and their business is readership. And journals don’t check study results themselves (e.g. they don’t have their own Large Hadron Collider!) – that is the job and vital value of independent parties replicating studies.
There is lots more to be said about science if all of the above is new to you, but I hope you now appreciate that not all scientific studies are the same, and that some scientific conclusions are therefore more robust than others. Studies are also conducted by humans, who can make mistakes or have individual conflicts of interest. So from now on, whenever you read or hear ‘science says x’, I hope you’ll read entire articles with a critical mind rather than just the headlines and sub-headers.
Well, you could remain a lazy and passive news consumer if you currently are one, but this could cost you in e.g. money or unnecessary stress.
SHOCK – SCIENCE SAYS PUPPIES CAUSE CANCER!
Especially those with red flame brows.
Alright! I reckon I know which cheeky fluffball is doing this now…