Post No.: 0332
With scientific experiments, scientists must declare their hypotheses before they begin gathering any data. This is because – especially if you’re going to measure a lot of different things, and especially if you’re only going to take a small number of samples – you are almost guaranteed that at least one of those measures will show a statistically significant result. You won’t know which one beforehand, but chances are you’ll get one purely by chance – hence the risk of false or spurious positives. The dodgy practice of hunting for patterns in data that can be presented as statistically significant when there’s no underlying real-world effect is called ‘p-hacking’ or ‘data dredging’.
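This multiple-measures problem is easy to demonstrate with a quick simulation. The sketch below is my own illustration (not from any real study): every ‘measure’ is pure noise, so the null hypothesis is true for all of them by construction, yet a small sample plus many measures still tends to yield at least one ‘significant’ z-test result.

```python
import random
from math import sqrt

N_SAMPLES = 15  # deliberately small sample, as described above
Z_CRIT = 1.96   # two-sided threshold at the 95% confidence level

def count_false_positives(n_measures: int, seed: int) -> int:
    """Count measures that look 'significant' even though all are pure noise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_measures):
        # Standard normal draws: the true mean really is zero.
        data = [rng.gauss(0, 1) for _ in range(N_SAMPLES)]
        mean = sum(data) / N_SAMPLES
        z = mean * sqrt(N_SAMPLES)  # z-statistic with known sigma = 1
        if abs(z) > Z_CRIT:
            hits += 1
    return hits

print(count_false_positives(20, seed=7))  # often 1 or more, purely by chance
```

Run it across different seeds and most runs will report at least one ‘significant’ measure – exactly the spurious positive that declaring a single hypothesis in advance guards against.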
The ‘Texas sharpshooter fallacy’ is about painting the target around the bullet holes after one has fired, or improperly asserting a cause to explain a cluster of data when the cluster could be due to sampling error if the sample size is too small, or due to pure chance if the total amount of data available is large but one is only focusing on a small subset of it. The metaphor is apt because you’re never going to show a ‘miss’ if you paint the bullseye around wherever you shot(!) And that’s why one needs to declare one’s target, or hypothesis, before gathering any data.
If an experiment doesn’t achieve the desired result then scientists might convince themselves that they’ve messed up somewhere in the methodology or execution, so they may repeat the experiment again and again until they finally get the result they want, or they may arbitrarily drop outlier data points that don’t give them the averages they want. But any positive result ‘discovered’ this way will highly likely have only ‘worked’ due to pure chance once more.
Both of these questionable practices are about abusing the ‘95% minimum confidence level’ generally required to show a positive result from an experiment i.e. purposely hunting for that up-to-1-in-20 pure chance result – which would be like trying to predict 20+ different sports results (simultaneously or one at a time), ignoring all of the wrong predictions you made, highlighting just the one or however many correct predictions you made, and then declaring yourself a brilliant psychic(!)
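The arithmetic behind that ‘up-to-1-in-20’ figure is worth spelling out: if each test independently has a 5% chance of a false positive, the chance that at least one of 20 tests comes out ‘significant’ by pure luck is surprisingly high.

```python
# Probability that AT LEAST ONE of 20 independent tests, each with a 5%
# false-positive rate, comes out 'significant' purely by chance:
p_at_least_one = 1 - 0.95 ** 20
print(round(p_at_least_one, 2))  # → 0.64, i.e. about a 64% chance
```

So the sports-psychic trick isn’t even a long shot – with 20 predictions, landing at least one ‘hit’ is more likely than not.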
Ways to manipulate, or ‘clean up’, data or findings include – conducting more trials when the results are borderline until one or a few eventually come out positive (which, once again, can happen purely by chance) and then bundling all of this data together to show a minor overall positive result; measuring a lot of variables so that something is bound to come out positive purely by chance (it’s like rolling more dice – there’ll be more chances of rolling some sixes somewhere) and then declaring an ad hoc hypothesis (the Texas sharpshooter fallacy); playing ‘best of three/five/seven…’ until one gets the results one wants; stopping a trial early if the results look positive early on, or extending it if they’re ‘nearly significant’ at the end of the intended trial duration; or altering the number of decimal places used (if the experiment involves a highly sensitive or chaotic mechanism then this rounding could result in a completely different outcome).
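The ‘stop early when it looks positive’ trick can also be simulated. The sketch below is my own illustration, under the simplifying assumption of a z-test on pure noise: the experimenter peeks at the data after every batch of 10 samples and stops as soon as a result looks ‘significant’. Even though there is no real effect, this repeated peeking inflates the false-positive rate well above the advertised 5%.

```python
import random
from math import sqrt

Z_CRIT = 1.96     # nominal 95% confidence threshold
BATCH = 10        # peek at the data after every 10 samples
MAX_SAMPLES = 100

def peeking_trial(rng: random.Random) -> bool:
    """Return True if the trial ever looks 'significant' at any peek.

    The data is pure noise, so any True result is a false positive."""
    data = []
    while len(data) < MAX_SAMPLES:
        data.extend(rng.gauss(0, 1) for _ in range(BATCH))
        mean = sum(data) / len(data)
        z = mean * sqrt(len(data))  # z-statistic with known sigma = 1
        if abs(z) > Z_CRIT:
            return True  # stop early and declare a 'positive' result
    return False  # honest conclusion: nothing significant

rng = random.Random(0)
trials = 2000
rate = sum(peeking_trial(rng) for _ in range(trials)) / trials
print(rate)  # well above the nominal 0.05
```

With ten peeks per trial, the overall false-positive rate typically lands around 15–25% rather than the 5% the ‘95% confidence level’ promises – which is exactly why trial durations and stopping rules should be fixed in advance.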
There’s also ignoring or not chasing up participant dropouts, who’ll most likely have dragged the results down (minimising the dropout rate is important – but not for the sake of the study, the scientist’s reputation or a research grant donor’s interests if retaining those participants would be unethical); selectively deleting or keeping outliers to one’s advantage; cherry-picking certain subgroups that help give one a positive result; unfairly adjusting the baselines for the intervention and control groups to suit one’s aim; fudging the data for one group but not for the other; assuming correlation is causation; using inappropriate statistical tests; or simply exaggerating the conclusions!
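‘Selectively deleting outliers to one’s advantage’ can likewise be made concrete. In this hedged sketch (my own illustration, not any real study’s method), we start from pure-noise data that shows no effect, then repeatedly discard whichever remaining point most drags the mean down, until a z-test finally crosses the significance threshold:

```python
import random
from math import sqrt

def hack_by_trimming(data: list[float], z_crit: float = 1.96) -> int:
    """Return how many 'outliers' had to be dropped before pure-noise
    data looked 'significantly' positive (up to dropping everything)."""
    data = sorted(data)  # ascending, so the lowest point is first
    dropped = 0
    while data:
        n = len(data)
        z = (sum(data) / n) * sqrt(n)  # z-statistic with known sigma = 1
        if z > z_crit:
            return dropped  # 'significance' achieved by trimming
        data.pop(0)  # delete the point that hurts the desired result most
        dropped += 1
    return dropped

rng = random.Random(1)
sample = [rng.gauss(0, 1) for _ in range(30)]
print(hack_by_trimming(sample))  # number of 'outliers' quietly deleted
```

Because the deletions are chosen to favour the hypothesis rather than for any principled reason, a ‘positive’ result can almost always be manufactured this way – which is why outlier-exclusion criteria, like hypotheses, should be declared before looking at the data.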
Some of these kinds of tricks aren’t necessarily hiding anything but can fool people who don’t read beyond the abstracts and conclusions of academic papers. Scientists may also sometimes state that ‘more research is required’ if they don’t get the results they want – sometimes such a claim is legitimate, but occasionally it’s not. (They may also claim that more research is needed in order to continue attracting funding – the continual pursuit of grant money is a perennial task for a scientist, and this may affect what they say.)
Scientists often present early versions of their work in conference presentations, where the bar for getting a study accepted is much lower than at a journal. It also turns out that positive findings presented at conferences subsequently end up getting accepted by journals far more often than negative results do. This suggests that journals are being swayed too much by early positive results presented at such conferences.
But although individual scientists can be corrupt or do sloppy work – the scientific community can collectively self-correct for this in the long run via independent ‘peer reviews’. These peer reviews are sometimes anonymous and double-blinded i.e. while a paper is being reviewed, the authors don’t know who is peer reviewing it, and the peer reviewers don’t know who authored it, in order to reduce potential biases. We all have biases, and because we’re not aware of our own unconscious biases, it usually takes others to help point them out to us. The scientific community is basically about different, expert yet preferably diverse, people pointing out each other’s mistakes and biases so that they can hopefully be amended or retracted.
‘Replication studies’ conducted by independent scientists can also double-check the results of an experiment performed by another scientist. This is why the methodology of a study or experiment must be clearly laid out in a scientific paper so that someone else can copy exactly what the original scientist(s) did to get the results they claimed. Woof!
A healthy group of people must be capable of, and free to, self-criticise their own group – whether academics critiquing other academics in a peer review process, politicians criticising other politicians in parliament, or whoever scrutinising whomever – otherwise ideas never improve and facts don’t get checked. So if you, say, don’t believe that ‘the greenhouse effect’ is real, then you can look at the science and go test the effect for yourself with your own experiments by using carbon dioxide (e.g. from dry ice) or methane, a tube, a heat source and a thermometer. You can proactively answer for yourself whether there’s a scientific conspiracy regarding climate change science or not.
Science isn’t a fraternity of conspirators who only allow people with similar beliefs to publish their papers – it’s more like ‘collaborative warfare’. To make a name for themselves, scientists try to carve out unique, often confrontational, positions – challenging common wisdom and seeking out data that will disrupt the current ways of thinking – because one won’t ever make a name for oneself by merely publishing research that confirms what other people have already said. It’s exciting when something novel is genuinely found but distracting when scientists are merely being antagonistic in order to stand out from the established crowd. And this is why it’s impressive whenever a bunch of independent scientists do, via peer reviews and replication studies, collectively reach an overwhelming majority consensus on something!
When there’s a strong furry consensus in the scientific community (such as that regular physical exercise is overall very beneficial for our health) then it’s easier to trust it. But when there’s varied disagreement amongst reputable independent experts (such as over what’s the most important factor for achieving economic growth) then it suggests that the answers are less black-or-white. Yet even a scientific consensus isn’t as important as understanding the facts that form the basis of such an agreement amongst the majority of scientists. And to do this – you must research and scrutinise the methodologies used in a study, analyse the raw data for yourself, take into account any conflicts of interest, and then come to your own conclusion about what it all means. (Read Post No.: 0200 for more about why.)
So arguably more important than peer reviews is for absolutely anyone in the world to be able to find and read a published paper in full, so that everybody can draw their own conclusions about a study. However, a lot of ‘science news’ in the general media is based upon unpublished papers, particularly when it comes to scare stories – and by the time a paper does get published (and it’s revealed that the paper and its conclusions are incongruous or flawed), the scare story is already out there and primed in the minds of the public, and it’s too late! Scare stories are notoriously hard to retract from people’s minds. Journalists tend to fight with each other to be the first to break a piece of news, hence they’re sometimes just too eager to publish stories that are too early to fully verify – and scare stories attract a lot of readers too. So be suspicious of a science story that doesn’t reliably link back to (or agree with the data or results of) the original paper so that you can check it out. And if a scientific claim presented in the media hasn’t been submitted to a reputable academic journal or site to be peer reviewed, then be suspicious or cautious.
Some companies claim that they don’t want to reveal their trade secrets to competitors, hence why they seldom publish their scientific research in the public domain to be independently scrutinised and peer reviewed – but this means that it’s difficult for the scientific community, and for customers or potential customers, to appraise their claims. And although mere recipes aren’t patentable, this excuse is often a cop-out because, in many of these cases, they could apply for patent protection instead of relying on secrecy.
We all tend to like being the first to know something too, hence we can fall foul of trusting ‘science news’ that’s too early to be conclusive. Occasionally, something can turn out to be a deliberate hoax, which then gets unwittingly perpetuated by ordinary people (e.g. the claim that reusing plastic water bottles will cause cancer). Fake science news is unfortunately almost an everyday thing on social media nowadays. Since it can take a while for consensuses to settle or self-correct in science, we should actually be particularly sceptical of novel findings presented in the media, and we should rationally have more confidence in consensuses that have stood the test of time.
…So eat your fruit and vegetables, and get some regular exercise!