Post No.: 0800
Furrywisepuppy says:
According to good practice – a scientific study should declare, beforehand, its hypothesis, research design, instruments and materials, procedures and protocols, data management strategy, coding, and statistical analysis plan. All captured data and interpretations should then be recorded afterwards.
Fraudulent practices include – plagiarism (unreferenced or untraceable sources, including sources that lead to false replications), unreported conflicts of interest, and the undue influence of personal values (this can be the trickiest to spot). These all run counter to the openness, transparency, objectivity, and critical, systematic empirical testing of the scientific process.
Questionable research practices include – HARKing (Hypothesising After the Results are Known, thus invoking the hindsight bias), cherry-picking or selective omissions, p-hacking (manipulating the data or analysis to contrive a statistically significant result), and data snooping (gathering data, or stopping at an arbitrary sample size, until one gets the result one wants). These are all, in a way, fine if reported – it’ll then be up to the reader to read about them and account for them.
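To see why unreported data snooping is such a problem, here’s a minimal Python sketch (my own illustration – the sample sizes, batch size and threshold are arbitrary assumptions, not from any particular study). It simulates a researcher who re-tests after every small batch of data and stops as soon as the result looks significant, even though there’s no real effect at all:

```python
# A minimal sketch of 'data snooping' / optional stopping, assuming
# numpy and scipy are available. All numbers here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def snooped_test(start_n=10, step=5, max_n=100, alpha=0.05):
    """Collect data with NO real effect (true mean is 0), re-testing
    after every batch and stopping as soon as p < alpha."""
    data = list(rng.normal(0.0, 1.0, start_n))
    while True:
        if stats.ttest_1samp(data, 0.0).pvalue < alpha:
            return True               # 'significant' -- a false positive
        if len(data) >= max_n:
            return False
        data.extend(rng.normal(0.0, 1.0, step))

trials = 2000
hits = sum(snooped_test() for _ in range(trials))
print(f"False positive rate with snooping: {hits / trials:.1%}")
# Typically well above the nominal 5% -- peeking and stopping early
# gives chance many extra opportunities to masquerade as a real effect.
```

Declaring the sample size and analysis plan in advance removes this degree of freedom – which is partly what the pre-registration solutions below address.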
Some regulatory solutions that have been introduced to tackle some of these problems include requiring the hypotheses, research designs and statistical analysis plans of proposed studies to be pre-registered and pre-accepted, with the studies then published regardless of their results. Additionally, all data, full documentation and statistical analysis plans should be made freely and openly available, and all requests for access to this data should be fulfilled (except where this would affect the privacy of the participants). The decision to publish should then depend not on the research results or whether a study was completed, but on its hypotheses and proper research design – which reduces the publication bias.
Because the quantity of citations is used as a proxy for the quality of a paper (even though this isn’t always reliable – e.g. a paper may be cited as a critique rather than for positive reasons, authors can self-cite their own previous papers, the count can simply depend on whether a research topic is popular, and so on), and because the number of published papers is what goes down on a researcher’s CV/résumé and builds their reputation – there’s pressure on researchers to publish lots of papers to increase the chances of getting their papers cited. This can lead to self-plagiarism (publishing the same intellectual material multiple times). Making it about money (seeking research grants) and academic fame, and allowing too much to be self-regulated in the past, has unfortunately meant that there has been much research fraud.
In academia, there’s the notion of ‘publish or perish’ – if an academic doesn’t publish lots of manuscripts in the hope of receiving plenty of citations for his/her work, then he/she’s not going to get noticed in his/her academic career. It’s also pretty common for academic journals to charge authors to have their papers published (even if the research was publicly funded!) This has led to predatory, fake or low-quality journals agreeing to publish people’s work – the academic wants his/her research published and will get it published in such journals even if the research is of incredibly poor quality, because the journal mainly cares about getting paid rather than the quality of the research. Therefore we cannot presume that, just because a scientific paper has been published somewhere, the peer review process for it has been thorough.
This means that the reputation of a particular scientific journal matters greatly – journals are scored according to their ‘impact factor (IF)’, which is the average number of citations that articles recently published in that journal have received. (This proxy for determining the relative importance of a journal within its field faces numerous criticisms though, including regarding the score’s validity and how it can be gamed.)
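For concreteness, here’s a minimal sketch of the standard two-year impact factor calculation (the journal and all the numbers below are hypothetical, purely for illustration):

```python
# The standard two-year impact factor: citations received this year to
# articles a journal published in the previous two years, divided by the
# number of citable items it published in those two years.
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. a hypothetical journal whose 2022-2023 articles received 500
# citations during 2024, from 200 citable items published in 2022-2023:
print(impact_factor(500, 200))  # 2.5 citations per article on average
```

The formula also hints at how the score can be gamed – e.g. nudging the numerator up through journal self-citations, or the denominator down by classifying fewer items as ‘citable’.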
But even reputable scientific journals can occasionally fail to meet high standards due to time pressures and checking errors. Professional peer reviewers for scientific journals can sometimes make mistakes and allow a weak paper to be published, which is then taken as ‘verified by experts’ by news journalists when they write up their own articles on it, which in turn get reposted and shared by the general public, where those who most want the dubious finding to be true will latch onto it most dearly.
Understand that the peer review system relies on the goodwill of fellow scientists who are themselves increasingly pressed for time; and it can be subject to the biases of the select few who work for a particular journal. Scientists are humans too and can therefore let their partiality override their reason when deliberately serving their own interests, or their scrutiny can be undermined by innocent and unintentional mistakes and fallacies.
The journals may suffer from the biases of their own editors-in-chief, who make many key decisions, like which papers to bother reviewing in the first place, who’ll peer review them, and what to ultimately publish. There’s also a huge list of fake publishers, and of publishers who serve the interests of particular industries in a biased fashion.
The editors of scientific journals and news outlets also have a motive of trying to sell as many subscriptions or papers, or attract as many clicks and eyeballs, as they can – they run businesses after all. Methods that may be used to grab attention away from their competitors include publishing illusory breakthroughs, ‘high impact research’ and sensationalised stories, at the expense of thoroughly verified but ‘boring’ news that doesn’t take things out of context, act prematurely, fallaciously exaggerate findings or depend heavily upon emotional appeals.
For a scientist, the most important metric for success is again how many times their work has been published in the top journals, which in turn increases the number of times their work will likely be cited by others, which in turn can help them to attract further research grants. They are therefore themselves incentivised to find or create these illusory breakthroughs and ‘high impact research’. Scientists don’t want their projects to be de-funded, hence they may attempt to spice up the progress that’s being made and overstate how many exciting positive findings are being found.
So merely trusting the reputation of a journal, or how many times a paper has been cited, isn’t perfectly reliable. This means that nothing completely eliminates the need for us to conduct our own scrutiny and apply our own active critical thinking. A general consensus amongst the scientific community carries incredible weight – you could say that science is really about the scientific community. Yet we should still each learn about the scientific method, and as much about human biases as we can, because gaining a community consensus takes time, and in some cases a consensus may not be reached until bigger or better experiments can be conducted.
Passively and casually accepting what could potentially be falsehoods or overstatements, then uncritically spreading them because we want to sound smart in front of others (related to gossip, we generally desire to be the first amongst our peers to say we know or discovered something useful, but in this haste we can fail to apply sufficient critical thinking to what we’re sharing), and taking conclusions as forever sacrosanct instead of provisional – these are reasons why we can believe in loads of questionable, out-of-date or even illogical or contradictory things, or urban myths!
The scientific community is precisely built upon rational and empirical scrutiny and debate – so it’s often best to not mentally take things as unquestionably true or unquestionably false but as different shades of mutable confidence. Woof!
Well, when a scientific paper gets published, it only really means that it has, on the face of it, fulfilled the specification of a properly written paper according to the journal that has decided to publish it. At this point, the contents of the paper should not necessarily be taken as hard fact, particularly if it’s controversial and doesn’t support an already existing, established and broadly-accepted body of empirical science.
But the general media often jumps the gun because it wants to report exciting (i.e. novel, outlier or controversial) stories, and be the first to do so. The media can be exceptionally atrocious at focusing only on a tree whilst ignoring the forest of pre-existing literature, when we should generally go where the current overall weight of robust evidence points. Pop science media outlets often publish, and sensationalise, a lot of ‘cutting edge’ findings – but beware that ‘new’ or ‘unusual’ isn’t the same as ‘well verified’, so only take them tentatively because they could be anomalies or the result of mistakes. A story may stand out from the crowd precisely because it is unrepresentative of the typical result we would normally expect, or simply wrong. We should rationally be more guided by the typical rather than the unusual or extreme, unless we have specific reasons to believe that an atypical result is more relevant for a particular set of cases – one way of weighing this is sketched below.
It’s like judging a stranger based on just seeing them for five seconds passing rancid wind – yes it happened, thus it’s not a lie, but how does it weigh against all the other seconds of behaviour in this person’s life, including all the ‘boring’ recurring events that disconfirm our hasty conclusion?
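One way statisticians formalise the ‘overall weight of robust evidence’ is inverse-variance weighting, as used in meta-analyses. Here’s a minimal sketch (my own illustration – every effect size and standard error below is invented):

```python
# A minimal sketch of inverse-variance (fixed-effect) pooling across
# studies. All numbers are invented for illustration.
import numpy as np

# Nine hypothetical prior studies finding essentially no effect, plus one
# new, noisy 'breakthrough' outlier that the headlines would latch onto:
effects  = np.array([0.02, -0.01, 0.03, 0.00, -0.02, 0.01, 0.02, -0.03, 0.01, 0.80])
std_errs = np.array([0.05,  0.05, 0.04, 0.06,  0.05, 0.04, 0.05,  0.06, 0.05, 0.40])

weights = 1.0 / std_errs**2               # precise studies count for more
pooled = np.sum(weights * effects) / np.sum(weights)
print(f"Outlier alone: {effects[-1]:.2f}; pooled estimate: {pooled:.3f}")
# The eye-catching but imprecise 'tree' barely moves the pooled 'forest',
# which still points to roughly no effect.
```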
The general media tends to latch onto these early stories and hype them up, but then fails to follow up on them, even when they’re found to be hoaxes (or especially so, because the media outlet doesn’t wish to admit to its poor journalism).
We, as news consumers, don’t typically revisit old articles except by accident – but if one does, one might now find some addenda, corrections or retractions in the header and/or footer of the article, in italics. These may result from other people (like independent experts) questioning the article to the point that an update, correction and perhaps apology are the only reasonable things for the original author to do. And these amendments can sometimes cast doubt on the entire original conclusion. But they rarely receive as much media fanfare and attention as the original hyped-up ‘breakthrough’ stories. The publication bias might also mean that whenever a previously novel finding becomes discredited – even journalists may not hear about it. (Some narrow-interest media outlets could potentially purposely post lies and pollute the news space, knowing full well that most readers won’t return to an old article to notice that it has subsequently gone or been corrected?)
Patience is thus required whenever we hear the first reports of a new, cutting-edge or controversial finding. The scientific community is still assessing it and hopefully planning on conducting further experiments to confirm its claims. (You could possibly get involved too.) Post No.: 0505 stressed that extraordinary claims require extraordinary evidence. The publication of a paper is only the first step towards hopefully establishing a true scientific fact – it’s only after scientific peers have had a chance to verify that a study measures what it intended to measure, cannot find any flaws in its methodology or conclusions (e.g. due to sampling errors), and can successfully replicate the results, that we can begin to more confidently say that the furry findings are rigorous.
Over time, you may notice that you don’t hear reputable sources sharing a particular piece of information again, or at least not to the same extent as when it was first hyped (this happens frequently with fad diets!) This is a ruff-and-ready proxy for how overstated the science news originally was.
Try to recall all the science stories, inventions or products that vanished after their initial hyped-up coverage, and take this as a caution regarding present and future hypes! Our initial reaction should be to take new information as neither ‘this changes everything’ nor ‘this changes nothing’.
Woof! In short, it’s not about merely getting manuscripts published but about surviving the peer review – and this peer review process effectively never stops, hence even long-established facts can be reinforced, refined, or possibly reversed, over time.