Post No.: 0188
All experimental research in science should be reported and published in academic journals so that its details can be peer-reviewed by experts – not kept concealed or reported only in the mainstream media via headlines and sound bites. If one cannot find or get hold of the proper research details and results from a company claiming that its own product does this or that, then be wary of its claims.
Finding out that something, like a drug, doesn’t work or is harmful, is just as important a finding for the public to know as a positive or desirable result – therefore the issue of ‘publication bias’, or the suppression of negative, contradictory or undesirable (for the party who funded and/or carried out the research) findings, is essentially an act of informational deception and fraud.
All data, including negative results, is knowledge to retain in records and to share. They at least tell others what doesn’t work or is unsafe so that others don’t waste their time, money or health with it. These negative findings might even lead to unforeseen or unintended beneficial discoveries elsewhere (e.g. a chemical compound that failed for what a scientist originally intended it to do but works in some other application); sometimes something that seems like a bizarre or irrelevant question to ask can turn out to be a genius, novel angle we were looking for.
Negative findings may be assumed to be mere chance results without even being put through the appropriate statistical tests to check, and so are dismissed and not published – although more common reasons for not publishing a study are laziness, neglect and a lack of motivation to continue writing up a ‘failed’ study (the ‘file drawer problem’), or simply not wanting an unwelcome result to be disclosed to the public. Neglect or wishful thinking can be non-malicious, unintentional reasons for publication bias or the file drawer problem, so there doesn’t need to be a deliberate act of deception, downplaying, skewing or concealment of the full truth – although sometimes the concealment of scientific data is quite deliberate.
If scientists fail to find a relationship between two variables, though, these studies are much harder to publish in academic journals. Scientists know that null results don’t tend to get published, so they don’t bother submitting them to journals as much – hence the file drawer problem. The other side of this problem is that if scientists and journals only submit and publish studies with positive results, it could be that those positive results were just lucky (e.g. due to the law of small numbers with small sample sizes). And with the fuzzy ‘failures’ consistently stuck in the file drawer, over time it may look like there really is a pattern in the collective scientific literature – but this pattern would be down to the file drawer problem and publication bias rather than real patterns in the observed world.
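The file drawer effect above can be illustrated with a small simulation – a hedged sketch, not real data: the sample size, number of studies and ‘publishable’ threshold are all arbitrary assumptions. Many small studies of a genuinely null effect are run, but only the flattering results escape the file drawer, so the average of the published studies suggests an effect that isn’t there.

```python
import random
import statistics

random.seed(42)

def run_study(n=20):
    """Simulate one small study of a null effect: both groups are drawn
    from the SAME distribution, so any difference is pure chance."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

all_results = [run_study() for _ in range(1000)]

# Publication bias: only 'positive-looking' results get published
# (the 0.3 cut-off is an arbitrary stand-in for 'exciting enough').
published = [d for d in all_results if d > 0.3]

print(f"Mean effect across ALL studies:       {statistics.mean(all_results):+.3f}")
print(f"Mean effect across PUBLISHED studies: {statistics.mean(published):+.3f}")
print(f"File drawer: {len(all_results) - len(published)} of {len(all_results)} studies unpublished")
```

Reading only the published studies, one would infer a solid positive effect, even though the true effect is exactly zero by construction.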
So finding no relationship is an important finding itself – finding no cure for a particular illness down a particular avenue is important data for the entire health community to know too, for instance. But indeed, we don’t want the scientific literature to be littered with reports of things that aren’t correlated with each other, since there could be an indefinite number of such things – most of which would be utterly boring news (e.g. listing everything that doesn’t cause skin cancer!) However, for the things that do seem correlated, a problem is that we often find spurious correlations by chance alone or by error of methodology or analysis, and then our brains will work furiously to rationalise those results as being reasonable to expect (e.g. that wearing red ‘of course’ makes us feel warmer and happier, or ‘of course’ makes us feel more conspicuous and stressed if those were the findings instead!) – i.e. confirmation bias comes into play. Some of the things we think are ‘obvious’ after being told of their results weren’t quite as obvious before the results were published.
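The ‘spurious correlations by chance alone’ point can be demonstrated in the same toy spirit – again a sketch with made-up parameters (200 variable pairs, 15 observations each and a |r| > 0.5 cut-off are arbitrary choices): test enough pairs of completely unrelated variables with small samples and some will look strongly correlated purely by luck.

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 200 pairs of completely unrelated variables, 15 observations each.
n_pairs, n_obs = 200, 15
correlations = []
for _ in range(n_pairs):
    xs = [random.gauss(0, 1) for _ in range(n_obs)]
    ys = [random.gauss(0, 1) for _ in range(n_obs)]
    correlations.append(pearson(xs, ys))

# How many 'strong' correlations (|r| > 0.5) appear by chance alone?
spurious = [r for r in correlations if abs(r) > 0.5]
print(f"{len(spurious)} of {n_pairs} unrelated variable pairs look strongly correlated")
```

None of these variables have anything to do with each other, yet a handful of pairs come out looking impressively correlated – exactly the sort of result a brain (or a press release) can then furiously rationalise.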
If another, later, study stands against the general existing trend and disproves a correlation, it’s important not to automatically presume that this result must be the wrong one and ditch it on the presumption that there’s a mistake in it somewhere. If we keep dismissing these kinds of ‘anomalous’ results (akin to constantly calling a historically great team’s recent run of losses mere ‘anomalies’) then we’ll collectively miss a growing trend. This is again why we must keep and report all results, so that we can eventually look at the fullest picture to get to the best possible aggregate truth. Of course we also shouldn’t do the opposite and presume that the latest studies automatically override all previous studies on the same subject – again, we should record all of the data, check the corresponding methodologies, then take a step back and look at the fullest picture of the aggregate literature on the subject so far.
Therefore all data is potentially important data. But it’s not always a scientist’s or journal’s fault if the public doesn’t hear about it – often, even if a scientist publishes his/her results, the general media doesn’t want to report these ‘boring’ results to the wider public (e.g. if there isn’t a positive experimental result then a TV documentary isn’t likely to be made about it, thus biasing the picture), even though these stories might allay fears about something or add more weight to what we already understand and accept. So sometimes it’s the journalistic biases of the general media that are at fault rather than scientists hiding unwanted findings. For example, finding out that some exotic so-called ‘superfood’ isn’t that special compared to more ordinary and cheaper alternatives is as important for the wider public to know as finding out that it is something special for our health – but such negative findings don’t tend to make headline-grabbing news stories, so the mainstream and social media typically opt to report other, more interesting, ‘marketable’, positive-finding stories instead (e.g. goji berries versus oranges as good sources of vitamin C).
Scientists may publish a favourable and fair trial/study in a reputable digest journal, but if they find a positive result as a consequence of an unfair trial/study then they may decide to publish it in an obscure place (e.g. an extremely industry-specific journal so that it’ll be written up and edited by those who want their industry to look like it’s innovative and has new and exciting findings all the time i.e. a journal that’s not going to be entirely impartial about its own industry). Or if a trial/study is really bad then they might hide it and cite ‘data on file’, where someone can only find the paper if they specifically pester the author for it in order to use it as part of a systematic review, which might take ages. Or they may surreptitiously duplicate positive results by publishing them several times in different journals, in different forms and permutations, to make it seem like there were lots of separate positive trials, which will help manipulate a ‘meta-analysis’ (a kind of study that uses statistical methods to aggregate the results of several existing studies on the same issue into one big study, in order to try to increase the power of any results, improve the estimates of any effect sizes and/or to resolve any uncertainty if individual studies disagree with each other). Devious!
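To see why surreptitiously duplicated positive trials can distort a meta-analysis, here’s a minimal sketch of a fixed-effect, inverse-variance-weighted pooled estimate – one common way meta-analyses combine studies, with effect sizes and variances invented purely for illustration. Counting the same positive trial three times drags the pooled effect well above what the honest literature supports.

```python
def pooled_effect(studies):
    """Toy fixed-effect meta-analysis: inverse-variance weighted
    average over (effect_estimate, variance) pairs."""
    weights = [1 / var for _, var in studies]
    return sum(w * eff for w, (eff, _) in zip(weights, studies)) / sum(weights)

# Hypothetical honest literature: one positive trial, three null-ish trials.
honest = [(0.6, 0.04), (0.0, 0.04), (0.05, 0.04), (-0.05, 0.04)]

# The positive trial republished twice more 'in different forms'.
duplicated = honest + [(0.6, 0.04), (0.6, 0.04)]

print(f"Pooled effect, honest literature: {pooled_effect(honest):.3f}")
print(f"Pooled effect, with duplicates:   {pooled_effect(duplicated):.3f}")
```

In this toy example the duplication doubles the apparent pooled effect (from 0.15 to 0.30) without a single new patient being studied – which is why systematic reviewers have to hunt for covert duplicate publications.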
Sometimes authors, scientists and journalists are bullied by large and wealthy multinational corporations into not publishing negative findings concerning their products. Rich and powerful companies (in any industry) frequently bully people to keep the truth concealed. Their strategy is to waste your time or deflect from answering your questions by any means they can – commonly via aggression, threats, bullying, harassment, counter-attacks or counter-allegations; attacking a straw man (misrepresenting a position to make it easier to criticise, i.e. attacking a position one didn’t actually make or hold, or creating a distorted or simplified caricature of one’s arguments and then arguing against that); spending lots of resources trying to discredit you as a person rather than your arguments (usually on fallacious grounds that just take up your time – which is easier for the other party to do if they’re wealthy and can hire people to do this against you); or simply trying to crowd out your claims with their own marketing, PR or propaganda (which is once again easier for a wealthy and well-connected party to do against a smaller party).
Their aim is to avoid answering your questions, to shut down your inquiries and discussion of the evidence, and ultimately to get you off their backs. It all contradicts the claim that ‘companies never want to harm their customers because it’s bad for repeat business’ – most of the harms they do aren’t immediately obvious to customers, so without scientific research and education it’s not easy to work out the links between their products and the harms those products cause in the long term (e.g. smoking and lung cancer, or processed sugar and obesity, diabetes and tooth decay, to name just a couple). Powerful companies are incentivised to (and have plenty of means to) suppress any scientific findings that impact their bottom line.
One solution to publication bias is to make every company or other party who wishes to conduct a scientific study register it with an independent registry before commencing it, and for journals to refuse to even consider publishing any study that wasn’t registered from the start. If the results of any registered study then fail to be disclosed, for whatever reason given, while studies showing positive results in the same area are published, this will raise suspicions (like someone gloating about all of their gambling wins while mentioning nothing about any losses, when you know they’ve played more games than they’re letting on). Some medical journals employ this strategy, but maybe all journals should.
Woof! On the whole, all of the above is why scientists must post all of their results to journals, and why journals must consider publishing all of these results too (at least online somewhere) regardless of how uninteresting or unconvincing they think they are compared to positive results. Knowing half the truth can be more dangerous than knowing nothing at all.