Post No.: 0200
When reading academic papers, it is most important to read the methods and results sections, and then to make up your own mind based on these. The discussion and conclusion sections are potentially subjective and might contain very partial or personal interpretations and arguments made by the author(s).
In science, the fuzzy devil is in the details – of the materials and methodologies used to carry out the experiment or study, the results or data gathered, and the interpretations of those results – i.e. scrutinising the details is important. The author’s own conclusions – whether that author is the scientist(s) who wrote the paper or a journalist reporting on it – might, on occasion, be over-simplifications, over-generalisations, over-extrapolations, wishful thinking and/or reductionist interpretations of the data, for instance.
For example, if something is shown to work on a few cells in a test tube then it could be interpreted to mean that it works in our real-life bodies in real-life environments and situations too, which isn’t always the case. Another example is that a drug may be shown to work in adults but not in adolescents or children. Or a study on the effectiveness and side-effects of a drug may have lasted for just a few months, which will not really help us to answer the question of whether the drug is effective and safe for use over several years. So try not to extrapolate the results beyond what a study actually studied. These extrapolations from the data might later prove to be correct in future studies, or they might not – so we cannot confidently say either way based on the current study alone.
Data can be cherry-picked, so although everything presented in the given conclusion may be true, it could be only part of the picture (the data that disconfirms the author’s own conclusion could outweigh the data that confirms it, and the confirming data could be entirely down to chance too). One needs to know everything (i.e. all of the good and bad) about something to know whether it’s good overall. For example, a drug may offer the positive benefits claimed, and so the company that sells it is not technically lying in its marketing, but by failing to also state all the known possible harmful side-effects, which may be serious and/or common, it presents incomplete information to the consumer – this is still an act of deception, and it’s not just misleading but dangerous. (Read Post No.: 0188 to learn how the publication bias and file drawer problem can lead to incomplete information too.)
Cherry-picked data can be as misleading as fabricated data and worse than having no data at all. A one-sided, cherry-picked collection of facts is as misleading as the history books saying that Adolf Hitler really loved his mother, cared passionately about his country ‘above all’ and was incredibly driven… and that’s it(!) So be aware of not just what anyone mentions but also what they may have left out, not told you or kept secret. Always seek to see the entire histogram (graph) of data before coming to your own conclusions – don’t just rely on a few figures selected by the author. And don’t passively or slavishly take people’s word for it that a study has been ‘rigorous’, ‘clinically trialled’ or even actually carried out, etc. – carefully read through their methods.
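To make the ‘see the entire histogram’ point concrete, here’s a minimal Python sketch – the trial numbers are entirely invented for illustration and don’t come from any real study – showing how reporting only the most flattering results paints a very different picture from the full dataset.

```python
import statistics

# Hypothetical effect sizes from 10 trials of the same product
# (invented numbers, for illustration only).
all_trials = [-0.4, -0.2, -0.1, 0.0, 0.1, 0.1, 0.2, 0.3, 0.9, 1.1]

# A cherry-picker reports only the three most flattering results.
cherry_picked = sorted(all_trials)[-3:]

print(f"Mean of all trials:      {statistics.mean(all_trials):.2f}")
print(f"Mean of selected trials: {statistics.mean(cherry_picked):.2f}")
```

Both printed statements are ‘true’, but the second one alone roughly triples the apparent effect – which is exactly why the full spread of data matters more than a few selected figures.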
Now this doesn’t mean outright rejecting everything that presents the slightest bit of doubt, but attenuating, softening or nuancing the conclusions, or the reliability of the conclusions, based on these doubts until further studies with better methods are carried out. Besides, you must provide stronger evidence to support any adamant and differing conclusions you come to or present yourself. Casting doubt on something is generally far easier than positively proving something, but it’s not about believing what you want to believe or rejecting what you simply don’t want to believe. And on its own, the fact that one conclusion is questionable doesn’t mean that the direct opposite conclusion must be true.
Consider the on-paper theory compared to the practice in the real world too. For example, some chemical elixir may apparently have been found, but you’d need to consume a gallon of it every day just to get those beneficial effects in the real world, which may be expensive, impractical or come with a lot of seriously harmful side-effects!
Consider the actual findings compared to the inferred or surrogate benefits/outcomes. For example, a drug may be shown to do something to the blood, from which it is merely inferred that it does something beneficial to the heart.
And consider the results of trials with other test animals compared to human trials. For example, mice compared to humans – humans aren’t completely the same as mice, hence the need to carry out human trials too. (As an aside, this arguably raises an issue here – we know that just because a drug/product works or is safe in mice, rats or another mammal, it doesn’t necessarily mean that it’ll work or be safe in humans too. But what if there are drugs/products that don’t work or are harmful for mice, rats or other test animals, yet would work and wouldn’t be harmful in humans – but their clinical trials were halted before they reached human-stage trials based on these (other) animal trial results? Does all of this question the point of using any other animals altogether to test products that are meant for humans, if what is and isn’t safe for one mammal doesn’t perfectly correlate with another? What drugs would’ve worked for humans but were halted in their research progress because they failed the (other) animal-stage trials? Although many will argue that there’s still, overall, a practical benefit to animal testing from a human perspective, especially when there’s no ‘realistic alternative’ – this is not so much an ethical argument as a practical argument against animal testing. Animal testing for medicines meant for those precise animals is arguably more acceptable though. Woof.)
So make sure an experiment is ‘ecologically valid’ i.e. approximates the real-world setting that is being examined. For instance, if we want to know how something performs in a school setting, it’s less useful to test it in a home setting – it might not make the information totally useless, but it makes the information gathered less useful. And make sure it is actually testing for what it’s claiming rather than some surrogate outcome. For instance, if one wants to know whether something will improve a player’s goal-scoring ability, measure goals scored, not the player’s ability to kick a ball harder – kicking a ball harder may seem to imply that more goals will be scored, but not necessarily. So measure the actual outcome, not the surrogate outcome, or else state the conclusions in terms of the surrogate outcome only.
The scientific research is one thing, but even if it were methodologically sound and reported fully and truthfully, what a paper claims it shows – the findings or conclusions – is potentially the author’s own subjective interpretation. This may include over-extrapolations (such as squirting a high dose of vitamin C onto some cells in a Petri dish, or onto the open flesh of an apple, observing a result, and concluding that a high dose of vitamin C is therefore good for us, or our own skin, in real-life bodies operating in real-world situations), or other biases or logical fallacies. A scientist might think that a test of tracking circles on a computer screen is a good proxy measurement for people’s reaction times in real-life driving scenarios, but you might disagree. Is a community where no one wears any shoes a good potential market for a shoe seller or a bad potential market for a shoe seller?! This is why we should all look at the methods and results and come to our own conclusions. This doesn’t mean our own conclusions won’t potentially be biased themselves, or even more flawed, but hopefully we’re more detached from and independent of a study than the author(s) of that study.
Bear in mind that if it’s not an ‘intervention-control’ experiment – where a group that receives the intervention (e.g. the drug that’s being tested) is compared to a control group (who e.g. take a placebo instead) – then it’s an observational study, and a causal relationship between a variable and an effect is hard to ascertain from observational studies. But if an association is consistent and strong; is specific to the thing that was being studied (i.e. not over-extrapolated); the claimed cause occurred before the claimed effect; there’s a dose-response gradient (i.e. more of the variable is associated with more of the effect); and it has a plausible, explainable mechanism that’s consistent with what we already know and accept – then there’ll be a stronger case for a potential causal relationship rather than a mere coincidence.
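One of those criteria – the dose-response gradient – can at least be illustrated numerically. Here’s a rough Python sketch with invented dose/effect data and a from-scratch Pearson correlation (a real analysis would of course use proper statistical tools, confidence intervals and all):

```python
# Invented observational data: dose of some variable vs measured effect.
doses = [0, 1, 2, 3, 4, 5]
effects = [1.0, 1.4, 2.1, 2.6, 3.3, 3.9]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(doses, effects)
print(f"dose-effect correlation r = {r:.2f}")
# A strong monotonic gradient (r near 1) supports - but never proves -
# a causal relationship; on its own it's still just an association.
```

A high r here ticks exactly one of the boxes above; the other criteria (temporality, specificity, plausible mechanism, etc.) still have to be argued separately.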
Scientific results can be easily taken out of context in their conclusions to serve a particular person or group’s self-interests. For instance, product A may be better than product B but this doesn’t necessarily make product A good or recommended because product A might still be bad or harmful.
Therefore remember, when reading any science news – rather than automatically taking the conclusion given to you, either by the author of a scientific paper or a journalist reporting on it – it’s better to read the methods and results sections of the academic study papers yourself because you might find holes in the way it was carried out and/or come to a different conclusion. You can then later read the discussion and conclusion sections and compare notes. If your interpretations and arguments are different then you may or may not be persuaded to change your mind.
This is all admittedly time-consuming and often not pragmatic to do in all cases, and sometimes you won’t have free access to the original study papers anyway. In which case you might need to rely on the efforts of journalists to perform the scrutiny on your behalf. Nevertheless, we can still scrutinise the articles written up by science journalists, for the methods and results will hopefully be presented there, at least in condensed form, if the journalist and article are good. The main thing is not to just lazily read the headline and the first paragraph or two of an article – where the given conclusions are usually presented – and leave it at that. A good journalist will have put a lot of effort into writing up an article and we should reward that effort by bothering to read it thoroughly – otherwise journalism will be reduced to short and rubbish sensationalist news feeds and sound bites, and present an over-simplified black-or-white representation of the world.
And always be willing to change views according to a shift in the overall weight of empirical evidence – a single piece of evidence or a single study is seldom enough to shift views because its result could’ve been down to chance or confounds. Extraordinary claims (i.e. major deviations from the current prevailing scientific consensus) require extraordinary evidence, thus one or two apparent findings that contradict a huge mountain of existing data will normally not be enough. And to know the size of this mountain of existing data, you may want to formally study the subject rather than merely rely on reading a few media articles here and there (a classic case here, I think, is the huge mountain of data that confirms climate change). Cherry-picking data can happen between studies as well as within a study. To concentrate on just one single piece of data would be like concentrating on the results of one match day and ignoring the league table at the end of the season.
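The ‘single study vs. the mountain of data’ point can be simulated in a few lines of Python. This invented simulation assumes a treatment with no true effect at all: individual small studies will still sometimes look impressive purely by chance, while pooling across all the studies stays near the truth.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is repeatable

# Simulate 20 small studies of a treatment with NO true effect:
# each study reports the mean of 10 noisy measurements drawn from N(0, 1).
study_means = [
    statistics.mean(random.gauss(0.0, 1.0) for _ in range(10))
    for _ in range(20)
]

# One study will often look impressive purely by chance...
print(f"most extreme single study: {max(study_means, key=abs):+.2f}")
# ...while the pooled estimate across all studies stays near the truth (0).
print(f"pooled estimate:           {statistics.mean(study_means):+.2f}")
```

Concentrating on that one extreme study is the ‘one match day’ mistake; the pooled estimate is the league table at the end of the season.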