Post No.: 0134
A randomised-controlled trial (RCT) is a type of scientific experiment in which participants are randomly allocated between two groups – a group receiving the treatment under investigation (e.g. a new drug) and a control group receiving another treatment (this control group might receive a placebo, the standard treatment that is currently available or the best treatment that is currently known, for instance). The results from these two groups are then compared to see if the treatment under investigation makes a difference. As long as the two groups are treated identically except for the treatment received, any differences in results can be attributed to the different treatments. There is more about control groups in Post No.: 0044.
Randomisation aims to reduce bias by cancelling out, on average, any systematic differences between the participants of the two groups (e.g. chances are, when you flip a coin to randomly decide whether people go to the treatment group or control group, the ratio between female and male participants will be about the same in both groups, thus reducing a gender bias that might confound the results i.e. the variable of ‘gender’ is kept roughly constant between the two groups, thus inherent (known or unknown) gender differences won’t likely explain any differences in the results between the two groups). This holds according to the probabilities if the sample size (the number of participants in this case) is sufficiently large. Whether these results can then be extrapolated to the wider population may depend on whether the participants were randomly picked from the population in the first place rather than e.g. mostly males having volunteered for, or been eligible for, the experiment. Although in some cases this is fine e.g. only females need apply for an experiment concerning a new tampon because we only care about its effects on females anyway. Meow.
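To make the coin-flip intuition concrete, here’s a minimal sketch in Python (all numbers invented for illustration) that randomly allocates 1,000 participants, half female and half male, and then checks the gender ratio in each resulting group:

```python
# Simulate coin-flip allocation to show that randomisation roughly
# balances a participant trait (here, gender) between the two groups
# when the sample is large enough. Hypothetical participants.
import random

random.seed(42)  # fixed seed so the run is repeatable

participants = ["F"] * 500 + ["M"] * 500  # 1,000 participants, 50/50 split
treatment, control = [], []
for p in participants:
    # the "coin flip": each participant independently goes to one group
    (treatment if random.random() < 0.5 else control).append(p)

frac_f_treatment = treatment.count("F") / len(treatment)
frac_f_control = control.count("F") / len(control)
print(round(frac_f_treatment, 2), round(frac_f_control, 2))
# Both fractions should come out close to 0.50 – the gender ratio is
# roughly preserved in each group, so gender is unlikely to confound.
```

With only a handful of participants the two fractions could easily diverge, which is why the balancing argument depends on a sufficiently large sample.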
The RCT is often considered the gold standard for clinical trials, for determining whether a treatment under investigation causally makes a difference compared to a control.
But many, if not most, scientific studies that are reported in the media aren’t randomised-controlled trials, thus they mostly only reveal correlational findings – and correlation doesn’t necessarily mean causation. Without a correlation, there’s little reason to suspect a causal link; but not all correlations indicate causality (e.g. ice cream sales are correlated with drowning deaths).
Whenever you see a study based on correlational data (non-experimental, non-RCT), you must ask: could the causal relationship be reversed? Could a third or fourth, etc. factor have caused both variables? Or could the correlation have been merely coincidental when the data was collected? Certainly this doesn’t rule out that the suggested causal direction could in fact be correct, but correlational studies, especially small-scale ones, regarding any subject or research question, just aren’t typically sufficient to make big, confident, causal claims. (In the ‘ice cream and drowning deaths’ example above, the third factor that causes both variables is hot weather – both the desire for ice cream and the amount of swimming increase when it’s a hot day.)
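The ‘third factor’ idea can be demonstrated with a toy simulation (all numbers made up): daily temperature drives both ice cream sales and drownings, and even though neither variable causes the other, the two end up strongly correlated with each other:

```python
# Toy simulation of a confounder: temperature causes both ice cream
# sales and drownings, so the two correlate despite no direct link.
import random

random.seed(0)  # fixed seed for repeatability
temps = [random.uniform(0, 35) for _ in range(365)]        # daily temperature
ice_cream = [t * 10 + random.gauss(0, 20) for t in temps]  # sales rise with heat
drownings = [t * 0.2 + random.gauss(0, 1) for t in temps]  # swimming rises with heat

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(corr(ice_cream, drownings), 2))  # strongly positive
```

Deleting the temperature term from either line would make the correlation vanish – which is the whole point: the association lives entirely in the shared cause.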
When there are potential confounding factors in correlational data, it can be hard to know what caused what. An example is the general correlation that dog ownership is associated with higher well-being – now is that because of all the walking and therefore physical exercise, the lower cortisol and higher oxytocin levels from stroking furry dogs (which means fluffy cats are cool too?! Meow), the non-judgemental and loyal companionship, something about the type of person who would or could own a dog in the first place (e.g. because they can afford to, hence they’re not likely (in general) to be poor), a combination of these factors or something else? Another example is that people living around the Mediterranean Sea tend to live longer than those in the UK – is this because of their diet (and if so, which component or components?), their activity levels during old age, their outlook and priorities in life, their genetics (and if so, which gene or genes?), the combination of all these factors each adding up a little bit, or some other factor(s)? It’s difficult to say with near-absolute confidence through observational studies alone because real life is complex rather than carefully controlled like in a lab.
Or it can be like a particularly tasty lasagne – you wonder whether it was the extra herbs, the riper tomatoes, the baking time or a specific combination of factors that made it tastier than other lasagnes you’ve had before. Some of the extra bits may have actually made no difference to the end product at all (e.g. a fancier brand of black pepper or more expensive cheese), and some minor things could’ve even made it worse, just not by enough to outweigh the things that improved it (so it was tastier despite these things). When things are mixed together and it’s hard to tease out the separate parts, it’s difficult to work out their individual contributions to the whole, or whether they made any positive (or negative) contribution to the whole at all. One would need to tightly control the recipes and compare lasagnes that differ in only one aspect at a time in order to determine what exactly contributed what to the end net result.
Natural experiments (where different groups of people are exposed to the experimental and control conditions by nature or by other factors outside of the investigator’s control) can still raise many potential confounding explanations for any effects – not least because the participants aren’t likely to be randomised but self-selected, for instance.
So it’s extremely difficult, or impossible, to draw absolutely confident causal conclusions from correlational data studies alone, especially when a lot of possible variables are in play. An author’s guess can be just their own guess (and your own guess will be just your own guess too) and nothing more – unless and until further, more tightly-focused or experimental-design (with control group) studies can be conducted.
Naïvely taking tentative preliminary conclusions from scientific research as black-or-white, definitive and sacred ‘scientific facts’, and then people later finding, via more focused research, evidence that points to another conclusion – is one of several ways that (to a lazy or casual author or reader) science can seem to ‘say one thing one day then another thing the next’.
Note also that variables that do cause an effect likely only do so when all else is held constant, as in the experiment (e.g. it may be shown that chillies can help one to lose weight, but not if one increases one’s consumption of cakes and sodas at the same time!) This is another way that people can take science news too far.
There are ways to increase the probability that a correlation indicates a causation, such as checking that the correlation is strong, that it is consistent in different contexts and with different people, that the suspected cause leads to a single specific effect and only these two variables co-vary out of all the variables observed, that there’s a dose-response curve (more of A always produces more of B), that there’s a possible logical explanation for causality that doesn’t contradict other established facts, and that the cause preceded the effect (although some things work bi-directionally). But still, we’ll be able to raise ‘what if’ questions, such as what if there’s some other variable(s) we’ve not thought about? And we cannot rule out causality even if the above criteria aren’t met (e.g. a small effect, or an effect seen in only one group of people, might still be causal). Arguments based on plausibility or analogy are extremely subjective too.
Controlling for variables by holding an extraneous variable constant via statistics is possible if there’s a large and diverse enough sample, but this is still not as desirable as using an RCT, where such variables are controlled from the start. However, sometimes there’s no choice but to use an observational study due to the ethics and/or practicalities of conducting an equivalent RCT (e.g. forcing one randomised group to smoke cigarettes and another to not, or forcing one randomised group into poverty and another not). And we cannot live, in a practical sense, always thinking ‘that hasn’t been trialled in a randomised and controlled way so we cannot ever say whether it’s just a mere coincidental correlation or not’ – one really needs to live thinking about different levels of confidence rather than dichotomous ‘absolutely is’ or ‘absolutely isn’t’ black-or-white conclusions. In fact, different confidence levels apply to the conclusions of RCTs too because there’s still a remote chance of sampling biases or coincidence (e.g. it’s not impossible to toss a coin and get 100 tails in a row – the chance is about 1 in 10³⁰, but not zero). That’s the nature of epistemology.
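As a sketch of what ‘controlling via statistics’ can mean in the simplest case, here’s a stratified comparison with entirely hypothetical data: the treated group happens to skew towards a stratum (the ‘young’ age band) that scores higher regardless of treatment, so the raw difference in group means overstates the effect, while comparing within each stratum and then averaging gives a fairer estimate:

```python
# Hypothetical records: (group, age band, outcome). The treated group
# skews young, and young people score higher regardless of treatment,
# so age confounds the crude group comparison.
records = [
    ("treated", "young", 10), ("treated", "young", 12),
    ("treated", "old", 6),
    ("control", "young", 9),
    ("control", "old", 5), ("control", "old", 7),
]

def mean(xs):
    return sum(xs) / len(xs)

# Crude comparison: difference between the two raw group means.
raw_diff = (mean([o for g, _, o in records if g == "treated"])
            - mean([o for g, _, o in records if g == "control"]))

# Stratified comparison: treated-minus-control difference within each
# age band, then averaged across bands (age held constant).
bands = sorted({a for _, a, _ in records})
diffs = []
for band in bands:
    t = [o for g, a, o in records if g == "treated" and a == band]
    c = [o for g, a, o in records if g == "control" and a == band]
    diffs.append(mean(t) - mean(c))
adjusted_diff = mean(diffs)

print(round(raw_diff, 2), adjusted_diff)  # raw ≈ 2.33, adjusted = 1.0
```

Real analyses use regression or weighting rather than this tiny two-stratum average, but the principle is the same: the adjustment only helps if each stratum actually contains people from both groups – which is why a large and diverse sample is needed.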
Intuitively equating correlation with causation is a major explanation for superstitious beliefs – for example, believing that if a mirror broke today and then something bad happened tomorrow then the two events must have been somehow directly causally linked. It also involves confirmation bias because one’s mind will be looking for any future bad event to attribute to that broken mirror. But really, bad, and good, events happen all the time anyway.
Everything in the physical universe is ultimately connected to everything else, either directly or indirectly (apart from galaxies moving away from each other faster than the speed of light – but let’s not get into that), and every effect has a cause or causes, but it’s not reasonable to say that one’s dress sense or comments when watching a live sports match caused a goal to be scored or conceded a hundred miles away, for instance.
…But maybe that just jinxed it(!)