Post No.: 0643
It’s been said that some scientific studies lead to conclusions that ‘state the bleeding obvious’. But in many cases they should still be carried out in case they reveal novel conclusions. One of the most crucial practices in science is to repeat experiments and studies to confirm that the initial conclusions weren’t mistaken or fraudulent. We must occasionally re-question long-held beliefs because there’s so much in every scientific field we’ve yet to fully understand.
Maybe the issue is that the media should convey the importance of all scientifically-gathered results, whether positive or negative, novel or obvious.
Instead, the media picks up too many preliminary or otherwise inconclusive scientific studies because, if they suggest novel or unexpected findings, they’re incredibly exciting to report on. Meanwhile, findings that confirm the existing scientific consensus are usually deemed too boring to cover because they’ll be accused of ‘stating the bleeding obvious’. But research that reaffirms the obvious is important to know about too because it gives these conclusions even more weight, confidence and certainty. It enables us to place any claimed novel findings in their proper context within the overall picture. Research that fails to find what was hoped for (e.g. a drug that doesn’t work) is likewise important to know about because this is still information that helps us to see the fullest picture.
Sometimes what we’re taught are things that seem completely obvious but only after they’ve been taught (e.g. we don’t buy/sell products per se – we buy/sell and compete on ‘value propositions’ i.e. not just a product that serves a practical function but psychological experiences with a brand, reputation, exclusivity, service theatre, etc.).
Lessons like these can help us to view and frame the world from a perspective we never really took before. Only when such information is presented explicitly – rather than abstractly and tacitly in one’s mind or perhaps not at all – do we suddenly better understand why the world is the way it is. Even if something only serves as a reminder of what we already know but had not thought about for a while – there’s some value in that too.
It’s the benefit of having a teacher who, when you break it down, may seem like they haven’t taught you anything new – but the way they package, frame and present a nugget of knowledge helps you to see old things in brand new ways (e.g. ‘value propositions’ are a more accurate model of market behaviour and better explain apparently irrational decisions, like when some customers weight intangible or purrely psychological benefits like ‘brand’ over tangible or practical benefits like ‘reliability’).
So some things appear obvious but only after we’ve been told about them. Many things only seem obvious with the benefit of fluffy hindsight once the answer has been revealed.
We might anticipate a ‘grey rhino’ event (something major that should’ve been obvious in the first place), like an earthquake along the San Andreas Fault – but it’s still not obvious exactly when it’ll happen. The ‘when’ matters as much as the ‘what’ – should we start evacuating the population now?! But it’ll all appear obvious with the benefit of hindsight.
Although independent scientific studies are more reliable than personal anecdotes because our own experiences are usually limited and biased – we can sometimes end up forgetting to think for ourselves. For instance, certain scientists may claim that you’re sitting on the toilet wrong, and they might have a point – but unless your ****hole is constantly getting blown apart, I wouldn’t worry about it and would worry more about your diet and activity levels to get regular bowel motions and softer stools! Or there’s no ‘right’ or ‘wrong’ end to peel a banana from because either way works. If your shoelaces aren’t coming undone when you don’t want them to, and do come undone when you do, then your current tying method works. People can point out alternative ways if we’re struggling with one way, but there isn’t always an objectively ‘correct’ way.
Many who regularly drank caffeinated drinks weren’t feeling dehydrated yet trusted the now-debunked theory that caffeinated drinks dehydrated people. (It’s not a problem unless one drinks hugely concentrated amounts at once, but even then the dehydration effect would be minor.) And you can test yourself to see if you only tell the truth when your eyes look to one side and only tell lies when your eyes look to the other.
We must occasionally re-evaluate any kind of long-held belief. Some old Chinese medicines have proved to be empirically sound under modern scientific scrutiny (e.g. the recipe for artemisinin to fight malaria) – but many have not. Test every case individually, and if the evidence agrees with the ancients then fine but if it doesn’t then ditch it – the overriding factor is the evidence, not the traditions or beliefs, or how long something has been relied upon.
You might like carnation flowers to decorate your house even though others profess they symbolise grief, love or whatever. Nature didn’t decide those symbols, nor did the flowers themselves. Symbols and traditions are made up, and you’re not offending anyone in this case.
Likewise, if your clothes keep you warm enough, are clean, cover your bits and you personally like the look of them, then wear them. Other people’s subjective judgements speak only about those who hold them, even if millions of people think the same thing. Flexible thinking isn’t stupid – being inflexible is. Meow!
We can definitely fairly question certain types of research such as whether we think horizontal or vertical stripes look more slimming, because these are essentially about our opinions. We could conduct our own informal experiments by asking our friends what they think too. (In my opinion, just wear the right clothes sizes for your body right now because something too loose will look baggy and something too tight will make you look like a bursting sausage!)
We should still however pay heed to general scientific findings because they’re useful and appropriate when we don’t know where we individually sit (e.g. whether a drug will work for us). In such cases, we should rationally assume we’d sit where most of our cohort generally sits i.e. the set with the highest probability. Too many times we hear people assume, without sound justification, that they or something related to them are atypical, special or exceptional amongst the population (e.g. they’re obese but aren’t worried about diabetes, or their children need corporal punishment because they’re especially uncontrollable). Generalisations are also useful when talking about populations as a whole.
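The idea of assuming we sit where most of our cohort sits is just base-rate reasoning. As a toy sketch with entirely made-up numbers, the rational default is to pick the cohort’s most probable outcome unless we have specific evidence that we’re the exception:

```python
# Toy illustration with made-up numbers: absent specific evidence about
# yourself, the rational default is the most probable outcome in your cohort.
p_drug_works = 0.70  # hypothetical: the drug helped 70% of patients like you

# Without extra evidence, your best single guess is the majority outcome:
best_default_guess = "works" if p_drug_works > 0.5 else "doesn't work"
print(best_default_guess)

# Believing you're the exception means claiming evidence strong enough to
# overturn a 70/30 prior -- a hunch alone isn't that evidence.
```

The point isn’t the arithmetic, which is trivial, but the burden of proof: claiming to be atypical shifts it onto you.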
Yet we must also understand that even if something like a stereotype is generally reliable – generalisations don’t completely apply to all individuals; in some cases, to hardly any. Prejudices and injustices can arise if we apply generalised assumptions to individuals (e.g. the police disproportionately stopping black people who drive luxury motors, or believing that women don’t lift weights). So we mustn’t over-generalise or over-extrapolate data. We need to apply effortful critical thinking and understand what a statistic is really saying and what it isn’t.
We should also follow the science rather than the scientists because – even though in most cases they’re saying the same things – it’s the facts rather than the messenger that really matter. Judging people’s words according to their titles is just another unreliable shortcut when we’re being too lazy to think for ourselves. Judge the message, not the messenger. It’s like how we should judge the food rather than the cook, even though cooks cook the food. Even the best chefs make duff dishes occasionally. Some people who claim to be scientists are fraudsters too – and we find out who by scrutinising the research they rely upon. The message, medium and messenger do combine to influence and persuade us – but ideally only the content of the message should matter, rather than how it’s presented (unless the message lacks clarity because of the presentation) or who presents it (well-known or otherwise). It shouldn’t be ‘who said it’ but ‘what is said’. This means that junior doctors shouldn’t be ignored by senior doctors just because of rank, celebrities shouldn’t be listened to just because they’re familiar, and we shouldn’t ignore ‘outsiders’ just because they’re not ‘one of us’, for instance.
Scientists are human, and some promote scams or harmful advice because of a hidden selfish motive. For some others, their faces and names have been appropriated to lend credibility to what are ultimately scams. If you just unquestioningly follow a scientist then you might fall for these traps. Scientists frequently disagree with each other anyway because – particularly when it comes to making predictions, or coming to conclusions about historical events that can’t be absolutely confirmed because we cannot rewind time to observe the events as they unfolded – they too can act like lawyers who start with the conclusions they wish to prove and then employ confirmation bias in their research focus (e.g. a virus leaked from a lab), rather than like dispassionate observers who gather all of the information they can and then go where the preponderance of evidence and greatest probabilities point. Some scientists or ‘scientists’ are in the pockets of narrow-interest groups, so may cherry-pick data points or interpretations to fit a sponsored agenda. An overall consensus is more persuasive, but there isn’t always one amongst scientists.
A prediction isn’t a fact until, and unless, it proves to be correct in due course. We should pay attention to the probabilities and confidence intervals, but different scientists can come up with quite different ones based on the different models and datasets they’ve relied upon. Hence it’s these we must scrutinise.
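To see how different datasets alone can change a stated confidence interval, here’s a toy sketch with made-up numbers: two hypothetical datasets with nearly identical means produce very different 95% intervals (using a simple normal approximation), which is exactly why the data and model behind a quoted interval deserve scrutiny:

```python
import statistics

def mean_ci95(sample):
    """Rough 95% confidence interval for the mean, via a normal approximation."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return (m - 1.96 * se, m + 1.96 * se)

# Two hypothetical datasets that supposedly measure the same quantity:
steady = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]   # low spread
noisy  = [8.0, 12.5, 9.0, 11.8, 10.5, 7.9]    # high spread, similar mean

print(mean_ci95(steady))  # narrow interval around ~10
print(mean_ci95(noisy))   # much wider interval around a similar mean
```

Both headline means look alike, yet one claim is far more certain than the other – the interval, not the point estimate, carries that information.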
Scientists may agree on the data but disagree about what it means and what to do about it too – not everything that scientists suggest is objective or impartial; some of it is political. Well, deciding what we ‘ought’ to do about any scientific finding is always political because science itself cannot answer ‘ought’-type questions. Science cannot objectively answer questions like whether the human race ‘ought’ to survive into the next millennium, or whether human happiness ‘ought’ to be maximised. These are human-biased questions that’ll have human-biased responses if humans answer them!
Regarding things like how the human race will evolve if there’s a continued lineage to the year 40,000 – far-future predictions, especially ones to do with the chaotic human world, are difficult. And even if scientific experts can make better cases for their predictions than laypeople in their specific domains, they don’t literally possess crystal-ball precognition.
Regarding things like how non-avian dinosaurs became extinct or how our solar system formed – no one was actually there at the time to observe it, hence the best guesses by scientists today, or any other day in the future, are just best inferences rather than things that are 100% certain. And we’ll never confirm these kinds of answers with absolute certainty. It’s not like reporting on what happened in a game we all just witnessed. Applying logic based on what we so far evidently know about the universe gives us a better guess than any other, but we must always remember that we cannot call our inferences deductions or unchallengeable facts.
Are outsiders to a field better placed to see past any entrenched biases or preconceptions within that field, or are they incapable of judging the field on technical grounds? Well, if there’s controversy regarding a scientific query then the opposing sides should ideally collaborate to test it and resolve the issue with a large study – by pre-registering all of their experimental plans and agreeing to publish all of their data no matter what they find, so that there’s no accusation of questionable research methods or publication bias. And they should all accept the results wherever they fall. Whichever side doesn’t wish to cooperate may signal that they’re the side that’s trying to hide something.