Post No.: 0776
Furrywisepuppy says:
Too many laypeople conflate ‘the use of high-tech equipment’ with ‘doing science’, i.e. they assume that someone is being scientific merely because they’re using a fancy scanner or some gadget with flashing lights.
The white lab coats, clipboards, conical flasks, test tubes, Bunsen burners and related paraphernalia are another narrow cliché and stereotype. This is probably because of what most people associate with science (well, chemistry) lessons. But mixing one coloured liquid with another in a glass beaker isn’t, on its own, any more ‘doing science’ than mixing milk into a cup of tea. Or, if doing anything that causes something else to happen is regarded as ‘science’, then literally everything we do is science, from breathing to boiling an egg!
The scientific method involves starting with a question or observation, forming a hypothesis from it, testing it with an experiment, collecting and analysing the data, reporting one’s conclusions, and then reiterating this cycle.
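As a purely illustrative sketch of one pass through that cycle – every detail below (the coin-flip question, the numbers, the crude analysis) is made up for the example and isn’t a real research workflow – it might look something like this in Python:

```python
import random

# Purely illustrative sketch of one pass through the cycle - every detail here
# (the question, the numbers, the crude analysis) is made up for the example.

# 1. Question / observation: "this coin seems to land heads rather often".
# 2. Hypothesis: the coin lands heads more than half of the time.
FLIPS = 1000

# 3. Experiment: flip a simulated (actually fair) coin many times.
heads = sum(random.random() < 0.5 for _ in range(FLIPS))

# 4. Collect and analyse the data (a real analysis would use a proper statistical test).
proportion = heads / FLIPS

# 5. Report a tentative conclusion, then refine the question and repeat the cycle.
print(f"Heads came up {proportion:.1%} of the time over {FLIPS} flips.")
print("Hypothesis", "looks plausible" if proportion > 0.55 else "isn't clearly supported",
      "- refine and test again.")
```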
So you don’t necessarily need fancy or expensive equipment, as opposed to basic and accessible kit, to conduct some research of your own. It just depends on what you’re trying to test for and discover.
Many laypeople also assume that something is scientific merely if it follows a tight or precise recipe, formula or set of steps. Relatedly, many assume that if a fluffy formula or algorithm is scientific then it must necessarily be objective.
But no algorithm is objective because its goal state – or definition of success – is chosen by a group of inevitably biased humans. For instance, consider a social media machine-learning algorithm that decides what to promote in users’ feeds and whose goal is to maximise the profits of the company that owns it. That algorithm will then work by itself to try to achieve that goal as best as it can, but it won’t care (or won’t know) whether something is unethical or untrue – hence if fake news is what makes the most money then fake news will be promoted more in people’s feeds. (And users are indeed quite drawn to checking out fake news, such as conspiracy theories, compared to the usually more mundane truth.)
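As a toy sketch of that point (the posts, numbers, field names and the revenue-per-click figure below are all made up for illustration – this isn’t any real platform’s code), notice that nothing in the ranking step ever asks whether a post is true:

```python
# Toy illustration: a feed-ranking "algorithm" whose only goal is engagement revenue.
# All posts, scores and field names are hypothetical - this is not any real platform's code.

posts = [
    {"title": "Dry but accurate policy analysis", "predicted_clicks": 120, "is_true": True},
    {"title": "Mundane local news update",        "predicted_clicks": 200, "is_true": True},
    {"title": "Outrageous conspiracy theory",     "predicted_clicks": 900, "is_true": False},
]

def expected_revenue(post, revenue_per_click=0.01):
    # The goal state chosen by humans: maximise ad revenue. Truthfulness isn't part of it.
    return post["predicted_clicks"] * revenue_per_click

feed = sorted(posts, key=expected_revenue, reverse=True)
for post in feed:
    print(f"£{expected_revenue(post):.2f}  {post['title']}")

# The conspiracy theory tops the feed purely because it's predicted to earn the most -
# the objective never asked whether it was true.
```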
One of the easiest ways to draw in readers is to suggest that something is ‘science news’ – but there’s good or well-conducted science as well as bad science and pseudoscience. Even good science varies in the certainty of its conclusions depending on the type of study used (e.g. a randomised-controlled trial versus an observational study).
Speculations are frequently quoted as scientific certainties, often by those who want to sound clever to others – as in ‘I know the latest in science and you don’t’ – when they may have jumped the gun. Some ‘science news’ stories are pranks or hoaxes, or are ultimately sourced from pop or cod science articles or books rather than from primary science journals or papers.
It’s also the case that plenty of ‘science news’ isn’t the result of published and peer-reviewed studies but of one or two scientists expressing their own personal opinions, sometimes in the form of a formula for something. But a scientist merely saying something doesn’t make what’s being said a report on a scientific experiment they’ve just conducted. Thought experiments aren’t actual experiments – they’re usually laden with assumptions. And presenting something as a formula or algorithm doesn’t necessarily make it impartial or objective anyway, just as some mental health diagnosis questionnaires can ask leading questions.
Yet plenty of laypeople assume that if something contains numbers and mathematical-looking symbols then it must be objective and in turn indisputable. And anyone who attempts to question or dispute something like that will be accused of being ignorant. But, as stated in Post No.: 0559, we should question numbers too.
But if we call them recipes (for that’s what a formula essentially is) then we’ll understand that not all recipes are good or accurate recipes – such as E + S = C, or Egg + Sawdust = Cake! Anyone can come up with a formula or algorithm for anything, just like anyone can have their own cake recipe – but that won’t necessarily make that formula, algorithm or recipe a good one. Like different ‘perfect’ food recipes from different chefs – different experts can present their own different formulas that they claim are optimal for a particular thing, like perhaps ‘how to find your perfect partner’ or ‘how to host a perfect party’. So beware of such articles, which, in many cases, are also deliberately requested by PR firms to grab media attention as part of a larger campaign to promote something, like maybe a dating app. It doesn’t necessarily mean their views aren’t supported by research or statistics, but we’ll need to question their rationales and what they may have deliberately omitted.
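To see how easy it is to cook one of these up, here’s a completely made-up ‘perfect party’ formula of the kind a PR campaign might commission – every variable and weight below was plucked out of thin air for illustration:

```python
# A completely made-up "perfect party" formula - the variables and the weights are
# arbitrary, which is exactly the point: anyone can write one, and writing it down
# as maths doesn't make it a good (or objective) recipe.

def party_score(guests, snacks_per_guest, playlist_hours, awkward_relatives):
    # Why 0.7? Why squared? No reason - the author of the formula simply chose them.
    return 0.7 * guests + 1.3 * snacks_per_guest + 2.0 * playlist_hours - awkward_relatives ** 2

print(party_score(guests=20, snacks_per_guest=5, playlist_hours=4, awkward_relatives=2))
# It outputs a precise-looking number, but precision isn't the same as validity.
```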
So we can follow a recipe or formula perfectly, but that assumes the recipe or formula was perfect to begin with. Even with pastries, when we measure ingredients to the exact gram, it presumes the recipe was perfect – which is doubtful when it contains round-number or simple-fraction measures. (Even if a formula is perfect, rounding numbers – especially if they’re then multiplied or divided together – can compound into significant inaccuracies.) We need to be more exact with small quantities like dry yeast because an extra gram might mean 7% extra. There are natural product variances too – ‘one egg’, for instance, can vary tremendously in its size and yolk ratio. It comes down to experience – with no experience, one might think it doesn’t matter to be exact. With a little bit of experience, one might think one always needs to be utterly exact. And with far greater experience, one would know to be adaptable and recognise when exact weights and volumes matter and when they don’t so much, what substitutes one can use if one cannot source particular ingredients, and even sometimes when to alter some procedural steps.
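A quick back-of-the-envelope illustration (the recipe quantities are hypothetical): rounding each scaled ingredient to a tidy number looks harmless, but the smallest quantities suffer the biggest relative error – and an extra gram on a roughly 14 g yeast measure is about 7% extra:

```python
# Hypothetical recipe quantities, used to show two things: rounding hurts the smallest
# ingredients most in *relative* terms, and an extra gram of yeast on a ~14 g measure
# is roughly 7% extra.

recipe = {"flour_g": 500, "water_g": 325, "dry_yeast_g": 14, "salt_g": 9}
scale = 1.5  # scaling the recipe up by half

for ingredient, grams in recipe.items():
    exact = grams * scale
    rounded = round(exact / 5) * 5  # rounding to the nearest 5 g, as a cook might
    error_pct = abs(rounded - exact) / exact * 100
    print(f"{ingredient:12s} exact {exact:6.1f} g, rounded {rounded:3d} g, error {error_pct:4.1f}%")

print(f"An extra gram of yeast is {1 / recipe['dry_yeast_g']:.0%} more than intended.")
```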
An algorithm could also be fed with poor-quality, or even subjectively assumed, data (e.g. data regarding the statistics of different warriors in different historical eras or geographical regions so that we can see which one would’ve probably beaten the other in a simulated fight). We occasionally find that ‘on-paper theories’ don’t match the results in reality well enough, which invites the scientific community to refine them, or perhaps even ditch them.
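A tiny sketch of that ‘garbage in, garbage out’ problem (every stat and the crude fight model below are invented): the simulation can only ever echo back the subjective numbers we chose to feed it.

```python
import random

# Every stat below is subjectively assumed, as is the crude fight model itself - so the
# "result" mostly just reflects whoever typed these numbers in.

warriors = {
    "Samurai": {"attack": 78, "defence": 70},  # invented numbers
    "Knight":  {"attack": 74, "defence": 80},  # invented numbers
}

def win_rate(a, b, rounds=10_000):
    wins = 0
    for _ in range(rounds):
        # One 'fight': each side rolls against its stat; the higher roll wins the bout.
        if warriors[a]["attack"] * random.random() > warriors[b]["defence"] * random.random():
            wins += 1
    return wins / rounds

print(f"Samurai beats Knight in {win_rate('Samurai', 'Knight'):.0%} of simulated fights.")
# Nudge the assumed stats and the "historical winner" changes with them.
```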
Some scientific laws are more approximate than others. Many eponymous laws are like this (e.g. Murphy’s law, Sod’s law, Poe’s law, Parkinson’s law) so they’re more like adages compared to the more reliable scientific laws found in physics or chemistry.
A scientifically conducted survey of, say, the world’s favourite biscuit will produce an answer – but the most popular option would still just be a reflection of a collection of subjective opinions and won’t constitute a fact that this biscuit is the best, even if the survey question was phrased as ‘What biscuit do you think is the best in the world ever?’ So just because more dogs prefer biscuit A to biscuit B, it won’t necessarily mean your particular pooch will, or should, too. Not all scientifically derived answers, or answers from scientists, are therefore objective. Indeed, uncritically accepting ‘scientific conclusions’ or what a ‘scientist says’ will trigger a facepalm from proper scientists because being critical is precisely at the core of science! A key attribute of possessing a ‘scientific mindset’ is healthy scepticism. Woof!
If a study found that the most delicious pie is an apple pie, it won’t mean you’ll be rendered incorrect if you personally disagree with the overall consensus(!) If you scrutinise the data from that study, you’ll find that far fewer than 100% of respondents claimed that apple pie was the tastiest anyway. It’s a matter of opinion. Even when it’s not a matter of taste – if scientific research reveals that a drug is effective for treating a certain disease, it won’t necessarily mean you’re a liar if it doesn’t work for you, because not all drugs work for 100% of the population. There are legitimate cases where some people are treatment-resistant. They’re not necessarily faking their symptoms, even if such symptoms could be faked. Even if a drug worked for 99.9% of patients but you happened to fall into that other 0.1%, it won’t mean you’re wrong when you say it didn’t work for you. (You might only be able to discover that the drug won’t work for you with the benefit of hindsight – so the rational decision based on the prior odds, for people like you in age, gender and so on if such finer-grained data were available, could still be to try it nevertheless.)
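As a rough, purely illustrative expected-value sketch – the response rate matches the 99.9% figure above, but the benefit and side-effect ‘utility’ numbers are entirely hypothetical – trying the drug can still be the rational prior-odds decision even though, in hindsight, it might turn out not to work for you:

```python
# Hypothetical utilities only - a sketch of why trying a treatment can be the rational
# decision *before* you know whether you're in the minority it fails for.

response_rate = 0.999        # the 99.9% figure from the text
benefit_if_it_works = 100    # arbitrary "utility" units if the drug works for you
cost_if_it_doesnt = -5       # arbitrary units for side-effects/wasted time if it doesn't

expected_value = response_rate * benefit_if_it_works + (1 - response_rate) * cost_if_it_doesnt
print(f"Expected value of trying the drug: {expected_value:.2f} units")

# A clearly positive expected value - yet 1 in 1,000 people will still, with hindsight,
# truthfully report that it didn't work for them. Both things can be true at once.
```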
‘Some’ or ‘most’ doesn’t mean ‘all’ – but for headlines, or for the sake of generalised shorthand (which can be understandably employed in most contexts), we tend to state scientific findings as if they’re always black-or-white rather than a matter of varying confidence levels.
Question how and when a survey was conducted (e.g. before or after a particular salient world event?) and question the questions in order to clarify what they meant (e.g. how was ‘the best’ defined?) Don’t make assumptions unless you reliably keep track of which assumptions you’ve made.
There might’ve been a poll that asked ‘who is the funniest comedian in the world?’ We might argue that the resultant conclusion is ‘indisputably in the data’ and ‘data is always objective’. But the methodology used in that study to gather that data can, and should, be scrutinised. (Different methodologies are how different supermarkets are all able to simultaneously claim to offer better value than each other!)
Perhaps the survey was only presented in the English language thus it could only be answered by people who could read English? Perhaps the poll was only likely to have been seen by a certain demographic who followed a certain publication? (Ever stood back to realise the sheer number of survey results from polls that concern your demographic yet you were never asked for your opinion?!) It was a snapshot in time so would the results still stand after 6 months? And other caveats.
Even if the data was gathered through measuring something quantitative like ‘the number of laughs per hour for a comedian during one of their shows’, then you might need to ask which shows and audiences? Did the study somehow differentiate between the duration and strength of each laugh or did all discrete instances just count as ‘one point’? Did it analyse every comedian in the world (not likely)? And more. There are many ways to define and measure a construct like ‘funny’.
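To illustrate how the choice of measurement changes the ‘winner’ – all the comedians and audience numbers below are invented – here’s the same made-up laughter data scored two different ways:

```python
# Invented audience data to show that the "funniest comedian" depends on how 'funny'
# is defined and measured. Each laugh is recorded as (duration_seconds, strength_1_to_5).

shows = {
    "Comedian A": [(1, 2)] * 60,   # lots of short, mild chuckles
    "Comedian B": [(6, 5)] * 15,   # fewer, but long and strong laughs
}

def laughs_per_show(laughs):
    return len(laughs)  # every discrete laugh counts as 'one point'

def weighted_laughter(laughs):
    return sum(duration * strength for duration, strength in laughs)

for metric in (laughs_per_show, weighted_laughter):
    winner = max(shows, key=lambda name: metric(shows[name]))
    print(f"{metric.__name__}: {winner} wins")

# Two defensible metrics, two different "funniest" comedians - from the same audience.
```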
The exact phrasing of a research question matters too, and we shouldn’t over-extrapolate the findings. For instance, one study claimed that Scouse (the Liverpudlian accent) is one of the unfriendliest-sounding accents in the UK. But another study claimed that Liverpool is consistently voted one of the friendliest places in the UK.
Scrutiny is a key aspect of science, and the science doesn’t end just because a paper has been published or a journalist has reported its conclusions. The error is in treating science like a religion, where whatever is read and superficially understood must be trusted without question.
You might presume that everything in science is objective and that all questions can be answered objectively if only we look hard enough. But how different scientists choose to define ‘alive’ means that disagreements persist over whether viruses are alive or not. Or how different scientists choose to define ‘addictive’ means that disagreements persist over whether sugar can be addictive or not. A consensus – like the agreed criteria a non-stellar body must meet to be classed as a ‘planet’ – helps, but a consensus itself isn’t evidence of objectivity because a consensus can change over time, like how it did in this case, which affected Pluto’s status.
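A small sketch of how the chosen definition does the classifying – the criteria and tick-boxes below are purely hypothetical and aren’t any real clinical or astronomical standard:

```python
# Purely hypothetical criteria and observations - not any real clinical or astronomical
# standard. The point: change the agreed definition and the classification changes,
# without any new evidence appearing.

observations = {"craving": True, "tolerance": False, "withdrawal": False, "harmful_use": True}

strict_definition = {"craving", "tolerance", "withdrawal", "harmful_use"}  # must meet all of these
loose_definition = {"craving", "harmful_use"}                              # must meet just these two

def meets_definition(observed, definition):
    return all(observed[criterion] for criterion in definition)

print("'Addictive' under the strict definition?", meets_definition(observations, strict_definition))
print("'Addictive' under the loose definition? ", meets_definition(observations, loose_definition))
# Same observations, different consensus definitions, different answers.
```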
So you can legitimately argue with what scientists think, as long as you provide reasons and/or evidence to support your own conclusions.
Woof!