
Post No.: 0308surveys

 

Furrywisepuppy says:

 

‘Selection biases’, or sometimes synonymously ‘sampling biases’, occur when there is a bias in how the participants or data point samples are selected for use in an experiment, study or survey. An example of selection bias is that a lot of psychology experiments were, and still often are, conducted mainly on undergraduate psychology students who come from ‘western, educated, industrialised, rich and democratic’ (WEIRD) backgrounds. (More widely, cultures outside of Europe and North America are generally underrepresented in social science research, which is a known current problem.) This means that these participants aren’t a truly random sample of the population and don’t represent its full breadth and cross-section, so any results from these studies or surveys might not be applicable to the wider general population, or the world.

 

Such selection biases may not be declared, such as a commercial advert saying ‘75% of men said x’ when really it was ‘75% of men who visited a particular workshop said x’. Or saying ‘doctors recommend y’ without stating where these doctors came from, how many there were, how they were asked, by what method they responded, whether they were paid to respond, and so on.

 

Technically, ‘sampling errors’ are errors that occur if a sample of a population is selected that doesn’t accurately represent that population as a whole. These errors can be reduced by properly randomising the samples selected and by making sure the sample size is large enough, because random errors should cancel each other out if a sample size is sufficiently large. Meanwhile, ‘non-sampling errors’ are either random or systematic errors that occur during data collection or processing. They basically cover all other statistical errors that cause the data to deviate from their true values, such as poor sampling techniques, biased questions in surveys, interviewer errors, participant non-response or inaccurate-response errors, coverage errors like counting people twice, and data processing errors. Systematic non-sampling errors are worse than random non-sampling errors because they systematically shift or skew the results one way or another, such as when an interviewer is biased in which participants he/she approaches and selects to interview.
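
As a rough illustration of why random sampling errors shrink as samples grow, here is a minimal simulation sketch in Python (all the numbers are invented for illustration): it repeatedly runs imaginary surveys of different sizes on a population where the true proportion agreeing with some statement is 40%, and shows how the typical error of the estimates narrows as the sample size increases.

```python
import random

random.seed(42)
TRUE_PROPORTION = 0.40   # hypothetical true share of people who agree with a statement
NUM_TRIALS = 200         # number of repeated imaginary surveys per sample size

def survey_estimate(sample_size):
    """Simulate one survey: draw a random sample and return the observed proportion."""
    agrees = sum(random.random() < TRUE_PROPORTION for _ in range(sample_size))
    return agrees / sample_size

for sample_size in (10, 100, 1000, 10000):
    estimates = [survey_estimate(sample_size) for _ in range(NUM_TRIALS)]
    mean = sum(estimates) / NUM_TRIALS
    spread = (sum((e - mean) ** 2 for e in estimates) / NUM_TRIALS) ** 0.5
    print(f"n={sample_size:>5}: average estimate {mean:.3f}, typical random error {spread:.3f}")
```

The average estimate stays near 0.40 at every sample size, but the typical random error keeps shrinking as n grows – which is exactly why random sampling errors, unlike systematic ones, can be tamed by larger, properly randomised samples.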

 

So think about the possibility of any systematic biases that go unreported e.g. it seems, according to the media, that big dogs bite more, but small dogs bite too – these incidents just get reported less because they do less damage. This systematic bias can skew the picture, and since small-dog bites are under-reported when they happen, it can lead us to believe in the wrong overall picture. (Little puppies like me can mouth or bite too, but only for teething, exploring and playing until we’ve hopefully learnt to inhibit our bites if we bite too hard – woof woof.)
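
To see how unreported incidents can skew the overall picture, here is a tiny hypothetical sketch (the bite counts and reporting rates below are invented purely for illustration, not real statistics): in this toy model small dogs actually bite three times as often, but because each of their bites is far less likely to be reported, the reported figures make big dogs look like the bigger problem.

```python
# Hypothetical numbers for illustration only - not real dog-bite statistics.
actual_bites = {"big dogs": 1000, "small dogs": 3000}        # true bites per year
reporting_rate = {"big dogs": 0.60, "small dogs": 0.10}      # chance each bite is reported

for dog, actual in actual_bites.items():
    reported = actual * reporting_rate[dog]
    print(f"{dog}: {actual} actual bites, ~{reported:.0f} reported bites")

# Reported counts: big dogs ~600 vs small dogs ~300 - the reported picture
# is the reverse of the actual one because of the systematic reporting bias.
```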

 

If we’ve been selected to complete a survey or questionnaire, we’ve got to ask whether the survey questions have been surreptitiously slanted towards trying to obtain a particular answer from us. In any context, the way we frame the questions we ask can reveal a bias towards the answers we want to hear e.g. ‘Should I marry him/her?’ as opposed to ‘Should I ditch him/her?’ A question framed as ‘Do you like paying for car ownership?’ instead of ‘Do you like the freedom of owning a car?’ can lead to very different inferred conclusions on whether car ownership is desirable or not.

 

So the phrasing of questions in surveys can be tuned to make certain responses more likely, either via leading questions, or via ambiguity in conjunction with any prior assumptions that were held or purposely primed. How a question is asked is therefore important – for example, an open question asking about side-effects, where respondents may forget to mention something; versus a closed checklist of possible side-effects, where anything not on the list won’t get recorded unless there’s an ‘other’ option with an open text input box too. A reader of a survey result therefore needs to scrutinise and take into account the exact phrasing of any survey questions asked, and not just the answers collected.

 

Question the questions when analysing surveys, and account for these potential concerns. Question their methodologies too. For example, in a test where people are reviewing a series of rich desserts, the first dessert has an advantage because every subsequent mouthful of any dessert gradually makes one feel less like wanting yet another mouthful i.e. the twentieth mouthful is not going to be as appealing as the first, even when consuming the exact same dessert, as people start to feel fuller and/or start to crave something different instead. (Also, other people’s opinions on something as subjective as taste preferences might not be the same as your own, so rather than just skipping to see their end scores, look at the reasons they give for them too.)

 

If you are writing questions for a survey and genuinely want useful and unbiased results – check for any question that could potentially be misunderstood or interpreted in many different ways e.g. what does a question mean by ‘basic needs’ or ‘middle class’ when it asks what proportion of the population respondents think live in wealth or poverty? If you are a survey respondent then query any ambiguities you find in the questions if the survey conductor is present. And when analysing survey responses completed by others, critique whether some respondents may have used one interpretation of an ambiguously-phrased question and other respondents another. Ambiguity is like the headline ‘A Person Suffers from Concussion Every Minute’ – does it mean different people or the same poor person?! (That’s why, in maths, using mathematical notation offers more precise, less ambiguous, semantic meanings compared to natural language.)
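
To illustrate that last point about notation, here is one way the two readings of that headline could be written unambiguously – a sketch using predicate logic, with a predicate symbol chosen purely for illustration:

```latex
% Let C(p, t) mean "person p suffers a concussion during minute t".

% Reading 1: every minute, some (possibly different) person is concussed:
\forall t \, \exists p : C(p, t)

% Reading 2: there is one poor person who is concussed every single minute:
\exists p \, \forall t : C(p, t)
```

Swapping the order of the quantifiers changes the meaning entirely – something the English headline quietly glosses over.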

 

Qualitative data or descriptions are especially prone to highly personal interpretations e.g. something considered ‘lots’ or a ‘revolt’ by one person could be considered ‘minuscule’ or a ‘squabble’ by another. Yet even behind every hard quantitative number, lots of questions of clarity can and should be asked too e.g. why did the experimenter use this quantitative measure or operational definition for ‘creativity’ or ‘well-being’ and not another?

 

Anecdotal self-reports of anything, such as ‘feeling better’, may seem sufficient and unavoidable in some contexts, but in other contexts they can present very unreliable evidence because with some illnesses a person can ‘feel fine’ yet be in serious imminent fuzzy trouble inside e.g. when using sham HIV/AIDS treatments. In the main, personal anecdotes are far too potentially biased, too unrepresentative of the average when particular ‘interesting’ anecdotes are focused upon, and too full of alternative potential causes to those claimed in a self-report, to be used as reliable evidence on their own, where avoidable. Any form of participant self-reporting can also be muddied by intentional deceptions or exaggerations (anonymising responses so that it won’t be known who said what will minimise this but likely won’t ever completely eradicate it) or by unintentional misperceptions or misremembering. Someone may believe that it was a placebo that cured their cough when it might’ve been simply the passing of time and the body’s own healing, for instance.

 

Polls or surveys conducted on social media can be unreliable for gauging the feelings of an entire population, because the followers of a particular social media account tend to hold particular political and/or other views, hence those who complete such polls or surveys are a self-selected and unrepresentative sample. Survey data is frequently fabricated too!

 

The respondents of surveys may lie or otherwise say what they think they ought to say rather than disclose what they really think or what they really do in private. Surveys are often administered while a person is listening to and recording the responses, and there might even be a camera or microphone in the room, so respondents will tend to say whatever protects their social reputation rather than the truth e.g. that they care about the environment, or that they spend ages on foreplay! This can happen even if the replies will be anonymous and given in private because we don’t recognise our own unconscious biases e.g. people may say that they trust their doctor over a journalist, and they may consciously believe that this is true of themselves, yet some people will end up doing the exact opposite, such as regarding vaccination advice.

 

So we sometimes act one way but think we act in another way – we barely truly know ourselves and our unconscious biases. Polls or surveys with self-reported answers can therefore be very unreliable because of these unconscious biases or indeed intentional deceptions to hide one’s true bigoted, embarrassing, greedy or selfish (i.e. reputation damaging) views; either in the responses or by virtue of not giving a response at all to pollsters.

 

And again, we must also question if there’s a selection bias in any survey responses. For example, are the majority of those who happened to be selected for their views by survey conductors, or those who selected themselves to offer their views, such as by filling in an online form from a certain website, predominantly of a certain demographic or otherwise a certain type of person, meaning that any results cannot be reliably generalised to the general population? This is a silly example, but imagine a poll that asked ‘Have you ever used Twitter?’ and this was hosted on Twitter(!) It’d reveal that 100% of respondents have used Twitter, but the selection bias of the respondents here is quite clear. In less obvious cases, we must still query whether there might be a bias by questioning how the survey was conducted, who was asked and how, and whether the volunteers or respondents might not represent the wider population.
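
As a crude sketch of how a selection bias distorts estimates (every number below is invented for illustration), imagine a population where 30% of people hold some view, but people holding that view are five times more likely to see and answer a self-selecting online poll than everyone else. A properly random sample lands near the truth; the self-selected poll does not.

```python
import random

random.seed(0)
POPULATION_SIZE = 100_000
TRUE_SHARE = 0.30        # hypothetical true share of the population holding a view
RESPONSE_BOOST = 5.0     # view-holders are 5x more likely to see/answer the poll

# True population: True means the person holds the view.
population = [random.random() < TRUE_SHARE for _ in range(POPULATION_SIZE)]

# A properly randomised sample gets close to the true 30%.
random_sample = random.sample(population, 1000)
print("random sample estimate:", sum(random_sample) / len(random_sample))

# A self-selected poll over-samples the people who hold the view.
def responds(holds_view):
    base_rate = 0.01
    rate = base_rate * RESPONSE_BOOST if holds_view else base_rate
    return random.random() < rate

poll_respondents = [p for p in population if responds(p)]
print("self-selected poll estimate:", sum(poll_respondents) / len(poll_respondents))
```

In this toy run the random sample estimates roughly 0.3, while the self-selected poll suggests well over 0.6 hold the view – same population, very different pictures, purely because of who ended up answering.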

 

Woof. So I’m going to ask a question that I hope any readers can help me out with – do you think using Twitter to take care of the comments for this blog was a good idea? If you think it does not work then please use the Twitter comment button below to tell me(!) If I hear nothing then I’ll assume it’s fine!

 

Comment on this post by replying to this tweet:

 
