
Post No.: 0733 (rationalisations)


Furrywisepuppy says:


‘Motivated reasoning’ is about justifying decisions, judgements and attitude changes (or lack thereof) based on our emotions and cognitive biases, in order to reduce cognitive dissonance and/or to serve self-serving outcomes.


Cognitive dissonance leads to self-justified and biased ‘rationalisations’. The phrase ‘sour grapes’ comes from a thirsty fox not being able to reach some grapes but then – without any hard evidence to prove so – the fuzzy fox claimed those grapes must’ve been sour and thus not worth reaching.


The cognitive dissonance arises from the fox incompatibly believing that he/she could reach any grapes he/she wanted to but then not getting those grapes despite being thirsty, hence one of those beliefs had to give – and the face-saving rationalisation conjured up by the fox in this fable was that he/she deemed those grapes as sour and therefore he/she didn’t want them anyway.


The fox could’ve instead adjusted his/her belief that he/she could reach any grape he/she wanted but the illusory superiority and self-serving biases that make one think one is at least above average in intelligence, ability and morality have a tendency to be the biases that are preserved while other beliefs are ditched or modified instead. (Doggies hate grapes. This isn’t a rationalisation in case I can’t reach some grapes – honest! Woof!)


It’s within everyday thoughts or conversations when we think or hear of face-saving rationalisations like ‘I’m smarter than you, but I didn’t get a top grade like you… because I just didn’t bother revising as much as you probably did’ or ‘I’m a better gamer than you, but you beat me… but that’s only because you must’ve cheated or secretly practised this game a lot recently’. More examples of rationalisations include ‘I’m never a bully, but they’re complaining about my behaviour at work… because it must be petty revenge for showing them up earlier or they’re just jealous of me for some reason’ or ‘I drive a petrol car, love frequently flying abroad and I cherish this freedom, but there’s apparently some compelling evidence that claims that humans are causing catastrophic climate change… thus the government and these scientists must be colluding in some kind of conspiracy to plant false evidence in order to generate reasons to curb our freedoms and control us’.


Many people who drink too much alcohol can rationalise to themselves that they don’t have an alcohol problem and can stop whenever they want to – but they just don’t want to. If one’s child won’t eat the vegetables one had cooked then one might rationalise it as ‘my child won’t eat any vegetables’, when it’s more probably ‘my child doesn’t like to eat vegetables the way I cook them’. The ‘5-second rule’ is how some humans can convince themselves that it’s okay to eat food after it’s been on the ground (it actually depends on what food it is and what the ground is like).


Hence one can virtually always post-justify or rationalise – at least to satisfy oneself – that one is astute or special, or has always believed in and done the morally right or fair thing, whatever happens or has happened with almost anything. Virtually anything can be rationalised or excused away.


Rationalisations are therefore why self-serving beliefs of all kinds are so stubborn to shift! Even counter-evidence for our beliefs can be easily dismissed. We frequently rationalise things without need (or really care) for evidence. If we disagree with a claim then we might try to rationalise it as fake or irrelevant, or if someone blames us for something we did but we still believe we’re in the right then we might think they are taking some private issues out on us, have some hidden agenda, have sour grapes against us, or something else that’ll preserve our own worldviews. And this’ll become our narrative of the events – which we’ll presume is the objective truth – but it’ll only be our own biased narrative of the events. Rationalisations are hypotheses that frequently start along the lines of ‘they must’ve…’ or ‘what must’ve happened is…’. These hypotheses may not be the actual truths but they’re true enough for our versions of events and that’s all that matters to relieve any dissonance and maintain our pre-existing worldviews and self-concepts.


The motivation to eliminate cognitive dissonance is greater if we’re overconfident in ourselves. When we think we’re great at something but are then shown proof of how we’re not – we’re far more likely to deny or ignore that proof and rationalise it away. So if we think we’re erudite yet we got an easy multiple choice question wrong, we might reason ‘that’s because I didn’t listen to my instincts’ to preserve our self-concept that, deep inside, we’re genuinely sharp – we just didn’t listen to our true selves.


Minimising dissonance is also why our partner appears perfect at the start of the relationship, as we dismiss the thoughts that ‘he drinks too much’ or ‘she spends too much time watching trash TV’. But our sentiments will flip to the opposite way if we decide we want to split up, as we wonder how we ever fancied them at all as we justify all the reasons to leave! (This is also proof that our memories are merely reconstructions of past events filtered through the lens of what we’re presently thinking and feeling, instead of like pressing replay on a faithful recording of history.)


Even what’s irrational can be rationalised via rationalisations – as in self-justified rather than actually made rational. We often cherry-pick the assumptions that best fit the conclusions we prefer – in fact, whenever we have a personal stake in something, we typically start with the conclusions we desire then reverse engineer arguments to try to support or justify them. And even if these are falsified, we may rely heavily on ad hoc reasoning to try to save our beliefs. An ad hoc hypothesis is a hypothesis that’s tagged on after the results have come in, in order to save a theory from being falsified (e.g. if one wants to believe that spirits exist but nobody can find them, then one can avoid ever being proven wrong by using ad hoc arguments such as ‘they are invisible’, ‘they move in mysterious ways’, etc.). Not all ad hoc hypotheses are necessarily incorrect but the amended theory must still have predictive power and be falsifiable.


Due to confirmation bias, if one is anti-police then every example of police corruption is rationalised as an indication of systemic corruption in all police forces, or if one is pro-firearms then every example of a mass-shooting incident is rationalised as a one-off and not an indicator of wider or deeper firearms issues. The victims may even be rationalised as ‘crisis actors’. A holiday video made by a terror suspect that looks just like an ordinary holiday video to anyone else can be reasoned to have been deliberately made to look normal to hide its true intent, the camera pointing at a dustbin for just a brief second is taken to confirm that the location has been selected, and the phrase, “I’m going on holiday” is assumed to be code for, “I’m going to commit a terrorist attack”! Our minds can imagine incredible narratives for events when (we feel) there is a lack of complete and verified information, in order to try to piece together a plausible version of events – assumption or intuition is what we (have to) resort to when we have nothing more solid to go on. Therefore it’s important to recognise assumptions for what they are – just heuristic guesses.


We mainly make up and believe in unproven assumptions to make us feel better about ourselves or to place us on a higher moral ground over others.


If we inadvertently hurt or neglect to help someone who isn’t too close to us, we might post-rationalise that they probably deserved it in some way so that we can sleep better at night. If we get ripped-off, we might reason that what we got was worth the price after all. If we get paid well despite working for a firm that engages in unscrupulous practices, we might search hard for arguments to explain why those practices aren’t unethical otherwise we’d be considered immoral for working for them. (It’s the same with defending one’s nationality, ingroup affiliations or anything else one is associated with – for the sake of our image, we only want to be associated with respectable things, thus to allay cognitive dissonance, we will typically fight hard to defend anything that we’re personally associated with.) If someone we fancy rejects us, we’ll rationalise that we didn’t fancy them that much anyway. If we’re brilliant at a sport then we’ll likely call it an amazing sport, but if we’re woeful at it then we might call it a crap sport!


This shows us that these kinds of rationalisations can protect our mental health by protecting our self-esteem and sense of moral virtue. It’s a defence mechanism for our egos. But a better way to protect our mental health (and that of others, who’ll otherwise be subjected to our delusions or inability to accept and learn from our mistakes!) is to practise self-compassion. We make mistakes, but that’s okay because ‘I did something wrong’ rather than ‘I’m categorically evil’, or ‘I performed badly this time’ rather than ‘I’m always useless’. The alternative to rationalising away our mistakes isn’t to beat ourselves up over them.


So view successes and mistakes not as reflections of your core character but as events you did i.e. it’s not ‘I’m smart/stupid’ but ‘I made a smart/stupid decision’. This way it’s easier to admit to one’s mistakes so that one is able to learn from them.


We won’t apply such charitable rationalisations to the mistakes of those whom we think we’re better than, or whom we don’t like, either. So if we didn’t fall for that scam but someone we didn’t like did then we’d probably ridicule them for being completely gullible! Or if we don’t work for that unscrupulous firm ourselves – so there’s no hypocrisy to defend – we’d regard that firm as scum!


Because of the way we can employ rationalisations to excuse away our own immoral attitudes or impieties in order to maintain a self-identity of a ‘good and attractive individual’ that mightn’t be completely tenable – instead of believing in a fixed mindset that we are either fundamentally good or bad individuals and that’s that, we should adopt a growth mindset where we believe we are always a work in progress. Being told that we have unconscious biases, for example, can actually make us close down because we feel a powerful need to defend our identities. But when we believe we’re not perfect, we can admit to our mistakes rather than be defensive; and when combined with a growth mindset, we can work on ourselves to better ourselves instead of thinking we’ve got nothing more to learn. We might believe we’re protecting our public reputation as a ‘good person’ when we defend our self-belief that we’re a good individual despite our inclinations or actions – but others will likely see through our arrogance, hence we’re actually doing harm to our public reputation whenever we don’t show humility.


So you’re a ‘good-ish person with room to learn and further grow’ – and with this attitude we can become better than someone who’s just ‘good’. When we think we’re perfect or think we know all we need to know, we stop seeking education and growth, which ironically means we’re far from perfect or the most well-informed we can be. Meanwhile, as learners for life – we open up to every opportunity to learn something new about ourselves and about the universe.


Woof! Overall, however, it’s safe to say that we don’t perceive the world in objective ways. We view it through our own narratives that are in part shaped by the rationalisations we come up with to explain away the things we don’t like or want to believe.

