
Post No.: 0289

 

Furrywisepuppy says:

 

Utilitarianism is an example of consequentialism because it cares ultimately about the outcomes or consequences of things rather than the methods to achieve them – in the case of utilitarianism, this means maximising the utility of our decisions, which in practical terms means maximising the net outcome of the total pleasure, happiness or well-being minus the total pain. This all seems highly reasonable – to achieve the best possible aggregate result overall.

 

But utilitarianism doesn’t care about how the utility is shared or distributed amongst a group – just that the highest possible total utility is produced for the group, which means that, although the interests of all beings are considered equally, (gross levels of) inequality in terms of the outcomes isn’t considered a problem under utilitarianism.
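To make that aggregation point concrete, here’s a minimal sketch in Python – the numbers and policy names are purely hypothetical, invented for illustration. A pure total-utility measure gives identical verdicts for an equal and a grossly unequal distribution, so long as the totals match:

```python
# Hypothetical 'utility scores' for four individuals under two policies.
# All numbers are made up purely for illustration.
equal_split = [5, 5, 5, 5]       # everyone does moderately well
unequal_split = [17, 1, 1, 1]    # one person does very well, three do badly

def total_utility(scores):
    """Classic utilitarian measure: only the aggregate matters."""
    return sum(scores)

# Both policies score 20, so a pure total-utility measure is indifferent
# between them, despite the gross inequality in the second.
print(total_utility(equal_split))    # 20
print(total_utility(unequal_split))  # 20
```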

 

It also has no objection to punishing innocent people if it’ll improve overall utility. And it may demand that we somehow predict the far-future consequences of our actions. For example, we don’t know it at the time, but what if someone we’re saving ends up growing up to become a murderous dictator who’ll kill thousands of people? It’s not likely, but this extreme example shows that it’s not always clear what the best or worst thing to do overall is when we cannot practically and precisely predict the far future. To address this problem, most utilitarians support doing whatever will most likely bring the greatest utility, which here would mean saving the person.
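One way to read ‘doing whatever will most likely bring the greatest utility’ is as an expected-utility calculation: weight each possible outcome by its probability. Below is a minimal, purely illustrative Python sketch – the probabilities and utility figures are invented assumptions, not real estimates:

```python
# Expected utility = sum over possible outcomes of (probability x utility).
# All figures below are invented purely for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one choice."""
    return sum(p * u for p, u in outcomes)

# Choice A: save the drowning person.
save = [
    (0.999999, +100),       # almost certainly a normal life saved
    (0.000001, -1_000_000)  # vanishingly small chance of a future dictator
]

# Choice B: walk away.
walk_away = [(1.0, -100)]   # a life is lost

print(expected_utility(save))       # ~99 -> still clearly the better bet
print(expected_utility(walk_away))  # -100.0
```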

 

Now should overall utility be measured at the country level or whole-world-of-humans level? What or who is included in this group that we’re trying to maximise the total good for? Just humans, with other fluffy mammals, with all other animals, with all plants too, with all biological life, with sentient robots, with regard for future species (but maybe not inanimate things like rocks or sand)? Woof.

 

Jeremy Bentham, the founder of utilitarianism, believed that the good is best characterised in terms of pleasure. Unlike Immanuel Kant, he believed that maximising pleasure and minimising pain is all that matters, morally speaking. According to Bentham, what the government is allowed and even required to do is determined by what maximises the aggregate of pleasure minus pain. The presence of pleasure or pain is precisely what makes a situation good or bad and an act right or wrong – not any notion of rights (of people, or any animal for that matter).

 

However, another problem with utilitarianism (‘the greatest good for the greatest number’), and with calculating the rational cost-benefit result or expected value in order to maximise total social utility, is how to assign a commensurable value to life, and how to objectively measure ‘happiness’ or ‘pain’ at all (in hedons, or ‘happiness points’). How can we objectively measure and compare ‘one unit of joy’ with ‘one unit of pain’ (whatever they are) to come up with a net payoff or expected-value calculation? Not everything is convertible into dollars and cents to allow a clear cost-benefit analysis. The problem isn’t the principle of trying to assign a value to a life but how to do it.
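To see why the ‘how?’ bites, here’s a deliberately naive Python sketch of a hedon-style cost-benefit calculation – every number, and especially the pleasure-to-pain conversion rate, is an arbitrary assumption, and the verdict flips depending on which rate you pick:

```python
# A naive 'hedon' cost-benefit sketch. All numbers are arbitrary assumptions.
pleasure_units = 30   # claimed units of joy an action produces
pain_units = 10       # claimed units of pain it produces

def net_payoff(pleasure, pain, pain_weight):
    """pain_weight: how many 'joy units' one 'pain unit' is deemed worth."""
    return pleasure - pain * pain_weight

# With pain weighted 1:1 against pleasure the action looks clearly worthwhile...
print(net_payoff(pleasure_units, pain_units, pain_weight=1.0))  # 20.0

# ...but weight each unit of pain as four units of joy and it looks wrong.
print(net_payoff(pleasure_units, pain_units, pain_weight=4.0))  # -10.0
```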

 

Having said that, the free market does this implicitly, and governments do it more explicitly. For instance, when the free market won’t freely reduce carbon emissions because it puts profits first, it’s essentially putting a limited monetary value on life and on the quality of life i.e. lives do apparently have a price. The rights of individuals are also sometimes violated if they’re in the minority. Measures of utility like GDP or GNP suffer from problems such as distributive justice/inequality and harming innocent people, as well as the (misguided) effort to make everything of value fit into one category (in this case, a monetary market value).

 

Should we therefore take John Stuart Mill’s refinement of utilitarianism with ‘higher’ and ‘lower’ pleasures then? He wrote, “Of two pleasures, if there be one to which all or almost all who have experience of both give a decided preference, irrespective of any feeling of moral obligation to prefer it, that is the more desirable pleasure. If one of the two is, by those who are competently acquainted with both, placed so far above the other that they prefer it, even though knowing it to be attended with a greater amount of discontent, and would not resign it for any quantity of the other pleasure which their nature is capable of, we are justified in ascribing to the preferred enjoyment a superiority in quality, so far outweighing quantity as to render it, in comparison, of small account.”

 

But if John Stuart Mill’s brand of ‘utilitarianism married with individual rights’ suggests that justice is a higher form of pleasure, then that will sometimes mean that individual rights need to be placed above the interests of the many – and if so, it would run counter to his core utilitarian philosophy(!)

 

Both Bentham and Mill reject the idea that individuals have natural rights. They both agree that the good is best understood in terms of pleasure, not virtue, and they both argue that the good (happiness or pleasure in this case) holds primacy over the right (e.g. individual rights). But whilst Mill argues that there are higher and lower pleasures, Bentham rejects the idea that pleasures differ with respect to their quality (Bentham believes that all pleasures are of the same quality, differing only in quantity).

 

Now some things, when analysed hard enough, become convoluted, or kind of paradoxical or contradictory. For example, if the argument is that government intervention should not be allowed and it should be up to individuals in a free market to decide what they want, with the fittest getting their own way, then what if the majority of people freely wish to form a government and impose interventions on themselves and others, and they’re fit enough to get their own way? Are we truly free if there are constitutional limitations on how we can limit ourselves and others?! Morality and utility aren’t always one and the same, just as individual liberty and maximising pleasure aren’t one and the same. Efficiency and justice aren’t always the same thing either.

 

If we should care about our long-term interests or outcomes rather than short-term ones, then how long into the future? The following is a hugely hypothetical scenario and pretty pessimistic about the human species (but humans are normally hyper-biased towards humans being the best species ever, for being humans, hence a relatively more pessimistic view might actually be fairer to all life overall) – but what if the best thing in the incredibly long term would be for people to naturally do all the immediately greedy things that harm the human species’ chances of survival, so that the species goes extinct as soon as possible and a smarter, new dominant species can eventually (whether in this eon or another) inherit the Earth as soon as possible? Mass extinction events have happened before, and arguably more intelligent species did eventually dominate as a result (that is, if humans are considered the most intelligent species on this planet so far, and if humans would likely never have evolved had the non-avian dinosaurs not gone extinct).

 

Since previous mass extinctions suggest that a smarter species would eventually inherit the planet if given the chance to, this hypothetical is arguably quite likely. Any future species, like species contemporary with humans right now, will share some genes that humans also possess, and it’s better to have some genes survive into the very far future than potentially none at all – perhaps because a future, vastly more intelligent species would be a far better space-faring species.

 

Humans should save as many human lives as possible because that makes for a better human world – but what if this drives up the global population (despite fewer children being born per family), which will likely cause immense global conflict and suffering in the future because of Earth’s limited resources?

 

I’m not saying that I support either of the above views – they’re just thought experiments to consider…

 

Whether for an ‘act utilitarian’ (individual acts that break the rules are permissible if they’ll likely end up with good net outcomes e.g. torturing a particular terror suspect to save more lives overall) or a ‘rule utilitarian’ (one must adhere to the rules regardless e.g. torturing suspects is never ever permissible because of the sort of society this would create overall) – a problem with utilitarian consequentialism is therefore that one can potentially argue that almost any decision is moral by invoking some hypothetical far-future scenario in which one’s actions, whatever they are, are argued to ultimately serve the greater good. (And really, all long-range predictions – even of what’ll happen next year or next day in some contexts – are hypothetical, apart from macro physical phenomena like the Sun turning into a red giant in a few billion years’ time; or maybe not even something like that if the technology could be developed, and the means and resources harnessed, to control stars?)
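The act-versus-rule contrast can be pictured as two different decision procedures. This is only an illustrative Python caricature under made-up utility numbers and a made-up rule set – real formulations of both views are far subtler:

```python
# A caricature of the two decision procedures, with made-up numbers.

def act_utilitarian_permits(expected_utility_of_act):
    """Judge each individual act by its own expected net utility."""
    return expected_utility_of_act > 0

def rule_utilitarian_permits(act, rules):
    """Judge the act by whether it conforms to a rule whose general
    adoption is taken to maximise utility overall."""
    return act not in rules["forbidden"]

rules = {"forbidden": {"torture"}}

# Suppose (purely hypothetically) torturing one suspect is predicted
# to yield a positive net utility in this one case...
print(act_utilitarian_permits(+50))                # True
# ...the rule utilitarian still forbids it, because of the kind of
# society that a general practice of torture would create.
print(rule_utilitarian_permits("torture", rules))  # False
```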

 

Or if we only maximise according to things that have great certainty, then that might mean immediate greed is justified, because who can be absolutely sure that one’s kindness will create a greater good in the future (for an act utilitarian)? ‘A bird in the hand is worth two in the bush’, as it were. Yet the concept of ‘certainty’ is itself debated. Even scientifically, certainty depends on what’s known, and we’ll never know how much we don’t know.

 

Would rape be acceptable if the pleasure it gives to the rapist outweighs the pain to the victim?! Since any conversion rate or ratio between a certain ‘pleasure’ event and a certain ‘pain’ event is subjective or personal (just as some people find eating hot chillies pleasurable overall), one could potentially reason out a range of far-future scenarios in which the consequences of one’s actions will produce the greatest net payoff. And because any conversion rate or ratio is subjective, the average conversion rate or ratio today between two things might be different in the future, thus we cannot assume that today’s rate should apply to the future when making a utilitarian forecast – what brings us a high pleasure-to-pain ratio today may not bring future generations the same amount of pleasure-to-pain, for they may learn things that we don’t yet know (just as a lower percentage of people in the UK today find eating meat pleasurable than did in the past).

 

Would it be less moral to spend a considerable sum of money on saving your own child’s eyesight compared to spending that same amount on saving the lives of a dozen children elsewhere or affording them a primary education that’ll set them up well for their future and the greater benefit this would bring to the world? Could you even have the heart to smother a crying baby to save the lives of many more people who are being hunted by enemy forces? Why is maximising pleasure or satisfaction the goal rather than, say, maximising rights, freedoms, care and respect?

 

The CIA faking a vaccination programme in Pakistan in an attempt to find Osama Bin Laden may have ended up harming US international relations with Pakistan in the bigger picture, as well as generating a general distrust of even real vaccination programmes. So it’s hard to know right now whether the way Osama Bin Laden was found and then killed was overall a good thing for the world. This example shows us that it’s incredibly difficult to predict the consequences of our actions, even when something seemed crystal clear when it was enacted.

 

Rule utilitarianism may therefore have the edge over act utilitarianism, but it still has its own controversies and contentions… then again, so do all moral philosophies, as we’ll discover as we investigate more schools of thought!

 

Woof!

 
