Post No.: 0550
Furrywisepuppy says:
With ever-advancing technologies, the fleshy human meatbag will increasingly become the limiting factor. For example, vehicles could potentially drive at much higher average speeds on the roads safely… as long as every single vehicle was autonomous/self-driving.
Artificially created robots might inherit the galaxy in the long run? They are currently undergoing vastly faster rates of evolution than natural complex organisms can achieve. If humans are the ultimate design – sculpted to perfection by either God or natural selection – then no one would be able to imagine, design and build artificial machines that can do a better job than them in particular tasks… but we can.
We must note however that ‘artificial general intelligences’ (AGIs), which are considered equivalent to humans in intelligence, aren’t close to reality yet; although most experts believe it’s just a matter of when, not if (and some believe it’ll be sooner rather than later). There are currently lots of ‘artificial narrow intelligences’ (ANIs) that possess narrow functional areas of ability. And perhaps eventually, as a consequence of breaching the technological singularity and unleashing a runaway reaction of self-improvement cycles, there’ll be ‘artificial super intelligences’ (ASIs) that’ll surpass human intelligence across all fields – which could spell doom for humans in the same way that humans selfishly treat relatively less intelligent animals as pests, slaves or mere resources!
…But one step at a time before I can have my cool robot army. Right now, the development of autonomous vehicles already poses a lot of philosophical dilemmas that need answers. Let’s start with – if who’s responsible for something is dependent on who’s in control of it, then who’s in control in the case of an autonomous vehicle?
If we do something bad whilst sleepwalking then we weren’t in conscious control and therefore aren’t regarded as legally responsible for those actions. However, can we therefore excuse unconscious biases, like racism, in situations where snap decisions must be made?
Alternatively, isn’t the person behind the wheel responsible if such a vehicle crashes precisely because they weren’t in control of the situation, i.e. they didn’t apply enough due care and attention as a backup to the autonomous system?
Alternatively again, a human may set a goal but not control, and therefore not be responsible for, how the vehicle tries to get there (e.g. sets the destination, but the vehicle runs over 56 badgers to get there, or fails to get there altogether). Another way of looking at this is if you told a human taxi driver where you wanted to go, and then an accident occurred – would that accident be your fault just because you set the goal or destination?
Alternatively once more, would one be responsible because it’s ultimately oneself who decided to take that journey and who made the choice to delegate control to a machine in the first place? Another way of looking at this is that when a human field marshal issues orders to human soldiers, the former takes a huge responsibility for calling those orders.
Deep learning is used in autonomous vehicles and other applications. But these AI systems can be like opaque ‘black boxes’ in the sense that they don’t give enough, or any, explanation as to how they come to the precise decisions/outputs they do. Like with human agents – we can’t fully peer into how they arrive at the decisions they make.
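To make that ‘black box’ point a bit more concrete, here’s a minimal, hypothetical sketch in Python – the network architecture, weights and sensor inputs are all invented for illustration and don’t represent any real vehicle’s system. The point is that the decision is just arithmetic over learned numbers, with no human-readable reason attached to the output.

```python
# Hypothetical sketch: a tiny 'trained' network deciding whether to brake.
# Its only 'explanation' is the raw numbers in its weight matrices.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned from millions of driving examples;
# a real system would have millions of them, which is even more opaque.
W1 = rng.normal(size=(4, 8))   # 4 sensor inputs -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # 8 hidden units -> 2 actions: [brake, continue]

def decide(sensors: np.ndarray) -> str:
    """Return an action, but no reason: just matrix multiplications."""
    hidden = np.tanh(sensors @ W1)
    scores = hidden @ W2
    return ["brake", "continue"][int(np.argmax(scores))]

# e.g. [distance to object, relative speed, lane offset, light level]
print(decide(np.array([12.0, 3.5, 0.1, 0.8])))
```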
Does this make autonomous vehicles essentially independent agents on our roads? Can we firstly ever accept that artificial technologies can have agency like natural beings have? Well, aren’t we ourselves perhaps essentially just biological machines – but machines nonetheless?
Answering this question is fundamental because agency and responsibility are considered linked, and if such technologies make their own decisions then it’s vital to consider how trust between these technologies and us will change. Or maybe, whereas a human taxi driver chose that profession, an AI did not – so even if it has agency, it’s essentially being forced to do that job as a slave? Still, the issue of trusting machines remains.
Will they have a status like pets or be fully responsible agents themselves? Owners can train their pets but not their autonomous vehicles in the same way though, so will this affect who’s responsible? Woof.
Then again, even if a pet were un-trainable, the un-coerced decision to have a pet, and that particular pet, would still have been the owner’s, thus the owner would still be responsible for whatever the pet does; and thus the same regarding anyone who decides to buy and use a particular autonomous car.
Will it be like it’s the parents’ choice to have children, even though the children become their own autonomous agents? With humans, various jurisdictions have chosen (arguably arbitrary) ages when responsibility transfers from the parents to the child – so at what (arbitrary) point of sophistication will responsibility transfer from whichever human(s) to an AI as its own autonomous agent?
It arguably boils down to who or what is really making the key decisions. Instead of the car’s algorithms or the designated primary passenger or people inside the car – perhaps it’s the responsibility of the designers of the vehicle, the programmers who initiated the AI, the manufacturers and/or even the state government that sets what’s road legal? (Post No.: 0493 underscored the responsibilities of designers.) If it’s the responsibility of everybody on that list – because if even one link in that chain were broken then a crash involving an autonomous vehicle wouldn’t have been possible – then what will that mean in practice? (In any context, it’s always easier to express what we should aim for, such as to ‘be careful’ when doing something that must be done – but the difficulty and real-world wisdom is in knowing what this means in concrete practice?!)
If the universe is completely deterministic then, for a machine learning AI, everything comes down to its initial programming and then the training data that teaches it – which is somewhat parallel to how everything would ultimately come down to the initial conditions of the universe along with the laws of physics that act upon them – and this already poses questions of free will and responsibility even for human behaviours, never mind self-driving cars!
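As a toy illustration of that determinism point – a sketch assuming a trivial learning model, not a claim about how production self-driving systems are actually trained – the same initial ‘programming’ (code plus seed) and the same training data produce exactly the same learned behaviour every single time:

```python
# Toy illustration: identical 'initial conditions' + identical training data
# -> identical learned behaviour, every run.
import numpy as np

def train(seed: int, data: np.ndarray, targets: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(seed)
    w = rng.normal(size=data.shape[1])   # the 'initial conditions'
    for _ in range(1000):                # the 'laws' acting upon them
        grad = data.T @ (data @ w - targets) / len(data)
        w -= 0.1 * grad
    return w

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

run_a = train(seed=42, data=X, targets=y)
run_b = train(seed=42, data=X, targets=y)
print(np.array_equal(run_a, run_b))  # True: same start -> same behaviour
```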
Other dilemmas with autonomous vehicles (and human drivers too by the way!) include the ‘trolley problem’. There are a number of different variations of this thought experiment to test different scenarios and to test for any logical inconsistencies. But generally, the majority of humans would opt to spare as many lives as possible, the young over the elderly, and humans over other animals. And by a smaller majority, most humans would opt to spare pedestrians over the passengers of a runaway trolley/vehicle, females over males, and people deemed of higher status (e.g. wealthier, not criminal) over people deemed of lower status.
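Just to show what it would even mean to ‘program in’ such preferences, here’s a purely hypothetical sketch – every weight below is an assumption invented for illustration, not a recommendation or anyone’s actual policy, and choosing any such numbers is of course itself a moral decision:

```python
# Purely hypothetical: scoring crash outcomes with invented weights.
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    is_pedestrian: bool

def outcome_cost(people_harmed: list[Person]) -> float:
    """Lower is 'preferred' - but encoding any weights is itself a moral choice."""
    cost = 0.0
    for p in people_harmed:
        cost += 1.0                               # every life counts...
        cost += 0.5 if p.age < 18 else 0.0        # ...the young weighted extra
        cost += 0.3 if p.is_pedestrian else 0.0   # ...pedestrians over passengers
    return cost

swerve = [Person(age=70, is_pedestrian=True)]
stay   = [Person(age=30, is_pedestrian=False), Person(age=8, is_pedestrian=False)]
print("swerve" if outcome_cost(swerve) < outcome_cost(stay) else "stay")
```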
Another factor is that most people would rather do nothing than intervene and be directly responsible for anyone’s death. If one pushes the lever, even if this saves the most lives possible, then one will feel more responsible for those who died.
There are cultural differences, and there are no correct answers. But more people would push a lever to divert a runaway train so that only one person dies rather than five than would push a person off a bridge to stop a runaway train so that this person (who happens to be heavy enough for the sake of this thought experiment) dies rather than five on the track. This suggests that the distance or disconnection to the person one interacts with is also key, due to dehumanisation. Hence kills or assassinations committed via remote-controlled drone strikes from far away are easier to execute than face-to-face kills, even though, morally, a life should be regarded as a life regardless of the method of murder. Humans certainly didn’t evolve with the anticipation of being able to kill others from such long ranges, so this is perhaps a contributory reason why empathy reduces severely the greater the distance people are from their victims – when they cannot see the fear on their faces and therefore cannot empathise with their sense of terror.
However, in contrast, the majority of humans would decide not to harvest the organs of one healthy person to save the lives of several other people in need of organ donations – even though the logic, on the face of it, is similar i.e. choosing the option that saves the greatest number of people(!) This appears irrationally inconsistent.
Some may argue that unintended bad consequences (regrettable by-products) are more acceptable than intended and instrumental ones in the pursuit of a greater good – yet people are more likely to push a lever that drops a heavy person onto a track so that only one person dies rather than five than to directly push a heavy person onto a track to stop a runaway train so that this heavy person dies rather than five on the track, when the intents are essentially the same!
Most people would feel emotionally guiltier for physically manhandling a person to their death than for killing people who are more distant and anonymous. It’s indeed easier for people who aren’t diagnosed as psychopaths to cold-bloodedly kill or let people die when they cannot see the whites of their victims’ eyes and/or they don’t know their names – distance and dehumanisation therefore situationally increase the chances of anyone behaving in an essentially more psychopathic way. Our behaviours aren’t just determined by our biology or personalities but by our environment and current contextual factors too.
Farmed animals aren’t usually given (friendly or humanised) names because this can make them harder to later slaughter. Sticking a pair of googly eyes on inanimate objects can conversely make us care more about them!
Some people also argue that inactions/omissions that cause harm aren’t as bad as actions that cause harm. But if intention and action/commission are critical for moral decisions or culpability then can neglect therefore be excusable? That is, should not even bothering to conscientiously think about the possible long-term consequences of one’s inactions – so that a resultant consequence will end up being ‘unintended’ – be acceptable as a solid moral defence? Should not getting oneself into situations where one could possibly help others, so that one can say it was impossible to have ‘reasonably acted’ to save another person’s life, be acceptable as a solid moral defence?
The level of certainty of outcome matters too – such as torturing a suspect bomb-maker when there has been a tip-off that a bomb is somewhere in a crowded city versus when there isn’t the feeling of an imminent threat.
Overall, there’s plenty of evidence suggesting that our personal morality isn’t based on rationally consistent thought but on emotional gut reactions – but we will attempt to rationalise them after we’ve come to a decision.
Just like different countries can have different socio-political ideals and norms (such as individualism, collectivism, laissez-faire, egalitarian) – what’s considered ‘morally right’ between killing a furry dog or an elderly person, a homeless person or a rich person, for instance, can differ between different people.
And if even humans cannot answer such moral questions unanimously – then how can autonomous vehicles from all manufacturers be expected to?!
Woof! There won’t ever be objectively right answers to all of these questions and dilemmas – but as long as we can accept beforehand which answers we should all adhere to then the legal issues surrounding autonomous vehicles will be workable. It’s like anything else where there are no objectively morally correct answers, such as the rules of golf or which side of the road to drive on – as long as we can all accept what the rules are (even if we might not all agree with them) then we can all play without arguments or the system will work without chaos. There’ll still inevitably be pickles though because we’ll surely have failed to anticipate something, which is why such laws, like any others, should be subject to constant review, amendments and evolution as we learn more.