
Post No.: 0875

 

Furrywisepuppy says:

 

Designers need to think hard about not just how their technologies may be used, but how they may be abused. What we intend or hope for may end up facilitating execrable behaviours, like how apps or devices designed to track people or objects so that loved ones always know where they are can also be exploited by stalkers and cyberstalkers. Some technologies can be beneficial for society yet also utilised for selfish, destructive ends.

 

Laws tend to lag behind the pace of technological advancement too. Like howl ethics, crime and punishment play out inside or regarding virtual worlds if our virtual avatars or holograms become more like everyday extensions of ourselves, and more actions and consequences become as meaningful in the virtual world as in the non-virtual world?

 

In videogames, many people routinely witness human virtual characters essentially getting killed, often in gruesomely graphic ways, without batting an eyelid – yet they might recoil if they witness human virtual characters getting raped. People might care more if it were their very own avatars that got ‘killed’, deleted or banished from a virtual platform though, even though no non-virtual person is harmed in a physical sense and the user could just create a fresh avatar.

 

Virtual items aren’t real, physical items if they get stolen, but they still afford a certain utility for their owner even though they’re not physical assets. And of course they’ll carry monetary value if they can be traded.

 

There are most probably umpteen other regulatory questions we’ve not yet even anticipated concerning the virtual ‘metaverse’ – alongside the usual harassment, bullying, security, data misuse and privacy issues we can presently anticipate. And they’ll directly or indirectly affect real humans, human feelings and human lives, in the physical world.

 

There are also plenty of regulatory questions concerning the fields of artificial intelligence and robotics we need to answer in society – and answer soon. There’s a vast array of different dilemmas to consider for AI programs, including when teaching driverless or self-driving cars, like whether they should sacrifice the passengers of the vehicle or some road users if those were the only two options available? (And does it matter how old, famous, rich or whatever those people are?) How about how facial recognition systems may be used in surveillance capitalism or in state surveillance? For robots, Asimov’s Three Laws of Robotics can (somewhat) work in practice in fiction, but perhaps not in reality.
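
 

To illustrate how unavoidable these value judgements are, here’s a minimal, purely hypothetical sketch (the rule, the weighting and the scenario are all invented for illustration – they’re not from any real vehicle’s software). Whichever branch the code takes, some human had to decide the policy in advance.

```python
# Hypothetical illustration only: a crude 'collision choice' rule of the kind a
# designer would have to commit to in advance. The policy below (minimise the
# number of people harmed) is just one of many possible human moral choices.

def choose_manoeuvre(options):
    """Pick the manoeuvre with the lowest expected harm.

    `options` maps a manoeuvre name to the list of people expected to be harmed
    by it. Whether passengers and pedestrians count equally, and whether age or
    anything else matters, is a choice the code cannot make for us.
    """
    def expected_harm(people):
        # Every person counts equally here; weighting by age, fame or wealth
        # would be a different (and equally human) moral choice.
        return len(people)

    return min(options, key=lambda name: expected_harm(options[name]))


# A contrived two-option scenario like the one described above:
scenario = {
    "swerve (sacrifice the passengers)": ["passenger_1", "passenger_2"],
    "brake only (hit the road users)": ["pedestrian_1", "pedestrian_2", "pedestrian_3"],
}
print(choose_manoeuvre(scenario))  # -> "swerve (sacrifice the passengers)"
```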

 

These are questions that science (and in turn humans, never mind AI) cannot objectively answer; and any democratically-agreed answers may change depending on the time and place/culture too.

 

By the way, from a rational perspective, autonomous robots cannot be expected to never cause accidents, but arguably all they need to be is safer than the best alternative, i.e. humans, who are hardly perfect themselves. Yet, in the case of autonomous vehicles, should the standard to beat be the average human driver, a ‘highly competent’ human driver, or other safer modes of transport like trains or planes? Or why shouldn’t we aim for zero crashes?
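
 

As a rough back-of-the-envelope illustration of why the choice of baseline matters, here’s a tiny sketch using made-up crash rates (the figures are invented, not real statistics) – the very same hypothetical autonomous system can look ‘safe enough’ against one bar yet ‘not safe enough’ against another.

```python
# Invented, purely illustrative crash rates (crashes per million miles driven).
# Real figures vary hugely by country, road type and how a 'crash' is counted.
baselines = {
    "average human driver": 4.2,
    "'highly competent' human driver": 1.5,
    "rail travel (rough per-mile equivalent)": 0.1,
    "zero-crash target": 0.0,
}

autonomous_rate = 2.0  # hypothetical rate for some future autonomous system

for name, rate in baselines.items():
    verdict = "clears the bar" if autonomous_rate < rate else "falls short"
    print(f"vs {name}: {verdict} ({autonomous_rate} vs {rate} crashes per million miles)")
```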

 

Discussions about AI do tend to demand that AIs should be held to higher standards than humans when it comes to having the best interests of humans at heart. Yet not all humans have the best interests of humans at heart(!) Well, different people believe in different moral philosophies or conceptions of what’s right or wrong, fair or unfair. We ourselves can change our minds from one moment to another, hold inconsistent standards for different situations, have preferences for now versus our long-term interests, and we can collectively culturally shift in our moral scruples. So humans don’t even agree with each other when it comes to ethical dilemmas. This may mean that how AIs behave will depend on which humans set their goals and then supply the data to train them.

 

One expression of this is – despite all these thought experiments regarding trolley problems and the decisions that autonomous cars may face – would a manufacturer really want to sell autonomous vehicles that save as many people as possible versus prioritising saving its own passengers (i.e. its own paying customers)? Would you buy a car that sacrificed you for the many?

 

Humans set the goals or objectives and then machine-learning AIs find their own ways to fulfil them. This has often led to biased decision-making systems. These objectives could also from the outset be narrow private-interest objectives rather than wider public-interest ones – to serve the few who would gain enormously from the activity over the greater social good for the present or future. It’s a problem that’s related to an ‘I don’t care how you get it done, just get it done’ attitude (and ‘it’ usually means ‘whatever maximises a firm’s profits’ – we’ve seen this with social media platform algorithms).
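
 

Here’s a toy sketch of that attitude expressed in code (the post names, metric names and numbers are all invented) – if the only objective handed to the optimiser is engagement, standing in for profit, then it’ll happily pick whatever maximises that, because no term anywhere accounts for the wider social cost.

```python
# Hypothetical recommender objective: the optimiser is told only to maximise
# predicted engagement (a stand-in for profit). Nothing in the objective
# mentions accuracy, well-being or any wider public interest, so they simply
# don't count.

candidate_posts = [
    {"id": "measured_news",   "predicted_engagement": 0.31, "estimated_social_cost": 0.05},
    {"id": "outrage_bait",    "predicted_engagement": 0.87, "estimated_social_cost": 0.90},
    {"id": "misleading_clip", "predicted_engagement": 0.74, "estimated_social_cost": 0.80},
]

def objective(post):
    # 'I don't care how you get it done, just get it done': only engagement
    # appears here; estimated_social_cost is never consulted.
    return post["predicted_engagement"]

print(max(candidate_posts, key=objective)["id"])  # -> "outrage_bait"
```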

 

But if an objective shouldn’t be sought ‘at all costs’ then what should the limit of those costs be? Humans need to set quite precise objectives and boundaries too because computers don’t understand context or nuance like humans do. Yet human-codified laws recognise that being deliberately vague can help to cover various contexts and to catch unforeseen scenarios that ought to fall within the spirit of those laws, compared to quite specific rules that might be easily circumvented by unexpected and deviant behaviours.

 

In the field of developing standalone robotic units with high degrees of autonomy, and possibly even sentience and consciousness – and similar to deciding when the responsibility for a child’s actions becomes his/her own rather than his/her parents’ – at what point shall the responsibility for an error be attributed to an AI as an individual entity and identity, as opposed to the human or corporation who developed or initiated it? Or should it never shift, no matter how sentient or conscious a machine might arguably become in the future?

 

And to be consistent – if blame does shift then credit must too, thus any earnings from an AI’s decisions should also be attributed to the AI as a discrete individual entity rather than the company that developed it! If synthetic robots develop the right machinery to exhibit consciousness then we’ll need to consider the rights of robots too.

 

The ‘uncanny valley’ effect is that robots seem creepiest when they resemble humans or other organic creatures quite closely but not closely enough. We’re not bothered as much when they clearly look like toy or industrial robots, and we won’t be as perturbed if they look, move and behave absolutely perfectly like the animals they’re trying to simulate.

 

Robotic pets that are made to look and act as realistic as possible do seem to boost the well-being of dementia patients. But is there a problem with letting people get emotionally attached to artificial machines?

 

Whilst on the topic of pets – some people choose to clone their recently deceased pets. But cloned pets are apparently more prone to disease, the process requires extracting eggs from female animals and using other females as surrogates, and there’s presently a high failure rate, which means so much stress for those poor surrogate mother animals. Personality isn’t solely genetically determined anyway – the rest comes from nurture, life experiences and other environmental factors. Thus a cloned pet mightn’t turn out like the original anyway. Wuff!

 

We normally find restorative technologies/operations fair and compassionate, like getting an artificial arm to replace a lost one – but not augmentative technologies/operations, like perhaps implanting a third mechanical arm on top of a pair of healthy arms.

 

So is using medical technology for enhancement, like drugs that help people to pass exams, okay? What about cloning technologies, eternal life, choosing the gender of one’s baby, living donor organ transplants, or embryonic stem cell and induced pluripotent stem cell usage? The ability to modify genes and/or their expression has its worthy purposes, like therapeutic uses for people born with muscular dystrophy; as well as its dark sides, like gene doping in sports.

 

And things like powered exoskeletons for therapeutic use can be easily modified to become essentially weaponised. Boston Dynamics may have objected to paintball guns being attached to its robotic quadrupeds – but it’s inordinately naïve to think that many others haven’t envisioned something much deadlier with such creations already! This has happened with small aerial drones – commercial drones can be adapted for combat or terrorism, like retrofitting them to hold small explosives that can be delivered to targets.

 

We’re already aware of the threats posed by killing power being concentrated into ever-smaller form factors, like nuclear fission weapons that can portably fit inside backpacks. It’s not anticipated that pure fusion weapons can be developed, but we never know how inventive we can be in weaponising things. Certainly the largest-yield nuclear weapons by far are thermonuclear, which utilise both nuclear fission and fusion to generate an explosion.

 

There’s a far greater risk of a sudden, calamitous, singular mass incident due to an inherent software bug or the intentions and actions of a miscreant hacker too. With ever more powerful tech, a single unintended error or intended act of terror becomes ever more potentially destructive. One accidental musket round discharge isn’t as terrible as one accidental nuclear explosion, which mightn’t even be as potentially damaging as the failure of one centralised AI that controls important infrastructure nationwide.

 

The power of quantum computing is anticipated to make current encryption technologies obsolete in ‘the quantum apocalypse’, which has implications whenever you message, shop, bank or really do anything online; although scientists are working on ‘post-quantum’ cryptography to safeguard against that challenge.
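
 

One mitigation being explored as part of that ‘post-quantum’ effort is hybrid key exchange – deriving a session key from both a classical shared secret and a post-quantum one, so that an attacker would have to break both. The sketch below only illustrates the combining step using Python’s standard library; the two input secrets are random stand-ins for the outputs of real key-exchange algorithms, which aren’t shown.

```python
import hashlib
import hmac
import os

# Stand-ins for the two shared secrets a real system would negotiate: one from
# a classical key exchange and one from a post-quantum key-encapsulation
# mechanism. Here they're just random bytes for illustration.
classical_secret = os.urandom(32)
post_quantum_secret = os.urandom(32)

def derive_hybrid_key(secret_a: bytes, secret_b: bytes, context: bytes) -> bytes:
    """Combine both secrets so the session key stays safe unless BOTH are broken."""
    # A simple HKDF-style extract-then-expand step using HMAC-SHA256.
    extracted = hmac.new(b"hybrid-kdf-salt", secret_a + secret_b, hashlib.sha256).digest()
    return hmac.new(extracted, context + b"\x01", hashlib.sha256).digest()

session_key = derive_hybrid_key(classical_secret, post_quantum_secret, b"example session")
print(session_key.hex())
```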

 

Regarding the commercialisation of space, like space travel, space mining and colonising other planets – commercial entities are focused on private profits (often with little concern about spinning truths or spreading negative externalities in this quest), whilst publicly-funded scientific endeavours are about discovery and public interests. The economics do need to make sense for these exploration projects though, thus we need to find the right balance between making money and serving shared interests. It requires the right regulations, and these need to be right from the beginning. Commercial air travel has been regulated from roughly the start, yet the industry still failed for a long time to care sufficiently about the problems of pumping greenhouse gases high into Earth’s atmosphere. We can already see the growth in space junk orbiting the planet since private companies have entered this space. Kessler syndrome, or the runaway cascade of space debris, is a threat to anything that orbits the Earth, like the satellites that enable us to use our banking services or social media every day.

 

…This has become a post about a range of foreseeable threats and conundrums posed by advancing technologies, like in the areas of virtual worlds and virtual property, AI, robotics, augmentative technologies, weaponising inventions and space.

 

Now whatever we believe are the right freedoms and legal restrictions to set for each of these areas and more – since this is an ethics and philosophy post – it’s arguable that our stances should be consistent, otherwise we’re just selecting whatever suits our own benefit or convenience in each individual case rather than truly applying sophisticated, considered contemplations (like arguing for a laissez-faire industry environment when we’re the ones with a shareholding stake in a related business yet arguing for strong consumer protections when we’re more likely going to be the consumers, or that dominance is fine when we’re in a position to push others around as employers but everyone should be given more equal power when we’re the ones being exploited as employees, etc.). We (ideally) don’t want justifications after our personal stakes have transpired to bias us, but morally just considerations before we know how we might reap the gains and/or costs. (Realistically, however, we’ll only each start to care about such debates once we know which position we’ll be in!)

 

And we certainly should aim to make these considerations and then act appropriately upon them before any major catastrophes occur from our hesitancy or imprudence.

 

Woof.

 
