
Post No.: 0493

 

Furrywisepuppy says:

 

As chewed over in Post No.: 0480, designers, engineers and inventors (and those who employ or fund them) bear huge ethical responsibilities. It’s not just about considering the life cycle and ecological impact of what we design – designers shape human experiences and behaviours too. Quite directly and intentionally so when it comes to habit-forming design that gets people hooked on using certain apps. Gender biases can creep into design – robots that take a female form or sat navs with a masculine voice are already loaded with associations that’ll shape how people treat them. People’s instincts, such as empathy, can be exploited by using anthropomorphised or zoomorphised robots. Videogame designers inherently play god in numerous ways through their world, motion and character design choices (e.g. sexualised characters). And if virtual reality worlds and technologies one day become convincing enough, users could become lost in these creations. Tech changes society by bringing forward new norms and new obligations. (The film Gattaca portrays this well.)

 

Even if designers don’t explicitly reflect on their creations from a moral standpoint, those creations will potentially be shaping moral decisions and actions, and the quality of people’s (and other organisms’) lives. So ‘with great power comes great responsibility’, as Uncle Ben said to Spider-Man. Good designers need to be great forecasters of the future world they’ll create or contribute to – from an ethical standpoint amongst others.

 

‘Mediation theory’ approaches technologies as mediators of human-world relations. The core idea is that technologies don’t simply create connections between users and their environment but actively help to constitute them. Two different starting points for discussing this human-world relationship are the ‘world side’ or hermeneutical approach (focusing on how the world is present for human beings or how the world is interpreted) and the ‘human side’ or existential approach (focusing on how humans are present in the world or how people realise their existence).

 

Technological mediations can be physical, cognitive or contextual – these can respectively create a direct physical relationship between us and the world, give information that informs our decisions and actions, or create an infrastructure that indirectly guides our decisions and actions. Designers can focus on the locus, type and domain of a mediation, i.e. where does it work, how does it work, and what does it do? There are coercive forms of mediation (visible/explicit and strong), which compel users to behave in specific ways (e.g. automatic speed limiters); persuasive forms (visible/explicit and weak), which encourage users to behave in specific ways (e.g. econometers that give feedback on a driver’s fuel consumption but can be ignored); decisive forms (hidden/implicit and strong), which require users to behave in specific ways (e.g. a building without an elevator so that people must use the stairs); and seductive forms (hidden/implicit and weak), which tempt users to behave in specific ways (e.g. placing the coffee machine in the middle of the office to lure people into more informal interactions, although they don’t have to). The most controversial form is the hidden yet strong – i.e. decisive – kind of influence.
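
As a light illustrative aside from me (not from mediation theory texts themselves), the four forms fall out of two properties – how visible the influence is and how strong it is. A minimal sketch in Python, with names and examples chosen purely for illustration:

```python
# Illustrative sketch only: the four forms of mediation as a simple
# (visibility, strength) lookup. Names and examples are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class MediationForm:
    name: str
    example: str

FORMS = {
    ("visible", "strong"): MediationForm("coercive", "automatic speed limiter"),
    ("visible", "weak"):   MediationForm("persuasive", "econometer feedback on fuel use"),
    ("hidden", "strong"):  MediationForm("decisive", "a building with stairs but no elevator"),
    ("hidden", "weak"):    MediationForm("seductive", "a coffee machine placed centrally"),
}

def classify(visibility: str, strength: str) -> MediationForm:
    """Look up which form of mediation a given visibility/strength pair maps onto."""
    return FORMS[(visibility, strength)]

if __name__ == "__main__":
    print(classify("hidden", "strong"))  # the most controversial combination
```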

 

Mediations can be anticipated (as best as possible), assessed and explicitly designed. A result could be, for instance, putting in blocks to prevent a product being used in a certain way (e.g. cars that won’t even start unless the driver’s seatbelt is on), delegating something to technology (e.g. automatic light sensors that turn on a car’s headlamps when it gets dark), or selecting desirable default settings (e.g. eco-friendly ones). Actual, specific technologies, such as autonomous military machines, can be a starting point for philosophical and ethical reflection. Overall, if there’s an ‘ethics of things’ then designing is actually doing ethics!
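
Just to make those three strategies a bit more concrete, here’s a tiny illustrative sketch from me (with made-up function names and thresholds, not from any real car’s software) of a block/interlock, a delegation and a desirable default:

```python
# Illustrative sketch only: three ways a designed mediation could be built in.

def can_start_engine(seatbelt_fastened: bool) -> bool:
    """Block/interlock: the car simply won't start until the seatbelt is on."""
    return seatbelt_fastened

def headlamps_should_be_on(ambient_light_lux: float, threshold_lux: float = 1000.0) -> bool:
    """Delegation: a light sensor, not the driver, decides when the headlamps come on."""
    return ambient_light_lux < threshold_lux

DEFAULT_DRIVE_MODE = "eco"  # desirable default: eco-friendly unless the driver actively changes it

if __name__ == "__main__":
    print(can_start_engine(seatbelt_fastened=False))      # False - engine blocked
    print(headlamps_should_be_on(ambient_light_lux=50.0))  # True - dusk detected
    print(DEFAULT_DRIVE_MODE)
```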

 

But how ethical is it to use design and affordances to influence people’s decisions and actions in the first place, such as via nudging, or via something more forceful like denying consumers the ability to easily dismantle and thus repair a product? How does this relate to fundamental values like autonomy or democracy? How much technology is created truly democratically, and how much is driven by specific social groups of stakeholders such as tech billionaires and Silicon Valley investors?

 

Where should the goals of designers come from? A democratic basis is perhaps needed here to avoid a technocracy. How far can and should designers go to influence people’s behaviours? Well, perhaps we could argue that people don’t have to buy or use what other people design or sell if they disagree with what those products do – it’s the consumer’s choice. However, try living without a computer in the modern world – some things we could previously do without one now require one, including participating in government e-petitions or even e-voting in some countries or states. Our lives can be hampered without joining certain online social media platforms, for instance. Some technologies are almost necessities rather than freely made, unhindered choices, so how should we balance the give with the take? And sometimes, even if we personally choose not to use something, other people’s choice to use that thing still affects us.

 

From another perspective, if we don’t like speed cameras because this tech restricts our desire to drive fast, then understand that modern cars, as technologies themselves, gave us the desire to drive fast in the first place. No one had a desire to drive fast until fast and perceived-safe cars, along with long, wide and straight-enough roads, were invented. In other words, our desires are shaped by technologies, hence it’d be inconsistent to accept them shaping our desires in some ways but not in others. It does present a mixed message though – to invite speeding but then curb it – so perhaps it’d be clearer if all cars were fitted with automatic speed limiters? (From yet another perspective, speed cameras only enforce a democratically-determined rule. We’re free to build or hire our own private racetracks to drive as fast as we want, but if we want to use public roads built with public money then we must morally abide by public laws. Or you could see if you can stoke up enough public support to get such laws changed democratically?)

 

What we do is co-shaped by the things we use, not just by our own furry intentions and wider social structures (e.g. a speed bump changes a driver’s intention from ‘driving as fast as one wants’ or ‘driving slow out of a responsibility towards others’ to ‘driving slow to save one’s own shock absorbers and spine’). So we can delegate things to tech that shape our intentions, duties and possibly even values. Technologies can be utilised to make us behave more ethically, or less ethically (in the case of something like peer-to-peer file sharing software).

 

Nudging, when used to try to create beneficial outcomes for users as individuals and/or for society as a whole, is sometimes called ‘libertarian paternalism’ – it’s paternalistic because it encourages desirable choices (e.g. the stairs are placed right in front of the entrance/exit) and libertarian because people can still ultimately exercise their own choice (they can still use the elevators if they want). These nudges fall into the seductive or persuasive forms of influence. We might perhaps want to somehow nudge consumers into long-term product ownership rather than live in a throw-away society. But the constant cycle of buying new things as often as possible is how businesses maximise their profits in capitalism – so the principle of nudging isn’t always implemented for our benefit but can be employed for the benefit of businesses (e.g. default ‘autoplay’ features so that we’re more likely to keep watching more videos, or the layout of the stairs and escalators in a multi-storey department store making us browse through every floor one at a time before we can get to the top floor or back down).
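
To make the default-setting point concrete, here’s a small illustrative sketch from me (hypothetical names, not any real video player’s settings): the designer picks the default – that’s the nudge – while the user keeps the final say:

```python
# Illustrative sketch only: a default acts as the nudge, yet remains overridable.
from dataclasses import dataclass

@dataclass
class PlayerSettings:
    autoplay: bool = True  # the designer's (or business's) chosen default

nudged_user = PlayerSettings()                    # most users never change the default...
opted_out_user = PlayerSettings(autoplay=False)   # ...but anyone can still opt out

print(nudged_user.autoplay, opted_out_user.autoplay)  # True False
```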

 

It can be tricky though, since our mediations can create unintended effects – putting more safety features in cars can lead to drivers taking more risks, hence traffic accidents remain high, or people start to suspect that speed cameras are just for generating revenue rather than for reducing accidents. Gamification can result in gaming a system. Even though people would ultimately have fewer freedoms if it weren’t for designers and engineers (e.g. try microwaving something without people designing microwave ovens!), some design features could be accused of reducing the freedoms of their users. They could also introduce problems related to transparency, privacy, the lack of manual or ‘opt-out’ overrides, the abuse of manual overrides, or placing too much on the shoulders of designers; something may lose effectiveness over time (such as warning labels that users learn to ignore), or users may become ever lazier, assuming that technologies will sort everything out for them. Brains might even shrink if people over-rely on technologies, just like men’s testes shrink if they use anabolic steroids (drugs are technologies too) as the major source of their testosterone instead?

 

This laziness includes abdicating our own moral decision-making and responsibilities themselves if these get delegated to technologies, i.e. doing something good not because we’re good individuals but because we’re forced or nudged to. This may still be prudent since it produces desirable standards and outcomes that might be difficult for people to uphold in practice, in a similar way to how we put guardrails on balconies rather than just trust people not to step too far. Well, if we do trust people to look after themselves but they fall off a building anyway, you can be sure that designers will be blamed regardless(!) They might not be the same people in both cases, but users will blame others rather than themselves whatever happens, i.e. restrict people’s freedoms in order to protect them and they might complain, but allow them to hurt themselves and they’ll complain about others failing to protect them(!) Many users want autonomy yet not the accompanying responsibility if things go wrong – so why not delegate it to tech? Lots of things in all kinds of contexts are overall good for us but we just don’t like doing or choosing those things. We should therefore not outright reject these technologies and their mediating roles, yet we should never stop critically questioning how we access and utilise them.

 

Maybe, like addictive drugs, something could do us harm yet enough of us will still want it? Scientific research could reveal something but people might not want to listen anyway. Something could be designed for one purpose but then be misused or abused, like a nail gun that can be used to build things by driving nails into wood, or be used to deliberately hurt people. It’s more prudent to assume that if something can be abused then it will be abused by at least someone. It’s not good enough for designers to state ‘that’s not my intention’ whilst knowing full well how their products could be abused. Diligent designers understand that user problems are really design problems – it’s not stupid users but poor design. There may also be implied but unintentional effects, like staff-monitoring software that checks on people working from home also picking up private conversations or images. Now throw in some reasonably unforeseeable and unpredictable variables, like malfunctions, children or adverse weather conditions, and something might behave in a totally unexpected way?

 

There are also cases like the Boeing 737 MAX aeroplane wrongly overriding the pilots’ controls. That was a flawed human design choice – the system was built to do exactly that – but things like the machine-learning artificial intelligences in autonomous/self-driving vehicles can generate flawed decisions (outputs) that weren’t programmed in by human designers, and they’re additionally opaque in how those decisions are arrived at too. We could open a Pandora’s box – although over-dramatised by the media, two chatbots developed by Facebook (now Meta) started to communicate with each other in a way that was more efficient for them but unintelligible to humans.

 

Many scientists are concerned about the day when machines outsmart or override their makers…

 

Woof!

 
