
Post No.: 0185

 

Fluffystealthkitten says:

 

Technology is overall a wonderful thing and I love checking out and looking forward to all sorts of new gadgets (I want a grapple gun like Batman!). Indeed, without a whole myriad of innovations, I wouldn’t be able to publish this post and you wouldn’t be able to read it where you are. Modern fluffy life largely isn’t that bad, and a lot of that is thanks to some amazing technologies.

 

But the advent of almost every major new technology in history has sparked debate over fears and concerns – including books and radio, as well as TV and mobile phones. Some of these fears have proven to be false alarms or have been overblown by the media (e.g. getting cancer from the radiation emitted by mobile phones is doubtful, although some still think it’s too early to know for sure). Yet it’s absolutely right that we should talk about these concerns with at least attempted foresight rather than risk sleepwalking into problems until it’s too late.

 

Some regulations may be necessary for potentially harmful technologies or unethical practices – we must at least attempt to foresee the unforeseeable. We need to think about the long-term consequences of any new or rapidly-developing major technology now because we need to learn from history. For example, we need to learn from our failure to think about how to deal with the waste from modern products that goes into landfill and water systems and includes toxic metals that leach into the environment. There are plastics that take many generations to break down, and even when they do break down they can cause problems for wildlife and possibly even the entire food chain in the form of microplastics (which are technically any piece of plastic less than 5mm in diameter).

 

The overuse of antibiotics, meanwhile, has led to antibiotic resistance. This example also demonstrates how each person individually thinking ‘I’m just one person being prescribed a course of antibiotics, even though it wasn’t absolutely necessary, and surely I cannot make that much difference to the world’ can lead to collective global problems.

 

If only we could turn back time, humanity would surely have done some things differently. We cannot turn back time though – but we do now have the benefit of hindsight and can learn from these mistakes and apply the lessons going forward.

 

Lots of commercial products that were later found to be damaging were even considered to be a good thing when they were first invented or discovered. Chlorofluorocarbon (CFC) chemicals, for example, were later found to be rapidly damaging the ozone layer that protects the Earth from the harmful ultraviolet rays coming from the Sun. This is why we must be wiser now and consider the risks and consequences of new substances and technologies, such as nanomachines, gene customisation or whatever else we may create and spread or share.

 

For instance, increasing portable power density (packing more and/or longer-lasting power into portable cells or batteries) is a goal that would seem to benefit a lot of other technologies – but foreseeable consequences include how such cells could effectively become small legal explosives (today’s lithium batteries already pose a risk of catching fire or exploding). All sorts of devices with microphones and cameras already pose a worrying risk to the privacy of people who never consented to the collection of their data. And we are rightly already considering the consequences of artificial intelligences taking jobs away from humans, and of driverless cars facing decision dilemmas, congesting roads to avoid parking fees, and raising the question of who should be accountable for any accidents, for example. Some of these scenarios may be many years in the future but it’s best to consider them as early as possible.

 

We’ve got to think in terms of entire product life cycles too – from the extraction and refinement of the raw materials, to the manufacture of the goods, the transport, the marketing, the retail, the product’s use, and then its repair, recycling or disposal. We know for certain now that when corporations focus solely or primarily on profit maximisation, this does not necessarily result in better long-term outcomes for human civilisation or the planet as a whole.

 

New technology firms, in particular, employ a lot of hype in order to attract major investment for their ventures, and so their products frequently don’t or cannot live up to this hype. They also frequently have an attitude of ‘release now, patch any problems later’, which can be dangerous in some contexts (e.g. who wants to be the unwitting guinea pig for a new healthcare product? Computers, or really the humans who create or implement them, aren’t always foolproof either, and they’ll tend to magnify any human errors). Releasing early and often is encouraged by some (crude) interpretations of being ‘agile’ in software development and business.

 

In under-regulated sectors, investor pressure often means that companies won’t wait for independently-verified scientific support to back their products before releasing them onto the marketplace, so that they can try to make a return on them as soon as possible. Transparency can also be a problem because they consider their methods and processes to be competitive trade secrets. It’s not about curbing technological progress but about ensuring that any major products do what they claim to do and don’t bring unfair, unacceptable or unmanageable side-effects. This often means somewhat regulating those technologies, as well as the various humans who create or shape them, for humans are certainly not infallible!

 

In the very short history of humans so far (‘very short’ in geological terms), human technologies have been mostly fine – well, debatably so (e.g. automobiles have brought locally and globally-affecting pollutant emissions and road fatalities, nuclear weapons have brought perpetual ‘cold war’ tensions, and living longer and being able to sustain a larger global population thanks to technologies in the fields of food and medicine has brought its own array of societal problems too). But a species only has to catastrophically fail once, especially the more powerful a technology is – and technologies are only getting more and more powerful.

 

The record of this very short human industrial history so far does not necessarily predict its future. It’s far more rational to be patient and to spend the cost of considering the precautions in advance than to try to deal with the potentially grave costs of any negative aftermath. We all need to care about more than just our own individual interests within our own individual lifetimes. We might still miss things that we cannot predict, but if we do predict something then we can and should be prepared for it.

 

Meow. Maybe we just need some kind of technology that can perfectly predict the far future(?!)

 


 
