
Post No.: 0624

 

Fluffystealthkitten says:

 

Some might disagree with this, but deep learning (DL) is a subfield of artificial neural networks (ANN), which in turn is a subfield of machine learning (ML), which in turn is a subfield of artificial intelligence (AI). Artificial intelligences are (sets of) algorithms, but not all algorithms are artificial intelligences.

 

Any kind of non-biological intelligence is essentially classed as artificial intelligence. I first came across the term ‘AI’ in the context of computer-controlled enemies in videogames – usually when it was pointed out how dull-wittedly they behaved! But machines have been getting smarter every year, and they play key parts in our modern lives and economies in ways that many people might not realise. It’s not just the future – it’s here today, including whenever you use a search engine, have a video or product recommended to you by an app, or notice that a camera can figure out where your face is.

 

In brief, machine learning relates to computer algorithms that can learn to perform tasks better by themselves through their own experience and use of data. Artificial neural networks and deep learning attempt to mimic the networks of neurons in biological brains, and the difference between the two is the depth of the layers of ‘neurons’ from input to output, with deep learning having more layers; although some experts find this differentiation by number of layers arbitrary and thus use the two terms interchangeably.
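
To give a rough feel for what ‘depth’ means here, below is a minimal Python sketch of data flowing through a stack of layers. Everything in it – the sizes, the random (untrained) weights – is made up purely for illustration; real deep networks just stack many more of these layers and learn their weights from data.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a common activation function

# Toy network: 4 inputs -> two hidden layers of 8 'neurons' -> 1 output.
# The weights are random stand-ins; training would adjust them.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 1))]

x = rng.standard_normal(4)  # one input example
for w in layers:
    x = relu(x @ w)  # each matrix multiply + activation is one 'layer'
print(x)  # the network's (untrained) output
```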

 

A bias in reporting only success stories makes much of the public believe that current AIs can magically solve every problem imaginable if only we collect and process as much data as possible. There’s arguably an over-hype of many technologies right now, not just AI. (Or perhaps the over-hyping of new technologies is simply a perennial thing through the ages.) This isn’t just about the optimism but also the pessimism and fear though – there arguably hasn’t been much progress towards creating T-800-type Terminator AGIs so far. Having said that, there’s been lots of progress in creating aerial drone and ground-type Hunter-Killer automated and autonomous weapons systems! And AI capabilities can suddenly advance at an exponential rate.

 

Artificial Narrow Intelligences (ANIs) can only complete very specific tasks, and the AIs we have as of writing are of this type. Artificial General Intelligences (AGIs) would have adaptable intelligences comparable to human intelligence. And Artificial Super Intelligences (ASIs) would surpass human intelligence and ability.

 

‘Supervised learning’ in AI relates to tasks where the intended goal is explicitly stated, such as ‘identify all the cats from all the images’. The algorithm is first taught with some training data that’s already been labelled by humans – ‘these images contain cats and these other images don’t’ – before the algorithm’s own judgement is tested with some new images that it hasn’t seen before. It can often take thousands of pieces of labelled training data to teach an algorithm until it’s reasonably accurate on its own, and it’ll likely never be 100% accurate. But humans aren’t 100% accurate judges either. Indeed, all machines need to do is fail less often than humans do. For example, it’s not like human drivers don’t crash a lot of vehicles. (For more about autonomous vehicles, please check out Post No.: 0550.)
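
To make that ‘label, train, then test on unseen examples’ loop concrete, here’s a minimal scikit-learn sketch. It uses a built-in handwritten-digits dataset as a stand-in for the cat images, since the actual data and model here are my assumptions rather than anything specific:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labelled examples: images of handwritten digits standing in for the
# 'cat / no cat' images (each image already comes with its answer).
X, y = load_digits(return_X_y=True)

# Teach on one portion, then test on images the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Accuracy on unseen data - typically high, but not 100%.
print(model.score(X_test, y_test))
```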

 

‘Unsupervised learning’ relates to asking an algorithm to find any interesting patterns it can for itself from a bunch of data, such as analysing lots of videos and seeing what patterns it then perceives (for which it might discover that videos containing cats are quite popular!).
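
By way of contrast with the supervised sketch above, here’s a minimal unsupervised one (again with the digits data standing in for the videos): k-means is simply asked to group similar items together, without ever being told what any of them are.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

# No labels are given this time - the algorithm is simply asked to find
# structure (here, 10 clusters) in the data on its own.
X, _ = load_digits(return_X_y=True)
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
print(clusters[:20])  # which discovered group each image was placed in
```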

 

‘Transfer learning’ relates to how an AI trained in, say, detecting whether any cats are present in images might find it relatively easy to learn to also detect dogs in images using a relatively small set of dog training images, compared to if it had to learn the dog-detection task from scratch.
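
In practice, one common recipe for this is to reuse a pretrained network’s early layers and retrain only its final layer on the new, smaller dataset. Here’s a minimal PyTorch/torchvision sketch along those lines – the pretrained network stands in for the hypothetical cat detector, and the dog data is left entirely hypothetical:

```python
import torch.nn as nn
from torchvision import models

# Start from a network already trained on a large image dataset
# (a stand-in for the hypothetical cat detector).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned general-purpose visual features...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the final layer, to be retrained on the (much
# smaller, hypothetical) dog dataset: 'dog present' vs 'no dog'.
model.fc = nn.Linear(model.fc.in_features, 2)
```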

 

‘Reinforcement learning’ is similar to rewarding and/or punishing real-life pets in order to get them to do more of the things you want them to do and less of the things you don’t.
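
One classic way this looks in code is Q-learning, where a running estimate of ‘how good each action is in each situation’ gets nudged up by rewards and down by punishments. A toy sketch with made-up states, actions and rewards:

```python
import numpy as np

# Tabular Q-learning: the 'pet' keeps a score (Q) for each action in each
# situation, nudged up after rewards and down after punishments.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate, and how much the future matters

def update(state, action, reward, next_state):
    # Move the old estimate towards: reward + best expected future value.
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

update(state=0, action=1, reward=1.0, next_state=2)   # a 'treat'
update(state=0, action=0, reward=-1.0, next_state=0)  # a 'telling-off'
print(Q)
```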

 

At a highly abstract level, these approaches to learning are similar to the various ways that humans learn too. People can be taught and then go on to further their learning for themselves, can learn entirely from their own experiences, can find it faster to learn a task after mastering a closely-related task, and can modify their own behaviours based on what’s pleasurable or painful.

 

There are numerous ethical and other issues related to AI that we shouldn’t overblow yet must continually be mindful of. Lots of tech now claims to utilise AI, but there are good and bad implementations of it. Creators of AIs must routinely audit for and fight against biases in their systems, such as facial recognition systems that work better for light-skinned people than for dark-skinned people, or applicant screening systems that favour candidates who went to private schools or came from wealthier neighbourhoods.

 

Again, humans are hardly perfect judges either. Humans can be biased too, not just AIs. On top of that, humans make inconsistent or ‘noisy’ decisions depending on who’s making the decision or what mood they happen to be in, whilst computers are consistent (although this might mean consistently biased or wrong) and aren’t influenced by whim.

 

Indeed, machine learning AIs have basically been learning from and reinforcing pre-existing human biases through how the world has been presented to them (via the data). Feeding in more inclusive data, ‘zeroing out’ biases in algorithms, a more diverse workforce that can spot biases more readily, and better transparency and auditing processes that systematically check for biases, are some proposed solutions.
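
As a tiny illustration of what a systematic bias check can look like, the sketch below simply disaggregates a model’s accuracy by subgroup – all of the data in it is invented purely for illustration, and a large gap between groups would be the cue to dig deeper:

```python
import numpy as np

# Hypothetical stand-ins for real demographic labels, ground truth and
# model predictions.
group = np.array(["A", "A", "B", "B", "B", "A", "B", "A"])
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])

# Accuracy per subgroup: a large gap is a red flag worth investigating.
for g in np.unique(group):
    mask = group == g
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy {accuracy:.2f}")
```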

 

Because deep learning or artificial neural network systems learn for themselves instead of having every instruction programmed into them by human programmers, we don’t always fully understand how AIs make the decisions they do – although there are often ways to sufficiently work around this problem. The tasks that current AI technologies are being asked to perform are limited and bounded (until AGIs potentially arrive), and there are different ways we can supervise these systems – yet it’s still often quite opaque how artificial neural networks come to the precise outputs they do. This can be problematic when trying to diagnose how a system came to a potentially biased decision, for instance, and therefore when trying to spot whether it is being biased and to correct for such biases.
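
One family of partial workarounds is to probe a trained model from the outside. For instance, scikit-learn’s permutation importance shuffles each input feature in turn and watches how much the model’s accuracy drops – a rough gauge of which inputs the model actually leans on (the built-in medical dataset below is just a convenient stand-in):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then probe which inputs it relies on by
# shuffling each feature and measuring the accuracy drop.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
print(result.importances_mean)  # higher = the model leaned on it more
```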

 

This is again kind of like how human agents make decisions in many cases, from a neuroscientific perspective with present understanding – even though it must be clarified that current artificial neural networks operate quite differently to biological brains despite trying to mimic them (not that we fully understand how biological brains work yet). Like AIs, humans often struggle to explain why they do the things they do too, especially when they ‘follow their gut feelings’. This lack of explanation from an AI can reduce user trust in it, just like if a human judge in court were to deliver a verdict without being able to explain how she/he came to that decision.

 

An adversarial attack on an AI might involve trying to fool it into making an incorrect decision. Just a minor modification to an image, one that’s imperceptible to the human eye, can make an AI suddenly think a bird is a hammer, for instance! Humans and present computers process images quite differently – hence why computers can process QR codes far better than humans can, yet computers can be fooled by minuscule manipulations of other images that wouldn’t ever fool humans.
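
The classic recipe behind such attacks is the ‘fast gradient sign method’: work out which direction of change in each pixel most increases the model’s error, then move every pixel a tiny, near-imperceptible amount in that direction. A toy PyTorch sketch, with a made-up linear model standing in for a real image classifier:

```python
import torch
import torch.nn as nn

# Fast gradient sign method (FGSM) in miniature: nudge every input value
# slightly in whichever direction most increases the model's error.
model = nn.Linear(16, 2)  # a made-up stand-in for an image classifier
x = torch.rand(1, 16, requires_grad=True)  # a toy 'image'
label = torch.tensor([0])

loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.01  # small enough to be near-imperceptible on a real image
x_adv = x + epsilon * x.grad.sign()  # the adversarial example
print((x_adv - x).abs().max())  # every value moved by at most epsilon
```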

 

When applied to words, phrases and sentences in emails, this principle can be used to defeat spam filters; or it can be used to fool autonomous vehicle cameras into failing to detect a road sign that humans can clearly see, for example. So there are real dangers. Work is constantly being done to combat such attacks, but it’s another area of crime, spam and fraud in general where criminals, jokers and other nefarious actors will continue to trade blows in an arms race with those who are fighting against them – and this will probably last for as long as the technology and/or motivation to defraud exists.

 

Rather than attacking existing AI systems, AIs are often used for detrimental purrposes themselves – such as for generating fake product reviews or fake political comments online, fake videos (deepfakes) that show things that the real people being imitated didn’t do, and mass surveillance used not just for the sake of protecting people’s security but to keep citizens oppressed and in line with a government’s wishes or to help a corporation better extract profits off its users under an unhealthy bargain.

 

Military AIs could trigger ‘flash escalations’ with each other and produce unpredictable, cascading effects that are too rapid for us to halt – similar to the ‘flash crashes’ that bedevil the stock markets and the ‘flash spikes’ in prices sparked by competing bot traders.

 

There are also concerns about AI automating, in particular, the tasks – and in turn jobs – that poorer people in developing countries typically do, such as agriculture, textiles and low-end manufacturing. These countries’ citizens are trying to climb the economic ladder from poverty to prosperity, just to find that those bottom rungs of the ladder have disappeared. As a very rough rule of thumb – tasks that require only about a second or two of mental thought and that are highly repetitive and routine are the most amenable to automation. However, many reports predict that AI will create many new jobs too. Only time will tell which predictions will be correct.

 

We’re sometimes incredibly impressed when a particular human can perform some task, like computing sums, at a speed that most other humans can’t – but we forget that computers can already do that far faster and more accurately, hence that skill won’t make a great career for that person (except in the entertainment industry I suppose?) All skills are valuable in life, but some skills are more valuable in the economic markets than others – and the types of skills that contemporary computers aren’t anywhere near as capable as humans at performing include social skills. Fluffy pet dogs and cats are still a million times better than pet robots for the vast majority of people right now, so the jobs of my sistren and brethren are safe… although the lack of drool, pee, poo, moulted fur, dander, and inevitable death, are some advantages for robotic pets. Meow!

 

Will self-learning and self-adapting AIs even eventually take over many programming or coding jobs themselves one day?!

 

In the meantime, being a lifelong learner will partly ameliorate this problem, as we keep upskilling in this ever-changing world of work. And what could arguably work alongside this is a universal basic income that is contingent upon those who are able to learn continuing their personal educational development, so that they can continue to contribute to the economy and the taxes that will help pay for all this. Political interventions that raise all segments of society – particularly the most disadvantaged – and restrain the ever-widening inequality of wealth, and in turn of opportunity, will be necessary. Tech taxes may need to be higher if automation takes more jobs away than it creates, and/or if those new jobs aren’t as well-paid or as secure as those they replace, or if more of the workforce will need highly technical qualifications to feed this industry itself.

 

Regulations will need to somehow protect the interests of society while at the same time not stifling progress. But we’re generally against or indifferent about things unless we’re a part of them – so we might call for greater regulations on AI technologies that invade our privacy, for instance, as consumers of tech; but if we one day find ourselves working in that industry, we might start to argue against external regulations that curb our ability to maximise our own profits!

 

Meow!

 

Comment on this post by replying to this tweet:

 

Furrywisepuppy39

 
