Post No.: 0419
Reality is an illusion constructed inside your own brain. Everything you perceive is a personal interpretation, shaped by your genes combined with your present and, importantly, your past experiences of the sensory information received through your limited senses. Together, these create your own version of reality.
Although it will be very close and there will be general agreement (for all humans belong to the same species and experience the same world), this reality may not be exactly the same as everyone else’s. Your experience of reality is just one interpretation of it. One’s perception of reality at any given moment has less to do with the information one receives via one’s senses and more to do with what’s already inside one’s brain – one’s internal model.
For example, seeing involves far more than whatever we receive into our eyes. We often notice this when novices try to draw or paint from still life or life models – instead of really observing and translating onto the paper or canvas what’s in front of them, they draw or paint partly from what’s inside their internal models of the world, such as that people’s eyes sit high up in the face, or that water is simply blue. (This may be one reason why people with aphantasia (people who cannot visualise images in their mind’s eye) aren’t really disadvantaged when it comes to drawing – they try to draw what they actually see rather than what their minds assume. Then again, not many people without the condition can draw exactly what they visualise in their mind’s eye anyway. It may also show that visualisation and creativity aren’t the same thing.)
Professional artists therefore sometimes employ techniques to help them look at whoever or whatever they’re drawing or painting with ‘fresh eyes’, such as occasionally viewing their subjects via a mirror or even turning the canvas upside-down, because they understand that their minds may have settled on some visual assumptions – something may look subjectively right yet not be objectively right. Of course, out of our own personal and intimate subjective experiences can come great art – photorealistic art, although an objectively accurate visual depiction of reality, often lacks something that reveals more about the subjective human experience and condition.
Much of the experience of seeing thus happens not in our eyes but inside our brains. The same goes for hearing and our ears, smelling and our noses, tasting and our mouths and noses, and physical touch and our skin and hair – our sensory experiences are mostly constructed inside our brains. (Much of what we sense and decide happens below or beyond our consciousness too – read Post No.: 0321 to understand that consciousness is just the tip of the iceberg.) Whatever arrives at our senses, be it sound waves or photons of light, it all gets converted into electrochemical signals, which are somehow then reconstructed inside our brains and turned into an ‘experience’ – into some kind of personal furry interpretation of reality.
We’re constantly cross-referencing the information from all of our available senses in order to make a coherent and consistent construction of our internal model of the world. Humans are considered ‘cognitive misers’ – (over)generalised categories, associations and heuristics are employed when assessing new instances of people, things or events because human minds try to be efficient to save time and energy, but this can result in errors of judgement. People’s internal models hold rules like ‘what goes up must come down’, along with their various stereotypes or categories of groups of people or things. But these are often crude generalisations, because some things can, for all practical intents and purposes, leave Earth’s gravitational influence and never come back down; and of course stereotypes often speak only of people’s own lack of finer knowledge of those who look or sound a certain way or come from a certain place.
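For readers who code, the ‘cognitive miser’ trade-off can be caricatured in a few lines. This is purely a hypothetical sketch (the function names and scenario are invented for illustration): a cheap rule that ignores its input is fast and usually right, while a more refined rule costs more to build but handles the exceptions, such as objects launched beyond Earth’s escape velocity (roughly 11.2 km/s).

```python
ESCAPE_VELOCITY_MS = 11_186  # approximate escape velocity at Earth's surface, in m/s

def comes_back_down(launch_speed_ms):
    """The cheap heuristic: 'what goes up must come down'. It ignores
    its input entirely -- fast, frugal, and usually right."""
    return True

def comes_back_down_refined(launch_speed_ms):
    """A refined rule: fast enough objects escape and never return
    (ignoring air resistance and other bodies, for simplicity)."""
    return launch_speed_ms < ESCAPE_VELOCITY_MS

print(comes_back_down(12_000))          # True  -- the heuristic misfires here
print(comes_back_down_refined(12_000))  # False -- the exception is handled
print(comes_back_down_refined(100))     # True  -- and everyday cases still work
```

The heuristic is wrong only for rare edge cases, which is exactly why minds (and programs) get away with it most of the time.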
In theory, the signals received via one sense would make little sense unless they could be cross-referenced with other senses – for instance, we learn depth perception partly through touch, by reaching out and discovering that some of the things we see are nearby while others are far away; taste is affected by what we smell, or even by what colours we see, based on the tastes associated with those colours; and what we hear is shaped by the context of what we see. The brain also tries to synchronise all of this information despite the different reception and processing timings of different senses. Sound takes longer than light to reach us, and a normal human brain dedicates more of itself to processing visual than auditory information – yet because the visual system is far more complex than the auditory system, visual signals actually take longer to process inside the brain than auditory ones (hence a nearby starter gun produces faster starting reactions than a nearby changing traffic light if you want the fastest reaction times in a race). You’ll also empirically react faster to a touch on a hand than to a touch on a foot, because the signal has further to travel from the foot – even though your brain will hide this fact.
So the brain collects some sensory information from the outside world (as well as from the inside world, i.e. inside our own bodies, such as when our fuzzy tummies rumble), and then decides on a story of what happened. Because processing sensory information takes time, we’re actually always living slightly in the past rather than strictly in the present, even though we feel we’re living in the ‘absolute now’. We therefore technically perceive a delayed version of reality too.
We can think of the brain as a mental representational system – it makes simplified, idealised maps of things, people, places and experiences that capture not all of the detail but just the bits the observer finds important. In other words, lots of information is ignored or discarded. The copies of the outside world stored inside our brains aren’t like video recordings, but patterns of neural activation that represent patterns perceived in the real world.
We learn by trying to make sense of the world, so if we witness something that goes against our personal internal model of the world, such as something apparently floating without assistance, or things disappearing into nothing, then it intrigues us. We want to reconcile what we’ve ostensibly just witnessed with what we already know, to relieve this dissonance. And even if we know that something must be a trick – as in magic tricks or visual illusions – trying to work out how it’s done is a huge source of fascination for most of us. (And maybe because we know that magic tricks are entertainment, we can experience something disorientating without feeling disturbed, which we likely would if we witnessed the same things outside of the context of a magic trick! This is perhaps comparable to watching violence and bloodshed in a movie compared to witnessing the same scenes without knowing that the people were just acting.)
We have 5 basic senses – sight, hearing, smell, taste and touch. But we’re also said to have other senses, like proprioception, balance, acceleration, temperature, pain, time and hunger – the total number of discrete senses is currently disputed. Some animals other than humans and dogs have further physical senses, such as for detecting the Earth’s magnetic field, fine differences in the humidity of an environment, or greater ranges of the light or sound spectrums – and who knows what psychological senses (along the lines of a sense of agency or familiarity) they might have too, without these creatures being able to tell us?
The different working senses of different animals or individuals (e.g. some individuals are deaf or have synaesthesia), and the different balances of senses among animals that share the same senses (e.g. dogs rely mainly on their senses of smell and hearing, while humans rely more on their sight), result in different internal models and perceptions of the world. We also know that we can look but not see, listen but not hear, sniff but not smell, eat but not taste, or touch but not feel.
There are many types of agnosia, where the sensory organs appear to be working normally but the brain is unable to process and ultimately interpret the information it’s receiving. This includes the inability to properly recognise faces, speech, written text, moving objects, or even static objects at all. These are some of the exact challenges that artificial intelligences have been progressively overcoming – i.e. whether for natural organisms or artificial machines, and whether via organic learning or machine learning, you need far more than just eyes/cameras, ears/microphones, nerve endings/touch sensors and other physical sensors to make sense of and navigate the world. Again, how we perceive the outside (and inside) world is a construction of reality made inside our own brains/computers. It shouldn’t really be surprising that artificial machines and natural organisms are appearing more similar in many ways (including the use of heuristics, or the opacity of how some complex AIs come to their decisions, which echoes the opacity of how we often come to our own decisions) – they and we are both ultimately physical machines operating according to the same physical laws in the same physical universe.
In the context of this post, we’ve been talking about ‘versions of reality’ in terms of the interpretations of our raw sensory inputs (e.g. the sensation, then perception, and qualia of the colour ‘orange’). When a lot of people talk about people’s ‘versions of reality’ in everyday conversation though, they’re generally commenting on people’s social arrogance and delusions (e.g. someone thinking they’re the greatest person in the world despite a lack of evidence to back that up, or believing that something is true despite hard evidence to the contrary).
Well even in this context, it shows how people hold different versions of reality – and even more clearly so! This is where some people’s versions of reality deviate from everyone else’s far more than others’ do, and where inconsistencies are the main bugbear. Most people will, say, consistently perceive a certain sound when an accompanying mouth movement is presented at the same time, and then consistently perceive a different sound when a different mouth movement accompanies it, despite the exact same raw sound being used in both scenarios. Yet some people can, for instance, inconsistently believe that other people are dishonest if they spin the truth, while still considering themselves honest when they spin the truth in exactly the same way(!) The former context relates to the sensory perception of reality, while the latter relates to the social perception of reality, i.e. where biases and fallacies exist – although it’s important to note that these aren’t mutually exclusive domains because they both relate to our psychology.
We might therefore need to investigate the psychology, or ‘psychology’, of ultra-advanced AIs one day? Well we already know that AIs can exhibit biases too, such as from using skewed training data in machine learning – equivalent to when a human hasn’t experienced enough diversity in his/her own life so far, which can lead to prejudicial judgements and discriminatory behaviours. As artificial intelligences develop, perhaps we’ll not only learn more about them but, in parallel, more about us as organic intelligences too?
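For a concrete, if simplified, picture of how skewed training data produces biased outputs, here’s a minimal hypothetical sketch (the data, groups and ‘model’ are all invented for illustration – real machine-learning bias involves far richer models, but the principle is the same): a toy model that simply learns the majority historical outcome per group will faithfully reproduce whatever skew sat in its training data.

```python
from collections import Counter, defaultdict

# Hypothetical, skewed historical data: for identical applications, one
# group was approved far more often than the other in the past.
training_data = (
    [("group_a", "approve")] * 90 + [("group_a", "reject")] * 10
    + [("group_b", "approve")] * 30 + [("group_b", "reject")] * 70
)

# 'Train' by counting outcomes per group.
counts = defaultdict(Counter)
for group, outcome in training_data:
    counts[group][outcome] += 1

def predict(group):
    # Predict the most common historical outcome for this group.
    return counts[group].most_common(1)[0][0]

print(predict("group_a"))  # approve
print(predict("group_b"))  # reject -- the model reproduces the historical skew
```

The ‘model’ here hasn’t judged anyone’s actual merit; it has merely memorised and amplified the pattern in its diet of examples – much as a person’s internal model reflects the limited sample of the world they’ve experienced.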
Woof. Do you think our lack of objective experience is disturbing to know or conversely wonderful? Please use the Twitter comment button below to tell us what you, personally, think.