
Post No.: 0301

 

Furrywisepuppy says:

 

‘Embodied cognition’ is about the (non-neural) body being integral to the mind too. A mind and body are also shaped by, and shape, the wider environment. The notion of a ‘mind-body dualism’ is therefore untenable because the mind and body are integrated – essentially as one rather than inherently separate. Our thoughts grow out of our physical bodies, the way we’re situated in the world, and the way we interact with the physical environment. We’re not disembodied souls temporarily ‘possessing’ or inhabiting physical bodies. Our furry thoughts, feelings, capabilities and limitations intrinsically grow out of this embodiment.

 

Not everything we animals do requires consciously and explicitly thinking about doing it, or thinking about how to do it, in order for us to do it. (In fact, if we’ve been walking successfully for years and then start to think too much about how to walk, we can stumble and perform less fluidly.) Thus when thinking about cognition, we must take the (non-neural) body morphology and the environment into account too. Embodiment could simplify the problem that the mind is left with to solve. The point is that the brain doesn’t have to think of everything the body must do in order to accomplish every physical task – a body that has physically evolved for performing specific tasks within specific environments will enable those tasks to be more-or-less automatic or effortless (e.g. walking, or continually putting one foot in front of the next and catching one’s fall).

 

There are lots of examples in nature where the physical structure of an organism seems to have evolved to efficiently solve certain problems so that the animal or plant doesn’t need to rely (too heavily) on the work of its/a brain (e.g. birds in nature are more ‘gliders’ than ‘fly-by-wire’, although fly-by-wire aircraft trade inherent stability for greater manoeuvrability) – and we can use this knowledge when creating robots as well i.e. instead of expecting or demanding the CPU or ‘brains’ of a robot to do all of the work in making that robot able to walk, the robot’s body (e.g. its leg length, maximum knee flexion range, maximum hip rotation range, centre of mass) could be designed to make walking more natural. Indeed, human joints may have the maximum flexion ranges they have, and no more, because they evolved for efficiently doing whatever humans did to survive.

 

There are passive dynamic walking machines that show that embodied robots don’t necessarily need a computing brain at all to walk – this demonstrates that actually building such a physical system could possibly tell us more about how we as animals walk than trying to introspectively think about how we mentally get to swing one foot forwards then catch our balance, etc. Embodied robots need some computing power, but not as much, yet they look more effortless and natural in their motion relative to robots that don’t utilise embodiment and only have a ‘centralised mind’ to compute and process everything. Some machines don’t even necessarily need what we’d call a brain or a mind in order to perform certain functions in the world (e.g. a windmill).
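
To make the ‘no computing brain at all’ point a bit more concrete, here’s a toy sketch of the simplest passive dynamic walker model – a ‘rimless wheel’ (a hub with rigid spoke ‘legs’) rolling down a gentle slope. This is my own illustrative simplification, not a model of any specific machine, and the parameter values are made up for the demonstration.

```python
import math

# Toy sketch of the simplest passive dynamic walker: a 'rimless wheel' (a hub
# with rigid spoke 'legs') rolling down a gentle slope. No controller computes
# anything -- the step-to-step map below settles into a steady 'gait' purely
# from the body's geometry plus gravity. Parameter values are illustrative
# assumptions, not taken from any particular machine.

g = 9.81      # gravity (m/s^2)
l = 1.0       # leg (spoke) length in metres
alpha = 0.2   # half the angle between adjacent legs (radians)
gamma = 0.05  # slope of the ground (radians)

def next_speed(v):
    """Step-to-step map using a simple energy balance.

    v is the angular speed just before a leg strikes the ground. Each impact
    loses kinetic energy (the cos(2*alpha) factor), while each step down the
    slope gains potential energy. (This simplified map assumes the wheel
    always has enough speed to carry it over the apex of each step.)
    """
    speed_sq_after_impact = (math.cos(2 * alpha) * v) ** 2
    energy_gain = 4 * g * math.sin(alpha) * math.sin(gamma) / l
    return math.sqrt(speed_sq_after_impact + energy_gain)

v = 0.8  # arbitrary starting speed (rad/s)
for step in range(15):
    v = next_speed(v)
    print(f"step {step + 1:2d}: speed just before impact = {v:.4f} rad/s")

# Whatever the starting speed, v converges to the same steady walking rhythm:
# a limit cycle produced by morphology plus environment, with zero computation.
```

Change the leg length or the spoke angle and the steady speed changes – which is the whole point: the ‘design’ of the body is doing work that a controller would otherwise have to do.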

 

But I want to interject something here about having mental intentions when it comes to having minds – we don’t attribute intentions to the weather when it ruins our picnic because it ‘just does what it does’, yet living creatures are attributed with intentions whenever they act, even though we could also arguably be said to ‘just be doing what we’re doing’; just being part of the physical order and environment. Animals (and arguably plants) do use desires, hopes, beliefs or certainly perceptions to perform complex tasks. Yet we shouldn’t assume that this necessarily means something complicated must be going on inside of us, such as making an explicit plan about what’s out there in the environment and what we need or want to do.

 

All in all, the brain and body co-evolved together, in accordance with the environment. The body doesn’t have to do all of the work it needs or wants to do if it exploits environmental factors, such as thermals for flying birds. We can also shape the environment in order to make certain tasks easier for us to do, such as building a bridge over a river – this interaction with the world (re)structures the environment itself and can make a complex task easier to complete.

 

Separating the ‘higher level cognition’ tasks (e.g. working out where one wants to go and what route to take) from the ‘lower level cognition’ tasks (e.g. keeping on the pavement and avoiding falling over), by using physical sensory systems to deal with the lower level processes, frees up our minds to concentrate on the higher level processes (i.e. the more abstract and conceptual levels such as getting from A to B), and as a result is more efficient. Great strides in robotics have been made with the use of multi-layer control systems (e.g. in driverless cars) by separating the ‘higher level cognition’ tasks (e.g. navigation) from the ‘lower level cognition’ tasks (e.g. keeping on the road and avoiding collisions).
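
As a rough illustration of that separation – and only an illustration, with invented class and function names rather than any real driverless-car software – a layered controller might look something like this in outline:

```python
# Illustrative sketch only (invented names, not a real vehicle stack): the
# point is the separation of concerns, where a slow, abstract planner decides
# the route while fast, reactive controllers handle lane keeping and obstacles.

class RoutePlanner:
    """'Higher level cognition': works out how to get from A to B."""
    def plan(self, origin, destination):
        # A real system would query a map; here the route is simply canned.
        return [origin, "main_road", "bridge", destination]

class LaneKeeper:
    """'Lower level cognition': stays on the road without consulting the plan."""
    def steer(self, lane_offset_m):
        # Simple proportional correction back towards the lane centre.
        return -0.5 * lane_offset_m

class CollisionAvoider:
    """'Lower level cognition': brakes for obstacles, overriding everything else."""
    def throttle(self, obstacle_distance_m):
        return 0.0 if obstacle_distance_m < 5.0 else 0.3

def drive_one_tick(route, sensors):
    """Each layer runs independently; only the route is 'thought about' up front."""
    steering = LaneKeeper().steer(sensors["lane_offset_m"])
    throttle = CollisionAvoider().throttle(sensors["obstacle_distance_m"])
    return {"next_waypoint": route[1], "steering": steering, "throttle": throttle}

route = RoutePlanner().plan("A", "B")
print(drive_one_tick(route, {"lane_offset_m": 0.4, "obstacle_distance_m": 20.0}))
```

The ‘higher level’ planner only ever deals in abstractions like waypoints, while the ‘lower level’ loops react to the raw sensor readings many times a second without ever needing to know where the journey ends.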

 

It does pose the question though – if we want to make artificial intelligences that think and behave exactly like humans, must they also grow and develop like humans from a child into an adult, both cognitively and physically? To have minds like humans, artificial intelligences may need bodies like humans’ and to live in environments like those humans live in too – with their minds and bodies co-evolving and co-developing, and these embodied minds together shaping the environment and being shaped by it? After all, a human being who’s raised in a very different environment to the one most humans are raised in will have his/her development ‘malformed’ too (e.g. abandoned children in orphanages that don’t provide enough social warmth and stimulation). Raising a robot in a lab might be like raising a child in a lab rather than in the ‘real world’?

 

So maybe when thinking about ‘what it means to be human’, we shouldn’t just look at what’s going on inside our own brains but also at our bodies as a whole (our brain, like all of our other organs, is really just another part of our body after all), as well as the external manipulable environment, which includes the tools and creations we’ve designed and utilise (for and in the short and long term) – or simply what happens when we’re talking out loud, making bodily gestures, writing down notes or calculations on paper, or drawing sketches that we ourselves immediately inspect as we think i.e. thinking isn’t just done inside our heads. The body and environment kind of become an extension of our working memory (or of the computing cache for a robot).
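
Here’s a toy sketch of what using the environment as an extension of working memory might look like – my own construction, not anything from the post: an agent explores a grid by leaving marks in the world itself rather than keeping a map in its head, a bit like ants laying pheromone trails or a person jotting notes on paper as they think.

```python
import random

# Toy sketch (my own illustration): an agent explores a grid without keeping
# any internal map at all. Instead it leaves marks in the environment itself --
# like chalk arrows, breadcrumbs or ant pheromones -- and reads them back
# later. The world does the remembering; the 'brain' only needs a one-step rule.

random.seed(0)
SIZE = 5
world_marks = set()   # marks live in the *environment*, not inside the agent
position = (0, 0)

def neighbours(cell):
    x, y = cell
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in steps if 0 <= i < SIZE and 0 <= j < SIZE]

for _ in range(200):
    world_marks.add(position)  # scribble on the world as we pass through
    unmarked = [c for c in neighbours(position) if c not in world_marks]
    # One-step rule: prefer cells the environment says haven't been visited.
    position = random.choice(unmarked or neighbours(position))

print(f"cells covered: {len(world_marks)} of {SIZE * SIZE}")
# The agent never stored a map, yet the marked-up environment now *is* the map.
```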

 

We use our bodies and we restructure the world to promote better thinking – we are seldom able to solve complex problems just by pondering inside our heads, and so it’s a combination of our brains, our wider body and making use of the environment that helps us to think. The ‘naked brain fallacy’ is believing that the brain alone takes full credit for all intellectual achievement, when in fact the brain is just one player on a busy stage of props whose contributions are complex and profound – this further highlights that intellectual achievement isn’t just a function of one’s own biology but also a function of the available opportunities and the wider environment (materially, technologically, socially, culturally, inspirationally, etc.) one has access to and is exposed to. Woof.

 

Our immediate environment is an extension of our minds, hence a messy and disorganised desk will have some effect on our quality of thinking and task focus. This isn’t to say that one must constantly tidy up as one is working, but it’s probably a good idea to tidy up periodically or try to keep tidy in the first place where possible.

 

So we should not think of minds or brains as disembodied computers in charge of machines made of meat and bone, but rather as completely integrated with our physical capacities and constraints, as well as our interactions with the world. Control and information processing aren’t just restricted to the mind or brain – but does this mean that, if a ‘decentralised’ mind is regarded as an abstract entity that extends beyond the brain and into the rest of the environment/world, then everyone’s minds are essentially interconnected, or at least sometimes overlap with each other’s?

 

Our bodies and the environment help our minds to think. We use computers, robots and artificial intelligences themselves to expand our own capabilities and overcome any inherent obstacles with our mental or physical limitations. Our own personal minds of course still play a major role in helping our bodies to walk and navigate the environment. In fact, some scientists believe that the overwhelming purpose of brains is to make organisms move (and obviously to ultimately try to move in ways that aid survival and reproduction), hence arguably why chess computers that beat humans are relatively easy to make compared to robotic hands that rival human movement and control. (Read Post No.: 0056 for more.) And it’s not merely a matter of the mechanical side of things – we know how skeletons, joints and muscles move and can replicate them artificially, but the main problem is replicating the nervous system (the brain and sensing). When we speak and communicate, we use muscles – and indeed seeing, hearing and sensing things in the environment are all useless unless we can move in response to or in anticipation of them.

 

Maybe computers that think like humans cannot therefore be disembodied (like HAL 9000 or Holly)? Indeed, if a human were born without, or lost, their limbs (and sexual organs and sexual drives!), his/her psychology would be severely affected by this sort of existence. Having no ageing, social learning, etc. would matter too. Our body and the environment we sense and interact with are an integral part of how we perceive, think, learn and act.

 

Whatever the case, when we build robots, we can test whether our theories of how humans work are correct or close according to how closely these artificial robots mimic human abilities, movements and behaviour – we could, for example, infer whether humans most likely use embodied cognition or not? And once we’ve built an artificial robot from scratch that is like a human in every single way (albeit ‘artificial general intelligences’ are nowhere near the horizon yet) then we would potentially be able to claim that we fully understand how humans work too? (It’s like how, if we could build a full-size solar system from scratch, then we could potentially say for certain that we fully understand how solar systems work (or at least that particular solar system). This still might not be true though because being able to give birth to a puppy from scratch doesn’t mean we’ll automatically fully know how one works! Sometimes we can fully solve problems without fully understanding how something works, like in the case of epidemiologist John Snow working out that a particular water pump was the common source of deaths without needing to know how cholera actually spread. Well, another key sign of understanding something is the predictive power of our models – and will we one day be able to accurately predict the future of human civilisation, just like we can (somewhat) model and accurately predict the weather for the next few days?)

 

Now your brain can be alive but your conscious mind no longer present (i.e. a brain in a true or very deep state of coma), hence a question is: what parts of the brain, or what brain operations, are necessary and sufficient to produce what we’d call a ‘mind’? Being alive seems necessary but not sufficient to have a mind. The same question can then be asked of when a computer or robot will start to have a mind too…

 

Woof!

 


 
