Post No.: 0031
Furrywisepuppy says:
We improve by knowing more and doing more (including via more repetition and practice). But it is a fallacy to believe that just because one is good at one specific task, one will necessarily be good at another seemingly related (but not actually closely related enough) task. ‘Specificity’ means that specific skills aren’t usually generalisable, although many people assume they are. This assumption is fuelled by cognitive biases such as the ‘halo effect’, where a good trait in one area leads us to automatically assume that good traits are present in other areas associated with the same person, firm or entity (e.g. a good-looking person is also assumed to be intelligent, or a brand that sponsors a charity is assumed to be virtuous in other regards too). So if you really want to be good at something – you’ve got to do that something, or at least directly and accurately simulate it.
It sounds obvious that you should do the very thing you want to be good at, but too often people assume that if someone is great at A then they must necessarily be great at B too, when this isn’t always the case. The single best predictor of how good you are at, or how much you know about, something is how good you are or how much you know within that specific domain, not your general abilities or proxy knowledge – one therefore needs to specifically acquire more knowledge or skill in the specific domain one wishes to be competent in. This really should be stating the obvious, yet many people still intuitively rely on superficial inductive logic such as ‘they’re desirable in this way so they must also be desirable in that way too’ (and then they date these people and often find out that they’re not! It also means they may ignore those who don’t seem as desirable at first glance).
We have long sought a ‘g factor’ (general factor) of general intelligence, but intelligence is a hypothetical construct that is not directly observable and has no true definitive or objective measure. A traditional IQ test is just one of many ways of trying to measure a person’s general intelligence, but it’s not a definitive test. No test or set of tests of intelligence can practically be complete and definitive in determining a person’s general intelligence. For instance, even regarding just a test of trivia – out of the trillions of possible questions in the world (a pool that keeps growing as history and the total body of human knowledge grows), across all languages and cultures, time and space, one will only encounter or get asked a small fraction of them in one’s lifetime – therefore being good at quizzes will just depend on which questions out of the trillions you are specifically asked. Quiz shows tend to focus on what they consider ‘general knowledge’, but their focus on particular areas is typically arbitrary or biased depending on their target audience (e.g. questions about the monarchs of England for quiz shows in England, rather than, say, questions about the tribal monarchies of Africa).
Therefore although a high general intelligence quotient (IQ) score or general problem-solving skills will help a bit, they won’t be anywhere near as powerful as having specific knowledge and experience in the target subject or task (e.g. being good at Sudoku won’t necessarily help you to become better at performing any other task, even one that involves numbers, like accounting). Being good at one thing doesn’t necessarily mean being good at something else. So you’ve got to learn and practise the very things you want to become better at. ‘Brain training games’ are certainly not bad for the brain, but you may need to curb your fuzzy expectations regarding how much they’ll help you become better at other tasks.
These ‘brain training games’ are typically stripped-down, sanitised, one-dimensional puzzles that don’t represent real-world puzzles, problems or tasks such as gardening, cooking, fixing something, playing a team sport, learning a new subject, language or instrument, creative writing, an arts project, looking after a child or volunteering, for example. It’s analogous to isolated bicep curls on a bench compared to compound exercises using free weights, such as deadlifts, which more closely represent some real-world movements and tasks. Indeed, playing a full sport such as tennis, golf or basketball, or dancing (which is an excellent activity), is even more complex, multi-faceted and real-world, and is therefore better for challenging and training the brain than some marketed ‘brain training games’. But ultimately, if you want to get good at a particular thing then there’s no real substitute for doing that very thing.
Being a great cyclist at elite level won’t necessarily mean one will be a great runner at elite level unless one specifically trains hard at running too. And this is why there is no objectively definitive ‘ultimate athlete’ in the world – every top athlete is only a specialist in his/her own dedicated sport(s). Likewise, no academic is an expert in every field – every academic is only a specialist in his/her own dedicated field(s). No one is the best at everything merely by virtue of being the best at one or a few specific things.
The skill set for being a start-up entrepreneur isn’t the same as for running a giant hierarchical corporation. There are so many areas in medicine that being trained as a pharmacist alone won’t, for instance, make one perceptive enough about mental health issues, even if one has possibly been staring a mental health case in the face for years. Current artificial intelligences are incredibly narrow in what they can do (e.g. one that can identify certain types of lung infections has to be retrained from scratch to be able to identify other types of lung infections). Even top flat 400m runners won’t immediately feel comfortable running the 400m hurdles because of the stride pattern that the hurdles require.
Aesthetic bodybuilders do poorly against dedicated strongmen/women in competitions of strength (even though bodybuilders try to aesthetically embody the ultimate image of strength!) A top powerlifter with great squat strength is highly unlikely to be a top high jumper, even though many of the same muscles are employed – i.e. strength isn’t exactly the same thing as power, and one’s power-to-weight ratio matters too. In many sports, a greater bodyweight is costly because one must constantly haul it around, and muscles require oxygen to operate, hence endurance suffers with greater muscle mass. A greater physical size can cause more drag or air resistance too, which can work against oneself in speed events. Therefore being excellent in one physical attribute can sometimes directly limit other physical attributes.
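To make the power-to-weight point a little more concrete, here’s a minimal sketch (with entirely hypothetical figures for a heavier powerlifter and a lighter high jumper – none of these numbers come from real athletes) showing how absolute output and relative, per-kilogram output can rank the same two athletes in opposite orders:

```python
# Hypothetical figures only, to illustrate why absolute strength/power and
# power-to-weight ratio can rank two athletes in opposite orders.

athletes = {
    'powerlifter': {'bodyweight_kg': 120, 'peak_power_w': 3000},
    'high_jumper': {'bodyweight_kg': 70, 'peak_power_w': 2400},
}

for name, stats in athletes.items():
    ratio = stats['peak_power_w'] / stats['bodyweight_kg']
    print(f"{name}: {stats['peak_power_w']} W absolute, {ratio:.1f} W/kg relative")

# The powerlifter 'wins' on absolute output (3000 W vs 2400 W), but the
# high jumper 'wins' on W/kg (about 34.3 vs 25.0), and it is the relative
# figure that matters most when the task is launching your own bodyweight
# over a bar.
```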
There isn’t any evidence to suggest this kind of trade-off for cognitive attributes though – there’s just the opportunity cost that time spent learning one thing logically cannot be spent learning another, and we each have a maximum of 24 hours per day to spend; although some people live longer than others (having said that, this doesn’t mean that those who’ve lived longer have necessarily spent every second of their lives learning new things!) These and many other factors must be considered when training for a particular sport or goal.
There are also cases where a person is good at doing something in one context but not in other contexts (e.g. performing a task in private versus performing the same task in public), hence training in the specific context one cares about is often crucial too.
And therefore if you want to test for a particular ability then there’s no better way than testing specifically for that very ability in its proper context (e.g. don’t test an interview candidate’s ability to remember strings of numbers in a quiet room if what you really want to know is whether they’re good at remembering instructions in a noisy environment). This all relates to scientific research too (e.g. just because something apparently works in mice or in a Petri dish, it doesn’t mean it’ll necessarily work in humans). Specific tests for specific conclusions are fine (e.g. testing track driving skill via driving on a track, or even via an extremely accurate and immersive simulator with full sensory feedback – though any variable that’s missing or inaccurate will proportionately make the test less reliable as an indicator of real-world performance).
Woof. (Going off topic, I have mixed feelings about mice or other animals being used in experiments to find things that benefit humans – it’s a tough dilemma with no generalisable answers. Maybe I’ll think about this subject a bit more. For now, I’m adamant that they shouldn’t be used for testing cosmetics, but for potential life-saving treatments it remains a genuine dilemma.)