
Post No.: 0031


Furrywisepuppy says:


We improve by knowing more and doing more (including via more repetition and practice). But it is a fallacy to believe that just because one is good at one specific task, one will necessarily be good at another seemingly related (but not actually closely related enough) task. ‘Specificity’ means that specific skills aren’t usually generalisable, although many people assume they are. This is due to cognitive biases such as the ‘halo effect’, where good traits in one area are automatically assumed to mean that good traits are present in other areas associated with the same person, firm or entity, e.g. a good-looking person is also assumed to be intelligent, or a brand that sponsors a charity is assumed to be virtuous in other regards too. So if you really want to be good at something – you’ve got to do that something or at least directly and accurately simulate it.


It sounds obvious that you should do the very thing you want to be good at, but too often people assume that if someone is great at A then they necessarily must be great at B too, when this isn’t always the case. The single best predictor of how good you are or how much you know about something is how good you are or how much you know regarding that specific domain in question, not your general abilities or proxy knowledge – one therefore needs to specifically acquire more knowledge or skill in the specific domain one wishes to be competent in. This really should be stating the obvious, yet many people still intuitively rely on superficial inductive logic such as ‘they’re desirable in this way so they must also be desirable in that way too’ (and then they date these people and often find out that they’re not! It also means they may ignore those who don’t seem as desirable at first glance).


We have long sought a ‘g factor’ (general factor) of general intelligence, but intelligence is a hypothetical construct that is not directly observable and has no true definitive or objective measure. A traditional IQ test is just one of many ways of trying to measure a person’s general intelligence, but it’s not a definitive test. No test or set of tests of intelligence can practically be complete and definitive in determining a person’s general intelligence. For instance, even regarding just a test of trivia – out of the trillions of possible questions in the world (a number always growing as history and the total pool of human knowledge grows), across all languages and cultures, time and space, one will only encounter or get asked a small fraction of them in one’s lifetime – therefore being good at quizzes will just depend on which questions, out of the trillions, you are specifically asked. Quiz shows tend to focus on what they consider as ‘general knowledge’ but their focus on particular areas is typically arbitrary or biased depending on their target audience (e.g. questions about the monarchs of England for quiz shows in England, rather than e.g. questions about the tribal monarchies of Africa).


Therefore, although a high general intelligence quotient (IQ) score or strong problem-solving skills will help a bit, they won’t be anywhere near as powerful as having specific knowledge and experience in the target subject or task (e.g. being good at Sudoku won’t necessarily help you to become better at performing any other task, even one that involves numbers, like accounting). Being good at one thing doesn’t necessarily mean being good at something else. So you’ve got to learn and practise the very things you want to become better at. ‘Brain training games’ are certainly not bad for the brain, but you may need to curb your fuzzy expectations regarding how you think they’ll help you become better at other tasks.


All these ‘brain training games’ are typically stripped-down, sanitised one-dimensional puzzles that don’t represent real-world puzzles, problems or tasks such as gardening, cooking, fixing something, playing a team sport, learning a new subject, language or instrument, creative writing, an arts project, looking after a child or volunteering, for example. It’s analogous to isolated bicep curls on a bench compared to compound exercises using free weights such as deadlifts, which more closely represent some real-world movements and tasks. Indeed, playing a full sport such as tennis, golf or basketball, or dancing (which is an excellent activity), is even more complex, multi-faceted and real-world, and therefore better for challenging and training the brain than some marketed ‘brain training games’. But ultimately, if you want to get good at a particular thing then there’s no real substitute for doing that very thing.


Being a great cyclist at elite level won’t necessarily mean one will be a great runner at elite level, unless one specifically trains hard at running too. And this is why there is no objectively definitive ‘ultimate athlete’ in the world – every top athlete is only a specialist in his/her own dedicated sport(s). Likewise, no academic is an expert in every field – every academic is only a specialist in his/her own dedicated field(s). No one becomes the best at everything merely by being the best at one or a few specific things.


There are also cases where a person is good at doing something in one context but not in other contexts (e.g. performing a task in private versus the same task in public), hence training in the specific context one cares about is often crucial too.


And therefore, if you want to test for a particular ability then there’s no better way than testing specifically for that very ability in its proper context (e.g. don’t test an interview candidate’s ability to remember strings of numbers in a quiet room if what you really want to know is whether they’re good at remembering instructions in a noisy environment). This all relates to scientific research too (e.g. just because something apparently works in mice or in a Petri dish, it doesn’t mean it’ll necessarily work in humans). Specific tests for specific conclusions are fine (e.g. testing track driving skill via driving on a track, or even via an extremely accurate and immersive simulator with full sensory feedback – though any variable that’s missing or inaccurate will proportionately make the test less reliable as an indicator of real-world performance).


Woof. (Going off topic, I have mixed feelings about mice or other animals being used in experiments for the benefit of humans – it’s a tough dilemma with no generalisable answers. Maybe I’ll think about this subject a bit more. For now, I’m adamant that they shouldn’t be used for testing cosmetics, but for potential life-saving treatments it remains a genuine dilemma.)



