
Post No.: 0368

 

Furrywisepuppy says:

 

This is what tends to happen when people make specific, date-specific (i.e. not vague or indefinite) predictions about atypical events one or more years ahead in a complex and chaotic domain such as economic forecasting – thousands of self-proclaimed experts in a field, or people who simply want to state their predictions publicly, each make their own prediction about some aspect of the future that’s technically difficult (chaotic) to predict.

 

…Now most of these people will later prove to have been barking up the wrong tree with their predictions, but they’ll keep schtum about their misses. (I guess this is still far better than a high-level government adviser – one who advocates so-called ‘superforecasting’ too – editing his own blog to make it appear as if he’d explicitly warned the world about the threat of coronaviruses in April 2019!) But a small handful of people will prove to be correct.

 

These few people will then be the ones who’ll agree to go on TV and under the media spotlight to shout about their predictions being correct. But because they were making long-range predictions in a chaotic domain, they really only got it right by chance. Step back and look at the bigger picture – one or two predictions being correct in reasonable detail, out of thousands of people making guesses, isn’t evidence of clairvoyance or skill. Rarer still is a person who gets two or three big, long-range, specific predictions correct – yet in the bigger picture again, this isn’t impossible through statistical chance alone when hundreds of guesses are being made each and every week.

 

There’s a self-serving bias in selecting which particular predictions, hypotheses or guesses we wish to personally remember or remind other people about, to give the impression that we’re more often correct than incorrect. It’s a form of confirmation bias. So an economist may have made dozens of predictions before, but his/her missed ones get conveniently forgotten (he/she may have even made multiple conflicting predictions about the same events to hedge his/her bets), yet he/she will always volunteer or agree to appear in public to talk (smugly!) about his/her apparently on-target predictions. Predictions that were only tentatively held at the time may be claimed, after they prove prescient, to have been strongly confident all along. Moreover, we’ll almost always be able to find at least one person, out of the thousands, who was correct, but it doesn’t necessarily mean that he/she had special skills or powers – this person will only prove to possess special clairvoyant skills if he/she can rack up lots of hits and keep a high hit-to-miss ratio.

 

If it’s a honed skill then they or we would make many such predictions and get many of them correct without getting lots of them wrong – otherwise it’d be like hitting 100 aces… but double-faulting 1,000 times too! (Note that failing to predict something that later materialised and affected us counts as a failed prediction too.) If we treat ‘declaring a prediction’ as the positive and ‘it came true’ as the ground truth, then we must take into account all the false positives (the declared predictions that turned out inaccurate, although few of us will want to disclose or remind others of our own inaccurate predictions), the false negatives (the undeclared predictions that would’ve been accurate, although few of us will refrain from saying, “I knew it” even though we never explicitly said ‘it’), the true negatives (the undeclared predictions that were inaccurate, of which we all make plenty), as well as the true positives (the declared predictions that were accurate, which we’ll wish to focus ourselves and everybody else on!)
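That tally can be sketched in code. A minimal Python sketch, using made-up records (all numbers hypothetical), of classifying every prediction by whether it was publicly declared and whether it came true – so the misses can’t be conveniently forgotten:

```python
# Hypothetical prediction log: (declared_publicly, came_true)
records = [
    (True, True),    # true positive: declared and accurate
    (True, False),   # false positive: declared but inaccurate
    (False, True),   # false negative: undeclared but accurate ("I knew it!")
    (False, False),  # true negative: undeclared and inaccurate
    (True, False),   # another declared miss...
    (True, True),    # ...and another declared hit
]

tp = sum(1 for declared, true in records if declared and true)
fp = sum(1 for declared, true in records if declared and not true)
fn = sum(1 for declared, true in records if not declared and true)
tn = sum(1 for declared, true in records if not declared and not true)

# The honest measure: declared hits vs declared misses, not hits alone.
hit_to_miss = tp / fp if fp else float("inf")
print(f"TP={tp} FP={fp} FN={fn} TN={tn}, hit-to-miss ratio={hit_to_miss:.2f}")
```

With a log like this, a forecaster who only ever mentions the two true positives looks infallible, while the full table shows an unremarkable 1.0 hit-to-miss ratio.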

 

So we mustn’t ignore all of their, or our own, missed predictions, even though we’re unlikely to hear them shouted about, because few of us ever want to volunteer or agree to appear in public to talk about the things we got wrong. (Members of parliament might find it harder to avoid being publicly interviewed about their errors of judgement than more private people, though.) It’s like a CV/résumé or social media profile – we want to show only our favourable side. But this presents a self-reporting and media-reporting bias.

 

Similarly, we’ve got to wonder whether those who volunteer or consent to be interviewed or followed for a documentary about what they do are representative of everyone in their line of work. Maybe they’re the acceptable face of their industry? Maybe those who do things the immoral or illegal way declined to be interviewed or followed, hence we’re only seeing a skewed and favourable side of their trade? These are some of the sorts of questions we must ask. Companies or organisations will typically either refuse to give an interview or give one on their own terms, with a carefully scripted and slick tour or presentation that shows the parts they want us to see and keeps off-limits what they don’t. We don’t get to see the full furry picture to form a fully-informed view. Their claims of protecting ‘legitimate trade secrets’ may be true or false.

 

Our intuitions are poor at questioning or considering what isn’t in front of us, so when a company shows us examples of their product passing a test, we can neglect to ask how many attempts failed that they’re not showing us (e.g. it could work 5 out of 5 times, or 5 out of 100 times, but we’re only shown the 5 times it worked).
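A tiny Python sketch (entirely hypothetical figures) of how that demo-reel trick distorts the apparent success rate:

```python
# What actually happened: 100 trials, only 5 passes (hypothetical numbers).
trials = [True] * 5 + [False] * 95

# What the slick presentation shows: only the passes.
shown = [result for result in trials if result]

true_rate = sum(trials) / len(trials)    # rate over ALL attempts
shown_rate = sum(shown) / len(shown)     # rate implied by the demo reel
print(f"true pass rate: {true_rate:.0%}, rate implied by the demo: {shown_rate:.0%}")
```

The same 5 successes support a 5% product or a 100% product – the missing denominator is the whole story.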

 

A footballer who scores far more penalties than he/she misses has true penalty-taking skill. A person, or a group of people in a profession, who only gets their specific predictions correct a small percentage of the time, however, has no true prediction skill at all – they’re just playing a numbers game and a biased-reporting game.

 

And of course the more vague and cryptic, or less specific and date-specific, a prediction is, the more easily one can interpret it – or reinterpret it with the benefit of hindsight after an event has occurred – so that it appears to have been correct. This is how Nostradamus-type predictions work – they’re worded intentionally very ambiguously so that they can be interpreted in many different ways to catch out people who’ll apply confirmation bias. The interpreter essentially does the work rather than the so-called prophet. So we need to firstly be able to identify which predictions are ambiguous, and then secondly not read too much into such vague, ambiguous or sometimes even self-contradictory information. Woof!

 

Whenever you hear lots of contradicting economic forecasts in the news, at least somebody is likely to be correct. And this is how the media will almost always be able to find and interview someone who got some major long-range prediction right. If the same person kept being interviewed for making the correct prediction every time then that person would have true prediction skills – but it’s almost always a different person each time.

 

Hence it’s indeed a statistical numbers game – when lots of people are making predictions, the chances of at least someone getting it right are high, but the chances are it’ll be a different person each time. Likewise, if an individual makes lots of gambles all of the time, chances are that he/she will win something eventually, but we must look at his/her overall hit-to-miss ratio over a large number of trials; not just his/her absolute number of hits or a ‘one win out of one’ record. In some domains, like long-range political forecasting, the results match exactly what we’d expect if it were all down to mere guessing and chance.

 

Lots of people will be playing the lottery this week, lots of different combinations of numbers will be played, and if one or a few of these people match the winning numbers and win or share the big prize, the media will swoop onto those one or few individuals for their stories. Similarly, lots of people will be making guesses about what’ll happen in politics or economics over the next few years, lots of different predictions will be made, and if one or a few of these people correctly predict the actual outcomes, the media will probably swoop onto those individuals for their stories; coupled with the way that those who made incorrect predictions would rather not publicly talk about their failures any more, this creates a reporting bias. So it’s a numbers game, and simply statistical chance that at least somebody will guess the correct outcome when so many different guesses by so many different people are being made. Some of these individuals may claim to have special skills or powers of insight, but unless they make a large number of predictions and get the vast majority of them right, they just got lucky.
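The numbers game can be made concrete. A minimal Python sketch, with an assumed (purely hypothetical) 1% per-person chance of a correct specific long-range guess, showing how quickly the odds of *somebody* being right climb with the number of independent guessers:

```python
# Assumed chance that any one person's specific long-range guess is right
# (a hypothetical figure for illustration only).
p_single = 0.01

for n in (10, 100, 1000):
    # P(at least one of n independent guessers is right)
    # = 1 - P(all n are wrong) = 1 - (1 - p)^n
    p_at_least_one = 1 - (1 - p_single) ** n
    print(f"{n} guessers -> {p_at_least_one:.1%} chance someone is right")
```

With a thousand guessers, someone being right is a near-certainty – yet each individual guesser is still almost certainly wrong.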

 

When millions of people play the lottery, chances are that one or two people are going to match all of the numbers – but it doesn’t mean these particular people had skill in predicting lottery numbers. A small number of people will get multiple long-range predictions correct, but once again this is within normal statistical odds – just like lightning does strike some people multiple times – though such people are exponentially rarer, exactly as we’d expect from the pure probabilities. (Post No.: 0264 investigated probabilities too.)

 

A winner might then give advice about ‘the secret of their success’ along the lines of, “Just trust your gut” or, “Anything can happen if you just believe in it”, and such advice may be trusted by others. But we can statistically check the success-to-failure ratio of people who simply follow such advice and see that we need so much more than just instincts or faith to be successful (e.g. we need an education, good ideas, bouncebackability and opportunities too). Yet people might instead be accused of lacking enough faith. And the problem is, if people don’t accept that luck has a say in things, a lot of injustices, like inequality, will not be addressed.

 

We’ve got to understand which domains are chaotic and by nature hard to predict. Economics is such a field because there are lots of interconnected variables (e.g. something happening on the other side of the world can affect us) and one seemingly insignificant thing can result in something significant down the line in an unforeseeable way. Also, in this field, predictions create a feedback loop – if an economist makes a prediction, the markets react to it, whether it would’ve proven correct or not! The reality it tried to predict changes because the prediction itself exists. Modern economies are thus complex, and any ‘expert’ who states a long-range forecast with great certainty is being irresponsible – and thereby proving he/she is not an expert in this field at all. (It’s like someone claiming to be able to confidently predict, with a greater than 1-in-6 chance, what number a fair 6-sided die will roll – he/she will prove to not be an expert on dice at all! Note that dice aren’t technically random but chaotic, since their motion follows deterministic laws; although for all intents and purposes, they can be considered random.)
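The die analogy can be checked with basic probability. A Python sketch, using only the standard library (the 60-guess figure is an arbitrary illustration), of how many ‘hits’ a pure guesser racks up by chance alone:

```python
from math import comb

def p_at_least(k, n, p):
    """Exact P(X >= k) for a binomial(n, p): sum the upper tail."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Guessing a fair 6-sided die 60 times: ~10 hits expected purely by chance.
p_guess = 1 / 6
print(f"P(10+ hits in 60 guesses) = {p_at_least(10, 60, p_guess):.3f}")  # common
print(f"P(20+ hits in 60 guesses) = {p_at_least(20, 60, p_guess):.4f}")  # rare
```

Ten or so correct calls out of sixty is exactly what guessing delivers, so a forecaster boasting that record has demonstrated nothing; only a hit rate far above the chance baseline, sustained over many trials, would suggest real skill.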

 

Economists should start to be more scientific and state their predictions along with a probability or confidence level (e.g. ‘there’s a 1-in-3 chance x will happen’). Unfortunately, most laypeople tend to prefer to trust experts or ‘experts’ who sound confident rather than uncertain – much like campaigning politicians.

 

Woof. We all need to be comfortable with handling and accepting different levels of uncertainty because that’s how much of the world, in practice, is. Over-confidence has led to many errors.

 

Comment on this post by replying to this tweet:

 

Furrywisepuppy23

 
