
Post No.: 0478


Furrywisepuppy says:


Many investors and businesses are more interested in hearing the forecasts they want to hear than in understanding whether such forecasts are dependable – hence all those 2, 3 or 5-year turnover and profit projections. In Post No.: 0368, I implied that making long-range (longer than a year into the future) predictions in complex domains like economics or politics isn’t a skill that can be honed like scoring penalty kicks in football, because you’d expect someone who’s truly skilled at something to be able to consistently reproduce successful outcomes regarding what they’re skilled at. You’d want a high hit-to-miss ratio.


Nevertheless, some people are better than others at making forecasts up to a year ahead. Psychologist Philip Tetlock and journalist Dan Gardner searched for such people and dubbed them ‘superforecasters’. Although they still produce a lot of misses – being only modestly better than chance, but consistently so – they understand the world better and are able and willing to take into account all of the evidence and data they can get hold of to shape their forecasts. The writers concluded that it’s about the way they think, and such things can be learned and practised, like any other skill…


Be open-minded. Foster a ‘growth mindset’. Be actively curious in wanting to learn about lots of different things, and enjoy mental challenges. Yet even if you know a lot, remain eternally humble about the things you know you don’t know, as well as the things you don’t even know you don’t know – for we don’t always know what the full possibility space is. For example, we know the full possibility space of a 6-sided die roll because it’ll land on a 1, 2, 3, 4, 5 or 6 and realistically nothing else. But when we don’t know all the possible outcomes of something – when we don’t even know what ‘100%’ of it covers – then it’s harder to say what a ‘68%’ chance of it means either.


Be able to take on multiple, diverse perspectives – the smaller and, especially, the bigger picture; the short and, especially, the long term. We’re generally content with our own beliefs and therefore see no reason to entertain alternative views, but actively seek and consider a diversity of other people’s views, and synthesise them into your own as appropriate. (If you want to know something like the market price of a product, then aggregating as many different views on it as you can will logically get you closer to that price, because a market price is precisely the aggregate view of what something is worth.)
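As a toy sketch of this aggregation logic (all figures below are invented for illustration), averaging a set of independent valuations gives a combined estimate that tends to sit nearer the true market price than most individual guesses do:

```python
# Toy illustration (all figures invented): averaging diverse, independent
# estimates of a product's value. The mean of the crowd tends to land
# nearer the market price than most individual guesses do.
estimates = [80.0, 120.0, 95.0, 150.0, 70.0, 105.0]  # six people's valuations

aggregate = sum(estimates) / len(estimates)  # simple equal-weight average
print(f"Aggregate estimate: {aggregate:.2f}")
```

A plain average is the simplest possible aggregation; weighting views by each forecaster’s track record is a natural refinement.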


Don’t be quick to accept a conclusion just because it’s the first one you’ve considered that sounds plausible. This is a typical trap that we all fall into because we’re cognitively lazy. Understand that things that happen didn’t have to happen, in the sense that things caused them to happen, and if those causes were different, a different result could’ve occurred. It’s comprehending the interconnectedness of the modern world and all the different variables that are complexly at play. And understand that all forecasts should come with varying, granular degrees of certainty/uncertainty, since this reality is complex. So, although they’re only going to be estimates, commit to specific numerical predictions (e.g. a ‘21%’ chance rather than a vague assertion like a ‘low’ chance, which can, with the benefit of hindsight, be reinterpreted as anything from 1-49%). We also cannot make calculations, like expected value calculations, without such specific numbers.
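To illustrate why a committed number matters (every figure here is invented), an expected value calculation simply cannot be run on a ‘low’ chance – it needs something like ‘21%’:

```python
# Sketch (figures invented): an expected-value calculation only works
# with a specific numerical probability, not a vague label like "low".
p_success = 0.21              # a committed "21% chance"
payoff_if_success = 500_000   # hypothetical upside if it comes off
cost = 120_000                # hypothetical stake, lost on failure

# EV = (probability of success x payoff) - (probability of failure x cost)
expected_value = p_success * payoff_if_success - (1 - p_success) * cost
print(f"Expected value: {expected_value:,.0f}")
```

With a vague ‘low’ chance, anything from 1% to 49% could be plugged in, and the calculation would swing from a heavy loss to a handsome gain.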


Beliefs are only ever provisional and are there to be tested – there should be no sacred beliefs to be permanently wedded to. So be flexible, and be ready and able to refine or update your beliefs as frequently as the data requires – being careful not to over-react to irrelevant information or under-react to information that disconfirms your current beliefs. Question things, especially if you think there’s a better way. Woof.
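Updating beliefs in proportion to the strength of new evidence can be made precise with Bayes’ rule. A toy sketch, with every probability invented for illustration:

```python
# Toy Bayesian update (all numbers invented): revise a belief in
# proportion to how strongly the new evidence supports it.
prior = 0.30                 # initial probability the hypothesis is true
p_evidence_if_true = 0.80    # chance of seeing this evidence if it's true
p_evidence_if_false = 0.20   # chance of seeing it anyway if it's false

# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
posterior = (p_evidence_if_true * prior) / (
    p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
)
print(f"Updated belief: {posterior:.2f}")
```

Note that the belief moves substantially, but not all the way to certainty – one piece of supportive evidence shouldn’t make anyone wedded to a conclusion.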


Be comfortable with numbers and statistics and utilise them where available rather than relying on your hunches or feelings, which are liable to cognitive and affective biases. Try to learn about and recognise the biases that may be affecting your thinking. Slow your thinking down and engage your more critical ‘system two’ rather than your more emotional ‘system one’. Use mathematical models – multiple models if possible – to assist your decisions, as long as the data relied upon is accurate and representative, and you don’t over-extrapolate any patterns.


Being hard-working is always a prerequisite if you want to be proficient at anything. Embrace timely and constructive feedback from others too. Being self-critical is a facet of critical thinking. The resilience and furry determination to pick yourself up again or carry on is also required.


Working in a team is overall beneficial but it has to be conducted right. Sharing and aggregating information tends to result in more accurate forecasts, but some members may socially loaf (not pull their own weight) and the team may fall into groupthink (blindly following the rest of one’s group and thus negating any diversity of views). Some may lack enough social intelligence and let their egos dominate discussions. Others may lack enough confidence to disclose what they really think. But teamwork skills can be trained and teams can be managed. The aim is to strike a balance between competition and collaboration – the competition to fight for one’s own views yet the collaboration to listen to and incorporate other people’s views too. It’s about constructive criticism.


We also crucially need to be able to ask good questions – we need to be good at asking what sorts of things to forecast in the first place. For if we fail to even consider making a forecast about something that’ll affect us (e.g. the chance that Earth will be overtaken by a multi-dimensional empire, like the Combine… and then something like that happened!) then that would count as a failed prediction too, even though we might claim that we didn’t make a specific prediction about it and therefore it shouldn’t count. It’d be like trying to argue, “I didn’t state any odds for whether we’d be living under a pandemic lockdown in 2020 or not, so you can’t say I was wrong.” But this will count as a failed forecast – or worse, because we’ll have also failed to forecast the need for that forecast(!) Things that won’t affect us personally or greatly (e.g. who’ll win tomorrow’s snooker match) aren’t relevant, but things that do or will affect us are.


A good question is relevant to the world we’re interested in, paints a clear and specific potential scenario that could happen in the future, and provides a clear and specific timescale for when this would happen. A vague question will provide less useful answers (e.g. the chance of a massive earthquake happening, anywhere in the world, at any time in the next 25 years). It also tends to be most practical to ask those questions that are neither too easy nor too difficult to answer (though don’t neglect the other types of questions altogether) because this is where we can get the most out of our efforts. Break large questions into smaller, easier questions, which, when aggregated and accounted for, can help you to answer the larger question better.
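A sketch of breaking a large question into smaller ones (all probabilities invented): if a forecast scenario requires several stages to all come off, and the stages are roughly independent, then separate estimates for each stage can be multiplied together:

```python
# Sketch (probabilities invented): decompose "will the product launch
# succeed within a year?" into smaller sub-questions, estimate each,
# then combine. Success here requires every stage, so the probabilities
# multiply -- assuming the stages are roughly independent.
p_prototype_works = 0.9   # sub-question 1: does the prototype work?
p_funding_secured = 0.7   # sub-question 2: is funding secured?
p_ships_on_time = 0.8     # sub-question 3: does it ship on time?

p_launch_succeeds = p_prototype_works * p_funding_secured * p_ships_on_time
print(f"Combined estimate: {p_launch_succeeds:.3f}")
```

Notice how three individually optimistic-sounding stages combine into a much more sobering overall figure – which is exactly the insight the decomposition buys you.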


Note that those who are good at asking the right questions and those who are good at answering them might be different people.


Keep a record of all of your forecasts and then keep track of their outcomes. How will you truly know how well you’re doing if you don’t record such data? Howl you know whether your forecasts are getting better or not? Our memories are too biased to rely on. Our biased memories and (re)interpretations of what we said in the past will result in us alleging that we ‘almost called it right’, that ‘we were correct but the timing was off’, or some other excuse(!)
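One standard way to score such a record – the scoring rule used in forecasting tournaments – is the Brier score: the mean squared difference between each stated probability and what actually happened. Lower is better; a perfect record scores 0.0, and always saying ‘50%’ scores 0.25. The forecasts below are invented examples:

```python
# Scoring a forecast track record with the Brier score: the mean squared
# difference between each stated probability and the actual outcome
# (1 = the event occurred, 0 = it didn't). Lower is better.
# The (probability, outcome) records below are invented examples.
records = [
    (0.80, 1),  # forecast 80%, event happened
    (0.30, 0),  # forecast 30%, event didn't happen
    (0.60, 0),  # forecast 60%, event didn't happen
    (0.90, 1),  # forecast 90%, event happened
]

brier = sum((p - outcome) ** 2 for p, outcome in records) / len(records)
print(f"Brier score: {brier:.3f}")
```

A written record scored like this leaves no room for the biased memory’s ‘I almost called it right’ defence.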


Overall, improving your forecasting abilities involves gathering evidence from as many different sources as you can, thinking with probabilities rather than certainties, working in trained and managed teams, recording all of your forecasts and their results, and being willing to admit to your mistakes and change course when called for.


Most of all, develop a mindset that embraces perpetual learning. It’s like a form of kaizen, or the desire for continual improvement. This is probably the single most important attribute. It essentially emulates how the scientific community behaves as a whole, which is about aggregating and critiquing everyone’s research to reach an overall consensus – one that’s open to updating as new evidence accrues. Something may be pretty much settled yet still have room for improvement or change.


A true ‘superforecaster’ would therefore tell you that prediction is difficult! They’re comfortable with saying, “I don’t know enough” but will then go away to try to learn more. They are, most of all, humble. This can lead to sitting on the fence, though that’s better than being narrow-minded, egotistical and thinking that we know with absolute certainty what’s going to happen.


However, the media and public prefer to present and hear from pundits who can tell us unequivocal, confident, coherent and maybe controversial stories, and it’s these kinds of people who’ll become well-known too. We have a profound desire to forecast the future, and this demand is met by a supply of pundits and supposed fortune tellers and prophets, who’ll proffer their predictions for fame and/or money. They’re inaccurate most of the time precisely because of their hubris, yet they’re famous ‘experts’ because they’re skilled at telling tight, simple, clear and compelling stories that grab attention. Confidence and correctness aren’t correlated.


Yet shall we act upon something that we think, say, has a 43% chance of happening? And how? We must have enough confidence to be decisive or we’ll fail to do anything at all. Good leadership therefore requires both thinking and doing. Humility should make one think carefully about one’s actions and update one’s strategies for even the best-laid plans need to be continually reviewed based on the changing circumstances. And confidence should give one the strength to act upon one’s forecasts. What’s the value of guessing right but having done nothing about it?


No one’s ever going to be a perfect thinker because cognitive biases are a part of being human – a perfectly rational, unemotional and unbiased human will, well, not appear very human at all. (Not that algorithms or artificial intelligences can’t/won’t have biases too.) Also, some events are incredibly improbable and thus hard to predict due to their rarity, yet have a huge impact on the world and so, with the benefit of hindsight, appear like they should’ve been easily predicted (so-called ‘black swan events’). Some people in the world did anticipate the threat of viral pandemics, for there was a clear history of such events, but not precisely when or the full nature of the current virus. Evidently, not enough people, at least outside the field of studying infectious diseases, anticipated COVID-19. (And for many of those who claimed they had predicted something like it – again, what’s the value of guessing right but having done nothing about it?)


Nevertheless, there’s enough we can try to forecast. Forecasts of events one or more years down the line are notoriously difficult for anyone or any team to make, even with the help of computer models. And there are opportunity costs in a finite-resource world, hence we cannot fully prepare for every possible major global event ‘just in case’ it might happen (e.g. by stockpiling everything). But there are many useful things to try to forecast within a year. And sometimes just being in the right ballpark – like trying to at least estimate a number to the right order of magnitude using a ‘back-of-the-envelope’ calculation – is useful enough in the real world.
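A classic example of such a ‘back-of-the-envelope’ (Fermi) estimate: how many piano tuners might serve a city of about 10 million people? Every figure below is a rough assumption; the aim is only to land in the right order of magnitude, not to get an exact count:

```python
import math

# Classic Fermi / back-of-the-envelope estimate: piano tuners in a city
# of ~10 million. Every figure is a rough assumption; we only want the
# right order of magnitude, not an exact count.
population = 10_000_000
people_per_household = 2.5
households_with_piano = 1 / 20      # assume 1 in 20 households owns one
tunings_per_piano_per_year = 1      # assume one tuning a year
tunings_per_tuner_per_day = 4       # assume four jobs a day
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tuner_capacity

print(f"Roughly {tuners:.0f} tuners, i.e. order of magnitude "
      f"10^{round(math.log10(tuners))}")
```

Even if several of the assumptions are off by a factor of two, the final answer will still usually land within the right order of magnitude, which is often all a real-world decision needs.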





