In a famous study, Philip Tetlock (I just finished his book Superforecasting) found that many experts struggle to outperform simple models, or even chance. This led to the well-known “dart-throwing monkey” analogy, suggesting that some forecasts are no better than random guessing. But does that mean all predictions are unreliable? Not quite.
Take software development as an example. Experienced programmers can estimate short tasks fairly accurately, but when it comes to projects spanning months or more, even the best developers fall back on guesstimates. My old boss had a rule: month-plus forecasts don't get approved, because they will take double the time. Period. It wasn't pessimism, it was experience. Over time, we saw that breaking large tasks into smaller, well-defined chunks improved predictability.
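To make that intuition concrete, here's a toy Python simulation (my own illustration, with made-up numbers, not my old boss's rulebook). It compares one big 60-day estimate against the same work split into thirty 2-day chunks; if the chunk-level misses are independent, they partly cancel, and the total becomes far more predictable:

```python
import random

# Toy simulation (illustrative only): compare estimating one large task
# versus the same work split into many small, independently estimated chunks.
# Each estimate is assumed to be off by a random factor between 0.5x and 1.5x.

random.seed(42)
TRIALS = 10_000

def average_relative_miss(n_chunks: int, chunk_size_days: float) -> float:
    """Average |actual - estimate| / estimate over many simulated projects."""
    total_estimate = n_chunks * chunk_size_days
    misses = []
    for _ in range(TRIALS):
        # Each chunk's actual duration is its estimate times a noisy factor.
        actual = sum(chunk_size_days * random.uniform(0.5, 1.5) for _ in range(n_chunks))
        misses.append(abs(actual - total_estimate) / total_estimate)
    return sum(misses) / TRIALS

# One 60-day estimate vs. thirty 2-day chunks covering the same work.
print(f"single big estimate : {average_relative_miss(1, 60):.1%} average miss")
print(f"thirty small chunks : {average_relative_miss(30, 2):.1%} average miss")
```

With these assumed error ranges, the single big estimate misses by roughly a quarter on average, while the decomposed one misses by only a few percent.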
Weather forecasting follows a similar pattern. Thanks to advances in meteorology and computing, short-term forecasts are quite reliable out to roughly a week, and still useful up to about 10 days. Beyond two weeks, however, accuracy drops sharply as small uncertainties in the initial conditions compound. The same challenge applies to economics, politics, and business: short-term trends can often be anticipated, but long-term forecasts become increasingly uncertain due to unpredictable events.
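As a rough illustration of that compounding (a back-of-the-envelope sketch, not a real weather model), assume a small error in today's measurements doubles every few days:

```python
# Illustrative sketch (not a real weather model): if a tiny error in the initial
# conditions roughly doubles every few days, useful skill fades past ~2 weeks.
DOUBLING_TIME_DAYS = 2.5   # assumed error-doubling time, purely for illustration
initial_error = 0.01       # 1% uncertainty in today's measurements

for day in (1, 3, 7, 10, 14, 21):
    error = initial_error * 2 ** (day / DOUBLING_TIME_DAYS)
    print(f"day {day:2d}: uncertainty ~ {min(error, 1.0):.0%}")
```

With those assumed numbers the forecast is still useful at day 7, shaky at day 10, and essentially noise by week three, which matches the general two-week intuition.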
Forecasting works within defined boundaries, even before we account for human biases and the curse of knowledge. Sometimes these limits are time-related; other times, they stem from gaps in knowledge or data. One field where these boundaries are particularly critical? Construction project cost estimation.
Large-scale projects — whether infrastructure, commercial developments, or industrial facilities — are notoriously difficult to estimate accurately. Why? Because they involve layers of complexity: fluctuating material prices, unexpected site conditions, regulatory changes, supply chain disruptions, and even geopolitical events. While experienced estimators and risk experts use models to predict costs, there’s always a margin of error, and ignoring it can be costly.
Take the case of megaprojects like high-speed rail networks or Olympic venues. Initial cost estimates are often far lower than final expenditures. The Sydney Opera House, for example, was initially budgeted at $7 million but ended up costing $102 million, more than a fourteen-fold increase. Many of these overruns result from optimism bias, where planners underestimate risks and assume best-case scenarios.
Project cost forecasting has to integrate weather conditions, contractor performance, price fluctuations, supply chain stability, and countless other moving parts. With so many complex factors at play, how can we make predictions that are accurate enough to act on?
To navigate these challenges, cost forecasters lean on strategies like breaking projects into smaller, better-defined work packages; building explicit contingency margins rather than pretending an estimate is a single exact number; and checking new estimates against the actual outcomes of similar past projects to counter optimism bias.
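As a concrete (and entirely hypothetical) sketch of the contingency idea, here's a minimal Monte Carlo estimate in Python. The line items, their ranges, and the assumed 20% chance of a price or regulatory shock are invented for illustration; the point is that a P80 figure makes the margin of error explicit instead of hiding it:

```python
import random

# Minimal Monte Carlo sketch of probabilistic cost estimation (toy example;
# all figures are hypothetical). Each cost item gets a low/likely/high range
# instead of a single number, and contingency is read off the distribution.

random.seed(7)
TRIALS = 20_000

# (low, most likely, high) in millions -- hypothetical figures
line_items = {
    "groundworks":    (4.0, 5.0, 8.0),
    "structure":      (10.0, 12.0, 18.0),
    "fit-out":        (6.0, 7.0, 11.0),
    "external works": (2.0, 2.5, 4.5),
}

totals = []
for _ in range(TRIALS):
    total = sum(random.triangular(lo, hi, likely) for lo, likely, hi in line_items.values())
    # Assumed 1-in-5 chance of a price/regulatory shock adding 10-25%.
    if random.random() < 0.20:
        total *= random.uniform(1.10, 1.25)
    totals.append(total)

totals.sort()
base = sum(likely for _, likely, _ in line_items.values())
p50 = totals[int(0.50 * TRIALS)]
p80 = totals[int(0.80 * TRIALS)]
print(f"base (most likely) : {base:6.1f}M")
print(f"P50 estimate       : {p50:6.1f}M")
print(f"P80 estimate       : {p80:6.1f}M  -> contingency ~ {p80 - base:4.1f}M")
```

The gap between the most-likely total and the P80 figure is the contingency you would hold, and it exists precisely because of the uncertainties listed above.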
At the core of all this is a simple but essential mindset: No forecast is perfect, but understanding its limits makes it useful. The key is to trust expert predictions within their known boundaries — while staying agile enough to adapt when reality takes an unexpected turn.
I think the boundaries of cost estimation are a super interesting area to explore. What do you think? :)