Most highway projects are doomed to miss their budget — before they even begin.
Early cost estimates set the tone for funding, political will, and public support. Yet across the U.S., these estimates are often wildly inaccurate. A study from Montana DOT found that final construction costs were, on average, 46% higher than projected during programming. And it’s not just random error; it’s structural. Estimates skew low partly through “optimism bias,” the well-documented tendency to assume the best case, and partly because many agencies knowingly lowball to keep projects alive in the early phases.
The result? Nine out of ten major infrastructure projects experience cost overruns. That means states spend months budgeting for one price, only to find themselves scrambling to fill gaps, delay phases, or cut scope. In the worst cases, underestimated projects never recover — funding gets pulled, and roads or bridges stay broken.
This problem isn’t new. But what’s surprising is how little consistency exists in how states actually make these early estimates.
The survey at the heart of this research asked state DOTs a simple question: How do you build your earliest cost estimates? The responses revealed just how fragmented and outdated the current practices are.
Only 36% of state DOTs responded, and among those, there was no consensus approach. The most commonly used tool? Excel. Most agencies rely on basic spreadsheets populated with historical unit-cost averages (per lane-mile, per square foot, or per foot of bridge length), adjusted manually using engineering judgment.
There is no federally prescribed method for how to perform these estimates. And while some agencies factor in inflation or local market shifts, most use flat contingency percentages — ranging from 5% to 45% — without any formal risk model. Few offer structured training. Even fewer provide systematized tools across regions.
This patchwork system makes it hard to benchmark, hard to improve, and dangerously easy to misjudge the true cost of large projects — especially when construction doesn’t begin for another 2–5 years.
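To see why that lag matters, consider what compound inflation alone does to a programming-phase number. A minimal sketch, assuming an illustrative 4% annual escalation rate and a $50M base (neither figure is from the survey):

```python
def year_of_expenditure(base_cost: float, annual_inflation: float, years: int) -> float:
    """Escalate a current-year estimate to its expected year of expenditure
    using simple compound inflation (a common back-of-envelope adjustment)."""
    return base_cost * (1 + annual_inflation) ** years

# A $50M estimate, built 4 years later, under an assumed 4% annual escalation:
print(f"${year_of_expenditure(50_000_000, 0.04, 4):,.0f}")  # $58,492,928
```

That’s nearly a 17% increase from inflation alone, before any scope growth or risk event, which is roughly the size of many agencies’ entire contingency.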
If early cost estimates are unreliable, the natural question is: What’s being done to account for risk? According to the survey results, the answer varies wildly — and often, not enough.
One of the clearest findings was the lack of a standardized approach to estimating contingency costs. Most state DOTs apply contingencies as a flat percentage of the base cost estimate. These percentages typically range from 5% to 45%, depending on the perceived uncertainty and the project type. But crucially, very few agencies link these percentages to formal risk assessments or historical variability. It’s engineering judgment — not data — driving most decisions.
“The survey showed different approaches to estimating contingency costs in preliminary estimates… a percentage‐based contingency calculation was included in the initial scoping estimates.”
This kind of approach leads to both overconfidence and inconsistency. A flat 20% contingency might be excessive for a straightforward resurfacing job, but dangerously low for a complex bridge with permitting or utility relocation risks. Yet the toolkits used by most DOTs can’t distinguish between these scenarios.
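To make the mismatch concrete, here is a toy comparison of a flat contingency against a crude risk-tiered one. The projects, dollar figures, and tier percentages are all hypothetical; a real implementation would derive tiers from historical cost-growth data or a formal risk register.

```python
# Hypothetical risk tiers; a data-driven version would derive these from
# historical cost-growth distributions rather than fixed assumptions.
RISK_TIERS = {"low": 0.07, "medium": 0.20, "high": 0.40}

def with_flat_contingency(base: float, pct: float = 0.20) -> float:
    """The status quo: one percentage applied to every project."""
    return base * (1 + pct)

def with_tiered_contingency(base: float, risk: str) -> float:
    """A first step toward risk-informed pricing."""
    return base * (1 + RISK_TIERS[risk])

for name, base, risk in [("resurfacing", 5_000_000, "low"),
                         ("bridge", 80_000_000, "high")]:
    print(f"{name}: flat ${with_flat_contingency(base):,.0f} "
          f"vs tiered ${with_tiered_contingency(base, risk):,.0f}")
```

In this toy example, the flat rate overfunds the resurfacing job by $650K and underfunds the bridge by $16M; applied across a whole portfolio, that distortion compounds.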
Worse still, most estimates are presented as single-point values, not probability-based ranges. That creates a false sense of certainty among planners and decision-makers.
“Preliminary estimates are typically point estimates with a single cost value… [but] this may lead to a false sense of confidence as it does not indicate a confidence measure nor the potential for cost growth.”
In contrast, the Federal Highway Administration (FHWA) and some leading DOTs — like Washington State — recommend stochastic or range-based methods that account for variability. These techniques provide an estimated cost range with associated confidence intervals, explicitly communicating the likelihood of cost overrun or underrun. But adoption remains rare.
For instance, a 2005 initiative by Washington DOT introduced risk-based estimating for mega-projects over $100 million, using Monte Carlo simulations and expert scoring. Their method has since become a national reference point for risk-informed cost planning. Similarly, studies have shown that bootstrap sampling techniques can improve accuracy by generating empirical distributions from historical bid data — yet these remain the exception, not the rule.
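Neither WSDOT’s full process nor the published bootstrap studies are reproduced here, but the core resampling idea fits in a few lines. A minimal sketch, assuming a small set of hypothetical unit bid prices (in practice these would come from a DOT’s bid tabulation database):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical unit prices ($ per lane-mile) for comparable projects.
historical_unit_costs = np.array([2.1e6, 2.4e6, 1.9e6, 2.8e6, 2.2e6,
                                  3.1e6, 2.0e6, 2.6e6, 2.3e6, 2.9e6])
lane_miles = 12.0

# Bootstrap: resample the historical data with replacement many times to
# build an empirical distribution of the average unit cost.
n_sims = 10_000
samples = rng.choice(historical_unit_costs,
                     size=(n_sims, len(historical_unit_costs)), replace=True)
simulated_totals = samples.mean(axis=1) * lane_miles

# Report a range with percentiles instead of a single point value.
p10, p50, p90 = np.percentile(simulated_totals, [10, 50, 90])
print(f"P10 ${p10/1e6:.1f}M | P50 ${p50/1e6:.1f}M | P90 ${p90/1e6:.1f}M")
```

Reporting a P10-to-P90 spread instead of a single mean is precisely the “confidence measure” the survey quote above says point estimates lack.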
Meanwhile, many agencies continue to lack formal policies for adjusting costs based on market conditions, inflation, or scope evolution. Only 21 states are known to compute their own Highway Construction Cost Indices (HCCIs), and even those often rely on limited bid data — sometimes as little as 14% to 50% of total bid prices, which undermines reliability.
The takeaway? Contingency costs are widely recognized as important — but they’re still handled with guesswork more often than with data. As long as risk is priced in through intuition alone, early estimates will continue to mislead more than they inform.
When estimates go wrong, the consequences ripple far beyond the budget spreadsheet.
A preliminary underestimate doesn’t just mean the project runs over budget; it can derail the entire development process. Projects that appear affordable in early planning phases can face cancellation or re-scoping once true costs emerge. If the updated cost erodes the original benefit-cost ratio, the project might lose funding or political backing altogether.
According to one study cited in the report, 9 out of 10 infrastructure projects experience cost overruns — a pattern closely tied to the consistent underestimation of early costs due to optimism bias. In Montana, an analysis of state DOT data showed final construction costs were 46% higher than original programming estimates.
These inaccuracies come with steep opportunity costs. When a project absorbs more money than expected, it drains funds that could have supported other priorities. And since public trust in infrastructure spending is already fragile, frequent overruns can create reputational damage, reduce public support, and complicate future bond measures or federal grant applications.
On the flip side, overestimating costs can be just as harmful. If a project appears too expensive in early estimates, it might never make it into the queue, even if it’s critically needed. Flat contingencies and conservative buffer margins — when applied across a portfolio — can quietly shrink a DOT’s capital ambitions.
The bottom line: inaccurate cost estimates distort the decision-making process. They can delay or kill high-impact projects, mislead the public, and waste limited federal and state dollars. And the farther upstream the error occurs, the harder it becomes to correct.
Despite the gaps in current practice, some states are beginning to rethink how early cost estimation is done.
A few DOTs have moved toward risk-based, statistical, or range-based estimating techniques, especially for large or high-uncertainty projects. Washington State’s adoption of Monte Carlo simulation for mega-projects was one of the earliest examples of integrating probabilistic thinking into public-sector infrastructure planning. Montana, Kentucky, and North Carolina have all piloted data-driven regression models or neural networks to improve early estimates.
Some DOTs also compute their own Highway Construction Cost Indices (HCCIs) to adjust historical bid data for inflation and market conditions. But here, too, there are limitations: many agencies use only a fraction of available bid data, and few disaggregate by region, project type, or size. That means even when adjustments are made, they may not reflect real-world pricing.
“Currently, State DOTs use as little as 14% to below 50% of the total construction bid prices to calculate their state-level HCCI… most methods are not sophisticated enough to assure that HCCI reflects true market conditions.”
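For readers unfamiliar with how such an index is built, here is a minimal sketch of a Fisher price index computed over pay items, the formula family behind FHWA’s National Highway Construction Cost Index. The pay items, prices, and quantities below are hypothetical.

```python
def fisher_index(p0, p1, q0, q1):
    """Fisher price index between two periods over a set of pay items.
    p = unit bid prices, q = quantities; Fisher = sqrt(Laspeyres * Paasche)."""
    laspeyres = sum(a * b for a, b in zip(p1, q0)) / sum(a * b for a, b in zip(p0, q0))
    paasche = sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q1))
    return (laspeyres * paasche) ** 0.5

# Hypothetical pay items: asphalt ($/ton), excavation ($/cy), steel ($/lb)
p_2022 = [95.0, 12.0, 1.10]
p_2023 = [112.0, 13.5, 1.25]
q_2022 = [40_000, 150_000, 900_000]
q_2023 = [38_000, 160_000, 1_000_000]

print(f"Index change: {fisher_index(p_2022, p_2023, q_2022, q_2023):.3f}")  # ~1.157
```

The reliability problem in the quote enters through the inputs, not the formula: if only a small share of bid prices feeds the price and quantity vectors, even a sound index will drift from true market conditions.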
But there’s a quiet shift underway.
Some states are starting to develop customizable cost estimating tools that combine historical bid data with inflation modeling, risk factors, and project-specific characteristics. These tools promise to move DOTs away from Excel templates and toward systems that adapt and learn over time.
One of the more promising efforts? South Carolina. While this post has focused on national trends, South Carolina DOT is taking a different path. They’re building a tool designed to generate probabilistic cost ranges from just a few inputs, backed by real bid data and inflation-adjusted curves. It’s one of the clearest examples of a DOT trying to modernize how cost estimates are generated from day one.
But that story deserves a post of its own.
What’s next? The tools to fix this are already here — and South Carolina’s work is proof. In a follow-up post, we’ll explore how they built a cost estimation model that blends historical data, inflation tracking, and probabilistic forecasting into a practical, DOT-ready tool. If you’re serious about improving how infrastructure gets funded, that’s where the story goes next.