The Uniqueness Trap

Authors: Bent Flyvbjerg, Alexander Budzier, M.D. Christodoulou and M. Zottoli

Uniqueness bias is what psychologists call the tendency individuals have to think they’re more unusual than they actually are. In the field of project management, it manifests itself as the belief that projects are one of a kind. This is partly a conscious choice, stemming from the view that when something is presented as unique and new, it’s more likely to attract support and funding. But the bias is also deeply entrenched in the project management profession and the literature about it. The U.S.-based Project Management Institute, for example, defines a project as “a temporary endeavor undertaken to create a unique product, service, or result.” The U.K.-based Association for Project Management defines a project similarly, as a “unique, transient endeavor.” The very first study of projects as a management problem identified their finite duration as a “unique aspect of the project manager’s job.” And in his classic book Development Projects Observed, Albert O. Hirschman concluded that each project he had studied represented “a unique constellation of experiences and consequences.”

To find out how distinctive most projects actually are, we analyzed data on more than 1,300 IT projects, with budgets ranging from $77,000 to $4.5 billion, in 34 companies. We then took a deep dive into 219 of them, located in North America, Europe, the Middle East, Africa, Asia, and Australasia, to see whether managers believed that their projects were unique and how that perception affected project performance.

What we found was sobering. Our analysis suggested that managers are indeed highly prone to believing that their projects are one of a kind, even though few, if any, actually are. This causes them to think they have nothing to learn from other projects. Most important, it leads them to underestimate risk and overestimate opportunity and thus make poor decisions. Specifically, the more distinctive managers consider a project to be, the more likely it is to exceed its budget, and the more likely the overrun is to be considerable. That led us to the conclusion that improving project performance has less to do with managing the activities involved and more to do with addressing how project managers make decisions.

In this article we’ll look first at the relationship between perceived uniqueness and performance outcomes and show how little grounding the perception of uniqueness has in reality. Then we’ll offer a theory about why uniqueness bias occurs and conclude with some advice on how managers can fight against it.


Idea in Brief

  • The Trap

Project planners and managers are primed to see their projects as one of a kind, especially those that are complex and new to them personally. In reality, however, few, if any, projects are unique.

  • Why It’s a Problem

When people assume that their projects have no precedent, they don’t look for lessons from other projects. As a result they make poor decisions that lead to significant cost and schedule overruns.

  • How to Avoid It

Before putting a project plan together, ask people in your company if they’ve seen anything like it before; if they haven’t, look externally for similar projects. If you can’t find any direct analogues, break the project down into components, which may be comparable across projects. Then use forecasting and risk assessment methods to reduce other biases that may undermine good decisions.


The Cost of Uniqueness Bias

To quantify the effect of perceived uniqueness on projects, we asked the managers of each of the 219 projects in our sample to indicate, on a scale of one to 10, how much they agreed with the statement “This project is unique, and therefore it is difficult to compare with other projects.” Twenty-seven percent gave their project a score of seven or higher.

Next we tested the association between perceived uniqueness and performance, which we assessed by measuring the benefits delivered and cost and schedule overruns. The results supported our hypothesis that project leaders’ view of projects as unique was correlated with underperformance. We found that a one-point increase on the 10-point scale was associated, on average, with a five-percentage-point increase in cost overruns. That meant that in projects receiving the highest rating—a 10—cost overruns were 45 percentage points higher, on average, than overruns in projects receiving the lowest rating (a one). Worryingly, in 37% of the projects rated a 10, the cost overrun was extreme—exceeding the budget by more than 75%.
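The arithmetic behind those figures can be checked with a back-of-the-envelope sketch. This is not the authors' regression model; it simply applies the reported association (roughly five percentage points of additional overrun per scale point) to illustrate how the 45-point gap between the highest and lowest ratings arises.

```python
# Illustrative only: applies the article's reported association, not the
# underlying statistical model fitted to the 219-project sample.
SLOPE_PP_PER_POINT = 5  # percentage points of extra cost overrun per scale point

def expected_extra_overrun_pp(score: int, baseline_score: int = 1) -> int:
    """Extra expected cost overrun (percentage points) relative to a project
    rated `baseline_score` on the 1-10 perceived-uniqueness scale."""
    return (score - baseline_score) * SLOPE_PP_PER_POINT

# A project rated 10 is nine points above a project rated 1,
# so 9 x 5 = 45 percentage points more overrun on average.
print(expected_extra_overrun_pp(10))  # 45
```

The 45-point figure thus follows from the nine-point spread between the top and bottom of the scale, not from a ten-point spread.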

Note that these conclusions are based on perceived uniqueness. As we discovered, perception didn't necessarily match reality.

Do Unique Projects Actually Exist?

The short answer is no. In fact, whenever we came across a project we thought was unique, it turned out not to be.

Here’s an example: In 2004 the top civil servant in charge of decommissioning nuclear power plants in Sweden needed a reliable estimate of how much that effort, which would take decades, would cost, as well as how expensive it would be to safely store nuclear waste, which would last centuries. The Swedish government was going to ask the nuclear industry to pay into a fund to cover the costs, and it needed to know how much to collect.

The Swedish official approached one of us, Bent Flyvbjerg, for advice. Bent didn’t think he could help. At the time he didn’t have any data on nuclear decommissioning. No other country had carried out such a program. (Decommissioning nuclear plants has become more common since then.) The project truly did seem unique. But the Swedish official had read an article Bent had written about the costs and cost risks for transportation infrastructure projects involving roads, bridges, tunnels, and rail lines. He proposed using Bent’s data as a “floor” and assuming that the real cost risks of nuclear decommissioning would be higher. The Swedish government could get the nuclear industry to start making payments based on the floor and then adjust the estimate and the payments as it learned more about decommissioning. Bent realized that he had fallen into the uniqueness trap by assuming that the manager of a project as unprecedented as nuclear decommissioning would have nothing to learn from other projects. He has never forgotten that lesson.

Were the managers in our sample of IT projects similarly mistaken? We looked at the 59 projects with a perceived uniqueness score of seven or higher and compared their functional scope, descriptions, and start dates against those of 6,219 other projects in another, larger database. We found that with all 59 projects, including those rated a nine or a 10, a similar project had, in fact, previously been executed in the same organization or the same industry. In other words, none of the projects could be considered unique. For example, five of the 59 projects were regulatory-compliance projects in banks. We established not only that each of the banks in question had completed similar regulatory efforts before but also that every other bank in its relevant jurisdiction was working to address the same type of regulation at the same time.

On that basis we concluded that many more projects are perceived as unique than actually are and that perceived and actual uniqueness are not correlated. We also found that perceived uniqueness is what matters to project performance, because when managers think there is nothing to learn from other endeavors, the lack of learning will hamper their projects.

How Uniqueness Bias Happens

Our study suggests that the bias is linked to certain project features. Perceived uniqueness was generally correlated with a project's complexity, its political sensitivity, its number of unknown variables, and the extent to which its requirements shifted. But none of those characteristics had a statistically significant effect on its own, which implied that they could not by themselves explain extreme cost overruns. Statistically, the uniqueness bias itself drove the overruns; despite the correlations, it was not rooted in a project's complexity, sensitivity, uncertainty, or shifting requirements.

So where did the bias come from? One strong possibility is that it resulted from the tendency to assume that what’s unique to you will be unique to everyone. For instance, California has never built a high-speed rail line before, so in that sense, the recent efforts to construct one between Los Angeles and San Francisco may be considered unique. But there are plenty of precedents outside California: Dozens of similar rail projects have been built around the world, with data and lessons learned that would be highly valuable to California for assessing costs, schedules, contracting relationships, procurement, revenues, and environmental impact.

Our research appears to confirm that people are more likely to believe that a project is unique if they have no personal experience of anything similar. Consider what happened with the chief information officer of one large global logistics company that participated in our study. When we debriefed the company about its results, the CIO spotted a project described by his managers as absolutely unique, scoring a 10 on our scale. When the CIO asked which project it was, he learned that it was the installation of a standard software package for supply chain and warehouse automation in the Czech Republic. That surprised him because the company had installed this package for clients in nearly 1,000 other locations. He phoned the Czech project manager on the spot to find out what was going on. The manager explained that the project was unique because it was the first time that this software would be used in the Czech Republic.

The uniqueness trap feeds into what the Nobel laureate Daniel Kahneman called the “inside view.” When managers fall into it, they will fail to gather data and proven insights that could help them and will build budgets and schedules based only on their own beliefs and personal experiences. That can be risky: Plenty of behavioral research shows that when decision-makers do this, they tend to underestimate not only average risk but also the probability of rare, catastrophic outcomes. Another Nobel laureate, Richard Feynman, famously found that this was precisely what happened in the Challenger space shuttle disaster: The inside view of flight risk at NASA, especially among its top managers, was so narrow that it caused the agency to wildly underestimate the chances of an explosion, resulting in the tragic loss of the shuttle with all seven astronauts aboard.

Take an Outside View

The cure for uniqueness bias is to always assume that someone, somewhere has undertaken a project like yours, adopting what Kahneman called the “outside view.” Before you start putting your project together, therefore, ask other people in your company if they’ve seen anything like it before, because chances are that, as we just saw with the logistics firm, someone in your organization has done something comparable.

If you can’t find any direct analogues, break the project down into modules and subprocesses, which may then prove comparable across projects. One project leader at a major international bank told us that many of its teams had believed that their projects—especially big IT-led change programs—were unique, but after disassembling them into specific tasks and approaches, they found opportunities to leverage experiences from other projects. As he explained, “If you’re developing a run book for a go-live migration, you should talk to people who have done migrations before. Or if you’re trying to estimate the lead times in establishing your test environment for a new project, ask other projects and teams for their experiences with lead times to get an outside view, and use this to challenge the inside view of your team.”

If you can’t find analogues inside your organization, look further afield. At a McKinsey conference for IT leaders we attended, a participant whose company had been involved in the invention and rollout of mobile texting suggested that it had truly been a unique project. The leader explained that it had taken only a few weeks to develop the SMS app and that no one on the project or outside it had really understood what the team had invented. Adoption was slow at first. The project seemed minor. No one could have predicted the explosion in usage that would follow, and no other project had set a precedent for it. So mobile texting was unique in that sense, or so the leader argued, and many of us in the room agreed at first.

But then others jumped in and suggested that texting was not unprecedented. A host of communication technologies could be considered its forerunners, including the telegraph, the radio, the telephone, the fax machine, and early versions of today’s internet, such as ARPANET. A systematic study of the diffusion of these and other new communication tools would have given the inventors of texting an idea of the uncertainties and the S-curve growth pattern—with a slow start and acceleration later—that they were likely to face. Had anyone thought about that? No, because everyone saw texting as both unique and unimportant and was therefore not motivated to look for similarities.

Once you have found your analogues, be careful about how you process the information you glean from them. Even when taking an outside perspective, project managers making forecasts and decisions can fall prey to other biases that cause them to discount the risks attached. Fortunately, there are forecasting and risk assessment methods that help eliminate or reduce biases. The main ones are discussed next.

Reference-class forecasting

This is a way to predict the future by looking at what has happened in similar situations. In a project management context, it involves comparing possible outcomes of your project in terms of costs, timeliness, and other performance measures with how all the similar projects performed on the same measures. In other words, to assess the probability of a 10% cost overrun in your project, look at how often a 10% overrun has occurred for the whole class of comparable projects. This approach was first applied in 2004 for a mass transit project in Scotland, and today it’s used in hundreds, if not thousands, of projects across business and government.
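The core calculation is simple: tally how often the outcome of interest occurred across the reference class. The sketch below is a minimal illustration, with hypothetical overrun data; a real forecast would draw on a curated database of genuinely comparable projects.

```python
# Minimal reference-class forecast (hypothetical data, for illustration only).
# Given cost overruns observed in a class of comparable projects, estimate the
# probability that a new project in that class exceeds a given overrun threshold.

def prob_overrun_at_least(reference_overruns, threshold):
    """Share of reference-class projects whose overrun met or exceeded threshold."""
    hits = sum(1 for overrun in reference_overruns if overrun >= threshold)
    return hits / len(reference_overruns)

# Hypothetical overruns (as fractions of budget) for 10 comparable projects:
reference_class = [0.02, 0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.30, 0.45, 0.80]

print(prob_overrun_at_least(reference_class, 0.10))  # 0.7 -> a 10% overrun is common
print(prob_overrun_at_least(reference_class, 0.75))  # 0.1 -> extreme overruns are rarer
```

The point of the method is that the probability comes from the distribution of the whole class, not from the planner's inside-view estimate for the single project at hand.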

Similarity-based forecasting

A complementary and more focused tool bases predictions about the performance of a system or a project over time on the past performance of a similar system operating under comparable conditions. Similarity-based forecasting helps managers identify unexpected outcomes and variations in actual operating conditions. It can be applied in many contexts, notably in macroeconomics, where economists believe that drawing on data from situations that match your current conditions produces more accurate forecasts than relying on a more general dataset.
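One simple way to operationalize this idea is a nearest-neighbor forecast: predict from only the few past projects most similar to the current one rather than from the whole dataset. The sketch below is a toy version; the feature encoding (e.g., size and complexity scores) and the data are hypothetical.

```python
# Toy similarity-based forecast (hypothetical features and data).
# Predict a project's outcome from the k most similar historical projects
# instead of averaging over the entire, more heterogeneous dataset.
import math

def forecast_from_similar(history, current_features, k=3):
    """Average outcome of the k historical projects closest to current_features."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], current_features))[:k]
    return sum(outcome for _, outcome in nearest) / k

# (features, observed cost overrun) pairs; features might encode size, complexity, etc.
history = [
    ((1.0, 2.0), 0.10),
    ((1.1, 2.1), 0.12),
    ((5.0, 9.0), 0.60),
    ((0.9, 1.8), 0.08),
    ((4.8, 8.5), 0.55),
]

# Forecast for a small, simple project: averages the three closest records,
# ignoring the two large, dissimilar ones.
print(forecast_from_similar(history, (1.0, 2.0)))  # ~0.10
```

Restricting the comparison set to close matches is what distinguishes this approach from a plain average over all past projects, which the dissimilar outliers would distort.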

Premortems

In these exercises participants presume that a particular outcome will happen and offer an analysis of why it will occur. For example, before starting a project you might assume that it will be completed 10 months later than forecast and then explain why. Premortems entail what behavioral psychologists call “prospective hindsight,” a concept that began to appear in management literature in the wake of a groundbreaking 1989 article by Deborah Mitchell, Edward Russo, and Nancy Pennington. They’re a highly effective way of surfacing potential problems. The 1989 research suggests that prospective hindsight can improve decision-making and also make people significantly more proactive.

Noise audits

Kahneman, Andrew Rosenfield, Linnea Gandhi, and Tom Blaser described this technique in the 2016 HBR article “Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making.” The idea is that human decision-makers are swayed not only by biases but also by “noise”—factors unrelated and irrelevant to the decision being made. A noise audit helps them measure the effects of those factors. It involves presenting multiple decision-makers with a set of similar hypothetical situations and asking them to predict outcomes. For instance, you might ask a group of judges to predict the sentences for a set of similar criminal convictions. The objective is to assess how the predicted sentences of each judge vary across cases as well as how they vary across the group of judges as a whole. Typically, the noise level is the standard deviation of predictions across cases and across the predictors. If it’s high, then the judges need to revisit how they make sentencing decisions. The tool can be applied to help project managers identify whether they’re likely to be swayed by irrelevant factors in making key decisions in, say, purchasing services or hiring.
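The noise measure described above can be sketched in a few lines. The data here are hypothetical sentencing predictions; the computation shows the standard deviation of predictions across judges for each case, then summarizes it.

```python
# Sketch of a noise audit (hypothetical data). For each case, measure how much
# the predictors (judges) disagree; high dispersion signals a noisy process.
from statistics import mean, pstdev

# Each judge's predicted sentences (in months) for the same four cases:
predictions = {
    "judge_a": [12, 24, 18, 36],
    "judge_b": [10, 30, 15, 48],
    "judge_c": [20, 22, 25, 30],
}

# Regroup by case: one tuple of predictions per case.
cases = list(zip(*predictions.values()))

# Noise per case: population standard deviation across judges.
per_case_noise = [pstdev(case) for case in cases]
print([round(n, 1) for n in per_case_noise])  # [4.3, 3.4, 4.2, 7.5]

# One summary number: average noise across all cases.
print(round(mean(per_case_noise), 1))  # 4.8
```

In this toy audit, case four (average sentence of 38 months, predictions spread from 30 to 48) is the noisiest, which would prompt the group to examine how it reaches decisions on such cases.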

These methods, and their effectiveness, are well documented in management literature. Anyone interested in eliminating uniqueness bias and other preconceptions that distort decision-making—which is to say, anyone interested in running projects and organizations successfully—should become versed in them.

It’s easy to understand why people think their projects are unique. It stems from what Kahneman called “fast thinking,” which is humans’ mental default mode. Fast thinking saves project planners and managers the considerable effort of figuring out which class of project a new undertaking belongs to, what the averages and extremes are for that class, how those values translate into risk, and how that risk may be mitigated. But very few, if any, projects are unique, no matter how complex they are. Unless you accept that and invest in identifying similar endeavors and learning from them, your own project will most likely come in late and well over budget and underdeliver on benefits.

Harvard Business Review (HBR)
