What is the concept of the Planning Fallacy and how does it affect decision making and time management?

The planning fallacy is a cognitive bias that affects our ability to accurately predict the time and resources needed to complete a task or project. It refers to our tendency to underestimate the amount of time and effort required to complete a task, and to overestimate our ability to meet deadlines. This concept has been studied extensively in psychology and has significant implications for decision making and time management. In this essay, we will explore the origins and effects of the planning fallacy, and how it can impact our decision making processes and ability to manage our time effectively. We will also discuss strategies for overcoming this bias and improving our planning skills.

The planning fallacy is a tendency for people and organizations to underestimate how long they will need to complete a task, even when they have past experience of similar tasks over-running. The term was first proposed in a 1979 paper by Daniel Kahneman and Amos Tversky. Since then the effect has been found for predictions of a wide variety of tasks, including tax form completion, school work, furniture assembly, computer programming and origami. The bias only affects predictions about one’s own tasks; when uninvolved observers predict task completion times, they show a pessimistic bias, overestimating the time taken. In 2003, Lovallo and Kahneman proposed an expanded definition as the tendency to underestimate the time, costs, and risks of future actions and at the same time overestimate the benefits of the same actions. According to this definition, the planning fallacy results in not only time overruns, but also cost overruns and benefit shortfalls.


Demonstration

In a 1994 study, 37 psychology students were asked to estimate how long it would take to finish their senior theses. The average estimate was 33.9 days. They also estimated how long it would take “if everything went as well as it possibly could” (averaging 27.4 days) and “if everything went as poorly as it possibly could” (averaging 48.6 days). The average actual completion time was 55.5 days, with only about 30% of the students completing their thesis in the amount of time they predicted.

Another study asked students to estimate when they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done.

  • 13% of subjects finished their project by the time they had assigned a 50% probability level;
  • 19% finished by the time assigned a 75% probability level;
  • 45% finished by the time of their 99% probability level.

A survey of Canadian taxpayers, published in 1997, found that they mailed in their tax forms about a week later than they predicted. They had no misconceptions about their past record of getting forms mailed in, but expected to get it done more quickly next time. This illustrates a defining feature of the planning fallacy: people recognize that their past predictions have been over-optimistic, while insisting that their current predictions are realistic.


Explanations

Kahneman and Tversky’s original explanation for the fallacy was that planners focus on the most optimistic scenario for the task, rather than using their full experience of how much time similar tasks require. One explanation offered by Roger Buehler and colleagues is wishful thinking; in other words, people think tasks will be finished quickly and easily because that is what they want to be the case. In a different paper, Buehler and colleagues suggest an explanation in terms of the self-serving bias in how people interpret their past performance. By taking credit for tasks that went well but blaming delays on outside influences, people can discount past evidence of how long a task should take. One experiment found that when people made their predictions anonymously, they did not show the optimistic bias. This suggests that people make optimistic estimates to create a favorable impression on others.

Some have attempted to explain the planning fallacy in terms of impression management theory.

One explanation, focalism, may account for the mental discounting of off-project risks. People formulating the plan may eliminate factors they perceive to lie outside the specifics of the project. Additionally, they may discount multiple improbable high-impact risks because each one is so unlikely to happen.

Planners tend to focus on the project and underestimate time for sickness, vacation, meetings, and other “overhead” tasks. Planners also tend not to plan projects to a detail level that allows estimation of individual tasks, like placing one brick in one wall; this enhances optimism bias and prohibits use of actual metrics, like timing the placing of an average brick and multiplying by the number of bricks. Complex projects that lack immutable goals are also subject to mission creep, scope creep, and featuritis. As described by Fred Brooks in The Mythical Man-Month, adding new personnel to an already-late project incurs a variety of risks and overhead costs that tend to make it even later; this is known as Brooks’s law.
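The brick-laying example above can be sketched as a simple bottom-up estimate: time one unit of work, multiply by the unit count, and add an explicit allowance for the "overhead" work that planners tend to leave out. All numbers and the overhead fraction below are illustrative assumptions, not figures from the studies discussed.

```python
# Bottom-up estimation sketch: measure a per-unit time, multiply by the
# number of units, then inflate by an assumed fraction for overhead work
# (meetings, sickness, vacation). All values here are hypothetical.

def bottom_up_estimate(unit_seconds, unit_count, overhead_fraction=0.25):
    """Estimate total hours from a measured per-unit time and a unit count,
    inflated by an assumed overhead fraction for non-project work."""
    base_hours = unit_seconds * unit_count / 3600
    return base_hours * (1 + overhead_fraction)

# e.g. 45 seconds to place an average brick, 8,000 bricks in the wall:
estimate = bottom_up_estimate(unit_seconds=45, unit_count=8000)
print(round(estimate, 1))  # 125.0 hours, including the 25% overhead allowance
```

The point of the sketch is that the optimistic inside view disappears once the estimate is anchored to a measured rate and an explicit overhead term, rather than to a mental picture of the project going smoothly.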

Another possible explanation is the “authorization imperative”: much of project planning takes place in a context where financial approval is needed to proceed, and the planner often has a stake in getting the project approved. This dynamic may lead the planner to deliberately underestimate the effort required, since it is easier to get forgiveness for overruns than permission to commence a project whose realistic cost is stated up front. Such deliberate underestimation has been named strategic misrepresentation.


Methods to curb the planning fallacy

Daniel Kahneman, Amos Tversky, and Bent Flyvbjerg developed reference class forecasting to eliminate or reduce the effects of the planning fallacy in decision making.
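The core of reference class forecasting is to take an "outside view": base the forecast on the actual outcomes of comparable past projects rather than on a plan for this one. Flyvbjerg's full procedure adjusts estimates using distributions of past overruns; the following is a deliberately simplified sketch that works directly from past durations, and the sample data and quantile choice are invented for illustration.

```python
# Reference class forecasting sketch: forecast a new project's duration
# from the distribution of actual durations of similar past projects,
# rather than from an inside-view plan. The data below is hypothetical.

import statistics

def reference_class_forecast(past_durations_days, quantile=0.8):
    """Return the duration at the given quantile of the reference class,
    i.e. a budget the project would have come in under `quantile` of the
    time historically."""
    cuts = statistics.quantiles(past_durations_days, n=100, method="inclusive")
    return cuts[int(quantile * 100) - 1]

# Invented reference class: actual completion times (days) of past theses.
past_theses = [41, 55, 48, 62, 70, 39, 58, 66, 51, 75]
print(reference_class_forecast(past_theses))
```

Choosing a high quantile (here the 80th percentile) is what counters the optimism documented above: the forecast comes from how long such projects actually took, not from how long the planner hopes this one will take.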
