

  • I recently finished reading the book How Big Things Get Done by Bent Flyvbjerg and Dan Gardner, which was published in February this year.
  • The book explains why many projects fail (run over budget or schedule, or are never finished at all) and what strategies can make projects less likely to fail.
  • The book’s focus (and examples) is mainly the infrastructure megaprojects field (building bridges, tunnels, museums, etc.) but the insights can be applied to all sorts of projects, even small ones.
  • The book essentially summarizes the experience and research of both authors over many years in an engaging way (it’s an anecdote dense book!)
  • I tried to put together what I found to be some of the main lessons from the book, to help you decide whether the book is for you.
  • Should you read this book? Do I recommend the book?
    • While the book contains several insights from (empirical) research, it’s not an academic book / handbook. If you’re looking for a highly “fact-dense” book then I would perhaps suggest reading Megaproject Planning and Management: Essential Readings or The Oxford Handbook of Megaproject Management instead.
    • However, if you want to
      • Get a fresh perspective relative to what most project management books talk about (despite having read several books on project management, I came across lots of new things in this book)
      • Read about the interface between project management and forecasting
      • Read about several real-life examples (ranging from the Sydney Opera House to Jimi Hendrix’s recording studio, Electric Lady Studios)
    • Then I think you’ll enjoy this book and learn quite a few things (I certainly did).
    • I recommend the book to people particularly interested in improving their project planning skills.

How I found out about the book

  • Last year I was looking into resources that could help me educate myself about managing big projects and why these fail.
  • To my surprise a book about this exact topic happened to be in the making.
  • The book is co-authored by Bent Flyvbjerg and Dan Gardner.

Meta note: I strongly prefer writing in simple (mostly straightforward) language and bullet points (Workflowy is to blame). Some points might seem oversimplified to you, but this style greatly reduces my germane load (thanks to Meghan Barrett for showing me and my colleagues that the word even exists) and I hope it reduces yours as well.



Understand how much personal psychology vs. strategic misrepresentation affects decisions in your project. Flag strategic misrepresentation when you see it.

  • Any project moves along an axis from “low politics” to “high politics”
    • Low politics means low stakes, little or no “power games”, no manipulation for one’s own benefit/interest, no competition for scarce resources, and an absence of powerful individuals
    • High politics is the opposite of the above. Usually, the larger the project, the higher the politics, and vice versa.
  • The main cause of poor and/or rushed decision-making during a project depends greatly on where the project sits on this axis
    • Low politics: Individual psychology[1] is usually to be blamed for poor decision-making.
    • High politics: Strategic misrepresentation[2] is usually to be blamed.




Active planning isn’t a delay in project delivery. It is the time to cheaply[3] explore and experiment. Recognize and reward active planning.

  • Projects can be divided into two phases
    • Planning (or design, development):
      • Pushing the project’s vision to the point where it is sufficiently researched, analyzed, tested and detailed so that we have a reliable roadmap of the way forward.
      • The planning phase is a low commitment phase.
      • Tests and experiments in this phase are relatively cheap.
    • Delivery (or construction, production):
      • This is when you roll out / deploy what you’ve crafted in the planning phase
      • The delivery phase is a high commitment phase (e.g. due to contracts being signed and hard deadlines set, etc.)
      • Tests and experiments in this phase (whether intentional or unintentional) can be very costly or lead the project to shut down completely (e.g. a safety failure of a product)
  • It is tempting to rush into the delivery phase because planning is seen as a waste of time.
  • However, superficial planning usually only makes people happy at the start. Once you get to the delivery phase, the project starts to face problems that should have been identified and dealt with during planning. All of a sudden you find yourself in a “putting out fires” phase rather than a “smooth delivery” phase.
  • Planning has a bad reputation because it’s seen as a highly bureaucratic exercise e.g. fill out a project charter, do a stakeholder analysis, etc.
  • While planning can involve varying levels of bureaucracy, it should be understood as a time to carefully consider purposes and goals, explore and test alternatives, sketch ideas, build models/MVPs, investigate difficulties and risks and find solutions to these difficulties.
  • Additionally, an iterative process in the planning phase helps correct the illusion of explanatory depth[4] (bias) of the planner: When a person is forced to build a model/MVP of the project, it forces them to explain and make sense of the project to themselves and others.




If you can’t create (or simply shouldn’t release) a minimum viable product (MVP), create a maximum virtual product[5] instead.

  • But wait, isn't extending the planning phase in tension with (or even contrary to) the well-known lean startup model?[6]
  • In case you're not familiar with the lean startup model, in a nutshell: 1. Release a product quickly (minimum viable product) 2. Develop the product in response to consumer feedback 3. Repeat
  • Bent argues that his “think slow, act fast” approach contradicts the lean startup model only if you take a narrow view of the nature of planning
  • Planning is doing, iteration and learning before you deliver at full scale
  • The difference between his approach and the lean startup model is the method of testing
  • The ideal testing method is: Test whatever you want to test in the real world with real people
  • This type of testing is almost never possible for big projects because it is
    • too expensive
    • compromises safety or
    • would simply take way too long
  • The minimum viable standard (for release) for e.g. a skyscraper has a much higher bar than for a phone app (and even then, if the phone app carries high security risks, breaches of privacy, etc., that would still put a much higher bar on the MVS)
  • Therefore the minimum viable product way of testing (depending on what you're planning or what parts of your project you're planning) isn't always feasible
  • When a minimum viable product is not feasible, use a maximum virtual product[7] instead.
  • Creating a maximum virtual product requires access to the necessary technology (which need not necessarily be sophisticated) e.g. for a movie this can be storyboards and sketches




Reduce the window of doom.

  • Window of doom or window of vulnerability: The time that passes from the decision to do a project to its delivery during which an event can “crash through” and create trouble. That event can include a black swan.
  • A black swan is a low probability event with extreme consequences e.g. COVID or a stock market collapse
  • It is important to keep the window of doom as small as possible: Keep the delivery of the project short to the extent possible in order to reduce this risk
  • Active planning (see above) is one of the main strategies to reduce the window of doom by making project delivery quicker




Your project isn’t as unique as you think. Plan it under that assumption.

  • It is tempting to think about your project as unique and as a “never done before” thing. This is called uniqueness bias.
  • While that’s good for your ego, it’s really bad for planning and making project forecasts because
  • You’ll miss the chance to identify people that have worked on similar projects and learn from their experience and mistakes
  • You’ll fail to identify those experiences and mistakes as applicable to your project (because your project is different, unique) and therefore
  • You’ll miss the opportunity to utilize data from similar projects for your own project forecasts
  • Identifying the characteristics of your project that are similar to other projects will help you find the project’s reference class



Don’t settle on a bad anchor when making a forecast and make it easier for your project team to access better anchors.

  • When forecasting numbers for your project (e.g. costs and duration) a common approach is anchoring & adjustment: You find a reference number (e.g. from past projects) and then you adjust it based on your project’s characteristics (scope, complexity, inflation, etc.)
  • It’s tempting to think that the main way you improve your forecast’s quality is via adjustment
  • However, because your forecast is biased towards the anchor, it’s easy for your forecast to be (very) off when you’re starting with the wrong anchor.
  • Bad anchor = bad forecast. Good anchor = good forecast.
  • Make an effort not to pick the first number you find. Seek the best anchor you can get depending on what data you have available.
  • If you’ve identified your project’s reference class, look for external data within that reference class.
  • It’s typical for project teams not to collect data from old projects. This data, however, can be really valuable as anchors for future project forecasts.
  • Collect old project data, both internally and (as feasible) externally, in order to improve your project forecasts.
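To make the anchoring & adjustment idea concrete, here is a minimal Python sketch (the cost figures and adjustment factor are invented for illustration; they are not from the book):

```python
# Anchoring & adjustment sketch with hypothetical reference-class data.
# Costs (in $M) of past, comparable projects -- all figures invented.
past_project_costs = [12.0, 15.5, 14.0, 18.2, 13.1, 16.7, 40.0]

def anchored_forecast(reference_costs, adjustment_factor):
    """Anchor on the median of the reference class (more robust to
    outliers like the 40.0 above than the first number you happen to
    find), then adjust for this project's specifics (scope,
    complexity, inflation, ...)."""
    ordered = sorted(reference_costs)
    n = len(ordered)
    if n % 2 == 1:
        median = ordered[n // 2]
    else:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    return median * adjustment_factor

# e.g. our project is ~20% larger in scope than the typical reference project
forecast = anchored_forecast(past_project_costs, adjustment_factor=1.2)
print(f"Anchored forecast: ${forecast:.1f}M")
```

The point is the choice of anchor: you start from the middle of a relevant reference class rather than from the first (possibly flattering) number you come across, and only then adjust.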




Identify the cost risk[8] distribution of your project.

  • Most project types (see reference class) have fat tails[9]. This is crucial for knowing how to forecast project costs properly.
  • Projects with a normal or near-normal cost distribution have a “regression to the mean”:
    • Most projects are clustered around the middle (of the distribution)
    • You have a few projects on the far right and far left
    • But even the most extreme data points are not far away from the mean
    • Most projects are not normally distributed
  • Projects with a fat-tailed cost distribution have a “regression to the tail”
    • Challenge: You might not know your project is fat-tailed; if unsure assume your project is part of a fat-tailed distribution
    • Information technology projects have fat tails
    • The mean is not representative of the distribution and therefore not a good estimate for forecasts
    • Smaller projects are susceptible to fat tails and therefore black swans too. Resist the temptation to think they don’t apply to your “small project” (even house renovations)
    • Fat-tailed distributions are typical within complex systems. We live in an increasingly complex world with interdependent systems, making fat tails more likely.
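As a quick illustration (my own sketch, not from the book; the distributions and parameters are arbitrary), here is a simulation contrasting a thin-tailed with a fat-tailed cost-overrun distribution, showing why the mean stops being representative when tails are fat:

```python
import random

random.seed(42)
N = 100_000

# Thin-tailed: overruns drawn from Normal(mean=10%, sd=5%)
thin_overruns = [random.gauss(0.10, 0.05) for _ in range(N)]

# Fat-tailed: overruns drawn from a Pareto(alpha=1.5) distribution --
# most projects overrun modestly, a few blow up enormously
fat_overruns = [random.paretovariate(1.5) - 1 for _ in range(N)]

def summarize(samples):
    """Return (mean, median, 99th percentile) of a sample."""
    ordered = sorted(samples)
    mean = sum(ordered) / len(ordered)
    median = ordered[len(ordered) // 2]
    p99 = ordered[int(len(ordered) * 0.99)]
    return mean, median, p99

for name, xs in [("thin-tailed", thin_overruns), ("fat-tailed", fat_overruns)]:
    mean, median, p99 = summarize(xs)
    print(f"{name:12s} mean={mean:6.2f}  median={median:6.2f}  p99={p99:6.2f}")
```

In the thin-tailed case the mean and median nearly coincide, so the mean is a sensible forecast; in the fat-tailed case the mean sits far above the median and the 99th percentile is extreme, which is exactly why a single “expected cost” number misleads.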




For fat-tailed projects, forecast risk, not single outcomes.

  • Normally distributed projects
    • Idea: Forecasting a single outcome
    • Recommended forecasting method:
      • Reference class forecasting (RCF) using the mean (that you can get with the data gathered)
    • Risk mitigation strategy
      • Your forecast will carry a 50% risk of cost overrun (since half of the outcomes in a normal distribution fall above the mean)
      • To further reduce risk: Add a 10-15% contingency reserve
      • Ask yourself: Are you expecting to perform worse or better with your project (compared to the mean) and why? Do you expect to be in the “middle” cluster or on the far right/left?
  • Fat-tailed projects
    • Idea: Forecasting risk
      • The project is likely to cost more than xx. Why?
    • Risk mitigation strategy
      • In a typical fat-tailed distribution, about 80% of the outcomes will make up the body of the distribution, the rest (20%) will be the black swans (extreme outcomes)
      • For the main body of the distribution (80%)
        • Regular risk mitigation: Contingencies/reserves
      • For the other 20% (tail outcomes / black swans)
        • Since very high contingencies are economically prohibitive (e.g. 300%, 400% etc.), cut off the tail (black swan management)
        • Exhaustive planning (see above)
        • Further risk mitigation



Project Management & AI

  • I obviously couldn’t finish this post without mentioning the intersection of project management and AI
  • In the book there was only one mention of AI in project management: the inchstone approach (supported by AI) to improve estimates of project timelines
  • The main use cases for AI in project management that I found (elsewhere) were
    • AI-generated schedules
    • AI-generated risk logs and
    • AI-assisted cost estimation
  • I didn’t find any forecasts on Metaculus specifically related to this, but feel free to point them out in the comments if I missed something.
  • The closest thing I found is a 2019 forecast from the consultancy Gartner: "By 2030, 80 percent of the work of today’s project management (PM) discipline will be eliminated as artificial intelligence (AI) takes on traditional PM functions such as data collection, tracking and reporting."
  • Additionally, here are a couple of resources that might be interesting for you

Other books you might be interested in reading


  1. ^

     See “Thinking, Fast and Slow” by D. Kahneman

  2. ^

     Strategic misrepresentation: The tendency to deliberately and systematically distort or misstate information for strategic purposes

  3. ^

     Relative to the costs of delivery.

  4. ^

     People feeling they understand complex phenomena with far greater precision, coherence and depth than they really do

  5. ^

     Yep, that’s also MVP.

  6. ^

     Known from entrepreneur Eric Ries and Silicon Valley

  7. ^

     Hyperrealistic, highly detailed model for the project

  8. ^

     Cost risk is the risk that a project will spend more money than was originally budgeted.

  9. ^

     Really extreme cost overruns are common




