Reconsidering the Celebration of Project Cancellations: Have We Updated Too Far?
Epistemic status: Low certainty. These are tentative thoughts, and I’m open to alternative perspectives.
Posting from an alt account.
The Effective Altruism community has made significant strides in recognizing the importance of quitting projects that don’t deliver short-term results, which helps counteract the sunk cost fallacy and promotes the efficient use of resources. In many cases this mindset is a positive development. However, I wonder if we’ve over-updated in this direction. Recent posts about project cancellations have received considerable attention; the closure post for the Center for Effective Aid Policy (CEAP), for instance, garnered 538 karma. While I don’t have a strong opinion on whether it was prudent to shutter CEAP, I am concerned that its closure, and the community’s reaction to it, pushes sentiment in a direction where initial setbacks are seen as definitive reasons to quit, even when there might still be significant long-term potential.
From an outside perspective, it seemed that CEAP was building valuable relationships and developing expertise in a complex field (global aid policy) where results may take years to materialize. Yet the organization was closed, seemingly because it wasn’t achieving short-term success. This raises a broader concern: are we in danger of quitting too early when projects encounter early challenges, rather than giving them the time needed for their high expected value (EV) to pay off? There is a tension between sticking with projects that have a low probability of short-term success but could yield immense value in the long run, and the temptation to cut losses when things aren’t immediately working out.
High-EV projects often have low-impact modal outcomes, especially in the early stages. It’s entirely possible that a project with a 20% chance of success could still be worth pursuing if the potential upside is transformative. However, these projects can look like failures early on, and if we’re too quick to celebrate quitting, we may miss out on rare but important successes. This is particularly relevant in fields like AI safety, global aid policy, or other high-risk, high-reward areas, where expertise and relationships are slow to develop but crucial for long-term impact.
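To make the arithmetic concrete, here is a minimal sketch in Python with hypothetical numbers; the 20% success probability is the only figure taken from the text, and the impact and cost values are invented for illustration.

```python
# Toy expected-value calculation; all numbers except p_success are hypothetical.
p_success = 0.20          # chance the project eventually succeeds (from the text)
impact_if_success = 100   # impact units if it succeeds (hypothetical)
impact_if_failure = 0     # impact units if it fails (hypothetical)
total_cost = 10           # cost in the same units (hypothetical)

ev = p_success * impact_if_success + (1 - p_success) * impact_if_failure
print(f"EV = {ev}, cost = {total_cost}")  # EV = 20 > 10: worth funding ex ante

# The modal outcome is still failure: even across a portfolio of five such
# projects, the chance that all five visibly fail is 0.8**5, about 33%.
p_all_fail = (1 - p_success) ** 5
print(f"P(all 5 fail) = {p_all_fail:.2f}")
```

On these assumptions, a string of early failures is only weak evidence against the strategy, which is exactly why celebrating each individual closure can mislead.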
At the same time, it’s essential not to keep investing in clearly failing projects just because they might turn around. The ability to pivot is important, and I don’t want to downplay that. But I wonder if, as a community, we are at risk of over-updating on short-term signals. Novel and complex projects often need more time to bear fruit, and shutting them down prematurely could mean forfeiting potentially transformative outcomes.
I don’t have an easy answer here, but it might be valuable to explore frameworks that help us better balance the tension between short-term setbacks and long-term EV. How can we better distinguish between projects that genuinely need to be ended and those that just need more time? Are there ways we can improve our evaluations to avoid missing out on projects with high potential because of an overemphasis on early performance metrics?
I’d love to hear thoughts from others working on long-term, high-risk projects—how do you manage this tension between the need to pivot and the potential upside of sticking with a challenging project?
I tend to agree with you, though I would rather people erred on the “close early” side than the “hold out” side, simply because the sunk cost fallacy and confirmation bias in one’s own idea are incredibly strong, and I see no compelling reason to think current funders in the EA space counteract them (beyond perhaps being more aware of them than the average funder).
In an ideal system, funders would drive most of these decisions by requiring clear milestones and evaluation processes from the organisations they fund. A funder that did this could identify predictive signals of success and help avoid both premature and overdue closures (e.g. “policy advocacy groups that later succeeded had, on average, met more/fewer comparable milestones at this stage, so we recommend continuing/stopping funding”). This still allows the organisation to pitch why it is an outlier, but the funder should be in the best position to know what signals success and what doesn’t.
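As a very rough sketch of the milestone-comparison rule described above (all names, numbers, and thresholds here are hypothetical, not a description of any real funder’s process):

```python
# Hypothetical milestone-comparison heuristic: compare an organisation's
# milestone completion rate against the historical rate of comparable
# organisations that eventually succeeded.
def funding_signal(milestones_met: int, milestones_set: int,
                   successful_peer_rate: float, tolerance: float = 0.15) -> str:
    """Coarse recommendation; the tolerance threshold is illustrative."""
    rate = milestones_met / milestones_set
    if rate >= successful_peer_rate - tolerance:
        return "continue: on track relative to historically successful peers"
    return "review: behind peers; the org can still pitch why it is an outlier"

# e.g. a policy advocacy group that met 3 of 6 milestones, where peers that
# later succeeded had met ~70% of theirs at the same stage:
print(funding_signal(3, 6, successful_peer_rate=0.70))  # -> "review: ..."
```

The point is not the specific threshold but that the funder, who sees many organisations, is the natural holder of these base rates.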
Unfortunately, I don’t see such a system, and I fear the incentives in the EA ecosystem aren’t aligned to create it. The organisations being funded enjoy the looser, less funder-involved setup, and funders limit their reputational risk by not properly evaluating what is working and why, which lets them keep funding projects they are personally interested in but that have questionable causal impact chains. (Note: I think EA global health and development (GHD) has much less of this issue, mainly because funders anchor on GiveWell assessments, which to a large degree deliver the mechanism I outline above.)