There are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. In my experience, this means people can see others holding the more sophisticated view and adopt the simple one without really examining either (and so without discovering the tension).

This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed. This means the very best intervention is many times, possibly orders of magnitude, more impactful than the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the more extreme version which continues "as that means I don't have to think hard about choosing among them". I will refer to this as the uniform belief.

Now, there are on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two points: 1) I think the uniform belief is a ~fairly common thing to "casually" hold - it is a belief that is easy to form automatically after engaging only cursorily with EA topics - and 2) it goes strictly against the belief that impact is power-law distributed.

On a psychological level, I think people can come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed. Once they do, the tension between the power-law belief and the uniform belief becomes clear: if a power law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, the true impacts might differ greatly from each other. We just don't know which ones have the highest impact.
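To make this concrete, here is a minimal, purely illustrative Python sketch. The distribution and its parameters are assumptions chosen for illustration, not estimates of any real interventions; the point is only that heavy-tailed draws tend to produce one value that dwarfs the rest, without anything telling you in advance which one it will be.

```python
import numpy as np

# Purely illustrative sketch: sample made-up "impact" scores for ten hypothetical
# interventions from a Pareto (power-law) distribution. The shape parameter is an
# arbitrary assumption, not an empirical estimate.
rng = np.random.default_rng(0)
impacts = rng.pareto(a=1.2, size=10) + 1  # shift so the minimum possible value is 1

impacts_sorted = np.sort(impacts)[::-1]
print("Sampled impacts, best to worst:", np.round(impacts_sorted, 1))
print("Best vs. second-best ratio:", round(impacts_sorted[0] / impacts_sorted[1], 1))
# Ex ante, nothing tells you WHICH of the ten draws will land in the tail --
# that is the sense in which "we don't know which one has the highest impact".
```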

Holding the uniform belief is a trap that people who don't reflect too deeply can fall into. I know, because I was in it myself for a while, making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think many people stay in this trap for long - EA has good social mechanisms for correcting others' beliefs[2] and I would guess it is often caught early. But it is the kind of thing that I am afraid new or cursory EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.

The more sophisticated view, which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot, before you start doing the difficult work of figuring out what you think, know which of the interventions you will end up thinking is the most important one. So at first blush - at first encounter with the 80k problem profiles, or whatever - it is fine to think that all the areas have equal expected impact[3]. You probably won't come in thinking this - because you have some prior knowledge - but it would be fine to think. What is not fine would be to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.

So newcomers see that EA has several problem areas and a wide selection of possible interventions, and can come away thinking "any of these are high impact", when the more correct view, taking the power-law distribution into account, would be more like "any of these could be the most impactful intervention, but we don't know which one yet. After doing some reflection on myself and the evidence, I think problem area X is likely to be the most impactful or most important."[4]

There is no one who has done your hard cognitive work for you. You still have to think about which things you think will lead to high impact, and which things you are a good personal fit for.

Thanks to Sam and Conor for feedback.
I’d be interested to hear if you think I’m overstating how common this trap might be.


  1. For example, issues regarding deference, personal fit, and probably more. ↩︎

  2. Now there's an ominous sentence if I've ever seen one. ↩︎

  3. You can of course have meta-beliefs about your expected posterior beliefs about the distribution of impact (that it will be power-law distributed), but not about the position of any single intervention/cause area in that distribution. ↩︎

  4. Yes, I am sneaking in here a transformation from “this area/intervention is the most impactful” to “I can do my most impactful work in this area/intervention”, but I don’t think that is substantial. ↩︎

Comments



I love the point about the dangers of "can't go wrong" style reasoning. I think we're used to giving advice like this to friends when they're stressing out about a choice that is relatively low-stakes, even like "which of these all-pretty-decent jobs [in not-super-high-impact areas] should I take." It's true that for the person getting the advice, all the jobs would probably be fine, but the intuition doesn't work when the stakes for others are very high. Impact is likely so heavy-tailed that even if you're doing a job at the 99th percentile of your options, it's probably (?) orders of magnitude worse than the 99.9th percentile — meaning you're giving up more than 90% of your impact.
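As a rough sanity check of that percentile claim, here is a small sketch that assumes impacts follow a Pareto distribution with shape parameter alpha (my assumption for illustration, not something stated above). For a Pareto distribution the p-th quantile is x_m (1 - p)^(-1/alpha), so the 99.9th-to-99th percentile ratio works out to 10^(1/alpha): roughly 10x or more for alpha ≤ 1, which is consistent with "giving up more than 90% of your impact".

```python
# Illustrative check of the 99th vs. 99.9th percentile claim, assuming a Pareto
# distribution with shape parameter alpha (an arbitrary modelling choice).
# The p-th quantile is x_m * (1 - p) ** (-1 / alpha), so the ratio of the
# 99.9th to the 99th percentile is 10 ** (1 / alpha).
for alpha in (0.5, 1.0, 1.5):
    ratio = ((1 - 0.999) / (1 - 0.99)) ** (-1 / alpha)
    given_up = 1 - 1 / ratio  # impact forgone by stopping at the 99th percentile
    print(f"alpha={alpha}: 99.9th/99th ratio ~ {ratio:.0f}x, impact given up ~ {given_up:.0%}")
```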

A corollary is that different roles and projects within each cause area are also likely to be heavy-tailed, and once again, I hear the advice of "can't go wrong" in pretty inappropriate contexts. Picking the second-best option likely means giving up most of your impact, which is measured in expected lives saved. You can definitely go wrong!

Now, we all have limited cognition, and for these kinds of decisions we ultimately have to make choices (doing nothing is also a choice); we'll inevitably make mistakes, and we should treat ourselves with some compassion. But maybe we should reframe comments like "you can't go wrong" as something like "sounds like you have some really exciting options and difficult choices ahead" and, if you have the bandwidth to actually do this, "let me know if I can help you think them through!"
