This was a post I made in the EA subreddit (link here), but it might also receive some interesting input here, hence this crosspost. Would love to hear feedback on it!


One of the key insights of EA for me is that altruistic impact follows power laws. Certain interventions are vastly more effective than others, even though these effective options are sometimes ignored by the wider public.

For a while I have had the intuition that this might also be the case for politics. I have been exhausted by the constant media cycle around it, with every week bringing a new controversy to be outraged about. But what if politics follows power laws too? What if, for example, 80% of policy interventions produce only 20% of the total impact in terms of lives saved, quality of life improved, QALYs gained, etc., while 20% of political actions produce 80% of the results?

Some cases I noticed seem to comply with this thought:

- George W. Bush, for all the bad things he did, funded a little-known anti-AIDS programme (PEPFAR) that probably saved more lives than were lost in the Iraq war.

- Land reform in Asia: because of a quite specific alignment of people and forces, the US supported a certain type of land reform in Taiwan, Japan and South Korea after World War 2, which helped kickstart these countries' industrialisation and their economic trajectory out of poverty. (Sources: &

- Biotech R&D: South Korea's industrial policy on biotech, active since the 1990s, combined with the traumatic experience of MERS in 2015, laid the basis for a quick scale-up of test-and-trace capacity while other countries fumbled with ramping up testing in the early stages of the pandemic. (source:

Of course most of this is anecdotal, and there's disagreement about how effective these policy actions were, but it would be logical for principles derived from development interventions to carry over into politics. If EAs can identify government actions with potentially high payoff, it could be a very good way to be effective. It would also add some direction to what people such as Rob Wiblin have already said about the social impact of voting, and about how EAs should engage with politics.

But again, this is mostly a gut feeling, so I would love input from EAs, and to see whether others have (or have not) been thinking along the same lines.





One thing to note here is that most commonly-used power-law distributions have strictly positive support. Political choices can and sometimes do have dramatically negative effects, and many of the catastrophes that EAs are concerned with are plausibly the result of such choices (nuclear catastrophe, for instance).

So a distribution that describes the outcomes of political choices should probably have support on the whole real line, and you wouldn't want to model choices with most simple power-law distributions. But you might be on to something: you might think of a hierarchical model in which there's some probability that a decision is either good or bad, and the degree to which it is good or bad is governed by a power-law distribution. That's the model I've been working with, but it seems incomplete to me.
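A minimal sketch of that hierarchical model (all parameter values here, including `p_good` and the tail exponent `alpha`, are illustrative assumptions, not estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_outcomes(n, p_good=0.6, alpha=1.5, x_min=1.0):
    """Hypothetical hierarchical model: each decision is good with
    probability p_good, and its magnitude follows a Pareto (power-law)
    distribution with tail exponent alpha and minimum value x_min."""
    signs = np.where(rng.random(n) < p_good, 1.0, -1.0)
    # rng.pareto samples Lomax; (1 + samples) * x_min gives classical Pareto
    magnitudes = x_min * (1.0 + rng.pareto(alpha, n))
    return signs * magnitudes

outcomes = sample_outcomes(10_000)
# With a heavy tail, a small fraction of decisions accounts for
# most of the total absolute impact:
top_decile_share = (
    np.sort(np.abs(outcomes))[-1_000:].sum() / np.abs(outcomes).sum()
)
```

This reproduces both features of the comment: outcomes land on both sides of zero, and the magnitude distribution is heavy-tailed, so the top decile of decisions carries a disproportionate share of total impact.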

Worth noting that if some political choices have very large negative outcomes, then choosing political paths that avoid those outcomes would have very positive counterfactual impact, even if no one sees it.

Good point! I can't say I have an immediate response, but I'm gonna think a bit more about this.

If EAs can identify government actions with potentially high payoff, it could be a very good way to be effective.

This seems incomplete.

The criterion for effective action here would rather be something like a (1) correctly estimated (2) high expected value of (3) marginal effort, e.g. that additional donations or work can affect the probability of important policy changes.

It could be true that policies follow a power law without this implying many effective actions (e.g. this could be true in policy spaces that are crowded, and where additional effort for one "side" leads to counteracting effort by another).

Most EAs working on issues outside global development seem to believe that funding marginal policy change in fairly technical issue areas (such as bio-risk and AI policy before 2020, and also the top recommendations in climate) is very high EV, with the top recommended funding opportunities usually being ones that influence policy in some way (in a wide understanding of "policy" that includes field building and coalition building). Matt's piece linked below gives good evidence for why that seems a reasonable assumption.

There's some related discussion and empirical evidence here.

Thank you! Great post, with a lot of what I was looking for in there.
