
This is a list of cause areas that EA should consider prioritising more.

For most of these cause areas, I'm aware of only a very small number of EAs working on them.

In another post where I suggested that EA cause priorities are highly uncertain and probably prone to "founder effects", I proposed this thought experiment:

Imagine 100 different timelines where effective altruism emerged. How consistent do you think the movement’s cause priorities (and rankings of them) would be across these 100 different timelines?


These cause areas are ones that I can imagine an effective altruism movement in another timeline prioritising to the extent that our EA movement prioritises randomista development and global health, farmed animal welfare, pandemics, AI safety and community building at elite universities.
  1. Improving democratic processes - reducing the effects of media bias, money in politics, voter suppression, gerrymandering, etc, especially in LMICs; maybe just throwing money at election commissions to help them work better
  2. Non-randomista global health and development - eg funding non-partisan policy think tanks and university departments in LMICs to advocate for better policies
  3. Metascience, open access and science methods - probably huge flow-through effects
  4. Growing EA in India - large English-speaking population; governments have historically been influenced by technocrats; middle-income country; currently experiencing democratic backsliding, so a good place to promote liberal democracy; growing alternative protein industry; emerging meat-eater problem; high-risk area for zoonotic pandemics to originate; high burden of antimicrobial resistance; lots of software engineers, so a good place to grow AI safety awareness; a nuclear power bordering two nuclear powers; large carbon footprint
  5. Antimicrobial resistance - a straightforward conclusion of longtermism, since short-term gains are being prioritised at the risk of long-term harms
  6. General medical research - research funding seems poorly optimised, and we could probably identify areas that perform well under the ITN framework; medical research has a very strong track record of social impact
  7. Encouraging multinational corporations to use tiered pricing across countries, to improve access to goods and services in poorer countries without reducing company profits
  8. Improving education systems - designing curricula more systematically, tied to what produces value for the student and for society, helps the student improve the world and makes the student happier in the long term, rather than tied to traditional educational disciplines. Simple interventions could include advocating for more lessons focused on ethics, economics, statistics, psychology, positive psychology, health, the scientific method, democracy, politics, voting and extremism, with students examined on these topics.
  9. Fighting the credential arms race
  10. Studying the positionality of goods and services with regard to their effects on wellbeing, and suppressing industries focused on producing highly positional goods and services
  11. General advocacy and education to improve public opinion on key political issues where public opinion diverges greatly from what is morally good / empirically effective - free trade, immigration, foreign aid
  12. Space governance - this seems more urgent than many other cause areas, as its tractability will probably decline over time as norms become established organically
  13. Frugal innovation in global health (innovation targeted at maximising the impact of resources in low-resource settings)
  14. Digital health in global health - highly scalable
  15. Political representation for children, children’s rights and children’s issues (not the same as representation of future generations)
  16. Political representation for animals
  17. Political representation for foreigners
  18. Treatment-resistant depression, scalable mental health services and development of better antidepressants - particularly under prioritarianism
  19. Child sexual abuse - particularly under prioritarianism
  20. Torture of detainees - particularly under prioritarianism
  21. Palliative care, including opioid access and development of alternative and better painkillers - particularly under prioritarianism
  22. Eradication of infectious diseases - particularly under longtermism (as we approach eradication of an infectious disease, further reduction of the disease burden becomes less cost-effective, but if we factor in long-term benefits, it is probably a good use of resources)
  23. Better treatment of prisoners - particularly under prioritarianism
  24. Loneliness
  25. Vaccine hesitancy
  26. Global minimum wealth tax + cracking down on tax evasion by wealthy people in LMICs
  27. Land value tax advocacy
  28. Supervolcanoes
  29. Community building in the least populous countries to influence national policy and then international governance via international organisations
  30. Optimising intellectual property systems to speed up innovation

Comments

Agreed about the tractability of space governance over time. We are setting up foundations for this field at the Center for Space Governance – if anyone is interested in supporting us, please get in touch.

Given a limited stock of resources, prioritizing one thing more means prioritizing some other things less. Do you have thoughts about which causes "EA" should consider prioritizing less?

I’d say community building at elite western universities, general global priorities and longtermism philosophy research, and the development of alternative proteins. But it could also make sense to shift resources very broadly from current priorities to these ones.

Most of these seem intractable and many have lots of people working on them already.

The benefit of bed nets and vitamin A supplementation is that they are proven solutions to neglected problems.

Agree that it would be difficult to generate comparably high certainty evidence for most of these cause areas.

However, I think interventions in these areas could still have high expected value and perform well under the ITN framework, so they could still be worth pursuing - in the same way EA currently pursues pandemic preparedness, AI safety, and broader and political approaches to international development and farmed animal welfare.

Interested to hear which of these causes you feel are not neglected at the moment. I’d say you’re probably right for 19, 20 and 25.

On 18A/C, there have been a number of very expensive trials for new antidepressant agents, including for TRD, and they have generally underperformed in Phase III trials. There is a huge financial incentive for a successful product in this area in the high-income markets. So not particularly neglected, and I'm not sold on tractability either. For example, I don't think the current armamentarium of antidepressants as monotherapy is more effective than the old-school MAOIs and TCAs from decades ago (although the drug interactions, cheese-eating risk, and overdose risk are much improved with SSRIs etc.).

I think promoting access to mental-health care in low-income countries is an easier argument to make than throwing billions more into trying to find a better treatment for TRD.

Good point, this makes sense.

I'd add specifically transforming American governance/the political system and protecting the United States from destabilization/the anti-democratic threat. (I'm singling out the United States because of its outsized influence as the world's superpower.)

This may fall under "general medical stuff" but I've always been surprised how little EA seems to care about aging and human longevity, especially given how fond this community is of measuring "quality-adjusted life years".

Progress here could solve depopulation problems among the other obvious benefits.

I think there’s a decent “extending lifespan would slow down generational replacement, slowing down moral progress” argument which means that extending lifespan is lower EV than lots of other stuff

If I understand what you're saying correctly, this is another reason I don't identify as EA.

You're basically saying people dying is advantageous because their influence is replaced by people you deem as having superior virtues?

It's not obvious to me that "replacement" generations have superior values to those that they replace merely on account of being younger/newer etc.

But even accepting that's the case, how is discounting someone's life because they have the wrong opinions not morally demented?

I’m saying that’s a benefit of death and that it reduces the EV of extending lifespan, not that it makes death good overall or means that extending lifespan is net negative.

Even if lifespan extension is good, it shouldn’t be a major EA cause area unless it has a very high EV.

Agree that the importance of generational replacement for moral progress is not super clear, but I expect that the effect of generational replacement is large enough for maximum lifespan extension to not be a good focus for EA.

Also worth adding that there is a strong private sector incentive to develop anti-aging interventions, which also makes this a less promising cause area for EA.

Also agree that “value lives equally” is a good principle, but when allocating limited resources to high impact interventions, I think it makes sense to account for all known factors, including the effects of the moral views of the beneficiaries of the interventions, even if that causes us to value lives slightly unequally.

Also, I don’t think my views are generally representative of EA, so I would advise against making judgements about EA based on my views alone.

I'm loath to use this, but let's use QALYs and assume, as I believe, that they can never be less than 0 (i.e. that it is never better to die than to live).

There is nothing worse than death. There are no benefits unless that death unlocks life.

I don't think the (likely nonexistent) positive effects of "generation replacement" will mean literally fewer deaths, and certainly not on a scale to justify discounting the deaths of entire generations of individuals.

I don't think "personal beliefs" should be included in an "all known factors" analysis of how we invest our resources. Should I value Muslim lives less because they may disagree with me on gay rights? Or capital punishment? Why not, in your framework?

I also don't think there's a "but" after "all lives are equal". That can be true AND we have to make judgment calls about how we invest our resources. My external action is not a reflection of your intrinsic worth as a human but merely my actions given constraints. Women and children may be first on the lifeboat, but that does not mean they are intrinsically worth more morally than men. I think it's a subtle but extremely important distinction, lest we get to the kind of reasoning that permits explicitly morally elevating some subgroups over others.

I do agree that there is private sector incentive for anti-aging, but I think that's true of a lot of EA initiatives. I'm personally unsure that diverting funds away from Really Important Stuff just because RIS happens to be profitable is wise. I could perhaps make the case it's even MORE important to invest there, if you're inclined to be skeptical of the profit motive (though I'm not, so I'm not included).

FWIW, my view is that there are states worse than being dead, such as extreme suffering.

I don’t mean that we should place less intrinsic worth on people’s lives because of their views, but I think it is okay to make decisions which do effectively violate the principle of valuing people equally - your women-and-children-first lifeboat case is a good example of this. (I also agree with you that there’s a slippery slope here and that it's important to distinguish between the two.)

I think “don’t donate to solving problems where there is strong private sector incentive to solve them” is a good heuristic for using charity money as effectively as possible, because there is a very large private sector trying to maximise profit and a very small EA movement trying to maximise impact. Agree that EA doesn’t follow this heuristic very consistently, e.g. I think we should donate less to alternative protein development since there’s strong private sector incentive there.