
Longlist of Causes

CEARCH keeps a running longlist of causes (link) that may merit further research to determine whether they are highly impactful and worth supporting. The list covers the broad areas of global health & development, longtermism, and EA meta, and is currently around 400 causes long.

In compiling this longlist, we have used a variety of methods, as detailed in this search methodology (link); core ones include:

  • Using Nuno’s excellent list as a starting point.
  • Conducting consultations and surveys (e.g. of both EA and non-EA organizations and individuals).
  • Performing outcome tracing (i.e. looking at good/bad outcomes and identifying the underlying causes): The Global Burden of Disease database and the World Database of Happiness are especially useful in this regard.

Our hope is that this list is useful to the community, and not just our own research team.

Notes:

  • Classification of causes is fairly arbitrary, and each organization has its own approach. CEARCH finds it useful to think of causes at three distinct levels, from broadest to narrowest:
    • (1) High-level cause domains, which are problems defined in the broadest way possible: (a) global well-being, which concerns human welfare in the near term; (b) animal welfare, which is self-explanatory; (c) longtermism, which concerns human welfare in the long term; and (d) EA meta, which involves doing good through improving or expanding effective altruism itself.
    • (2) Cause areas, which are significantly narrowed down from high-level cause domains, but are still fairly broad themselves. For example, within global well-being, we might have global health, economic development, political reform, etc.
    • (3) Causes, which are problems defined in a fairly narrow way (e.g. malaria, vitamin A deficiency, childhood vaccination, hypertension, diabetes, etc.).
  • Of course, causes can always be broken down further (e.g. malaria in sub-Saharan Africa, or childhood vaccination for diphtheria), and going through our list, you can also see that causes may overlap (e.g. air pollution in a general sense, vs ambient/outdoor particulate matter pollution, vs indoor air quality, vs specifically indoor household air pollution from soot). This overlap partly reflects a lack of time on CEARCH's part to rationalize the whole list, but it also reflects our view that it can be valuable to look at problems at different levels of granularity. At higher levels, a single intervention may be able to solve multiple problems at the same time, such that a broader definition of a cause area helps find more cost-effective solutions; conversely, at lower levels, you can focus on very targeted interventions that may be very cost-effective but not generally applicable.
  • Note that animal welfare causes are not in this longlist, as CEARCH has so far not focused on them, for want of good moral weights to do evaluations with. This should not be taken to imply that animal causes are unimportant, or that research into cost-effective animal causes is not valuable.

 

Cause Exploration Contest

Open Philanthropy had its excellent Cause Exploration Prize; here, we'd like to do something similar, but with a significantly lower bar.

  • We invite people to suggest potential cause areas, providing a short justification if you find it useful (e.g. briefly covering why the issue is important/tractable/neglected), or none if you prefer (e.g. if the idea simply appears novel or interesting to you). All ideas are welcome; even causes that do not appear intuitively impactful can turn out to be fairly cost-effective upon deeper research.
  • People are also welcome to suggest potential search methodologies for finding causes (e.g. consulting weird philosophy, or looking up death certificates).

Prizes will be awarded in the following way:

  • USD 300 for what the CEARCH team judges to be the most plausibly cost-effective and/or novel cause idea (and that is not already on our public longlist of causes).
  • USD 700 for what the CEARCH team judges to be the most useful and/or novel search methodology idea (and that is not already listed in our public search methodology document).

Entries may be made here. The contest will run for a month, until 31st July 2023 (23:59, GMT-12). Multiple entries are allowed (n.b. do make separate individual submissions). The detailed rules, for those who are interested, are available here.


 

Comments



My entry:
 

Modern slavery

(Disclaimer: the following is my initial impression based on 2 minutes of Googling; I cannot promise accuracy)

Scale – 400k-1 million people are in slavery in the DRC. They lead horrendous lives, suffer a myriad of terrible health conditions, and are not free. The number is huge: more than die of malaria each year, and more than die of AIDS each year. EAs have looked into US criminal justice, but there might be nearly as many slaves in the DRC as there are prisoners in the US, and ALL of them are being held unjustly and likely suffer in many more ways than US prisoners.

Tractability – the animal welfare movement has, over the last decade, developed a host of evidence-based tools that have led to win after win for animal welfare. In particular, we have a playbook for targeted corporate campaigns and have been immensely successful at driving corporations to commit to ethical practices. Most of the products of slavery in the Congo are used by Western companies that could be pressured to change. In many ways this should be even easier than the case for animals, as people care more about humans than animals.

Neglectedness – Nobody seems to be doing this (based on my 1 minute of Googling). The anti-slavery space seems very focused on slavery in HICs (like trafficking to the US or the UK) and not on the Congo. It is talked about, but I did not find any targeted campaigns.

30-second BOTEC – (Number of years a corporate campaign program would need to run to end 75% of slavery in the Congo × cost per year) / (75% × number of slaves × best-guess DALY burden of a life in slavery) = (10 × $2,000,000) / (75% × 700,000 × 12.5) ≈ $3/DALY
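A minimal sketch of this BOTEC in Python, reproducing the arithmetic above. All inputs (campaign length, annual cost, share of slavery ended, number of people affected, and DALY burden) are the commenter's rough assumptions, not established figures:

```python
# Back-of-the-envelope cost-effectiveness estimate for a corporate campaign
# against slavery in the DRC. All inputs are rough assumptions from the comment.

campaign_years = 10            # assumed years the campaign program must run
cost_per_year = 2_000_000      # assumed cost per year, USD
fraction_ended = 0.75          # assumed share of slavery ended by the campaign
people_in_slavery = 700_000    # assumed number of people in slavery in the DRC
daly_burden_per_person = 12.5  # assumed best-guess DALY burden of a life in slavery

total_cost = campaign_years * cost_per_year
dalys_averted = fraction_ended * people_in_slavery * daly_burden_per_person

print(f"Total cost: ${total_cost:,.0f}")                              # $20,000,000
print(f"DALYs averted: {dalys_averted:,.0f}")                         # 6,562,500
print(f"Cost-effectiveness: ${total_cost / dalys_averted:.2f}/DALY")  # ~$3/DALY
```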
 

Cheers, Sam!

Hey Joel! Cool list you already have.

Is the 300 USD prize for "(2) Cause areas" and/or "(3) Causes"? You distinguish them at the start of your post but then refer to "potential cause areas", "causes", and "cause ideas" in describing the contest.

Also, it's just one USD 300 prize and one USD 700 prize, right?

Thanks!

Hi Jamie. For both (causes broadly defined)! Yes, it's just one USD 300 prize (for causes), and one USD 700 prize (for methodologies).

What a wonderful project! I really think any attempts to expand effective altruism beyond its current four main cause areas (global health, animal welfare, global catastrophic risk and EA meta) should be strongly encouraged.

Thanks! It would be interesting if we could identify a genuinely new high-level cause domain outside GHD/animals/longtermism/meta - though given how broad these are, it's definitely easier finding new important/tractable/neglected ideas *within* these domains than without.
