
Introduction

The Centre for Exploratory Altruism Research (CEARCH) emerged from the 2022 Charity Entrepreneurship Incubation Programme. In a nutshell, we do cause prioritization research, as well as subsequent outreach to update the EA and non-EA communities on our findings.

Exploratory Altruism

The Problem

There are many potential cause areas (e.g. improving global health, or reducing pandemic risk, or addressing long-term population decline), but we may not have identified what the most impactful causes are. This is the result of a lack of systematic cause prioritization research.

  • EA’s three big causes (i.e. global health, animal welfare and AI risk) were not chosen by systematic research, but by historical happenstance (e.g. Peter Singer being a strong supporter of animal rights, or the Future of Humanity Institute influencing the early EA movement in Oxford).
  • Existing cause research is not always fully systematic; for lack of time, it does not always involve (a) searching for as many causes as possible (e.g. more than a thousand) and then (b) researching and evaluating all of them to narrow down to the top causes.
  • The search space for causes is vast, and existing EA research organizations agree that there is room for a new organization.

The upshot of insufficient cause prioritization research, and of not knowing the most impactful causes, is that we cannot direct our scarce resources accordingly. Consequently, global welfare is lower and the world worse off than it could be.

Our Solution

To solve this problem, CEARCH carries out:

  • A comprehensive search for causes.
  • Rigorous cause prioritization research, with (a) shallow research reviews done for all causes, (b) intermediate research reviews for more promising causes, and finally (c) deep research reviews for potential top causes.
  • Reasoning transparency and outreach to allow both the EA and non-EA communities to update on our findings and to support the most impactful causes available.

Our Vision

We hope to discover a Cause X every three years and significantly increase support for it.

Expected Impact

If you're interested in the expected impact of exploratory altruism, do take a look at our website (link), where we discuss our theory of change and the evidence base. Charity Entrepreneurship also has a detailed report out on exploratory altruism (link).

Team & Partners

The team currently comprises Joel Tan, the founder (link).

However, we're looking to hire additional researchers in the near future - do reach out (link) if you're interested in working with us. Do also feel free to get in touch if you wish to discuss cause prioritization research/outreach, provide advice in general, or if you believe CEARCH can help you in any way.

Research Methodology

Research Process

Our research process is iterative:

  • Each cause is subject to an initial shallow research round of one week of desktop research.
  • If the cause's estimated cost-effectiveness is at least an order of magnitude greater than that of a GiveWell top charity, it passes to the intermediate research round of two weeks of desktop research and expert interviews.
  • Then, if the cause's estimated cost-effectiveness is still at least an order of magnitude greater than that of a GiveWell top charity, it passes to the deep research round of four weeks of desktop research, expert interviews, and the potential commissioning of surveys and quantitative modelling.

The idea behind the threshold is straightforward - research at the shallower level tends to overestimate a cause's cost-effectiveness, so if a cause doesn't appear effective early on, it's probably not going to be a better-than-GiveWell bet, let alone a Cause X orders of magnitude more important than our current top causes. Consequently, it's likely a better use of time to move on to the next candidate cause than to spend more time on this particular one.
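
To make the triage rule concrete, here is a minimal sketch of the filtering logic described above (illustrative only; the function name is an assumption, and the GiveWell benchmark is back-calculated from the figures quoted later in this post, not an official number):

```python
# Illustrative sketch of the tiered triage described above (not actual CEARCH code).
# The ~640 DALYs averted per USD 100,000 benchmark is an assumption back-calculated
# from the results quoted later in this post.

GIVEWELL_BENCHMARK_DALYS_PER_100K = 640   # assumed benchmark, not an official figure
THRESHOLD_MULTIPLE = 10                   # "at least an order of magnitude greater"

def next_research_round(current_round: str, estimated_dalys_per_100k: float) -> str:
    """Decide whether a cause advances from the current research round to the next."""
    if estimated_dalys_per_100k < THRESHOLD_MULTIPLE * GIVEWELL_BENCHMARK_DALYS_PER_100K:
        return "dropped"  # likely not a better-than-GiveWell bet; move on to the next cause
    return {"shallow": "intermediate", "intermediate": "deep"}.get(current_round, "done")

# Example: a cause estimated at 7,000 DALYs per USD 100,000 after shallow research
print(next_research_round("shallow", 7_000))  # -> intermediate
```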

Evaluative Framework

CEARCH attempts to identify a cause's marginal expected value (MEV):

  • MEV = t * Σₙ (p * m * s * c) (summing over each benefit or cost n)

where

  • t = tractability, or proportion of problem solved per additional unit of resources spent
  • p = probability of benefit/cost
  • m = moral weight of benefit/cost accrued per individual
  • s = scale in terms of number of individuals benefited/harmed at any one point in time
  • c = persistence of the benefits/costs
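
As a minimal sketch of how these terms combine in practice (all values below are placeholders for illustration, not CEARCH estimates):

```python
# Minimal sketch of the MEV formula: MEV = t * Σₙ (p * m * s * c),
# summing over each benefit or cost n. All numbers are illustrative placeholders.

def marginal_expected_value(tractability: float, effects: list[dict]) -> float:
    """Combine tractability with the summed probability-weighted effects."""
    return tractability * sum(e["p"] * e["m"] * e["s"] * e["c"] for e in effects)

effects = [
    # a health benefit: probable, moderate moral weight, large scale, persistent
    {"p": 0.8, "m": 0.1, "s": 1_000_000, "c": 5},
    # a side-effect cost (negative moral weight): less probable, less persistent
    {"p": 0.3, "m": -0.02, "s": 500_000, "c": 2},
]
print(marginal_expected_value(tractability=1e-6, effects=effects))  # -> roughly 0.394
```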

This can be viewed as an extension of the ITN framework, for this approach also takes into account the three ITN factors:

  • Importance: Factored in with p * m * s * c.
  • Tractability: Factored in with t.
  • Neglectedness: Factored in with (i) c, since the persistence of the benefits will depend on how long the problem would have lasted and harmed people sans intervention, and that in turn is a function of the extent to which the cause is neglected; and (ii) t, since tractability is a function of neglectedness to the extent that diminishing marginal returns apply.

However, the MEV framework has the following additional advantage:

  • Through c, it takes into account not just the decline (i.e. non-persistence) of a problem from active intervention (i.e. the neglectedness issue), but also decline from secular trends (e.g. economic growth reducing disease burden through better sanitation, nutrition, and greater access to healthcare).

In implementing the MEV framework, special effort is made to brainstorm all the potential benefits and costs - though, in our experience, the health effects tend to swamp the non-health effects.

For more details, refer to this comprehensive write-up on CEARCH's evaluative framework (link).

Research Findings

We recently finished conducting shallow research on nuclear war, fungal disease, and asteroid impact. To summarize our findings:

Nuclear War

Taking into account the expected benefits of denuclearization (i.e. fewer deaths and injuries from nuclear war), the expected costs (i.e. more deaths and injuries from conventional war due to weakened deterrence), and the tractability of lobbying for denuclearization, CEARCH finds the marginal expected value of lobbying for denuclearization to be 248 DALYs per USD 100,000, which is around 39% as cost-effective as giving to a GiveWell top charity.

For more details, refer to our cost-effectiveness analysis (link) on the matter as well as the accompanying research report (link).

Fungal Disease

Considering the expected benefits of eliminating fungal infections (i.e. fewer deaths, less morbidity and greater economic output) as well as the tractability of vaccine development, CEARCH finds the marginal expected value of vaccine development for fungal infections to be 1,104 DALYs per USD 100,000, which is around 1.7x as cost-effective as giving to a GiveWell top charity.

For more details, refer to our cost-effectiveness analysis (link) on the matter as well as the accompanying research report (link).

Asteroids

Factoring in the expected benefits of preventing asteroid impact events (i.e. fewer deaths and injuries) as well as the tractability of lobbying for asteroid defence, CEARCH finds the marginal expected value of such asteroid defence lobbying to be 1,352 DALYs per USD 100,000, which is around 2.1x as cost-effective as giving to a GiveWell top charity.

For more details, refer to our cost-effectiveness analysis (link) on the matter as well as the accompanying research report (link).
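
As a quick arithmetic check on the multiples quoted above (the GiveWell benchmark of roughly 640 DALYs per USD 100,000 is back-calculated from this post's own figures, e.g. 248 / 0.39 ≈ 640, and is not an official GiveWell number):

```python
# Reproducing the GiveWell multiples quoted above from the DALY figures.
# The ~640 DALYs per USD 100,000 benchmark is back-calculated from this post's
# own numbers, not an official GiveWell figure.

GIVEWELL_BENCHMARK = 640  # assumed DALYs averted per USD 100,000

results = {"nuclear war": 248, "fungal disease": 1_104, "asteroid impact": 1_352}
for cause, dalys_per_100k in results.items():
    print(f"{cause}: {dalys_per_100k / GIVEWELL_BENCHMARK:.1f}x a GiveWell top charity")
# -> nuclear war: 0.4x, fungal disease: 1.7x, asteroid impact: 2.1x
```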

General Comments

The causes were selected purely out of interest, not because they were expected to be especially cost-effective. However, our expectations at the outset were that, in terms of cost-effectiveness, the causes would rank in the following way (in descending order):

  1. Fungal diseases: Importance probably low compared to longtermist causes, though the problem is certain and there seem to be decently tractable solutions (e.g. advance market commitments).
  2. Nuclear war: Achieving change here is likely to be extremely difficult, while the per annum probabilities are fairly low if still meaningful.
  3. Asteroid impact: High impact if it occurs, but not neglected given DART; the probability of occurrence is extremely low, and one imagines that tractability isn't that great (effective but expensive).

The results (asteroid impact being the most cost-effective cause, followed by fungal disease, and then nuclear war) were hence moderately surprising. While we wouldn't over-update on such a small sample, we do think it's a data point against the value of intuition in selecting cause areas for initial cause prioritization research, and for making the effort to research as many causes as possible, even ones that do not seem especially important on the surface.

Going Forward

CEARCH will be publishing more detailed forum posts on nuclear war/fungal disease/asteroid impact, and will also continue doing research into additional causes, following the process and methodology outlined above. Comments and criticisms on our research methodology and on our specific research results are, of course, welcome.

Comments

Congratulations on launching your new organisation!

When I read your post I realised that I was confused by a few things:

(A) It seems like you think that there hasn't been enough optimisation pressure going into the causes that EA is currently focussed on (and possibly that 'systematic research' is the only/best way to get sufficient levels of optimisation).
 

EA’s three big causes (i.e. global health, animal welfare and AI risk) were not chosen by systematic research, but by historical happenstance (e.g. Peter Singer being a strong supporter of animal rights, or the Future of Humanity Institute influencing the early EA movement in Oxford).

I think this is probably wrong for a few reasons:
1. There are quite a few examples of people switching between cause areas (e.g. Holden, Will M, Toby Ord moving from GHD to Longtermism). Also, organisations seem to have historically done a decent amount of pivoting (GiveWell -> GiveWell Labs/Open Phil, 80k spinning out ACE ...).
 

2. Finding Cause X has been a meme for a pretty long time, and I think looking for new causes/projects etc. has been pretty baked into EA since the start. I think we just haven't found better things because the things we currently have are very good according to some worldview.

3. My impression is that many EAs (particularly highly involved EAs) have done cause prioritisation themselves. Maybe not to the rigour that you would like, but my sense is that many community members do this work themselves, and then aggregating by looking at what people end up doing gives some data (although I agree it's not perfect). To some degree, cause exploration happens by default in EA.
 


(B) I am also a bit confused about why the goal or proxy goal is to find a cause every 3 years. Is it 3 rather than 1 or 6 due to resource constraints, or is this number mostly determined by some a priori sense of how many causes there 'should' be?

(C) Minor: You said that EA's big 3 cause areas are global health, animal welfare and AI risk. I am not sure what the natural way of carving up the cause area space is, but I'd guess that biosecurity should also be on this list. Maybe something pointing at meta EA, depending on what you think of as a 'cause'.

 

I think there are also good worldview-based explanations for why these causes should have been easy to discover and should remain among the main causes:

  1. The interventions that are most cost-effective with respect to outcomes measured with RCTs (for humans) are GiveWell charity interventions. Also, for human welfare, your dollar tends to go further in developing countries, because wealthier countries spend more on health and consumption (individually and at the government level) and so already pick the lowest hanging fruit.

  2. If you don't require RCTs or even formal rigorous studies, but still expect feedback on outcomes close to your outcomes of interest or remain averse to putting everything into a single one-shot (described in 3), you get high-leverage policy and R&D interventions beating GiveWell charities. Corporate and institutional farmed animal interventions will also beat GiveWell charities, if you also grant substantial moral weight to nonhuman animals.

  3. If you aren't averse to allocating almost everything into shifting the distribution of a basically binary outcome like extinction (one-shotting) with very low probability, and you just take expected values through and weaken your standards of evidence even more (basically no direct feedback on the primary outcomes of interest), you get some x-risk and global catastrophic risk interventions beating GiveWell charities, and if you don't discount moral patients in the far future or don't care much about nonhuman animals, they can beat all animal interventions. AI risk stands out as by far the most likely and most neglected such risk to many in our community. (There are some subtleties I'm neglecting.)

Thanks a lot for the feedback!

(a) Agreed that there is a lot of research being done, and I think my main concern (and CE's too, I understand, though I won't speak for Joey and his team on this) is the issue of systematicity - causes can appear more or less important based on the specific research methodology employed, and so 1,000 causes evaluated by 1,000 people just doesn't deliver the same actionable information as 1,000 causes evaluated by a single organization employing a single methodology.

My main outstanding uncertainty at this point is just whether such an attempt at broad systematic research is really feasible given how much time research even at the shallow stage is taking.

I understand that GWWC is looking to do evaluation of evaluators (i.e. GiveWell, FP, CE etc.), and in many ways, maybe that's far more feasible in terms of providing the EA community with systematic, comparative results - if you get a sense of how much more optimistic/pessimistic various evaluators are, you can penalize their individual cause/intervention prioritizations, and get a better sense of how disparate causes stack up against one another even if different methodologies/assumptions are used.

(b) The timeline for (hopefully) finding a Cause X is fairly arbitrary! I definitely don't have a good/strong sense of how long it'll take, so it's probably best to see the timeline as a kind of stretch goal meant to push the organization. I guess the other issue is how much more impactful we expect Cause X to be - the DCP global health interventions vary by a factor of around 10,000 in cost-effectiveness, and if you think that interventions within broad cause areas (i.e. global health vs violent conflict vs political reform vs economic policy) vary at least as much, then one might expect there to be some Cause X out there three to four orders of magnitude more impactful than top GiveWell stuff, but it's so hard to say.

(c) Wrote about the issue of cause classification in somewhat more detail in the response to Aidan below!

very cool! A couple q's:

  • research at the shallower level tends to overestimate a cause's cost-effectiveness

sounds plausible to me, but curious why you think this.

  1. On nuclear war, did you try to factor in the chance that a nuclear exchange could lead to a catastrophic collapse that leads to extinction?

(1) Theoretically, additional detail in your CEA means: (a) a more discrete and granular theory of change, which necessarily reduces the probability of success, and (b) trying to measure more flow-through effects/externalities, which, while typically positive, are more uncertain and tend also to be less important compared to the primary health effects measured. With the impact of (a) > (b), more research attrites the estimated cost-effectiveness.
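
To illustrate point (a) with toy numbers (made up for illustration, not drawn from any CEARCH CEA): breaking a theory of change into more explicit steps, each with its own probability, mechanically lowers the modelled chance of end-to-end success.

```python
# Toy illustration of point (a): a more granular theory of change multiplies in
# more step probabilities, so the modelled probability of success falls.
# All probabilities below are made up for illustration.
import math

coarse_toc = [0.5]                    # shallow model: "lobbying succeeds" ~ 50%
granular_toc = [0.8, 0.7, 0.6, 0.5]   # e.g. meeting secured, policymaker persuaded,
                                      # policy passed, implementation holds

print(math.prod(coarse_toc))    # 0.5
print(math.prod(granular_toc))  # ~0.168 -> deeper modelling attrites the estimate
```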

(2) Empirically, and from past experience, this has been the case for various organizations, to my understanding. Eric Hausen has spoken about Charity Science Health's process (the more you look at something, the worse it seems), and GiveWell has written about this before, I believe (somewhere, might dig it up eventually!)

These were also two questions that jumped to mind for me as I read this post.

On the catastrophic collapse issue - no, didn't look at that! It wouldn't change the headline cost-effectiveness that much, but it might depend on your views on astronomical waste.

Is CEARCH pronounced "search"?

Yep! My fellow 2022 CE incubatees and I probably spent more time than was wise on brainstorming cool-sounding names and backronyms. In hindsight, perhaps I should have just gone with Cause Research Advancement and Prioritization (CRAP)!

Whatever the answer, I don't think it can be prevented 🙃

Love to see it.

Because of the way you'd framed the problem - that people have rarely evaluated thousands of causes - I was expecting the "shallow" research round to be a lot shorter than a week. At that rate, if you wanted to do shallow research on 1000 causes a year, you'd need 20 researchers.

You're absolutely right that the shallow research part is fairly time-intensive, and not at all ideal. I had started out thinking one could get away with <=1 day's worth of research at the shallow stage, but I found that just wasn't sufficient to get a high-confidence evaluation (taking into consideration the research, the construction of a CEA, the double-checking of all calculations, writing up a report, etc.). To put things in context, Open Phil takes a couple of weeks for their shallow research, and bringing that down to 1 week already involves considerable sacrifice (not being able to get expert opinions beyond what is already published); getting it further down to 1-3 days would be too detrimental to research quality, I think.

Aside from attempting to shorten the research process, ramping up the size of the research team would be the obvious solution, as you say, and it's what I'll be trying to pursue in the near term. Of course, funding constraints (at the organizational level) and general talent constraints (at the movement level) probably limit us here. Hence, I'm fairly enthusiastic about Akhil's and Leonie's Cause Innovation Bootcamp!

That makes a lot of sense. I find things often take what feels like a long time, even when you're trying to go fast.

Exciting stuff! Looking forward to seeing what you come up with. I agree that the movement has not been systematic enough on cause prioritisation. 

One thing I'm curious about... where do you draw the line on:

(a) Where one cause ends and the other begins / how to group causes:

For example, aren't fungal diseases, nuclear war and asteroids all sub-causes of global health, in that we only (or at least mainly) care about them insofar as they threaten global health? AI safety is the same (except that in addition to mattering because it threatens health, it also matters because it has the opportunity to bring about happiness). 

(b) Where causes end and interventions begin:

You're measuring the promise of these cause areas in DALYs per $100k, which means you've started thinking about the solutions already. Is CEARCH doing intervention exploration too?

(a) It's definitely fairly arbitrary, but the way I find it useful to think about it is that causes are problems, and you can break them down into:

  • High-level cause area: The broadest possible classification, like (i) problems that primarily affect humans in the here and now; (ii) problems that affect non-human animals; (iii) problems that primarily affect humans in the long run; and (iv) meta problems to do with EA itself.
  • Cause Area: High-level cause domains (e.g. neartermist human problems) can then be broken down into various intermediate-level cause areas (e.g. global disease and poverty -> global health -> communicable diseases -> vector-borne diseases -> mosquito-borne diseases) until they reach the narrowest, individual cause level.
  • Cause: At the bottom, we have problems that are defined in the most narrow way possible (e.g. malaria).

In terms of what level cause prioritization research should focus on - I'm not sure if there's an optimal level to always focus on. On the one hand, going narrow makes the actual research easier; on the other, you increase the amount of time needed to explore the search space, and also risk missing out on cross-cause solutions (e.g. vaccines for fungal diseases in general and not just, say, candidiasis).

(b) I think Michael Plant's thesis had a good framing of the issue, and at the risk of summarizing his work poorly, I think the main point is that if causes are problems then interventions are solutions, and since we ultimately care about solving problems in a way that does the most good, we can't really do cause prioritization research without also doing intervention evaluation.

The real challenge is identifying which solutions are the most effective, since at the shallow research stage we don't have the time to look into everything. I can't say I have a good answer to this challenge, but in practice I would just briefly research what solutions there are, and choose what superficially seems like the most effective. On the public health front, where the data is better, my understanding is that vaccines are (maybe unsurprisingly) very cost-effective, and the same goes for gene drives.
