As a community, EA sometimes talks about finding "Cause X" (example 1, example 2).

The search for "Cause X" featured prominently in the billing for last year's EA Global (a).

I understand "Cause X" to mean "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar."

This afternoon, I realized I don't really know how many people in EA are actively pursuing the "search for Cause X." (I thought of a couple of people, who I'll note in comments to this thread. But my map feels very incomplete.)


9 Answers

In my understanding, "Cause X" is something we almost take for granted today, but that people in the future will see as a moral catastrophe (similar to how we see slavery now, versus how people in the past saw it). So it has a bit more nuance than just being a "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar".

I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of "recognition" that a neglected moral problem will start from in a very interconnected world like ours. Here is what comes to my mind:

  • Wild animal suffering (probably not recognised as a moral problem by the majority of the population)
  • Aging (many people probably ascribe it a neutral moral value, maybe because it is rightly regarded as a "natural part of life". That may be a right consideration, but it doesn't settle aging's moral value or how many resources we should devote to the problem)
  • "Resurrection" or, in practice, right now, cryonics. (Probably neutral value/not even remotely in the radar of the general population, with many people possibly even ascribing it a negative moral value)
  • Something related to subjective experience? (aspects of subjective experience that people don't deem worthy of moral value because "times are still too rough to notice them", or aspects of subjective experience that we are missing out on but could achieve today with the right interventions)

Cause areas that I think don't fit the definition above:

  • Mental health, since it is recognised as a moral problem by a large fraction of the population (though still probably not a large enough one?), although it is still too neglected.
  • X-risk. Recognised as a moral problem (who wants the apocalypse?) but still too neglected, for reasons probably unrelated to ethics.

But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I'm not sure if there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.

I believe you could argue that every organisation devoted to finding new potential cause areas is.

What organizations do you have in mind?

Emanuele_Ascani, 5y:
Open Philanthropy, GiveWell, and Rethink Priorities probably qualify. To clarify: my phrase didn't mean "devoted exclusively to finding new potential cause areas".

Thanks!

Very curious why this was downvoted. (This idea has been floated before, e.g. on the 80,000 Hours podcast, and seems like a plausible Cause X.)

I think working on preventing the collapse of civilization given a loss of electricity/industry, due to an extreme solar storm, high-altitude electromagnetic pulses, or a narrow-AI computer virus, is a Cause X (disclaimer: I'm a co-founder of ALLFED).

This is not a solution/answer, but someone should design a clever way for us to be constantly searching for Cause X. I think a general contest could help, such as an "Effective Thesis Prize" to reward good work aligned with EA goals; perhaps Cause X could be the aim of a contest of its own.

Rethink Priorities seems to be the obvious organization focused on this.

From their website:

Right now, our research agenda is primarily focused on:

  • prioritization and research work within interventions aimed at nonhuman animals (as research progress here looks uniquely tractable compared to other cause areas)
  • understanding EA movement growth by running the EA Survey and assisting LEAN and SHIC in gathering evidence about EA movement building (as research here looks tractable and neglected)

Sounds like they're currently focused on new animal welfare & community-building interventions, rather than finding an entirely different cause area.

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "Cause X" because other EAs are aware of this cause already, but I think it will help unlock important new interventions.

Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "Cause X" because EAs are already aware of it.

Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not "Cause X" because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.

Milan_Griffes, 5y:
Thanks! Is there a public-facing prioritized list of Rethink Priorities projects? (Just curious)
Peter Wildeford, 5y:
Right now everything I mentioned is in https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019. We're working on writing up an update.

Between this, some ideas about AI x-risk and progress, and the unique position of the EA community, I'm beginning to think that "move Silicon Valley to cooperate with the US government and defense on AI technology" is Cause X. I intend to post something substantial in the future.

[anonymous], 5y:

Can you expand on this answer? E.g. how much of a focus this is for you, how long you've been doing this, how long you expect to continue doing this, etc.

Peter Wildeford, 5y:
I'd refer you to the comments of https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#Jp9J9fKkJKsWkjmcj
[anonymous], 5y:
The link didn't work properly for me. Did you mean the following comment?
Peter Wildeford, 5y:
Yep :)

GiveWell is searching for cost-competitive causes in many different areas (see the "investigating opportunities" table).

Good point. Plausibly this is Cause X research (especially if they team up with Mark Lutter & co.); I'll be curious to see how far outside their traditional remit they go.

Arguably it was the philosophers who found the last few. Once the missing moral reasoning was shored up, the cause-area conclusion followed pretty deductively.

12 Comments

One great example is the pain gap / access abyss. The term was only coined around 2017; it got some attention at EA Global London 2017 (?), and then OPIS stepped up. I don't think the OPIS staff were doing a cause-neutral search for this (they were founded in 2016) so much as it was independent convergence.

Their website suggests it wasn't independent.

'The primary issue for OPIS is the ethical imperative to reduce suffering. Linked to the effective altruism movement, they choose causes that are most likely to produce the largest impact, determined by what Leighton calls “a clear underlying philosophy which is suffering-focused”.'

I may be wrong, but I remember reading an EA profile report and seeing Leighton comment that it inspired OPIS's move toward working on the problem.

Michael Plant's cause profile on mental health seems like a plausible Cause X.

Wild-animal-suffering research seems like a plausible Cause X.

Founders Pledge cause report on climate change seems like a plausible Cause X.

I've always thought of "Cause X" as a theme for events like EAG, meant to prompt thinking in EA, and never as something intended to be taken seriously and literally in actual EA action. If it was intended to be that, I don't think it ever should have been, and I don't think it should be treated as such either. I don't see how it makes sense to anyone as a practical pursuit.

There have been some cause prioritization efforts that took 'Cause X' seriously. Yet given the presence of x-risk reduction in EA as a top priority, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That's because, by its nature, whether x-risk is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it. For prioritizers willing to work within the boundaries set by assuming x-risk reduction is the top moral priority, cause prioritization has focused on how actors should be working on x-risk reduction.

Since the question was reformulated as "Is x-risk reduction Cause X?", much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

In general, I've never thought it made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.

While they're disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.

It's taken for granted in EA conversations, but there are shared assumptions that go into this common perspective and distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into different kinds of extant movements that align better with their perspective, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose in which socially conscious community to spend our own time, is the different assumptions we make in trying to answer the question: 'What is Cause X?'

They're not brought to attention much, but there are sources outlining what the 'fundamental assumptions' of EA are (what are typically called 'EA values'), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes the following forms:

1. If one is confident one's current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. An example of this is the work of any EA-aligned organization permanently dedicated to one or more specific causes, and efforts to support such organizations.

2. If one is confident one's current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn't know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.

3. If one is confident the best available option one will identify is within the EA framework, but has little to no confidence in what that option will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.

As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

https://www.openphilanthropy.org/research/cause-reports

I don't see how it makes sense to anyone as a practical pursuit.

GiveWell & Open Phil have at times undertaken systematic reviews of plausible cause areas; their general framework for this seems quite practical.

That's because, by its nature, whether x-risk is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it.

Pretty strongly disagree with this. I think there's a strong case for x-risk being a priority cause area, but I don't think it dominates all other contenders. (More on this here.)

The concerns you raise in your linked post are actually the concerns a lot of the people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.

GiveWell's and Open Phil's work wasn't termed 'Cause X,' but I think a lot of the stuff you're pointing to would've started before 'Cause X' was a common term in EA. They definitely qualify. One thing to note is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations to do still holds up. It may be falsified in the near future, though. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:

  • institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
  • small, private non-profit organizations like Rethink Priorities.

Honestly, I am impressed and pleasantly surprised that organizations like Rethink Priorities can go from a small team to a growing organization in EA. Cause prioritization is such a niche cause, unique to EA, that I didn't know if there was hope for it to keep growing sustainably. So far, the growth of the field has proven sustainable. I hope it keeps up.

The Qualia Research Institute is a good generator of hypotheses for Cause X candidates. Here's a recent example (a).
