[ Question ]

What are the key ongoing debates in EA?

by richard_ngo · 1 min read · 8th Mar 2020 · 53 comments



Specifically, I'm interested in cases where people who are heavily involved in effective altruism both disagree about a question, and also currently put non-negligible effort into debating the issue.

One example would be the recent EA forum post Growth and the case against randomista development.

Anecdotal or non-public examples welcome.


12 Answers

I'm excited to read any list you come up with at the end of this!

Some I thought of:

  • How likely is it that we're living at the most influential time in history?
  • What is the total x-risk this century?
  • Are we saving/investing enough for the future?
  • How much less of an x-risk is AI if there is no "fast takeoff", or if the paperclip scenario is very unlikely? And how unlikely are those things? [Summing up the question: how much should we update on the risk from AI given that some people have updated away from Bostrom-style scenarios?]
  • How important are s-risks? Should we place more emphasis on reducing suffering than on creating happiness?
  • Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?
  • Should EA stay as a "big tent" or split up into different movements?
  • How much should EA be trying to grow?
  • Does EA pay enough attention to climate change?

Should EA be large and welcoming or small and weird? Related: How important is it for EAs to follow regular social norms? How important is diversity and inclusion in the EA community?

To what extent should EA get involved in politics or push for political change?

Normative ethics, especially population ethics, as well as the case for longtermism (which is somewhere between normative and applied ethics, I guess). Even the Global Priorities Institute has research defending asymmetries and against longtermism. Also, hedonism vs preference satisfaction or other values, and the complexity of value.

Consciousness and philosophy of mind, for example on functionalism/computationalism and higher-order theories. This could have important implications for nonhuman animals and artificial sentience. I'm not sure how much debate there is these days, though.

Among long-termist EAs, I think there's a lot of healthy disagreement about the value loading (what utilitarianism.net calls "theories of welfare") within utilitarianism. I.e., should we aim to maximize positive sentient experiences, should we aim to minimize negative sentient experiences, or should we focus on complexity of value and assume that the value loading may be very complicated and/or include things like justice, honor, nature, etc.?

My impression is that the Oxford crowd (like Will MacAskill and the FHI people) is the most gung-ho about the total view and the simplicity of saying "pleasure good, suffering bad." It helps that past thinkers with this normative position have a solid track record.

I think Brian Tomasik has a lot of followers in continental Europe, and a reasonable fraction of them are in the negative(-leaning) crowd. Their pitch is something like "in most normal non-convoluted circumstances, no amount of pleasure or other positive moral goods can justify a single instance of truly extreme suffering."

My vague understanding is that Bay Area rationalist EAs (especially people in the MIRI camp) generally believe strongly in the complexity of value. A simple version of their pitch might be something like "if you could push a pleasure button to wirehead yourself forever, would you do it? If not, why are you so confident about it being the right recourse for humanity?"

Of the three views, I get the impression that the "Oxford view" gets presented the most for various reasons, including that they are the best at PR, especially in English speaking countries.

In general, a lot of EAs in all three camps believe something like "morality is hard, man, and we should try to avoid locking in any definitive normative results until after the singularity." This may also entail a period of time (maybe thousands of years) on Earth to think through things, possibly with the help of AGI or other technologies, before we commit to spreading throughout the stars.

I broadly agree with this stance, though I suspect the reflection will mostly be used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism rather than to discover (or select) some majorly different normative theory.

Whether avoiding *extreme suffering* such as cluster headaches, migraines, kidney stones, CRPS, etc. is an important, tractable, and neglected cause. I personally think that due to the long-tails of pleasure and pain, and how cheap the interventions would be, focusing our efforts on e.g. enabling cluster headaches sufferers to access DMT would prevent *astronomical amounts of suffering* at extremely low costs.

The key bottleneck here might be people's ignorance of just *how bad* these kinds of suffering are. I recommend reading the "long-tails of pleasure and pain" article linked above to get a sense of why this is a reasonable interpretation of the situation.

Whether we're living at the most influential time in history, and associated issues (such as the probability of an existential catastrophe this century).

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

I've had a few arguments about the 'worm wars': whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis and about avoiding side effects ('double effect' in particular), and not just for the usual PR or future-credibility reasons.

"Assuming longtermism, are "broad" or "narrow" approaches to improving the value of the long-term future more promising?"

This is mostly just a broadening of one of Arden's suggestions: "Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?" Not sure how widely debated this still is, but examples include 1, 2, and 3.

Partly relatedly, I find Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy" a really helpful resource for keeping track of the most important evidence and arguments on important questions, and I've wondered whether a comparable resource would be helpful for the effective altruism community more widely.

One such debate is how (un)important doing "AI safety" now is. See, for example, Lukas Gloor's Altruists Should Prioritize Artificial Intelligence, from the Center on Long-Term Risk (previously known as the Foundational Research Institute), and Magnus Vinding's "point-by-point critique" of Gloor's essay, Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique.

My fundamental disagreement with the EA community is on the importance of basic education (high-school equivalent in the USA).