Specifically, I'm interested in cases where people who are heavily involved in effective altruism both disagree about a question, and also currently put non-negligible effort into debating the issue.

One example would be the recent EA Forum post "Growth and the case against randomista development".

Anecdotal or non-public examples welcome.


13 Answers

I'm excited to read any list you come up with at the end of this!

Some I thought of:

  • How likely is it that we're living at the most influential time in history?
  • What is the total x-risk this century?
  • Are we saving/investing enough for the future?
  • How much less of an x-risk is AI if there is no "fast takeoff"? If the paperclip scenario is super unlikely? And how unlikely are those things? [To sum up the question: how much should we update on the risk from AI, given that some people have updated away from Bostrom-style scenarios?]
  • How important are s-risks? Should we place more emphasis on reducing suffering than on creating happiness?
  • Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?
  • Should EA stay as a "big tent" or split up into different movements?
  • How much should EA be trying to grow?
  • Does EA pay enough attention to climate change?

Thanks for the list! As a follow-up, I'll try to list, for each entry, places online where such debates have occurred:

1. https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1

2. Toby Ord has estimates in The Precipice. I assume most discussion happens around specific risks.

3. Lots of discussion on this; a summary is here: https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary. More recently, see https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history

4. Best discussion of this is probably here: https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like

5. Most stuff on https://longtermrisk.org/ addresses s-risks. In terms of pushback, Carl Shulman wrote http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html and Toby Ord wrote http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ (although I don't find either compelling). Also a lot of Simon Knutsson's stuff, e.g. https://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

6a. https://forum...

Re: 9 - I wrote this back in April 2019. There have been more recent comments from Will in his AMA, and Toby in this EA Global talk (link with timestamp).

Should EA be large and welcoming or small and weird? Related: How important is it for EAs to follow regular social norms? How important are diversity and inclusion in the EA community?

To what extent should EA get involved in politics or push for political change?

I just want to note that, in principle, both large & weird and small & welcoming movements are possible. The 60s counterculture was a large & weird movement; the Quakers are a small & welcoming one. (If you want to be small & welcoming, I guess it helps not to advertise yourself very much.)

I think you are right that there's a debate around whether EA should be sanitized for a mass audience (by not betting on pandemics, or whatever). But, e.g., this post mentions that caution around growth could be good because growth is hard to reverse; I don't see anyone advocating for weirdness.

Evan_Gaensbauer
4y
Whether effective altruism should be sanitized seems like an issue separate from how big the movement can or should grow. I'm also not sure questions of sanitization should be reduced to a binary of either doing weird things openly or not doing them at all. That framing ignores the possibility that something can be changed to be less 'weird', as has been done with AI alignment or, to a lesser extent, wild animal welfare. Someone could figure out how to make it so that betting on pandemics or whatever can be done without it becoming a liability for the reputation of effective altruism.

Expanding on those points:

  • Should EA be small and elite (i.e. aiming to influence important/powerful actors) or broad and welcoming?
  • How many people should earn to give, and how effective is this on the margin? (Maybe not a huge debate, but a lot of uncertainty.)
  • How much, if at all, should we grow EA in non-Western countries? (I think there's a fair deal of ignorance on this topic overall.)

Related to D&I: How important is academic diversity in EA? And what blindspots does the EA movement have as a result?

I don't think all of these have always been publicly discussed, but there is definitely a lack of consensus and a range of differing views.

Will Bradshaw
4y
What does "academic diversity" mean? I could imagine a few possible interpretations.
Vaidehi Agarwalla
4y
Getting people from non-STEM backgrounds, specifically non-econ social sciences and humanities.
Gavin
4y
I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (: (Speaking as a philosophy+economics grad and a sort-of computer scientist.)
Will Bradshaw
4y
I think there's quite a lot of diversity in what people in EA did in undergrad / grad school. There are plenty of medics and a small but nontrivial number of biologists around, for example. What they wish they'd done at university, or what they're studying now, might be another matter.

Along the same lines of community health and movement growth: in what situations should individuals censor their own views, or expect to be censored by someone else (e.g. a Forum moderator or Facebook group admin)?

Among long-termist EAs, I think there's a lot of healthy disagreement about the value-loading (what utilitarianism.net calls "theories of welfare") within utilitarianism. I.e., should we aim to maximize positive sentient experiences, should we aim to minimize negative sentient experiences, or should we focus on the complexity of value and assume that the value-loading may be very complicated and/or include things like justice, honor, nature, etc.?

My impression is that the Oxford crowd (like Will MacAskill and the FHI people) are the most gung-ho about the total view and the simplicity needed to say "pleasure good, suffering bad". It helps that past thinkers with this normative position have a solid track record.

I think Brian Tomasik has a lot of followers in continental Europe, and a reasonable fraction of them are in the negative(-leaning) crowd. Their pitch is something like "in most normal non-convoluted circumstances, no amount of pleasure or other positive moral goods can justify a single instance of truly extreme suffering."

My vague understanding is that Bay Area rationalist EAs (especially people in the MIRI camp) generally believe strongly in the complexity of value. A simple version of their pitch might be something like: "If you could push a pleasure button to wirehead yourself forever, would you do it? If not, why are you so confident that it's the right course for humanity?"

Of the three views, I get the impression that the "Oxford view" gets presented the most for various reasons, including that they are the best at PR, especially in English speaking countries.

In general, a lot of EAs in all three camps believe something like "morality is hard, man, and we should try to avoid locking in any definitive normative results until after the singularity." This may also entail a period of time (maybe thousands of years) on Earth to think things through, possibly with the help of AGI or other technologies, before we commit to spreading throughout the stars.

I broadly agree with this stance, though I suspect the reflection is mostly going to be used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism, rather than to discover (or select) some majorly different normative theory.

"I suspect the reflection is mostly going to be used by our better and wiser selves to settle details/nuances within total (mostly hedonic) utilitarianism, rather than to discover (or select) some majorly different normative theory."

Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.

My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.

Normative ethics, especially population ethics, as well as the case for longtermism (which is somewhere between normative and applied ethics, I guess). Even the Global Priorities Institute has research defending asymmetries and arguing against longtermism. Also: hedonism vs. preference satisfaction or other values, and the complexity of value.

Consciousness and philosophy of mind, for example on functionalism/computationalism and higher-order theories. This could have important implications for nonhuman animals and artificial sentience. I'm not sure how much debate there is these days, though.

You mention you're not sure how much debate there is around consciousness these days. Surprisingly, I'd say the same is increasingly true of normative ethics.

There's still a lot of disagreement about value systems, but most people seem to have stopped having that particular argument, at least as regards total vs negative utilitarianism (which I'd say was the biggest such debate going on a few years ago).

Whether avoiding *extreme suffering*, such as cluster headaches, migraines, kidney stones, CRPS, etc., is an important, tractable, and neglected cause. I personally think that, due to the long tails of pleasure and pain and how cheap the interventions would be, focusing our efforts on e.g. enabling cluster headache sufferers to access DMT would prevent *astronomical amounts of suffering* at extremely low cost.

The key bottleneck here might be people's ignorance of just *how bad* these kinds of suffering are. I recommend reading the "long-tails of pleasure and pain" article linked above to get a sense of why this is a reasonable interpretation of the situation.

Whether we're living at the most influential time in history, and associated issues (such as the probability of an existential catastrophe this century).

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

Could you spell out what this means? I'd guess that most people (myself included) aren't familiar with ossification and organizational ontologies.

Will Bradshaw
4y
I suspect this may be evidence in itself that this is not currently a key ongoing debate in EA.
RomeoStevens
4y
Ah, key = popular; I guess I can simplify my vocabulary. I'm being somewhat snarky here, but afaict it satisfies the criterion that significant effort has gone into debating this.

I've had a few arguments about the 'worm wars': whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis and about avoiding side effects ('double effect' in particular), and not just for the usual PR or future-credibility reasons.

What's the new evidence? I haven't been keeping up with the worm wars since 2017. Is there more conclusive data or studies since?

I looked into worms a bunch for the WASH post I recently made. Miguel and Kremer's study has a currently unpublished 15-year follow-up which, according to GiveWell, has similar results to the 10-year follow-up. Other than that, the evidence of the last couple of years (including a new meta-study from September 2019 by Taylor-Robinson et al.) has continued to point towards there being almost no effect of deworming on weight, height, cognition, school performance, or mortality. This hasn't really caused anyone to update, because it's the same picture as in 2016/17. My WASH piece had almost no response, which might suggest that people just aren't too bothered by worms any more, though it could equally be something unrelated, like style.

I think there's a reasonable case to be made that discussion of, and interest in, worms is dropping, though, as people for whom the "low probability of a big success" reasoning is convincing seem likely either to be long-termists or to have updated towards growth-based interventions.

Gavin
4y
Not sure. 2017 fits the beginning of the discussion though.
Linch
4y
I thought most of the fights around the worm wars were in 2015 [1]? I really haven't been following. [1] https://chrisblattman.com/2015/07/24/the-10-things-i-learned-in-the-trenches-of-the-worm-wars/

One such debate is how (un)important doing "AI safety" now is. See, for example, Altruists Should Prioritize Artificial Intelligence by Lukas Gloor of the Center on Long-Term Risk (previously known as the Foundational Research Institute), and Magnus Vinding's point-by-point critique of Gloor's essay, Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique.

"Assuming longtermism, are "broad" or "narrow" approaches to improving the value of the long-term future more promising?"

This is mostly just a broadening of one of Arden's suggestions: "Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?" I'm not sure how widely debated this still is, but examples include 1, 2, and 3.

Partly relatedly, I find Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy" a really helpful resource for keeping track of the most important evidence and arguments on important questions, and I've wondered whether a comparable resource would be helpful for the effective altruism community more widely.

I think the answers here would be better if they were split up into points. That way we could vote on each separately and the best would come to the top.

(I don't think this is considered a debate by most people - my read is that less than 5% of people involved with EA consider psychedelics a plausible EA cause area, possibly less than 1%)

John_Maxwell
4y
"View X is a rare/unusual view, and therefore it's not a debate." That seems a little... condescending or something? How are we ever supposed to learn anything new if we don't debate rare/unusual views?

I simultaneously have some sympathy for this view and think that people responding to this question by pushing their pet cause areas aren't engaging well with the question as I understand it.

For example, I think that anti-ageing research is probably significantly underrated by EAs in general, and I would happily push for it in a question like "what cause areas are underrated by EAs?", but I would not (and did not) reference it here as a "key ongoing debate in EA", because I recognise that many people who aren't already convinced wouldn't consider it such.

So one criterion I might use would be whether disputants on both sides would consider the debate to be key.

I also agree with point (2) of Khorton's response to this.

Thinking about this more, I suspect a lot of people would agree that some more general statement, like "What important cause areas is EA missing out on?" is a key ongoing debate, while being sceptical about most specific claimants to that status (because if most people weren't sceptical, EA wouldn't be missing out on that cause area).

Kirsten
4y
I think this is two different things: 1. Yes, I was being a bit condescending, sorry. 2. I wasn't trying to say what should be a debate; I was trying to lend accuracy to the discussion of what is a key debate in the EA community.
John_Maxwell
4y
Apology accepted, thanks. I agree on point 2.
Will Bradshaw
4y
I definitely don't think it would generally be considered a key debate.
Milan_Griffes
4y
I think it's closely related to key theoretical debates, e.g. Romeo's answer and Khorton's answer on this thread.
Milan_Griffes
4y
fwiw my read on that is ~15-35%, but we run in different circles
Buck
4y

I'm interested in betting on whether 20% of EAs think psychedelics are a plausible top EA cause area. E.g. we could sample 20 EAs from some group and ask them. Perhaps we could ask random attendees from last year's EAG, or we could do a poll in EA Hangout.

We may need to operationalize "top EA cause area" more precisely, but I would concur with Buck / also bet at money odds that <20% of a reasonable random sample of EAs would answer a question like "in 2025, will psychedelics normalization be a top 5 priority for EAs?" in the affirmative.

Milan_Griffes
4y
Happy to make a bet here – let's figure out an operationalization that would satisfy all parties!

fwiw, 21.5% of 2019 EA Survey respondents thought Mental Health should be "top or near top priority" and 58.5% thought it should receive "at least significant resources". I'm sure we can quibble about how the "Mental Health" category should map to "Psychedelics", though it seems clear that psychedelics are one of the most promising developments in mental health in the last few decades (breakthrough therapy designation from the FDA and all that).

If we assume half of the above considered psychedelics to be in the mental health bucket, then 10.75% of 2019 respondents thought psychedelics should be "top or near top priority" and 29.25% thought psychedelics should receive "at least significant" EA resources. (And so I'd win the bet under that operationalization, though I suppose we'd also have to quibble over how "receive at least significant resources" maps to "plausible top EA cause area"...)
"I'm sure we can quibble about how the 'Mental Health' category should map to 'Psychedelics', though it seems clear that psychedelics are one of the most promising developments in mental health in the last few decades (breakthrough therapy designation from the FDA and all that)."
"If we assume half of the above considered psychedelics to be in the mental health bucket ..."

This does not seem like a quibble to me at all. It seems 'clear' to you, but this is by no means the case for most people. I would happily bet that well under half of those people were thinking of psychedelics when they said mental health.

Milan_Griffes
4y
Even if we assume that only 25% of Mental Health supporters were thinking of psychedelics, that's still ~15% of survey respondents saying that psychedelics should receive "at least significant" EA resources: 0.585 × 0.25 ≈ 0.15. [Edited to correct double-counting.]

Honestly, I would assume less; I voted for Mental Health thinking of StrongMinds.

alex lawsen (previously alexrjl)
4y
Ditto to both parts of this
riceissa
4y
I don't think you can add the percentages for "top or near top priority" and "at least significant resources". If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double-counting some people. Looking at the bar graph above the table, it looks like "at least significant resources" includes everyone in "significant resources", "near-top priority", and "top priority". For mental health, it looks like "significant resources" has 37%, and "near-top priority" and "top priority" combined have 21.5% (shown as 22% in the bar graph). So your actual calculation would just be 0.585 × 0.25, which is about 15%.
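To make the corrected arithmetic explicit, here it is as a worked equation using the figures quoted above (the 25% is Milan's assumed fraction of Mental Health supporters who had psychedelics in mind, not a survey figure):

\[
\underbrace{37\%}_{\text{significant}} + \underbrace{21.5\%}_{\text{near-top or top}} = \underbrace{58.5\%}_{\text{at least significant (nested, not additive)}}
\]
\[
0.585 \times 0.25 \approx 0.146 \approx 15\%
\]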
Milan_Griffes
4y
Good point, thanks. I've edited my comment to correct the double-counting.
Milan_Griffes
4y
Fair enough. It seems clear to me because most mental health professionals I've encountered in the last ~2 years agree that psychedelics are the most innovative thing coming into mainstream Western mental health since SSRIs came online in the 1990s. There's an obvious sampling bias here, but I've seen this from many people who are personally skeptical or uncertain about psychedelics and still agree that the early trials are extremely promising, not just from enthusiasts. You can also see it in the media coverage – there's a lot of positive press about the psychedelic renaissance and some voices of caution too, but basically no negative press. (And the voices of caution are mostly saying "this is a very powerful thing that needs to be managed carefully.")
Liam_Donovan
4y
I'd like to take Buck's side of the bet as well if you're willing to bet more

My fundamental disagreement with the EA community is on the importance of basic education (the high-school equivalent in the USA).

1 Comment

I think quite a few people here are interpreting this question as either

"What is the issue about which I personally disagree with what I perceive to be EA orthodoxy?"

or

"What seemingly-EA-relevant issues am I personally most confused/uncertain about?"

Either could be a good question to answer, but not necessarily here (though the second seems like a better substitution than the first).