This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! 
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated. 

Epistemic status: outlining a take that I think is maybe 50% likely to be right. Also on my blog.

Some people have recently argued that, in order to persuade people to work on high-priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. For example:

  • Neel Nanda argues that if you believe the key claims "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime", this is enough to justify the core action-relevant points of EA.
  • AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their dying in the next 30 years) that the pitch for reducing this risk need not appeal to longtermism.

The generalised argument, which I’ll call “x-risk is high”, is fairly simple:

  • 1) X-risk this century is, or could very plausibly be, very high (>10%).
  • 2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.
  • 3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.

I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good, and that this is better for the world and for keeping x-risk low in the long run. Here are three counterpoints to only using “x-risk is high”:

Our situation could change

Trivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.

Our priorities could change

In the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs had decided to stop explaining the core principles of EA, and instead had made the following argument, “effective charities are effective”?

  • Effective charities are, or could very plausibly be, very effective.
  • Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.
  • The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.

This argument probably differs from “x-risk is high” in important respects, but it illustrates how the EA movement could have “locked in” its approach to doing good if it had made this argument. If we had started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.

Our priorities may change again, such that “x-risk is high” starts to look naïve. We can imagine some scenarios where this is the case: for example, we might learn that promoting economic growth, accelerating key technologies or ensuring global peace are more effective longtermist interventions than directly reducing x-risk. 

We lose what makes EA distinctive

(I’m least certain about this argument, to the point where I think it's more likely to be wrong than right, but #draftamnesty)

Other movements make similar arguments to “x-risk is high”. Extinction Rebellion, a climate activist group, regularly draws attention to the risks the planet faces as a result of climate change in order to motivate political activism. The group has had limited success (at most, by introducing more ambitious climate targets to the conversation) and has also attracted criticism for overstating the risks of climate change.

I think “x-risk is high” is much more robust than arguments claiming that climate change will imminently destroy life on Earth. But other people might not notice the difference. I worry that by (only) using “x-risk is high”, we risk being dismissed as alarmists. That dismissal would be unfair, and I’m sympathetic to the idea that both the EA movement and XR should sound the alarm because we are collectively failing to respond to all of these issues quickly enough. But if we don’t make a more robust case than “x-risk is high”, that criticism could become more potent.

Takeaway

Outlining the case for longtermism, and explaining how longtermism implies that x-risk should be a top priority even if x-risk is low, is a much more robust strategy:

  • If we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, it’s still clear that we should prioritise reducing x-risk.
  • If other actions beyond directly reducing x-risk become top priority, those convinced by the case for longtermism are more likely to pivot appropriately. Moreover, if longtermism itself proves to be less robust than we thought, those convinced by the core principles of EA are more likely to pivot appropriately too.
  • EA retains the intellectual rigour that has gotten us to where we are now, and that rigour is on display. I think this rigour is the reason we attract many smart people to high-priority paths (though I’m unsure of this).

My thanks to Lizka for running the Draft Amnesty Day and prompting me to share this draft.

Comments

I appreciate this post and think you make broadly reasonable arguments!

As the cited example of "screw longtermism", I feel I should say that my crux here is mostly that I think AI x-risk is just actually really important, and that, given this, longtermism is bad marketing and unnecessarily exclusionary.

It's exclusionary because it's a niche and specific philosophical position that has some pretty unsavoury conclusions, and is IMO incredibly paralysing and impractical if you AREN'T trying to minimise x-risk. I think that if framed right, "make sure AI does what we want, especially as it gets far more capable" is just an obviously correct thing to want, and I think the movement already has a major PR problem among natural allies (see, e.g., Timnit Gebru's Twitter) that this kind of thing exacerbates.

It's bad marketing because it's easily conflated with neglecting people alive today, Pascal's Mugging, naive utilitarianism, strong longtermism, etc. And I often see people mocking EA or AI Safety who point to the same obvious weakness of "if there's just a 0.0001% chance of it being really bad we should drop everything else to fix it". I think this is actually a pretty valid argument to mock!

And even for people who avoid that trap, it seems pretty patronising to me to frame "caring about future people" as an important and cruxy moral insight - in practice the distinguishing thing about EA is our empirical beliefs about killer robots!

I am admittedly also biased because I find most moral philosophy debates irritating, and think that EAs as a whole spend far too much time on them rather than actually doing things!

Thanks :) And thanks for your original piece!

There seems to be tension in your comment here. You're claiming both that longtermism is a niche and specific philosophical position and that it's patronising to point out to people.

Perhaps you're pointing to some hard trade-off? Like, if you make the full argument, it's paralysing and impractical, but if you just state the headline, it's obvious? That strikes me as a bit of a double-strawman - you can explain the idea in varying levels of depth depending on the context.

I don't think longtermism need be understood as a niche and specific philosophical position and discussion about longtermism doesn't need to engage in complex moral philosophy, but I agree that it's often framed this way (in the wrong contexts) and that this is bad for the reasons you point to. I think the first chapter of What We Owe the Future gets this balance right, and it's probably my favourite explanation of longtermism.

I disagree that most people already buy its core claim, which I think is more like "protecting the long-term future of humanity is extremely important and we're not doing it" and not just "we should care about future people". I think many people do "care" in the latter way but aren't sincerely engaging with the implications of that.

"...think that EAs as a whole spend far too much time on them rather than actually doing things!"

I agree with this!

Yeah, fair point, I'm conflating two things here. Firstly, strong longtermism/total utilitarianism, or the slightly weaker form of "the longterm future is overwhelmingly important, and mostly dominates short-term considerations", is what I'm calling the niche position. And "future people matter and we should not only care about people alive today" is the common-sense, patronising position. These are obviously very different things!

In practice, my perception of EA outreach is that it mostly falls into one of those buckets? But this may be me being uncharitable. WWOTF is definitely more nuanced than this, but I mostly just disagree with its message because I think it significantly underrates AI.

I do think that the position of "the longterm future matters a lot, but not overwhelmingly, but is significantly underrated/under invested in today" is reasonable and correct and falls in neither of those extremes. And I would be pro most of society agreeing with it! I just think that the main way that seems likely to robustly affect the longterm future is x risk reduction, and that the risk is high enough that this straightforwardly makes sense from common sense morality.

All makes sense, I agree it's usually one of those two things and that the wrong one is sometimes used.

Yeah, I think that last sentence is where we disagree. I think it's a reasonable view that I'd respond to with something like my "our situation could change" or "our priorities could change". But I'm glad not everyone is taking the same approach and think we should make both of these (complementary) cases :)

Thanks for engaging! 

"I am admittedly also biased because I find most moral philosophy debates irritating, and think that EAs as a whole spend far too much time on them rather than actually doing things!"

I'd say the biggest red flag for moral philosophy is that it still uses intuition both as a hypothesis generator and as reliable evidence, when it's basically worthless for deciding which conclusions to accept. Yet that's the state moral philosophy is in. It's akin to the pre-science era of knowledge.

That's why it's so irritating.

So I can draw two conclusions from that:

  1. Mind-independent facts about morality are not real, in the same vein as identity is not real (controversially, consciousness probably is this).

  2. There is a reality, but moral philosophy needs to be improved.

And I do think it's valuable for EA to do this, if only to see whether there is a reality at the end of it all.

Congrats on posting your draft!

Ultimately I agree with: "x-risk is high", "the long-term is overwhelmingly important", and "we should use reason and evidence to decide what is most important and then do it", so what I choose to emphasize to people in my messaging is a strategic consideration about what I think will have the best effects, including convincingness. (I think you agree.)

One reason why the EA community seems to spend so much energy on the EA-principles thing is that we've figured out that it's a good meme. It's well-exercised. Whereas the "x-risk is high" message is less well validated. I would also share your concern that it would turn people off. But maybe it would be good? I think we should probably try more on the margin!

I do think "the long-term is overwhelmingly important" is probably over-emphasized in messaging. Maybe it's important for more academic discussions of cause prioritization, but I'd be surprised if it deserved to be as front-and-center as it is.

Thanks!

Yep, agree with the first paragraph. I do think a good counterargument to this post is "but let's try it out! If it is effective, that might make community-building much more straightforward".

I'm unsure about the prevalence of "the long-term is overwhelmingly important". On the one hand, it might be unnecessary but, on the other, this feels like one of the most important ideas I've come across in my life! 

I'm curious if anyone has tried experimentally evaluating what messaging works. It seems like this new lab in NYC will be doing just this sort of work, so I'll be following along with them: https://www.eapsychology.org/

Broadly agree; nitpick follows.

"I think that outlining the case for longtermism (and EA principles more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good, and that this is better for the world and for keeping x-risk low in the long run."

I'm persuaded of all this, except for the “better for the world” part, which I'm not sure about and which I think you didn't argue for. That is, you've persuasively argued that emphasising the process over the conclusions has benefits for community epistemics and long-term community health; but this does trade off against other metrics one might have, like the growth/capacity of individual x-risk-related fields, and you don't comment about the current margin.

For example, if you adopt David Nash's lens of EA as an incubator and coordinator of other communities,  “it's possible that by focusing on EA as a whole rather than specific causes, we are holding back the growth of these fields.”

The low-fidelity message “holy shit, x-risk” may be an appropriate pitch for some situations, given that people have limited attention, and 'getting people into EA per se' is not what we directly care about. For example, among mid-career people with relevant skills, or other people who we expect to be more collaborators with EA than participants.

The high-fidelity message-sequence “EA → Longtermism → x-risk”, as a more complicated idea, is more suited to building the cause prioritisation community, the meta-community that co-ordinates other communities. For example, when fishing for future highly-engaged EAs in universities.

This still leaves open the question of which one of these should be the visible outer layer of EA that people encounter first in the media etc., and on that I think the current margin (which emphasises longtermism over x-risk) is OK. But my takeaway from David Nash's post is that we should make sure to maintain pathways within EA — even 'deep within', e.g. at conferences — that provide value and action-relevance for people who aren't going to consider themselves EA, but who will go on to be informed and affected by it for a long time (that's as opposed to having the implicit endpoint be "do direct work for an EA org"). If these people know they can find each other here in EA, that's also good for the community's breadth of knowledge.

Thanks! Yes, I'm sympathetic to the idea that I'm anchoring too hard on EA growth being strongly correlated with more good being done in the world, which might be wrong. Also agree that we should test out and welcome people who are convinced by some messages but not others.

I agree pretty strongly with this. I think it especially matters since, in my cause prio, the case for working on AI x-risk is much stronger than for other sources of x-risk, even if the level of x-risk they posed were the same, because I'm not convinced that the expected value of the future conditional on avoiding bio and nuclear x-risk is positive. More generally, I think the things it's worth focusing on from a longtermist perspective, compared to just a "dying is bad" perspective, can look different within cause areas, especially AI. For instance, I think it makes governance work and avoiding multi-agent failures look much more important.

Huh, this is an interesting angle! Thanks :)