Cross-posted from my blog.
Contrary to my carefully crafted brand as a weak nerd, I go to a local CrossFit gym a few times a week. Every year, the gym raises funds for a scholarship for teens from lower-income families to attend their summer camp program. I don’t know how many CrossFit-interested low-income teens there are in my small town, but I’ll guess there are perhaps 2 of them who would benefit from the scholarship. After all, CrossFit is pretty niche, and the town is small.
Helping youngsters get swole in the Pacific Northwest is not exactly as cost-effective as preventing malaria in Malawi. But I notice I feel drawn to supporting the scholarship anyway. Every time it pops in my head I think, “My money could fully solve this problem”. The camp only costs a few hundred dollars per kid and if there are just 2 kids who need support, I could give $500 and there would no longer be teenagers in my town who want to go to a CrossFit summer camp but can’t. Thanks to me, the hero, this problem would be entirely solved. 100%.
That is not how most nonprofit work feels to me.
You are only ever making small dents in important problems
I want to work on big problems. Global poverty. Malaria. Everyone not suddenly dying. But if I’m honest, what I really want is to solve those problems. Me, personally, solve them. This is a continued source of frustration and sadness because I absolutely cannot solve those problems.
Consider what else my $500 CrossFit scholarship might do:
* I want to save lives, and USAID suddenly stops giving $7 billion a year to PEPFAR. So I give $500 to the Rapid Response Fund. My donation solves about 0.000007% of the problem and I feel like I have failed.
* I want to solve climate change, and getting to net zero will require stopping or removing emissions of 1,500 billion tons of carbon dioxide. I give $500 to a policy nonprofit that reduces emissions, in expectation, by 50 tons. My donation solves 0.000000003% of the problem and I feel like I have failed.
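For anyone who wants to see the arithmetic, here is a minimal back-of-the-envelope check of those two percentages, using only the dollar and tonnage figures quoted above:

```python
# Rough sanity check of the two fractions quoted above.
donation = 500  # dollars

pepfar_annual_gap = 7e9  # dollars per year that USAID no longer gives to PEPFAR
pepfar_share = donation / pepfar_annual_gap
print(f"{pepfar_share:.1e} -> roughly {pepfar_share * 100:.6f}% of one year's gap")

net_zero_gap = 1_500e9   # tons of CO2 emissions to stop or remove
expected_reduction = 50  # tons of CO2 reduced in expectation by the donation
climate_share = expected_reduction / net_zero_gap
print(f"{climate_share:.1e} -> roughly {climate_share * 100:.9f}% of the remaining problem")
```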
Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics
Over the last ~6 months I've noticed a general shift amongst EA orgs to frame their focus on risks from AI, bio, nukes, etc. less in terms of the logic of longtermism and more in terms of Global Catastrophic Risks (GCRs) directly. Some data points on this:
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if only thinking about people alive today.
Yet, I can't help but feel something is off about this framing. Some concerns (no particular ordering):
More meta points
The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".
Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals / global health"?
To my mind, this is the crux, because:
[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate / work on a topic for non-impartial/ non-EA reasons]
Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even only considering impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.
I think people have a cached intuition that "global health is most cost-effective on near-term timescales" but what's really happened is that "a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly-developed evidence base." Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms.
(I should caveat that we haven't yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)
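To illustrate the shape of such a BOTEC, here is a sketch over a 10-year horizon; every number below is an invented placeholder for the sake of the example, not one of EIP's actual estimates:

```python
# Illustrative expected-value comparison over a 10-year horizon.
# All parameters are invented for this sketch; none come from EIP's BOTECs.

budget = 1_000_000  # dollars

# Global health benchmark: assume roughly $5,000 per life saved (bednet-style).
cost_per_life = 5_000
lives_saved_global_health = budget / cost_per_life  # 200 expected lives

# Hypothetical GCR grant: assume the $1M reduces the probability of a
# catastrophe killing 1 billion people within 10 years by one in a million.
risk_reduction = 1e-6
lives_saved_gcr = risk_reduction * 1e9  # 1,000 expected lives

print(lives_saved_global_health, lives_saved_gcr)
# The GCR grant wins on expected value here, but only because of a
# risk-reduction estimate with very wide uncertainty bands -- which is
# exactly the judgment call described above.
```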
Speaking personally, I have also perceived a move away from longtermism, and as someone who finds longtermism very compelling, this has been disappointing to see. I agree it has substantive implications for what we prioritise.
Speaking more on behalf of GWWC, where I am a researcher: our motivation for changing our cause area from “creating a better future” to “reducing global catastrophic risks” really was not based on PR. As shared here:
Essentially, we’re aiming to use the term “reducing global catastrophic risks” as a kind of superset that includes reducing existential risk, and that is inclusive of all the potential motivations. For example, when looking for recommendations in this area, we would be happy to include recommendations that only make sense from a longtermist perspective. A large part of the motivation for this was based on finding some of the arguments made in several of the posts you linked (including “EA and Longtermism: not a crux for saving the world”) compelling.
Also, our decision to step down from managing the communications for the Longtermism Fund (now “Emerging Challenges Fund”) was based on wanting to be able to more independently evaluate Longview’s grantmaking, rather than brand association.
Great post, Tom, thanks for writing!
One thought is that a GCR framing isn't the only alternative to longtermism. We could also talk about caring for future generations.
This has fewer of the problems you point out (e.g. differentiates between recoverable global catastrophes and existential catastrophes). To me, it has warm, positive associations. And it's pluralistic, connected to indigenous worldviews and environmentalist rhetoric.
Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention, and think they’re worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:
This isn’t mostly a PR thing for me. Like I mentioned in the post, I actually drafted and shared an earlier version of that post in summer 2022 (though I didn’t decide to publish it for quite a while), which I think is evidence against it being mostly a PR thing. I think the post pretty accurately captures my reasoning at the time, that I think often people doing this outreach work on the ground were actually focused on GCRs or AI risk and trying to get others to engage on that and it felt like they were ending up using terms that pointed less well at what they were interested in for path-dependent reasons. Further updates towards shorter AI timelines moved me substantially in terms of the amount I favor the term “GCR” over “longtermism”, since I think it increases the degree to which a lot of people mostly want to engage people about GCRs or AI risk in particular.
One point that hasn't been mentioned: GCRs may be many, many orders of magnitude more likely than extinction. For example, it's not hard to imagine a super deadly virus that kills 50% of the world's population, but a virus that manages to kill literally everyone, including people hiding out in bunkers, remote villages, and in Antarctica, doesn't make too much sense: if it were that lethal, it would probably burn out before reaching everyone.
The relevant comparison in this context is not with human extinction but with an existential catastrophe. A virus that killed everyone except humans in extremely remote locations might well destroy humanity’s long-term potential. It is not plausible—at least not for the reasons provided—that "GCRs may be many, many orders of magnitude more likely than" existential catastrophes, on reasonable interpretations of "many, many".
(Separately, the catastrophe may involve a process that intelligently optimizes for human extinction, by either humans or non-human agents, so I also think that the claim as stated is false.)
How?
I see it delaying things while the numbers recover, but it's not like humans will suddenly become unable to learn to read. Why would humanity not simply pick itself up and recover?
Two straightforward ways (more have been discussed in the relevant literature) are by making humanity more vulnerable to other threats and by pushing back humanity past the Great Filter (about whose location we should be pretty uncertain).
This is very vague. What other threats? It seems like a virus wiping out most of humanity would decrease the likelihood of other threats. It would put an end to climate change, reduce the motivation for nuclear attacks and ability to maintain a nuclear arsenal, reduce the likelihood of people developing AGI, etc.
Humanity’s chances of realizing its potential are substantially lower when there are only a few thousand humans around, because the species will remain vulnerable for a considerable time before it fully recovers. The relevant question is not whether the most severe current risks will be as serious in this scenario, because (1) other risks will then be much more pressing and (2) what matters is not the risk survivors of such a catastrophe face at any given time, but the cumulative risk to which the species is exposed until it bounces back.
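To make point (2) concrete, here is a toy compounding calculation with made-up numbers (not drawn from any published estimate):

```python
# Toy model of cumulative risk during a slow recovery (illustrative numbers only).
per_century_risk = 0.02   # assumed 2% chance per century of a terminal setback
recovery_centuries = 100  # assumed 10,000-year recovery to pre-catastrophe capability

survival_prob = (1 - per_century_risk) ** recovery_centuries
print(f"Probability of getting through the recovery period: {survival_prob:.2f}")
# ~0.13 -- a risk that looks small at any single moment becomes large once the
# species stays vulnerable for the whole recovery period.
```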
It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this even though it seems a clearly empirically tractable question. Rethink Priorities has conducted some work in this vein (referenced by Will MacAskill here), but this work, and other private work we've completed, wasn't designed to address this question directly. I don't think the answer is very clear a priori. There are lots of competing considerations and anecdotally, when we have tested things for different orgs, the results are often surprising. Things are even more complicated when you consider how different approaches might land with different groups, as you mention.
We are seeking funding to conduct work which would actually investigate this question (here), as well as to do broader work on EA/longtermist message testing, and broader work assessing public attitudes towards EA/longtermism (which I don't have linkable applications for).
I think this kind of research is also valuable even if one is very sceptical of optimising PR. Even if you don't want to maximise persuasiveness, it's still important to understand how different groups are understanding (or misunderstanding) your message.
I think reducing GCRs seems pretty likely to wildly outcompete other traditional approaches[1] if we use a slightly broad notion of current generations (e.g. currently existing people), due to the potential for a techno-utopian world making the lives of currently existing people >1,000x better (which heavily depends on diminishing returns and other considerations). E.g., immortality, making them wildly smarter, able to run many copies in parallel, experience insanely good experiences, etc. I don't think BOTECs will be a crux for this unless we start discounting things rather sharply.
IMO, the main axis of variation for EA related cause prio is "how far down the crazy train do we go" not "person affecting (current generations) vs otherwise" (though views like person affecting ethics might be downstream of crazy train stops).
Idk what I think about Longtermism --> GCR, but I do think that we shouldn't lose "the future might be totally insane" and "this might be the most important century in some longer view". And I could imagine focus on GCR killing a broader view of history.
That said, if we literally just care about experiences which are somewhat continuous with current experiences, it's plausible that speeding up AI outcompetes reducing GCRs/AI risk. And it's plausible that there are more crazy-sounding interventions which look even better (e.g. extremely low cost cryonics). Minimally, the overall situation gets dominated by "have people survive until techno utopia and ensure that techno utopia happens". And the relative tradeoffs between having people survive until techno utopia and ensuring that techno utopia happens seem unclear and will depend on some more complicated moral view. Minimally, animal suffering looks relatively worse to focus on.
Meta: this should not have been a quick take, but a post (references, structure, tldr, epistemic status, ...)
This sounds like an accusation, when it could so easily have been a compliment. The net effect of comments like this is fewer posts and fewer quick takes.
I actually meant it as a compliment, thanks for pointing out that it can be received differently. I liked this "quick take" and believe it would have been a high-quality post.
I was not aware that my comment would reduce the number of quick takes and posts, but I feel deleting my comment now just because of the downvotes would also be weird. So, if anyone reads this and felt discouraged by the above, I hope you post your things somewhere rather than not at all.
Yeah, that's fair. I wrote this somewhat off the cuff, but since it got more engagement than I expected, I'd make it a full post if I wrote it again.
I've upvoted this comment, but weakly disagree that there's such a shift happening (EVF orgs still seem to be selecting pretty heavily for longtermist projects, the global health and development fund has been discontinued while the LTFF is still around, etc.), and quite strongly disagree that it would be bad if it is:
That 'if' clause is doing a huge amount of work here. In practice I think the EA community is far too sanguine about our prospects post-civilisational collapse of becoming interstellar (which, from a longtermist perspective, is what matters - not 'recovery'). I've written a sequence on this here, and have a calculator which allows you to easily explore the simple model's implications on your beliefs described in post 3 here, with an implementation of the more complex model available on the repo. As Titotal wrote in another reply, it's easy to believe 'lesser' catastrophes are many times more likely, so could very well be where the main expected loss of value lies.
I think I agree with this, but draw a different conclusion. Longtermist work has focused heavily on existential risk, and in practice the risk of extinction, IMO seriously dropping the ball on trajectory changes with little more justification than that the latter are hard to think about. As a consequence they've ignored what seems to me the very real loss of expected unit-value from lesser catastrophes, and the to-me-plausible increase in it from interventions designed to make people's lives better (generally lumping those in as 'shorttermist'). If people are now starting to take other catastrophic risks more seriously, that might be remedied. (also relevant to your 3rd and 4th points)
This seems to be treating 'focus only on current generations' and 'focus on Pascalian arguments for astronomical value in the distant future' as the only two reasonable views. David Thorstad has written a lot, I think very reasonably, about reasons why expected value of longtermist scenarios might actually be quite low, but one might still have considerable concern for the next few generations.
Counterpoint: I think the discourse before the purported shift to GCRs was substantially more dishonest. Nanda and Alexander's posts argued that we should talk about x-risk rather than longtermism on the grounds that it might kill you and everyone you know - which is very misleading if you only seriously consider catastrophes that kill 100% of people, and ignore (or conceivably even promote) those that leave >0.01% behind (which, judging by Luisa Rodriguez's work, is around the point beyond which EAs would typically consider something an existential catastrophe).
I basically read Zabel's post as doing the same, not as desiring a shift to GCR focus, but as desiring presenting the work that way, saying 'I’d guess that if most of us woke up without our memories here in 2022 [now 2023], and the arguments about potentially imminent existential risks were called to our attention, it’s unlikely that we’d re-derive EA and philosophical longtermism as the main and best onramp to getting other people to work on that problem' (emphasis mine).
Nanda, Alexander and Zabel's posts all left a very bad taste in my mouth for exactly that reason.
This is as much an argument that we made a mistake ever focusing on longtermism as that we shouldn't now shift away from it. Oliver Habryka (can't find link offhand) and Kelsey Piper are two EAs who've publicly expressed discomfort with the level of artificial support WWOTF received, and I'm much less notable, but happy to add myself to the list of people uncomfortable with the business, especially since at the time he was a trustee of the charity that was doing so much to promote his career.
YouGov Poll on SBF and EA
I recently came across this article from YouGov (published last week), summarizing a survey of US citizens for their opinions on Sam Bankman-Fried, Cryptocurrency and Effective Altruism.
I half-expected the survey responses to be pretty negative about EA, given press coverage and potential priming effects associating SBF with EA. So I was positively surprised that:
(it's worth noting that there were only ~1000 participants, and the survey was online only)
I am very sceptical about the numbers presented in this article. 22% of US citizens have heard of Effective Altruism? That seems very high. RP did a survey in May 2022 and found that somewhere between 2.6% and 6.7% of the US population had heard of EA. Even then, my intuition was that this seemed high. Even with the FTX stuff it seems extremely unlikely that 22% of Americans have actually heard of EA.
Thanks - I just saw RP put out this post, which makes much the same point. Good to be cautious about interpreting these results!
Quick take: renaming shortforms to Quick takes is a mistake
A couple of weeks ago I blocked all mentions of "Effective Altruism", "AI Safety", "OpenAI", etc from my twitter feed. Since then I've noticed it become much less of a time sink, and much better for mental health. Would strongly recommend!
throw e/acc on there too
Ten Project Ideas for AI X-Risk Prioritization
I made a list of 10 ideas I'd be excited for someone to tackle, within the broad problem of "how to prioritize resources within AI X-risk?" I won’t claim these projects are more / less valuable than other things people could be doing. However, I'd be excited if someone took a stab at them.
10 Ideas:
I wrote up a longer (but still scrappy) doc here