Yeah, that's fair. I wrote this somewhat off the cuff, but since it got more engagement than I expected, I'd turn it into a full post if I were writing it again.
Is your claim that "impartial altruists with ~no credence in longtermism would have more impact donating to AI/GCRs than to animals / global health"?
To my mind, this is the crux, because:
[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, though of course one could donate to / work on a topic for non-impartial / non-EA reasons.]
Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.
Over the last ~6 months I've noticed a general shift amongst EA orgs towards framing work on risks from AI, bio, nukes, etc. less in terms of the logic of longtermism and more in terms of Global Catastrophic Risks (GCRs) directly. Some data points on this:
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even thinking only about people alive today.
Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):
More meta points
I haven't yet decided, but it's likely that a majority of my donations will go to this year's donor lottery. I'm fairly convinced by the arguments in favour of donor lotteries [1, 2], and would encourage others to consider them if they're unsure where to give.
Having said that, lotteries generate fewer fuzzies than donating directly, so I may separately give to some effective charities that I'm personally excited about.
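For intuition, here's the basic expected-value arithmetic behind donor lotteries (roughly the case made in [1, 2]; the $1,000 contribution and $100,000 pool below are illustrative numbers, not any specific lottery's terms):

```latex
% Win probability is proportional to your contribution.
% Illustrative numbers: c = \$1{,}000 into a pool of B = \$100{,}000.
P(\text{win}) = \frac{c}{B} = \frac{1{,}000}{100{,}000} = 1\%
\qquad
\mathbb{E}[\text{dollars directed}] = P(\text{win}) \cdot B = c = \$1{,}000
```

Your expected donation is unchanged, but conditional on winning you direct a pot large enough to justify serious time on grant research, so that research cost is paid once rather than by every small donor.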
Thanks! I just saw RP put out this post, which makes much the same point. Good to be cautious about interpreting these results!
YouGov Poll on SBF and EA
I recently came across this article from YouGov (published last week), summarizing a survey of US citizens' opinions on Sam Bankman-Fried, cryptocurrency, and Effective Altruism.
I half-expected the survey responses to be pretty negative about EA, given press coverage and potential priming effects associating SBF with EA. So I was pleasantly surprised that:
(It's worth noting that there were only ~1,000 participants, and the survey was online-only.)
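For a rough sense of what that sample size buys: assuming simple random sampling (which online panels only approximate), the worst-case 95% margin of error at n = 1,000 is about ±3 percentage points:

```latex
% Worst-case 95% margin of error for a proportion: p = 0.5, n = 1000
\text{MoE} = z_{0.975}\sqrt{\frac{p(1-p)}{n}}
           = 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031
```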
(FYI to others: I've just seen Ajeya's very helpful writeup, which has already partially answered this question!)
What's the reason for the change from longtermism to GCRs? How has this changed strategy, and how will it change strategy going forward?
It seems that OP's AI safety & governance teams have both historically been capacity-constrained. Why the decision to hire for these roles now, rather than earlier?
Ten Project Ideas for AI X-Risk Prioritization
I made a list of 10 ideas I'd be excited for someone to tackle within the broad problem of "how to prioritize resources within AI X-risk?". I won't claim these projects are more or less valuable than other things people could be doing, but I'd be excited if someone took a stab at them.
I wrote up a longer (but still scrappy) doc here.