
Tom Barnes

Applied Researcher @ Founders Pledge
1002 karma · Joined Apr 2020 · Working (0-5 years) · London, UK


Yeah, that's fair. I wrote this somewhat off the cuff, but since it got more engagement than I expected, I'd turn it into a full post if I were writing it again

Is your claim "Impartial altruists with ~no credence in longtermism would have more impact donating to AI/GCRs than to animals / global health"?

To my mind, this is the crux, because:

  1. If Yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs
  2. If No, then I'm confused about why one wouldn't donate to animals / global health instead

[I say "donate to" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality, since this seems core to EA to me, but of course one could donate to / work on a cause for non-impartial / non-EA reasons]

Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly

TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics

Over the last ~6 months I've noticed a general shift amongst EA orgs towards justifying work on risks from AI, bio, nukes, etc. less via the logic of longtermism, and more via Global Catastrophic Risks (GCRs) directly. Some data points on this:

  • Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
  • This post from Claire Zabel (OP)
  • Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
  • Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
  • Anecdotal data from conversations with people working on GCRs / X-risk / Longtermist causes

My guess is that these changes are (almost entirely) driven by PR concerns about longtermism. I'd also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if they're only thinking about people alive today.

Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):

  1. From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case, but not the latter. Just looking at GCRs on their own mostly misses this nuance.
  2. From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential"
    • Preventing an extinction-level GCR might move us from 0% to 1% of our future potential, but there's then 99x more value in going from the "okay (1%)" future to the "great (100%)" one (see the quick arithmetic after this list)
    • See Aird 2020 for more nuances on this point
  3. From a longtermist (~suffering focused) perspective, reducing GCRs might be net-negative if the future is (in expectation) net-negative
    • E.g. if factory farming continues indefinitely, or due to increasing the chance of an S-Risk
    • See Melchin 2021 or DiGiovanni 2021 for more
    • (Note this isn't just a concern for suffering-focused ethics people)
  4. From a longtermist perspective, a focus on GCRs neglects non-GCR longtermist interventions (e.g. trajectory changes, broad longtermism, patient altruism/philanthropy, global priorities research, institutional reform)
  5. From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today
    • I'm pretty uncertain about this, but my guess is that alleviating farmed animal suffering is more welfare-increasing than e.g. working to prevent an AI catastrophe, given that the latter is pretty intractable (but I haven't done the numbers)
    • See discussion here
    • If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness)
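
To spell out the arithmetic behind point 2 (a rough sketch; the 1% and 100% figures are illustrative placeholders, not estimates): letting \(V\) denote the value of humanity's full potential,

\[
\underbrace{(0.01 - 0)\,V}_{\text{preventing extinction}} = 0.01\,V
\qquad \text{vs.} \qquad
\underbrace{(1 - 0.01)\,V}_{\text{okay} \,\to\, \text{great future}} = 0.99\,V,
\]

so on these made-up numbers, moving from the "okay" future to the "great" future is worth 99x as much as moving from extinction to the "okay" future.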

More meta points

  1. From a community-building perspective, pushing people straight into GCR-oriented careers might work in the short term to get resources to GCRs, but could lose the long-run benefits of EA / longtermist ideas. I worry this might worsen community epistemics about the motivation behind working on GCRs:
    • If the case for GCRs only goes through on longtermist grounds, but longtermism is false, then impartial altruists should rationally switch towards current-generations opportunities. Without a grounding in cause impartiality, however, people won't actually make that switch
  2. From a general virtue ethics / integrity perspective, making this change for PR / marketing reasons alone - without an underlying change in longtermist motivation - feels somewhat deceptive.
    • As a general rule about integrity, I think it's probably bad to sell people on doing something for reason X when you actually want them to do it for reason Y, and you're not transparent about that
  3. There's something fairly disorienting about the community switching so quickly from [quite aggressive] "yay longtermism!" (e.g. the hype around the launch of WWOTF) to essentially disowning the word "longtermism", with very little mention or admission that this happened, or why

I haven't yet decided, but it's likely that a majority of my donations will go to this year's donor lottery. I'm fairly convinced by the arguments in favour of donor lotteries [1, 2], and would encourage others to consider them if they're unsure where to give. 

Having said that, lotteries generate fewer fuzzies than donating directly, so I may separately give to some effective charities I'm personally excited about.

Thanks - I just saw RP put out this post, which makes much the same point. Good to be cautious about interpreting these results!

YouGov Poll on SBF and EA

I recently came across this article from YouGov (published last week), summarizing a survey of US citizens' opinions on Sam Bankman-Fried, cryptocurrency, and Effective Altruism.

I half-expected the survey responses to be pretty negative about EA, given the press coverage and potential priming effects associating SBF with EA. So I was positively surprised by the results:

Survey Results

(it's worth noting that there were only ~1000 participants, and the survey was online only)

(FYI to others - I've just seen Ajeya's very helpful writeup, which has already partially answered this question!)

What's the reason for the change from longtermism to GCRs? How has this changed strategy, and how will it change strategy going forward?

It seems that OP's AI safety & governance teams have both historically been capacity-constrained. Why the decision to hire for these roles now (rather than earlier)?

Ten Project Ideas for AI X-Risk Prioritization

I made a list of 10 ideas I'd be excited for someone to tackle within the broad problem of "how should we prioritize resources within AI x-risk?" I won't claim these projects are more or less valuable than other things people could be doing, but I'd be excited if someone took a stab at them

10 Ideas:

  1. Threat Model Prioritization
  2. Country Prioritization
  3. An Inside-view timelines model
  4. Modelling AI Lab deployment decisions
  5. How are resources currently allocated between risks / interventions / countries
  6. How to allocate between “AI Safety” vs “AI Governance”
  7. Timing of resources to reduce risks
  8. Correlation between Safety and Capabilities
  9. How should we reason about prioritization?
  10. How should we model prioritization?

I wrote up a longer (but still scrappy) doc here
