jtm

863 · Joined Jan 2019

Bio

Hi there! :)

Professionally, I work on biosecurity grantmaking with Effective Giving. I also spend some of my time working as a researcher on global catastrophic biological risks at the Future of Humanity Institute.

I am not aiming to be anonymous/pseudonymous on here, but I write my full name only sparingly because I prefer my Forum activity not to appear in search engines.

Joshua TM

Comments (54)

Thank you for writing this; I think it's very important.

Oh, and I also quite liked your section on 'the balance of positive vs negative value in current lives'!

Thanks for writing this!

One thing I really agreed with:

 For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.

I particularly appreciate your point about avoiding 'bait-and-switch' dynamics. I recognise that it's important to build broad support for a movement, but I ultimately think it's crucial to be transparent about the key considerations and motivations within longtermism. If, for example, the prospect of 'digital minds' is an essential part of how leading people in the movement think about the future, then I think that should be part of public outreach, notwithstanding how off-putting or unintuitive it may be. (MacAskill has a comment about excluding the subject here).

One thing I disagreed with:

MacAskill at times seemed reluctant to quantify his best-guess credences, especially in the main text.

I agree it's good to be transparent about priorities, including the weight placed on AI risk within the movement. But I tend to disagree that sharing subjective numerical credences is so important, and I think the practice sometimes has real downsides, especially for extremely speculative subjects. Making implicit beliefs explicit is helpful. But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the "1-in-6" estimate from The Precipice may have led to premature anchoring on that figure, which is likely relied upon too much relative to how speculative it necessarily is.

I appreciate that there are many benefits to sharing numerical credences, and you seem like an avid proponent of the practice (you do a great job of it in this post!), so we don't have to agree. I just wanted to highlight one substantial downside.

In a nutshell: I agree that caring about the future doesn't mean ignoring the present. But it does mean deprioritising the present, and this comes with very real costs that we should be transparent about.

Thanks for sharing this!

I think this quote from Piper is worth highlighting:

(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.


I broadly agree with this, except I think the first "if" should be replaced with "insofar as."  Even as someone who works full-time on existential risk reduction, it seems very clear to me that longtermism is causing this obvious and immediate harm; the question is whether that harm is outweighed by the value of pursuing longtermist priorities. 

GiveWell's growth is entirely compatible with the fact that directing resources toward longtermist priorities means not directing them toward present challenges. Thus, I think the following claim by Piper is unlikely to be true:

My main takeaway from the GiveWell chart is that it’s a mistake to believe that global health and development charities have to fight with AI and biosecurity charities for limited resources.

To make that claim, you have to speculate about the counterfactual situation where effective altruism didn't include a focus on longtermism.  E.g., you can ask:

  1. Would major donors still be using the principles of effective altruism for their philanthropy? 
  2. Would support for GiveWell charities have been even greater in that world? 
  3. Would even more people have been dedicating their careers to pressing current challenges like global development and animal suffering?  

My guess is that the answer to all three is "yes", though of course I could be wrong and I'd be open to hearing arguments to the contrary. In particular, I'd love to see evidence for the idea of a 'symbiotic' or synergistic relationship. What are the reasons to think that the focus on longtermism has been helpful for more near-term causes? E.g., does longtermism bring people on board with Giving What We Can who otherwise wouldn't have joined? I'm sure that's the case for some people, but how many? I'm genuinely curious here!

To be clear, it's plausible that longtermism is extremely good for the world all-things-considered and that longtermism can coexist with other effective altruism causes. 

But it's very clear that focusing on longtermism trades off against focusing on other present challenges, and it's critical to be transparent about that. As Piper says, "prioritization of causes is at the heart of the [effective altruism] movement."

Thanks for your reply.

My concern is not that the numbers don't work out. My concern is that the "$100m/0.01%" figure is not an estimate of how cost-effective 'general x-risk prevention' actually is in the way that this post implies.

It's not an empirical estimate; it's a proposed funding threshold, i.e. an answer to Linch's question "How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?" But saying that we should fund interventions at that level of cost-effectiveness doesn't tell us whether there are many (or any) such interventions available at the moment. If I say "I propose that GiveWell should endorse interventions that we expect to save a life per $500", that doesn't by itself show whether such interventions exist.

Of course, the proposed funding threshold could be informed by cost-effectiveness estimates for specific interventions; I actually suspect that it is. But then it would be useful to see those estimates – or at the very least know which interventions they are  – before establishing that figure as the 'funding bar' in this analysis.

This is particularly relevant if those estimates are based on interventions that do not prevent catastrophic events but merely prevent them from reaching existential/extinction levels. The latter category does not affect all currently living people, so '8 billion people' would be the wrong number for the estimation you wrote above.

Thanks again for writing this. I just wanted to flag a potential issue with the $125 to $1,250 per human-life-equivalent-saved figure for ‘x-risk prevention.’ 

I think that figure is based on a willingness-to-pay proposal that already assumes some kind of longtermism.
 

You base the range on Linch’s proposal of aiming to reduce x-risk by 0.01% per $100m-$1bn. As far as I can tell, these figures are based on a rough proposal of what we should be willing to pay for existential risk reduction: Linch refers to this post on “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?”, which includes the proposed answer that “we should fund interventions that we have resilient estimates of reducing x-risk ~0.01% at a cost of ~$100M.”
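For reference, here is the arithmetic I assume underlies the $125-$1,250 range (a rough sketch only; I'm assuming the figures come from dividing the $100m-$1bn cost by 0.01% of the roughly 8 billion people alive today – please correct me if the original derivation was different):

```python
# Rough sketch of how I assume the $125-$1,250 range was derived
# (assumption: cost per 0.01% x-risk reduction, divided by 0.01% of ~8bn current lives).

cost_low, cost_high = 100e6, 1e9   # Linch's proposed $100m-$1bn per 0.01% reduction
risk_reduction = 0.0001            # 0.01% absolute reduction in existential risk
current_population = 8e9           # ~8 billion people alive today

expected_lives = risk_reduction * current_population  # 800,000 current lives in expectation

print(f"${cost_low / expected_lives:,.0f} per life-equivalent")   # ~$125
print(f"${cost_high / expected_lives:,.0f} per life-equivalent")  # ~$1,250
```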
 

But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for. (@Linch, please do correct me if I'm wrong!)

In short, saying that we should fund interventions at the $100m/0.01% bar doesn’t say whether there are many (or any) interventions available at that level of cost-effectiveness. And while I appreciate that some grantmakers have begun leaning more on that kind of quantitative heuristic, I doubt that you can infer from this that previously or currently funded work on ‘general x-risk prevention’ has met that bar, or even come particularly close to it.


So, I think the $125-$1,250 figure already assumes longtermism and isn’t applicable to your question. (Though I may have missed something here and would be happy to stand corrected – particularly if I have misrepresented Linch’s analysis!)

Of course, if the upshot is that ‘general x-risk prevention’ is less cost-effective than $125-$1,250 per currently-alive-human-life-equivalent saved, then your overall point only becomes stronger.

(PS: As an aside, I think it would be good practice to add a caption beneath your table noting that these are rough estimates, and perhaps in some cases the only available estimate for that quantity. I'm pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out in extremely rough and fragile estimates. Given how rough these estimates are, I think it'd be better if others replicated the analysis from scratch before citing them.)

Thanks for writing this! I think your point is crucial and too often missed or misrepresented in discussions on this.

A related key point is that the best approach to mitigating catastrophic/existential risks depends heavily on whether one comes at it from a longtermist angle or not. For example, this choice determines how compelling it is to focus on strategies or interventions for civilisational resilience and recovery.

To take the example of biosecurity: in some (but not all) cases, interventions to prevent catastrophe from biological risks look quite different from interventions to prevent extinction from biological risks. And how much weight one places on the difference between catastrophe and extinction really does depend on what one thinks about longtermism and the importance of future generations.

Thanks for taking the time to write this up!

I wholeheartedly agree with Holly Morgan here! Thank you for writing this up and for sharing your personal context and perspective in a nuanced way. 
