jtm

Hi there!

I'm a graduate student in health policy who's been involved with EA for quite a while now. I do research on pandemic preparedness and will be working full-time on biorisk research at the Future of Humanity Institute starting in October 2021.

Beyond biorisk, I'm especially excited about improving the epistemic norms, inclusivity, and general awesomeness of the EA community.

I previously helped run the Yale Effective Altruism group.

Cheers :)

Comments

Response to Phil Torres’ ‘The Case Against Longtermism’

Thanks for the context. I should note that I did not in any way intend to disparage Beckstead's personal character or motivations, which I definitely assume to be both admirable and altruistic.

As stated in my comment, I found the quote relevant to the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author's personal actions.

A Biosecurity and Biorisk Reading+ List

Thanks for making this list, Tessa – so much that I have yet to read! And thanks for including our article :)

I thought I might suggest a few other readings on vaccine development:

Also, I think you omitted a super important 80k podcast: Ambassador Bonnie Jenkins on 8 years of combating WMD terrorism.

Finally, since you already included a ton of readings from fellow EAs, I thought I'd also suggest Questioning Estimates of Natural Pandemic Risk (2018) by David Manheim.

Thanks again for making this!

Response to Phil Torres’ ‘The Case Against Longtermism’

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?

I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, e.g. by empirical or historical evidence. Since Beckstead didn't do that (which is perhaps to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises of the argument are extremely speculative.

I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like 'we should prioritise people like ourselves.'

Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and that it exemplifies a form of reasoning that we longtermists should be careful about.

Response to Phil Torres’ ‘The Case Against Longtermism’

Thanks, Haydn, for writing this thoughtful post. I am glad that you (hopefully) found something from the syllabus useful and that you took the time to read and write about this essay.

I would love to write a longer post about Torres' essay and engage in a fuller discussion of your points, but I'm afraid I won't get around to that for a while. So, as an unsatisfactory substitute, I will instead just highlight three parts of your post that I particularly agreed with, as well as two parts that I believe deserve further clarification or context.

A)

Torres suggests that longtermism is based on an ethical assumption of total utilitarianism (...) However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism.

I agree with this and think that any critique of longtermism's moral foundations should engage seriously with the fact that many of its key proponents have written extensively about moral uncertainty and pluralism, and that this informs longtermist thinking considerably. I don't think Torres' essay does that.

B)

However, the more common longtermist policy proposal is differential technological development – to try to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies and to slow down the development of risk-increasing (or socially harmful) technologies.

Agreed, this seems like another important omission from the essay and one that is quite conspicuous given Bostrom's prominent essay on the topic.

C)

Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with this potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed.

As above, this seems like a critical omission.

D) 

Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present day lives.  (...)

However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example Ord gives a 1/6 (~16%) probability of existential risk this century – and the reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1-10%.

Unless I'm misunderstanding something, this section seems to conflate three distinct quantities:

  1. The estimated marginal effect on existential risk of some action EAs could take.
  2. The estimated absolute existential risk this century.
  3. The estimated marginal effect on existential risk of some big policy change, e.g. arms control.

While (2) might indeed be as high as ~16%, and (3) may be as high as 1-10%, both of these quantities are very different from (1). Very rarely, if ever, do EAs have the option 'spend $50M to achieve a robust arms control regime'; it's much more likely to be 'spend $50M to increase the likelihood of such a regime by 1-5%.'

So, unless you think the tens of millions of "EA dollars" allocated towards longtermist causes reduce existential risk by >>0.001% per, say, ten million dollars spent, then it seems like you would indeed have to be committed to Torres' formulation of the tiny-risk-reduction vs. current-lives-saved tradeoff.

Of course, you may believe that the marginal effects of many EA actions are, in fact, >>0.001% risk reduction. And even if you don't, the tradeoff may still be a reasonable ethical position to take.

I just think it's important to recognise that this tradeoff does seem to be part of the deal for x-risk-focused longtermism.
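
To make the distinction concrete, here is a toy back-of-the-envelope comparison – every number below is a purely illustrative assumption of mine, not an estimate from Ord, Beckstead, or anyone else:

```python
# Toy comparison: saving present-day lives vs. a tiny marginal reduction
# in existential risk. All figures are illustrative assumptions only.

donation = 10_000_000            # $10M, matching the figure in my comment above
cost_per_life_saved = 5_000      # assumed cost per life saved by a top global health charity
lives_saved_now = donation / cost_per_life_saved           # 2,000 lives

future_lives_at_stake = 1e15     # assumed astronomical-stakes figure
marginal_risk_reduction = 1e-5   # a 0.001% absolute reduction, i.e. quantity (1) above

expected_future_lives_saved = future_lives_at_stake * marginal_risk_reduction

print(f"Present-day lives saved: {lives_saved_now:,.0f}")                  # 2,000
print(f"Expected future lives saved: {expected_future_lives_saved:,.0f}")  # 10,000,000,000
```

On these (entirely made-up) numbers, even a 0.001% marginal risk reduction dwarfs the present-day benefit, which is why the tradeoff seems hard to avoid once astronomical stakes are granted.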

E)

Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”

However, longtermism is not committed to donating (much less transferring wealth from poor countries) to present rich people.

For a discussion of this point, I think it is only fair to also include the quote from Nick Beckstead's dissertation that Torres discusses in the relevant section. I include it in full below, for context:

"Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal." (Beckstead, 2013, quoted in Torres, 2021)

Here, I should perhaps note that while I've read parts of Beckstead's work, I don't think I've read that particular section, and I would appreciate hearing if there is a crucial piece of context that's missing. Either way, I think this quote deserves a fuller discussion – I will, for now, simply note that I certainly think the quote, as written, is very objectionable and potentially warrants indignation.

Again, thanks for writing the post – I very much look forward to the discussions in the comments!

Response to Phil Torres’ ‘The Case Against Longtermism’

I second most of what Alex says here. Like him, I only know this particular essay from Torres, so I will limit my comments to that.

Notwithstanding my own objections to its tone and arguments, this essay did provoke important thoughts for me – as well as for other committed longtermists with whom I shared it – and that was why I ultimately ended up including it on the syllabus. The fact that, within 48 hours, someone put in enough effort to write a detailed forum post about the substance of the essay suggests that it can, in fact, provoke the kinds of discussions about important subjects that I was hoping to see. 

Indeed, it is exactly because I think the presentation in this essay leaves something to be desired that I would love to see more community discussion of some of these critiques of longtermism, so that their strongest possible versions can be evaluated. I realise I haven't actually specified which of the essay's many arguments I find interesting, so I hope I will find time to do that at some point, whether in this thread or in a separate post.

What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings?

Strongly agree with alexrjl here. 

And even if you assume consequentialism to be true and set moral uncertainty aside, I believe this is the sort of thing where the empirical uncertainty is so deep, and the potential for profound harm so great, that we should err strongly on the side of not doing things that intuitively seem terribly wrong – commonsense morality is a decent (if not perfect) starting point for determining the net consequences of actions. I'm not sure I'm making this point very clearly, but the general reasoning is discussed in this essay: Ethical Injunctions.

More generally, I would say that – with all due respect to the OP – this is an example of a risk associated with longtermist reasoning, whereby terrible things can seem alluring when astronomical stakes are involved. I think we, as a community, should be extremely careful about that.

A full syllabus on longtermism

Thanks! You can just use my full name (this is Joshua from the Yale group).

A full syllabus on longtermism

Thanks for your comment. I wholeheartedly agree that this is generally a neglected issue in the community, which is partly why I included the brief note – although, as stated, I believe it deserves separate and longer discussions.

A full syllabus on longtermism

Thanks, much appreciated! I should perhaps have indicated which of the pieces on this list have been published with peer review. Most of the articles have; the exceptions are the book chapters, GPI working papers, and a few other working papers.
