
This short post responds to some of the criticisms of longtermism in Torres’ minibook: Were the Great Tragedies of History “Mere Ripples”? The Case Against Longtermism, which I came across in this syllabus.

I argue that while many of the criticisms of Bostrom strike true, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, etc) – do not face the same challenges. I split the criticisms into two sections: the first on problematic ethical assumptions or commitments, the second on problematic policy proposals.

Note that I both respect and disagree with all three authors. Torres' piece is insightful and thought-provoking, as well as polemical; Ord’s book is a great restatement of the ethical case, though I disagree with his prioritisation of climate change, nuclear weapons and collapse; and Bostrom is a groundbreaking visionary, though one can dispute many of his views.

# Problematic ethical assumptions or commitments

Torres argues that longtermism rests on assumptions and makes commitments that are problematic and unusual/niche. He is correct that Bostrom has a number of unusual ethical views, and in his early writing he was perhaps overly fond of a contrarian ‘even given these incredibly conservative assumptions the argument goes through’ framing. But Torres does not sufficiently appreciate that these limitations and constraints have largely been acknowledged by longtermist philosophers, who have (re)formulated longtermism so as to not require these assumptions and commitments.

## Total utilitarianism

Torres suggests that longtermism is based on an ethical assumption of total utilitarianism, the view that we should maximise wellbeing by summing the wellbeing of all the individuals in a group. Such a ‘more is better’ ethical view accords significant weight to trillions of future individuals. He points out that total utilitarianism is not a majority opinion amongst moral philosophers.

However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism. One of the achievements of The Precipice is Ord’s arguments pointing out the affinities between longtermism and other ethical traditions, such as conservatism, obligations to the past, and virtue ethics. One can be committed to a range of ethical views and endorse longtermism.

## Trillions of simulations on computronium

Torres suggests that the scales are tilted towards longtermism by including in the calculation quadrillions of simulations of individuals living flourishing lives. The view that such simulations would be moral agents, or that this future is desirable, is certainly unusual.

But one doesn’t have to be committed to this view for the argument to work. The argument goes through if we assume that humanity never leaves Earth, and simply survives until the Earth is uninhabitable – or even more conservatively, survives the duration of an average mammalian species. There are still trillions of future individuals, whose interests and dignity matter.

## ‘Reducing risk from 0.001% to 0.0001% is not the same as saving thousands of lives’

Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present day lives. This is a clear example of early Bostrom stating his argument in a philosophically robust, but very counterintuitive way. Worries about this framing have been common for over a decade, in the debate over ‘Pascal’s Mugging’.

However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example Ord gives a 1/6 (~16%) probability of existential risk this century – and the reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1-10%. Specifically on Pascal’s Mugging, a number of decision-theory responses have been proposed, which I will not discuss here.
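To make the contrast between the two framings concrete, here is a minimal sketch of the expected-value arithmetic involved. All population and probability figures are illustrative placeholders (the conservative trillion-lives estimate from above, and round risk numbers), not Bostrom's or Ord's actual estimates:

```python
# Expected-value arithmetic behind the 'risk reduction = lives saved' framing.
# All numbers are illustrative, not precise figures from Bostrom or Ord.

def expected_lives_saved(future_population: float,
                         risk_before: float,
                         risk_after: float) -> float:
    """Expected number of future lives preserved by reducing extinction risk."""
    return future_population * (risk_before - risk_after)

# Bostrom-style framing: a tiny absolute risk reduction (0.001% -> 0.0001%)
# still corresponds to millions of expected lives at a trillion-person scale.
tiny = expected_lives_saved(1e12, 0.00001, 0.000001)

# Ord-style framing: a much larger starting risk (1/6 this century) and a
# much larger reduction (one percentage point) give far less counterintuitive,
# Pascal's-Mugging-resistant numbers.
large = expected_lives_saved(1e12, 1/6, 1/6 - 0.01)

print(f"{tiny:,.0f}")   # roughly 9 million expected lives
print(f"{large:,.0f}")  # roughly 10 billion expected lives
```

The point of the comparison is that Ord's restatement does not rest on multiplying a vast population by a vanishingly small probability shift; both factors in his version are well within the range where ordinary cost-benefit reasoning applies.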

## Transhumanism and space settlement & ‘Not reaching technological maturity = existential risk’

Torres suggests that longtermism is committed to transhumanism and space settlement (in order to expand the number of future individuals), and argues that Bostrom bakes this commitment into existential risk through a negative definition of existential risk as any future that does not achieve technological maturity (through extinction, plateauing, etc).

However, while Bostrom certainly does think this future is ethically desirable, longtermism is not committed to it. Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with this potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed. Longtermism is not committed to any particular outcome from the Long Reflection. For example, if after the Long Reflection humanity decided to never become post-humans, and never leave Earth, this would not necessarily be viewed by longtermists as a destruction of humanity’s potential, but simply as one choice about how to spend that potential.

# Problematic policy proposals

Torres argues that longtermists are required to endorse problematic policy proposals. I argue that they are not – I personally would not endorse these proposals.

## ‘Continue developing technology to reduce natural risk’

Torres argues that longtermists are committed to continued technological development for transhumanist/space settlement reasons – and to prevent natural risks – but that this is “nuts” because (as he fairly points out) longtermists themselves argue that natural risk is tiny compared to anthropogenic risk.

However, the more common longtermist policy proposal is differential technological development – to try to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies and to slow down the development of risk-increasing (or socially harmful) technologies. This is not a call to continue technological development in order to become post-humans or reduce asteroid/supervolcano risk – it is to differentially progress technology, assuming that overall technological development is hard/impossible to stop. I would agree with this assumption, but one may reasonably question it, especially when phrased as a form of strong ‘technological completism’ (any technology that can get invented will get invented).

## Justifies surveillance

Torres argues against the “turnkey totalitarianism” (extensive and intrusive mass surveillance and control to prevent misuse of advanced technology) explored in Bostrom’s ‘Vulnerable World Hypothesis’, and implies that longtermism is committed to such a policy.

However, longtermism does not have to be committed to such a proposal. In particular, one can simply object that Bostrom has a mistaken threat model. The existential risks we have faced so far (nuclear and biological weapons, climate change) have largely come from state militaries and large companies, and the existential risks we may soon face (from new biotechnologies and transformative AI) will also come from the same threat sources. The focus of existential risk prevention should therefore be on states and companies. Risks from individuals and small groups are relatively much smaller. The comparatively small benefits of the kind of mass surveillance Bostrom explores mean that it is not justified by a cost-benefit analysis.

Nevertheless, in the contrived hypothetical of ‘anyone with a microwave could have a nuclear weapon’, would longtermism be committed to restrictions on liberty? I address this under the next heading.

## Justifies mass murder

Torres argues that longtermists would have to be willing to commit horrendous acts (e.g. destroy Germany with nuclear weapons) if it would prevent extinction.

This is a classic objection to all forms of consequentialism and utilitarianism – from the Trolley Problem to the Colosseum objection. There are many classic responses, ranging from disputing the hypothetical to pointing out that other ethical views are also committed to such an action.

It is not a unique objection to longtermism, and loses some of its force as longtermism does not have to be based on utilitarianism (as I said above). I would also point out that it is an odd accusation to level, as longtermism places such high priority on peace, disarmament and avoiding catastrophes.

## Justifies giving money to the rich rather than the extreme poor, which is a form of white supremacy

Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”

However, longtermism is not committed to donating (much less transferring wealth from poor countries) to present rich people. Longtermists might in practice donate to NGOs or scientists in the developed world, but the ultimate beneficiaries are future generations. Indeed, the same might be true of other cause areas, e.g. work on a malaria vaccine or clean meat. Torres does not seem to accord much weight to how much longtermists recognise this as a moral dilemma and feel very conflicted – most longtermists began as committed to ending the moral crimes of extreme poverty, or of factory farming. There are many huge tragedies, but one must unfortunately choose where to spend one’s limited time and resources.

Longtermism is committed to the view that future generations matter morally. They are moral equals. When someone is born is a morally irrelevant fact, like their race, gender, nationality or sexuality. Furthermore, present people are in an unjust, exploitative power imbalance with future generations. Future generations have no voice or vote in our political and economic systems. They can do nothing to affect us. Our current political and economic systems are set up to overwhelmingly benefit those currently alive, often at the cost of exploiting, and loading costs onto, future generations.

This lack of recognition of moral equality, lack of representation, power imbalance and exploitation shares many characteristics with white supremacy/racism/colonialism and other unjust power structures. It is ironic to accuse a movement arguing on behalf of the voiceless of being a form of white supremacy.


It is very generous to characterise Torres' post as insightful and thought provoking. He characterises various long-termists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation and one that he very obviously throws around due to his own personal vendettas against certain people. For example, despite many of his former colleagues at CSER also being long-termists, he doesn't call them nazis because he doesn't believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in long-termism.

A quick point of clarification that Phil Torres was never staff at CSER; he was a visitor for a couple of months a few years ago. He has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not). (And FWIW he has made similar allusions, albeit thinly veiled, about me).

I'm really sorry to hear that from both of you, I agree it's a serious accusation.

For longtermism as a whole, as I argued in the post, I don't understand describing it as white supremacy - like e.g. antiracism or feminism, longtermism is opposed to an unjust power structure.

If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on facebook and tagging me in the comments, and then calling me and others nazis. Why do you and your colleagues continue to extensively collaborate with him?

To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him.

[disclaimer: I am co-Director at CSER. While much of what I will write intersects with professional responsibilities, it is primarily written from a personal perspective, as this is a deeply personal matter for me. Apologies in advance if that's confusing, this is a distressing and difficult topic for me, and I may come back and edit. I may also delete my comment, for professional or personal/emotional reasons].

I am sympathetic to Halstead's position here, and feel I need to write my own perspective. Clearly to the extent that CSER has - whether directly or indirectly - served to legitimise such attacks by Torres on colleagues in the field, I bear a portion of responsibility as someone in a leadership position. I do not feel it would be right or appropriate for me to speak for all colleagues, but I would like to emphasise that individually I do not, in any way, condone this conduct, and I apologise for it, and for any failings on my individual part that may have contributed.

My personal impression supports the case Halstead makes. Comments about my 'whiteness', and insinuations regarding my 'real' reasons for objecting to positions taken by Torres only came after I objected publicly to Torres's characterisations of Halstead, Olle Hagstrom, Nick Beckstead, Toby Ord and others. I have been informed by Torres that I owe him an apology for not siding with him [edit: to emphasise, this is my personal subjective impression/interpretation based on communications with me].

As well as the personal motivation, this mode of engagement reflects another aspect of this discourse I find deeply troubling: while I think there are valid arguments against longtermism, and alternative perspectives, it becomes impossible to discuss the issues, and in particular, the unfair characterisation of individuals, on the object level. Object level disagreement is met with an insinuation that this is the white supremacists closing ranks. I do believe there is a valid argument in some cases that one can be unaware of biases, and one can be unconsciously influenced by the 'background radiation' of a privileged society. Personally I have experienced this in unconscious, and sometimes deliberate, racism experienced as an Irish person living in Britain, and I have no doubt that non-white people have it much worse. However, this principle can also most certainly be overused uncharitably, or even 'weaponised' to shut down constructive intellectual engagement. And it is profoundly anti-intellectual to permit only those from outside a system of privilege to challenge scholarship.

There are other rhetorical moves I find deeply troubling. The common-society use of 'white supremacy' is something like "people who believe that white people are superior to other races and should dominate them, and are willing to act on that through violent means". Torres has typically not defined the term, but when challenged, he has explained that he is using it in the narrower way used in critical race theory: "white people benefiting from and maintaining a system where the legacy of colonial privilege is maintained". (Note that he does define it in the mini-book, although as the 'academic' definition, which I think is an overstatement.) When challenged, Torres insults people for not automatically knowing he is using the more esoteric CRT definition rather than the common-use definition. This is not a reasonable position to take. And it is not reasonable to expect people not to be deeply hurt and offended by the language used.

Even accounting for the CRT definition, this is still an extremely serious and harmful accusation, and one that should not be made without extremely careful consideration and very strong evidence. In my own case, as someone from a culture overwhelmingly defined by the harms of colonialism, it is another way of shutting down any possible discussion; it is so violently upsetting that it renders me incapable of continuing to engage.

To the extent that scholars at CSER are still collaborating with Torres: I am not. I have spoken regarding my concerns to those who have let me know they are still collaborating with him, and have let them make their own choices. Most collaborations are the legacy of projects initiated during his visit 2 years ago (which I authorised, not knowing some of the more serious issues Halstead raises, but being aware of some more minor concerns). Papers take a long time to go through the academic system, and it would be a very unusual and hostile step to e.g. take an author's name off a paper against their wishes. In some instances, people wished to engage with some aspects of Torres' critique and collaborate with presenting them in a more constructive and less polemical way (e.g. see several examples of Beard+Torres). I have respected their choices. This may not be the case with all collaborations; at CSER's current size I am not always aware of every paper being written. But I think it is fair to say my views on this style of engagement are well-known.

I have not taken the step of banning colleagues at CSER from collaborating with Torres. This would be an extremely unusual step in academia, running contrary to some fundamental principles of academic freedom. Further, I am concerned that such steps would reinforce another set of attack lines: Torres has already publicly claimed that he 'has no doubt' that employees at CSER that disagreed with me would be fired for it. I value having scope for intellectual disagreement greatly, and I would not want this perspective to take hold.

I do not claim that my decisions have been correct.

I do think there is significant value in engaging with critics. I admire engagement of the sort that Haydn has just undertaken. As a committed longtermist, to 'turn the other cheek' and engage in good faith with a steelmanned, charitable interpretation of a polemical and hostile document is something I find admirable in itself. And as noted elsewhere in this discussion, enough people have found some value in the challenge Torres has presented to ideas within longtermism (even where presented uncharitably) that it seems reasonable for some to engage with it. However at the same time, I do worry that beyond some point, engaging so charitably may legitimise a mode of discourse that I find distressingly hostile and inimical to kind, constructive, and open discourse.

These are challenging, and sometimes controversial topics. There will very often be issues on which reasonable people will disagree. There will sometimes be positions taken that others will be profoundly uncomfortable with. This is not unique to Xrisk or longtermism; the same is true of global development and animal rights. I believe it is of paramount importance that we be able to interact with each other as thinkers and doers in a kind, constructive and charitable way; and above all to adopt these principles when we critique each other. After all, when we are wrong, this is nearly always the most effective way to change minds. While not everyone will agree with me on this, this is the view I have always put forward in the centres I have been a part of.

Addendum: There's a saying that "no matter what side of an argument you're on, you'll always find someone on your side who you wish was on the other side".

There is a seam running through Torres's work that challenges xrisk/longtermism/EA on the grounds of the limitations of being led and formulated by a mostly elite, developed-world community.

Like many people in longtermism/xrisk, I think there is a valid concern here.  xrisk/longtermism/EA all started in a combination of elite british universities + US communities e.g. bay. They had to start somewhere. I am of the view that they shouldn't stay that way.

I think it's valid to ask whether there are assumptions embedded within these frameworks at this stage that should be challenged, and to posit that these would be challenged most effectively by people with a very different background and perspective. I think it's valid to argue that thinking, planning for, and efforts to shape the long-term future should not be driven by a community that is overwhelmingly from one particular background and that doesn't draw on and incorporate the perspectives of a community that reflects more of global societies and cultures. Work by such a community would likely miss important values and considerations, might reflect founder-effect biases, and would lack legitimacy and buy-in when it came to implementation. I think it's valid to expect it to engage with frameworks beyond utilitarianism, and I'm pleased to see GPI, The Precipice, amongst others do this.

As both xrisk and longtermism grow and mature, a core part of the project should be, in my view, and likely will be, expanding beyond this starting point. Such efforts are underway. They take a long time. And I would like to see people, both internal and external to the community, challenge the community on this where needed.

However, for someone on this side of the argument, I am deeply frustrated by Torres's approach. It salts the earth for engagement with people who disagree with this view and actively works against finding common ground. It alienates people from diverse backgrounds outside xrisk/longtermism from engaging with xrisk/longtermism, and thus makes the project harder. And it strengthens the views of those who disagree with the case I've put, especially when they perceive those they disagree with acting in bad faith. The book ends with the claim "More than anything, I want this mini-book to help rehabilitate “longtermism,” and hence Existential Risk Studies." I do not believe this hostile, polemical approach serves that aim; rather I worry that it is undermining it.

I completely agree with all of this, and am glad you laid it out so clearly.

Seconded.

I just wanted to say that this is a beautiful comment. Thank you for sharing your perspective in such an elegant, careful and nuanced manner.

I don't have any comment to make about Torres or his motives (I think I was in a room with him once). However, as a more general point, I think it can still make sense to engage with someone's arguments, whatever their motivation, at least if there are other people who take them seriously. I also don't have a view on whether others in the longtermism/X-risk world do take Torres's concern seriously, it's not really my patch.

Despite disagreeing with most of it, including but not limited to the things highlighted in this post, I think that Torres's post is fairly characterised as thought-provoking. I'm glad Joshua included it in the syllabus, also glad he caveated its inclusion, and think this response by Haydn is useful.

I haven't interacted with Phil much at all, so this is a comment purely on the essay, and not a defense of other claims he's made or how he's interacted with you.

I second most of what Alex says here. Like him, I only know about this particular essay from Torres, so I will limit my comments to that.

Notwithstanding my own objections to its tone and arguments, this essay did provoke important thoughts for me – as well as for other committed longtermists with whom I shared it – and that was why I ultimately ended up including it on the syllabus. The fact that, within 48 hours, someone put in enough effort to write a detailed forum post about the substance of the essay suggests that it can, in fact, provoke the kinds of discussions about important subjects that I was hoping to see.

Indeed, it is exactly because I think the presentation in this essay leaves something to be desired that I would love to see more community discussion on some of these critiques of longtermism, so that their strongest possible versions can be evaluated. I realise I haven't actually specified which among the essay's many arguments that I find interesting, so I hope I will find time to do that at some point, whether in this thread or a separate post.

I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like 'white supremacy' and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead (!) and others— are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.

(To be clear: I think the syllabus is otherwise great, and kudos for creating it!)

EDIT: See Seán's comment for further elaboration on points (1) and (2) above.

Genuine question: if someone has views that are widely considered repugnant (in this case that longtermists are white supremacists) but otherwise raises points that some people find interesting and thought-provoking, should we:

A) Strongly condemn the repugnant ideas whilst genuinely engaging with the other ideas

B) Ignore the person completely / cancel them

If the person is clearly trolling or not writing in good faith then I'd imagine B) is the best response, but if Torres is in fact trolling then I find it surprising that some people find some of his ideas interesting / thought-provoking.

(Just to reiterate this is a genuine question I'm not stating a view one way or the other and I also haven't read Torres' post)

In this case, I would say it's not the mere fact that they hold views widely considered repugnant, but the conjunction of that fact with decisive evidence of intellectual dishonesty (that some people found his writings thought provoking isn't necessarily in tension with the existence of this evidence). Even then you probably could conceive of scenarios where the points raised are so insightful that one should still engage with the author, but I think it's pretty clear this isn't one of those cases.

The last time I tried to isolate the variable of intellectual dishonesty using a non-culture war example on this forum (in this case using fairly non-controversial (to EAs) examples of intellectual dishonesty, and with academic figures that I at least don't think are unusually insightful by EA lights), commentators appeared to be against the within-EA cancellation of them, and instead opted for a position more like:

I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

This appears broadly analogous to how jtm presented Torres' book in his syllabus. Now of course a) there are nontrivial framing effects so perhaps people might like to revise their conclusions in my comment and b) you might have alternative reasons to not cite Torres in certain situations (eg very high standard for quality of argument, deciding that personal attacks on fellow movement members are verboten), but at least the triplet-conjunction presented in your comment (bad opinions + intellectual dishonesty + lack of extraordinary insight) did not, at the time, seem to be sufficient criteria in the relatively depoliticized examples I cited.

Aaron Gertler (Moderator):

As the Forum’s lead moderator, I’m posting this message, but it was written collaboratively by several moderators after a long discussion.

As a result of several comments on this post, as well as a pattern of antagonistic behavior, Phil Torres has been banned from the EA Forum for one year.

Our rules say that we discourage, and may delete, "unnecessary rudeness or offensiveness" and "behavior that interferes with good discourse". Calling someone a jerk and swearing at them is unnecessarily rude, and interferes with good discourse.

Phil also repeatedly accuses Sean of lying:

I am trying to stay calm, but I am honestly pretty f*cking upset that you repeatedly lie in your comments above, Sean [...] I won't include your response, Sean, because I'm not a jerk like you.

How can someone lie this much about a colleague and still have a job?

You repeatedly lied in your comments above. Unprofessional. I don't know how you can keep your job while lying about a colleague like that.

After having seen the material shared by Phil and Sean (who sent us some additional material he didn’t want shared on the Forum), we think the claims in question are open to interpretation but clearly not deliberate lies. For example, Sean said that Phil "has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not)." It’s evident from screenshots that Phil did list himself on Facebook and LinkedIn as working at CSER after he was no longer there, very plausibly by mistake. This is the kind of mistake that’s easy to make, but repeatedly saying someone is lying by pointing out the mistake is another example of unnecessary rudeness.

Of course, it’s understandable to have strong feelings if you believe someone is lying about you, but we expect Forum users to express strong feelings in a more productive way ("I think you're mistaken about that, and here's why"). Phil is sometimes more courteous, but we feel that his comments often fail to represent the culture we want to see on the Forum.

This ban is not related to Phil's academic work. We appreciate having well-informed critics on the Forum; even criticism which seems overly harsh, or somewhat off-target, can generate good discussion (e.g. this post and this response to it). For another example, see this defense of some of Phil’s views.

*****

We encourage people to alert us to any other instances of name-calling, swearing at people, or unsubstantiated personal accusations. We aim to apply these rules consistently and proportionately to the frequency/extent of their violation.

In several milder cases, we’ve messaged people with private warnings; because this case led to a ban, we’re sharing this comment publicly. And on this post, I’ve issued a warning to Halstead for accusations that he hadn't substantiated at the time he posted them, though he later shared satisfactory evidence with me.

People are still welcome to cross-post Phil's work, quote him, argue for his points, and all the rest — but he won't be permitted to post here himself until 12 May, 2022.

[This comment is a tangential and clarifying question; I haven't yet read your post]

Ord’s book is a great restatement of the ethical case, though I disagree with his prioritisation of climate change, nuclear weapons and collapse

If I didn't know anything about you, I'd assume this meant "Toby Ord suggests climate change, nuclear weapons, and collapse should be fairly high priorities. I disagree (while largely agreeing with Ord's other priorities)."

But I'm guessing you might actually mean "Toby Ord suggests climate change, nuclear weapons, and collapse should be much lower priorities than things like AI and biorisk (though they should still get substantial resources, and be much higher priorities than things like bednet distribution). I disagree; I think those things should be similarly high priorities to things like AI and biorisk."

Is that guess correct?

I'm not sure whether my guess is based on things I've read from you, vs just a general impression about what views seem common at CSER, so I could definitely be wrong.

That's right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I've got a draft book chapter on this, which I hope to be able to share a preprint of soon.

Thanks, Haydn, for writing this thoughtful post. I am glad that you (hopefully) found something from the syllabus useful and that you took the time to read and write about this essay.

I would love to write a longer post about Torres' essay and engage in a fuller discussion of your points right away, but I'm afraid I wouldn't get around to that for a while. So, as an unsatisfactory substitute, I will instead just highlight three parts of your post that I particularly agreed with, as well as two parts that I believe deserve further clarification or context.

A)

Torres suggests that longtermism is based on an ethical assumption of total utilitarianism (...) However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism.

I agree with this and think that any critique of longtermism's moral foundations should engage seriously with the fact that many of its key proponents have written extensively about moral uncertainty and pluralism, and that this informs longtermist thinking considerably. I don't think Torres' essay does that.

B)

However, the more common longtermist policy proposal is differential technological development – to try to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies and to slow down the development of risk-increasing (or socially harmful) technologies.

Agreed, this seems like another important omission from the essay and one that is quite conspicuous given Bostrom's prominent essay on the topic.

C)

Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with this potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed.

As above, this seems like a critical omission.

D)

Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present day lives.  (...)

However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example Ord gives a 1/6 (~16%) probability of existential risk this century – and the reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1-10%.

Unless I'm misunderstanding something, this section seems to conflate three distinct quantities:

1. The estimated marginal effect on existential risk of some action EAs could take.
2. The estimated absolute existential risk this century.
3. The estimated marginal effect on existential risk of some big policy change, e.g. arms control.

While (2) might indeed be as high as ~16%, and (3) may be as high as 1-10%, both of these quantities are very different from (1). Very rarely, if ever, do EAs have the option 'spend $50M to achieve a robust arms control regime'; it's much more likely to be 'spend $50M to increase the likelihood of such a regime by 1-5%.'
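To make the arithmetic concrete, here's a toy Fermi sketch. All numbers below are hypothetical placeholders chosen only to show how (1) is derived from (2) and (3) — they are not estimates anyone has endorsed:

```python
# Toy Fermi sketch of the three quantities distinguished above.
# All numbers are hypothetical placeholders, not real cost-effectiveness claims.

absolute_risk = 0.16        # (2) e.g. Ord's ~1/6 existential risk this century
policy_effect = 0.05        # (3) a regime cuts x-risk by 5 percentage points
spend = 50e6                # a hypothetical $50M campaign
p_success_boost = 0.03      # the spend raises the chance of the regime by 3 pp

# (1) Marginal effect of the spend on existential risk:
marginal_reduction = p_success_boost * policy_effect   # 0.0015, i.e. 0.15 pp

# Normalised to "per $10M spent":
per_10m = marginal_reduction * (10e6 / spend)          # 0.0003, i.e. 0.03 pp
```

Even with made-up numbers, the marginal per-dollar figure comes out orders of magnitude smaller than the headline policy effect — which is exactly the conflation I'm pointing at.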

So, unless you think the tens of millions of "EA dollars" allocated towards longtermist causes reduce existential risk by >>0.001% per, say, ten million dollars spent, then it seems like you would indeed have to be committed to Torres' formulation of the tiny-risk-reduction vs. current-lives-saved tradeoff.

Of course, you may believe that the marginal effects of many EA actions are, in fact, >>>0.001% risk reduction. And even if you don't, the tradeoff may still be a reasonable ethical position to take.

I just think it's important to recognise that that tradeoff does seem to be a part of the deal for x-risk-focused longtermism.

E)

Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”

However, longtermism is not committed to donating (much less transferring wealth from poor countries) to present rich people.

For a discussion of this point, I think it is only fair to also include the quote from Nick Beckstead's dissertation that Torres discusses in the relevant section. I include it in full below, for context:

"Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal." (Beckstead, 2013, quoted in Torres, 2021)

Here, I should perhaps note that while I've read parts of Beckstead's work, I don't think I've read that particular section, and I would appreciate hearing if there is a crucial piece of context that's missing. Either way, I think this quote deserves a fuller discussion – I will, for now, simply note that I certainly think the quote, as written, is very objectionable and potentially warrants indignation.

Again, thanks for writing the post, I look very much forward to the discussions in the comments!

A little historical background - one of my first introductions to proto-effective altruism was through corresponding with Nick Beckstead while he was a graduate student, around the time he would have been writing this dissertation. He was one of the first American members of Giving What We Can (which at the time was solely focused on global poverty), and at the time donated 10% of his graduate stipend to charities addressing global poverty. When I read this passage from his dissertation, I think of the context provided by his personal actions.

I think that "other things being equal" is doing a lot of work in the passage. I know that he was well aware of how much more cost-effective it is to save lives in poor economies than in rich ones, which is why he personally put his money toward global health.

Thanks for the context. I should note that I did not in any way intend to disparage Beckstead's personal character or motivations, which I definitely assume to be both admirable and altruistic.

As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author's personal actions.

Happy to have a go; the "in/out of context" is a large part of the problem here. (Note that I don't think I agree with Beckstead's argument for reasons given towards the end).

(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it like this quote is written in the first person, which has the effect of putting it in the present-day context, but these are at their heart philosophical arguments abstracted from time and space. This is a thing philosophers do.

If I were to apply the argument to the 12th-century world, I might claim that saving a person in what is now modern-day Turkey would have greater ripple effects than saving a person in war-ravaged Britain. The former was light years further ahead in science and technology, chock full of incredible Muslim scholar-engineers like Al Jazari (seriously; read about this guy). I might be wrong of course; the future is unpredictable, and these ripples might be wiped out in the next century by a Mongol Horde (as for the most part did happen); but wrong on different grounds.

And earlier in the thesis Beckstead provides a whole heap of caveats in addition to 'all other things being equal', including that his argument explicitly does not address issues "such as whose responsibility that is, how much the current generation should be required to sacrifice for the sake of future generations, how shaping the far future stacks up against special obligations or issues of justice"; these are all "good questions" but out of scope.

If Beckstead had further developed the 'it is better to save lives in rich countries' argument in the thesis, explicitly embedding it within the modern context and making practical recommendations that would exacerbate the legacy of postcolonial inequality, then Torres might have a point. He did not. It's a paragraph on one page of a 198-page PhD thesis. Reading the paragraph in the context of the overall thesis gives a very different impression than the deliberately leading context in which Torres places it.

(2) Now consider the further claims that Torres has repeatedly made - that this paragraph taints the entire field in white supremacy; and that any person or organisation who praised the thesis is endorsing white supremacy. This is an even more extreme version of the same set of moves. I have found nothing - nothing - anywhere in the EA or longtermist literature building on and progressing this argument.

(3) The same can be seen, in an even more extreme fashion, with the Mogensen paper. Again, an abstract philosophical argument. Here Mogensen (in a very simplified version) observes that over three dimensions - space, i.e. the world - total utilitarianism says you should spread your resources over all the people in that space. But if you introduce a fourth dimension - time - then the same axiology says you should spread your resources over space and time, and the majority of that obligation lies in the future. It's an abstract philosophical argument. Torres reads in white supremacy, and invites the reader to do the same.

(4) The problem here is that no body of scholarship can realistically withstand this level of hostile scrutiny and leading analysis, in which one paragraph in a PhD thesis, taken out of context, is used to damn an entire field. I don't think I personally agree with the argument on its own terms - it's hard to prove definitively, but inequality has often been argued to be a driver of systemic instability, and if so, any intervention that increases inequality might contribute to negative 'ripple effects' regardless of which countries were rich and poor at a given time. And I think the paragraph itself could reasonably be characterised as 'thoughtless', given that the author is a white Western person writing in the 21st century, even if the argument is not explicitly situated in this context.

However the extreme criticism presented in Torres's piece stands in stark contrast to the much more serious racism that goes unchallenged in so much of scholarship and modern life. Any good-faith actor will in the first instance pursue these, rather than reading the worst ills possible into a paragraph of a PhD thesis. I've run out of time, but will illustrate this shortly with a prominent example of what I consider to be much more significant racism from Torres's own work.

Here is an article by Phil Torres arguing that the rise of Islam represents a very significant and growing existential risk.

https://hplusmagazine.com/2015/11/17/to-survive-we-must-go-extinct-apocalyptic-terrorism-and-transhumanism/

I will quote a key paragraph:

"Consider the claim that there will be 2.76 billion Muslims by 2050. Now, 1% of this number equals 27.6 million people, roughly 26.2 million more than the number of military personnel on active duty in the US today. It follows that if even 1% of this figure were to hold “active apocalyptic” views, humanity could be in for a catastrophe like nothing we’ve ever experienced before."

Firstly, this is nonsense. The proposition that 1% of Muslims would hold "active apocalyptic" views and be prepared to act on them is baseless. And "if even 1%" suggests the author considers this a lowball estimate.

Secondly, this is fear-mongering against one of the most feared and discriminated-against communities in the West, being written for a Western audience.

Thirdly, it utilises another standard racist trope, population replacement - look at the growing number of the scary 'other'. They threaten to overrun the US's good ol' apple-pie armed forces.

This was not a paragraph in a thesis. It was a public article, intended to reach as wide an audience as possible. It used to be prominently displayed on his now-defunct website. The article above was written several years more recently than Beckstead's thesis.

I will say, to Torres's credit, that his views on Islam have become more nuanced over time, and that I have found his recent articles on Islam less problematic. This is to be praised. And he has moved on from attacking Muslims to 'critiquing' right-wing Americans, the Atheist community, and the EA community. This is at least punching sideways, rather than down.

But he has not subjected his own body of work, or other more harmful material, to anything like the level of critique he has applied to Beckstead, Mogensen, et al. I consider this deeply problematic in terms of scholarly responsibility.

Understood!

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?

I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, e.g. by empirical or historical evidence. Since Beckstead didn't do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises of the argument are extremely speculative.

I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like 'we should prioritise people like ourselves.'

Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?

Yep that is what I'm saying. I think I don't agree but thanks for explaining :)

The main issue I have with this quote is that it's so divorced from the reality of how cost-effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat 'other things being equal', but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries.

I don't understand why thinking like that quote isn't totally passé to EAs. At least to utilitarian EAs. If anyone's allowed to think hypothetically ("divorced from the reality"), I would think it would be a philosophy grad student writing a dissertation.

I just wanted to echo your sentiments in the last part of your comment re: Beckstead's quote about the value of saving lives in the developed world. Having briefly looked at where this quote is situated in Beckstead's PhD thesis (which, judging by the parts I've previously read, is excellent), the context doesn't significantly alter how this quote ought to be construed.

I think this is at the very least an eyebrow-raising claim, and I don't think Torres is too far off the mark to think that the label of white supremacism, at least in the "scholarly" sense of the term, could apply here. Though it's vital to note that this is in no way to insinuate that Beckstead is a white supremacist, i.e., someone psychologically motivated by white supremacist ideas. If Torres has insinuated this elsewhere, then that's another matter.

It also needs noting that, contra Torres, longtermism simpliciter is not committed to the view espoused in the Beckstead quote. This view falls out of some particular commitments which give rise to longtermism (e.g. total utilitarianism). The OP does a good job of pointing out that there are other "routes" to longtermism, which Ord articulates, and I think these approaches could plausibly avoid the implication that we ought to prioritise members of the developed world over the contemporaneous global poor.

I'm oblivious to Torres' history with various EAs, so I'm anxious about stepping into what seems like quite a charged debate here (especially with my first forum post), but I think it's worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they'd receive (unfairly or not!) - so it's worth considering how plausible these charges are, and how longtermists might respond. The OP develops some promising initial responses, but I also think a longer discussion would be beneficial.

Rational discourse becomes very difficult when a position is characterized by a term with an extremely negative connotation in everyday contexts—and one which, justifiably, arouses strong emotions—on the grounds that the term is being used in a "technical" sense whose meaning or even existence remains unknown to the vast majority of the population, including many readers of this forum. For the sake of both clarity and fairness to the authors whose views are being discussed, I strongly suggest tabooing this term.

>but I think it's worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they'd receive (unfairly or not!) - so it's worth considering how plausible these charges are, and how longtermists might respond.

This is a good point, and worth being mindful of as longtermism becomes more mainstream/widespread.

For whatever it's worth, I show in a forthcoming, peer-reviewed philosophy paper that Ord's view is, in fact, worse than Bostrom's in multiple ways. I will, of course, happily share a link to the document once it's published (although I know some folks at FHI have a copy right now).

"I argue that while many of the criticisms of Bostrom strike true, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, etc) – do not face the same challenges."