Please don't just downvote this. I welcome comments, criticisms, feedback, and so on. Where am I wrong? Do you disagree that utopianism has, historically, led to bad outcomes? Do you think that S2 really is as bad as S1? Is Olle Häggström's scenario or Pinker's statement off-base?
This paper offers a number of reasons why the Bostromian notion of existential risk is useless. On the one hand, it is predicated on a highly idiosyncratic techno-utopian vision of the future that few would find appealing. On the other, its "worst-case outcomes" for humanity lump together everything from the atrocious to the benign. What matters, on Bostrom’s view, is not human extinction per se, but any event that would permanently prevent current or future people from attaining technological Utopia. I then consider the question of whether the Bostromian paradigm could be dangerous. My answer is affirmative: this perspective combines utopianism and utilitarianism. Historically, this has proven to be a highly combustible mix. When the ends justify the means, and when the end is paradise, then groups or individuals may feel justified in contravening any number of moral constraints on human behavior, including those that proscribe violent actions. Although I believe that studying low-probability, high-impact risks is extremely important, I urge scholars to abandon the Bostromian concept of existential risk.
Much though I dislike important conversations happening on Facebook rather than some more public forum, it's probably worth those considering engaging here reading the pre-existing Facebook discussion, here and here. At the very least we can avoid re-treading old ground.
I think the concerns about utopianism are well-placed and merit more discussion in effective altruism. I'm sad to see the post getting downvoted.
I downvoted it based on things like calling John Halstead and Nick Beckstead white supremacists (based on extremely shaky argumentation) and apparently taking it as obvious that rejecting person-affecting views is morally monstrous.
I might make longer, more substantive comments later, but there are reasons to downvote this other than wanting to squash discussion of fanaticism.
It may be noted that in the thing I wrote on climate change I don't actually defend long-termism or even avow belief in it.
For those who find it confusing that I, at best a mid-table figure in EA, get dragged into this stuff, the reason is that I once publicly criticised a post on Pinker that Phil wrote on Facebook (my critique was about three sentences). Phil has since then borne a baffling and persistent grudge against me, including persistently sending me messages on Facebook, name-checking me while making some rape allegations against some famous person I have never heard of, and then calling me a white supremacist. Hopefully, this gives some insight into Phil's psychology and what is actually driving posts such as the one linked to here.
John: Do I have your permission to release screenshots of our exchange? You write: "... including persistently sending me messages on Facebook." I believe that this is very misleading.
please do
Thanks for pointing that out!
For those who might worry that you're being hyperbolic, I'd say that the linked paper doesn't say that they are white supremacists. But it does claim that a major claim from Nick Beckstead's thesis is white supremacist. Here is the relevant quote, from pages 27-28:
"As he [Beckstead] makes the point,
> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
This is overtly white-supremacist."
The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn't seem like a very helpful contribution to the discussion.
Thanks, I agree with this clarification.
I actually find the argument that those arguing against prioritising climate change are aiding white supremacy[1] more alarming than the attack on Beckstead, even though the accusations there are more oblique.
While I think Beckstead's argumentation here seems basically true, it is clearly somewhat incendiary in its implications and likely to make many people uncomfortable – it is a large bullet to bite, even if I think that calling it "overtly white-supremacist" is bad argumentation that risks substantially degrading the discourse[2].
Conversely, claiming that anyone who doesn't explicitly prioritise a particular cause area is aiding white supremacy seems like extremely pernicious argumentation to me – an attempt to actively suppress critical prioritisation between cause areas and attack those trying to work out how to make difficult-but-necessary trade-offs. I think this style of argumentation makes good-faith disagreement over difficult prioritisation questions much harder, and contributes exceedingly little in return.
"Hence, dismissing climate change because it does not constitute an obstacle for creating Utopia reinforces unjust racial dynamics, and thus supports white supremacy." (p. 27) ↩︎
The document also claims (in footnote 13) that "the prevalence of such tendencies" (by which I assume it means "overtly white-supremacist" tendencies, since the footnote is appended directly to that accusation) in EA longtermism "may be somewhat unsurprising" given EA's racial make-up. I would find it quite surprising if many EAs were secretly harbouring white-supremacist leanings, and would require much stronger (or indeed any) evidence that this were the case before casting such aspersions. ↩︎
Yeah, I agree the facile use of "white supremacy" here is bad, and I do want to keep ad hominems out of EA discourse. Thanks for explaining this.
I guess I still think it makes important enough arguments that I'd like to see engagement, though I agree it would be better said in a more cautious and less accusatory way.
Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.
For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).
Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.
Does this take more immediate existential risks into account, and if so, to what degree, and how do people in the developing and developed worlds affect them?
No.
I'm sad to see this comment get downvoted.
I tried to make this comment before, but for some reason it isn't visible, so I'm reposting it.
I think this is an interesting paper. I gave it an upvote.
One comment: It is misleading to say that on total utilitarianism + longtermism "the axiological difference between S1 and S2 is negligible". It may be negligible compared to the difference between either and utopia, but that doesn't mean it's negligible in absolute terms. Saying that the disvalue of a single terrible thing happening to one person is "negligible" compared to the total disvalue in the world over the course of ten years doesn't necessarily mean one is callous about the former.
Besides the risks of harm by omission and of focusing on the wrong things, which I agree with others here is a legitimate area for debate in cause prioritization, there are also risks of contributing to active harm. That is a slightly different concern (not fundamentally different for a consequentialist, but one that might carry greater reputational costs for EA). I think this passage is illustrative:
I think you don't need Bostromian stakes or utilitarianism for these types of scenarios, though. Consider torture, collateral civilian casualties in war, or the bombings of Hiroshima and Nagasaki. Maybe you could argue that in many such cases more civilians will be saved, so the trade seems more comparable: actual lives for actual lives, not actual lives for extra lives (extra in number, not in identity, on a wide person-affecting view). But it seems act consequentialism is susceptible to making similar trades generally.
I think one partial solution is to just not promote act consequentialism publicly unless you preface it with important caveats. Another is to correct naive act consequentialist analyses in high-stakes scenarios as they come up (like Phil is doing here, but also in response to individual comments).
I think the paper title is clickbaity and misleading, given that you argue narrowly against Bostrom's conception of existential risk rather than the broader idea of x-risk itself.
I agree that this is not an existential catastrophe, at least on timescales of less than a billion years, provided that humanity is not permanently prevented from leaving Earth. To me, an "existential catastrophe" is an event that causes humanity's welfare or the quality of its moral values to permanently fall far below present-day levels, e.g. to pre-industrial levels. At most, I'd be disappointed if technology plateaued at a level above the present day's.
However, I'd consider it an existential catastrophe if humanity permanently lost the ability to settle outer space, because that would make our eventual extinction inevitable.
I think the key message a lot of people will take away from this post is "Your entire philosophy and way of life is wrong - it doesn't matter if everyone dies."
What is the key message you actually want people to take away from this post?
If they read superficially, yes. Would you prefer he explicitly say in the abstract "I think it's bad if everyone dies"?
ælijah: If you're going to accuse other users of having read something superficially, please explain your views in more detail. What do you think the paper's key message is, and what sections/excerpts make you believe this?
I'll note that Khorton didn't suggest that "it doesn't matter if everyone dies" was what the post's author actually meant to convey - instead, she expressed concern that it could be read in that way, and asked the author to clarify.
Also, speaking as a Forum moderator: the tone of your comment wasn't really in keeping with the Forum's rules. We discourage even mildly abrasive language if it doesn't contain enough detail for people to be able to respond to your points.
I apologize. I meant my comment to say that the paper wouldn't be misunderstood in that way by its readership as a whole if it were read carefully.
On further thought, I think it could be reasonably argued that the abstract actually should explicitly say "I think it's bad if everyone dies".
Thanks for clarifying. This topic has generally been contentious, so I want to be careful to keep the conversation focused on substantive discussion of Torres' ideas or specific wording.