
This short post responds to some of the criticisms of longtermism in Torres’ minibook: Were the Great Tragedies of History “Mere Ripples”? The Case Against Longtermism, which I came across in this syllabus.

I argue that while many of the criticisms of Bostrom land, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, and others) – do not face the same challenges. I split the criticisms into two sections: the first on problematic ethical assumptions or commitments, the second on problematic policy proposals.

Note that I both respect and disagree with all three authors. Torres’ piece is insightful and thought-provoking, as well as polemical; Ord’s book is a great restatement of the ethical case, though I disagree with his prioritisation of climate change, nuclear weapons and collapse; and Bostrom is a groundbreaking visionary, though one can dispute many of his views.

 

Problematic ethical assumptions or commitments

Torres argues that longtermism rests on assumptions and makes commitments that are problematic and unusual/niche. He is correct that Bostrom has a number of unusual ethical views, and in his early writing he was perhaps overly fond of a contrarian ‘even given these incredibly conservative assumptions the argument goes through’ framing. But Torres does not sufficiently appreciate that these limitations and constraints have largely been acknowledged by longtermist philosophers, who have (re)formulated longtermism so as not to require these assumptions and commitments.

Total utilitarianism 

Torres suggests that longtermism is based on an ethical assumption of total utilitarianism, the view that we should maximise total wellbeing, computed by adding together the wellbeing of every individual in a group. Such a ‘more is better’ ethical view accords significant weight to trillions of future individuals. He points out that total utilitarianism is not the majority view amongst moral philosophers.
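
To make the ‘more is better’ structure explicit: on the total view, an outcome containing $N$ individuals with wellbeing levels $w_1, \dots, w_N$ is ranked by the sum (my notation, not Torres’ or Bostrom’s):

$$W_{\text{total}} = \sum_{i=1}^{N} w_i$$

Since every additional life with $w_i > 0$ raises $W_{\text{total}}$, an outcome containing trillions of flourishing future people can dominate the moral calculus.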

However, although total utilitarianism strongly supports longtermism, longtermism doesn’t need to be based on total utilitarianism. One of the achievements of The Precipice is Ord’s arguments pointing out the affinities between longtermism and other ethical traditions, such as conservatism, obligations to the past, and virtue ethics. One can be committed to a range of ethical views and still endorse longtermism.

Trillions of simulations on computronium

Torres suggests that the scales are tilted towards longtermism by including in the calculation quadrillions of simulations of individuals living flourishing lives. The view that such simulations would be moral patients, or that this future is desirable, is certainly unusual.

But one doesn’t have to be committed to this view for the argument to work. The argument goes through if we assume that humanity never leaves Earth, and simply survives until the Earth is uninhabitable – or even more conservatively, survives the duration of an average mammalian species. There are still trillions of future individuals, whose interests and dignity matter.
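
As a rough illustration of where ‘trillions’ comes from – the figures below are my own back-of-the-envelope assumptions, not numbers from Torres or Ord:

```python
# Back-of-the-envelope estimate of future births if humanity merely
# survives as long as a typical mammalian species (~1 million years).
# All inputs are illustrative assumptions.

births_per_year = 130e6    # roughly the current global annual births
years_so_far = 300e3       # approximate age of Homo sapiens
species_lifespan = 1e6     # rough average lifespan of a mammalian species

remaining_years = species_lifespan - years_so_far
future_births = births_per_year * remaining_years

print(f"~{future_births:.1e} future births")  # ~9.1e13, i.e. tens of trillions
```

Even under these deliberately conservative assumptions, future people outnumber the roughly one hundred billion humans who have ever lived by orders of magnitude.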

‘Reducing risk from 0.001% to 0.0001% is not the same as saving thousands of lives’

Torres implies that longtermism is committed to a view of the form that reducing risk from 0.001% to 0.0001% is morally equivalent to saving e.g. thousands of present-day lives. This is a clear example of early Bostrom stating his argument in a philosophically robust, but very counterintuitive, way. Worries about this framing have been common for over a decade, in the debate over ‘Pascal’s Mugging’.

However, longtermism does not have to be stated in such a way. The probabilities are unfortunately likely higher – for example, Ord gives a 1/6 (~16%) probability of existential catastrophe this century – and the achievable reductions in risk are likely higher too. That is, with the right policies (e.g. robust arms control regimes) we could potentially reduce existential risk by 1–10 percentage points. Specifically on Pascal’s Mugging, a number of decision-theoretic responses have been proposed, which I will not discuss here.
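
To see why these numbers need not be Pascalian, here is a minimal expected-value sketch using Ord’s 1/6 figure; the policy effect size and the future-population figure are my illustrative assumptions:

```python
# Expected number of future lives preserved by a policy that reduces
# existential risk this century. Figures are illustrative.

baseline_risk = 1 / 6    # Ord's estimate of existential risk this century
risk_reduction = 0.01    # assume the policy cuts risk by 1 percentage point
future_lives = 9e13      # future births estimate from the sketch above

new_risk = baseline_risk - risk_reduction           # ~16.7% -> ~15.7%
expected_lives_preserved = risk_reduction * future_lives

print(f"new risk: {new_risk:.3f}")
print(f"~{expected_lives_preserved:.1e} expected future lives preserved")  # ~9e11
```

With probabilities of this magnitude, the expected-value argument no longer rests on tiny probabilities of astronomical payoffs – the structure that makes Pascal’s Mugging objectionable.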

Transhumanism and space settlement & ‘Not reaching technological maturity = existential risk’

Torres suggests that longtermism is committed to transhumanism and space settlement (in order to expand the number of future individuals), and argues that Bostrom bakes this commitment into existential risk through a negative definition, on which any future that does not achieve technological maturity (through extinction, plateauing, etc.) counts as an existential catastrophe.

However, while Bostrom certainly does think this future is ethically desirable, longtermism is not committed to it. Torres underplays the crucial changes Ord makes with his definition of existential risk as the “destruction of humanity’s potential” and the institution of the “Long Reflection” to decide what we should do with this potential. Long Reflection proponents specifically propose not engaging in transhumanist enhancement or substantial space settlement before the Long Reflection is completed. Longtermism is not committed to any particular outcome of the Long Reflection. For example, if after the Long Reflection humanity decided never to become post-human and never to leave Earth, this would not necessarily be viewed by longtermists as a destruction of humanity’s potential, but simply as one choice of how to spend that potential.

 

Problematic policy proposals 

Torres argues that longtermists are required to endorse problematic policy proposals. I argue that they are not – I personally would not endorse these proposals.

‘Continue developing technology to reduce natural risk’

Torres argues that longtermists are committed to continued technological development for transhumanist/space settlement reasons – and to prevent natural risks – but that this is “nuts” because (as he fairly points out) longtermists themselves argue that natural risk is tiny compared to anthropogenic risk.

However, the more common longtermist policy proposal is differential technological development: trying to foster and speed up the development of risk-reducing (or more generally socially beneficial) technologies, and to slow down the development of risk-increasing (or socially harmful) technologies. This is not a call to continue technological development in order to become post-human or to reduce asteroid/supervolcano risk – it is to differentially advance technology, on the assumption that overall technological development is hard or impossible to stop. I would agree with this assumption, but one may reasonably question it, especially when phrased as a form of strong ‘technological completism’ (any technology that can be invented will be invented).

Justifies surveillance

Torres argues against the “turnkey totalitarianism” (extensive and intrusive mass surveillance and control to prevent misuse of advanced technology) explored in Bostrom’s ‘Vulnerable World Hypothesis’, and implies that longtermism is committed to such a policy. 

However, longtermism does not have to be committed to such a proposal. In particular, one can simply object that Bostrom has a mistaken threat model. The existential risks we have faced so far (nuclear and biological weapons, climate change) have largely come from state militaries and large companies, and the existential risks we may soon face (from new biotechnologies and transformative AI) will also come from the same threat sources. The focus of existential risk prevention should therefore be on states and companies. Risks from individuals and small groups are relatively much smaller, and the correspondingly small benefits of the kind of mass surveillance Bostrom explores mean it is not justified on a cost-benefit analysis.

Nevertheless, in the contrived hypothetical of ‘anyone with a microwave could have a nuclear weapon’, would longtermism be committed to restrictions on liberty? I address this under the next heading.

Justifies mass murder

Torres argues that longtermists would have to be willing to commit horrendous acts (e.g. destroy Germany with nuclear weapons) if it would prevent extinction.

This is a classic objection to all forms of consequentialism and utilitarianism – from the Trolley Problem to the Colosseum objection. There are many classic responses, ranging from disputing the hypothetical to pointing out that other ethical views are also committed to such an action.

It is not a unique objection to longtermism, and loses some of its force as longtermism does not have to be based on utilitarianism (as I said above). I would also point out that it is an odd accusation to level, as longtermism places such high priority on peace, disarmament and avoiding catastrophes.

Justifies giving money to the rich rather than the extreme poor, which is a form of white supremacy

Torres suggests that longtermism is committed to donating to the rich rather than to those in extreme poverty (or indeed animals). He further argues that this reinforces “racial subordination and maintain[s] a normalized White privilege.”

However, longtermism is not committed to donating to the presently rich (much less to transferring wealth from poor countries to them). Longtermists might in practice donate to NGOs or scientists in the developed world, but the ultimate beneficiaries are future generations. Indeed, the same might be true of other cause areas, e.g. work on a malaria vaccine or clean meat. Torres does not seem to accord much weight to how much longtermists recognise this as a moral dilemma and feel deeply conflicted – most longtermists began as committed to ending the moral crimes of extreme poverty or factory farming. There are many huge tragedies, but one must unfortunately choose where to spend one’s limited time and resources.

Longtermism is committed to the view that future generations matter morally. They are moral equals. When someone is born is a morally irrelevant fact, like their race, gender, nationality or sexuality. Furthermore, present people are in an unjust, exploitative power imbalance with future generations. Future generations have no voice or vote in our political and economic systems. They can do nothing to affect us. Our current political and economic systems are set up to overwhelmingly benefit those currently alive, often at the cost of exploiting, and loading costs onto, future generations.

This lack of recognition of moral equality, lack of representation, power imbalance and exploitation shares many characteristics with white supremacy/racism/colonialism and other unjust power structures. It is ironic to accuse a movement arguing on behalf of the voiceless of being a form of white supremacy.

Comments (75)
[anonymous] · 3y

It is very generous to characterise Torres' post as insightful and thought-provoking. He characterises various longtermists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation, and one that he very obviously throws around due to his own personal vendettas against certain people. For example, despite many of his former colleagues at CSER also being longtermists, he doesn't call them nazis, because he doesn't believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in longtermism.

A quick point of clarification: Phil Torres was never staff at CSER; he was a visitor for a couple of months a few years ago. He has unfortunately misrepresented himself as working at CSER in various media (unclear if deliberately or not). (And FWIW he has made similar allusions, albeit thinly veiled, about me.)

HaydnBelfield · 3y
I'm really sorry to hear that from both of you; I agree it's a serious accusation. For longtermism as a whole, as I argued in the post, I don't understand describing it as white supremacy – like e.g. antiracism or feminism, longtermism is opposed to an unjust power structure.
[anonymous] · 3y

If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it – e.g. when he started casually throwing around rape allegations about celebrities on Facebook and tagging me in the comments, and then calling me and others nazis. Why do you and your colleagues continue to extensively collaborate with him?

To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him. 

[Disclaimer: I am co-Director at CSER. While much of what I will write intersects with professional responsibilities, it is primarily written from a personal perspective, as this is a deeply personal matter for me. Apologies in advance if that's confusing; this is a distressing and difficult topic for me, and I may come back and edit. I may also delete my comment, for professional or personal/emotional reasons.]

I am sympathetic to Halstead's position here, and feel I need to write my own perspective. Clearly to the extent that CSER has - whether directly or indirectly - served to legitimise such attacks by Torres on colleagues in the field, I bear a portion of responsibility as someone in a leadership position. I do not feel it would be right or appropriate for me to speak for all colleagues, but I would like to emphasise that individually I do not, in any way, condone this conduct, and I apologise for it, and for any failings on my individual part that may have contributed.

My personal impression supports the case Halstead makes. Comments about my 'whiteness', and insinuations regarding my 'real' reasons for objecting to positions taken by Torres only came after I objected publicly... (read more)

Addendum: There's a saying that "no matter what side of an argument you're on, you'll always find someone on your side who you wish was on the other side".

There is a seam running through Torres's work that challenges xrisk/longtermism/EA on the grounds of the limitations of being led and formulated by a mostly elite, developed-world community.

Like many people in longtermism/xrisk, I think there is a valid concern here. xrisk/longtermism/EA all started in a combination of elite British universities and US communities (e.g. the Bay Area). They had to start somewhere. I am of the view that they shouldn't stay that way.

I think it's valid to ask whether there are assumptions embedded within these frameworks at this stage that should be challenged, and to posit that these would be challenged most effectively by people with a very different background and perspective. I think it's valid to argue that thinking, planning for, and efforts to shape the long-term future should not be driven by a community that is overwhelmingly from one particular background and that doesn't draw on and incorporate the perspectives of a community that reflects more of global societies and cultures. Work by such... (read more)

I completely agree with all of this, and am glad you laid it out so clearly.

jtm · 3y
Seconded.

I just wanted to say that this is a beautiful comment. Thank you for sharing your perspective in such an elegant, careful and nuanced manner.

I don't have any comment to make about Torres or his motives (I think I was in a room with him once). However, as a more general point, I think it can still make sense to engage with someone's arguments, whatever their motivation, at least if there are other people who take them seriously. I also don't have a view on whether others in the longtermism/X-risk world do take Torres's concern seriously, it's not really my patch.

Despite disagreeing with most of it, including but not limited to the things highlighted in this post, I think that Torres's post is fairly characterised as thought-provoking. I'm glad Joshua included it in the syllabus, also glad he caveated its inclusion, and think this response by Haydn is useful.

I haven't interacted with Phil much at all, so this is a comment purely on the essay, and not a defense of other claims he's made or how he's interacted with you. 

edit in 2022, as this comment is still occasionally receiving votes:
I stand by the above, but having read several other pieces since, displaying increasing levels of bad faith, I'm increasingly sympathetic to those who would rather not engage with it.

jtm · 3y

I second most of what Alex says here.  Like him, I only know about this particular essay from Torres, so I will limit my comments to that.

Notwithstanding my own objections to its tone and arguments, this essay did provoke important thoughts for me – as well as for other committed longtermists with whom I shared it – and that was why I ultimately ended up including it on the syllabus. The fact that, within 48 hours, someone put in enough effort to write a detailed forum post about the substance of the essay suggests that it can, in fact, provoke the kinds of discussions about important subjects that I was hoping to see. 

Indeed, it is exactly because I think the presentation in this essay leaves something to be desired that I would love to see more community discussion on some of these critiques of longtermism, so that their strongest possible versions can be evaluated. I realise I haven't actually specified which among the essay's many arguments that I find interesting, so I hope I will find time to do that at some point, whether in this thread or a separate post.

 

> Like him, I only know about this particular essay from Torres, so I will limit my comments to that.

I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like 'white supremacy' and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others—are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.

(To be clear: I think the syllabus is otherwise great, and kudos for creating it!)

EDIT: See Seán's comment for further elaboration on points (1) and (2) above.

Genuine question: if someone has views that are widely considered repugnant (in this case that longtermists are white supremacists) but otherwise raises points that some people find interesting and thought-provoking, should we:

A) Strongly condemn the repugnant ideas whilst genuinely engaging with the other ideas

B) Ignore the person completely / cancel them

If the person is clearly trolling or not writing in good faith then I'd imagine B) is the best response, but if Torres is in fact trolling then I find it surprising that some people find some of his ideas interesting / thought-provoking.

(Just to reiterate this is a genuine question I'm not stating a view one way or the other and I also haven't read Torres' post)

In this case, I would say it's not the mere fact that they hold views widely considered repugnant, but the conjunction of that fact with decisive evidence of intellectual dishonesty (that some people found his writings thought provoking isn't necessarily in tension with the existence of this evidence). Even then you probably could conceive of scenarios where the points raised are so insightful that one should still engage with the author, but I think it's pretty clear this isn't one of those cases.

The last time I tried to isolate the variable of intellectual dishonesty using a non-culture-war example on this forum (in this case using fairly non-controversial (to EAs) examples of intellectual dishonesty, and with academic figures that I at least don't think are unusually insightful by EA lights), commentators appeared to be against the within-EA cancellation of them, and instead opted for a position more like:

> I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

This appears broadly analogous to how jtm presented Torres' book in his syllabus. Now of course a) there are nontrivial framing effects so perhaps people might like to revise their conclusions in my comment and b) you might have alternative reasons to not cite Torres in certain situations (e.g. a very high standard for quality of argument, or deciding that personal attacks on fellow movement members are verboten), but at least the triplet-conjunction presented in your comment (bad opinions + int... (read more)

Aaron Gertler · 3y · Moderator Comment

As the Forum’s lead moderator, I’m posting this message, but it was written collaboratively by several moderators after a long discussion.

As a result of several comments on this post, as well as a pattern of antagonistic behavior, Phil Torres has been banned from the EA Forum for one year.

Our rules say that we discourage, and may delete, "unnecessary rudeness or offensiveness" and "behavior that interferes with good discourse". Calling someone a jerk and swearing at them is unnecessarily rude, and interferes with good discourse.

Phil also repeatedly accuses Sean of lying:

> I am trying to stay calm, but I am honestly pretty f*cking upset that you repeatedly lie in your comments above, Sean [...] I won't include your response, Sean, because I'm not a jerk like you.

> How can someone lie this much about a colleague and still have a job?

> You repeatedly lied in your comments above. Unprofessional. I don't know how you can keep your job while lying about a colleague like that.

After having seen the material shared by Phil and Sean (who sent us some additional material he didn’t want shared on the Forum), we think the claims in question are open to interpretation but clearly not deliberate lies... (read more)

[This comment is a tangential and clarifying question; I haven't yet read your post]

> Ord’s book is a great restatement of the ethical case, though I disagree with his prioritisation of climate change, nuclear weapons and collapse

If I didn't know anything about you, I'd assume this meant "Toby Ord suggests climate change, nuclear weapons, and collapse should be fairly high priorities. I disagree (while largely agreeing with Ord's other priorities)." 

But I'm guessing you might actually mean "Toby Ord suggests climate change, nuclear weapons, and collapse should be much lower priorities than things like AI and biorisk (though they should still get substantial resources, and be much higher priorities than things like bednet distribution). I disagree; I think those things should be similarly high priorities to things like AI and biorisk."

Is that guess correct? 

I'm not sure whether my guess is based on things I've read from you, vs just a general impression about what views seem common at CSER, so I could definitely be wrong.

That's right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I've got a draft book chapter on this, which I hope to be able to share a preprint of soon. 

mic · 2y
Is your preprint available now? I'd be curious to read your thoughts about why climate change and nuclear war should be prioritized more.

Thanks, Haydn, for writing this thoughtful post. I am glad that you (hopefully) found something from the syllabus useful and that you took the time to read and write about this essay.

I would love to write a longer post about Torres' essay and engage in a fuller discussion of your points right away, but I'm afraid I wouldn't get around to that for a while. So, as an unsatisfactory substitute, I will instead just highlight three parts of your post that I particularly agreed with, as well as two parts that I believe deserve further clarification or context.

A)... (read more)

A little historical background - one of my first introductions to proto-effective altruism was through corresponding with Nick Beckstead while he was a graduate student, around the time he would have been writing this dissertation. He was one of the first American members of Giving What We Can (which at the time was solely focused on global poverty), and at the time donated 10% of his graduate stipend to charities addressing global poverty. When I read this passage from his dissertation, I think of the context provided by his personal actions.

I think that "other things being equal" is doing a lot of work in the passage. I know that he was well aware of how much more cost-effective it is to save lives in poor economies than in rich ones, which is why he personally put his money toward global health.

jtm · 3y
Thanks for the context. I should note that I did not in any way intend to disparage Beckstead's personal character or motivations, which I definitely assume to be both admirable and altruistic. As stated in my comment, I found the quote relevant for the argument from Torres that Haydn discussed in this post. I also just generally think the argument itself is worth discussing, including by considering how it might be interpreted by readers who do not have the context provided by the author's personal actions.

Happy to have a go; the "in/out of context" is a large part of the problem here. (Note that I don't think I agree with Beckstead's argument for reasons given towards the end).

(1) The thesis (198 pages of it!) is about shaping the far future, and operates on staggering timescales. Some of it like this quote is written in the first person, which has the effect of putting it in the present-day context, but these are at their heart philosophical arguments abstracted from time and space. This is a thing philosophers do.

If I were to apply the argument to the 12th-century world, I might claim that saving a person in what is now modern-day Turkey would have greater ripple effects than saving a person in war-ravaged Britain. The former was light-years further ahead in science and technology, chock-full of incredible Muslim scholar-engineers like Al-Jazari (seriously; read about this guy). I might be wrong of course; the future is unpredictable and these ripples might be wiped out in the next century by a Mongol horde (as for the most part did happen); but wrong on different grounds.

And earlier in the thesis Beckstead provides a whole heap of caveats (in addition to 'all oth... (read more)

Here is an article by Phil Torres arguing that the rise of Islam represents a very significant and growing existential risk.

https://hplusmagazine.com/2015/11/17/to-survive-we-must-go-extinct-apocalyptic-terrorism-and-transhumanism/

I will quote a key paragraph:

"Consider the claim that there will be 2.76 billion Muslims by 2050. Now, 1% of this number equals 27.6 million people, roughly 26.2 million more than the number of military personnel on active duty in the US today. It follows that if even 1% of this figure were to hold “active apocalyptic” views, humanity could be in for a catastrophe like nothing we’ve ever experienced before."

Firstly, this is nonsense. The proposition that 1% of Muslims would hold "active apocalyptic" views and be prepared to act on it is pure nonsense. And "if even 1%" suggests this is the author lowballing.

Secondly, this is fear-mongering against one of the most feared and discriminated-against communities in the West, being written for a Western audience.

Thirdly, it utilises another standard racism trope, population replacement – look at the growing number of the scary 'other'. They threaten to overrun the US's good ol' apple-pie armed forces.

This was not a... (read more)

Julia_Wise · 3y
Understood!

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

jtm · 3y
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion? I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, e.g. by empirical or historical evidence. Since Beckstead didn't do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises for the argument are extremely speculative. I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty, or a disclaimer that we should be very wary of any argument that leads to conclusions like 'we should prioritise people like ourselves.' Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.

> I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?


Yep that is what I'm saying. I think I don't agree but thanks for explaining :)

Garrison · 3y
The main issue I have with this quote is that it's so divorced from the reality of how cost-effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat 'other things being equal', but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries.

I don't understand why thinking like that quote isn't totally passé to EAs. At least to utilitarian EAs. If anyone's allowed to think hypothetically ("divorced from the reality") I would think it would be a philosophy grad student writing a dissertation.


I just wanted to echo your sentiments in the last part of your comment re: Beckstead's quote about the value of saving lives in the developed world. Having briefly looked at where this quote is situated in Beckstead's PhD thesis (which, judging by the parts I've previously read, is excellent), the context doesn't significantly alter how this quote ought to be construed. 

I think this is at the very least an eyebrow-raising claim, and I don't think Torres is too far off the mark to think that the label of white supremacism, at least in the "scholarly" sense of the term, could apply here. Though it's vital to note that this is in no way to insinuate that Beckstead is a white supremacist, i.e., someone psychologically motivated by white supremacist ideas. If Torres has insinuated this elsewhere, then that's another matter. 

It also needs noting that, contra Torres, longtermism simpliciter is not committed to the view espoused in the Beckstead quote. This view falls out of some particular commitments which give rise to longtermism (e.g. total utilitarianism). The OP does a good job of pointing out that there are other "routes" to longtermism, which Ord articulates, and I think ... (read more)

Rational discourse becomes very difficult when a position is characterized by a term with an extremely negative connotation in everyday contexts—and one which, justifiably, arouses strong emotions—on the grounds that the term is being used in a "technical" sense whose meaning or even existence remains unknown to the vast majority of the population, including many readers of this forum. For the sake of both clarity and fairness to the authors whose views are being discussed, I strongly suggest tabooing this term.

> but I think it's worth noting that, were various longtermist ideas to enter mainstream discourse, this is exactly the kind of critique they'd receive (unfairly or not!) - so it's worth considering how plausible these charges are, and how longtermists might respond.

This is a good point, and worth being mindful of as longtermism becomes more mainstream/widespread.

Well there's a huge obvious problem with an "all generations are equal" moral theory. How do you even know whether you're talking about actual moral agents? For all we know, maybe in the next few years some giant asteroid will wipe out human life entirely.  

We can try to work with expected values and probabilities, but that only really works when you properly justify what probability you're giving certain outcomes. I have no idea how someone gets something like a 1/6th probability of extinction risk from causes xyz especially when the science and tech of a few of these causes are speculative, and frankly it doesn't sound possible. 

We actually do have a good probability estimate for a large asteroid striking the Earth within the next 100 years, btw. It was the product of a major investigation; I believe it was 1/150,000,000.

Probabilities don't have to be a product of a legible, objective or formal process. It can be useful to state our subjective beliefs as probabilities to use them as inputs to a process like that, but also generally it's just good mental habit to try to maintain a sense of your level of confidence about uncertain events.

If Ord is giving numbers like a 1/6 chance, he needs to back them up with math. Sure, the chance of asteroid extinction can be calculated by astronomers, but estimating the probability of extinction by climate change or rogue AI is a highly suspect endeavor when one of those things is currently purely imaginary and the other is a complex field with uncertain predictive models that generally only agree on pretty broad aspects of the planet.