210 karma · Joined Dec 2020


Philosophy DPhil at Oxford and Parfit Scholar at GPI https://www.elliott-thornley.com/


I haven't read this post yet, but it sounds like you might be interested in this paper on existential risks from a Thomist Christian perspective if you haven't seen it already.


Nice post! Consider this a vote for more summaries.

All good points, but Tarsney's argument doesn't depend on the assumption that longtermist interventions cannot accidentally increase x-risk. It just depends on the assumption that there's some way that we could spend $1 million that would increase the epistemic probability that humanity survives the next thousand years by at least 2×10^-14.
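To see where a threshold like that comes from, here's a back-of-the-envelope sketch. The specific numbers are my own illustrative assumptions, not Tarsney's figures; the point is just that the required probability shift is the benchmark's value divided by the value at stake.

```python
# Illustrative Tarsney-style threshold calculation.
# Both inputs are hypothetical stand-ins, not figures from the paper.

benchmark_value = 10_000   # assumed value (e.g. QALYs) that $1M buys via a neartermist benchmark
survival_value = 5e17      # assumed value at stake if humanity survives long-term

# The longtermist spend beats the benchmark iff the probability shift p satisfies
#   p * survival_value > benchmark_value,
# i.e. p > benchmark_value / survival_value.
required_shift = benchmark_value / survival_value
print(required_shift)
```

With these (made-up) inputs the required shift works out to 2×10^-14: tiny probability shifts suffice when the stakes are astronomical.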

Thanks! This is valuable feedback.

By 'persistent difference', Tarsney doesn't mean a difference that persists forever. He just means a difference that persists for a long time in expectation: long enough to make the expected value of the longtermist intervention greater than the expected value of the neartermist benchmark intervention.

Perhaps you want to know why we should think that we can make this kind of persistent difference. I can talk a little about that in another comment if so.

A point that seems worth noting, from Puzzles for Everyone:

In an especially striking example of conflating utilitarianism with anything remotely approaching systematic thinking, popular substacker Erik Hoel recently characterized the Beckstead & Thomas paper on decision-theoretic paradoxes as addressing “how poorly utilitarianism does in extreme scenarios of low probability but high impact payoffs.” Compare this with the very first sentence of the paper’s abstract: “We show that every theory of the value of uncertain prospects must have one of three unpalatable properties.” Not utilitarianism. Every theory.

(Alas, when I tried to point this out in the comments section, after a brief back-and-forth in which Erik initially doubled down on the conflation, he abruptly decided to instead delete my comments explaining his mistake.)

Yes, nice point. We could depart from the total view and go for a neutral band. But it's worth noting that this move comes with problems of its own.

Great point. But note that if lives of monk-like tranquility are neutral, that makes the Mirrored Repugnant Conclusion harder to accept:

For any population of hellish lives, there is a population of barely bad lives that is worse.

The total view in population ethics implies this Mirrored Repugnant Conclusion.

If lives of monk-like tranquility are neutral, then lives of monk-like tranquility plus a mosquito bite are barely bad, and so the total view implies:

For any population of hellish lives, there is a population of lives of monk-like tranquility plus a mosquito bite that is worse.
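The arithmetic behind the Mirrored Repugnant Conclusion can be made concrete with some illustrative welfare levels (my assumptions, not from the comment above):

```python
# The total view ranks populations by summed welfare, so a large enough
# population of barely bad lives sums to a worse total than a population
# of hellish lives. Welfare levels and population sizes are assumed.

hellish_total = 1_000 * -100            # 1,000 hellish lives at welfare -100
barely_bad_total = 20_000_000 * -0.01   # 20 million "tranquility + mosquito bite" lives at -0.01

# The barely bad population is worse on the total view.
assert barely_bad_total < hellish_total
```

Any welfare levels work, so long as the barely bad population is made large enough for its sum to dip below the hellish one.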

I think in ordinary cases, necessitarianism ends up looking a lot like presentism. If someone presently exists, then they exist regardless of my choices. If someone doesn't yet exist, their existence likely depends on my choices (there's probably something I could do to prevent their existence).

Necessitarianism and presentism do differ in some contrived cases, though. For example, suppose I'm the last living creature on Earth, and I'm about to die. I can either leave the Earth pristine or wreck the environment. Some alien will soon be born far away and then travel to Earth. This alien's life on Earth will be much better if I leave the Earth pristine. Presentism implies that it doesn't matter whether I wreck the Earth, because the alien doesn't exist yet. Necessitarianism implies that it would be bad to wreck the Earth, because the alien will exist regardless of what I do.

I'm just trying to fix that discrepancy.

I see. That seems like a good thing to do.

Here's another good argument against person-affecting views that can be explained pretty simply, due to Tomi Francis.

Person-affecting views imply that it's not good to add happy people. But Q is better than P, because Q is better for the hundred already-existing people, and the ten billion extra people in Q all live happy lives. And R is better than Q, because moving to R makes one hundred people's lives slightly worse and ten billion people's lives much better. Since betterness is transitive, R is better than P. R and P are identical except for the extra ten billion people living happy lives in R. Therefore, it's good to add happy people, and person-affecting views are false.
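A minimal numerical rendering of Francis's argument, with welfare levels chosen by me for illustration (the original presumably uses a diagram):

```python
# Each population maps a welfare level to the number of people at that level.
# The specific levels (10, 11, 1, 5) are illustrative assumptions.
P = {10: 100}                  # just the 100 already-existing people
Q = {11: 100, 1: 10**10}       # the 100 slightly better off, plus 10 billion barely happy people
R = {10: 100, 5: 10**10}       # the 100 back at their P-level, the 10 billion much better off

def total_welfare(pop):
    return sum(level * count for level, count in pop.items())

# Q > P: better for the 100 (11 > 10), and the extra lives are happy (1 > 0).
# R > Q: the 100 lose a little (10 < 11), the 10 billion gain a lot (5 > 1).
# By transitivity, R > P; yet R is just P plus 10 billion happy lives.
assert total_welfare(R) > total_welfare(Q) > total_welfare(P)
```

Note that the two pairwise betterness claims don't themselves presuppose the total view; the totals above just make it easy to check that the illustrative numbers fit the argument's description.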
