As I understand, the following two positions are largely accepted in the EA community:

  1. Temporal position should not impact ethics (hence longtermism)
  2. Neutrality against creating happy lives

But if we are time-agnostic, then neutrality against creating happy lives seems to imply a preference for extinction over any future where even a tiny amount of suffering exists.
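
To make the worry concrete, here is a toy calculation with numbers of my own choosing: if happy lives count for nothing and suffering counts against, then any future containing suffering scores below the empty future.

```latex
% Toy numbers of my own, assuming each happy life counts for exactly zero
% and extinction (no new lives) counts as zero:
\[
V(\text{extinction}) = 0, \qquad
V(\text{populated future}) = \underbrace{10^{12} \cdot 0}_{\text{happy lives}}
  + \underbrace{(-1)}_{\text{one suffering life}} = -1 \;<\; 0 .
\]
```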

So am I missing something here? (Perhaps "neutrality against creating happy lives" can't be expressed in a way that's temporally agnostic?)

My short answer is that 'neutrality against creating happy lives' is not a mainstream position in the EA community. Some do hold that view, but I think it's a minority. Most think that creating happy lives is good.

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

I have never seen a survey on this, but I think most people here adopt a totalist view on which creating new happy people is good, because of e.g. the classic transitivity argument. So you were correct to be confused!
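
For readers who haven't seen it, here is a rough sketch of one standard form of that transitivity argument (my reconstruction, with made-up welfare levels):

```latex
% Let A be a world and A^{+n} be A plus one extra person at welfare n > 0.
\[
\text{Neutrality: } A \sim A^{+10} \ \text{and}\ A \sim A^{+20}
\quad\Rightarrow\quad A^{+10} \sim A^{+20} \ \text{(by transitivity of } \sim\text{)}.
\]
% But A^{+20} is better for the extra person and worse for no one, so A^{+20} is
% strictly better than A^{+10}, contradicting the line above. Hence creating the
% happy life cannot be strictly neutral.
```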

I want to focus on the following because it seems to be a problematic misunderstanding:

"1. Temporal position should not impact ethics (hence longtermism)"

This genuinely does seem to be a common view in EA: namely, that when someone exists doesn't (in itself) matter, and that, given impartiality with respect to time, longtermism follows. Longtermism is the view that we should be particularly concerned with ensuring that long-run outcomes go well.

The reason this understanding is problematic is that probably the two strongest objections to longtermism (in the sense that, if these objections hold, they rob longtermism of its practical force) have nothing to do with temporal position in itself. I won't say whether these objections are, all things considered, plausible; I'll merely set out what they are.

First, there is the epistemic objection to longtermism (sometimes called the 'tractability', 'washing-out', or 'cluelessness' objection): in short, that we can't be confident enough about the impact our actions will have on the long-run future to make it the practical priority. See this for recent discussion and references: https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis#comments. Note that this has nothing to do with people mattering differently because of their position in time.

Second, there is the ethical objection that appeals to person-affecting views in population ethics and has the implication that creating (happy) lives is neutral.* What's the justification for this implication? One justification could be 'presentism', the view that only presently existing people matter. This is a justification based on temporal position per se, but it is (I think) highly implausible.

An alternative justification, which does not rely on temporal position in itself, is 'necessitarianism', the view that the only people who matter are those who exist necessarily (i.e. in all outcomes under consideration). The motivation for this is (1) that outcomes can only be better or worse if they are better or worse for someone (the 'person-affecting restriction') and (2) that existence is not comparable to non-existence for someone ('non-comparativism'). In short, it isn't better to create lives, because it's not better for the people who get created. (I am quite sympathetic to this view and think too many EAs dismiss it too quickly, often without understanding it.)

The further thought is that our actions change which specific individuals get created (e.g. consider whether any particular individual alive today would exist if Napoleon had won at Waterloo). The result is that our actions, which aim to benefit (far) future people, cause different people to exist. This isn't better for either the people who would have existed or the people who will actually exist. This is known as the 'non-identity problem'. Necessitarians might explain that, although we really want to help (far) future people, we simply can't: there is nothing, in practice, we can do to make their lives better. (Rough analogy: there is nothing, in practice, we can do to make trees' lives go better; only sentient entities can have well-being.)
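
A minimal sketch of how these two pieces interact, with invented people and welfare numbers (my own illustration, not anything quoted from the literature): because our action changes who exists, a totalist sees a difference between the outcomes, while a necessitarian, who only counts people existing in every outcome, does not.

```python
# Toy illustration (my own, with invented names and welfare numbers):
# totalism vs necessitarianism when an action changes who will exist.

outcome_1 = {"Ann": 5, "Bob": 7}   # if we do nothing, Ann and Bob will exist
outcome_2 = {"Ann": 5, "Cat": 9}   # if we act, Cat exists instead of Bob

def total_value(outcome):
    # Totalism: sum the welfare of everyone who exists in the outcome.
    return sum(outcome.values())

def necessitarian_value(outcome, all_outcomes):
    # Necessitarianism: only people who exist in every outcome count.
    necessary = set.intersection(*(set(o) for o in all_outcomes))
    return sum(w for person, w in outcome.items() if person in necessary)

outcomes = [outcome_1, outcome_2]
print(total_value(outcome_1), total_value(outcome_2))    # 12 14 -> totalism prefers acting
print(necessitarian_value(outcome_1, outcomes),
      necessitarian_value(outcome_2, outcomes))          # 5 5  -> necessitarianism is indifferent
```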

Note, crucially, this has nothing to do with temporal position in itself either. It's the combination of only necessary lives mattering and our actions changing which people will exist. Temporal position is ethically relevant (i.e. instrumentally important), but not ethically significant (i.e. doesn't matter in itself).

*You can have symmetric person-affecting views (creating lives is neutral) or asymmetric person-affecting views (creating happy lives is neutral, creating unhappy lives is bad). Asymmetric PAVs may or may not have concern for the long term, depending on what the future looks like and whether they think adding happy lives can compensate for adding unhappy lives. I don't want to get into this here, as this is already long enough.

I agree with Jack that neutrality about creating happy lives is (probably) a minority view within EA, although I'm not sure. 80% of EAs are consequentialist according to the most recent EA survey, and most of those probably reject neutrality: https://www.rethinkpriorities.org/blog/2019/12/5/ea-survey-2019-series-community-demographics-amp-characteristics

The conclusion in favour of extinction doesn't necessarily follow, though, depending on the exact framing of the asymmetry and neutrality (although I think it would according to the views CLR defends, but I don't even think everyone at CLR agrees with those views). See the soft asymmetry and conclusion here: https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/

Note that this view does satisfy transitivity, but not the independence of irrelevant alternatives, i.e. whether A is better than B can depend on what other options are available. I think standard intuitions about the repugnant conclusion, which the soft asymmetry avoids (if I recall correctly), do not satisfy the independence of irrelevant alternatives. There are other cases where independence is violated by common intuitions: https://forum.effectivealtruism.org/posts/HyeTgKBv7DjZYjcQT/the-problem-with-person-affecting-views?commentId=qPDNPCsWuCF86hsqi
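
To illustrate the kind of menu-dependence being described, here is a toy person-affecting choice rule of my own construction (it is not Thomas's actual formalism, and the names and welfare numbers are invented). Whether the "create a happy person" option is choiceworthy relative to the status quo depends on whether a better version of that same life is also on the menu.

```python
# Toy menu-dependent choice rule (my own construction, not from the linked paper).
# Each option maps the people who would exist under it to their welfare levels.
A = {"Ann": 5}              # status quo: no extra person is created
B = {"Ann": 5, "Pat": 5}    # extra person Pat created, moderately happy
C = {"Ann": 5, "Pat": 10}   # the same person Pat created, happier

def dominated(x, y):
    """y dominates x if everyone existing in x also exists in y at least as well
    off, and at least one of them is strictly better off in y. If someone in x
    wouldn't exist under y, y doesn't dominate x (existence vs non-existence is
    treated as incomparable)."""
    if not all(p in y and y[p] >= w for p, w in x.items()):
        return False
    return any(y[p] > w for p, w in x.items())

def permissible(menu):
    # An option is permissible iff no other option on the menu dominates it.
    return [name for name, x in menu.items()
            if not any(dominated(x, y) for other, y in menu.items() if other != name)]

print(permissible({"A": A, "B": B}))          # ['A', 'B']  -> A and B on a par
print(permissible({"A": A, "B": B, "C": C}))  # ['A', 'C']  -> B now ruled out by C
```

So the comparison between A and B changes once C becomes available, which is the violation of the independence of irrelevant alternatives described above.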

For what it's worth, this is a newly proposed view, so it's likely that few people know about it, but I suspect it's closest to a temporally impartial version of most people's moral intuitions.

There's also the possibility of s-risks by omission, like failing to help aliens (causally or acausally), which extinction would exacerbate, although I'm personally skeptical that we would find and help aliens. Some discussion here: https://centerforreducingsuffering.org/s-risk-impact-distribution-is-double-tailed/

Personally, I basically agree with the views in that article by CLR; the asymmetry in particular is one of my strongest intuitions (the hard version: additional happy lives aren't good), and I think that an empty future would be optimal because of the asymmetry. I do not find this counterintuitive.
