Mauricio

Many Undergrads Should Take Light Courseloads

Thanks, Aaron! I've felt similarly--crazy how much time (and effort/attention/stress) that frees up :)

Participate in or facilitate fellowships/reading groups for EA if EA is something you want to do. Having other people depend on you or expect things from you can be really motivating. 

I'm into the general point here. I'd also encourage people to be much more ambitious in applying this advice--anecdotally, a significantly lighter courseload leaves enough time to e.g. organize whole fellowships (although facilitation/participation can definitely be a good starting point).

Many Undergrads Should Take Light Courseloads

Thanks! That's right, I was mainly thinking about value for group organizing (although seems generally valuable for making connections).

Why I am probably not a longtermist

Thanks! I'm not very familiar with Haidt's work, so this could very easily be misinformed, but I imagine that other moral foundations / forms of value could also give us some reasons to be quite concerned about the long term, e.g.:

  • We might be concerned with degrading--or betraying--our species / traditions / potential.
  • You mention meaninglessness--a long, empty future strikes me as a very meaningless one.

(This stuff might not be enough to justify strong longtermism, but maybe it's enough to justify weak longtermism--seeing the long term as a major concern.)

Also, I absolutely value positive experiences! [...] I think I just prioritise it a bit less

Oh, interesting! Then (with the additions you mentioned) you might find the arguments compelling?

Why I am probably not a longtermist

Thanks! I can see that for people who accept (relatively strong versions of) the asymmetry. But (I think) we're talking about what a wide range of ethical views say--is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence? (I imagine, if they thought it was that bad, they wouldn't call it "the good life"?)

Why I am probably not a longtermist

Fair points. Your first paragraph seems like a good reason for me to take back the example of freedom/autonomy, although I think the other examples remain relevant, at least for nontrivial minority views. (I imagine, for example, that many people wouldn't be too concerned about adding more people to a loving future, but they would be sad about a future having no love at all, e.g. due to extinction.)

(Maybe there's some asymmetry in people's views toward autonomy? I share your intuition that most people would see it as silly to create people so they can have autonomy. But I also imagine that many people would see extinction as an affront to the autonomy that future people otherwise would have had, since extinction would be choosing for them that their lives aren't worthwhile.)

only about 50% in the UK sample thought extinction was uniquely bad

This seems like more than enough to support the claim that a wide variety of groups disvalue extinction, on (some) reflection.

I think you're generally right that a significant fraction of non-utilitarian views wouldn't be extremely concerned by extinction, especially under pessimistic empirical assumptions about the future. (I'd be more hesitant to say that many would see it as an actively good thing, at least since many common views seem like they'd strongly disapprove of the harm that would be involved in many plausible extinction scenarios.) So I'd weaken my original claim to something like: a significant fraction of non-utilitarian views would see extinction as very bad, especially under somewhat optimistic assumptions about the future (much weaker assumptions than e.g. "humanity is inherently super awesome").

Why I am probably not a longtermist

Hm, I can't wrap my head around rejecting transitivity.

we could adopt a sort of "tethered good approach" (following Christine Korsgaard), where we maintain that claims like "A is better/more valuable than B" are only meaningful insofar as they are reducible to claims like "A is better/more valuable than B for person P."

Does this imply that bringing tortured lives into existence is morally neutral? I find that very implausible. (You could get out of that conclusion by claiming an asymmetry, but I haven't seen reasons to think that people with objective list theories of welfare buy into that.) This view also seems suspiciously committed to sketchy notions of personhood. 

Why I am probably not a longtermist

Thanks! I think I see how these values are contingent in the sense that, say, you can't have human relationships without humans. Are you saying they're also contingent in the sense that (*) creating new lives with these things has no value? That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?

(In other words, "objective list" theories of well-being (if they hold some lives to be better than neutral) + transitivity seem to imply that creating good lives is possible and valuable, which implies (*) is false. People with these theories of well-being could avoid that conclusion by (a) rejecting that some lives are better than neutral, or (b) rejecting transitivity. Do they?)

Why I am probably not a longtermist

Thanks!

I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others

Curious why you think this first part? Seems plausible but not obvious to me.

in an empty future, everyone has full freedom/autonomy and gets everything they want

I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So non-empty flourishing futures remain the only futures in which there is flourishing without misery.)

in an empty future [...] no one faces injustice, no one suffers

Yup, agreed that empty futures are better than some alternatives under many value systems. My claim is just that many value systems leave substantial room for the world to be better than empty.

I think most people think of the badness of extinction as primarily the deaths, not the prevented future lives, though, so averting extinction wouldn't get astronomical weight.

Yeah, agreed that something probably won't get astronomical weight if we're doing (non-fanatical forms of) moral pluralism. The paper you cite seems to suggest that, although people initially see the badness of extinction as primarily the deaths, that's less true when they reflect:

More people find extinction uniquely bad when [...] they are explicitly prompted to consider long-term consequences of the catastrophes. [...] Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Why I am probably not a longtermist

Additional thoughts:

I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.

I think the word "totalitarianism" is pulling too much weight here. I'm sympathetic to something like "existential security requires a great combination of preventative capabilities and civilizational resilience." I don't see why that must involve anything as nasty as totalitarianism. As one alternative, advances in automation might allow for decentralized, narrow, and transparent forms of surveillance--preventing harmful actions without leaving room for misuse of data (which I'd guess is our usual main concern about mass surveillance).

(Calling something "soft totalitarianism" also feels a bit odd, like calling something "mild extremism." Totalitarianism has historically been horrible in large part because it's been so far from being soft/moderate, so sticking the connotations of totalitarianism onto soft/moderate futures may mislead us into underestimating their value.)

I also have traditional Pascal’s mugging type concerns for prioritizing the potentially small probability of a very large civilisation.

I don't see how traditional Pascal's mugging type concerns are applicable here. As I understand them, those apply to using expected value reasoning with very low (subjective) probabilities. But surely "humanity will last with at least our current population for as long as the average mammalian species" (which implies our future is vast) is a far more plausible claim than "I'm a magical mugger from the seventh dimension"?
