This is a special post for quick takes by anoni. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

TL;DR: naive critiques of long-termism. Can you convince me to keep donating to EA funds?

See positive arguments and introduction below. The following are meant to be naive critiques/questions:

  1. Critiques of utilitarianism. Even with the newer formulation of long-termism:
    1. Why should I care about people who won't exist? Say we go extinct; then what? (As in the movie "Her", this could be the smart choice.) I'm more on the pro-abortion side of that discussion. Why should X-risks be costly because of their opportunity cost rather than because of the immediate suffering they involve?
    2. Do we only care about humans? Isn't there some silly argument out there like "we should cultivate insect X because there will be a lot of them, and many lives mean a lot of happiness"?
    3. It's definitely not clear what is good in the long run (nor what's good now). True or false?
  2. Privilege/anti-capitalism critiques:
    1. This is more of a feeling. Does long-termism support capitalism? Doesn't the immediate solution of the world's most pressing problems require systemic change? Is long-termism just the one discourse that can adapt to the current state of affairs instead of demanding urgent change?
    2. Along the same lines, how are the preferences of non-privileged (e.g. poor) people taken into account when defining what is "effective"?
    3. Does this view incentivize the acceptance of suffering in the present moment and in future present moments?
    4. Is this view trying to solve the existential crisis that arises when imagining a future with a sustainable but non-growing (e.g. in population) society?
  3. Other
    1. The expected discounted return seems more appropriate, because the probability that our preferences change grows with time. True or false?
    2. How could something that feels this bad (ignoring current suffering to prioritize merely possible future happiness) be right?
    3. Is this a justification for natural selfishness (e.g. not giving up everything we "can" give up)?



Hi everyone! I have read some of the posts on long-termism. Maybe one hour of reading, which is obviously not enough given the time devoted and depth achieved by people around here. However, I still feel the idea is terrible, which is why I compiled the naive list above in the hope of having each point refuted. I definitely don't like seeing the funds give money to already-privileged people (arguably more privileged than me), but I trust there is a good reason behind it. Please add an "IMO" before each sentence.

The most compelling pro arguments for long-termist investing are: (1) it makes sense for good investments to be counter-intuitive (because they are the most neglected, and they wouldn't be neglected if they were intuitive), and (2) long-termism is more about the coincidence between the correctness of globally good actions and the correctness of those actions in the long-term future.

Any TL;DR-style answer will be much appreciated.

Hi - my intuitions fall in the other direction here, so I'm keen to explain why. Assume an implicit "IMO" in front of everything that follows.

    1.1: I have a younger brother. My parents could have stopped at one child, and my family would broadly still be happy, but my brother is generally happy and leads a good life. Similarly, if they'd had a third child, that child would probably have been happy and great too, and I would have loved them. All else being equal, I wish that youngest sibling could have existed. IMO these two sentiments aren't meaningfully distinct.

    1.2: We don't only care about humans. Sure, the argument for making more humans would apply to insects as well. However, most of the things that would kill all the humans would also kill everything else, so for me, not letting that happen is still much more of a priority.

    1.3: True on the specifics, false more generally. I don't know exactly what the world should look like, but I'm pretty sure people being happy is good, more people being happy is better, and everything being unrecoverably dead is neutral at most. 



    2.1: If we weren't potentially about to all die, I'd be more willing to think about this, but we have to survive the next century or two first. Whether capitalism makes things better or worse depends, for now, much more on whether it makes us more or less likely to all die than on anything else.

    2.2: I'm pretty sure non-privileged people also want to be alive and happy. 

    2.3: Possibly, and I'm OK with that. I'd rather live a worse life if it means my grandkids are more likely to survive and live happy ones. It's definitely better for everyone to be happier now, but that doesn't amount to much if we all die in the next century.

    2.4: If I could choose between a surviving-but-stable society and a growing one, I would choose the growing one. But both are better than an empty rock, so the priority for now is not dying, either way.



    3.1: I'm pretty sure we'll continue to want to be alive and happy, so false. People can't decide what their preferences are, or work to fulfil them, if they don't exist.

    3.2: Our moral intuitions were built for societies that look very different from today's. We like sugar and sex because we were supposed to go for fruit and reproduction; our moral intuitions aren't hugely different. IMO this is in a similar category to people caring more about saving one child than saving eight.

    3.3: No. 

I should clarify 3.3. For me, longtermism is partly the acknowledgement of much vaster moral stakes: so long as there are things we can do to help, they're no less important to do than short-termist interventions. (The usual arguments that it isn't helpful to demand too much of people still apply, though.)


1.1: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing or decreasing the number of people in the future, but about improving the lives of the future people who will exist anyway. That's compatible with longtermism: not all longtermism is about extinction risk. (See trajectory change and s-risk.)

1.2: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking of digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop, not just bad for humans.

1.3: You might want to have a look at cluelessness (the EA Forum and the GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future, and how we can influence it, is definitely extremely hard, but I don't think we're in such a decisively bad position that we can, in good conscience, just throw our hands up and conclude there's definitely nothing to be done here.



2.1 + 2.2: I don't really want to write anything on this right now.

2.3: Definite no. It just argues that trade-offs must be made, and that some bads are worse even than current suffering. Or rather: the amount of bad we can avert is greater than what we could avert by focusing on current suffering.

2.4: I don't understand what you're getting at.



3.1: I can't parse the question.

3.2: I think many longtermists struggle with this. Michelle Hutchinson recently wrote a post on the EA Forum about what still keeps her motivated; you can find it by searching her name on the EA Forum.

3.3: No. Longtermism per se doesn't say anything about how much to sacrifice personally. You can believe in longtermism and think you should give away your last penny and work every waking hour in a job you don't like. You can also not be a longtermist and think you should live a comfortable, expensive life because that's what's most sustainable. Leanings on this question might correlate with whether you're a longtermist, but in principle the question is orthogonal.


Sorry if the tone is brash; if so, that's unintentional, and I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed; I tend to be really slow otherwise.)
