Oxford philosopher William MacAskill’s new book, What We Owe the Future, caused quite a stir this month. It’s the latest salvo from effective altruism (EA), a social movement whose adherents aim to have the greatest positive impact on the world through the use of strategy, data, and evidence. MacAskill’s new tome makes the case for a growing flank of EA thought called “longtermism.” Longtermists argue that our actions today can improve the lives of humans way, way, way down the line — we’re talking billions, trillions of years — and that it is in fact our moral responsibility to do so.

In many ways, longtermism is a straightforward, uncontroversially good idea. Humankind has long been concerned with providing for future generations: not just our children or grandchildren, but even those we will never have the chance to meet. This concern reflects the Seventh Generation Principle held by the indigenous Haudenosaunee (a.k.a. Iroquois) people, which urges those alive today to consider the impact of their actions seven generations ahead. MacAskill echoes the defining problem of intergenerational morality: people in the distant future are currently “voiceless,” unable to advocate for themselves, which is why we must act with them in mind. But MacAskill’s optimism could be disastrous for non-human animals, members of the millions of species who, for better or worse, share this planet with us.

Read the rest on Forbes.

Comments (6)

Center for Reducing Suffering is longtermist but focuses on the issues this article is concerned about. Suffering-focused views are not very popular, though, and I agree that most longtermist organizations and individuals seem to be focused on future humans more than on future non-human beings; at least, that's my impression, and I could be wrong. Center on Long-Term Risk is also longtermist, but focused on reducing suffering among all future beings.

Thank you for the insights!

Before I read this, I took it mostly as a given that most people's mainline scenario for astronomical numbers of people involved predominantly digital people. If that is your mainline scenario, the arguments for astronomical amounts of animal suffering seem much weaker (I think).

Fai

Excuse me for repeating some of the things Brian said in reply to Calebp, since I want to give a complete formulation of my arguments.

I think there are a few potential pushbacks to the "digital beings will dominate" argument:

  • EMPIRICAL - Some digital beings might be nonhuman animals, or very similar to nonhuman animals in the relevant senses. If so, we also need to consider the possibility that people's attitudes toward physical nonhuman animals might carry over to their attitudes toward digital nonhuman animals. Some reasons people might simulate nonhuman animals:
    • Simulations for scientific studies
    • For education
    • For amusement/competitions
    • For aesthetics
    • For sadism
  • EMPIRICAL - Brian mentioned that we might treat some digital people like we treat nonhuman animals. One reason could be that we make digital people that are like nonhuman animals. (And there seem to be incentives for this.) In this case, again, attitudes toward nonhuman animals now might matter a lot.
  • EMPIRICAL - How can we be sure that creating digital beings won't make use of physical animals? For instance, a Singaporean government-supported startup seems to be planning to sell semiconductors made from insects.
  • META/FIELD BUILDING - No longtermists ever seem to try to downplay the importance of future flesh-and-bone humans on the grounds that they will be dominated by digital beings; in fact, they advocate for them. So I wonder why there is so much willingness, if not eagerness, to downplay the importance of nonhuman animals in the long-term future using the digital people argument. (Yes, what happens to flesh-and-bone humans is likely to be relevant to what will happen to digital people. But the same goes for nonhuman animals too.)
  • PHILOSOPHICAL - The existence of even larger numbers doesn't make an astronomical number any less astronomical. We then need to think about certain philosophical questions, such as:
    • Can suffering and happiness actually be canceled out (both mentally and morally)?
    • Are above-zero experiences illusions? (There are people who argue that the feeling of happiness is not an actual positive state.)
    • Are we justified in systematically leaving out proportionally small population groups, especially when those groups are experiencing mostly extreme suffering?

Well articulated. Thanks for adding this.

I think we should have a lot of uncertainty about the future. For example:

  1. There could be a high percentage of digital people but still some non-digital people, in which case animals would still matter.

  2. Digital people might cause suffering to digital animals.

  3. We could treat digital people as terribly as we do animals.

Others have written about these ideas here: https://forum.effectivealtruism.org/topics/non-humans-and-the-long-term-future.

Thanks for your comment!
