The reading list below was originally used for an internal GPI reading group. These reading groups are a way of doing early-stage exploration of new areas that seem promising from an academic global priorities research perspective. Each topic typically serves as the theme for one or two weekly discussions, and in most cases those attending will have read the suggested materials beforehand.

Since I thought it could be a valuable resource for those interested in academic global priorities research, I'm sharing it here with permission from the authors. All credit for the list below goes to them.

Disclaimer: The views presented in the readings suggested below do not necessarily represent views held by me, GPI, or any GPI staff member.

Motivation

There are a variety of reasons why EAs might care about the notion of moral progress.

  1. As a cause area. Moral progress might be viewed as a cause area, in the sense that we might want to try to bring moral progress about. Plausibly, the better we understand moral progress, the better placed we will be to bring it about.
  2. Deferral to the Future. Moral progress might seem relevant to determining the extent to which we should defer decisions and resources to the future.
  3. Value of the Future. Moral progress might tell us something about the value of the future. Indeed, one of the things we might mean by “moral progress” might simply be that the future will be better than the present in morally relevant respects.
  4. Contingency. The question of the contingency of moral progress is plausibly an important one for longtermists. In particular, if moral progress is nearly inevitable, then we might be less worried about which values shape the immediate future. Likewise, different views on the contingency of moral progress may have different implications for our beliefs about the progress of moral behavior in artificial intelligence.

1. Overview

2. Past Trends in Progress

3. Moral Advocacy/Moral Progress as Cause Area

4. Contingency of Moral Values and Progress

  • MacAskill, What We Owe the Future, chapters 3 and 4

5. The Sign of the Future

6. Moral Circle Expansion

7. Evolution and Biomedical Enhancement

8. Further Reading



Thanks for sharing. I've got a couple of questions, in case you know any answers.

1. Are you aware of anyone in EA who has studied the problem of moral regress?
2. Are there any authors (in or outside of EA) who have studied the question of how to make moral progress so as to decrease the risk that the future may be morally much worse, whose research you might be willing to include in a reading list like this?

Are you aware of anyone in EA who has studied the problem of moral regress?

Somewhat related: Gwern on the Narrowing Circle.

Yeah, I've seen that one before. Thanks for sharing. I haven't read it in full yet, so I can't yet evaluate whether it'd be a fitting entry for the reading list, but I'll check it out.