
In order to influence the long-term future, we often need to understand what it might look like, and that partly depends on what we expect the relevant stakeholders to value. This is why the key question I ask (and have started trying to answer) in the present sequence is: what values should we expect powerful civilizations/agents to hold, and what does this imply for longtermist cause prioritization?

Key takes

  • Growing the (currently small and very informal) research field of axiological futurism would help answer this question.
    • My first post makes the case for working in this area and reviews related work.
  • Contrary to what many longtermists assume, the answer to this question likely doesn't depend on what the moral truth might be. (See my second post.)
  • We should expect values that are "grabby" (i.e., expansion-conducive) to be selected for over the long run, at least in the worlds where more is at stake. Call this the Grabby Values Selection Thesis. (See my third post.)
  • Concern for suffering is particularly non-grabby and may therefore be strongly selected against. Call this the Upside-focused Colonist Curse. The stronger this curse, the more we might want to prioritize reducing s-risks (in long-lasting ways) over reducing other long-term risks. (See my fourth post.)

Some key remaining questions

  • What is the likelihood of an early[1] value lock-in (or of design escaping selection early and long-lastingly, in Robin Hanson's (2022) terminology)?
  • How could we try to quantify the significance of the Upside-focused Colonist Curse?
    • How is it most likely to play out? (See this comment for a typology of different forms it could take.)
  • If the Upside-focused Colonist Curse is decisively important, does that imply anything other than that "today's longtermists might want to prioritize s-risks"?
    • What does it imply for s-risks cause prioritization (other than "we should prioritize 'late s-risks'")?
  • What makes some values more adaptive than others? What features, other than upside-focusedness, might the most grabby values have? 
    • Simplicity? Low uncertainty about how to maximize for them? Where do they fall on the action-omission axis?
  • Should we expect biological aliens to be sentient or not?
  • How wide will the moral circle of our successors and that of grabby aliens be? Does the moral circle framework even make sense here? Would they even care about sentient beings and/or their experiences?
  • What things other than suffering could plausibly be disvalued by a grabby civilization/AGI?


Feel very free to reach out to me (buhler.jim94@gmail.com) if you're interested in working on this topic!

  1. ^

    I.e., before grabby values have had time to be selected for within the civilization. (See my third post.)
