Read the Better Futures series here, and discuss it here, all week.

This week, we are highlighting Forethought's Better Futures series. To make the future go better, we can either work to avoid near-term catastrophes like human extinction or improve the futures where we survive. This series from Forethought explores that second option.

Fin Moorhouse (@finm), who authored two chapters in the series (Convergence and Compromise, and No Easy Eutopia) along with @William_MacAskill, has agreed to answer a few of your questions. 

You can read (and comment on) the full series on the Forum.

Leave your questions and comments below. Note that Fin isn't committing to answer every question, and if you can answer someone else's question, feel free to do so.


Hi Fin,

I have a lot of questions, so I figured I would just share all of them and you can respond to the ones you want to.

  1. I think Forethought is a super cool institution. What advice would you have for someone who wanted to work there as a researcher? Do you think it's important to have a strong understanding of how LLMs work?
  2. I made this post where I categorized flourishing cause areas based on "How To Make The Future Better." I thought I'd share. I'm curious if this categorization generally aligns with how you think about the problem.
    1. Locking-in one’s values
      1. Ensuring the future is aligned with the correct values
      2. Working towards viatopia
      3. Promoting futures with more moral reflection
      4. Improving the ability for people with different views to get their desired futures
    2. Ensuring future people are able to create a good future
      1. Keeping humanity’s options open
      2. Improving global stability
      3. Improving future humans' decision-making
      4. Empowering responsible actors
    3. Speeding up progress
  3. I made this post which is an overview of longtermism's ideas, writings, individuals, institutions, and history. I thought I'd share since you made the longtermism website.
  4. The Better Futures series assumes that the future will be net-positive by default. To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us and, as a result, that we should expect the future to be neutral in expectation. Do you agree with this logic, or do you think the future will be net-positive by default? Either way, why?
  5. Currently, there is a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that the post-AGI future could go in a very broad range of ways and that we should prepare for many different outcomes. At the same time, I get the sense that Forethought has a very specific vision of how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering how you decide which ideas you think are likely, and whether you have any measures in place to ensure you're receiving criticism of your ideas so you don't create an epistemic bubble.
  6. I understand that you have done some work related to space governance. A criticism I have of working in this field is that (1) it seems to have been very intractable, given the lack of space treaties; (2) if any great power gains a decisive advantage, global treaties won't matter; (3) even if you are able to get a law or treaty passed, corporate or state interests could easily override it later on; and (4) there's probably a low chance of even getting into a position where you could influence this stuff. As such, if you think it's valuable for additional people to work in this field, why do you think so?
  7. It seems like longtermism is an unhelpful idea, since it requires people to believe that the effects of our actions could persist for millions of years. I'm personally pretty skeptical of this, although I don't think it's necessarily false. The idea also seems to have been somewhat harmful to EA as a movement, since people can always point out that some of the movement's founders are focused on helping people millions of years from now, which sounds pretty crazy. I'm wondering if you agree with this assessment.
  8. In "How To Make The Future Better," MacAskill argues that we should make AIs encourage humans to be good people and use them a source of moral reflection. This seems like it could be deeply problematic in case moral sense theory is true, but AIs lack a moral sense. Do you agree with this?