Flourishing futures

Discuss the topic on this page. Here is the place to ask questions and propose changes.

Things that could maybe be done in future:

  • Expand this entry by drawing on posts tagged Fun Theory on LessWrong, and/or crosspost some here and give them this tag
  • Expand this entry by drawing on the "AI Ideal Governance" section of the GovAI research agenda, and/or crosspost that agenda here and give it this tag
  • Expand this entry by drawing on the Bostrom and Ord sources mentioned in Further reading
  • Draw on this shortform of mine, and particularly the following paragraph:

Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with better chances of securing highly positive futures (not just avoiding existential catastrophes). It could also help us avoid negative futures that may not appear negative when superficially considered in advance. Finally, such positive visions of the future could facilitate cooperation and mitigate potential risks from competition (Dafoe, 2018 [section on "AI Ideal Governance"]). Researchers have begun outlining particular possible futures, arguing for or against them, and surveying people’s preferences for them.

Also draw on the paper Existential Risk and Existential Hope: Definitions and/or other discussion related to "existential hope". But that specific term often seems to be used quite differently from how the paper meant it, so I'd favour either avoiding the term (just discussing the concepts and citing the paper) or explicitly noting that the two distinct uses occur in different places.

EDIT: One more related concept is "global upside possibilities".

I see what I've put here as a starting point. There are various reasons one might want to change it, such as:

  • Maybe the bullet point style isn't what the EA Wiki should aim for
  • Maybe a different name would be better
  • What I've got here is essentially my own take, based loosely on various sources but not modelled directly on any of them

You can see my original thinking for this entry here.

Thank you for creating this! As a general observation, I think it's perfectly fine to go ahead and add a new tag or content even if there are still uncertainties one would like to resolve or improvements one would like to make. In other words, "starting point" entries are welcome.

Yeah, that policy definitely sounds good, and I already assumed it was the case. I guess what I should've said is that here I'm more uncertain what the right name, scope, and content would be than I am for the average entry I create.

So it's sort of "more starting-point-y than average" in terms of whether the current content should be there in its current form. (Though many other entries I make are more starting-point-y than this one in that they have almost no content.)