Long reflection

The long reflection is a hypothesized period of time during which humanity works out how best to realize its long-term potential.

Some effective altruists, including Toby Ord and William MacAskill, have argued that, if humanity succeeds in eliminating existential risk or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project of arranging the universe's resources in accordance with its values, but ought instead to spend considerable time—"centuries (or more)";[1] "perhaps tens of thousands of years";[2] "thousands or millions of years";[3] "[p]erhaps... a million years"[4]—figuring out what is in fact of value. The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory, following an initial stage of existential security, when existential risk is drastically reduced, and followed by a final stage in which humanity's potential is fully realized.[1]

Criticism

The idea of a long reflection has been criticized on the grounds that virtually eliminating all existential risk will almost certainly require taking a variety of large-scale, irreversible decisions—related to space colonization, global governance, cognitive enhancement, and so on—which are precisely the decisions meant to be discussed during the long reflection.[5][6] Since there are pervasive and inescapable tradeoffs between reducing existential risk and retaining moral option value, it may be argued that it does not make sense to frame humanity's long-term strategic picture as one consisting of two distinct stages, with one taking precedence over the other.

Further reading

Wiblin, Robert & Keiran Harris (2018) Our descendants will probably see us as moral monsters. What should we do about that?, 80,000 Hours, January 19.
Interview with William MacAskill about the long reflection and other topics.

1. ^ Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

2. ^ Greaves, Hilary et al. (2019) A research agenda for the Global Priorities Institute, Oxford.

3. ^ Dai, Wei (2019) The argument from philosophical difficulty, LessWrong, February 9.

4. ^ William MacAskill, in Perry, Lucas (2018) AI alignment podcast: moral uncertainty and the path to AI alignment with William MacAskill, AI Alignment Podcast, September 17.

5. ^ Stocker, Felix (2020) Reflecting on the long reflection, Felix Stocker’s Blog, August 14.

6. ^ Hanson, Robin (2021) ‘Long reflection’ is crazy bad idea, Overcoming Bias, October 20.
