AI safety


Interventions that aim to reduce these risks can be split into:

 

Reading on why AI might be an existential risk

Cotra, Ajeya (2022) Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover, Effective Altruism Forum, July 18.

Carlsmith, Joseph (2022) Is Power-Seeking AI an Existential Risk?, arXiv, June 16.

Yudkowsky, Eliezer (2022) AGI Ruin: A List of Lethalities, LessWrong, June 5.

Ngo et al. (2023) The alignment problem from a deep learning perspective, arXiv, February 23.

Arguments against AI safety

AI safety and AI risk are sometimes characterized as a Pascal's Mugging[1], implying that the risks are tiny and that, for any stated level of ignorable risk, the payoffs could be exaggerated enough to force the problem to remain a top priority. A response to this is that in a survey of 700 ML researchers, the median estimate of the probability that the long-run effect of advanced AI on humanity will be "extremely bad (e.g., human extinction)" was 5%, with 48% of respondents giving 10% or higher.[2] These probabilities are too high (by at least five orders of magnitude) to be considered Pascalian.
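
To make the orders-of-magnitude claim concrete, here is a minimal back-of-the-envelope sketch. The one-in-ten-million "Pascalian" threshold is an illustrative assumption, not a figure from the survey or from this article.

```python
import math

# Compare the surveyed median risk estimate with an assumed "Pascalian" probability.
# The 1e-7 threshold is purely illustrative; it is not taken from the article or the survey.
median_survey_estimate = 0.05   # median answer from the ML researcher survey
pascalian_threshold = 1e-7      # assumed vanishingly small probability

orders_of_magnitude = math.log10(median_survey_estimate / pascalian_threshold)
print(f"5% is about {orders_of_magnitude:.1f} orders of magnitude above the threshold")
# prints: 5% is about 5.7 orders of magnitude above the threshold
```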

 

Further reading on arguments against AI Safety

Grace, Katja (2022) Counterarguments to the basic AI x-risk case, Effective Altruism Forum, October 14.

Garfinkel, Ben (2020) Scrutinising classic AI risk arguments, 80,000 Hours Podcast, July 9.

 

AI safety as a career

80,000 Hours' medium-depth investigation rates technical AI safety research a "priority path", among the most promising career opportunities the organization has identified so far.[3][4] Richard Ngo and Holden Karnofsky have also written advice for people interested in working on AI safety.[5][6]

Related entries

AI alignment | AI governance | AI forecasting | AI takeoff | AI race | Economics of artificial intelligence | AI interpretability | AI risk | cooperative AI | building the field of AI safety

  1. ^

    https://twitter.com/amasad/status/1632121317146361856, from the CEO of Replit, a coding organisation involved in ML tools.

  2. ^

  3. ^

    Todd, Benjamin (2023) The highest impact career paths our research has identified so far, 80,000 Hours, May 12.

  4. ^

    Hilton, Benjamin (2023) AI safety technical research, 80,000 Hours, June 19.

  5. ^

    Ngo, Richard (2023) AGI safety career advice, Effective Altruism Forum, May 2.

  6. ^

    Karnofsky, Holden (2023) Jobs that can help with the most important century, Effective Altruism Forum, February 12.


Further reading

Krakovna, Victoria (2017) Introductory resources on AI safety research, Victoria Krakovna's Blog, October 19.
A list of readings on AI safety.
