Conjunctive vs. disjunctive risk models



Models of catastrophic risks can be conjunctive or disjunctive. A conjunctive risk model is one in which the disaster is caused by the co-occurrence of multiple conditions (D = C1 ∧ C2 ∧ … ∧ Ck). In a conjunctive model, the probability of the disaster is at most the smallest of the individual conditions' probabilities. By contrast, a disjunctive risk model is one in which the disaster occurs as a result of any one of several conditions holding (D = C1 ∨ C2 ∨ … ∨ Ck). In a disjunctive model, the probability of the disaster is at least the largest of the individual conditions' probabilities.

Both types of models are simplifying assumptions. In reality, a disaster can be caused by multiple conditions that interact both conjunctively and disjunctively. For example, a disaster D could occur if conditions C1 and C2 are both true, or if condition C3 is true: D = (C1 ∧ C2) ∨ C3.
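The contrast can be made concrete with a small numerical sketch. Assuming three statistically independent conditions with illustrative (made-up) probabilities, the conjunctive, disjunctive, and mixed models give very different disaster probabilities:

```python
# Illustrative probabilities for three independent conditions (assumed values).
p1, p2, p3 = 0.8, 0.5, 0.1

# Conjunctive model: D = C1 AND C2 AND C3.
# Under independence, P(D) is the product of the individual probabilities,
# so it can never exceed the smallest of them.
p_conj = p1 * p2 * p3
assert p_conj <= min(p1, p2, p3)

# Disjunctive model: D = C1 OR C2 OR C3.
# P(D) = 1 - P(no condition holds), so it can never fall below
# the largest individual probability.
p_disj = 1 - (1 - p1) * (1 - p2) * (1 - p3)
assert p_disj >= max(p1, p2, p3)

# Mixed model: D = (C1 AND C2) OR C3, computed by inclusion-exclusion
# (again assuming independence).
p_12 = p1 * p2
p_mixed = p_12 + p3 - p_12 * p3

print(round(p_conj, 2))   # 0.04
print(round(p_disj, 2))   # 0.91
print(round(p_mixed, 2))  # 0.46
```

Note how the same three conditions yield a 4% disaster probability under the conjunctive reading but 91% under the disjunctive one, which is why the choice of model structure matters so much for risk estimates.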

Examples of conjunctive and disjunctive models of AI risk:

  • Carlsmith (2021) models existential risk from power-seeking AI conjunctively, i.e. as the intersection of six conditions, all of which must be true for the existential catastrophe to occur.
  • By contrast, Soares (2022) models AGI risk disjunctively, i.e. as the union of multiple conditions, any of which can cause existential catastrophe.

Further reading

Carlsmith, Joseph (2021) Draft report on existential risk from power-seeking AI, Effective Altruism Forum, April 28.

Soares, Nate (2021) "Comments on Carlsmith's 'Is power-seeking AI an existential risk?'", LessWrong, November 13.

Soares, Nate (2022) AGI ruin scenarios are likely (and disjunctive), Effective Altruism Forum, July 27.


Related entries

existential risk | global catastrophic risk
