

Models of catastrophic risks can be conjunctive or disjunctive. A **conjunctive** risk model is one in which the disaster is caused by the co-occurrence of multiple conditions (D=C1∩C2∩…∩Ck). In a conjunctive model, the probability of the disaster is *less than or equal to* the probability of each individual condition. By contrast, a **disjunctive** risk model is one in which the disaster occurs as a result of *any* of several conditions holding (D=C1∪C2∪…∪Ck). In a disjunctive model, the probability of the disaster is *greater than or equal to* the probability of each individual condition.
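As a numerical illustration (not from the original text), the sketch below computes the disaster probability under each model for three made-up condition probabilities, under the additional assumption that the conditions are independent. The bounds stated above hold with or without independence; the code just makes them concrete.

```python
import math

# Hypothetical probabilities for conditions C1, C2, C3 (illustrative only).
p = [0.8, 0.6, 0.9]

# Conjunctive model, D = C1 ∩ C2 ∩ C3, assuming independence:
p_conj = math.prod(p)  # 0.432

# Disjunctive model, D = C1 ∪ C2 ∪ C3, assuming independence:
p_disj = 1 - math.prod(1 - q for q in p)  # 1 - 0.008 = 0.992

# The bounds from the definitions hold (independence or not):
assert p_conj <= min(p)  # conjunctive: P(D) <= P(Ci) for every i
assert p_disj >= max(p)  # disjunctive: P(D) >= P(Ci) for every i
print(f"conjunctive: {p_conj:.3f}, disjunctive: {p_disj:.3f}")
```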

Both types of models are simplifying assumptions. In reality, a disaster can be caused by multiple conditions that interact conjunctively *and* disjunctively. For example, a disaster D could occur if conditions C1 and C2 are both true, or if condition C3 is true: D=(C1∩C2)∪C3.
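A minimal sketch of this mixed model, again with hypothetical probabilities and assuming the three conditions are independent; by inclusion-exclusion, P(D) = P(C1∩C2) + P(C3) − P(C1∩C2∩C3):

```python
# Mixed model from the example above: D = (C1 ∩ C2) ∪ C3.
# Hypothetical, independent condition probabilities (illustrative only).
p1, p2, p3 = 0.7, 0.5, 0.1

p_c1c2 = p1 * p2                 # P(C1 ∩ C2) = 0.35
p_d = p_c1c2 + p3 - p_c1c2 * p3  # inclusion-exclusion: 0.415

print(f"P(D) = {p_d:.3f}")  # lies between the purely conjunctive and purely disjunctive cases
```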

Examples of conjunctive and disjunctive models of AI risk:

- Carlsmith (2021) models existential risk from power-seeking AI conjunctively, i.e. as the intersection of six conditions, all of which must be true for the existential catastrophe to occur.
- By contrast, Soares (2022) models AGI risk disjunctively, i.e. as the union of multiple conditions, any of which can cause existential catastrophe; a numerical sketch contrasting the two approaches follows this list.
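To see how strongly the two modeling choices pull a headline estimate apart as the number of conditions grows, the sketch below runs six placeholder probabilities (not Carlsmith's or Soares's published estimates) through both models, assuming independence:

```python
import math

# Six placeholder condition probabilities (NOT the published estimates).
probs = [0.9, 0.8, 0.7, 0.6, 0.5, 0.9]

# Conjunctive reading (all six conditions must hold), assuming independence:
conj = math.prod(probs)  # ≈ 0.136

# Disjunctive reading (any one condition suffices), assuming independence:
disj = 1 - math.prod(1 - q for q in probs)  # ≈ 0.9999

# Every condition is at least 50% likely, yet the headline numbers diverge widely.
print(f"conjunctive ≈ {conj:.3f}, disjunctive ≈ {disj:.4f}")
```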

Carlsmith, Joseph (2021) Draft report on existential risk from power-seeking AI, *Effective Altruism Forum*, April 28.

Soares, Nate (2021) Comments on Carlsmith’s “Is power-seeking AI an existential risk?”, *LessWrong*, November 13.

Soares, Nate (2022) AGI ruin scenarios are likely (and disjunctive), *Effective Altruism Forum*, July 27.


compound existential risk | existential risk | existential risk factor | global catastrophic risk | models | expected value | forecasting | impact assessment | model uncertainty