
You are viewing revision 1.6.0, last edited by Leo

Models of catastrophic risks can be conjunctive or disjunctive. A **conjunctive** risk model is one in which the disaster is caused by the co-occurrence of multiple conditions (e.g., the disaster $D$ occurs only if conditions $A$ and $B$ both hold: $D = A \land B$). In a conjunctive model, the probability of the disaster is *less than or equal to* the probability of each individual condition: $P(A \land B) \le \min(P(A), P(B))$. By contrast, a **disjunctive** risk model is one in which the disaster occurs as a result of *any* of several conditions holding (e.g., $D = A \lor B$). In a disjunctive model, the probability of the disaster is *greater than or equal to* the probability of each individual condition: $P(A \lor B) \ge \max(P(A), P(B))$.
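
These bounds can be illustrated with a small numerical sketch. The probabilities below are made up for illustration, and the two conditions are assumed independent (independence is an assumption of the sketch, not part of the definitions):

```python
# Illustrative probabilities for two independent conditions A and B.
p_a, p_b = 0.3, 0.5

# Conjunctive model: disaster requires both A and B to hold.
p_conj = p_a * p_b  # P(A and B) under independence

# Disjunctive model: disaster occurs if either A or B holds.
p_disj = p_a + p_b - p_a * p_b  # P(A or B) under independence

# The conjunctive probability is bounded above by each condition's
# probability; the disjunctive probability is bounded below by each.
assert p_conj <= min(p_a, p_b)
assert p_disj >= max(p_a, p_b)
```

The bounds hold regardless of independence; independence is used here only to pin down concrete numbers.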

Examples of conjunctive and disjunctive models of AI risk:

- Joseph Carlsmith models existential risk from power-seeking AI conjunctively, i.e. as the intersection of six conditions, all of which must hold for an existential catastrophe to occur.^{[1]}
- By contrast, Nate Soares models AGI risk disjunctively, i.e. as the union of multiple conditions, any of which can cause an existential catastrophe.^{[2]}

Both types of model are simplifying assumptions. In reality, a disaster can be caused by multiple conditions that interact conjunctively *and* disjunctively. For example, a disaster $D$ could occur if conditions $A$ and $B$ are both true, or if condition $C$ is true: $D = (A \land B) \lor C$.
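
A mixed model of this kind can be made concrete with a short numerical sketch. The probabilities are illustrative and the conditions are assumed independent (both are assumptions of the sketch, not claims about real risks):

```python
# Mixed model: disaster D occurs if conditions A and B both hold
# (a conjunctive sub-model), or if condition C holds on its own.
p_a, p_b, p_c = 0.3, 0.5, 0.1

p_ab = p_a * p_b               # conjunctive part: P(A and B)
p_d = p_ab + p_c - p_ab * p_c  # disjunctive combination: P((A and B) or C)

# The mixed model's probability sits between the pure cases: it is at
# least as large as each disjunct, and no larger than the fully
# disjunctive model over all three conditions.
p_full_disj = 1 - (1 - p_a) * (1 - p_b) * (1 - p_c)
assert p_d >= max(p_ab, p_c)
assert p_d <= p_full_disj
```

The sketch shows why mixed models resist a single label: the conjunctive part pulls the overall probability down, while the standalone condition pulls it back up.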

Relevance