A state risk is a risk associated with being in a particular state, whereas a step risk (also called a transition risk) is a risk arising from transitioning to a new state. Nick Bostrom appears to have originated the distinction in his book Superintelligence, although some mentions of it predate the book's publication.
The cumulative state risk associated with being in some state grows as a function of the time spent in that state. Natural existential risks are typically state risks. For example, absent deflection efforts, the risk that an asteroid of a certain size collides with Earth by 2030 is higher than the risk that it does so by 2029. The longer humanity exposes itself to a state risk, the higher its probability of succumbing to the associated catastrophe. For this reason, there are pro tanto reasons for reducing state risks as soon as possible.
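The growth of cumulative state risk with exposure time can be sketched with a simple constant-hazard model, assuming independent periods. The annual probability used below is a made-up illustrative figure, not an estimate from the text:

```python
def cumulative_state_risk(annual_p: float, years: int) -> float:
    """Probability the catastrophe occurs at least once over `years` years,
    assuming a constant, independent per-year hazard `annual_p`."""
    return 1 - (1 - annual_p) ** years

# Illustration: with a (hypothetical) 0.01% annual impact probability,
# the cumulative risk by 2030 exceeds the cumulative risk by 2029.
risk_by_2029 = cumulative_state_risk(0.0001, 5)  # e.g. 2025-2029
risk_by_2030 = cumulative_state_risk(0.0001, 6)  # e.g. 2025-2030
assert risk_by_2030 > risk_by_2029
```

The model makes the text's point explicit: because the cumulative probability is strictly increasing in the number of years of exposure, leaving the state earlier always yields a lower total risk.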
Things are different with step risks. Here the threat arises only when the transition to the new state begins, and the overall risk incurred during this transition is not generally a function of its total duration. Thus, with step risks there is no presumption either for or against postponing or prolonging the transition; what is appropriate will vary depending on characteristics specific to each risk. Some anthropogenic existential risks are plausibly viewed as step risks, with AI risk being perhaps the clearest example.
Since state risks are correlated with natural existential risks, and step risks with anthropogenic existential risks, the latter's much greater share of total existential risk suggests that most of this risk is posed by transitioning to new states, rather than by remaining in a given state. This finding has important implications for the strategic management of existential risk....