Bostrom's definition of existential risk is “an event that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”.


“Existential” implies a risk to existence, and it does imply an event that threatens “the premature extinction of Earth-originating intelligent life”, but I don't think it implies an event that threatens “the permanent and drastic destruction of its potential for desirable future development”.

Mitigating an event that threatens “the premature extinction of Earth-originating intelligent life” (I will call these extinction risks from here on) would be prioritised under most ethical and empirical views.

However, the extent to which we should prioritise mitigating a non-extinction event that threatens “the permanent and drastic destruction of its potential for desirable future development” (I will call these stagnation risks, drawing partially on Bostrom’s classification of x-risks) depends greatly on 1) what counts as desirable future development and 2) how desirable that development actually is. Both of these questions depend on your specific ethical views more than concern for extinction risks does.

By desirable future development, I think longtermists usually mean achieving ‘technological maturity’, defined by Bostrom as “the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved”. 

I assume this is considered desirable because it would allow wellbeing to be maximised for the maximum number of consciousnesses. However, the goal of maximising the number of consciousnesses is only present in non-person-affecting total utilitarianism and prioritarianism, and absent in most, if not all, other moral theories. Therefore, prioritising the mitigation of stagnation risks is robust to fewer moral theories than prioritising the mitigation of extinction risks.

Individuals who primarily believe in moral theories other than non-person-affecting total utilitarianism and prioritarianism (which presumably includes the vast majority of people) may then consider the case for mitigating stagnation risks to be weaker than the case for mitigating extinction risks. Therefore, lumping extinction and stagnation risks together as “existential” risks may weaken the apparent case for mitigating extinction risks.

Discussing extinction risks and stagnation risks together as “existential risks” also leads to a focus on the “astronomical waste” argument, since both prevent technological maturity from being achieved. But technological maturity is only an instrumental goal towards maximising the number of consciousnesses, a terminal goal that is desirable only under non-person-affecting total utilitarianism and prioritarianism, which most people don’t subscribe to. Use of the term “existential risk” therefore makes the case for mitigating extinction risks less convincing.

 

An alternative approach that is more pluralistic across moral theories about what “desirable future development” could mean doesn't seem feasible to me, since there is probably too much variety in the possible meanings of “desirable future development” (though I might explore this idea in more detail in a future post).


Instead, I think EAs should separate discussion of extinction and stagnation risks: taking a more pluralistic approach across moral theories to why extinction risks might be prioritised, while being clear and honest about why longtermists are more concerned about extinction risks than people with other views are, and why longtermists are also concerned about stagnation risks.


EDIT:
On further thought, it may be worth dividing existential risk into extinction risk, collapse risk, flawed realisation risk, and plateauing risk, where extinction and collapse risks seem robust to many more moral theories, while plateauing and flawed realisation risks seem robust to far fewer.

Comments

FWIW I think people are normally more concerned with flawed realisation scenarios than stagnation scenarios. (I'm not sure whether this changes your basic point.)

Thanks for your comment. I don’t think it changes my point, but in that case “stagnation risk” is also a badly named term here.

I’ve added this edit to the post:

“On further thought, it may be worth dividing existential risk into extinction risk, collapse risk, flawed realisation risk, and plateauing risk, where extinction and collapse risks seem robust to many more moral theories, while plateauing and flawed realisation risks seem robust to far fewer.”
