
On 18 November 2023, I submitted the following article for the Cambridge Meridian Office 24-hour sprint competition:

In discussion groups, I have noticed that differences in judgements as to what should be prioritised often stem from disagreements over answers to certain difficult metaphysical questions.

For example, in a recent discussion I happened to pose the question of whether a ‘sentient virus’ could exist. Now, if it *were* possible, then it would pose quite a considerable threat, and *whether* we think it is possible depends upon our answers to difficult questions in the philosophy of mind. As to the first of these two claims: a sentient virus would be aware that killing its host is antithetical to its interests[1], and so we might rightly worry that, were it to come into existence, it would inflict great suffering upon other sentient organisms for as long as possible.[2] As to the second: there is great debate and little consensus amongst philosophers as to what sentience consists in, what it is *produced* by, and therefore what the necessary conditions are for sentience to arise in a thing or become associated *with* a thing.

This is a crude example, but there are many more. Indeed, the *only* things that members of the Xrisk community seem to agree on are that we ought to be doing the most good that we can do, and that this includes considering the interests of future people (and even these points of agreement are debatable).

Despite the existence of these kinds of ambiguities, however, we still want to ensure that we are doing the best we can to try to have the biggest *impact* we can with our research, work, donations and resource allocation.

But how can we ensure that we are doing so?

Answering this question involves answering the question of how we can reasonably *ground* a belief that what we are focusing on is more likely than any other option to be one of the biggest risks to sentient life, *given what we know*. This ‘given what we know’ caveat is necessary, because there is *no other grounding* on which we can base the decision to prioritise one area over another. But, when the metaphysical bases influencing our assessment are weak or unclear, we still do not want to do nothing. Nor, indeed, do we want to pick between potential options *arbitrarily*. So how do we choose?

I suggest that, in deciding what to prioritise, we take into account the *likelihood* of the relevant influencing metaphysical claims being true.

My argument is as follows:

**P1.** It is often the case that the truth or falsity of a proposition p influences the probability that a catastrophic event e takes place.

**P2.** In many of these cases, whether p is true or false is far from clear.

**P3.** In order to prioritise amongst focus areas, we must compare the probabilities that the different catastrophic events e to which the focus areas respectively correspond will take place.

**C.** Therefore, we should consider the *likelihood* that these propositions p are true, rather than *merely* the probability that event e takes place *given* that p is true/false (which is what I think we often implicitly do).

Now, two questions are likely to arise:

**Q1.** How can we judge the probability that p is true? For example, how can we judge the probability that it is (metaphysically) possible for a virus to become sentient?

**Q2.** How can we *use* the probability that p is true to influence our assessment of the probability that e will occur? For example, in light of the probability that *it is possible* for a virus to become sentient, how do we calculate the probability that a virus *will* become sentient?

In answer to the first question: I suggest that we survey the population to gauge the consensus (or lack thereof) regarding the relevant metaphysical claims. If, for example, 50% of the population believe that it is possible for a virus to become sentient, whereas 50% believe that it is *impossible* (say, because they believe that sentience is reserved for those who possess a spirit or soul), then we should take the probability that a virus *can* become sentient to be 50%. I suggest this approach because we are dealing with claims on which science cannot enlighten us. Given this, and given again that we want to be neither idle nor arbitrary in our reasoning, the extent of consensus on the truth of metaphysical claims is the best indicator we have of their likelihood of being true. Indeed, granted that we must expose the people we ask to the arguments for and against the relevant metaphysical claims, how else could we estimate the probability of their truth?
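To make the proposed estimate concrete, here is a minimal sketch in Python. The respondent data is invented purely for illustration; the proposal amounts to taking the fraction of (argument-exposed) respondents who judge the claim to be true:

```python
# Hypothetical survey responses to the claim
# "it is metaphysically possible for a virus to become sentient".
# True = respondent judges the claim true. Data invented for illustration.
responses = [True, False, True, False, True, False, True, False]

# The proposed estimate of P(p) is simply the consensus fraction.
p_claim_true = sum(responses) / len(responses)
print(p_claim_true)  # 0.5, reflecting the article's illustrative 50/50 split
```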

Now, in answer to the second question: I suggest that we combine the probability that event e will occur *given that p is true* with the probability that p is true, by multiplying the two. If, for example, we judge the probability of a sentient virus coming into existence within the next century, given that it is *metaphysically possible*, to be 80%, then, combining this with the survey-derived 50% probability that it *is* possible, the probability that a sentient virus *will* come into existence is 80% × 50% = 40%.
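The combination step above can be sketched in a few lines. The figures are the article's illustrative ones, not real estimates, and the calculation implicitly assumes that the event cannot occur if the metaphysical claim is false (i.e. P(e | not-p) = 0):

```python
# Illustrative figures from the text, not real estimates.
p_possible = 0.5               # P(p): survey-based probability the claim is true
p_event_given_possible = 0.8   # P(e | p): probability of the event, given p

# Combined estimate: P(e) = P(e | p) * P(p).
# This assumes P(e | not-p) = 0: if sentience in a virus is
# metaphysically impossible, a sentient virus cannot arise.
p_event = p_event_given_possible * p_possible
print(p_event)  # 0.4
```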

With these clarifications, I hope to have put forward a sound and informative sketch of a needed modification to our epistemology in Xrisk. We should consider the likelihood of influencing metaphysical claims being true when deciding which Xrisks to prioritise. Once we have taken this into account in the way I have sketched, then, we should prioritise those catastrophic events e which are most likely to occur.

[1]

This fact was brought to my attention by Olivia Benoit in a discussion held on 02/11/23 about resilience and collapse in the context of Xrisk.

[2]

I am going to call this an Xrisk with, I believe, little backlash, despite the fact that it has no immediate or obvious bearing on the *existence* of humans or humanity. Indeed, I understand Xrisks in a broad sense – even broader than that of ‘global catastrophic risk’ and closer to one of mere ‘catastrophic risk’ (the potential for the causing of great suffering and/or death). Now, this might (ironically) risk having the study of Xrisks devolve into mere applied ethics, but I do not believe that this is too much of a worry: I am still operating under the assumptions that we should take into account factors such as neglectedness, scale, and likelihood of success, as well as the consideration of future people, which alone confines this research to a still radical and important sub-movement within applied ethics.

Executive summary: The author proposes modifying our approach to existential risk prioritization by incorporating the likelihood of key influencing metaphysical claims. (This comment was auto-generated by the EA Forum Team.)

I have proposed that we survey the population – not the consensus amongst experts – because I have written about metaphysical claims about which there is little consensus amongst experts, or have at least tried to.

I wonder: was this summary generated by AI?