I was chatting recently to someone who had difficulty knowing how to orient to x-risk work (despite being a respected professional working in that field). They expressed that they didn't find it motivating at a gut level in the same way they did with poverty or animal stuff; and relatedly that they felt some disconnect between the arguments that they intellectually believed, and what they felt to be important.
I think that existential security (or something like it) should be one of the top priorities of our time. But I usually feel good about people in this situation paying attention to their gut scepticism or nagging doubts about x-risk (or about parts of particular narratives about x-risk, or whatever). I’d encourage them to spend time trying to name their fears, and see what they think then. And I’d encourage them to talk about these things with other people, or write about the complexities of their thinking.
Partly this is because I don't expect people who are using intellectual arguments to override their gut to do a good job of consistently tracking, on a micro-scale, what the most important things to do are. So it would be good to get the different parts of them to sync up more.
And partly because it seems like it would be a public good to explore and write about these things. Either their gut is onto something with parts of its scepticism, in which case it would be great to have that articulated; or their gut is wrong, but if other people have similar gut reactions then playing out that internal dialogue in public could be pretty helpful.
It's a bit funny to make this point about x-risk in particular, because of course the above applies to any topic. But people normally grasp it intuitively, and somehow that's less universal around x-risk. I guess this is because people don't have any first-hand experience with x-risk, so their introductions to it come entirely via explicit arguments. It's true that this is a domain where we should be unusually unwilling to trust our gut takes without hearing the arguments. But it seems to me that people are unusually likely to forget that they can know things which bear on the questions without those things already being explicit (and perhaps the social environment, in encouraging people to take explicit arguments seriously, can accidentally overstep and end up discouraging people from taking anything else seriously). These dynamics seem especially strong in the case of AI risk, which I regard as the most serious source of x-risk, but also the one where I most wish people spent more time exploring their nagging doubts.
Thanks for the post. Just today I was thinking through some aspects of expected value theory and fanaticism (i.e., being fanatical about applying expected value theory) that might apply to your post. I had read through some of Hayden Wilkinson's 2021 Global Priorities Institute report, "In defense of fanaticism," in which he raises a hypothetical case: donate $2,000 (or whatever it takes to statistically save one life) to the Against Malaria Foundation (AMF), or instead give the money to a very speculative research project with a very tiny but non-zero chance of bringing about an amazingly valuable future. I adapted the case for myself: why give $2,000 to AMF instead of donating it to try to reduce existential risk by some tiny amount, when the latter could have significantly higher expected value? I've come up with two possible reasons so far not to give your entire $2,000 to reducing existential risk, even if you initially, intellectually estimate it to have much higher expected value:
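The expected-value comparison this case turns on can be made concrete with a toy calculation. Every number below is invented purely for illustration (the probability of the speculative bet paying off, and the value assigned to the far future, are not estimates from Wilkinson's report or anywhere else):

```python
# Toy expected-value comparison between a "safe" and a "speculative" donation.
# All numbers are made up for illustration only.

donation = 2_000  # dollars

# AMF-style bet: roughly one statistical life saved per donation.
ev_amf = 1.0  # lives saved, in expectation

# Speculative x-risk bet: a tiny probability of averting a catastrophe,
# with the stakes (arbitrarily) valued at 10 billion lives.
p_averts_catastrophe = 1e-9
lives_at_stake = 1e10

ev_xrisk = p_averts_catastrophe * lives_at_stake  # ~10 lives, in expectation

# Under naive expected value maximisation, the speculative bet dominates
# even though the chance of it doing anything at all is minuscule.
print(f"EV of AMF donation:    {ev_amf:.1f} lives")
print(f"EV of x-risk donation: {ev_xrisk:.1f} lives")
```

The point of the toy numbers is just to show the structure of the fanaticism worry: a small enough probability multiplied by a large enough stake can outweigh a near-certain benefit, and the conclusion is extremely sensitive to the made-up inputs.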
I don't know if this is exactly what you were looking for, but these seem to me like things to think about that might move your intellectual reasoning closer to your gut, meaning you could be intellectually justified in putting some of your effort into following your gut (how much, exactly, is open to argument, of course).
Regarding how to make working on existential risk more "gut wrenching," I tend to think of things in terms of responsibility. If I think I have some ability to help save humanity from extinction or near-extinction, and I don't act on that, and then the world does end, imagining that situation makes me feel like I really dropped the ball on my share of responsibility for the world ending. If I don't help people avoid dying from malaria, I still feel a responsibility that I haven't fully taken up, but it doesn't hit me as hard as the chance of the world ending, especially if I think I have special skills that might help prevent it. That said, if I felt I could personally make the most difference, with my particular skill set and passions, in helping reduce malaria deaths, and other people were much more qualified in the area of existential risk, I'd probably feel more responsibility to apply my talents where I thought they could have the most impact, in that case malaria death reduction.