I was chatting recently with someone who had difficulty knowing how to orient to x-risk work (despite being a respected professional working in that field). They expressed that they didn't find it motivating at a gut level in the same way they did with poverty or animal stuff; and, relatedly, that they felt some disconnect between the arguments they intellectually believed and what they felt to be important.
I think that existential security (or something like it) should be one of the top priorities of our time. But I usually feel good about people in this situation paying attention to their gut scepticism or nagging doubts about x-risk (or about parts of particular narratives about x-risk, or whatever). I’d encourage them to spend time trying to name their fears, and see what they think then. And I’d encourage them to talk about these things with other people, or write about the complexities of their thinking.
Partly this is because I don't expect people who are using intellectual arguments to override their gut to do a good job of consistently tracking, on a micro-scale, what the most important things to do are. So it would be good to get the different parts of them more in sync.
And partly because it seems like it would be a public good to explore and write about these things. Either their gut is onto something with parts of its scepticism, in which case it would be great to have that articulated; or their gut is wrong, but if other people have similar gut reactions then playing out that internal dialogue in public could be pretty helpful.
It's a bit funny to make this point about x-risk in particular, because of course all of the above applies to any topic. But I think people normally grasp it intuitively, and somehow that's less universal around x-risk. I guess this may be because people don't have any first-hand experience with x-risk, so their introductions to it are all via explicit arguments. It's true that this is a domain where we should be unusually unwilling to trust our gut takes without hearing the arguments, but it seems to me that people are unusually likely to forget that they can know things which bear on the questions without those things already being explicit (and perhaps the social environment, in encouraging people to take explicit arguments seriously, can accidentally overstep and end up discouraging people from taking anything else seriously). These dynamics seem especially strong in the case of AI risk, which I regard as the most serious source of x-risk, but also the one where I most wish people spent more time exploring their nagging doubts.
In the dis-spirit of this article, I'm going to take the opposite tack and explore the nagging doubts I have about this line of argument.
To be honest, I'm getting more and more sceptical of (and annoyed by) this behaviour, for want of a better word, in the effective altruism community. I'm certainly not the first to voice these concerns: both Matthew Yglesias and Scott Alexander have noted how weird it is (if someone tells you that your level of criticism-seeking gives off weird BDSM vibes, you've probably gone too far).
Am I all in favour of going down intellectual rabbit holes to see where they take you? No. And I don't think it should be encouraged wholesale in this community. Maybe I just don't have the intellectual bandwidth to understand the arguments, but a lot of the time it just seems to lead to intellectual wank, the most blatant example I've come across being infinite ethics. If infinities mean that anything is both good and bad in expectation, that should set off alarm bells: that way madness lies.
The crux of this argument also reminds me of rage therapy. Maybe you shouldn't explore those nagging doubts and express them out loud, just as maybe you shouldn't scream and hit things in the mistaken belief that it'll help you get your anger out. Maybe you should just remind yourself that it's totally normal for people to have doubts about x-risk compared to other cause areas, for a whole bunch of reasons that totally make sense.
Thankfully, most people in the effective altruism community do just that: they get on with their lives and jobs, and I think that's a good thing. There will always be some individuals who go down these intellectual rabbit holes, and they won't need to be encouraged to do so. Let them go for gold. But in my personal view at least, the wider community doesn't need to be encouraged to follow them.
On a similarly simple intellectual level, I see "people should not suppress doubts about the critical shift in direction that EA has taken over the past 10 years" as a no-brainer. I do not see it as intellectual wank in an environment where every other person assumes p(doom) approaches 1 and timelines get shorter by a year every time you blink. EA may feature criticism circle-jerking overall, but I think this kind of criticism is actually important and not especially well received (I perceive a frosty response whenever Matthew Barnett criticizes AI doomerism).