Introduction
When a system is made safer, its users may be willing to offset at least some of the safety improvement by using it more dangerously. A seminal example is that, according to Peltzman (1975), drivers largely compensated for improvements in car safety at the time by driving more dangerously. The phenomenon in general is therefore sometimes known as the “Peltzman Effect”, though it is more often known as “risk compensation”.[1] One domain in which risk compensation has been studied relatively carefully is NASCAR (Sobel and Nesbit, 2007; Pope and Tollison, 2010), where, apparently, the evidence for a large compensation effect is especially strong.[2]
In principle, more dangerous usage can partially, fully, or more than fully offset the extent to which the system has been made safer holding usage fixed. Making a system safer thus has an ambiguous effect on the probability of an accident, after its users change their behavior.
There’s no reason why risk compensation shouldn’t apply in the existential risk domain, and we arguably have examples in which it has. For example, reinforcement learning from human feedback (RLHF) makes AI more reliable, all else equal; so it may be making some AI labs comfortable releasing more capable, and so maybe more dangerous, models than they would release otherwise.[3]
Yet risk compensation per se appears to have gotten relatively little formal, public attention in the existential risk community so far. There has been informal discussion of the issue: e.g. risk compensation in the AI risk domain is discussed by Guest et al. (2023), who call it “the dangerous valley problem”. There is also a cluster of papers and works in progress by Robert Trager, Allan Dafoe, Nick Emery-Xu, Mckay Jensen, and others, including these two and some not yet public but largely summarized here, exploring the issue formally in models with multiple competing firms. In a sense what they do goes well beyond this post, but as far as I’m aware none of t
If you hold these assumptions robustly, the most direct answer would be to focus on the kind of beings who are likely to experience greater suffering by default, namely factory-farmed animals, and potentially some wild animals. You should focus on interventions (alternative proteins, vegan advocacy) that are likely to cause these animals not to come into existence, rather than welfarist approaches that improve the lives of animals but keep numbers relatively constant. This is a very popular approach, so you'd be welcome in this part of the EA space.
But you might want to relax your assumptions slightly when considering practical work you could do. Assuming that reducing suffering is your ultimate goal, even if the "best way not to suffer is not to live", it doesn't necessarily follow that the most effective way to reduce suffering (given limited resources) is stopping beings from coming into existence.
For example, an intervention to help people in poor countries detect particularly painful congenital defects before birth and terminate those pregnancies might reduce suffering and satisfy your assumptions; but if it's expensive, it might be more effective to reduce the suffering of existing people instead, for example by providing relatively cheap pain relief to people with late-stage cancer.
Or if, for the same cost, you could either cause x factory-farmed chickens to be raised free-range/organic or stop y factory-farmed chickens from being born, there are probably some values of x and y for which you'd choose the first option.
I agree with you. Anyway, with that in mind, what charities would you recommend for me to support specifically?
You mentioned "vegan advocacy" it doesn't seem to me effective in principle I think it's not worth debating and it doesn't change anything. Besides, I'm not sure how real animal suffering is compared to human suffering, due to the fact that consciousness plays a big role in the experience of suffering. I'm guessing that animals do suffer, so I'd love to hear about specific ways to act so that they are not born, but I'd still like to hear about an alternative that focuses on the problem of human existence.
Regarding "the problem of human existence", it sounds like you'd get along with the people at the Voluntary Human Extinction Movement. I have no reason to think that organization is particularly effective at achieving its goal, but they might be able to point you in the right direction. There's also Population Connection, which I don't believe shares the same end goal but may (or may not) be taking more effective steps in the same direction.