Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (an animal ethics student group), enthusiastic donor.
Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed).
Welfarism vs. abolitionism is a tough debate, for the reasons highlighted in this post. But this has become my reference article for when I stumble upon this thorny question in discussions with other advocates. It's useful, concise, and memorable thanks to its good use of concepts, structure, and lists, and quite entertaining to read thanks to the examples and anecdotes.
This is the kind of post I like: politely and concisely questioning an EA norm that has real-world consequences, without trying to answer all the questions. I'm interested to see if there will be further discussion of this in the comments (for now, I won't risk a position on this; I find myself modestly agreeing but don't have much to add).
Hi Zoe! Nice article; thank you for supporting the World Day for the End of Fishing and Fish Farming. By the way, I'm not seeing the link to the article on your website, so I'll leave it here for those who are curious.
Yes, I agree with that! This is what I consider the core concern regarding X-risk. So instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question.

Moreover, I think everyone maintains some minor uncertainties about this: even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).
I hope I didn't come across as excessively nitpicky. I was motivated to write by the impression that in X-risk discourse, there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell; I really appreciate your blog!
[...] whether it would be good or bad for everyone to die
I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussions and I find it a bit confusing. Depending on which animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk is not about the badness of death for the individuals whose lives would be somewhat shortened, because that case would not seem compelling, especially when trying to take into account the welfare and interests of most individuals on Earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems very close to what we already have, so it isn't obvious that this is what makes X-risks intuitively bad.
(To be clear, I'm not saying "animals die, so X-risk is good." My point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad. And, though I'm much less sure about this, to my understanding that is not what initially motivated EAs to care about X-risks, as opposed to the possibility of creating a flourishing future or other considerations I know less well.)
[1] Note that I supposed "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think it's bad in general that individuals will die, regardless of whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.
My bad, I wasn't very clear when I used the term "counterargument"; "nuance" or something else might have fit better. The article doesn't dispute that without humans, there won't be any species concerned with moral issues; rather, it makes the case that humans are potentially so immoral that their presence might make the future worse than one with no humans. That is indeed not really a "counterargument" to the idea that we'd need humans to fix moral issues, but it does argue against the point that this makes a positive future more likely than not (since he argues that humans may have very bad moral values, and thus ensure a bad future).
If you are interested, Magnus Vinding outlines a few counterarguments to this idea in his article about Pause AI (though of course he's far from alone in having argued this; his is just the first post that comes to mind).
I have a question, followed by the consideration that motivates it, which is itself framed as a second question you can answer if you like.
If an existential catastrophe occurs, how likely is it to wipe out all animal sentience on earth?
I've already asked that question here (and also to some acquaintances working in AI Safety), but the answers have differed a great deal; it seems we're quite far from a consensus on this, so it would be interesting to see perspectives from the varied voices taking part in this symposium.
A less important question, but one that may clarify what motivates my main question: if you believe that a substantial share of X-risk scenarios entail animal sentience being left behind, do you then think that estimating the current and possible future welfare of wild animals is an important factor in evaluating the value of both existential risk reduction and interventions aimed at influencing the future?

A few days ago, I was planning to write a post on invertebrate sentience as a possible crucial consideration when evaluating the value and disvalue of X-risk scenarios, but then thought that if this factor is rarely brought up, it could be that I'm simply uninformed about the reasons why the experiences of invertebrates (if they are sentient) might not actually matter that much in future trajectories (aside from the possibility that they will all go extinct soon, which is why this question hinges on the prior belief that sentient animals will likely continue existing on Earth for a long time). There are probably different reasons to agree (or disagree) with this, and I'd be happy to hear yours briefly, though it's not as important to me as my first question. Thank you for doing this!
I've already registered! This is an exciting opportunity to learn more about Animal Welfare Economics and, who knows, perhaps meet some fellow EAs during the breaks!