JoA🔸

Donor and activist @ World Day for the End of Fishing and Fish Farming
88 karma · Joined · Pursuing a graduate degree (e.g. Master's) · Paris, France

Comments (10)

I have a question, followed by a consideration that motivates it, which is itself framed as a question that you can answer if you like.

If an existential catastrophe occurs, how likely is it to wipe out all animal sentience on earth? 

I've already asked that question here (and also to some acquaintances working in AI Safety), but the answers have differed quite a bit - it seems we're quite far from a consensus on this, so it would be interesting to see perspectives from the varied voices taking part in this symposium.

A less important question, but one that may clarify what motivates my main question: if you believe that a substantial part of X-risk scenarios entail animal sentience being left behind, do you then think that estimating the current and possible future welfare of wild animals is an important factor in evaluating the value of both existential risk reduction and interventions aimed at influencing the future? A few days ago, I was planning to write a post on invertebrate sentience as a possible crucial consideration when evaluating the value and disvalue of X-risk scenarios, but then thought that if this factor is rarely brought up, it could be that I was personally uninformed about the reasons why the experiences of invertebrates (if they are sentient) might not actually matter that much in future trajectories (aside from the possibility that they will all go extinct soon, which is why this question hinges on the prior belief that sentient animals are likely to continue existing on earth for a long time). There are probably different reasons to agree (or disagree) with this, and I'd be happy to hear yours in short, though it's not as important to me as my first question. Thank you for doing this!

43% disagree

This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete. 

It seems to me for now that existential risk reduction is likely to be negative, as both human- and AI-controlled futures could contain orders of magnitude more suffering than the current world (and technological developments could also enable more intense suffering, whether in humans or in digital minds). The most salient ethical problems with the extinction of earth-originating intelligent life seem to be the likelihood of biological suffering continuing on earth for millions of years (though it's not clear to me whether it would be more or less intense without intelligent life on earth), and the possibility of space (and eventually earth) being colonized by aliens (though whether their values would be better or worse remains an open question in my view).

Another point (which I'm not certain how to weigh in my considerations) is that certain extinction events could massively reduce suffering on earth, by preventing digital sentience or even by causing the end of biological sentient life (this seems unlikely, and I've asked here how likely or unlikely EAs thought this was).

However, I am very uncertain about the tractability of improving future outcomes, especially considering recent posts by researchers at the Center on Long-Term Risk, or this one by a former researcher there, highlighting how uncertain it is that we are well-placed to improve the future. Nonetheless, I think that efforts made to improve the future, like the work of the Center for Reducing Suffering, the Center on Long-Term Risk, or the Sentience Institute, advocate for important values and could have some positive flow-through effects in the medium term (though I don't necessarily think that this robustly improves the longer-term future). I will note, however, that I am biased, since work related to the Center for Reducing Suffering was the primary reason I got into EA.

I am very open to changing my mind on this, but for now I'm under 50% agree because it seems to me that, in short:

  1. Extinction risk reduction could very well have negative expected value.
  2. Efforts to improve the value of futures where we survive might have some moderate positive effects in the short term.

Lots of uncertainties. I expect to have moved my cursor before the end of the week!

Thank you very much for answering both questions! This was clear and helpful.

Very interesting post, as is often the case with you. Insightful and pragmatic. However, I feel that a closer investigation of charities that effectively ensure that large herbivores are helped would be valuable. It's plausible that broader conservationist initiatives which have only part of their focus on wild herbivores could still have a larger effect than smaller charities that seem to work mostly at the individual level. In any case, I think it's likely that you're right, and if you are, it would be very interesting to see where donations are most likely to effectively increase the population of large herbivores. Do you currently have any idea of the potential effectiveness of those organizations?

I appreciated this post - I find it good to see arguments related to wild animal ethics developed in a framework that isn't strictly consequentialist. I was a bit surprised by the references to creating new ecosystems on other planets, as that seems to be a quite different matter and hadn't really been introduced in the post - but maybe your original writeup contained earlier references to this, which would have made the reference easier to follow?

I realized that there was not even one comment on this post, so I wanted to quickly drop in and say that this is one of my favorite posts I've read on the forum. It has stuck with me over the past months. I appreciate how it remains relatively simple in its categories while pointing out facts about our position in the world that we tend to take for granted.

It sometimes feels to me like, in fundamental philosophical debates about value in EA (such as the value of existence or the moral value of individuals from different species), the crux is a sort of core, visceral intuition (I especially feel this way about questions regarding outweighing disvalue with value, or the importance of existence) - and the main defenders of suffering-focused ethics acknowledge this at times, supporting their arguments with vivid real-life examples that aim at the gut. It seems unlikely that any consensus will emerge - there is clearly a majority view for now that goes against suffering-focused views - but it is interesting to remind ourselves where we are all speaking from, and this might be something that individuals with very different ethical positions can agree upon. An example of where this may help is in debates over whether the claim that "we are biased against taking suffering seriously because we have experienced little to no extreme suffering" is equally true as "we don't realize how important happiness is because we haven't experienced extreme happiness". I think that reminding ourselves that we are among the happier individuals existing on this planet right now makes it more likely that we will consider that we might, by default, be more prone to ignoring the intensity that suffering often reaches than the intensity that happiness often reaches.

Very engaging post! I appreciated how it covered the many different aspects of the decision-making and transition process: rational, practical, social and emotional. I feel like this would have value to animal advocates who are not particularly interested in EA, as it would be a very concrete way of introducing the difficult questions one grapples with when considering impact. However, I am not sure where else this could be posted in order for members of this wider audience to access it.

I hope this is not too much of a digression from the core of the post, but I was struck to see that you cited Brian Tomasik's article as being more or less the spark that set the organization off on the course of reevaluating its interventions, and eventually changing its domain of action. I often notice individuals in EA organizations - or non-EA animal advocates - citing Tomasik as someone who has led them to reevaluate their considerations, and sometimes even to change the type of interventions they put in place. He also seems to have been the first to advocate for earning-to-give (in 2006), appears to be one of the most cited advocates for reducing wild-animal suffering (2009), created the second-ever table attempting to evaluate the direct suffering caused by animal foods (2007), and wrote the first article dedicated to s-risks (2013). These things have all been substantially expanded upon since, and some, such as Wild Animal Welfare, are even considered by some as EA causes in their own right. Would I be wrong in considering Brian Tomasik's influence as having been comparably far-reaching within the movement (especially on the level of ideas) as Toby Ord's or Nick Bostrom's?

(Side note: I know this is probably not the most important subject to think about, but I find it helpful to get a clearer picture of where the core concepts and claims that make up Effective Altruism come from, in order to be more aware of the contingencies of the movement. Trying to vaguely keep track of this also helps me reflect on the influence that sharing ideas can have - and even at a surface level, Tomasik's record seems out of the ordinary in that respect, especially for someone who isn't much of a public figure.)

I suspect that all of the species that are currently of significantly debated consciousness—call them swing state species—are conscious. This would include crabs, lobsters, fish, and most insects, but it wouldn’t include oysters or AI. There’s fairly widespread agreement that such beings aren’t conscious.

Since he states here (and on another occasion in the article) that oysters aren't conscious, he most likely believes that it's not morally wrong to eat oysters (and he probably also includes mussels and other bivalves in this category).

Several factors make me confident regarding the importance of this choice: the sheer scale and intensity of the suffering involved, the lower cost of helping nonhuman individuals in farms compared to humans, and the comparatively small size of the animal welfare / advocacy movement, which gives $100m a potentially more important long-term impact.
