CB🔸

Independent researcher @ Effective Altruism France
896 karma · Working (6-15 years) · Lyon, France

Bio


I live in France. I learned about EA in 2018, found it great, and dug deep into the topic. The idea of "what in the world improves well-being or causes suffering the most, and what can we do about it" influenced me a whole lot - especially when combined with meditation, which allowed me to be more active in my life.

One of the most reliable things I have found so far is helping animal charities: farmed animals are much more numerous than humans (and have much worse living conditions), and there absolutely is evidence that animal charities are achieving real improvements (especially The Humane League). I have tried to donate a lot there.

Longtermism could also be important, but I think we'll hit energy limits before getting to an extinction event - I wrote an EA Forum post about that here: https://forum.effectivealtruism.org/posts/wXzc75txE5hbHqYug/the-great-energy-descent-short-version-an-important-thing-ea

How I can help others

I'm interested in whatever topic sounds really important, so I have a LOT of data on a lot of topics. These include energy, the environment, resource depletion, simple ways to understand the economy, limits to growth, why we fail to solve the sustainability issue, and how we got to this very weird, specific point in history.

I also have a lot of material on Buddhism and meditation, and on "what makes us happy" (check out the Waking Up app!).

Comments
281

Interesting, thank you.

On the second point, this reads as very optimistic (the way animals are treated in rich countries is just very bad). I agree that it may be easier to appeal to ethical values and develop alternatives now, but it's hard to know whether this will be enough to offset all the negative effects of 'more power and money = easier to buy animal products'. But I won't have much time to engage, and it's not that important since we can't change this part of the trajectory.

The post is interesting and well argued, but I am not sure I agree - one example I have in mind is Microsoft using AI to double the productivity of a shrimp farm, likely by increasing density.

Regarding this: "The industry also operates under finite resource constraints, including feed, water, energy, and land" - it is also possible that AI, by increasing economic growth and developing better energy sources, could indirectly increase animal consumption by giving people more resources.

I agree that animal welfare activists should use AI to boost their outreach, however.

This would be great!

Even better would be something dedicated to the topic of the impact AI will have on animals. It's very likely (unavoidable?) that most of the beings affected by AI will be animals (although artificial sentience could also be up there).

An AI aligned with humans but not with animals would have terrible effects on many beings in the world, so pushing for AI safety for humans alone is not enough to bring about a positive world.

The intersection of AI x animals seems promising, though.

Pain feels worse when it's conscious than when it's unconscious?

I mean, sometimes I have a stomachache that I barely notice and which stays mostly unconscious until I pay attention to it. And it doesn't motivate me to change much. However, someone whipping me really motivates me to move elsewhere - something I wouldn't do if the feeling were mostly unconscious (I'd mostly just step back by reflex). Probably for the same reason, I wake up when hit in my sleep.

So pain as a conscious valenced negative experience seems like a strong motivator to act on.

The fact that things can be perceived unconsciously is interesting, but if that were enough to survive in nature, I don't see many reasons why we humans would have developed conscious pain in the first place.

Thanks for the post! This is valuable.

I find the argument convincing: why would evolution not include something similar to pain in autonomous animals, and why would they display such similar behavior when in pain if they were not conscious?

I think work on animals is comparatively neglected, given the high numbers of individuals living in bad conditions. More specifically, the smaller the animals, the more numerous and neglected they tend to be, which leads to underfunding.


For the typical EA, this would likely imply donating more to animal welfare, which is currently heavily underfunded under the typical EA's value system.

Opportunities Open Phil is exiting from, including invertebrates, digital minds, and wild animals, may be especially impactful.

I strongly agree: the comparative underfunding of these areas has always felt off to me, given their very large numbers of individuals and the low-hanging fruit.
However, it feels like more and more people are recognizing the need for more funding for animal welfare, given the results of the recent debate.

Another comment: regarding the value of longtermist interventions, while I understand the numbers can be very high, my main uncertainty is that I'm not even sure a lot of common interventions have a positive impact.

For instance, is working against X-risks good if avoiding extinction would allow factory farming to continue? The answer will depend on many questions (will factory farming continue in the future, what is humanity's impact on wild animals, what will happen regarding artificial sentience, etc.), none of which have a clear answer.

Reducing S-risks seems good, though.

Very interesting and well formulated! It highlights several hidden assumptions that can significantly reduce your ability to have an impact.

Indeed, from what I've seen, the (natural) tendency to give very low moral value to other animals (e.g. less than 1/1000 that of a human) often stems from gut feeling, with justifications added afterwards.
