Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (animal ethics student group), FutureKind AI Fellow, freelance translator, enthusiastic donor.
Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed). Reasonably clueless about this.
"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik
Hi Zoe! It's thrilling to meet others with an interest in invertebrate welfare (doesn't happen every day), and congratulations again on donating to a cause that is rarely considered appealing! Unsurprisingly, there's really no consensus on what one should do for animals in the face of AGI. However, there's a lot of exchange about what AI could mean for animals on the Sentient Futures Slack, and if you have some thoughts you want to share about this, I'm sure there are many members there (including me) who'd be happy to read your current takes on the topic!
Thanks for sharing your thoughts on this important topic! I enjoyed reading it, and I loved the reference to the supporter/fan framework from Mjreard.
My feelings: the EA community is somewhat better than average at 1 and 2, and most definitely better at 4 in certain cases (cultivated meat is a good example, as EA circles very often argue for it on non-animal-welfare grounds). I agree that 5 is an EA failure mode (I'm on the obsequious/sycophant side; I'm a bit cowardly and think being "fully transparent" about your values often leads to unproductive conflict), and 3 is something where improvement could be made (though it's hard-ish for longtermist causes, or even when you care about animals - heck, even discussing the welfare of cute mice seems to backfire).
Again, this is an important topic: how we influence others positively could have a large impact, and taking a few hours to think about how to do it right is certainly worth it.
"3. The psychology of professional sports is surprisingly healthy."
This thesis is one of the most insightful community-related things I've read on the Forum. I'd love to read more about it, and to hear whether you think there's anything actionable on the margin (highly de-emphasizing careers within EA orgs in outreach material, especially now that top impact may have moved elsewhere, e.g., to high-impact non-EA roles?). Thanks!
In the spirit of thanking whoever helped you: this post is what finally convinced me to donate substantially ($1,500+ since then) to charities working on limiting the growth of insect farming when I read it in February 2025. And yet, I had already engaged substantially with work on insect suffering, especially Tomasik's. I'm not sure what pushed me over the edge, but this post really managed to make me take the current evidence seriously enough to let it influence my priorities.
I'm not well-read enough on this, but it seems you would appreciate this very cool recent post, which (badly summarized) explains how one can still have forms of moral action-guidance even if some sound moral theories (in your post, a form of impartial consequentialism?) imply that "none of our actions matter": Resolving radical cluelessness with metanormative bracketing.
Really cool! Love these breakdowns that go into the weeds of measurable impact.
"Programming: Some more focus on EA’s past achievements would have been beneficial in making a more compelling case for the movement as a whole."
I intuitively agree ("track record" seems like one of the strongest arguments for EA - obligatory Scott Alexander reference - especially in global health and farm animal welfare), but I wonder what makes you say that. Was it something you saw in the feedback form?
Welcome to the Forum, Zoe! I guess my knee-jerk response would be that while I agree these are significant problems with EA branding, I don't think most of them have easy, tractable answers (sadly a common occurrence among EA branding problems imo, e.g., "longtermism" being perceived as callous toward present-day issues).
"Hive mind" seems hard to avoid when building a community of people working toward common goals that are somewhat strange. "Holier-than-thou" is almost inevitable in "doing the most good one can" (and EA seems in fact quite relaxed by this standard, though your specific criticisms of the 10% pledge were interesting to read). "Sounds like AI", however, is probably fixable, and individuals could make some efforts, in the age where "AI-like writing" is increasingly criticized, to have a slightly warmer style, and maybe to de-emphasize bulletpoints somewhat? (less sure about this, I like bulletpoints)
But above all, I want to say: congratulations on your yearly donations! Even if it's not the holy grail of 10%, $10K a year is absolutely no joke, and giving 10% is far from having become an EA norm anyway. This level of donations, and the plan to keep going, is rare and precious. Thank you for doing so much for others!
Interesting question! Might the Kurzgesagt video on factory farming count as an example of this for animal welfare? If someone wants to do it again, they could try to assess what they think the video did right (and wrong) and improve upon it. Maybe some cues on messaging could be taken from Lewis Bollard's fairly successful appearance on the Dwarkesh podcast?
Also, a potential reason why AI safety focused on it (compared to other cause areas) might be that they have pipelines which can absorb a fair number of people, so they find it more worthwhile to launch broad outreach that could get a few dozen counterfactual people applying to fellowships and the like. This may be less the case for other causes when it comes to talent - I assume that for animal welfare and global health, the informal theory of change behind funding a high-quality video would be rather donation-focused. However, I could be wrong about the talent-pipeline reason, and maybe some content-creation funders mostly want to raise broad awareness of AI risk (this seemed to be the case for the Future of Life Institute).
I think this is a very compelling (and enjoyable) essay. I particularly appreciate the first point of 2.1 as an intuitive reminder of the complicated empirical issues at hand. The main argument here is strengthened by this concrete way of highlighting that doing (impartial) good is actually complicated.
I appreciate the effort made here to highlight alternatives to long-term EV maximization with precise credences, since the lack of "other options" can be a big mental blocker. Part 3 (and, to an extent, the conclusion) seems to constitute the first solid high-level overview of this on the Forum, so it's quite helpful. Not to mention, these sections act as serious reminders of how important it is to "get it right", whatever that ends up meaning.
This is going in my personal best-of for Forum posts of 2025! You explore crucial considerations and possible responses in a clear and transparent way, with pleasant sequencing. I find it very helpful for being less confused about my reactions in the face of backfire effects.