I recently completed a PhD exploring the implications of wild animal suffering for environmental management. You can read my research here: https://scholar.google.ch/citations?user=9gSjtY4AAAAJ&hl=en&oi=ao
I am now considering options in AI ethics, governance, or the intersection of AI and animal welfare.
As a (wild) animal welfare person, I am disappointed to see this. Your comment was thoughtful and well-intentioned.
In general I'd expect animal welfare people to be more likely to disapprove of certain views, or to take a more combative attitude in public debates, because so much of normal discourse sneaks in speciesist assumptions and is actively harmful to animals. But I don't think that's the explanation here - I largely agree with your comment.
To respond to your original comment: I think with a bit of creativity you will be able to find politically tractable interventions. For example, people tend to view humane management of animals in cities quite positively. There's also a growing movement for compassionate conservation. It's more focused on doing no harm than on actively helping wild animals, but at least it's a step towards thinking about wild animals' welfare. I do think there will often be a tradeoff between effectiveness and political tractability, though, and it may be worth pursuing sub-optimal interventions for a while in order to build political momentum towards helping wild animals.
You’ve said you’re in favour of slowing/pausing, yet your post focuses on ‘making AI go well’ rather than on pausing. I think most EAs would assign a significant probability that near-term AGI goes very badly - with many literally thinking that doom is the default outcome.
If that's even a significant possibility, then isn't pausing/slowing down the best thing to do no matter what? Why be optimistic that we can "make AGI go well" and pessimistic that we can pause or slow AI development for long enough?
I enjoyed this post a lot while reading it, but after reflecting (and discussing with my local group) I feel more unsure. Consider that we can ask whether we should encourage 'heroic responsibility' and try to foster this kind of radical, positive altruism at three different levels:
1. Personally, as an individual
2. Within EA
3. Within society as a whole
The post seems to argue for all three. It talks specifically about the need for a cultural shift. I feel very convinced of (1) (I'd value this highly for myself), I'm less convinced of (2), and I feel quite unconvinced of (3).
Heroic responsibility & burnout
I think it's quite clear that it would be beneficial if this way of thinking became widespread in EA and society at large. But it's less clear whether that's a realistic expectation. I actually see a lot of risks to encouraging heroic responsibility within EA; EA pivoted away from heroic responsibility toward more toned-down messaging about doing good quite intentionally. As kuhanj notes, without the positive, enjoying-the-process attitude argued for in part 2, there's a risk that heroic responsibility leads to burnout. And it seems to me that enjoying the process is actually not always that easy: meditation just isn't for everyone; I've meditated for a number of years and can't say it transformed me. I would be happy to see workshops on this at EA retreats, but it doesn't seem worth asking all EAs to spend large amounts of time on this when we're not sure it'll work, and the current strategy of simply not asking people to take on all the world's problems also works OK. The movement might also be very off-putting to people new to EA if it seemed to ask this much of them.
Is heroic responsibility learned or innate?
I also think that heroic responsibility might be determined more by genes or early childhood experiences than by anything else. The examples of heroes don't seem to be of people who arrived there through some deep insight - rather, these are people who were motivated by justice to begin with. I know that for myself, I am more motivated in this way than my siblings are now, but I was also more motivated when I was 10 years old. Resources spent trying to transform people in this way might be wasted; they might be better spent encouraging people who already have this disposition to join EA.
This is awesome. I really liked how you considered both short-term and long-term, clear and diffuse effects, and noted how they changed your confidence.
It seems like this should be highly valuable for:
I agree with @david_reinstein that it would be nice to see this made into a more visually polished and navigable form, but in terms of the content itself I found it very easy to understand the reasoning and assessments.
I think this may have been a misunderstanding, because I also misunderstood your comment at first. Initially you refer simply to the people who play the biggest role in shaping AGI, but later (and in this comment) you refer to the people who contribute most to making AGI go well - a very important distinction!
That's fair. I would love it if we had data on this, and to be honest I am unsure about whether being strictly vegan is always right - my stronger objection to this article was about not being strictly vegetarian. That is easier to do, and I think it is perceived as less strict, at least in Western societies. On the other hand, as I said in another comment, I think it's very hard to eat meat and fully internalise nonspeciesism at the same time. A true nonspeciesist should be disgusted by meat, because that's literally a dead body in front of you. So I think it's worth being strictly vegetarian primarily to reinforce your own values, internally - but also for the signalling effect.
Hey, I agree that many people associate veganism with 'annoying people'. But that's actually...more reason to call yourself vegan, if you're not an annoying person yourself! Break the stereotype, and normalise standing for vegan values :)
My sense is that a lot of people in EA are against factory farming but still buy into human supremacy and are OK with free-range farming. For them, the 90% approach reflects the appropriate attitude and is fine. But for those like myself who have long-term hopes of ending animal exploitation altogether, I think it makes sense to signal that we oppose all of it. Requiring others to be strict is certainly counter-productive, though. I also don't think change has to be all or nothing - I actually think it's really good for people who sometimes make exceptions to call themselves vegan.
I think this kind of signal might work for high-functioning EAs, but not for your average person. It's too complicated: "I don't want to participate in a practice that harms animals" is much easier to understand.
By the logic you've expressed in the post, I think you could also consider eating leftover meat, meat that's free, meat from someone you know... so it gets complicated. My expectation is that most people who see such behaviour will think this person kind of cares about animal welfare, but only a bit.
That all said, I think (although I'm uncertain) that reason (1) in my last comment might actually be the most important.
I felt very discouraged when I heard that there were over 1300 applications for the GovAI Winter Fellowship. But now I'm frankly appalled to hear that there were over 7500 applications for the 2025 FIG Winter Fellowship.
Should we officially declare that AI governance is oversaturated, and not recommend this career path except for the ultra-talented?