Author’s Note: I started writing this about an hour before posting it. I really do mean it as quick, initial thoughts I might expand on at some point. Every point here has probably been made elsewhere, and some of the points might not even be that good, but I really needed to get this off my chest. Take that how you will; maybe think of it more as a comment or informal rant than a normal post. Hopefully it is still worth putting out.

Edit 3/17/22: I wrote this piece quickly, so it’s pretty sloppy in places, and I may add some corrections/clarifications in the comments in retrospect. The first one is on my discussion of animal welfare as a cause area. I say that animal welfare sticks out a bit as not fitting in with the other really big cause areas, but what I mean specifically is factory farming; I think more speculative, high-EV subcauses like wild animal welfare fit in pretty well with the arguments for the long-term future cause area.

I also want to emphasize that I’m not saying factory farming isn’t easily one of the most important cause areas. I’m saying it has a fairly middle-child feel when you look at the other big cause areas and the most widely recognized reasons for them. It is probably higher-EV work than global health and development, but probably not as high-EV as the long-term future. It has a more concrete, present-day payoff than the long-term future, but not as concrete as global health and development. One perspective is that it is popular precisely because it finds this happy middle, but I find this relatively unpersuasive.

I think that, just as people have identified the less purely expected-value-driven, squishier reasons some stick with global health and development rather than the long-term future, like wanting a more certain payoff and avoiding the weirder places the extremes lead, a good story of this sort is needed for how factory farming reached this “big three” status as a cause area. The one that fits my experiences best, and that I haven’t seen given the emphasis I think it warrants, is that it is a cause area with an unusually strong justice side.

I have lots of thoughts on this recent ACX post, but I don’t want to get into all of them. Having studied ethics, I know there is a more formal definition of “justice”, at least in analytic philosophy, that I could reference in discussing the merits of this post, but…mostly that is not what interests me. I share Alexander’s basic theoretical suspicion of the distinction it seems to draw on, and I think that insofar as he touches on what justice might formally mean, it is close enough to this formal definition that it isn’t really where any of my issues with or responses to this post lie.

Actually, I want to zoom out and get even less formal about the whole thing. Putting aside social posturing and high theory for a moment: what is the feeling “justice” is trying to evoke in the cases Alexander highlights, and what is the activist project here? This, it seems to me from the post, is something Alexander can’t relate to as well as I do. The invocation of police, of there being no saints, and everything else doesn’t match my perspective at all.

So, I want to look at an example. The Effective Altruism movement has three big cause areas that always get mentioned: global health and development, animal welfare, and the long-term future. If you just read the websites and the relevant EA explanations, animal welfare always looks a bit like the odd one out. Want to help in as speculatively big a way as possible from an EV perspective? Go for the long-term future. Find some of that too weird and want to help with present problems in as reliable, measurable a way as you can? Go for global health and development.

Even the characterization of animal welfare as the “neglectedness” representative of the trio falls a bit flat: tons of extremely promising EA cause areas in both of the other camps, from mental health in Africa to biosecurity, are at least as neglected. Maybe this isn’t fair; after all, animal welfare is the most neglected broad category of cause areas EA is interested in. But if animal welfare weren’t around, the most neglected would have been whichever of the other two is more neglected. It isn’t clear why you need the trio, so some better explanation of its unique advantages seems needed.

When talking to fellows new to the movement about Effective Altruism, I have been self-conscious about the fact that listing your movement’s big cause areas as “helping the poorest people in the world, saving humanity, and also making farms nicer” sounds like it needs some explanation, and I have never been able to give the standard explanations for why animal welfare is such a big deal for so many EAs with a straight face. I think the real answer, which doesn’t mesh very cleanly with the standard narratives, is justice.

When Effective Altruists look at the world, they see lots of cases of unacceptable neglect and apathy, and deep power differentials between possible beneficiaries and possible benefiters. Oh, and they also see sentient beings more numerous than the humans alive on Earth being actively, purposely subjected to non-stop torture for minor benefits to humans (benefits that probably aren’t even net positive for humans), heavily normalized by culture, and in which nearly everyone of moderate affluence on Earth is complicit. The former types of issues can be given a justicey spin, but once you buy the right moral premises, the latter category screams “justice issue”. Ignoring this dimension makes it hard to see why animal welfare is such a popular cause area; indeed, many passionate Effective Altruists I have run into, whether they work on it directly or not, turn out to have a special, very personal investment in it when you talk to them. I would include myself in that category. The answer is that Effective Altruists, too, are only human.

Changing topics a bit, consider a common type of Effective Altruist thought experiment. Say you have been transported back to 1800; what does the Effective Altruist of the time do? A prime candidate is to work in some way on making the industrial revolution go well: figure out ways to mitigate long-term environmental harms, amp up growth and promote it outside the Euro/Anglosphere, help build good infrastructure. An Effective Altruist might well give this answer, but they also might give the answer that pretty much everyone other than the Effective Altruist would give: devote your life to abolitionism. The industrial revolution was an incredibly important point in history that has probably had a much bigger overall impact if you look at the whole of the future past that point, but abolitionism screams justice.

This is where the biggest misunderstanding of Alexander’s piece jumps out at me: he laments by the end that if we view the world as being about justice, we have no saints, no real heroes, just minimally decent people. Fascinating theory, but look at the list of our actual cultural heroes for a moment: King, Gandhi, Lincoln, Mandela, the list goes on and on. Lots of our cultural heroes were just impressively talented people, artists, scientists, and so on. But those we revere the most, and with the most ethical reverence in particular, were champions of justice. It is an ongoing struggle of the Effective Altruist movement to get people like Stanislav Petrov and Norman Borlaug a fraction of the spotlight that justice champions receive.

The reason there are now so very many fill-in-the-blank justices is pretty obvious if you look at it this way. Every movement, even the ones with the very best arguments that their cause is fantastic, wants to find at least some little piece of justice in their issue that they can rope in to tide over their haggard activists.

There is maybe a meta point implied by Alexander’s piece that is still relevant on this framing: that this “justice” thing doesn’t capture all that matters. Indeed, even if you managed to find an angle for “justice” in everything worth caring about, the issues that most scream justice would not map cleanly onto the ones most worth caring about. This is concerning, especially if you aren’t compelled by the idea that justice tracks some genuine, independently important thing, as many people intuit it does. It seems that in order to confront this, however, the best thing to do is first recognize how deep the thing you are worried about runs in people. Justice is not about viewing the world in terms of cops and bad guys; it is about having a bleeding heart.

Comments

I think I disagree with the core premise that animal welfare is the odd one out. That animals have moral worth is a much smaller buy than the beliefs needed to accept longtermism.

For reference, I think the strongest case for longtermism comes when you accept claims that humanity has a non-zero chance of colonising the universe with digital human beings. It makes perfect sense to me that someone would accept that animals have high moral worth, but not the far-future stuff.

I don't think a justice explanation predicts why EAs care about animals better than the object-level arguments for caring about them do.

I mostly agree. I don’t think I was super clear in my initial post, and I have edited it to try to clarify what I mean by the “odd one out”. To respond to your point more specifically, I also agree that the reason for caring in the first place is just the strong arguments in favor of caring about non-humans, and I even agree that the formal arguments for caring about non-human animals are probably more philosophically robust than those for caring about future generations (at least in the “theory X”, no-difference-made-by-identity way longtermists usually do). To be clear, I think the reason the cause area is the odd one out on the EA formal-arguments side is different from the reason it is the odd one out when describing EA to outsiders; I just think that when an outsider finds the cause area weird on the list, it becomes hard to respond if the formal arguments are also less well developed as to which dimension factory farming dominates the other two areas on. I hope this clarifies my position somewhat.

I understand the post's claim to be as follows. Broadly speaking, EAs go for global health and development if they want to help people through rigorously proven interventions. They go for improving the long-term future if they want to maximize expected value. And they go for farmed animal welfare if they care a lot about animals, but the injustice of factory farming is a major motivation for why many EAs care about it. This makes a lot of sense to me and I wholeheartedly agree.

That said, I think the selection of the main three cause areas – global health and development, farmed animal welfare, and existential risk reduction – is largely a product of the history of EA. Global poverty and factory farming come from Peter Singer, evidence-based global health charities come from GiveWell, and existential risk reduction comes from Bostrom and Yudkowsky. (That’s my guess, anyway.) Climate change mitigation manages to be considerably less popular within EA, even though it's much more popular among the general population and seems at least roughly as cost-effective as global poverty interventions. The mental health charities identified by Founders Pledge and the Happier Lives Institute seem pretty good to me, but since mental health is a newer field of study within EA, it hasn’t had time to percolate throughout the community. And though EA philosophers have spent quite some time fleshing out the ideas of longtermism, many longtermist interventions besides extinction risk reduction remain under-explored.

This is a good summary of my position. I also agree that a significant part of the reason for the three major cause areas is history, but I think this answers a slightly different question from the one I'm approaching. It's not surprising, from the outside, that people who want to do good and have interests in common with major figures like Peter Singer are more likely to get heavily involved with the EA movement than people who want to do good and have other values/interests. However, from the inside, this doesn't give an account of why the people who do wind up involved with EA find the issue personally important; the answer is certainly unlikely to be "because it is important to Peter Singer". I'd count myself in this category: people who share values with major figures in the movement, who were in part selected for by the movement on this basis, and who also, personally, care a very great deal about factory farming, more so than even cause areas I think might be more important from an EV perspective. This is as much an account of my own feelings that I think applies to others as anything else.

Related thought: people having different definitions of "justice", where that word points to overlapping-but-not-identical clusters of moral intuitions.

Animal welfare maps best onto a cluster like "concern for the least-well-off" or "power for the powerless" or "the Rawls thing where if you imagined it happening to you, you'd hate it and want to escape it" or "ending suffering caused by the whims of other agents." That last one is particularly noticeable, since we usually have a moral intuition that suffering caused by other agents is preventable and thus more tragic.
