Sorry for the late response. I don’t actually think that non-utilitarian intuitions/principles are necessarily more evolutionarily biased than utilitarian principles. I think certain deontological precepts (like Kant’s categorical imperative) could also be less vulnerable to evolutionary debunking arguments than ‘common-sense’ moral intuitions, for example. I don’t think it’s as easy to argue that something like this, or the principle of Universal Benevolence, is the product of natural selection. It could be, but it seems we have less reason to think it is. And if ethics is about how we ought (in a reason-implying sense) to live, then focusing on what we have most reason to do is sufficient.
Once we’ve reasoned about “who counts?”, we can then move on to “what counts?”
I think hedonism is the most defensible answer to “what counts?”, and when you combine that with plausible answers to “who counts?”, you arrive at hedonistic utilitarianism.
Great comment, thanks for clarifying your position. To be clear, I’m not particularly concerned about the survival of most particular worldviews as long as they decline organically. I just want to ensure that there’s a marketplace in which different worldviews can compete, rather than some kind of irreversible ‘lock-in’ scenario.
I have some issues with the entire ‘WEIRD’ concept and certainly wouldn’t want humanity to lock in ‘WEIRD’ values (which are typically speciesist). Within that marketplace, I do want to promote moral circle expansion and a broadly utilitarian outlook as a whole. I wouldn’t say this is as neglected as you claim it is — MacAskill discusses the value of the future (not just whether there is a future) extensively in his recent book, and there are EA organisations devoted to spreading moral values. It’s also partly why “philosopher” is recommended as a career in some cases.
If we want to spread those values, I agree with you that learning about competing philosophies, ideologies, cultures and perspectives (I personally spend a fair bit of time on this) would be important, and that lowering language barriers could be helpful.
It could also be useful to explore whether there are interventions in cultures that we’re less familiar with that could improve people’s well-being even more than the typical global health interventions that are currently recommended. Perhaps there’s something about a particular culture which, if promoted more effectively, would really improve people’s lives. But maybe not: children dying of malaria is really, really bad, and that’s not a culture-specific phenomenon.
Needless to say, none of the above applies to the vast majority of moral patients on the planet, whether they’re factory-farmed land animals, fishes or shrimps. (Though if we want to improve, say, shrimp welfare in Asia, learning local languages could help us work and recruit more effectively as well as spread values.)
Thanks for your reply! Firstly, there will be many EAs (particularly from the non-Anglosphere West and non-Western countries) who do understand multiple languages. I imagine there are also many EAs who have read world literature.
When we say that EAs “mostly” have a certain demographic background, we should remember that this still means there are hundreds of EAs who don’t fit that background at all, and they shouldn’t be forgotten. Relatedly, I (somewhat ironically) think critics of EA could do with studying world history, because it would show them that EA-like ideas haven’t just popped up in the West by any means.
I also don’t think one needs to understand radically different perspectives to want a world in which those perspectives can survive and flourish into the future. There are so many worldviews out there that you have to ultimately draw a line somewhere, and many of those perspectives will just be diametrically opposed to core EA principles, so it would be odd to promote them at the community level. Should people try to expand their intellectual horizons as a personal project? Possibly!
Thanks for your reply! I’m not saying that EA should be able to exclude others’ visions because others are doing so. I’m claiming that it’s impossible not to exclude others’ visions of the future. Let’s take the pluralistic vision of the future that appeals to MacAskill and Ord. There will be many people in the world (fascists, Islamists, evangelical Christians) who disagree with such a vision. MacAskill and Ord are thus excluding those visions of the future. Is this a bad thing? I will let the reader decide.
Alternatives to QALYs (such as WELLBYs) have been put forward from within the EA movement. But if we’re trying to help others, it seems plausible that we should do it in ways that they care about. Most people care about their quality of life or well-being, as well as the amount of time they’ll have to experience or realise that well-being.
I’m sure there are people who would say they are most effectively helping others by “saving their souls” or promoting their “natural rights”. They’re free to act as they wish. But the reason that EAs (and not just EAs, because QALYs are widely used in health economics and resource allocation) have settled on quality of life and length of life is frankly because they’re the most plausible (or least implausible) ways of measuring the extent to which we’ve helped others.
I think reason is as close to an objective tool as we’re likely to get and often isn’t born from our standpoint in the world or the culture we grow up in. That’s why people from many different cultures have often reached similar conclusions, and why almost everyone (regardless of their background) can recognise logical and mathematical truths. It’s also why most people agree that the sun will rise the next morning and that attempting to leave your house from your upper floor window is a bad idea.
I think the onus is on advocates of these movements to explain their relevance to “doing the most good”. As for the various 20th-century criticisms of utilitarianism, my sense is that they’ve been parried rather successfully by other philosophers. Finally, my point about utilitarianism being just as modern is that it hasn’t in any way been superseded by these other movements — it’s still defended and applied today.
I’m sure Thorn does do this (I haven’t watched the video in full yet), but it seems more productive to criticise the “EA vision of the future” than to ask where it comes from (and there were EA-like ideas in China, India, Ancient Greece and the Islamic world long before Bentham).
MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values. Clearly, some people don’t like what they think is the “EA vision of the future” and want their vision to prevail instead. The question seems to imply, though, that EAs are the only ones who are excluding others’ visions of the future from their thinking. Actually, everyone is doing that, otherwise they wouldn’t have a specific vision.
My comment mainly referred to the causes we’ve generally decided to prioritise. When we make cause prioritisation decisions, we don’t ask ourselves whether something is a “leftist” or “rightist” cause area.
I did say that EAs may engage in party politics in an individual or group capacity. But they’re still often doing so in order to advocate for causes that EAs care about, and which people from various standard political ideologies can get on board with. Bankman-Fried also donated to Republican candidates who he thought were good on EA issues, for example. And the name of the “all-party” parliamentary group clearly distinguishes it from just advocating for a standard political ideology or party.
I don’t quite see how existentialism, structuralism, post-structuralism and fascism are going to help us be more effectively altruistic, or how they’re going to help us prioritise causes. Communism is a different case as in some formats it’s a potential altruistic cause area that people may choose to prioritise.
I also don’t think that these ideas are more “modern” than utilitarianism, or that their supposed novelty is a point in their favour. Fascism, just to take one of these movements, has been thoroughly discredited and is pretty much the antithesis of altruism. These movements are movements in their own right, and I don’t think they’d want EAs to turn them into something they’re not. The same is true in the opposite direction.
By all means, make an argument in favour of these movements or their relevance to EA. But claiming that EAs haven’t considered these movements (I have, and think they’re false) isn’t likely to change much.
I sympathise with this position. Impartiality is a key tenet of EA. At the same time, EA already tolerates outright speciesism (people, including a number of high-status individuals within the community, who explicitly say that they value non-humans less than humans not because of sentience, but simply because they are members of a different species). Moreover, as Jason says, these people would have still been recipients anyway.