Hi Stephen. I’m also lacto-vegetarian. I take Vitamin D supplements (mainly for the reasons that they’re recommended for everyone) and an occasional Vitamin B complex or B3 supplement. I’ve considered taking algae-based Omega-3 supplements (in the form of DHA and EPA) but I don’t think the evidence is strong enough to justify the expense. My iron levels have consistently been fine without supplementation. I’ve found VeganHealth.org to be useful (I’d vouch for the quality of their evidence reviews). Ginny Messina is also worth reading (https://www.theveganr...
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant nonetheless describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstr...
I’m sorry to hear that you’ve been feeling this way, Linch. I’ve also been facing some of the difficulties that you describe. I’ll try to do the best I can but would welcome the input of people who are more knowledgeable than me!
In the professional work of the English Utilitarians, what stands out to me is perseverance. When Bentham’s Panopticon project (which was meant to be an improvement on the often cruel treatment of prisoners) failed to get off the ground, he moved on to other things such as education reform (advocating for an end to corporal punishm...
There has historically been some overlap between the charities that Open Phil and the Animal Welfare Fund have supported and ACE's recommendations, which suggests a degree of consensus. See also the discussion here, in which some endorse the changes that ACE has made to its methodology.
(Crossposted from FB)
Some initial thoughts: hedonistic utilitarians ultimately wish to maximise pleasure; in the process, suffering would be eliminated. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it's difficult to measure pleasure/suffering directly, preferences are used as a proxy.
But I aver that we're not very good at considering these tradeoffs. Most are framed as thought-experiments, in which we are asked to imagine two 'real-world' situations. Some people ma...
Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.
I know that it's a weak consideration, but I hadn't, before I read this, considered the argument for the scale of values spreading being larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge) so thanks for bringing that up.
I'm in agreement with Michael_S that hedonium and delorium should be...
Thank you for the interesting post; you provide some strong arguments for moral inclusivity.
I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impre...
And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.
I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.
People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don't arbitrarily discount the well-being of sen...
I don't think this gets us very far. You're making a utilitarian argument (or certainly an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, this is what a perfect utilitarian would do given the information that they have about their own limits - they're human, as you put it. As someone who believes that utilitarianism is likely to be objectively true, therefore, I already know not to be a perfectionist.
Ultimately, Singer put it best: do the most good that you can do.
The main problem with this post, in my view, is that in some places it's still trying to re-run the election debate. The relevant question is no longer which of Trump or Clinton posed the bigger risk or would cause more net suffering, but how bad Trump is on his own and what we can do to reduce the risks that arise from his Presidency.
I agree that Trump's views on Russia reduce global catastrophic risk (although his recent appointments seem to be fairly hawkish towards Russia). However, he'll likely increase tensions in Asia, and his vie...
Just a few thoughts.
Firstly, Trump's agricultural advisors seem to be very hostile to animal welfare. This may mean that we need more people working on farmed animal welfare, not fewer.
In terms of going into politics, the prospect of having a group of EAs, and perhaps even an EA-associated organization, doing regular, everyday politics may turn some people off from the movement (depending on your view on whether EA is net-positive or negative overall, this may be bad or good).
While Sentience Politics, the Open Philanthropy Project and some others I may h...
Yeah, I'd say Parfit is probably the leading figure when it comes to trying to find convergence. If I understand his work correctly, he initially tried to find convergence when it came to normative ethical theories, and opted for a more zero-sum approach when it came to meta-ethics, but in the upcoming Volume Three I think he's trying to find convergence when it comes to meta-ethics too.
In terms of normative theories, I've heard that he's trying to resolve the differences between his Triple Theory (which is essentially Rule Utilitarianism) and the other th...
It's also possible that people don't even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.
For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total, hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).
If everyone has similar perspectives, it could be a sign that we're on the right track, but it could be that we're missing some important considerations as you say, which is why I also think more discussion of this would be useful.
I disagree that biting the bullet is "almost always a mistake". In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.
Scientists have shown, for instance, that for many people, their intuitive rejection of entering the Experience Machine is due to the status quo bias. If people's current lives were being lived inside an Experience Machine, 50% of people would want to st...
I have a probably silly question about the EuroSPARC program: what if you're in the no man's land between high school and university, i.e. you've just left high school before the program starts?
I know of a couple of mathematically talented people who might be interested (and who would still be in high school), so I'll certainly try and contact them!
This essay by Brian Tomasik addresses this question further, looking at the overall impact of human activities on wild-animal suffering, and it includes the effect of factory farming in the analysis too. Whilst human impact on the environment may lead to a net reduction in wild-animal suffering (if you think that the lives of wild animals are significantly net-negative), the people whose lives are saved by the Against Malaria Foundation also have little impact on the environment, and so have little impact on the reduction of wild-animal suffering.
Thanks for the post. Having read it, I'm somewhat less confident that the meat-eater problem is a problem, though perhaps for different reasons. I still think that it is a problem overall, however. I'll just put my initial thoughts below.
It’s also plausible that interventions that raise incomes, like deworming, have a lower impact on meat consumption because they don’t raise the overall number of humans that would be eating meat over their entire lifetime.
The effect of raising income itself will still tend to increase meat consumption, though. There w...
I agree - it would be bizarre to selectively criticise EA on this basis when our entire healthcare system is predicated on ethical assumptions.
Similarly, we could ask "why satisfy my own preferences?", but seeing as we just do, we have to take it as a given. I think that the argument outlined in this post takes a similar position: we just do value certain things, and EA is simply the logical extension of our valuing these things.
I agree with Squark - it's only when we've already decided that, say, saving lives is important that we create health systems to do just that.
But, I agree with the point that EA is not doing anything different to society as a whole - particularly healthcare - in terms of its philosophical assumptions. It would be fairly inconsistent to selectively look for the philosophical assumptions that underlie EA and not healthcare systems.
More generally, I approach morality in a similar way: sentient beings aim to satisfy their own preferences. I can't suddenly dec...
I'm in agreement with you on the meat consumption issue: morality doesn't begin and end with meat consumption, but it's better to be vegan and donate lots to effective animal charities than to merely offset one's meat consumption, or to be vegan while splurging on fancy vegan meals. This seems to be the standard utilitarian stance. That's without taking into account the benefits of being vegan in terms of flow-through effects too, which have been discussed on this forum before. Personally, since I became essentially vegan, my family has had to reduce its m...
I approach utilitarianism more from a framework that, logically, I should be maximising the preference-satisfaction of others who exist or will exist, if I am doing the same for myself (which it is impossible not to do). So, in a sense, I don't believe that preference-satisfaction is good in itself, meaning that there's no obligation to make satisfied preferrers, just preferrers satisfied. I still assign some weight to the total view, though.
Interesting piece. I too reject the average view, but I'm currently in favour of prior-existence preference utilitarianism (the preferences of currently existing beings and beings who will exist in the future matter, but extinction, say, isn't bad because it prevents satisfied people from coming into existence) over the total view. I find it to be quite implausible that people can be harmed by not coming into existence, although I'm aware that this leads to an asymmetry, namely that we're not obligated to bring satisfied beings into existence but we're obl...
Thank you for this - I found it to be very useful. While I recognise the PR issue, I think it's also very important to explore all areas when it comes to cause-prioritization.
Could it not plausibly be the case that supporting rigorous research explicitly into how best to reduce wild-animal suffering is robustly net positive? I say this because whenever I'm making cause-prioritization considerations, the concern that always dominates seems to be wild-animal suffering and the effect that intervention x (whether it's global poverty or domesticated animal welfare) will have on it.
General promotion of anti-speciesism, with equal emphasis put on wild-animals, would also seem to be robustly net positive, although this general promotio...
I think this is an excellent post. The point about unnecessary terminology from philosophy and economics is certainly one I've thought about before, and I like the suggestion about following Orwell's rules.
On the use of the term rational, I think it can be used in different ways. If we're celebrating Effective Altruism as a movement which proceeds from the assumption that reason and evidence should be used to put our moral beliefs into action, then I think the use of the term is fine, and indeed is one of the movement's strong points which will attract peo...
Interesting test. I scored quite low in terms of political bias, but there's certainly a temptation to correct or over-correct for your biases when you're finding it very hard to choose between the options.
A discussion of moral philosophy may be important not only because morality is integral to EA in general, but because it illustrates how the movement is suitable for people with wildly different views on morality, from utilitarians/consequentialists to deontologists to those who take a religious view.
I'd say that this video of Peter Singer is quite a good, short overview of cause prioritization.
Very detailed!
I'm currently in between lacto-ovo vegetarianism and veganism in that I'm a lacto-vegetarian. This is only because I don't currently have a regular income (I'm still in high school), and attempting to replace dairy in particular has been quite an inconvenience.
So, my experience is that it is a lot less inconvenient to give up eggs than to give up dairy products, so perhaps you could try lacto-vegetarianism; but seeing as you are willing to go "95% vegan" and potentially "100% vegan", those options are probably better in consequentialist terms overall.
I've seen criticisms of effective altruism in which effective altruists have been criticised for supposedly donating a large proportion of their income simply to improve their image and make themselves look better. On that basis, it could be argued that EA should have a closer relationship to Maximum Selflessness, but even then, people could still accuse EAs of "being selfless" in order to improve their image.
On the other hand, if EA were centred around the concept of Maximum Selflessness, it could be perceived as too demanding. But, if the self...
Thank you for giving a realistic account of what it's like to be a doctor.
I'm considering studying medicine, so this was very helpful!
The Fish Welfare Initiative also works on improving shrimp welfare.