Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/
I didn't vote, but was (mildly) annoyed to find the linked post is partly paywalled, which makes the link-post feel uncomfortably spammy? I expect it would get a more positive reception if the content were reproduced on the forum, rather than directing people to a paid subscription.
Not that I recall. (Wild animal suffering is mentioned, but just to support things like bird-friendly window glass, and helping to protect endangered large herbivores that plausibly have better lives than predation-prone smaller animals that might replace them.)
It may make a big difference here whether one is coming from a "commonsense" moral perspective (on which brutal killing is intrinsically wrong) or a more consequentialist perspective (on which an overall positive life is better than no life at all, as also discussed in this comment).
Of course, we can all agree that it would be better to prevent net-negative lives from existing in the first place. But the strict anti-killing stance that would oppose even net-happy farmed lives does not strike me as "more effective at helping animals". IMO, you don't help someone by preventing them from having an overall happy life, even if that life also contains some bad experiences. We should want to prevent bad experiences only when all else is equal, not when doing so also prevents greater positive experiences for the same individual.
That's just the calf separation issue I mentioned, right? That's a shame, but I wouldn't lump them together with factory farms (which I associate with daily mistreatment and overall negative quality of life).
It's easy to find Vital Farms eggs at Whole Foods or Amazon Fresh, and they are pasture-raised.
Yeah, it is interesting. He actually begins chapter 4 ('Living Without Speciesism') with a section on "Effective Altruism for Animals" which talks more about protests, donations, and careers. But then goes on (in the subsequent, 'Eating Ethically', section):
All of the actions just mentioned are important things to do, but there is one more step we can take that underpins, makes consistent, and gives meaning to all our other activities on behalf of animals: We can take responsibility for our own lives, and make them as free of cruelty as we can. We can, as far as is reasonable and practical in our individual circumstances, stop buying and consuming meat and other animal products.
Which does sound very deontological! (Maybe partly strategic, if more people are likely to be willing to change their diet as a first step? But I also get the sense that he just thinks it's deeply unreasonable for many of us to eat [factory-farmed] meat. As an akratic omnivore myself, I kinda feel like he's... not wrong there.)
To be clear, I think veganism is good and worth advocating for, but I agree with you that I'd kind of expect the other (more?) "important things to do" to get comparatively more attention/priority, from a utilitarian perspective.
Discussing conscientious omnivorism, on p.191, he writes that he "remain[s] in doubt whether it is good to bring into existence beings who can be expected to live happy lives and whether this can justify killing them."
Two pages earlier, he explicitly notes a change in view:
In the first edition of this book, I rejected Leslie Stephen’s argument (that conscientious omnivorism is good for animals) on the grounds that it requires us to think that bringing a being into existence confers a benefit on that being—and to hold this, we must believe that it is possible to benefit a nonexistent being. This, I wrote, was nonsense; but now I am less sure that it is. After all, most of us would agree that conceiving a child who we know will have a genetic defect that would make their life painful and short would harm the child. Yet if we can harm a nonexistent child, surely we can also benefit a nonexistent child. To deny this, we would need to explain the asymmetry between the two cases, and that is not easy to do.
In terms of 'replaceability', note that even if continuing a (happy) life is good, it doesn't follow that it's better than killing with replacement. The replacement might be just as good, after all. To avoid that implication, you need something like individual-directed reasons to generate an asymmetry between killing and failing to create. (Though even then, it's hard to avoid the conclusion that short-lived happy lives are better than no lives at all.)
So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.
That's a non-sequitur. There's no inconsistency between holding a certain conclusion -- that "every decent person should share the basic goals or values underlying effective altruism" -- and "honestly engaging with criticisms". I do both. (Specifically, I engage with criticisms of EA principles; I'm very explicit that the paper is not concerned with criticisms of "EA" as an entity.)
I've since reworded the abstract, as the "every decent person" phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That's a view I hold, and I'm happy to defend it. You're trying to assert that my conclusion is illegitimate or "dishonest", prior to even considering my supporting reasons, and that's frankly absurd.
The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills -- higher maxima -- out there, but we do not know how to get there; any particular systemic change might just as easily make things worse.
Yes, and my "whole point" is to respond to this by observing that one's total evidence either supports the gamble of moving in a different direction, or it does not. You don't seem to have understood my argument, which is fine (I'm guessing you don't have much philosophy background), but it really should make you more cautious in your accusations.
Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
It's all about uncertainty -- that's what "in expectation" refers to. I'm certainly not attributing certainty to the proponent of systemic change -- that would indeed be a strawperson, but it's an egregious misreading to think that I'm making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)
the sentence "This claim is ... true" just really, really gets to me
Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn't mean that they're failing to engage honestly with those who disagree with them.
So the question is not "among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?"
Now this is a straw man! The view I defend there is rather that "we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings." Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.
The most common arguments I am aware of against billionaire philanthropists are...
Those aren't arguments against how EA principles apply to billionaires, so aren't relevant to my paper.
So that is what I mean by "arguing against strawpeople"
You didn't accurately identify any misrepresentations or fallacies in my paper. It's just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.
But they never even try to argue that EA support for "the very social structures that cause suffering" does more harm than good. As indicated by the "thereby", they seem to take the mere fact of complicity to suffice for "undermining its efforts to 'do the most good'."
I agree that they're talking about the way that EA principles are "actualized". They're empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique. I'm pointing out that this fact doesn't suffice. They need to further show that the complicity does more harm than good.
Can you clarify why you think it's "incorrect" to conceive of disease burden as ongoing, or applying per unit of time, and more accurate to treat it as a per-life constant?
One objection to the "per-life constant" approach is that it could easily (and incorrectly) imply that some short-lived but happy disabled lives are net-negative for the people living them. (Suppose the constant burden for deafness comes out to one year per lifetime, and then imagine a deaf child who lives happily for less than one year. So long as their short life was happy, it would seem inaccurate to call it net-negative! By contrast, the standard per-unit-of-time approach allows that happy deaf lives are always worth living, just not quite as good as they would have been without the mild disability.)
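To make the contrast concrete, here's a toy calculation. (The numbers are purely hypothetical illustrations of the two modeling choices, not real burden-of-disease weights.)

```python
# Toy comparison of two ways to model disability burden.
# All numeric values here are hypothetical, chosen only to illustrate
# the structural difference between the two approaches.

PER_LIFE_CONSTANT = 1.0  # hypothetical: a fixed "cost" of 1 year per lifetime
PER_YEAR_WEIGHT = 0.02   # hypothetical: each year's value reduced by 2%

def per_life_value(years_lived: float, base_quality: float = 1.0) -> float:
    """Life value when the burden is a per-life constant deduction."""
    return years_lived * base_quality - PER_LIFE_CONSTANT

def per_year_value(years_lived: float, base_quality: float = 1.0) -> float:
    """Life value when the burden applies per unit of time lived."""
    return years_lived * (base_quality - PER_YEAR_WEIGHT)

# A happy child who lives half a year:
print(per_life_value(0.5))  # 0.5 - 1.0 = -0.5 -> counted as net-negative
print(per_year_value(0.5))  # 0.5 * 0.98 = 0.49 -> positive, just slightly reduced
```

Under the per-life constant, any happy life shorter than the constant gets scored net-negative, which is the implausible implication flagged above; the per-unit-of-time model keeps every happy year positive, just slightly discounted.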