Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
I basically agree with Scott. You need to ask what it even means to call something 'obligatory'. Many utilitarians (from Sidgwick to Peter Singer) mean nothing more by it than what you have most reason to do. But that is not what anyone else means by the term, which (as J.S. Mill better recognized) has important connections to blameworthiness. So the question then arises why you would think that anything less than perfection is automatically deserving of blame. You might just as well claim that anything better than maximal evil is thereby deserving of praise!
For related discussion, see my posts:
And for a systematic exploration of demandingness and its limits (published in a top academic journal), see:
I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints).
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?
OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient to be extra wary about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving.
(Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)
I basically agree with the core case for "animal welfare offsetting", and discuss some related ideas in Confessions of a Cheeseburger Ethicist. The main points of resistance I'd flag are just:
Or if any other kind of progress (including moral progress, some of which will come from future people) will eventually abolish factory-farming. I'd be utterly shocked if factory-farming is still a thing 1000+ years from now. But sure, it is a possibility, so you could discount the value of new lives by some modest amount to reflect this risk. I just don't think that will yield the result that marginal population increases are net-negative for the world in expectation.
In the long term, we will hopefully invent forms of delicious meat like cultured meat that do not involve sentient animal suffering... When that happens, pro-natalism might make more sense.
As Kevin Kuruc argues, progress comes from people (or productive person-years), not from the bare passage of time. So we should expect that some fixed number of productive person-years is required to solve this problem. On this first-pass model, removing person-years from the present doesn't reduce the number of animals harmed before a solution is found; it just makes the solution arrive later. So there simply is no meat-eater problem.
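The first-pass model can be sketched numerically. This is a toy illustration only: the constants below (`PERSON_YEARS_TO_SOLVE`, `HARM_PER_PERSON_YEAR`) are made-up placeholders, not empirical estimates. The point it shows is that if a solution arrives after a fixed stock of productive person-years, total animal harm before the solution is independent of population size; only the arrival date shifts.

```python
# Toy model of the person-years argument (illustrative numbers only).
# Assumptions: (1) a solution arrives once a fixed total of productive
# person-years has accumulated; (2) harm accrues at a fixed rate per
# person-year lived under the status quo.

PERSON_YEARS_TO_SOLVE = 1_000_000  # hypothetical R&D requirement
HARM_PER_PERSON_YEAR = 30          # hypothetical animals harmed per person-year

def total_harm_and_delay(population: int) -> tuple[int, float]:
    """Return (total harm before the solution, years until it arrives)."""
    years_until_solution = PERSON_YEARS_TO_SOLVE / population
    # Harm = rate per person-year x person-years elapsed before solution,
    # and the latter is fixed by assumption, so population cancels out.
    total_harm = HARM_PER_PERSON_YEAR * PERSON_YEARS_TO_SOLVE
    return total_harm, years_until_solution

big = total_harm_and_delay(population=100_000)
small = total_harm_and_delay(population=50_000)

assert big[0] == small[0]   # same total harm either way
assert small[1] > big[1]    # smaller population -> later solution
```

On these (stipulated) assumptions, shrinking the population does not reduce how many animals are harmed before the problem is solved; it only postpones the solution.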
One quick reason for thinking that academic philosophy norms should apply to the "institutional critique" is that it appears in works of academic philosophy. If people like Crary et al are just acting as private political actors, I guess they can say whatever they want on whatever flimsy basis they want. But insofar as they're writing philosophy papers (and books published by academic presses) arguing for the institutional critique as a serious objection to Effective Altruism, I'm claiming that they haven't done a competent job of arguing for their thesis.
Such a norm would make intellectual progress impossible. We'd just spend all day accusing each other of vague COIs. (E.g.: "Thorstad is a humanities professor, in a social environment that valorizes extreme Leftism and looks with suspicion upon anyone to the right of Bernie Sanders. In such a social environment, it would be very difficult for him to acknowledge the good that billionaire philanthropists do; he will face immense social pressure to instead reduce the status of billionaires and raise the status of left-wing activists, regardless of the objective merits of the respective groups. It's worth considering whether these social pressures may have something to do with the positions he ends up taking with regard to EA.")
There's a reason why philosophy usually has a norm of focusing on the first-order issues rather than these sorts of ad hominems.
I think you've misunderstood me. My complaint is not that these philosophers openly argue, "EAs are insufficiently Left, so be suspicious of them." (That's not what they say.) Rather, they presuppose Leftism's obviousness in a different way. They seem unaware that market liberals sincerely disagree with them about what's likely to have good results.
This leads them to engage in fallacious reasoning, like "EAs must be methodologically biased against systemic change, because why else would they not support anti-capitalist revolution?" I have literally never seen any proponent of the institutional critique acknowledge that some of us genuinely believe, for reasons, that anti-capitalist revolution is a bad idea. There is zero grappling with the possibility of disagreement about which "systemic changes" are good or bad. It's really bizarre. And I should stress that I'm not criticizing their politics here. I'm criticizing their reasoning. Their "evidence" of methodological bias is that we don't embrace their politics. That's terrible reasoning!
I don't think I'm methodologically biased against systemic change, and nothing I've read in these critiques gives me any reason to reconsider that judgment. It's weird to present as an "objection" something that gives one's target no reason to reconsider their view. That's not how philosophy normally works!
Now, you could develop some sort of argument about which claims are or are not "extraordinary", and whether the historical success of capitalism relative to anti-capitalism really makes no difference to what we should treat as "the default starting point." Those could be interesting arguments (if you anticipated and addressed the obvious objections)! I'm skeptical that they'd succeed, but I'd appreciate the intellectual engagement, and the possibility of learning something from it. Existing proponents of the institutional critique have not done any of that work (from what I've read to date). And they're philosophers -- it's their job to make reasoned arguments that engage with the perspectives of those they disagree with.
How does writing a substantive post on x-risk give Thorstad a free pass to cast aspersions when he turns to discussing politics or economics?
I'm criticizing specific content here. I don't know who you are or what your grievances are, and I'd ask you not to project them onto my specific criticisms of Thorstad and Crary et al.
Thorstad acknowledged that many of us have engaged in depth with the critique he references, but instead of treating our responses as worth considering, he suggests it is "worth considering if the social and financial position of effective altruists might have something to do with" the conclusions we reach.
It is hardly "mud-slinging" for me to find this slimy dismissal objectionable. Nor is it mud-slinging to point out ways in which Crary et al (cited approvingly by Thorstad) are clearly being unprincipled in their appeals to "systemic change". This is specific, textually-grounded criticism of specific actors, none of whom are you.
Fair enough - I think I agree with that. Something that I discuss a lot in my writing is that we clearly have strong moral reasons to do more good rather than less, but that an over-emphasis on 'obligation' and 'demands' can get in the way of people appreciating this. I think I'm basically channeling the same frustration that you have, but rather than denying that there is such a thing as 'supererogation', I would frame it as emphasizing that we obviously have really good reasons to do supererogatory things, and refusing to do so can even be a straightforward normative error. See, especially, What Permissibility Could Be, where I emphatically reject the "rationalist" conception of permissibility on which we have no more reason to do supererogatory acts than selfish ones.