Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
What do you mean by "maximization"? I think it's important to distinguish between:
(1) Hegemonic maximization: the (humanly infeasible) idea that every decision in your life should aim to do the most impartial good possible.
(2) Maximizing within specific decision contexts: insofar as you're trying to allocate your charity budget (or altruistic efforts more generally), you should try to get the most bang for your buck.
As I understand it, EA aims to be maximizing in the second sense only. (Hence the norm around donating 10%, not some incredibly demanding standard.)
On the broader themes, much of what you're pointing to involves potential conflicts between ethics and self-interest, and I think it's pretty messed up to use the language of psychological "health" to justify a wanton disregard for ethics. Maybe it's partly a cultural clash, and when you say things like "All perspectives are valid," you really mean them in a non-literal sense?
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who - unlike me - are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to - what I tend to see as the higher priority - drawing attention to apt EA criticisms of ordinary moral thought and behavior and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
My claim is not "too strongly stated": it accurately states my view, which you haven't even shown to be incorrect (let alone "unfair" or not "defensible" -- both significantly higher bars to establish than merely being incorrect!).
It's always easier to make weaker claims, but that raises the risk of failing to make an important true claim that was worth making. Cf. epistemic cheems mindset.
Maybe I spoke too soon: it "seems unfair" to characterize Wenar's WIRED article as "discouraging life-saving aid"? (A comment that is immediately met with two agree votes!) The pathology lives on.
Thanks for the link. (I'd much rather people read that than Wenar's confused thoughts.)
Here's the bit I take to represent the "core issue":
If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different.
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that he's (i) conceptually confused about moral credit in a way that is deeply unreasonable, (ii) thinking about how to discredit EA, not how to optimize coordination, and (iii) simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I'm right about (i)-(iii), then I don't think it's accurate to characterize him as "in some reasonable way gesturing at the core issue."
Did you read my linked article on moral misdirection? Disavowing full-blown aid skepticism is compatible with discouraging life-saving aid, in the same way that someone who disavows xenophobia, but then spends all their time writing sensationalist screeds about immigrant crime and other "harms" caused by immigrants, is very obviously discouraging immigration, whatever else they might have said.
ETA: I just re-read the WIRED article. He's clearly discouraging people from donating to GiveWell's recommendations. This will predictably result in more people dying. I don't see how you can deny this. Do you really think that general audiences reading his WIRED article will be no less likely to donate to effective charities as a result?
Yeah, I don't particularly mind this letter (though I see a lot more value in the critiques from Nielsen, NunoSempere, and Benjamin Ross Hoffman). I'm largely reacting to your positive annotated comments about the WIRED piece.
That said, I really don't think Wenar is (even close to) "substantively correct" on his "share of the total" argument. The context is debating how much good EA-inspired donations have done. He seems to think the answer should be discounted by all the other (non-EA?) people involved in the causal chain, or that maybe only the final step should count (!?). That's silly. The relevant question is counterfactual. When co-ordinating with others, you might want to assess a collective counterfactual rather than an individual counterfactual, to avoid double-counting (I take it that something along these lines is your intended steelman?); but that seems pretty distant from Wenar's confused reasoning about the impact of philanthropic donations.
I mean, it's undeniable that the best thing is best. It's not like there's some (coherent) alternative view that denies this. So I take it the real question is how much pressure one should feel towards doing the impartial best (at the cost of significant self-sacrifice): whether the maximum should be viewed as the baseline for minimal acceptability, such that anything short of it constitutes failure, or whether we should rather aim to normalize something more modest and simply celebrate further good beyond that point as an extra bonus.
I can see pathologies in both directions here. I don't think it makes sense to treat perfection as the baseline, such that any realistic outcome automatically qualifies as failure. For anyone to think that way would seem quite confused. (Which is not to deny that it can happen.) But it would also seem a bit pathological to refuse to celebrate moral saints? Like, obviously there is something very impressive about moral heroism and extreme altruism that goes beyond what I personally would be willing to sacrifice for others? I think the crucial thing is just to frame it positively rather than negatively, and not get confused about where the baseline or zero-point properly lies.