Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/
Definitely interested to hear your substantive views when you have time! (All views are risky. I'm just honestly reporting my current opinion, based on what I've read to date. Happy to update after hearing more, though.)
Thanks for sharing! FYI, I've written up a summary of the main themes of the paper here.
(And seconding Jordan's request for "an article of a similar style arguing against EA principles". My suspicion is that none can exist because there's no reasonable way to make such an argument; insinuation and "political" critique is all that the critics have got. But I'd love to be proven wrong!)
Fair point, thanks!
But isn't the relevant harm here animal suffering rather than animal death? It would seem pretty awful to prefer that an animal suffer torturous agony rather than a human suffer a mild (1000x less bad) papercut.
You elided the explanation of the difference, which is psychological rather than metaphysical (just like the difference between failing to donate more to charity vs failing to save a child drowning right before your eyes).
The metaphysical commonality explains why both are very unjustified. The psychological difference explains why one, but not the other, warrants especially significant guilt / blame.
They're not "objections", because you've misunderstood your target. EA is perfectly compatible with judging that it's better to give later. That's an open empirical question. But yes, lots has been written on it. See, e.g., Julia Wise's Giving now vs later: a summary (and the many links contained therein).
The core issue here is that you're failing to distinguish intrinsic and instrumental value. The standard view is that all lives have equal intrinsic value. But obviously they can differ in instrumental value.
For further explanation, see this comment, along with the utilitarianism.net page on instrumental favoritism.
Not paying what you owe is a form of theft. Should one try to steal from the federal government in order to "redirect" the money to effective charities? Like the idea of "stealing to give" more generally, it seems like one of those questions that could be fun to ponder in a philosophy seminar room (we can surely imagine some thought experiments in which this would seem justified), but that seems like a terrible idea to encourage in practice.
In particular, I think the following key passage implicitly places the burden of proof in the wrong place:
I don’t have a knock-down argument for why this critique is incorrect; I just find it too speculative and abstract to outweigh the more concrete, dollars-and-cents case in favor of [stealing to give].
I think the opposite. I can't give a knock-down argument for why the naive utilitarian case for stealing-to-give is incorrect in any given instance (other than the simple expectational result of averaging over the commonsense belief that most such norm-breaking is likely to prove counterproductive, and we shouldn't believe ourselves to be the exception without exceptionally strong evidence). But I think we should have a very strong prior against such uncooperative, anti-social norm-breaking.
If you want to maximise statistical power, allocate patients using Drop The Loser (DTL) or some similar method.
Can you briefly explain how DTL works?
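(For other readers: as I understand it, DTL is the urn design from Ivanova (2003). The urn holds one "immigration" ball plus one ball per treatment arm. Balls are drawn at random: a treatment ball assigns the next patient to that arm and is returned on success but discarded on failure, hence "drop the loser"; an immigration ball is returned along with one fresh ball of each arm, keeping the urn from emptying. Allocation thereby drifts toward better-performing arms with low variability. A minimal simulation sketch, on that understanding; the function name and parameters are illustrative, not from any particular package:)

```python
import random

def drop_the_loser(n_patients, success_probs, seed=None):
    """Simulate Drop-the-Loser (DTL) urn allocation (Ivanova, 2003).

    Draw a ball from the urn at random:
      - treatment ball: assign the next patient to that arm; on failure
        the ball is removed ("drop the loser"), on success it is returned;
      - immigration ball: return it and add one ball of each treatment
        arm, then draw again (no patient is assigned on that draw).
    Returns the number of patients allocated to each arm.
    """
    rng = random.Random(seed)
    k = len(success_probs)
    urn = {"imm": 1}          # one immigration ball
    for arm in range(k):
        urn[arm] = 1          # one ball per treatment arm
    counts = [0] * k
    for _ in range(n_patients):
        while True:
            total = sum(urn.values())  # always >= 1 (imm ball persists)
            pick = rng.randrange(total)
            for ball, num in urn.items():  # weighted draw of one ball
                if pick < num:
                    break
                pick -= num
            if ball == "imm":
                for arm in range(k):   # replenish: add one ball per arm
                    urn[arm] += 1
                continue               # redraw; no patient assigned
            counts[ball] += 1          # treat a patient on this arm
            if rng.random() >= success_probs[ball]:
                urn[ball] -= 1         # failure: drop the loser
            break
    return counts

# With a strong arm vs a weak arm, allocation should favour the former:
counts = drop_the_loser(200, [0.8, 0.4], seed=1)
```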
For research, at least, it probably depends on the nature of the problem: whether you can just "brute force" it with a sufficient amount of normal science, or if you need rare new insights (which are perhaps unlikely to occur for any given researcher, but are vastly more likely to be found by the very best).
Certainly within philosophy, I think quality trumps quantity by a mile. Median research has very little value. It's the rare breakthroughs that matter. Presumably funders think the same is true of, e.g., AI safety research.