Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog
10% Pledge #54 with GivingWhatWeCan.org
You're conflating "charity" and "charity evaluator". The whole point of independent evaluators is that other people can defer to their research. So yes, I think the answer is just "trust evaluators" (not "trust first-order charities"), the same way that someone wondering which supplements contain unsafe levels of lead should trust Consumer Reports.
If you refuse, a priori, to trust research done by independent evaluators until you've personally vetted them for yourself, then you have made yourself incapable of benefiting from their efforts. Maybe there are low-trust societies where that's necessary. But you're going to miss out on a lot if you actually live in a high-trust society and just refuse to believe it.
I'm sorry, but those are just excuses. Nobody requires claims to be "proven" beyond all possible doubt before making decisions that are plausibly (but not definitely) better for themselves (like going to college). People only demand such proof to get out of making decisions that are plausibly better for others.
Unless you're a conspiracy theorist, you should probably think it more likely than not that reputable independent evaluators like GiveWell are legit. And then a >50% chance of saving lives for something on the order of ~$5000 is plainly sufficient to justify so acting. (Assuming that saving a life with certainty for ~$10k would obviously be choice-worthy.)
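To make the arithmetic explicit (a rough sketch, treating the ">50%" as exactly 50% for illustration):

$$\text{expected cost per life saved} = \frac{\$5{,}000}{0.5} = \$10{,}000$$

which is just the certain-rescue price stipulated above as obviously choice-worthy - and any credence above 50% only lowers the expected cost.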
If one is unusually skeptical of life-saving interventions, the benefits of direct cash transfers (e.g. GiveDirectly) are basically undeniable. No "huge mental investment" or "leap of faith" required. (Unless by "leap of faith" you mean perfectly ordinary sorts of trust that go without saying in every other realm of life.)
I'm open to the possibility that what's all things considered best might take into account other kinds of values beyond traditionally welfarist ones (e.g. Nietzschean perfectionism). But standard sorts of agent-relative reasons like Wolf adverts to (reasons to want your life in particular to be more well-rounded) strike me as valid excuses rather than valid justifications. It isn't really a better decision to do the more selfish thing, IMO.
Your second paragraph is hard to answer because different people have different moral beliefs, and (as I suggest in the OP) laxer moral beliefs often stem from motivated reasoning. So the two may be intertwined. But obviously my hope is that greater clarity of moral knowledge may help us to do more good even with limited moral motivation.
See the Theories of Well-being chapter at utilitarianism.net for a detailed philosophical overview of this topic.
The simple case against hedonism is just that it is bizarrely restrictive: many of us have non-hedonistic ultimate desires about our own lives that seem perfectly reasonable. The burden is thus on the hedonist to establish that they know better than we do what is good for us - and, in particular, that our subjective feelings are the only things that could reasonably be taken to matter for our own sakes. That's an extremely (and I would say implausibly) restrictive claim.
Just sharing my 2024 Year in Review post from Good Thoughts. It summarizes a couple dozen posts in applied ethics and ethical theory (including issues relating to naive instrumentalism and what I call "non-ideal decision theory") that would likely be of interest to many forum readers. (Plus a few more specialist philosophy posts that may only appeal to a more niche audience.)
Fair enough - I think I agree with that. Something that I discuss a lot in my writing is that we clearly have strong moral reasons to do more good rather than less, but that an over-emphasis on 'obligation' and 'demands' can get in the way of people appreciating this. I think I'm basically channeling the same frustration that you have, but rather than denying that there is such a thing as 'supererogation', I would frame it as emphasizing that we obviously have really good reasons to do supererogatory things, and refusing to do so can even be a straightforward normative error. See, especially, What Permissibility Could Be, where I emphatically reject the "rationalist" conception of permissibility on which we have no more reason to do supererogatory acts than selfish ones.
I basically agree with Scott. You need to ask what it even means to call something 'obligatory'. Many utilitarians (from Sidgwick to Peter Singer) mean nothing more by it than what you have most reason to do. But that is not what anyone else means by the term, which (as J.S. Mill better recognized) has important connections to blameworthiness. So the question arises why you would think that anything less than perfection was automatically deserving of blame. You might just as well claim that anything better than maximal evil is thereby deserving of praise!
For related discussion, see my posts:
And for a systematic exploration of demandingness and its limits (published in a top academic journal), see:
I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints).
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder whether our disapproval of the present case reflects a kind of speciesism - either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would pose a PR problem.
OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient to be extra wary about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving.
(Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)
I agree with your first couple of paragraphs. That's why my initial reply referred to "reputable independent evaluators like GiveWell".
Conspiracy theorists do, of course, have their own distinct (and degenerate) "webs of trust", which is why I also flagged that possibility. But mainstream academic opinion (not to mention the opinion of the community that's most invested in getting these details right, i.e. effective altruists) regards GiveWell as highly reputable.
I didn't get the sense from John's comment that he understands reasonable social trust of this sort. He offered a false dichotomy between "thorough and methodical research" and "gut reactions", and suggested that "trust comes from... [personally] evaluat[ing] the service through normal use and consumption." I think this is deeply misleading. (Note, for example, that "normal use and consumption" gives you no indication of how much lead is in your turmeric, or of whether your medication risks birth defects if taken during pregnancy, etc. Social trust, esp. in reputable institutions, is absolutely ubiquitous in navigating the world.)