Based on the quoted material, I understand Ne'eman to be objecting to utilitarianism's tendency to deprivilege the positive impacts we can have on our local environment - indeed, the positive version of this is one of the things I most admire about the EA community.  The relevant, annoying quirk of communication is that when we say "X is a really great intervention for improving welfare", people who don't already buy into the utilitarian framework realise - correctly - that this implies Y is worse than X.  Often this then gets garbled from "Y is worse than X" into "Y is bad and X is good" (as opposed to "Y is good and X is even better").

The 'steelman' version of the case, as I understand it, is that being a committed EA / utilitarian (terms which get treated as synonymous unless people are being unusually careful, and the conflation is accurate enough here anyway) means you'll often have to trade off between doing 'unintuitive good in large quantities' and doing 'more intuitive good in smaller amounts'.  People placing more credence in utilitarianism might accept the greater increase in U as self-evidently preferable to the loss of intuitiveness, but many others will judge on the basis of their moral intuitions and come to the opposite conclusion.  It's not far-fetched or clearly 'evil' to hold a moral philosophy which takes doing 'intuitive good' to be a duty and maximising U to be supererogatory - great if you want to, but you're not 'bad' for doing otherwise.

I think that here, in particular, you are strawmanning:
“How does effective altruism deal with the fact that the issues I passionately advocate for are self-evidently worse ways to spend money than the issues I don’t advocate for, so self-evidently that I don’t even have to back up my claims in any way?”
You're also begging the question by using the word 'worse': you've already assumed that U(X) > U(Y) implies X > Y, whereas Ne'eman (on my interpretation) is saying that his internal moral compass 'knows' Y > X.  That is frankly quite difficult to argue against if he isn't concerned about potentially being inconsistent - and many people aren't!  Remember that rationality is instrumentally useful because it protects us from Dutch-book attacks and helps us maximise utility functions effectively, but if you don't have a utility function, then a bit of irrationality might be a reasonable price to pay to satisfy your moral impulses.
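
To make the question-begging concrete, here's a minimal formal sketch of the two orderings in play (my own framing, not anything stated in the post or by Ne'eman):

$$
\text{Utilitarian ordering:}\qquad X \succeq Y \iff U(X) \ge U(Y)
$$

$$
\text{Intuition-led ordering:}\qquad Y \succ X \text{ is permitted even when } U(X) > U(Y)
$$

Calling Y 'worse' presupposes the first ordering; Ne'eman, as I read him, is reasoning from the second, so the word does no argumentative work against him.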

It's hard to say, and I would love to know whether any work has been done on this front, but my hypothesis is that, at the meta-level, people would buy into the EA mission more willingly if it were presented as entirely supererogatory (and then maybe they would stick around long enough to realise that this stipulation might not be necessary).  The alternative is to go 'all-or-nothing' and demand that people submit to the 'tyranny of the QALY'.

 

Tl;dr: not being a utilitarian looks inconsistent to a utilitarian, but an ethical belief isn't ipso facto stupid just because it is inconsistent with utilitarianism. I'd think that taking Ne'eman's implicit concern seriously looks something like portraying utility maximisation as supererogatory (maybe even if you totally disagree and think it's 100% required?).

(Sorry if this was ranty - unfortunately that's my default writing style - I really did like the post :) )