Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
1604 · Joined Dec 2018

Bio

Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/

Comments
130

Yes, this strikes me as an important point. It's a bit like how ideologically-motivated hate crimes are (I think correctly) regarded as worse than comparable "intentional" (but non-ideologically-motivated) violence, perhaps in part because they raise the risk of systematic harms.

Many moral differences are innocuous, but some really aren't.  For an extreme example: the "true believer" Nazi is in some ways worse than the cowardly citizen who goes along with the regime out of fear and self-interest.  But that's very different from everyday "value disagreements" which tend to involve values that we recognize as (at least to some extent) worthy of respect, even if we judge them ultimately mistaken.

"Now, consider your situation: instead of sitting in FTX's bank account, that money finds itself in your account. It shouldn't have been transferred to you; FTX wasn't solvent when it made that transaction, it needed to keep all of its money to try to pay back its creditors."

Is that true of grants made back in Jan-Feb?  I read somewhere (Forbes, I think) that this was when the big grants to EA orgs were made, whereas the solvency issues seemingly didn't arise until after the Luna crash in May?

For a neater, hypothetical version of the question: consider some honest profits FTX made several years ago.  If still in their accounts now, they would need to be used to pay back the creditors.  But suppose instead that they immediately granted out those profits (several years ago), which seemed an intrinsically legit transaction given the circumstances at the time, and the recipient org for some reason hasn't gotten around to spending those funds (not sure exactly what that means, in accounting terms, since money is fungible and the org would surely have had some expenses during this time; but maybe it was earmarked for a specific purpose that hasn't yet eventuated).  Do you think the org is obligated to return the funds in this case?

Yes, agreed that what matters for EA's purposes is agreement on its most central practical norms, which should include norms of integrity, etc., and it's fine to have different underlying theories of what ultimately justifies these. (+ also fine, of course, to have empirical/applied disagreements about what we should end up prioritizing, etc., as a result.)

I'll look forward to hearing more of your thoughts on consequentialism & collective action problems at some future point!

Sounds reasonable!  Though if you can build in all the details of your specific individual situation, and are directed to do what's best in light of this, do you think this ends up being recognizably distinct from act consequentialism?

(Not that convergence is necessarily a problem. It can be a happy result that different theorists are "climbing the same mountain from different sides", to borrow Parfit's metaphor. But it would at least suggest that the Kantian spin is optional, and the basic view could be just as well characterized in act consequentialist terms.)

re: "Practical Kantianism", can it avoid the standard failure modes of universal generalization--i.e., recommending harmful acts simply because other acts of the same kind had positive value?

Here's an example:

An evil but trustworthy demon holds a referendum on whether to blow up the world. He explains there are two consequences:
(1) He will destroy the world unless at least 50 people vote against it, and
(2) He will torture one puppy for each person who votes against destroying the world.
You have x-ray vision, and as you enter the voting booth, you can see that there have already been the needed 50 votes against destroying the world. What should you do?

fwiw, I think marginal value is more relevant than average value or anything else like it -- it's just that we can often (not always) take average group value to be our best guide to the marginal value of our contributions (if there are no grounds for taking ourselves to be in an unrepresentative position).

For more on how consequentialists can deal with collective action/inefficacy worries, see my post, 'Five Fallacies of Collective Harm'.

(Apologies if the self-linking is annoying. I think the linked posts are helpful and relevant, but obviously feel free to downvote if you disagree!)

Great post!  On the tension between "maximization" and "common sense", it can be helpful to distinguish two aspects of utilitarianism that are highly psychologically separable:

(1) Acceptance of instrumental harm (i.e. rejection of deontic constraints against this); and

(2) Moral ambition / scope-sensitivity / beneficentrism / optimizing within the range of the permissible. (There may be subtle differences between these different characterizations, but they clearly form a common cluster.)

Both could be seen as contrasting with "common sense". But I think EA as a project has only ever been about the second.  And I don't think there's any essential connection between the two -- no reason why a commitment to the second should imply the first.

As generously noted by the OP [though I would encourage anyone interested in my views here to read my recent posts instead of the old one from my undergraduate days!], I've long argued that utilitarianism is nonetheless compatible with:

(1*) Being guided by commonsense deontic constraints, on heuristic grounds, and distrusting explicit calculations to the contrary (unless it would clearly be best for most people similarly subjectively situated to trust such calculations).

fwiw, my sense is that this is very much the mainstream view in the utilitarian tradition. Strikingly, those who deny that utilitarianism implies this are, overwhelmingly, non-utilitarians. (Of course, there are possible cases where utilitarianism will clearly advise instrumental harm, but the same is true of common-sense deontology; absolutism is very much not commonsensical.)

So when folks like Will affirm the need for EA to be guided by "commonsense moral norms", I take it they mean something like the specific disjunction of rejecting (1) or affirming (1*), rather than a wholehearted embrace of commonsense morality, including its lax rejection of (2).  But yeah, it could be helpful to come up with a concise way of expressing this more precise idea, rather than just relying on contextual understanding to fill that in!

"This leads some critics to a double standard of criticising those who explicitly prioritise issues over global health in favour of those who implicitly prioritise other issues over global health, despite the impacts on global health being the same."

I think this actually understates the case, in that even many EAs who explicitly "downgrade" global health to some extent still give it significant priority -- and donations -- in practice (likely more so than most critics/non-EAs).

I agree that calls to "throw out [all] attempts at doing moral calculus" are overreactions. The badness of fraud (etc.) is not a reason to reject longtermism, or earning to give, or other ambitious attempts to promote the good that are perfectly compatible with integrity and good citizenship.  It merely suggests that these good goals should be pursued with integrity, rather than fraudulently.

But I do think it would be a mistake for people to only act with integrity conditional on explicit calculations yielding that result on each occasion.  Rather, I think we really should throw out some attempts at doing moral calculus. (It's never been a part of mainstream EA that you should try to violate rights for the greater good, for example.  So I think the present discussions are really just making more explicit constraints on utilitarian reasoning that were really there all along.)

When folks wonder whether EA has the philosophical resources to condemn fraud etc., I think it's worth flagging three (mutually compatible) options:

(1) Some EAs accept (non-instrumental) deontic constraints, compatibly with a utilitarian-flavoured conception of beneficence that applies within these constraints.

(2) Others may at least give sufficient credence to deontological views that they end up approximating such acceptance when taking account of moral uncertainty (as you suggest).

(3) But in any case, we shouldn't believe that fraud and bad citizenship are effective means to promoting altruistic goals, so even pure utilitarianism can straightforwardly condemn such behaviour.

I don't think that (2) alone will necessarily do much, because I don't think we really ought to give much credence to deontology.  I give deontology negligible--much less than 1%--credence myself (for many reasons, including this argument).  Since my own uncertainty effectively only ranges over different variants of consequentialism, this uncertainty doesn't seem sufficient to make much practical difference in this respect.

So I'm personally inclined to put more weight on heuristics that firmly warn against fraud and dishonesty on instrumental grounds.  I'm not sure whether this is what you're dismissing as "barely-evidenced heuristics", as I think our common-sense grasp of social life and agential unreliability actually provides very strong grounds (albeit not explicit quantitative evidence) for believing that these particular heuristics are more reliable than first-pass calculations favouring fraud.  And the FTX disaster (at least if truly motivated by naive utilitarianism, which I agree is unclear) constitutes yet further evidence in support of this, and so ought to prompt rethinking from those who disagree on this point.

As you say, we can see that any recent fraudulent actions were not truly rational (or prudent) given utilitarian goals. But that's a point in support of explanation (3), not (2).

Not exactly -- though it is a good question!

The dichotomy merely suggests that failing to create a person does not harm or wrong that individual in the way that negatively affecting their interests (e.g. by killing them as a young adult) does.  Contraception isn't murder, and neither is abstinence.

But avoiding wrongs isn't all that matters.  We can additionally claim that there's always some (albeit weaker) reason to positively benefit possible future people by bringing them into a positive existence.  So there's some moral reason to have kids, for example, even though it doesn't wrong anyone to remain childless by choice.

And when you multiply those individually weak reasons by zillions, you can end up with overwhelmingly strong reasons to prevent human extinction, just as longtermists claim. (This reason is so strong it would plausibly be wrong to neglect or violate it, even though it does not wrong any particular individual. Just as the non-identity problem shows that one outcome can be worse than another without necessarily being worse for any particular individual.)
