
There’s one respect in which philosophical training seems to make (many) philosophers worse at practical ethics. Too many are tempted to treat tidy thought experiments as a model for messy real-world ethical quandaries.

We’re used to thinking about scenarios where all the details and consequences are stipulated, so that we can better uncover our theoretical commitments about what matters in principle. I’ve previously flagged that this can be misleading: our intuitions about real-world situations may draw upon implicit knowledge of what those situations are like, and this implicit knowledge (when contrary to the explicit stipulations of the scenario) may distort our theoretical verdicts. But it’s even worse when the error goes the other way, and verdicts that only make sense given theoretical stipulations get exported into real-life situations where the stipulations do not hold. This can badly distort our understanding of how people should actually behave.

Our undergraduate students often protest the silly stipulations we build into our scenarios: “Why can’t we rescue everyone from the tracks without killing anyone?” It’s a good instinct! Alas, to properly engage with thought experiments, we have to abide by the stipulations. We learn (and train our students) to take moral trade-offs at face value, ignore likely downstream effects, and not question the apparent pay-offs for acting in dastardly ways. This self-imposed simple-mindedness is a crucial skill for ethical theorizing. But it can be absolutely devastating to our practical judgment, if we fail to carefully distinguish ethical theory and practice.

Moral distortion from high stakes

A striking example of such philosophy-induced distortion comes from our theoretical understanding that sufficiently high stakes can justify overriding other values. This is a central implication of “moderate deontology”: it’s wrong to kill one as a means to save five, but obviously you should kill one innocent person if that’s a necessary means to saving the entire world.

Now, crucially, in real life that is not actually a choice situation in which you could ever find yourself. The thought experiment comes with stipulated certainty; real life doesn’t. So, much practical moral know-how comes down to having good judgment, including about how to manage your own biases so that you don’t mistakenly take yourself to have fantastically strong reasons to do something that’s actually disastrously counterproductive. This is why utilitarians talk a lot about respecting generally-reliable rules rather than naively taking expected value (EV) calculations at face value. Taking our fallibility seriously is crucial for actually doing good in the world.

Higher stakes make it all the more important to choose the consequentially better option. But they don’t inherently make it more likely that a disreputable-seeming action is consequentially better. If “stealing to give” is a negative-EV strategy for ordinary charities, my default assumption is that it’s negative-EV for longtermist causes too.[1] There are conceivable scenarios where that isn’t so; but some positive argument is needed for thinking that any given real-life situation (like SBF’s) takes this inverted form. Raising the stakes doesn’t automatically flip the valence.
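To make that last point concrete, here is a minimal sketch of the underlying expected-value comparison. The symbols p, G, L, and k are illustrative placeholders of my own (success probability, gain if the scheme works, loss if it backfires, and a stakes-scaling factor), not figures from any actual analysis:

\[
\mathrm{EV} \;=\; p\,G - (1-p)\,L \;<\; 0
\quad\Longrightarrow\quad
k\,\bigl(p\,G - (1-p)\,L\bigr) \;<\; 0 \quad \text{for any } k > 0.
\]

If raising the stakes simply scales both the potential upside and the potential downside by the same factor k, the sign of the calculation is unchanged; only its magnitude grows, which is why (per footnote [1]) the theft becomes more stringently wrong rather than newly permissible. What the argument would need is a positive case for reversing the first inequality, not just a bigger k.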

Many philosophers don’t seem to understand this. Seth Lazar, for example, gave clear voice to (what we might call) academic philosophy’s high stakes distortion when he was interviewed on Hi-Phi Nation last year.[2] Lazar claimed that it’s “intellectually inconsistent” to simultaneously hold that (i) there are astronomical stakes to longtermism and x-risk reduction, and yet (ii) it’s also really important that you act with integrity.

This is a bizarre assertion. There’s a very obvious way to reconcile these two claims. Namely, hold that the value of integrity is (at least in part) instrumental, and helps us to more reliably achieve longtermist goals (which, after all, vitally depend upon the support and co-operation of others in society).

We can have our astronomical cake and eat it too.

My personal view is that SBF’s fraud was clearly negative in expectation, purely considering the effects on (money and talent committed to) longtermist causes.[3] If Lazar disagrees, and really thinks that fraud is an effective long-term strategy for charitable fundraising, he should explicitly argue for that.[4] (I think that view is—to borrow his term—“bananas”.) But what I find shocking here is that Lazar doesn’t seem to realize that his assumed positive instrumental evaluation of fraud is even open to question.[5]

The problem is that he’s treating SBF’s fraud as a thought experiment, and our disciplinary training teaches us not to question the presumed payoffs in thought experiments. But in real life, this is a mistake. We should question the presumed payoffs. “Does fraud really pay?” is a question our undergrads are wise enough to ask. We can temporarily bracket the question while doing ethical theory. But we should not forget about it entirely, especially if we hope to later bring our ethical theories to bear on real-life problems. It really matters what one ought to believe about the likely consequences; an agent’s assumptions about their situation (especially if both dangerous and self-serving) are not above question.

The Trolley Problem is No Guide to Practical Ethics

Given the above diagnosis, I wasn’t thrilled to see Kieran Setiya’s Guardian article explicitly relating “Bankman-Fried’s quandary” to the trolley problem. The article does a nice job of explaining Judy Thomson’s theoretical arguments,[6] but I think badly misrepresents the ethical situation of someone tempted by fraud. Like Lazar, Setiya largely takes for granted the instrumental efficacy of stealing to give, in order to better focus on the interesting philosophical question of “whether it’s OK to cause harm for the greater good.” I agree that the latter question is more philosophically interesting. But that doesn’t make it practically important. For practical purposes, the question to ask is whether fraud pays.[7]

In a key passage, Setiya writes:

For decades, ethicists, including Thomson, struggled to reconcile our contrasting judgments when it comes to flipping the switch versus pushing the bystander or murdering the patient: in each case, we take one life to save five. If we can’t identify a meaningful moral difference, we ought to conclude that, since it’s OK to flip the switch, it’s OK to push the bystander or kill the patient after all. This conclusion leads inexorably to a more utilitarian moral view, in which it’s all right to cause harm in the service of the greater good. And it permits a moral defence of Bankman-Fried. He may have miscalculated harms and benefits, risks and rewards, but there was a respectable philosophical argument on his side.

I think it’s very important to appreciate that the passage’s concluding claims (that Bankman-Fried had “a moral defence” and “a respectable philosophical argument” on his side) are not true. Irrationally believing that X obtains, given that there are respectable philosophical grounds for taking X to be a right-making feature, does not entail that you have “a moral defence” or “a respectable philosophical argument” on your side. It really matters whether or not it’s reasonable for you to believe X! (Compare terrorists, religious inquisitors, etc. Many morally indefensible actions are such that, on any non-absolutist view, they would be justified if the agent’s descriptive assumptions were accurate. Acting on unjustified empirical beliefs is a very important form of practical moral error!)

After offering a very flimsy argument for deontology, Setiya’s article concludes that “We should not allow some future Bankman-Fried to justify his actions by appealing to the greater good.” This is true enough, but the focus on ethical theory badly misdiagnoses the practical moral problem. Unless you’re an absolutist, you are committed to thinking that some conceivable frauds could be objectively justified if truly necessary to save the planet. So this near-universal belief (that a sufficiently greater good can serve as a right-making feature) can’t be the problem. Rather, I think, the problem is that people aren’t reliable at applying moral theories in practice. Motivated reasoning can easily lead them to (irrationally but conveniently) believe that some genuine right-making feature obtains, when in actual fact it doesn’t. This is an entirely predictable feature of human nature. So they ought to be more cautious about making this mistake.

The problem of motivated reasoning is in some ways much deeper than the trolley problem. A priori philosophy is insufficient on its own to address it. (Psychology and other social sciences may help.) But as a crucial first step, we can at least highlight that naive instrumentalism is a poor theory of instrumental rationality for human-sized minds, and refrain from calling naive instrumentalist reasoning “morally defensible” when it isn’t.

 

  1. ^

    The higher stakes then mean that “stealing to give” is more stringently wrong in the longtermist case, as it’s all the more important not to undermine those causes.

  2. ^

    See p.16 of the transcript.

  3. ^

    As Rob Wiblin explains (in his recent podcast interview with Robert Wright, around the 21 minute mark): longtermism had been in pretty good shape (in terms of its financial and intellectual support trajectory) before SBF/FTX money entered the scene. It would have been completely fine if Alameda had simply gone bankrupt—with no fraudulent gambles to attempt to prop it up. Even if one completely ignores the expected losses to innocent FTX customers, the negative reputational effects of SBF’s fraud were so obviously damaging for the causes he supposedly supported—seriously undermining the whole endeavor by deterring other funders and intellectuals from associating with those causes and projects—that it’s hard to see how any reasonable analysis could view the fraud as positive in expectation (even just for those causes).

    In discussing why I think SBF’s fraud was negative in expectation, i.e. bad strategy, I don’t mean to take any stand on the psychological question of whether SBF himself intended it strategically. Discussing the latter question, Will MacAskill points out—drawing on Eugene Soltes’ Why They Do It: Inside the Mind of the White-Collar Criminal—“People often don’t self-conceptualize what they’re doing [as] fraud. It’s often mindless; they’re not even really thinking about what they’re doing. They’re also just getting the wrong kind of feedback mechanisms, where they’re maybe doing something that’s quite close to the line and getting lots of positive feedback, and don’t have an environment of negative feedback, where they’re getting punished for that.”

  4. ^

    And perhaps start hoping that more people take up stealing-to-give! Note that even if their acts are agent-relatively wrong, it’s unclear why that’s any concern to us well-motivated bystanders, if the wrong-doing really is consequentially better for the world as a whole.

    To be clear: I don’t expect it to be consequentially better. What I don’t understand is why anyone with the opposite expectation would continue to oppose such acts. Don’t you want the world to be a better place? (What kind of weird ethics-fetishist are you?)

  5. ^

    Lazar continues: “You can’t, on the one hand, say that these 10^48 future people, their existence matters. And it depends on what we do. But then not say, it’s going to have a massive impact on what you’re actually permitted to do.”

    My response: obviously we should do what will actually best help secure a positive long-term future. I don’t think any longtermists are saying that we should stick to commonsense morality despite this threatening their longtermist goals. Rather, they (should) think that taking all the higher-order consequences into account, SBF-style recklessness is the graver threat to their goals.

  6. ^

    Though the article neglects the obvious objection: if our unwillingness to endure being caused harm is an argument against causing harm, why isn’t our unwillingness to voluntarily endure harm by omission equally an argument against allowing harm? To illustrate, consider a trolley variant where you are among the five on the track, and so would insist on pulling the switch if given the chance. Now ask: If you wouldn’t allow your death (among others) merely in order to avoid killing one other, how (in the standard trolley switch case) can you justify the decision to allow others’ deaths, merely to avoid killing one other?

  7. ^

    More specifically, (i) what are the possible downsides (including to the very causes that one is hoping to benefit)? And (ii) how confidently can one trust one’s own judgment in a case where one has a strong personal incentive to commit the fraud? (E.g., if one would be similarly personally embarrassed by either criminal or non-criminal bankruptcy.)

Comments

I think part of what's driving the feeling that there is something bad for consequentialism here is something like the following. People don't think they *actually* value integrity (just) because it has good consequences, but rather think they assign (some) intrinsic value to integrity. And then they find it a suspicious coincidence if a view which assigns *zero* *intrinsic* value to integrity is being claimed to get the same results as their own intuitions about all cases we actually care about, given that (they think) their intuitions are being driven partly by the fact that they value integrity intrinsically. Obviously, this doesn't show that in any particular case the claim that acting without integrity doesn't really maximize utility is wrong. But I think it contributes to people's sense that utilitarians are trying to cheat here somehow.

Interesting diagnosis! But unless they're absolutists, shouldn't they be equally suspicious of themselves? That is, nobody (but Kant) thinks the intrinsic value of integrity is so high that you should never tell a lie even if the entire future of humanity depended on it. So I don't really see how they could think that the intrinsic value of integrity makes any practical difference to what a longtermist really ought to do.

(Incidentally, I think this is also a reason to be a bit suspicious of Will MacAskill's appeals to "normative uncertainty" in these contexts. Every reasonable view converges with utilitarian verdicts when the stakes are high.)

I'm inclined to agree with both those claims, yes.

Preference utilitarianism is perfectly compatible with people preferring to have integrity and preferring others to behave with integrity.

The problem of motivated reasoning is in some ways much deeper than the trolley problem.

The motivation behind motivated reasoning is often to make ourselves look good (in order to gain status/power/prestige). Much of the problem seems to come from not consciously acknowledging this motivation, and therefore not being able to apply system 2 to check for errors in the subconscious optimization.

My approach has been to acknowledge that wanting to make myself look good may be a part of my real or normative values (something like what I would conclude my values are after solving all of philosophy). Since I can't rule that out for now (and also because it's instrumentally useful), I think I should treat it as part of my "interim values", and consciously optimize for it along with my other "interim values". Then if I'm tempted to do something to look good, at a cost to my other values or perhaps counterproductive on its own terms, I'm more likely to ask myself "Do I really want to do this?"

BTW I'm curious what courses you teach, and whether / how much you tell your students about motivated reasoning or subconscious status motivations when discussing ethics.

Excellent article — and even better title.

Executive summary: Philosophers' training in tidy thought experiments can distort their practical moral judgment and lead them to mistakenly believe that sufficiently high stakes can justify unethical actions in the real world, as exemplified by some philosophers' defenses of SBF's fraud.

Key points:

  1. Philosophers are trained to accept stipulations in thought experiments, but this can lead to distorted practical judgments if theoretical verdicts are misapplied to real-world situations.
  2. High stakes make choosing the consequentially better option more important, but do not inherently make disreputable actions more likely to be consequentially better.
  3. SBF's fraud was likely negative in expectation for longtermist causes, considering reputational damage and deterrence of future funders and supporters.
  4. Irrationally believing an action is justified based on respectable philosophical grounds does not provide an actual moral defense if the empirical beliefs are unreasonable.
  5. The deeper problem is motivated reasoning leading people to conveniently believe right-making features obtain when they actually don't, so more caution is needed.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
