
Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that’s always a conceptual possibility, I don’t think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I’ll now explain.

Adjusting for Bias

Imagine an archer, trying to hit a target on a windy day. A naïve archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously the way to hit the target, though to a naïve observer it might look like she was aiming awry.

Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It’s conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that’s a much weirder case than what we’re talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one’s aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether.

And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn’t mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner.

Metacoherence prohibits naïve utilitarianism

“But doesn’t utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There’s nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments.

This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here’s a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure:

  1. Shortage of time and energy will in general preclude such calculations.
  2. Even if time and energy are available, the relevant information commonly is not.
  3. An agent's judgment on particular issues is likely to be distorted by his own interests and special affections.
  4. Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect.
  5. Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them.
  6. And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is no real chance that actions will even approximate to them.

For all these reasons and more (e.g. the risk of reputational harm to utilitarian ethics),1 violating people's rights is practically guaranteed to have negative expected value. You should expect that most people who believe themselves to be the rare exception are mistaken in this belief. First-pass calculations that call for rights violations are thus known to be typically erroneous. Generally-beneficial rules are “generally beneficial” for a reason. Knowing this, it would be egregiously irrational to violate rights (or other generally-beneficial rules) on the basis of unreliable rough calculations suggesting that doing so has positive “expected value”. Unreliable calculations don’t reveal the true expected value of an action. Once you take into account the known unreliability of such crude calculations, and the far greater reliability of the opposing rule, the only reasonable conclusion is that the all-things-considered “expected value” of violating the rule is in fact extremely negative.
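To make the adjustment concrete, here is a deliberately toy sketch. The numbers are illustrative assumptions only (nothing in this post or its sources supplies them); the point is just how quickly higher-order unreliability swamps a crude “+EV” verdict:

```python
# Toy model of bias-adjusted expected value. All numbers are illustrative
# assumptions, chosen only to show the shape of the adjustment.

p_calc_correct = 0.05    # assume crude "+EV" verdicts for rights violations are rarely right
value_if_right = 100     # payoff if the crude calculation happens to be correct
value_if_wrong = -1000   # harm if it is wrong: victims, eroded trust, reputational damage

naive_ev = value_if_right   # what the unadjusted first-pass calculation reports
adjusted_ev = p_calc_correct * value_if_right + (1 - p_calc_correct) * value_if_wrong

print(naive_ev)      # 100
print(adjusted_ev)   # roughly -945: sharply negative once unreliability is priced in
```

With numbers anything like these, refraining (roughly zero by comparison) beats violating the rule; the crude calculation only looked good because it ignored its own error rate.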

Indeed, as I argued way back in my PhD dissertation, this is typically so clear-cut that it generally shouldn’t even occur to prudent utilitarians to violate rights in pursuit of some nebulous “greater good”—any more than it occurs to a prudent driver that they could swerve into oncoming traffic. In this way, utilitarianism can even accommodate the thought that egregious violations should typically be unthinkable. (Of course one can imagine hypothetical exceptions—ticking time bomb scenarios, and such—but utilitarianism is no different from moderate deontology in that respect. I don’t take such wild hypotheticals to be relevant to real-life practical ethics.)

Prudent Utilitarians are Trustworthy

In light of all this, I think (prudent, rational) utilitarians will be much more trustworthy than is typically assumed. It’s easy to see how one might worry about being around naïve utilitarians—who knows what crazy things might seem positive-EV to them in any fleeting moment? But prudent utilitarians abide by the same co-operative norms as everyone else (just with heightened beneficence and related virtues), as Stefan Schubert & Lucius Caviola explain in ‘Virtues for Real-World Utilitarians’:

While it may seem that utilitarians should engage in norm-breaking instrumental harm, a closer analysis reveals that it often carries large costs. It would lead to people taking precautions to safeguard against these kinds of harms, which would be costly for society. And it could harm utilitarians’ reputation, which in turn could impair their ability to do good. In light of such considerations, many utilitarians have argued that it is better to respect common sense norms. Utilitarians should adopt ordinary virtues like honesty, trustworthiness, and kindness. There is a convergence with common sense morality… [except that] Utilitarians can massively increase their impact through cultivating some key virtues that are not sufficiently emphasized by common sense morality…

This isn’t Rule Utilitarianism

I’ve argued that prudent utilitarians will follow reliable rules as a means to performing better actions—doing more good—than they would through naively following unreliable, first-pass calculations. When higher-order evidence is taken into account, prudent actions are the ones that actually maximize expected value. It’s a straightforwardly act-utilitarian view. Like the master archer, the prudent utilitarian’s target hasn’t changed from that of their naïve counterpart. They’re just pursuing the goal more competently, taking naïve unreliability into account, and making the necessary adjustments for greater accuracy in light of known biases.

There are a range of possible alternatives to naïve utilitarianism that aren’t always clearly distinguished. Here’s how I break them down:

(1) Prudent (“multi-level”) utilitarian: endorses act-utilitarianism in theory, motivated by utilitarian goals, takes into account higher-order evidence of unreliability and bias, and so uses good rules as a means to more reliably maximize (true) expected value.

(2) Railton’s “sophisticated” utilitarian: endorses act-utilitarianism in theory, but has whatever (potentially non-utilitarian) motivations and reasoning they expect to be for the best.

(3) Self-effacing utilitarian: Ex-utilitarian, gave up the view on the grounds that doing so would be for the best.

(4) Rule utilitarian: not really consequentialist; moral goal is not to do good, but just to act in conformity with rules that would do good in some specified — possibly distant — possible world. (Subject to serious objections.)


1 As we stress on utilitarianism.net [fn 2]: “This reputational harm is far from trivial. Each individual who is committed to (competently) acting on utilitarianism could be expected to save many lives. So to do things that risk deterring many others in society (at a population-wide level) from following utilitarian ethics is to risk immense harm.”

Comments (42)

I wish you would engage more with other philosophers who speak about utilitarianism, especially since (reading the comments on this thread) you appear to be taken as having some kind of authority on the topic within the EA community even though other prominent philosophers disagree with your takes. 

Chris Bertram posted this today, for example. Here are two quotes from the post:

"Don’t get me wrong utilitarianism is a beautiful, systematic theory, a lovely tool to help navigate acting in the world in a consistent and transparent matter. When used prudently it’s a good way to keep track of one’s assumptions and the relationship between means and ends. But like all tools it has limitations. And my claim is that the tradition refuses to do systematic post-mortems on when the tool is implicated in moral and political debacles. Yes, somewhat ironically, the effective altruism community (in which there is plenty to admire) tried to address this in terms of, I  think, project failure. But that falls short in willing to learn when utilitarianism is likely to make one a danger to innocent others."

"By framing the problem as Mr. Bankman-Fried’s “integrity” and not the underlying tool, MacAskill will undoubtedly manage to learn no serious lesson at all. I am not implicating utilitarianism in the apparent ponzi scheme. But Bankman-Fried’s own description back in April of his what he was up to should have set off alarm bells among those who associated with him–commentators noticed it bore a clear resemblance to a Ponzi.+ (By CrookedTimber standards I am a friend of markets.) Of course, and I say this especially to my friends who are utilitarians; I have not just discussed a problem only within utilitarianism; philosophy as a professional discipline always assumes its own clean hands, or finds ways to sanitize the existing dirt."



Finally, you should perhaps consider holding yourself to a higher standard, as a philosophy professor, than to straw-man people who are genuinely trying to engage with you philosophically, as you did here: 

"I don't think we should be dishonest.  Given the strong case for utilitarianism in theory, I think it's important to be clear that it doesn't justify criminal or other crazy reckless behaviour in practice.  Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point."





 

Sorry, how is that a straw man?  I meant that comment perfectly sincerely.  Publius raised the worry that "such distinctions are too complex for a not insignificant proportion of the public", and my response simply explained that I expect this isn't true of those who would be reading my post.  I honestly have no idea what "standard" you think this violates.  I think you must be understanding the exchange very differently from how I understood it.  Can you explain your perspective further?

re: Bertram: thanks for the pointer, I hadn't seen his post.  Will need to find time to read it. From the quotes you've given, it looks like we may be discussing different topics.  I'm addressing what is actually justified by utilitarian theory.  He's talking about ways in which the tools  might be misused.  It isn't immediately obvious that we necessarily disagree. (Everything I say about "naive utilitarianism" is, in effect, to stress ways that the theory, if misunderstood, could be misused.)

I would genuinely appreciate an explanation for the downvotes.  There's evidently been some miscommunication here, and I'm not sure what it is.

[anonymous] · 1y

I didn't downvote, but I'd guess that Lauren and perhaps others understood your "Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point." to mean something like "If you, dear interlocutor, were sophisticated enough then you'd grasp my point."

(Not confident in this though, as this interpretation reads to me like an insult rather than a straw man.)

Huh, okay, thanks. fwiw, I definitely did not intend any such subtext. (For one thing, Publius did not themselves deny the relevant distinction, but merely worried that some other ppl in the general population would struggle to follow it.  I was explicitly expressing my confidence in the sophistication of all involved in this discussion.)

I agree with the distinction between naive and prudent utilitarianism, but I also think it all breaks down when you factor in the infinite expected value of mitigating extinction risks (assuming a potential techno-utopian future, as some longtermists do) and the risk of AGI-driven extinction in our lifetimes.

I’m pretty sure lots of pure prudent utilitarians would still endorse stealing money to fund AI safety research, especially as we get closer in time to the emergence of AGI.

(https://forum.effectivealtruism.org/posts/2wdanfCRFbNmWebmy/ea-should-consider-explicitly-rejecting-pure-classical-total)

[anonymous] · 1y

A lot of people are disagreeing with you. But what you are saying is just uncontroversially correct, as anyone who studies philosophy outside of EA would tell you.

To those disagree-downvoting: I would like to suggest the possibility that the problem, here, is with utilitarianism. Not with anything freedomandutility said. Because what they said is literally just true as a matter of math. If you don't like that, then you don't like utilitarianism.

Are you assuming that "stealing money" wouldn't (or couldn't possibly?) prove counterproductive to the cause of AI safety research and funding?  Because I'm pretty sure there's no mathematical theorem that rules out the possibility of a criminal action turning out to be counterproductive in practice!  And that's the issue here, not some pristine thought experiment with frictionless planes.

[anonymous] · 1y

I am using the math that the CEA has pushed. Here's a quote from The Case For Strong Longtermism by Will MacAskill and Hilary Greaves (page 15):

That would mean that every $100 spent had, on average, an impact as valuable as saving one trillion (resp., one million, 100) lives on our main (resp. low, restricted) estimate – far more than the near-future benefits of bed net distribution.

If 100 dollars could be morally equivalent to saving one trillion lives, then I'd steal money too.

And here is a quote from Nick Bostrom's paper Astronomical Waste (paragraph 3):

Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.

If 10^29 human lives are lost every second we delay technological development, then why wait to get the money through more legitimate means?

Of course, this all could backfire. There are risks involved. But the risks are not enough to avoid making the expected value of fraud extremely high. Taking all this into consideration, then, is it any surprise that SBF did what he did?

And again, this is not my math. This is the math pushed by prominent and leading figures in EA.  I am just quoting them. Don't shoot the messenger.

And on that note: I recommend you watch this YouTube video, and use it as a source of reflection: 

That math shows that the stakes are high.  But that just means that it's all the more important to make actually prudent choices.  My point is that, in the real world, stealing money does not serve the goal of increasing funding for your cause in expectation.
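To spell the point out with a purely illustrative bit of algebra (not anything computed by the authors quoted above): raising the stakes multiplies an act's expected value, but it cannot change the sign.

```latex
\documentclass{article}
\begin{document}
Suppose the reckless act succeeds with probability $p$ (benefit $U$) and
otherwise backfires (harm $D$), so its expected value is $E = pU + (1-p)D$.
If $E < 0$, then scaling the stakes by any factor $k > 0$ gives
\[
  k\bigl(pU + (1-p)D\bigr) = kE < 0,
\]
so astronomical stakes rescale a negative expectation without making it positive.
\end{document}
```

What high stakes do change is how much it matters to get the sign right, which is exactly the case for prudence.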

You keep conflating "funding good causes is really important" (EA message) with "stealing money is an effective way to fund important causes" (stupid criminal fantasy).

I think it's really important to be clear that the EA message and the stupid criminal fantasy are not remotely the same claim.

Edited to add: it would certainly be bad to hold the two views in conjunction.  But, between these two claims, the EA message is not the problem.

[anonymous] · 1y

My point is that, in the real world, stealing money does not serve the goal of increasing funding for your cause in expectation. 

Why not? What if we can generate tens of billions of dollars through fraudulent means? We can buy a lot of utility with that money, after all. Perhaps even save humanity from the brink of extinction. 

And what if we think we have fairly good reason to believe that we will get away with it? Surely the expected value would start to look pretty enticing by then, no?

Frankly, I'd like to see your calculations. If you really believe that SBF's fraud did not have net positive value in expectation, then prove it. Do the math for us. At what point does the risk become unacceptable, and how much money would it take to offset that risk?

Do you know? Have you run the calculations? Or do you just have faith that the value in expectation will be net negative? Because right now I'm not seeing calculations. I am just seeing unsubstantiated assertions. 

But I'll throw you a bone. For the sake of argument, let's suppose that you crunch the numbers so that the math conveniently works out. Nice. 

But is this math consistent with Nick Bostrom's math, or Will's math? Is it consistent with the view that 100 dollars donated to AI safety is worth one trillion human lives? Or that every second of delayed technological development is just as bad as the untimely death of 10^29 people? 

On the face of it, it seems extremely improbable that this math could be consistent. Because what if Sam was very likely to get away with it, but just got unlucky? Alternatively, what if the risks are higher but SBF had the potential to become the world's first trillionaire? Would that change things? 

If it does, then this math seems flimsy. So, if we want to reject this flimsiness, then we need to say that Will or Nick's math is wrong.  

But here's the rub: SBF could have calibrated his own decision-theoretic musings to the tune of Nick and Will's, no? And if he did, that would suggest that Nick and/or Will's math is dangerous, would it not? And if their math is dangerous, that means that there must be something wrong with EA's messaging. So perhaps it's the case that EA (and EV thinking in general) does, in fact, bear some responsibility for this mess.

This brings us to your edit: 

 the EA message is not the problem. 

Care to elaborate on this point? 

How do you know this? Are you sure that the EA message is not the problem? What does it mean to say that a message is a 'problem', in this case? Would the EA message be a problem if it were true that, had EA never existed, SBF would never have committed massive financial fraud?

Because this counterfactual claim seems very likely to be correct. (See this Bloomberg article here.) So this would seem to suggest that EA is part of the problem, no? 

Because, surely, if EA is causally responsible for this whole debacle, then "the EA message" is at least part of the problem. Or do you disagree? 

If you do disagree, then: What does it mean, in your view, for something to be a "problem"? And what exactly would it take for "the EA message" to be "the problem"? 

And last, but certainly not least: Is there anything at all that could convince you that EV reasoning is not infallible?

"What if...?  Have you run the calculations? ... On the face of it..."

Did you even read the OP?  Your comments amount to nothing more than "But my naive utilitarian calculations suggest that these bad acts could really easily be justified after all!"  Which is simply non-responsive to the arguments against naive utilitarianism.

I'm not going to repeat the whole OP in response to your comments.  You repeatedly affirm that you think naive calculations, unconstrained by our most basic social knowledge about reliable vs counterproductive means of achieving social goals, are suited to answering these questions.  But that's precisely the mistake that the OP is arguing against.

Is there anything at all that could convince you that EV reasoning is not infallible?

This is backwards.  You are the one repeatedly invoking naive "EV reasoning" (i.e. calculations) as supposedly the true measure of expected value. I'm arguing that true expected value  is best approximated when constrained by reliable heuristics.

If you do disagree, then: What does it mean, in your view, for something to be a "problem"?

I mean for it to be false, unjustified, and something we should vociferously warn people against.  Not every causal contribution to a bad outcome is a "problem" in this sense.  Oxygen also causally contributed to every bad action by a human: without oxygen, the bad act would not have been committed.  Even so, oxygen is not the problem.

[anonymous] · 1y

 You repeatedly affirm that you think naive calculations, unconstrained by our most basic social knowledge about reliable vs counterproductive means of achieving social goals, are suited to answering these questions

Where did I say this?

I'm not going to repeat the whole OP in response to your comments.

You're assuming that you responded to my question in the original post. But you didn't. Your post just says "trust me guys, the math checks out". But I see no math. So where did you get this from? 

I'm arguing that true expected value  is best approximated when constrained by reliable heuristics.

"Arguing"? Or asserting?

If these are arguments, they are not very strong ones. No one outside of EA is convinced by this post. I'm not sure if you saw, but this post has even become the subject of ridicule on Twitter.

Not every causal contribution to a bad outcome is a "problem" in this sense.  Oxygen also causally contributed to every bad action by a human--without oxygen, the bad act would not have been committed.

Okay, I didn't realize we were going back to PHIL 101 here. If you need me to spell this out explicitly: SBF chose his career because he was encouraged by prominent EA leaders to earn to give. Without EA, he would never have had the means to start FTX. The earn-to-give model encourages shady business practices.

The connection is obvious.

Saying this has nothing to do with EA is like saying that Stalin's governance had nothing to do with Marxism.

Denying the link is delusional and makes us look like a cult.

[anonymous] · 1y

I suspect part of the problem here is that many EAs and philosopher-types tend to treat abstract theories as the be-all and end-all of ethics: if you tell them utilitarianism has a problem and they like it more than other theories, they have nothing to fall back upon, and so they keep defending utilitarianism even when it doesn't make sense. The idea that formal systems of ethics have their own limits, and that not all questions of morality ultimately come down to which abstract theory one favors, tends in my experience to be a bit of a foreign concept.

I can't speak for others, but this isn't the reason I'm defending utilitarianism. I'd be more than happy to fall back on other types of consequentialism, or moral uncertainty, if necessary (in fact I lean much more towards these than utilitarianism in general). I'm defending it simply because I don't think that the criticisms being raised are valid for most forms of utilitarianism. See my comments below for more detail on that.

That being said, I do think it's perfectly reasonable to want a coherent ethical theory that can be used universally. Indeed the alternative is generally considered irrational and can lead to various reductios.

[anonymous] · 1y

Apologies if this is rude, but your response is a good example of what I was talking about. I don't think I'll be able to write a short response without simply restating what I was saying in a slightly different way, but if you want to read a longer version of what I was talking about, you might be interested in my comment summarizing a much longer piece critiquing the use of utilitarianism in EA: 

Utilitarianism is not a good theory of everything for morality. It's helpful and important in some situations, such as when we have to make trade offs about costs and benefits that are relatively commensurate, deal with particular types of uncertainty, and for generating insights. But it doesn't really work or help in other ways or situations. There are several reasons for this or ideas gesturing in this direction. For one, no theory or model is a theory of everything in any domain, so why should utilitarianism be any different for ethics? For another, utilitarianism doesn't help us when we have to trade off different kinds of values against each other. Another is that in some situations, we inevitably have to exercise context-dependent judgment that cannot be captured by utilitarianism. 

This is not an anti-intellectualism argument that system construction is useless. Rather, this is a judgment about the limits of a particular model or theory. While such a judgment may not be justified from some kind of first principle or more fundamental system, this doesn't mean the judgment is wrong or unjustified. Part of the fundamental critique is that it is impossible/unworkable to find some kind of complete system that would guide our thinking in all situations; besides infinite regress problems, it is inescapable that we have to make particular moral judgments in specific contexts. This problem cannot be solved by an advanced AI or by assuming that there must be a single theory of everything for morality. Abstract theorizing cannot solve everything. 

Utilitarianism has been incredibly helpful, probably critical, for effective altruism, such as in the argument for donating to the most effective global health charities or interventions. It can also lead to undesirable value dictatorship and fanaticism. 

But this doesn't mean EA necessarily has a problem with fanaticism either. It is possible to use utilitarianism in a wise and non-dogmatic manner. In practice most EAs already do something like this, and their actions are influenced by judgment, restraint, and pluralism of values, whatever their stated or endorsed beliefs might be. 

The problem is that they don't really understand why  or how they do this beyond that it is desirable and perhaps necessary [is this right?]. People do get off the train to crazy town at some point, but don't really know how to justify it within their professed/desired framework beside some ad-hoc patches like moral uncertainty. The desire for a complete system that would guide all actions seems reasonable to EAs. EAs lack an understanding of the limits of systemic thinking. 

EA should move away from thinking that utilitarianism and abstract moral theories can solve all problems of morality, and instead seek to understand the world as it is better. This may lead to improvements to EA efforts in policy, politics, and other social contexts where game-theoretic considerations and judgment play critical roles, and where consequentialist reasoning can be detrimental. 

No worries. It is interesting though that you think my comment is a great example when it was meant to be a rebuttal. What I'm trying to say is, I wouldn't really identify as a 'utilitarian' myself, so I don't think I really have a vested interest in this debate. Nonetheless, I don't think utilitarianism 'breaks down' in this scenario, as you seem to be suggesting. I think very poorly-formulated versions do, but those are not commonly defended, and with some adjustments utilitarianism can accommodate most of our intuitions very well (including the ones that are relevant here). I'm also not sure what the basis is of the suggestion that utilitarianism works worse when a situation is more unique and there is more context to factor in.

To reiterate, I think the right move is (progressive) adjustments to a theory, and moral uncertainty (where relevant), which both seem significantly more rational than particularism. It's very unclear to me how we can know that it's 'impossible or unworkable' to find a system that would guide our thinking in all situations. Indeed some versions of moral uncertainty already seem to do this pretty well. I also would object to classifying moral uncertainty as an 'ad-hoc patch'. It wasn't initially developed to better accommodate our intuitions, but simply because as a matter of fact we find ourselves in the position of uncertainty with respect to what moral theory is correct (or 'preferable'), just like with empirical uncertainty.

[anonymous] · 1y

I think it was a good example (I changed the wording from 'great' to 'good') because my point was more about the role of abstract and formal theories of ethics rather than restricted to utilitarianism itself, and your response was defending abstract theories as the ultimate foundation for ethics. The point (which I am likely communicating badly to someone with different beliefs) is that formal systems have limits and are imperfectly applied by flawed humans with limited time, information, etc. It is all well and good to talk about making adjustments to theories to refine them, and indeed philosophers should do so, but applying them to real life is necessarily an imperfect process. 

I think the archer metaphor makes sense only if you accept moral realism. To an anti-realist, moral rules are just guides to ethical behavior. If naïve utilitarianism is a bad guide, and "following reliable rules" a good one, then why pretend that you're maximizing utility at all?

I don't see how metaethics makes any difference here.  Why couldn't an anti-realist similarly distinguish between (i) their moral goals, and (ii) the instrumental question of how best to achieve said goals?  (Pursuing goals in a prudent, non-naive way does not at all mean that you are merely "pretending" to have those goals!)

E.g. you could, in principle, have two anti-realists who endorsed exactly the same decision-procedure, but did so for different reasons. (Say one cared intrinsically about the rules, while the other followed rules for purely instrumental reasons.) I think it makes sense to say that these two anti-realists have different fundamental values, even though they agree in practice.

[anonymous] · 1y

This is all fine and good. But it fails to address the core of the issue. 

And the core of the issue is this. Utilitarianism is about maximizing utility. And donating billions of dollars to effective causes maximizes utility fairly well, even if that money was obtained by means of fraud. Because the harms of fraud can be plugged into our utility calculations and be offset by all the good things done by the money. 

It is true that various decision procedures would caution against committing fraud, on account of the risks involved. That's fine. 

But utilitarianism is not a decision procedure. Utilitarianism simply tells us which states of affairs are morally good. And a state of affairs where SBF commits massive financial fraud and then donates the proceeds to effective causes is, according to utilitarianism, morally good. Because the ends would offset the means. 

This is a problem for utilitarianism. And it is the problem you failed to directly address in this post. 

To my knowledge the most common rightness criterion of utilitarianism states that an action (or rule, or virtue) is good if, in expectation, it produces net positive value. Generally fraud of any kind does not have a net positive expected value, and it is very hard to distinguish the exceptions[1], if indeed any exist. Hence it is prudent to have a general rule against committing fraud, and I believe this aligns with what Richard is arguing in his post.

Personally I find it very dubious that fraud could ever be sanctioned by this criterion, especially once the damage to defrauded customers and reputational damage is factored in[2]. But let's imagine, for the sake of discussion, that exceptions do exist and that they can be confidently identified[3]. This could be seen as a flaw of this kind of utilitarianism, e.g. if one has a very strong intuition against illegal actions like fraud[4]. Then one could appeal to other heuristics, such as risk-aversion (which is potentially more compatible with theories like objective utilitarianism) or moral uncertainty, which is my preferred response. I.e. there is a non-trivial possibility that theories like traditional deontology are true, which should also be factored into our decisions (e.g. by way of a moral parliament).

To summarise, I think in any realistic scenario, no reasonable type of utilitarianism will endorse fraud. But even if it somehow does, there are other adequate ways to handle this counter-intuitive conclusion which do not require abandoning utilitarianism altogether.

Edit: I just realised that maybe what you're saying is more along the lines of "it doesn't matter if the exceptions can be confidently identified or not, what matters is that they exist at all". An obvious objection to this is that expected value is generally seen as relative to the agent in question, so it doesn't really make sense to think of an action as having an 'objective' net positive EV.[5] Also, it's not very relevant to the real-world, since ultimately it's humans who are making the decisions based on imperfect information (at least at the moment).

  1. ^

    This is especially so given the prevalence of bias / motivated reasoning in human reasoning.

  2. ^

    And FWIW, I think this is a large part of the reason why a lot of people have such a strong intuition against fraud. It might not even be necessary to devise other explanations.

  3. ^

    Just to be clear, I don't think the ongoing scenario was an exception of this kind.

  4. ^

    Although it is easy to question this intuition, e.g. by imagining a situation where defrauding one person is necessary to save a million lives.

  5. ^

    If an objective EV could be identified on the basis of perfect information and some small fundamental uncertainty, this would be much more like the actual value of the action than an EV, and leads to absurd conclusions. For instance, any minor everyday action could, through butterfly effects, lead to an extremely evil or extremely good person being born, and thus would have a very large 'objective EV', if defined this way.

[anonymous] · 1y

In response to your edit: Yes, that's what I mean. Utilitarianism can't say that fraud is wrong as a matter of principle. The point about EV is not strictly relevant, since expected value theory != utilitarianism. One is a decision theory and the other is a metaethical framework. And my point does not concern what kinds of actions are rational. My point concerns what states of affairs are morally good on utilitarian grounds. The two notions can come apart (e.g. it might not be rational to play the lottery, but a state of affairs where one wins the lottery and donates the money to effective causes would be morally good on utilitarian grounds).


I'm guessing you mean 'normative ethical framework', not 'meta-ethical framework'. That aside, what I was trying to say in my comment is that EV theory is not only a criterion for a rational decision, though it can be one,[1] but is often considered also a criterion for what is morally good on utilitarian grounds. See, for instance, this IEP page.

I think your comment addresses something more like objective (or ‘plain’ or ‘actual’) utilitarianism, where all that matters is whether the outcome of an action was in fact net positive ex post, within some particular timeframe, as opposed to whether the EV of the outcome was reasonably deemed net positive ex ante. The former is somewhat of a minority view, to my knowledge, and is subject to serious criticisms. (Not least that it is impossible to know with certainty what the actual consequences of a given action will be.[2])[3]

That being said, I agree that the consequences ex post are still very relevant. Personally I find a ‘dual’ or ‘hybrid’ view like the one described here most plausible, which attempts to reconcile the two dichotomous views. Such a view does not entail that it is morally acceptable to commit an action which is, in reasonable expectation, net negative, it simply accepts that positive consequences could in fact result from this sort of action, despite our expectation, and that these consequences themselves would be good, and we would be glad about them. That does not mean that we should do the action in the first place, or be glad that it occurred.[4]

  1. ^

    Actually, I don’t think that’s quite right either. The rationality criterion for decisions is expected utility theory, which is not necessarily the same as expected value in the context of consequentialism. The former is about the utility (or 'value') with respect to the individual, whereas the latter is about the value aggregated over all morally relevant individuals affected in a given scenario.

  2. ^

    Also, in a scenario where someone reduced existential risk but extinction did in fact occur, objective utilitarianism would state that their actions were morally neutral / irrelevant. This is one of many possible examples that seem highly counterintuitive to me.

  3. ^

    Also, if you were an objective consequentialist, it seems you would want to be more risk-averse and less inclined to use raw EV as your decision procedure anyway.

  4. ^

    I am not intending to raise the question of ‘fitting attitudes’ with this language, but merely to describe my point about rightness in a more salient way.

[anonymous] · 1y

I'm guessing you mean 'normative ethical framework', not 'meta-ethical framework'.

No. I meant 'metaethical framework.' It is a standard term in moral philosophy. See: https://plato.stanford.edu/entries/metaethics/

I think your comment addresses something more like objective (or ‘plain’ or ‘actual’) utilitarianism, where all that matters is whether the outcome of an action was in fact net positive ex post, within some particular timeframe, as opposed to whether the EV of the outcome was reasonably deemed net positive ex ante.

No. Here is what I mean. Utilitarianism defines moral value in terms of utility. So a state of affairs with high net utility is morally valuable, according to utilitarianism. And a state of affairs where SBF got away with it (and even some states of affairs where he didn't) have net positive utility. So they are morally valuable, according to utilitarianism.

Again, we do not need to bring decision theory into this. I am talking about metaethics here. So I am talking about what makes certain things morally good and certain things morally bad. In the case of utilitarianism, this is defined purely in terms of utility. And expected utility != value.

Compare: we can define wealth as having a high net-worth, and we can say that some actions are better at generating a high net worth. But we need not include these actions in our definitions of the term 'wealth'. Because being rich != getting rich. The same is true for utilitarianism. What is moral value is nonidentical to any decision procedure.

This is not a controversial point, or a matter of opinion. It is simply a matter of fact that, according to utilitarianism, a state of affairs with high utility is morally good.

No. I meant 'metaethical framework.' It is a standard term in moral philosophy. See: https://plato.stanford.edu/entries/metaethics/

I'm aware of the term. I said that because utilitarianism is not a metaethical framework, so I'm not really sure what you are referring to. A metaethical framework would be something like moral naturalism or error theory.

Again, we do not need to bring decision theory into this. I am talking about metaethics here. So I am talking about what makes certain things morally good and certain things morally bad. In the case of utilitarianism, this is defined purely in terms of utility. And expected utility != value.

Metaethics is about questions like what would make a moral statement true, or whether such statements can even be true. It is not about whether a 'thing' is morally good or bad: that is normative ethics. And again, I am talking about normative ethics, not decision theory. As I’ve tried to say, expected value is often used as a criterion of rightness, not only a decision procedure. That’s why the term ‘expectational’ or ‘expectable’ utilitarianism exists, which is described in various sources including the IEP. I have to say though at this point I am a little tired of restating that so many times without receiving a substantive response to it.

Compare: we can define wealth as having a high net-worth, and we can say that some actions are better at generating a high net worth. But we need not include these actions in our definitions of the term 'wealth'. Because being rich != getting rich. The same is true for utilitarianism. What is moral value is nonidentical to any decision procedure.

Yes, the rightness criterion is not necessarily identical to the decision procedure. But many utilitarians believe that actions should be morally judged on the basis of their reasonable EV, and it may turn out that this is in fact identical to the decision procedure (used or recommended). This does not mean it can’t be a rightness criterion. And let me reiterate here, I am talking about whether an action is good or bad, which is different to whether a world-state is good or bad. Utilitarianism can judge multiple types of things.

Also, as I've said before, if you in fact wanted to completely discard EV as a rightness criterion, then you would probably want to adjust your decision procedure as well, e.g. to be more risk-averse. The two tend to go hand in hand. I think a lot of the substance of the dilemma you're presenting comes from rejecting a rightness criterion while maintaining the associated decision procedure, which doesn't necessarily work well with other rightness criteria.

This is not a controversial point, or a matter of opinion. It is simply a matter of fact that, according to utilitarianism, a state of affairs with high utility is morally good.

I agree with that. What I disagree with is whether that entails that the action that produced that state of affairs was also morally good. This seems to me very non-obvious. Let me give you an extreme example to stress the point:

Imagine a sadist pushes someone onto the road in front of traffic, just for fun (with the expectation that they'll be hit). Fortunately the car that was going to hit them just barely stops soon enough. The driver of that car happens to be a terrorist who was (counterfactually) going to detonate a bomb in a crowded space later that day, but changes their mind because of the shocking experience (unbeknownst to the sadist). As a result, the terrorist is later arrested by the police before they can cause any harm. This is a major counterfactual improvement in the resulting state of affairs. However, it would seem absurd to me to say that it was therefore good, ex ante, to push the person into oncoming traffic.

[anonymous] · 1y

We are talking past one another.

Hmm perhaps. I did try to address your points quite directly in my last comment though (e.g. by arguing that EV can be both a decision procedure and a rightness criterion). Could you please explain how I'm talking past you?

[anonymous] · 1y

I agree this is a "problem" for utilitarianism (up-ticked accordingly).

But it's also a "problem" for any system of ethics that takes expected value into account, which applies to nearly everyone. How many random people on the street would say, "No, ends never justify the means. Like in that movie, when the baddie asked that politician for the nuclear codes and she said she didn't know them? She shouldn't have lied."

We're all - utilitarians and non-utilitarians alike - just debating where that line is. I reckon utilitarians are generally more likely to accept ends justifying means, but not by much, given all the things the OP says and many people have said on this Forum and in EA literature and messaging.

Unless you're a "naive utilitarian", which is why we have that concept, though arguably in light of recent events, EA still doesn't talk about it enough - I was very shocked at the thought that SBF could be this naive. (Although since hearing more details of the FTX story, I think it's more likely that SBF wasn't acting on utilitarian reasoning when he committed fraud - someone else puts it better than I could here: "My current guess is more that it'll turn out Alameda made a Big Mistake in mid-2022.  And instead of letting Alameda go under and keeping the valuable FTX exchange as a source of philanthropic money, there was an impulse to pride, to not lose the appearance of invincibility, to not be so embarrassed and humbled, to not swallow the loss, to proceed within the less painful illusion of normality and hope the reckoning could be put off forever.  It's a thing that happens a whole lot in finance!  And not with utilitarians either!  So we don't need to attribute causality here to a cold utilitarian calculation that it was better to gamble literally everything, because to keep growing Alameda+FTX was so much more valuable to Earth than keeping FTX at a slower growth rate.  It seems to me to appear on the surface of things, if the best-current-guess stories I'm hearing are true, that FTX blew up in a way classically associated with financial orgs being emotional and selfish, not with doing first-order naive utilitarian calculations.")

[anonymous] · 1y

I also love this quote from the same post for how it emphasises just how rare it is even on (non-naive) utilitarian grounds that serious rule-breaking is Good:

I worry that in the Peter Singer brand of altruism - that met with the Lesswrong Sequences which had a lot more Nozick libertarianism in it, and gave birth to effective altruism somewhere in the middle - there is too much Good and not enough Law, that it falls out of second-order rule utilitarianism into first-order subjective utilitarianism, and rule utilitarianism is what you ought to practice unless you are a god.

[anonymous] · 1y

But it's also a "problem" for any system of ethics that takes expected value into account, which applies to nearly everyone

Not all systems of ethics take expected value into account. Examples of views that do not take EV into account include virtue ethics, deontological views, and some forms of intuitionism.

[anonymous] · 1y

Sorry, "any system of ethics" was unclear. I didn't mean "any of the main normative theories that philosophers discuss"; I meant "any way people have of making moral decisions in the real world." I think that's the relevant thing here and I think there are very few people in the world who are 100% deontologists or what have you (hence "which applies to nearly everyone" and my next sentence).

[anonymous] · 1y

any way people have of making moral decisions in the real world

In the real world, people don't usually make EV calculations before making decisions, no? That seems very much so like an EA thing.

[anonymous] · 1y

I think my attempt to give a general description here is failing, so let me take that bit out altogether and focus on the example and see if that makes my point clearer:

I agree this is a "problem" for utilitarianism (up-ticked accordingly).

But it's also a "problem" for...nearly everyone. How many random people on the street would say, "No, ends never justify the means. Like in that movie, when the baddie asked that politician for the nuclear codes and she said she didn't know them? She shouldn't have lied."

[anonymous] · 1y

Yes, but ends/means and EV are two distinct things. It is true that EV is a technical apparatus we can use to make our ends/means reasoning more precise. But that does not mean that people who say 'the ends never justify the means' are saying that they subscribe to EV reasoning. It is perfectly consistent to agree with the former but disagree with the latter statement.

Virtue ethics is a good example of this, since it puts emphasis on moderation as well as intuition. So a virtue ethicist can consistently take a moderate, midway approach (where the ends sometimes justify the means) without accepting any particular theoretical framework (EV or otherwise) that describes these intuitions mathematically. Because it is logically possible that morality does not bottom out in mathematics.

Here's an analogy. Perhaps morality is more like jazz music. You know good jazz when you see it, and bad jazz music is even more noticeable, but that doesn't mean there is any axiomatic system that can tell us what it means for jazz music to be good. It is possible that something similar is true for morality. If so, then we need not accept EV theory, even if we believe that ends can justify means.

[anonymous] · 1y

I know. My point is that people who really think "ends never justify the means" are very rare in the world.

(Incidentally, I think I've seen you insist elsewhere that utilitarianism is a metaethical theory and not a normative one, when it is in fact a normative one, e.g. first result.)

I agree with Ben Auer's excellent replies in this thread.  One minor point I'd add is that insofar as you're focusing on the axiological question of whether some state of affairs is good, many non-utilitarian views will yield all the same verdicts as utilitarianism.  That is, many forms of deontology agree with utilitarians about what outcomes are "good" or "bad", but just disagree about the deontic question of which actions are "right" or "wrong".

Now, I'd argue that (once higher-order evidence is taken into account) expectational utilitarianism further reduces the scope of disagreement with "commonsense" deontological views in practice.  We can come up with crazy hypothetical examples where the views diverge, but it's going to be pretty rare in practice. (The main practical difference, I think, is just that deontology is more lax, in allowing you to do no good at all in your life, as long as you do no harm, whereas utilitarians are obviously more committed to the general EA/beneficentrist project of positively doing good when you easily can.)

As to whether we should subsequently be glad that someone acted wrongly or recklessly in that rare situation that it turned out for the best, it's true that utilitarianism answers 'yes'.  We should hope for good results, and be glad/relieved if they eventuate. A significant portion of deontologists (about half, in my experience) agree with consequentialists on this point.  We can all still agree that the agent is blameworthy, however, so it's important not to imagine that this implies no condemnation of the agent's action in this situation!

Freedomandutility stated one example: making AGI safe has infinite expected utility under a techno-utopian view, which may not be a hypothetical.

[anonymous] · 1y

Nitpick: If you believe infinite expected utility is a thing then every action has infinite expected utility. I assume you both just mean "extremely large expected utility."

I don't believe it is infinitely valuable, but freedomandutility did mention it literally, so I'm applying a simplicity prior and thinking he did mean that literally.

As I replied in that thread, raising the stakes just makes it all the more important to be actually prudent!