Crosspost of my blog article.
A brief note: Philosophers Against Malaria is having a fundraiser where different universities compete to raise the most money to fight malaria. I’d highly encourage you to donate!
There are different views on the philosophy of risk:
- Fanaticism holds that for any guaranteed good outcome, an arbitrarily low probability of a sufficiently good outcome is better. For example, a one-in-googolplex chance of creating infinitely many happy people is better than a guarantee of creating 10,000 happy people.
- Risk-discounting views say that you simply ignore extremely low risks. If, say, risks are below one in a trillion, you just don’t take them into account in decision-making.
- Bounded utility views hold that as the amount of good stuff goes to infinity, the value asymptotically approaches a finite bound. For example, a bounded view might hold that as the number of happy people approaches infinity, the total value approaches some finite cap. The following graph shows what that looks like (the Y axis is value, the X axis is utility):
These views fairly naturally carve up the logical space. Fanaticism says that low probabilities of arbitrarily large amounts of good outweigh everything else. Discounting views say that you can ignore them because the probabilities are so low. Bounded views say that arbitrarily large amounts of value aren’t possible because value approaches a bound.
I’m a card-carrying fanatic for reasons I’ve given in my longest post ever! So here I’m not going to argue about which theory of risk is right, but instead set out to answer a different question: if you’re not sure which of these views is right, how should you proceed? My answer: under uncertainty, fanatical considerations dominate. If you’re not sure whether fanaticism is right, you should mostly behave like a fanatic.
First, an intuition pump for why unbounded views dominate bounded views: imagine that you are trying to maximize the number of apples in the world, and there are two theories. The first says that there’s no bound on how many apples can be created. The second says there can be at most 100 trillion apples. If you’re an apple-maximizer who isn’t sure which theory is true, you mostly care about the first one, because if it’s true the potential upside is unlimited. It wouldn’t be shocking if value worked the same way.
Here’s a sharper argument for this conclusion: suppose you think there’s a 50% chance that value is bounded. You’re given the following three options:
A) If value is bounded, you get 50,000 units of value.
B) If value is unbounded, you get 10 billion units of value.
C) If value is unbounded, you have a 1/googol chance of getting infinite units of value.
I’m going to assume that conditional on value being unbounded, you think that any chance of infinite value swamps any chance of finite value (we’ll soon see that you can drop that assumption).
It’s pretty intuitive that B is better than A. After all, B gives you a 50% chance of 10 billion units of value, while A gives you a 50% chance of 50,000 units of value. Why would whether there’s a bound on value affect how much you care about some fixed quantity of value? And it’s especially hard to believe the bound matters to such a degree that 50,000 units of value under the bounded theory are worth more than 10 billion units under the unbounded one.
But C is strictly better than B. After all, given unbounded value, C has infinite expected value, and expected value is what we’re assuming you’re trying to maximize conditional on value being unbounded (which it must be for B or C to pay out at all). Thus, by transitivity, C beats A: C > B > A. But if C beats A, then by the same logic, given any significant degree of moral uncertainty, you’ll always prefer arbitrarily small probabilities of arbitrarily large rewards. The key takeaway: given moral uncertainty, fanatical considerations matter arbitrarily more than risk-discounting ones.
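To make the arithmetic concrete, here’s a minimal sketch (my illustration, not part of the argument itself) that naively treats units of value as comparable across the bounded and unbounded hypotheses and uses floating-point infinity as a stand-in for infinite value; 1e-100 is the one-in-googol chance from option C.

```python
# Back-of-the-envelope expected values for options A, B, and C,
# given a 50% credence that value is bounded.
p_bounded = 0.5
p_unbounded = 0.5

ev_A = p_bounded * 50_000                    # A pays only if value is bounded
ev_B = p_unbounded * 10_000_000_000          # B pays only if value is unbounded
ev_C = p_unbounded * 1e-100 * float("inf")   # C: one-in-googol shot at infinite value

print(ev_A, ev_B, ev_C)  # 25000.0 5000000000.0 inf  ->  C > B > A
```

The argument above doesn’t actually need this naive cross-theory expectation (it goes through the B > A intuition plus transitivity), but the numbers show why C’s payoff swamps everything else.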
Now let’s drop the assumption that you’re maximizing expected utility. Let’s suppose you have the following credences:
- 1/4 in unbounded utility and expected utility maximizing.
- 1/4 in unbounded utility and discounting risks.
- 1/4 in bounded utility and expected utility maximizing.
- 1/4 in bounded utility and discounting risks.
Now suppose you’re given the following deals:
A) 500 billion utils given unbounded utility and expected utility maximizing.
B) 50,000 utils given bounded utility and expected utility maximizing.
C) 50,000 utils given bounded utility and discounting risks.
D) 1/googolplex chance of infinite utils given unbounded utility and expected utility maximizing.
E) 50,000 utils given unbounded utility and discounting risks.
By the logic described before, A is better than B and C, and D is better than A. Thus, D is better than B and C. This conclusion is robust to unequal credences: so long as your credences aren’t trivial, A is better than B and C combined, and D is better than A, so D is better than B and C combined. A is also better than E, and D is better than A, so D > E. Thus, a low probability of an infinite payout given fanaticism dominates a guaranteed payout given an unbounded view that discounts low risks.
The takeaway from all of this: A is better than B, C, and E. D is better than A. So D is the best on the list. Because D is something that only a fanatic would value more than the others, under uncertainty you should behave like a fanatic.
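For the same kind of sanity check, here’s a rough sketch (again mine, under the crude assumption that you can just sum credence-weighted payoffs across theories) of how the five deals compare given 1/4 credence in each combination; 1e-300 stands in for the one-in-googolplex chance, since the real number underflows ordinary floating point.

```python
# Crude credence-weighted scores for deals A-E. Each deal pays out only
# conditional on one theory combination, so its score is 0.25 times its
# payoff under that combination.
cred = 0.25
tiny = 1e-300  # stand-in for a 1/googolplex chance

scores = {
    "A": cred * 500_000_000_000,       # unbounded + EU maximizing
    "B": cred * 50_000,                # bounded + EU maximizing
    "C": cred * 50_000,                # bounded + discounting
    "D": cred * tiny * float("inf"),   # unbounded + EU maximizing: tiny shot at infinity
    "E": cred * 50_000,                # unbounded + discounting
}
for deal, score in scores.items():
    print(deal, score)  # D comes out infinite; A dwarfs B, C, and E
```

Again, the argument itself runs through pairwise comparisons and transitivity rather than a single cross-theory sum; the sketch just shows the relative sizes involved.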
So far, I’ve assumed that you have non-trivial credence in fanatical views (both unbounded and non-discounting). What if you think such views are very unlikely? I think the same basic logic goes through. Compare D to:
F) For F, stipulate that you think there’s only a 1 in 10 billion chance that fanaticism is true. F offers you a one in googol chance of infinite utils (a vastly higher probability than D’s one in googolplex), given unbounded utility and expected utility maximizing.
Clearly F is better than D. But D is better than the rest, so now F is better than the rest, and it makes sense to behave like a fanatic even if you’re pretty sure it’s false.
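The same rough arithmetic (my illustration again) shows why a tiny credence in fanaticism doesn’t change the picture: a 1-in-10-billion credence times a one-in-googol chance is about 10^-110, which is still astronomically larger than one in googolplex, and still yields an infinite naive expectation.

```python
# F: 1-in-10-billion credence in fanaticism, one-in-googol chance of infinite utils.
p_fanaticism = 1e-10   # credence that fanaticism is true
p_payout = 1e-100      # one-in-googol chance of the infinite payout

overall = p_fanaticism * p_payout
print(overall)                  # 1e-110, vastly bigger than 1/googolplex
print(overall * float("inf"))   # inf: the naive expectation is still infinite
```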
Now, one objection you might have: for present purposes I’ve been assuming that one of these views is right and talking about the probability of each. But if you’re an anti-realist, maybe you reject that framing. If moral facts are just facts about your preferences, perhaps there is no moral uncertainty, and it doesn’t make sense to talk about probabilities of views being right.
But this isn’t correct. All we need, for the argument, is for there to be some probability that unbounded views are right. There are many ways unbounded views might be right that shouldn’t be assigned zero credence:
- You might be mistaken about what you care about at the deepest level, picking up on superficial, irrelevant characteristics rather than what you fundamentally value.
- You might be wrong about moral anti-realism. If realism is true, unbounded views might be right.
- It might be that on the right view of anti-realism, moral error is possible and unbounded views are right.
And note: these kinds of wager arguments can themselves be framed in anti-realist terms. If you might care about something a huge amount, but you might only care about it up to a point, then in expectation you should care about it a lot.
Now, you might worry that this is a bit sus. The whole point of bounded and discounting views is to allow you to ignore low risks. And now you’re being told that if you have any credence in fanaticism it just takes over. What? But this shouldn’t be so surprising when we reflect on how its competitors deviate from fanaticism. Specifically:
- Bounded utility views end up saying that value reaches a finite limit. But it’s no surprise that if you’re trying to maximize expected value (which you are, if bounding utility is your only deviation from fanaticism), then views on which the limit is finite will matter much less than views on which there is no limit at all. “Maybe this thing isn’t that important” isn’t an adequate rebuttal to “we should care about this because it may be infinitely important,” given uncertainty.
- Discounting views just ignore low risks. But given some chance that those low-probability outcomes are the most important thing there is, even if there’s also some chance they don’t matter at all, they end up swamping everything else in expectation.
Now, you can in theory get around this result, but doing so will be costly. You’ll have to either:
- Abandon transitivity.
- Think that B > A (in the second list of deals), which seems even worse. This would mean that 50,000 utils under bounded value beat 500 billion utils under unbounded value.
Now, can you do this with other kinds of moral uncertainty? Can similar arguments establish, say, that deontology matters more than consequentialism? I think the answer is no. I thought about it for a few minutes and wasn’t able to come up with a way of setting up deals analogous to A through F on which deontology dominates. Deontology and consequentialism also deal in incommensurable units (they value things of fundamentally different types), so moral uncertainty will be wonkier across them than across different theories of value.
One other way to think about this: it’s often helpful to compare moral uncertainty to empirical uncertainty and treat them equivalently. For instance, if there’s a 50% chance that animal suffering isn’t a bad thing, you should treat that the same as if there was a 50% chance that animals didn’t suffer at all.
But if we do that in this context, unbounded views win out in expectation. Consider the empirical analogue of non-fanatical views:
- The empirical analogue of risk discounting is learning that the low risks actually have a 0% chance of coming true (rather than discounting them for moral reasons, you’d discount them for factual ones). But if you think there’s a 50% chance that those low risks are arbitrarily important and a 50% chance that they’re unimportant because their true odds are 0%, they’ll still end up important in expectation.
- The empirical analogue of bounded utility is thinking that the universe can only bring about some finite amount of utility. But if we thought there was some chance of that, and some chance that utility was unbounded and fanaticism was true, then the worlds where utility was unbounded would be more important in expectation.
I’m open to the possibility that there could be a non-fanatical view that deals in incommensurable units and thus isn’t dominated by fanaticism given uncertainty. But the two main competitors probably get dominated. If you’re morally uncertain, you should behave like a fanatic. In trying to bring about good worlds, we should mostly care about the worlds that are potentially extremely amazing, rather than the ones where value might be capped.
