
This is a summary of the GPI Working Paper “In defence of fanaticism” by Hayden Wilkinson (published in Ethics 132/2). The summary was written by Riley Harris.

Introduction

Suppose you are choosing where to donate £1,500. One charity will distribute mosquito nets that cheaply and effectively prevent malaria; in all likelihood, your donation will save a life. Another charity aims to create computer simulations of brains, which could allow morally valuable life to continue indefinitely far into the future. They would be the first to admit that their project is very unlikely to succeed, because it is probably physically impossible. However, let’s suppose the second project has very high expected value, taking into account the likelihood of failure and the enormous value conditional on success. Which is better? Fanaticism is the view on which the second charity is better. More generally, fanaticism claims that for any guaranteed amount of moral value, there is a better prospect which involves only a very tiny chance of an enormous amount of value.[1]
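
To make the expected-value comparison concrete, here is a minimal sketch in Python with purely illustrative numbers: the success probabilities and payoffs below are assumptions for the sake of the example, not figures from the paper.

```python
# Illustrative expected-value comparison (all numbers are assumptions).

def expected_value(probability: float, value_if_success: float) -> float:
    """Expected moral value of a prospect with a single success outcome."""
    return probability * value_if_success

# Charity 1: mosquito nets -- near-certain to save one life.
nets_ev = expected_value(probability=0.95, value_if_success=1.0)

# Charity 2: brain simulations -- almost certain to fail, astronomical value if not.
simulation_ev = expected_value(probability=1e-15, value_if_success=1e20)

print(f"Mosquito nets:     expected value ~ {nets_ev:.2f}")
print(f"Brain simulations: expected value ~ {simulation_ev:.2e}")
# Despite the minuscule success probability, the second prospect has a far
# higher expected value -- exactly the kind of comparison fanaticism endorses.
```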

Many find fanaticism counterintuitive. Although formal theories of decision-making tend to recommend that we be fanatical, this has generally been taken as a mark against these theories.[2] In the paper “In defence of fanaticism”, Hayden Wilkinson maps out problems for any non-fanatical theory. There are many ways to construct a theory avoiding fanaticism, and different problems arise depending on how the theory is constructed (summarised in Table 1). According to Wilkinson, regardless of how the theory is constructed, the resulting problems will likely be worse than simply accepting fanaticism.

|  | Option 1 | Option 2 |
| --- | --- | --- |
| Decision 1: beneficial trades | The theory sometimes recommends that we do not trade a tiny amount of certainty for a very large increase in the possible payoff. | A series of improvements will leave us worse off, not better off. |
| Decision 2: discontinuous reasoning | Our choices will be inconsistent (in a certain sense) between high-stakes and low-stakes decisions. | Our choices will be absurdly sensitive to tiny changes in probabilities. |
| Decision 3: the importance of distant events | Our decisions will depend on what happens in distant places that we cannot affect. | Our decisions will depend on our beliefs about distant places that we cannot affect, and we will not be permitted to act as we know we would if we had more information. |

Table 1: Summary of problems faced by non-fanatics when constructing their theory of moral choice. At each decision point, the non-fanatical theory can avoid at most one of the options.

Decision 1: beneficial trades

In order to reject fanaticism, we must either deny that certain seemingly beneficial trades really are improvements, or claim that a series of beneficial trades[3] can leave us worse off.

To see why, consider a series of beneficial trades. For example, we might begin knowing we will gain 1 unit of value with certainty. Suppose we are offered a trade that multiplies our potential prize by ten billion, to 10^10 units of value, but reduces the probability of obtaining it by a tiny amount, say 0.00001%. We would then face near certainty (a 99.99999% chance) of obtaining ten billion units of value. Suppose we are then offered a trade that again multiplies our prize by ten billion, to 10^20 units, and again reduces the probability of obtaining it by 0.00001%, to a 99.99998% chance. The first trade seemed clearly beneficial, and this second trade is appealing for the same reason: we sacrifice a tiny amount of certainty for a very large increase in the possible payoff. Now imagine that we keep going, receiving slightly lower chances of much higher payoffs (a 99.99997% chance of 10^30 units, a 99.99996% chance of 10^40 units, and so on). Although each trade seems beneficial on its own, if we repeat the trade 9,999,999 times we end up with only a 0.00001% chance of obtaining a truly enormous value.[4] Normally we would say that when you make a series of beneficial trades you end up better off than you were before, so the final enormous but unlikely payoff is a better gamble than the initial small but certain payoff. But that is just a restatement of fanaticism. If we don’t want to accept fanaticism, we need to deny either that each individual trade was an improvement or that making a series of beneficial trades is an overall improvement.
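
The arithmetic behind this sequence can be checked directly. The following minimal sketch, a reconstruction of the example above rather than anything from the paper, tracks the prize in log space so the astronomically large payoffs stay representable; it shows the expected value climbing with every trade even as the chance of winning anything collapses.

```python
import math

# Reconstruction of the sequence of trades described above (illustrative only).
TRADES = 9_999_999   # total number of trades taken
STEP = 1e-7          # each trade gives up 0.00001% of the winning probability

for trade in (0, 1, 2, TRADES - 1, TRADES):
    probability = 1.0 - trade * STEP   # chance of receiving the prize
    log10_prize = 10 * trade           # prize is 10**(10 * trade) units of value
    log10_ev = math.log10(probability) + log10_prize   # log10 of expected value
    print(f"after {trade:>9,} trades: P(win) = {probability:.7%}, "
          f"prize = 10^{log10_prize}, expected value ~ 10^{log10_ev:.1f}")

# Each trade multiplies the expected value enormously, yet after all of them the
# chance of winning anything is only 0.00001%. Judging the final gamble better
# than the certain single unit of value is fanaticism restated.
```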

Decision 2: discontinuous reasoning

When we reject fanaticism (but want to keep the intuitive reasoning of expected value in other contexts), our decisions in high-stakes, low-probability scenarios will be inconsistent with our decisions in lower-stakes or higher-probability scenarios. There are two specific ways that this could happen, each leading to a different problem.

Scale independence

We might value ways of making decisions that are consistent in the following way: if you choose to accept a particular trade, you would also choose to trade where everything about the situation was the same, except all positive and negative effects of your choice are doubled. For example, if you would choose to save two lives rather than one life, then you would also choose to save two million lives rather than one million lives.[5] This is called scale independence.

Recall that when we reject fanaticism, there is some guaranteed amount of moral value that we would choose over a tiny probability of an arbitrarily large value. If we accept scale independence, then there can be no guaranteed amount, however small, that we would give up in exchange for that tiny probability of an arbitrarily large value. If there were, scale independence could be used to scale that trade back up and recover the fanatical conclusion.

Absurd sensitivity to tiny probability changes

But if we say there is no such value, then there will be some small chances that you ignore no matter how much value is on the line. And there will, of course, be some nearby chances, almost identical to those, that you are sensitive to. You would be absurdly sensitive to tiny probability changes. Intuitively, you would not want to give up some chance of a very large amount of value in favour of a very slightly higher chance of a very small amount of value, yet that is what such a theory sometimes requires. Consider:

Option 1: value 10^10 billion with probability p (and no value with probability 1-p)

Option 2: value 0.0000001 with probability p+ (and no value with probability 1-p+)

If we want to both reject fanaticism and preserve scale independence, we must sometimes choose Option 2 even if p is only a very tiny amount smaller than p+.[6] Taking a very slightly higher chance of some minuscule value over an almost indistinguishable probability of a vast payoff seems wrong. Worse still, we are forced to say not only that Option 2 is better, but that it is astronomically better—no matter how large we make the payoff in Option 1, it will never be as good as we judge Option 2 to be. This also presents a practical problem for real decision makers. At least some of the time, in order to know which of our options is best, we will need arbitrarily precise subjective probability estimates, because our judgments will be sensitive to extremely small changes in probabilities.
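
A quick expected-value calculation makes the oddity vivid. The numbers below are illustrative assumptions (a tiny p+ and a p that is barely smaller), not values from the paper:

```python
# Illustrative comparison of the two options (all numbers are assumptions).
p_plus = 1e-6            # probability of the tiny payoff in Option 2
p = p_plus - 1e-12       # barely smaller probability for Option 1

option_1_ev = p * 1e19        # value of 10^10 billion = 10^19 units
option_2_ev = p_plus * 1e-7   # value of 0.0000001 units

print(f"Option 1 expected value: {option_1_ev:.3e}")   # roughly 1e13
print(f"Option 2 expected value: {option_2_ev:.3e}")   # roughly 1e-13
# Expected value favours Option 1 by about 26 orders of magnitude, yet a
# theory that rejects fanaticism while keeping scale independence must
# sometimes rank Option 2 higher -- and as astronomically better at that.
```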

Decision 3: the importance of distant events

Dependence on distant events that we cannot affect

Suppose we reject fanaticism by creating an upper bound on how much some (arbitrarily large) payoff can contribute to how good a lottery seems. Perhaps, as a payoff gets larger, each additional unit of value contributes a little less to increasing the goodness of the overall lottery than the last. If we do this, how close we are to this upper bound becomes incredibly important. Background facts that change the total value of all options—but which are unaffected by our decisions—may become crucial, because these contribute to how close we are to the upper bound. This may include facts about distant planets that we can’t affect, or even different eras that are long gone.[7]
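
One way to picture this is with a bounded goodness function. In the minimal sketch below, goodness approaches an upper bound as total value grows (the particular function and numbers are hypothetical, chosen only for illustration); the very same action then improves things by very different amounts depending on how much background value, which our choice cannot affect, already exists.

```python
import math

SCALE = 1_000.0

def bounded_goodness(total_value: float) -> float:
    """A hypothetical bounded goodness function: approaches 1 as value grows."""
    return 1.0 - math.exp(-total_value / SCALE)

ACTION_VALUE = 100.0   # value added by our action, identical in both scenarios

for background in (0.0, 10_000.0):   # e.g. how much value ancient Egypt contained
    gain = bounded_goodness(background + ACTION_VALUE) - bounded_goodness(background)
    print(f"background value {background:>8,.0f}: our action adds {gain:.7f} goodness")

# With no background value the action improves goodness by ~0.0951626; against a
# large background it improves goodness by only ~0.0000043. How much our choice
# matters -- and so which option looks best -- depends on distant facts we cannot affect.
```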

This tends to be a deeply unintuitive result for many people—surely our actions today shouldn’t depend on the exact details of what happened in ancient Egypt, or on distant planets that our actions do not affect. We should not need to know these background facts to choose now.

Unable to act the way you know you would if you were better informed

The only way to avoid this dependence on background facts is to accept that what we ought to do will no longer depend on what actually happened, but it will still depend on our (uncertain) beliefs about what happened. Worse still, we might know for sure which option would look better if we resolved that uncertainty—everything we might learn would point us to the same answer—but, still, the mere presence of that uncertainty will require that we say something different is better.[8]

Conclusion

We have identified three decisions those trying to avoid fanaticism need to make (Table 1). For each of these, non-fanatics must choose at least one unappealing feature. Overall, they must accept at least three of these. Wilkinson concludes that this is likely to be worse than simply accepting fanaticism—we might be glad when our theories are fanatical, as it is the best of the bad options.

Practical implications

In practice, some interventions have only a tiny probability of actually resulting in a positive impact but, if they do, the impact may be enormous—e.g., lobbying governments to reduce their nuclear arsenals has only a tiny probability of preventing the very worst nuclear conflicts. It may seem counterintuitive that such interventions can be better than others that are guaranteed (or nearly guaranteed) to result in some benefit, such as directly providing medicine to those suffering from neglected tropical diseases. But the arguments in this paper suggest that low-probability interventions must at least sometimes be better than high-probability ones. If they weren’t, there would be situations where we would be led to even more counterintuitive verdicts. This strengthens the case for low-probability interventions such as lobbying for nuclear disarmament, provided that their potential impacts are large enough.

References

Nick Beckstead and Teruji Thomas (2023). A paradox for tiny probabilities and enormous values. Noûs, pages 1-25. 

Daniel Bernoulli (1738). Exposition of a New Theory on the Measurement of Risk (Specimen theoriae novae de mensura sortis). Commentarii Academiae Scientiarum Imperialis Petropolitanae 5, pages 175-192. 

Nick Bostrom (2009). Pascal’s mugging. Analysis 69/3, pages 443-445.

Georges-Louis Leclerc de Buffon (1777). Essays on Moral Arithmetic (Essai d'arithmétique morale). Supplément à l'Histoire Naturelle 4, pages 46-123.

Bradley Monton (2019). How to Avoid Maximising Expected Utility. Philosophers' Imprint 19/18, pages 1-25. 

Eric Schwitzgebel (2017). 1% Skepticism. Noûs 51/2, pages 271-290.

  1. ^

    Formally defined, a moral theory (combined with a way of making decisions under risk) is fanatical if for any tiny probability ϵ>0 and any finite value v, there exists a finite value V that is large enough that you would choose V with probability ϵ over v with certainty.

  2. ^

    Expected utility reasoning and popular alternatives that allow for Allais preferences and risk-aversion tend to endorse fanaticism. For examples of how fanaticism is taken to be a mark against such theories, see Bernoulli (1738), Buffon (1777), Bostrom (2009), Schwitzgebel (2017) and Monton (2019).

  3. ^

    Beckstead and Thomas (2023) make an analogous argument.

  4. ^

    Here we could make the increases as large as we want, and the probability as small as we want—so long as the probability is still strictly greater than 0.

  5. ^

    A common way to violate scale independence is by having diminishing returns—as you gain more of something, each additional unit contributes less value. This is plausible when we talk about our individual utility derived from certain things—the fifth ice cream I eat today gives me less value than the first—but it is not plausible here because we are talking about moral value. Moral value does not have diminishing returns, because saving someone's life is not less morally valuable when you’ve already saved four, so losing scale independence is a real loss.

  6. ^

    We can make p and p+ as close together as we like, so long as p+ is greater than p.

  7. ^

    This is known as the Egyptology objection because our decisions may depend on facts about what happened in ancient Egypt—facts like who was alive and how valuable their lives were. This involves rejecting a condition called background independence, which means that outcomes that are not changed by your decision should not affect your decision.



