
This is the first post in Rethink Priorities’ Worldview Investigations Team’s CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is to consider some alternatives to expected value maximization for cause prioritization and, at the same time, to explore the practical implications of a commitment to expected value maximization. Here, we provide a preview of the entire sequence.

1. Introduction

We want to help others as much as we can. Knowing how is hard: there are many empirical, normative, and decision-theoretic uncertainties that make it difficult to identify the best paths toward that goal. Should we be focused on sparing children from malaria? Reducing suffering on factory farms? Mitigating the threats associated with AI? Should we split our attention between all three? Something else entirely?

Here are two common responses to these questions:

  • We ought to set priorities based on what would maximize expected value.
  • Expected value maximization (EVM) supports prioritizing existential risk (x-risk) mitigation over all else.

This post introduces a sequence from Rethink Priorities’ Worldview Investigations Team (WIT) that examines these two claims. The sequence highlights some reasons for skepticism about them both—reasons stemming from significant uncertainty about the correct normative theory of decision-making and uncertainty about many of the parameters and assumptions that enter into expected value calculations. It concludes with some practical resources for navigating uncertainty. What tools can we create to reason more transparently about tough questions? And at the end of the day, how should we make decisions?

Accordingly, this sequence is a contribution to the discussion about macro-level cause prioritization—i.e., how we split resources between global health and development, animals, and existential risk.[1] Before we go any further, though, let’s be clear: this sequence does not defend a specific split for any particular actor or group. Instead, it tries to clarify certain fundamental issues that bear on the split through the following series of reports.

The sequence has three parts. In the first, we consider alternatives to EVM.

  • We start by noting that some plausible moral theories avoid EVM entirely: they offer a fundamentally different way to set priorities. If the standard view is that effectiveness should be measured in something like counterfactual welfare gains per dollar spent, our report on contractualism and resource allocation illustrates how we might set priorities if we were to measure effectiveness in something like “strength-adjusted moral claims addressed per dollar spent.”
  • In our report on risk and animals, we examine several ways of incorporating risk sensitivity into the comparisons between interventions to help numerous animals with a relatively low probability of sentience (such as insects) and less numerous animals of likely or all-but-certain sentience (such as chickens and humans). We show that while one kind of risk aversion makes us more inclined to help insects, two other kinds of risk aversion suggest the opposite.
  • We generalize this discussion of risk in our report on risk-aversion and cause prioritization. Here, we model the cost-effectiveness of different cause areas in light of several formal models of risk aversion, evaluating how various risk attitudes affect value comparisons and how risk attitudes interact with one another.

The second part of the sequence examines the claim that EVM robustly favors x-risk mitigation efforts over global health or animal welfare causes.

  • In our report on the common sense case for spending on x-risk mitigation, we consider a simple model for assessing x-risk mitigation efforts where the value is restricted to the next few generations. We show that, given plausible assumptions, x-risk may not be orders of magnitude better than our best funding opportunities in other causes, especially when evaluated under non-EVM risk attitudes.
  • We then explore a more complicated hypothesis about the future, the so-called “time of perils” (TOP) hypothesis, that is commonly used to claim that x-risk is robustly more valuable than other causes. We delineate a number of assumptions that go into the TOP-based case for focusing on x-risk and highlight some of the reasons to be uncertain about them.        
  • As we reflect on the time of perils and more general risk structures, we investigate the value of existential risk mitigation efforts under different risk scenarios, different lengths of time during which risk is reduced, and a range of population growth cases. This report shows that the value of x-risk work varies considerably depending on the scenario in question and that value is only astronomical under a select few assumptions. Insofar as we don't have much confidence in any particular scenario, it’s difficult to have much confidence in any particular estimate of the value of x-risk mitigation efforts.
  • Finally, in our report on uncertainty over time and Bayesian updating, we note the difficulty of comparing estimates from models with wildly different levels of uncertainty or ambiguity. We provide an empirical estimate of how uncertainty increases as time passes, showing how a Bayesian may put decreasing weight on longer-term estimates.

All this work culminates in the third part of the sequence, where we introduce a tool for comparing causes and Rethink Priorities’ leadership discusses the practicalities of decision-making under uncertainty.

  • Accordingly, we present a cross-cause cost-effectiveness model (CCM), a tool for assessing the value of different kinds of interventions and research projects conditional on a wide range of assumptions. This resource allows users to specify distributions over possible values of parameters and see the corresponding distributions of results.
  • Finally, to make the upshot of the above more concrete, we consider how Rethink Priorities should make decisions. Written by RP’s Co-CEOs, this post comments on the theory and practice of setting priorities when your own resources are on the line.

With this quick summary behind us, we’ll provide a bit more detail about the major narrative of the sequence.

2. Making Comparisons

To compare actions that would benefit humans and chickens, we must find some common currency for comparing human and chicken welfare. To compare sure-thing investments versus moonshots with potentially great payoffs, we need to consider how to weigh probabilities when making decisions. To compare actions with near-term versus long-term consequences, we need to consider how we should take into account our uncertainty about the far future. In some of these cases, it can feel as though we are dealing with different kinds of value (human and animal welfare); in others, we may be dealing with the same kind of value, but it’s mediated by different factors (probability and time).

It’s difficult to know how to make comparisons across these apparent differences. Still, these comparisons are all the more pressing in a funding-constrained environment, where investments in any given cause come at the expense of investments in others.    

A standard approach to making these comparisons is to maximize expected value. The expected value (EV) of an action is the sum of the values of its possible consequences, weighted by the probabilities of those consequences coming to pass. EVM states that we ought to perform the action that has the highest EV (or one of the highest-EV actions if there are ties).
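In code, EV is a one-line probability-weighted sum. A minimal sketch (the two actions and their numbers are our own illustration, chosen so they tie):

```python
def expected_value(outcomes):
    """outcomes: a list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

# A sure-thing action and a moonshot, deliberately constructed to tie:
sure_thing = [(1.0, 1000)]
moonshot = [(0.999, 0), (0.001, 1_000_000)]
# Both have an EV of 1000, so EVM is indifferent between them.
```

The tie illustrates the point developed below: under EVM, shrinking an outcome's probability can always be offset by scaling up its value.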

This decision procedure tells us how to combine payoffs with our uncertainty about those payoffs. Moreover, it promises a way to compare very different kinds of actions. For instance, while we may not have a precise theory about how to compare human and chicken welfare, we may not need one. Instead, we can simply perform sensitivity tests: we can consider our upper and lower bound estimates of the value of chicken welfare to see whether our decision turns on our uncertainty. If it doesn’t, we’re in the clear.

In addition to its pragmatic appeal, EVM also has the backing of formidable philosophical arguments purporting to show that it is a uniquely rational way of making decisions.[2] However, EVM has some highly unintuitive consequences,[3] many of which stem from a simple fact: the highest EV option needn’t be one where success is at all likely, as reductions to the probability of an outcome can always be compensated for by proportional increases in its value. For instance, if we assume that value is proportional to the number of individuals affected, it follows that as the number of individuals affected by an action increases, the probability of success of those actions can get proportionally smaller while maintaining the same EV. As a result, actions that can affect large numbers of individuals need only have a small probability of success in order to have higher EV than more sure-thing actions that affect fewer individuals.

Consider two examples of this problem. First, humans and other large animals are vastly outnumbered by insects, shrimps, and other invertebrates. Since it is less likely that the latter creatures are sentient,[4] it’s less likely that actions to benefit them will result in any value at all. However, given the sheer number of these invertebrates, even a small probability of sentience will produce the result that actions that would benefit invertebrates have higher EV than actions that would benefit humans and other large animals. The result is (what Jeff Sebo calls) the “rebugnant conclusion,” according to which we ought to redirect aid from humans toward insects and shrimps.

Comparing future and present people provides another example where large numbers tend to dominate EV calculations. One way to justify working on x-risk mitigation projects is based on the many future people they might allow to come into existence. If the human species were to persist for, say, another million years, the number of future people would be much larger than it would be if humanity were only to survive for another few centuries. So, let’s consider an action with a low probability of ensuring that humanity lasts another million years rather than a few hundred years. The value of success would be far greater than the value of a successful action that affects a relatively small number of present people. Therefore, actions that have a low probability of bringing about a positive, population-rich future will tend to have higher expected values than more sure-thing actions that affect only (or primarily) present people.

You might already think that going all-in on shrimps or x-risk work is fanatical. (A view that’s compatible with wanting to spend significant resources on these causes!) If you don’t, then we can push this reasoning further. Suppose you’re approached by someone who claims to have magical powers and promises you an astronomical reward tomorrow in exchange for your wallet today. Given the potential benefits, EVM suggests that it would be irrational not to fork over your wallet even if you assign a minuscule (but non-zero) probability to their claims (i.e., you’re vulnerable to “Pascal’s mugging”). If there is some chance that panpsychism is true, where even microparticles are sentient, then we may need to devote all resources to improving their welfare, given how many of them there are. If having access to limitless free energy or substrates for digital minds would allow us to maximize the amount of value in the world, then research on those topics may trump all other causes. If we follow EVM to its logical end, then it’s rational to pursue actions that have the tiniest probabilities of astronomical value instead of actions that have sure but non-astronomical values.

This isn’t news. Indeed, some key figures in EA have expressed doubts about EVM for just these sorts of reasons. “Maximization is perilous,” Holden Karnofsky tells us, encouraging us not to take gambles that could result in serious harm. And as Toby Ord observes, “there are different attitudes you can take to risk and different ways that we can conceptualize what optimal behavior looks like in a world where we’re not certain of what outcomes will result from our actions… [Some risk-averse alternatives to EVM] have some credibility. And I think we should have some uncertainty around these things.” Presumably, therefore, at least some people in EA are open to decision theories that depart from EVM in one way or another.

To date, though, there hasn’t been much discussion of alternatives to EVM; it’s valuable, therefore, to explore them. There are at least two different ways to challenge EVM. First, we might question its strict risk neutrality. Decision theorists have offered competing accounts of decision-making procedures that incorporate risk sensitivity. We explore three different kinds of reasonable risk aversion, showing that they yield significantly different results about which causes we ought to prioritize. Second, we might give some credence to a moral theory in which our responsibilities to others often prohibit us from maximizing value, such as contractualism.

Of course, many EAs are quite committed to EVM. And many EAs appear to be convinced that EVM has a specific practical implication: namely, that we ought to focus our resources on mitigating x-risk. If we stick with EVM, do we get that result? We’re skeptical, as there are many controversial assumptions required to secure it. Since expected values are highly sensitive to changes in our credences, the results of EVM are far less predictable than they have been assumed to be. 

3. Beyond EV maximization

Our sequence begins by illustrating the importance of our moral theory in assessing what we ought to do. EVM fits naturally with welfare consequentialism, according to which the moral worth of an action is determined by its consequences for overall well-being. Even among philosophers, though, consequentialism is a minority view. Adopting a different moral theory might lead to quite different results. In our contractualism report, we consider the implications of one prominent alternative moral theory for cause prioritization. In brief, contractualism says that morality is about what we can justify to those affected by our actions. We argue that this theory favors spending on the surest global health and development (GHD) interventions over x-risk work and probably over most animal work even if the latter options have higher EV. Insofar as we place some credence in contractualism, we should be less confident that the highest-EV action is thereby the best option. Even if you aren’t inclined toward contractualism, the point stands that uncertainty over the correct theory of morality casts doubt on a strategy of always acting on the recommendations of EVM.

We then turn to the second reason for giving up on EVM: namely, making room for some kind of risk-aversion. EVM is sensitive to probabilities in only one way: the probabilities of outcomes are multiplied by the value of those outcomes, the results of which are summed to yield the expected value. However, agents are often sensitive to probabilities in other ways, many of which can be characterized as types of risk aversion. The EV calculations that we mentioned in the previous section (regarding invertebrates and x-risk) involve several kinds of uncertainty. First, some outcomes have a low probability of occurring. Second, there is uncertainty about how much our actions will change the relevant probabilities or values. Third, there is uncertainty about the probabilities and values that we assign to the various outcomes in question. We identify three corresponding kinds of risk-averse attitudes: risk aversion with respect to outcomes; risk aversion with respect to the difference our actions make; and aversion to ambiguous probabilities. You can be averse to any or all of these kinds of uncertainty.

In our risk and animals report, we motivate sensitivity to these kinds of risk by using them to diagnose disagreements about the value of actions that would benefit humans compared to actions to help more numerous animals with relatively low probabilities of sentience (e.g., shrimps and insects). Here, the key uncertainties concern the sentience of the individuals benefited. If shrimps are sentient, then benefiting them has enormous payoffs, given how much more numerous they are. However, if shrimps aren’t sentient, we’ll be wasting our money on organisms without conscious experiences—and for whom, therefore, things can’t go better or worse. Because EVM is risk-neutral, it suggests that the gamble is worthwhile, so we ought to direct our charitable giving toward shrimps over people. We introduce three approaches to risk, including a novel form of risk aversion about the difference that our actions make. We argue that while one prominent form of risk aversion tells in favor of the rebugnant conclusion, other reasonable forms of risk aversion tell in favor of helping creatures of more certain sentience.[5]

We use this test case to motivate a more general discussion of risk. In our risk-aversion and cause prioritization report, we evaluate how various risk attitudes affect comparisons between causes and how risk attitudes interact with one another.

First, an agent who is risk-averse with respect to outcomes is motivated to avoid the worst-case states of the world more than she’s motivated to obtain the best possible ones. She places more decision weight on the potential bad outcomes of a decision and less weight on the good ones. There are several well-studied and well-motivated formal models of this kind of risk aversion. Recall that strict EVM treats changes in probabilities and values symmetrically. Risk-sensitive procedures break this symmetry, albeit in different ways. For example, in Buchak’s (2013) Risk-Weighted Expected Utility (REU), a risk weighting is applied to the probabilities of outcomes, such that the probabilities of worse outcomes are adjusted upward and better outcomes are adjusted downward.[6]

Similarly, when applying Bottomley and Williamson’s (2023) iteration of Weighted-Linear Utility Theory (WLU) under neutral assumptions about the present state of the world, larger amounts of value created are adjusted downward, such that very large value outcomes must have higher probabilities to compensate for smaller probabilities of causing large amounts of harm. Risk-aversion in this sense makes us even more inclined to favor work to reduce x-risk (since we’re even more motivated to avoid catastrophe) and more inclined to favor insects over people (since trillions of insects suffering would be really bad). However, it would also make us less inclined toward fanatical moonshots with very low probabilities of astronomical value, since these extremely positive outcomes are given increasingly less weight the more risk-averse you are.
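To make the contrast concrete, here is a minimal sketch of a Buchak-style risk-weighted calculation, using the squared-probability weighting mentioned in the footnotes (the gamble and its numbers are our own illustration, not an example from the reports):

```python
def risk_weighted_eu(outcomes, r):
    """Buchak-style REU sketch. outcomes: (probability, value) pairs.
    Sort outcomes from worst to best, then weight each incremental gain
    by r(probability of doing at least that well)."""
    outcomes = sorted(outcomes, key=lambda o: o[1])
    total = outcomes[0][1]
    for i in range(1, len(outcomes)):
        tail_prob = sum(p for p, _ in outcomes[i:])
        total += r(tail_prob) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

gamble = [(0.5, 0), (0.5, 100)]
ev = risk_weighted_eu(gamble, lambda p: p)        # identity weighting recovers plain EV: 50.0
reu = risk_weighted_eu(gamble, lambda p: p ** 2)  # squared weighting: 25.0
```

With the identity weighting, the procedure is just EVM; with a convex weighting like squaring, the gamble is worth less than its EV, which is the sense in which the agent is risk-averse about outcomes.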

A second kind of risk aversion concerns difference-making. An agent who’s risk-averse in this sense is motivated to avoid actions that cause things to be worse or that do nothing. She assigns decision weights not merely on the basis of the overall state of the world that results from her action but on the difference that her action makes. We believe this kind of risk aversion is both common and undertheorized.[7] Difference-making risk aversion (DMRA) makes us hesitant to undertake actions that have a high probability of inefficacy or of making things worse. As a result, difference-making risk-weighted EV tends to favor helping humans over helping shrimps, and helping actual people over merely possible ones.
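One way to make difference-making risk aversion concrete (an illustrative formalization of ours, not the report's exact model) is to apply a risk weighting to the distribution of differences an action makes, rather than to final states of the world:

```python
def dm_risk_weighted_value(differences, r):
    """differences: (probability, difference-the-action-makes) pairs.
    Applies a rank-dependent risk weighting r to the distribution of
    differences, so high-probability-of-no-effect actions are penalized."""
    differences = sorted(differences, key=lambda d: d[1])
    total = differences[0][1]
    for i in range(1, len(differences)):
        tail_prob = sum(p for p, _ in differences[i:])
        total += r(tail_prob) * (differences[i][1] - differences[i - 1][1])
    return total

square = lambda p: p ** 2
sure_help = dm_risk_weighted_value([(1.0, 5)], square)                 # 5
moonshot = dm_risk_weighted_value([(0.99, 0), (0.01, 1000)], square)   # about 0.1, though its EV is 10
```

Even though the moonshot has twice the expected difference, the difference-making risk-averse agent prefers the sure help, because the moonshot almost certainly accomplishes nothing.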

Lastly, we might be risk-averse about ambiguous or uncertain probabilities (Ellsberg 1961). An ambiguity-averse agent prefers bets where she is confident about the probabilities that she assigns: she favors gambles where she has good evidence about the probabilities of outcomes and avoids those where her probability assignments are largely reflections of her ignorance. There are several strategies for penalizing ambiguity. Regardless of which we choose, risk aversion in this sense will penalize actions for which there is high variance in plausible probability assignments and for which EV results are highly sensitive to this variance.
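One such strategy from the decision-theory literature is an alpha-maxmin rule: blend the worst- and best-case EVs across the set of probability assignments you find plausible. A schematic sketch (our choice of formalization; the numbers are illustrative):

```python
def alpha_maxmin(ev_under_each_prior, alpha):
    """ev_under_each_prior: the action's EV under each probability
    assignment the agent finds plausible. alpha is the weight on the
    worst case; alpha > 0.5 encodes ambiguity aversion."""
    return alpha * min(ev_under_each_prior) + (1 - alpha) * max(ev_under_each_prior)

# An ambiguous bet (EV anywhere from 0 to 100, depending on the prior)
# vs. an unambiguous one (EV is 40 under every plausible prior):
ambiguous = alpha_maxmin([0, 50, 100], alpha=0.8)    # 20
unambiguous = alpha_maxmin([40, 40, 40], alpha=0.8)  # 40
```

Although the midpoint estimate of the ambiguous bet (50) beats the unambiguous one (40), the ambiguity-averse agent reverses that ranking, precisely because the ambiguous bet's value hinges on which prior is right.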

4. Uncertainty about the effects of our actions

All these points aside, suppose we go in for EVM. Another aim of this sequence is to explore whether there’s a quick route from EVM to the view that x-risk work is clearly more cost-effective than all other opportunities.

Let’s again consider how EV calculations work. An EV calculation considers a partition of states that we take to be relevant to our decision-making. We then evaluate how good or bad things would be if we took each candidate action and ended up in each state. Finally, we ask what the probabilities of those states would be if we took each candidate action. For example, if you are deciding whether to study for an exam, two states would be relevant: either you will pass or you won’t. You consider what it would be like if you study and pass, study and fail, don’t study and pass, or don’t study and fail. Lastly, you estimate the effect of studying on your probability of passing or failing: how much more likely you are to pass if you study than if you don’t. You combine these to yield the EV of studying and the EV of not studying. Then, you compare the results and make a decision.
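With hypothetical numbers, the whole calculation fits in a few lines (all probabilities and values here are assumptions for illustration):

```python
# Assumed probabilities of passing under each action:
p_pass_if_study, p_pass_if_not = 0.9, 0.4
# Assumed values of the two states:
v_pass, v_fail = 100, 0

ev_study = p_pass_if_study * v_pass + (1 - p_pass_if_study) * v_fail      # 90
ev_no_study = p_pass_if_not * v_pass + (1 - p_pass_if_not) * v_fail       # 40
# Study iff ev_study (net of the cost of studying) exceeds ev_no_study.
```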

Cases like deciding whether to study for an exam are fairly treated as “small world” decision problems. We consider only the near-term and direct effects that our actions would have, and these effects are relatively small. For example, when deciding to study, I don’t consider the effect that my passing the exam would have for global nuclear war. We typically have good purchase on the probabilities and the values involved in such a decision problem. However, that’s often far from true when we evaluate the long-run, indirect effects of our actions, as in the case of x-risk mitigation efforts.

With this in mind, there’s something appealing about simple models for assessing the value of x-risk mitigation efforts: we know that there are enormous uncertainties at every turn; so, if we can make a “common sense” case for working on x-risk, drawing only on premises about which we are confident, then perhaps we can ignore all the other complexities.

So, for instance, someone might want to argue that spending on x-risk mitigation is orders of magnitude better than spending on animals or global health even if the future isn’t enormous. Instead, the thought might be, we can get a winning EV for x-risk interventions just based on the interests of a few generations. However, in our report on the common sense case for x-risk, we show that, when you run the numbers, you don’t necessarily get that result. Instead, we see that plausible x-risk mitigation efforts have EVs competitive with those of animal causes and are likely no more than an order of magnitude more cost-effective than global health interventions. What’s more, high-risk, high-EV existential risk interventions probably don’t hold up to some plausible risk-sensitive attitudes.

Someone might argue that a fatal flaw of the common-sense view is that it doesn't care about the long-run future, and it’s precisely the long-run future that’s supposed to make x-risk mitigation efforts so valuable. In order to predict the long-term value that our actions will create, though, we need to make substantive hypotheses about the causal structure of the world and what the future would be like if we didn’t act at all. Again, as noted above, the value of investing in AI alignment is much smaller if there will be a nuclear war in the near future. It’s much greater if AI alignment would lower the probability of nuclear war.

Arguments that AI risk mitigation actions have higher EV than alternatives often assume a particular view about the future, the so-called “time of perils” hypothesis. On this view, we are currently in a period of very high existential risk, including risk from AI. However, if we get transformative aligned AI, we’ll have access to a resource that will allow us to address the other existential threats we face. Thus, if we survive long enough to secure transformative aligned AI, the value of the long-run future is likely to be extremely large. In our report on the time of perils-based case for x-risk’s dominance, we show that many premises are probably required to make this story work—so many, in fact, and so uncertain in each case, that the probability of its coming to pass is low enough that betting on x-risk for this reason may amount to fanaticism.

As we reflect on the many other possible futures, it becomes important to model alternative risk trajectories. In our report on the value of extinction risk mitigation efforts, we investigate the value of such efforts under more realistic assumptions, like sophisticated risk structures, variable persistence, and different cases of value growth. This report extends the model developed by Ord, Thorstad, and Adamczewski. By enriching the base model, we are able to perform sensitivity analyses and can better evaluate when extinction risk mitigation could, in expectation, be overwhelmingly valuable, and when it is comparable to or of lesser value than the alternatives. Crucially, we show that the value of x-risk work varies considerably with different scenario specifications. Insofar as we don't have much confidence in any one scenario, we shouldn’t have much confidence in any particular estimate of the value of extinction risk mitigation efforts.

We complete our discussion of uncertainty in our report on uncertainty over time and Bayesian updating. When deciding between actions with short-term versus long-term impacts, there's a balance between certainty and expected value. Predictions of short-term impacts are more certain but might offer less value, while long-term impacts can have higher expected value but are grounded in less certain predictions. This report provides an empirical analysis of how uncertainty grows over a 1-20 year range using data from development economics RCTs. Through statistical predictions and Bayesian updating models, we demonstrate how uncertainty gradually increases over time and how this causes the expected value of long-term impacts to diminish as the forecast time horizon lengthens.
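The mechanism can be sketched with a standard normal-normal Bayesian update (the numbers are illustrative; the report's actual estimates come from the RCT data):

```python
def posterior_mean(prior_mean, prior_var, forecast, forecast_var):
    """Normal-normal Bayesian update. As the forecast's variance grows
    with the time horizon, the weight w on it shrinks and the posterior
    is pulled back toward the skeptical prior."""
    w = prior_var / (prior_var + forecast_var)
    return prior_mean + w * (forecast - prior_mean)

# A forecast of 10 units of impact, against a skeptical prior of 0:
near_term = posterior_mean(0.0, 1.0, 10.0, 1.0)  # forecast gets half the weight: 5.0
long_term = posterior_mean(0.0, 1.0, 10.0, 9.0)  # forecast gets a tenth of the weight: 1.0
```

The same headline estimate counts for much less when it comes from a noisier long-horizon forecast, which is how a Bayesian ends up discounting longer-term impact claims.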

The upshot of these reports is fairly simple. There clearly are sets of assumptions where x-risk mitigation work has higher EV than the standard alternatives. However, there are other assumptions on which it doesn’t. And more generally, it’s difficult to make the case that it clearly beats all animal and global health work without relying on very low probability outcomes having large values.

5. Cross-Cause Cost-Effectiveness Model and How RP Should Make Decisions

Having considered the impact of risk aversion and explored the case for work on x-risk mitigation, we introduce our cross-cause cost-effectiveness model (CCM). This tool allows users to compare interventions like corporate animal welfare campaigns with work on AI safety, direct cash transfers with attempts to reduce the risk of nuclear war, and so on. Of course, the outputs depend on a host of assumptions, most of which we can’t explore in this sequence. We provide some views as defaults, but invite users to input their own. Furthermore, we allow users to input distributions of possible values to make it possible to see how uncertainties about parameters translate into uncertainties about results. So, this tool is not intended to settle many questions about resource allocation. Instead, it’s designed to help the community reason more transparently about these issues.

Finally, we turn to the hyper-practical. With all these uncertainties in mind, how should Rethink Priorities make decisions? Written by RP’s co-CEOs, this post comments on the theory and practice of setting priorities when your own resources are on the line.


The post was written by Bob Fischer and Hayley Clatterbuck. It's a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.

  1. ^

     Though it’s an open question whether it’s useful to carve up the space of interventions this way, this sequence won’t question the division.

  2. ^

     For arguments in favor of EV maximization, see Carlsmith (2022).

  3. ^

     See Beckstead (2013, Chapter 6). For a defense of fanaticism, as well as an explanation of why EV maximization leads to it, see Wilkinson (2021). For an argument for the moral importance of EVM relative to one kind of risk aversion, see Greaves et al. (2022).

  4. ^

     That is, having the capacity for valenced, phenomenally conscious experiences such that they can suffer or feel pleasure.

  5. ^

     This has consequences for fanaticism as well. Suppose there are some things we could do that have a small probability of creating new kinds of individuals (such as digital minds) that are capable of astronomical value. EVM leads to the fanatical result that we ought to prioritize these projects over ones to help people that currently exist or are likely to exist (Wilkinson 2022). Uncertainty about whether our actions could create such individuals and whether these individuals would indeed be morally considerable should cause some risk-averse agents to discount fanatical outcomes.

  6. ^

     For example, one possible REU weighting squares the probabilities of better outcomes. Therefore, better-case outcomes need to be disproportionately better in order to compensate for their smaller probabilities.

  7. ^

     Greaves et al. discuss difference-making risk aversion and provide several arguments against it.


Executive summary: The post introduces Rethink Priorities' CURVE sequence, which considers alternatives to expected value maximization and explores uncertainties around the claim that existential risk mitigation should be prioritized.

Key points:

  1. Maximizing expected value can have counterintuitive implications like prioritizing insects over humans or pursuing astronomical payoffs with tiny probabilities.
  2. Alternatives like contractualism and various forms of risk aversion may better align with moral intuitions.
  3. It's not clear that expected value maximization robustly favors existential risk over other causes given uncertainties about the future.
  4. Different assumptions about risk structures and time horizons can dramatically change estimates of the value of existential risk mitigation.
  5. A cross-cause cost-effectiveness model allows transparent reasoning about cause prioritization.
  6. Practical decision-making requires wrestling with moral and empirical uncertainties.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

I’m very excited to read this sequence! I have a few thoughts. Not sure how valid or insightful they are but thought I’d put them out there:

On going beyond EVM / risk neutrality

  • The motivation for investigating alternatives to EVM seems to be that EVM has some counterintuitive implications. I'm interested in the meta question of how much we should be swayed by counterintuitive conclusions when EVM seems to be so well-motivated (e.g. VNM theorem), and the fact that we know we are prone to biases and cognitive difficulties with large numbers.
  • Would alternatives to EVM also have counterintuitive conclusions? How counterintuitive? 
  • The motivation for incorporating risk aversion also seems driven by our intuition, but it's worth remembering the problems with rejecting risk neutrality e.g. being risk averse sometimes means choosing an action that is stochastically dominated. Again, what are the problems with the alternatives and how serious are they?

On choosing other cause areas over reducing x-risk

  • As it stands I struggle to justify GHD work at all on cluelessness grounds. GiveWell-type analyses ignore a lot of foreseeable indirect effects of the interventions e.g. those on non-human animals. It isn't clear to me that GHD work is net positive. I'd be interested in further work on this important point given how much money is given to GHD interventions in the community.
  • Not all x-risk is the same. Are there specific classes of x-risk that are pretty robust to issues people have raised to 'x-risk as a whole'? For example might s-risks - those that deal with outcomes far worse than extinction - be pretty robust? Are certain interventions, such as expanding humanity's moral circle or boosting economic growth, robustly positive and better than alternatives such as GHD even if the Time of Perils hypothesis isn't true? I'm genuinely not sure, but I know I don't feel comfortable lumping all x-risk / all x-risk interventions together in one bucket.

As it stands I struggle to justify GHD work at all on cluelessness grounds. GiveWell-type analyses ignore a lot of foreseeable indirect effects of the interventions e.g. those on non-human animals.

I support most of this comment, but strongly disagree with this, or at least think it's much too strong. Cluelessness isn't a categorical property which some interventions have and some don't - it's a question of how much to moderate your confidence in a given decision. Far from being the unanswerable question Greaves suggests, it seems reasonable to me to do any or all of the following:

  1. Assume unknown unknowns pan out to net 0
  2. Give credences on a range of known unknowns
  3. Time-limit the above process in some way, and give an overall best guess expectation for remaining semi-unknowns 
  4. Act based on the numbers you have from the above process when you stop
  5. Incorporate some form of randomness in the criteria you investigate

If you're not willing to do something like the above, you lose the ability to predict anything, including supposedly long-termist interventions, which are all mired in their own uncertainties.

So while one might come to the view that GHD is in fact bad because of eg the poor meat eater problem, it seems irrational to be agnostic on the question, unless you're comparably agnostic towards every other cause.

I think you might be right that Greaves is too strong on this and I'll admit I'm still quite uncertain about exactly how cluelessness cashes out. However, I know I have difficulties funding GHD work (and would even if there was nothing else to fund), but that I don't have similar difficulties for certain longtermist interventions. I'll try to explain.

I don't want to fund GHD work because it's just very plausible that the animal suffering might outweigh the human benefit. Some have called for development economists to consider the welfare of non-human animals. Despite this, GiveWell hasn't yet done this (I'm not criticising - I know it's tricky). I think it's possible for a detailed analysis to make me happy to give to GHD interventions (over burning the money), but we aren't there yet.

On the other hand, there are certain interventions that either:

  • Have no plausible foreseeable downsides, or
  • For which I am pretty confident the upsides outweigh the downsides in expectation.

For example:

  • Technical AI alignment research / AI governance and coordination research: I struggle to come up with a story why this might be bad. Maybe the story is that it would slow down progress and delay benefits from AI, but the typical Bostrom astronomical waste argument combined with genuine concerns about AI safety from experts debunks this story for me. I am left feeling pretty confident that funding technical AI alignment research is net positive in expectation.
  • Expanding our moral circle: Again, I just struggle to come up with a story why this might be bad. Of course, poor advocacy can be counterproductive which means we should be careful and not overly dogmatic/annoying. Spreading the ideas and advocating in a careful, thoughtful manner just seems robustly positive to me. Upsides seem very large given risks of lock-in of negative outcomes for non-humans (e.g. non-human animals or digital sentience).

Some other interventions I don't think I'm clueless about include:

  • Global priorities research
  • Growing EA/longtermism (I could be convinced it's bad to try to grow EA given recent negative publicity)
  • Investing for the future
  • Research into consciousness
  • Research into improving mental health

I have no particular reason to think you shouldn't believe in any of those claims, but fwiw I find it quite plausible (though wouldn't care to give particular credences atm) that at least some of them could be bad, eg:

  • Technical AI safety seems to have been the impetus for various organisations who are working on AI capabilities in a way that everyone except them seems to think is net negative (OpenAI, Deepmind, Anthropic, maybe others). Also, if humans end up successfully limiting AI by our own preferences, that could end up being a moral catastrophe all of its own.
  • 'Expanding our moral circle' sounds nice, but without a clear definition of the morality involved it's pretty vague what it means - and with such a definition, it could cash out as 'make people believe our moral views', which doesn't have a great history.
  • Investing for the future could put a great deal of undemocratic power into the hands of a small group of people whose values could shift (or turn out to be 'wrong') over time.

And all of these interventions just cost a lot of money, something which the EA movement seems very short on recently.


I don't buy the argument that AI safety is in some way responsible for dangerous AI capabilities. Even if the concept of AI safety had never been raised I'm pretty sure we would still have had AI orgs pop up.

Also yes it is possible that working on AI Safety could limit AI and be a catastrophe in terms of lost welfare, but I still think AI safety work is net positive in expectation given the Bostrom astronomical waste argument and genuine concerns about AI risk from experts. 

The key point here is that cluelessness doesn't arise just because we can think of ways an intervention could be both good and bad - it arises when we really struggle to weigh these competing effects. In the case of AI safety, I don't struggle to weigh them.

Expanding moral circle for me would be expanding to anything that is sentient or has the capacity for welfare.

As for investing for the future, you can probably mitigate those risks. Again though my point stands that, even if that is a legitimate worry, I can try to weigh that risk against the benefit. I personally feel fine in determining that, overall, investing funds for future use that are 'promised for altruistic purposes' seems net positive in expectation. We can debate that point of course, but that's my assessment.

I think at this point we can amicably disagree, though I'm curious why you think the 'more people = more animals exploited' philosophy applies to people in Africa, but not in the future. One might hope that we learn to do better, but it seems like that hope could be applied to and criticised in either scenario.

I do worry about future animal suffering. It's partly for that reason that I'm less concerned about reducing risks of extinction than I am about reducing other existential risks that will result in large amounts of suffering in the future. This informed some of my choices of interventions for which I am 'not clueless about'. E.g. 

  • Technical AI alignment / AI governance and coordination research: it has been suggested that misaligned AI could be a significant s-risk.
  • Expanding our moral circle: relevance to future suffering should be obvious.
  • Global priorities research: this just seems robustly good as how can increasing moral understanding be bad?
  • Research into consciousness: seems really important in light of the potential risk of future digital minds suffering.
  • Research into improving mental health: improving mental health has intrinsic worth and I don't see a clear link to increasing future suffering (in fact I lean towards thinking happier people/societies are less likely to act in morally outrageous ways).

I do lean towards thinking reducing extinction risk is net positive in expectation too, but I am quite uncertain about this and I don't let it motivate my personal altruistic choices.

Thanks for engaging, Jack! As you'd expect, we can't tackle everything in a single sequence; so, you won't get our answers to all your questions here. We say a bit more about the philosophical issues associated with going beyond EVM in this supplementary document, but since our main goal is to explore the implications of alternatives to EVM, we're largely content to motivate those alternatives without arguing for them at length.

Re: GHD work and cluelessness, I hear the worry. We'd like to think about this more ourselves. Here's hoping we're able to do some work on it in the future.

Re: not all x-risk being the same, fair point. We largely focus on extinction risk and do try to flag as much in each report.

Thanks for your response, I'm excited to see your sequence. I understand you can't cover everything of interest, but maybe my comments give ideas as to where you could do some further work.

As it stands I struggle to justify GHD work at all on cluelessness grounds. GiveWell-type analyses ignore a lot of foreseeable indirect effects of the interventions e.g. those on non-human animals. It isn't clear to me that GHD work is net positive.

Would you mind expanding a bit on why this applies to GHD and not other cause areas please? E.g.: wouldn't your concerns about animal welfare from GHD work also apply to x-risk work?

I'll direct you to my response to Arepo

I'm interested in the meta question of how much we should be swayed by counterintuitive conclusions when EVM seems to be so well-motivated (e.g. VNM theorem), and the fact that we know we are prone to biases and cognitive difficulties with large numbers.

I've been interested in this as well, and I consider Holden's contra arguments in Sequence thinking vs. cluster thinking persuasive in changing my mind for decision guidance in practice (e.g. at the "implementation level" of personally donating to actual x-risk mitigation funds) -- I'd be curious to know if you have a different reaction.

Edited to add: I just realized who I'm replying to, so I wanted to let you know that your guided cause prio flowchart was a key input at the start of my own mid-career pivot, and I've been sharing it from time to time with other people. In that post you wrote

my (ambitious) vision would be for such a flowchart to be used widely by new EAs to help them make an informed decision on cause area, ultimately improving the allocation of EAs to cause areas.

and if I'm interpreting your followup comment correctly, it's sad to see little interest in such a good first-cut distillation. Here's hoping interest picks up going forward.

I'll need to find some time to read that Holden post.

I'm happy that the flowchart was useful to you! I might consider working on it in the future, but I think the issues are that I'm not convinced many people would use it and that the actual content of the flowchart might be pretty contentious - so it would be easy to be accused of being biased. I was using my karma score as a signal of whether I should continue with it, and the karma wasn't impressive.

I like how the sequence engages with several kinds of uncertainties that one might have.

I had two questions:

1. Does the sequence assume a ‘good minus bad’ view, where independent bads (particularly, severe bads like torture-level suffering) can always be counterbalanced or offset by a sufficient addition of independent goods?

  • (Some of the main problems with this premise are outlined here, as part of a post where I explore what might be the most intuitive ways to think of wellbeing without it.)

2. Does the sequence assume an additive / summative / Archimedean theory of aggregation (i.e. that “quantity can always substitute for quality”), or does it also engage with some forms of lexical priority views (i.e. that “some qualities get categorical priority”)?

The links are to a post where I visualize and compare the aggregation-related ‘repugnant conclusions’ of different Archimedean and lexical views. (It’s essentially a response to Budolfson & Spears, 2018/2021, but can be read without having read them.) To me, the comparison makes it highly non-obvious whether Archimedean aggregation should be a default assumption, especially given points like those in my footnote 15, where I argue/point to arguments that a lexical priority view of aggregation need not, on a closer look, be implausible in theory nor practice:

it seems plausible to prioritize the reduction of certainly unbearable suffering over certainly bearable suffering (and over the creation of non-relieving goods) in theory. Additionally, such a priority is, at the practical level, quite compatible with an intuitive and continuous view of aggregation based on the expected amount of lexically bad states that one’s decisions may influence (Vinding, 2022b, 2022e).

Thus, ‘expectational lexical minimalism’ need not be implausible in theory nor in practice, because in practice we always have nontrivial uncertainty about when and where an instance of suffering becomes unbearable. Consequently, we should still be sensitive to variations in the intensity and quantity of suffering-moments. Yet we need not necessarily formalize any part of our decision-making process as a performance of Archimedean aggregation over tiny intrinsic disvalue, as opposed to thinking in terms of continuous probabilities, and expected amounts, of lexically bad suffering.

The above questions/assumptions seem practically relevant for whether to prioritize (e.g.) x-risk reduction over the reduction of severe bads / s-risks. However,  it seems to me that these questions are (within EA) often sidelined, not deeply engaged with, or are given strong implicit answers one way or another, without flagging their crucial relevance for cause prioritization.

Thus, for anyone who feels uncertain about these questions (i.e. resisting a dichotomous yes/no answer), it could be valuable to engage with them as additional kinds of uncertainties that one might have.

Hi Teo. Those are important uncertainties, but our sequence doesn't engage with them. There's only so much we could cover! We'd be glad to do some work in this vein in the future, contingent on funding. Thanks for raising these significant issues.

I am very, very excited to see this research; it's the kind of thing that I think EAs should be doing a lot more of, and it seems shocking that it has taken us more than a decade to get round to such basic, fundamental questions on cause prioritisation. Thank you so much for doing this.

I do however have one question and one potential concern.

Question: My understanding from reading the research agenda and plan here is that you are NOT looking into the topic of how best to make decisions under uncertainty (Knightian uncertainty, cluelessness, etc). It looks like you are focusing on resolving the question of WHAT exactly decision making should aim for (e.g. maximise true EV or not) but not the topic of HOW best to make those decisions (e.g. what decision tools to use, to what extent to rely on calculated EV as a tool versus other tools, when practically to satisfice or maximize, etc). It looks like you might touch on the HOW within the specific sub-question of uncertainty over time but not otherwise. Is this a correct reading of your research aims and agenda?

If so, this does put limits on the conclusions you could draw.

I think that the majority of (but by no means all) the people I know in EA who have a carefully considered view that pushes them to focus on, say, global health above x-risk issues do so not because they disagree on the WHAT but because they disagree on the HOW. They are not avoiding maximising EV, non-consequentialist, or risk averse; they just put less weight on simple EV calculations as a decision tool, and the set of tools that they do use directs them away from x-risk work.

Such conclusions or models built on just the WHAT question would be of limited use - not only because you need both the HOW to decide and the WHAT to aim* for in order to make a decision, but specifically because it is not hitting what, in my experience, is the primary (although not only) crux of people's actual disagreement here.

I'd be curious to hear if you agree with this analysis of the limits of the, still very very important, work you are doing.

*As an aside, I actually think in some cases it's possible to make do with the HOW but not the WHAT, but not the other way round. For example, you might believe that it has been shown empirically that, in situations of deep uncertainty, a strategy of robust satisficing rather than maximizing allows players to win more war-game scenarios or to feel more satisfied with their decision at a later point in time, and therefore believe that adopting such a strategy in situations of deep uncertainty is optimal. You could believe this without taking a stance on, or knowing, whether or not such a strategy maximizes true EV, is risk averse, etc.

Thanks for this. You're right that we don't give an overall theory of how to handle either decision-theoretic or moral uncertainty. The team is only a few months old and the problems you're raising are hard. So, for now, our aims are just to explore the implications of non-EVM decision theories for cause prioritization and to improve the available tools for thinking about the EV of x-risk mitigation efforts. Down the line---and with additional funding!---we'll be glad to tackle many additional questions. And, for what it's worth, we do think that the groundwork we're laying now will make it easier to develop overall giving portfolios based on people's best judgments about how to balance the various kinds and degrees of uncertainty.

Sorry to be annoying, but after reading the post "Animals of Uncertain Sentience" I am still very confused about the scope of this work.

My understanding is that any practical guidance on how to make decisions is out of the scope of that post. You are only looking at the question of whether the tools used should, in theory, be aiming to maximise true EV or not (even in cases where those tools do not involve calculating EV).

If I am wrong about the above do let me know!

Basically, I find phrases like "EV maximization decision procedure" and "using EV maximisation to make these decisions" etc. confusing. EV maximisation is a goal that might or might not be best served by an EV-calculation-based decision procedure, or by a decision procedure that does not involve any EV calculations. I'm sorry - I know this is persnickety - but I thought I would flag the things I am finding confusing. I do think being a bit more precise about this would help readers understand the posts.

Thank you for the work you are doing on this.

"The team is only a few months old and the problems you're raising are hard"

Yes a full and thorough understanding of this topic and rigorous application to cause prioritisation research would be hard.

But for what it's worth, I would expect there are some easy quick wins in this area too. Lots of work has been done outside the EA community, just not applied to cause prioritisation decision making - at least none that I have noticed so far...

Amazing. Super helpful to hear. Useful to understand what you are currently covering, what you are not covering, and what the limits are. I very much hope that you get the funding for further research.

Hello Bob and team. Looking forward to reading this. To check, are you planning to say anything explicitly about your approach to moral uncertainty? I can't see anything directly mentioned in 5., which is where I guessed it would go. 

On that note, Bob, you might recall that, a while back, I mentioned to you some work I'm doing with a couple of other philosophers on developing an approach to moral uncertainty along these lines that will sometimes justify the practice of worldview diversification. That draft is nearly complete and your post series inspires me to try and get it over the line!

Nice to hear from you, Michael. No, we don't provide a theory of moral uncertainty. We have thoughts, but this initial sequence doesn't include them. Looking forward to your draft whenever it's ready.

Thank you for seriously tackling a topic that seems to be overlooked despite its huge significance.

I am so excited to see this, as it looks like it might address many uncertainties I have but have not had a chance to think deeply about. Do you have a rough timeline on when you'll be posting each post in the series?

Thanks, Joshua! We'll be posting these fairly rapidly. You can expect most of the work before the end of the month and the rest in early November.

as reductions to the probability of an outcome can always be compensated for by proportional increases in its value


It's worth noting that this depends on the particular value function being used: holding some other standard assumptions constant, it works if and only if value is unbounded (above). There are bounded value (utility) functions whose expected value we could maximize instead. Among options that approximate total utilitarianism, we could maximize the expected value of:

  1. (time and space-)discounted total welfare, 
  2. rank-discounted total welfare,
  3. any bounded increasing function (like arctan) of total welfare.

In fact, this last one is also total utilitarianism: it agrees with total utilitarianism on all rankings between (uncertainty-free) outcomes. It's just not expectational total utilitarianism, which requires directly maximizing the expected value of total welfare.
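To make option 3 concrete, here's a small sketch of my own (the numbers are arbitrary and purely illustrative): an arctan-bounded utility of total welfare agrees with total utilitarianism on the ranking of any two certain outcomes, yet declines a tiny-probability astronomical gamble that expectational total utilitarianism takes.

```python
import math

# Two *certain* outcomes: arctan is increasing, so it preserves the ranking
# of total welfare (agrees with total utilitarianism when there's no risk).
low, high = 100.0, 200.0
assert (low < high) == (math.atan(low) < math.atan(high))

# Under risk the theories come apart. Compare a sure 1,000 units of welfare
# with a one-in-a-billion chance of 10**15 units (made-up numbers).
sure = 1000.0
gamble = [(1e-9, 1e15), (1 - 1e-9, 0.0)]

ev_welfare = sum(p * w for p, w in gamble)             # expected total welfare
ev_bounded = sum(p * math.atan(w) for p, w in gamble)  # expected arctan-utility

takes_gamble_unbounded = ev_welfare > sure           # 10**6 > 10**3: yes
takes_gamble_bounded = ev_bounded > math.atan(sure)  # ~1.6e-9 < ~1.57: no
```

Both functions rank the certain outcomes identically; only their verdicts under uncertainty differ, which is the sense in which the bounded version is "still total utilitarianism, just not expectational."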

And none of this has to give up the standard axioms of expected utility theory. Further, maximizing a bounded utility function satisfies very natural and similarly defensible extensions of those same rationality axioms that standard expectational total utilitarianism violates - specifically, the Sure-Thing Principle and the Independence axiom extended to infinitely (countably) many possible outcomes in prospects (due to St Petersburg prospects or similar; see Russell and Isaacs, 2021). I've also argued here that utilitarianism is irrational or self-undermining based on those results and other similar ones involving prospects with infinitely many possible outcomes.

We shouldn't cede standard expected utility theory to expectational total utilitarians or others maximizing the expected value of unbounded utility functions. They have to accept acting apparently irrationally (getting money pumped, Dutch booked, paying to avoid information, etc.) in hypothetical cases where those with bounded utility functions wouldn't.
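A quick numerical illustration of the St Petersburg point (my own sketch, not from the comment above): truncating the prospect after more and more rounds, expected total welfare grows without bound, while the expectation of a bounded utility (arctan again) converges.

```python
import math

# St Petersburg-style prospect: probability 2**-n of receiving 2**n units of
# welfare, truncated at `rounds` possible outcomes.
def truncated_ev(rounds, utility):
    return sum(2.0 ** -n * utility(2.0 ** n) for n in range(1, rounds + 1))

ev10 = truncated_ev(10, lambda w: w)  # each term is exactly 1, so this is 10.0
ev40 = truncated_ev(40, lambda w: w)  # ... and this is 40.0: no finite limit

b20 = truncated_ev(20, math.atan)  # bounded utility: the series converges,
b40 = truncated_ev(40, math.atan)  # so 20 extra rounds barely move the value
```

The unbounded version's truncated expectation equals the number of rounds, so the full prospect has no finite expected welfare; the bounded version's expectation stays below arctan's ceiling of pi/2 no matter how many rounds are added.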


(Standard risk aversion with respect to total welfare would be maximizing the expected value of a concave increasing function of total welfare, but that would still be fanatical about avoiding worst cases and still an unbounded utility function, so vulnerable to the generic objections to them.)

Thanks for your comment, Michael. Our team started working through your super helpful recent post last week! We discuss some of these issues (including the last point you mention) in a document where we summarize some of the philosophical background issues. However, we only mention bounded utility very briefly and don't discuss infinite cases at all. We focus instead on rounding down low probabilities, for two reasons: first, we think that's what people are probably actually doing in practice, and second, it avoids the seeming conflict between bounded utility and theories of value. I'm sure you have answers to that problem, so let us know!

I got a bit more time to think about this.

I think there probably is no conflict between bounded utility (or capturing risk aversion with concave increasing utility functions) and theories of deterministic value, because without uncertainty/risk, bounded utility functions can agree with unbounded ones on all rankings of outcomes. The utility function just captures risk attitudes with respect to deterministic value.

Furthermore, bounded and concave utility functions can be captured as weighting functions, much like WLU. Suppose you have a utility function $u$ of the value $V$, which is a function of outcomes. Then, whether $u$ is bounded or concave or whatever, we can still write:

$u(V(x)) = w(x) V(x)$,

where $w(x) = u(V(x))/V(x)$.[1] Then, for a random variable $X$ over outcomes:

$E[u(V(X))] = E[w(X) V(X)]$

Compare to WLU, with some weighting function $w$ of outcomes:

$\mathrm{WLU}(X) = \frac{E[w(X) V(X)]}{E[w(X)]}$

The difference is that WLU renormalizes.


By the way, because of this renormalizing, WLU can also be seen as adjusting the probabilities in $X$ to obtain a new prospect. If $P$ is the original probability distribution (for $X$, i.e. $P(A) = \Pr[X \in A]$ for each set of outcomes $A$), then we can define a new one by:[2]

$P'(A) = \frac{\int_A w \, dP}{\int w \, dP}$


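Here's a small numerical sketch of that rewriting (my own, with made-up numbers): expected utility expressed via the weighting function w(x) = u(V(x))/V(x), WLU as the same quantity renormalized by E[w(X)], and the equivalence between WLU and the mean of V under the adjusted probabilities.

```python
import math

# Hypothetical discrete prospect: probabilities and values V(x) of outcomes.
probs = [0.5, 0.3, 0.2]
values = [1.0, 10.0, 100.0]  # all nonzero, so w(x) = u(V(x))/V(x) is defined

u = math.atan  # an arbitrary bounded, increasing utility function of value

# Rewrite u(V(x)) as w(x) * V(x) with w(x) = u(V(x)) / V(x).
w = [u(v) / v for v in values]

# Expected utility in "weighting function" form: E[u(V(X))] = E[w(X) V(X)].
eu = sum(p * wi * v for p, wi, v in zip(probs, w, values))
assert abs(eu - sum(p * u(v) for p, v in zip(probs, values))) < 1e-12

# WLU renormalizes by E[w(X)].
e_w = sum(p * wi for p, wi in zip(probs, w))
wlu = eu / e_w

# Equivalently, WLU is the mean of V under adjusted probabilities
# p'(x) = p(x) * w(x) / E[w(X)] (the discrete version of the new measure P').
p_prime = [p * wi / e_w for p, wi in zip(probs, w)]
assert abs(wlu - sum(pp * v for pp, v in zip(p_prime, values))) < 1e-12
assert abs(sum(p_prime) - 1.0) < 1e-12  # adjusted probabilities sum to 1
```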
  1. ^

    We can define $w(x)$ arbitrarily when $V(x) = 0$ to avoid division by 0.

  2. ^

    You can replace the integrals with sums for discrete distributions, but integral notation is more general in measure theory.
