
Executive Summary

  • We should take uncertainty seriously. Rethink Priorities’ Moral Parliament Tool, for instance, highlights that whether a worldview favors a particular project depends on relatively small differences in empirical assumptions and the way we characterize the commitments of that worldview.
  • We have good reason to be uncertain:
    • The relevant empirical and philosophical issues are difficult.
    • We’re largely guessing when it comes to most of the key empirical claims associated with Global Catastrophic Risks and Animal Welfare.
    • As a community, EA has some objectionable epistemic features—e.g., it can be an echo chamber—that should probably make us less confident of the claims that are popular within it.
  • The extent of our uncertainty is a reason to build models more like the Portfolio Builder and Moral Parliament Tools and less like traditional BOTECs. This is because:
    • Our models allow you to change parameters systematically to see how those changes affect allocations, permitting sensitivity analyses.
    • BOTECs don’t deliver optimizations.
    • BOTECs don’t systematically incorporate alternative decision theories or moral views.
    • Building a general tool requires you to formulate general assumptions about the functional relationships between different parameters. If you don’t build general tools, then it’s easier to make ad hoc assumptions (or ad hoc adjustments to your assumptions).

Introduction

Most philanthropic actors, whether individuals or large charitable organizations, support a variety of cause areas and charities. How should they prioritize among altruistic opportunities in light of their beliefs and decision-theoretic commitments? The CRAFT Sequence explores the challenge of constructing giving portfolios. Over the course of this sequence—and, in particular, through Rethink Priorities’ Portfolio Builder and Moral Parliament Tools—we’ve investigated the factors that influence our views about optimal giving. For instance, we may want to adjust our allocations to account for the diminishing returns of particular projects, to hedge against risk, to accommodate moral uncertainty, or to reflect our preferred procedure for moving from our commitments to an overall portfolio.

In this final post, we briefly recap the CRAFT Sequence, discuss the importance of uncertainty, and explain why we should be quite uncertain about any particular combination of empirical, normative, and metanormative judgments. We think that there is a good case for developing and using frameworks and tools like the ones CRAFT offers to help us navigate our uncertainty.

Recapping CRAFT

We can be uncertain about a wide range of empirical questions, ranging from the probability that an intervention has a positive effect of some magnitude to the rate at which returns diminish.

We can be uncertain about a wide range of normative questions, ranging from the amount of credit that an actor can take to the value we ought to assign to various possible futures.

We can be uncertain about a wide range of metanormative questions, ranging from the correct decision theory to the correct means of resolving disagreements among our normative commitments.

Over the course of this sequence—and, in particular, through Rethink Priorities’ Portfolio Builder and Moral Parliament Tools—we’ve tried to do two things.

First, we’ve tried to motivate some of these uncertainties:

  • We’ve explored alternatives to EV maximization’s use as a decision procedure. Even if EV maximization is the correct criterion of rationality, it’s questionable as a decision procedure that ordinary, fallible people can use to make decisions given all their uncertainties and limitations.
  • We’ve explored the problems and promises of difference-making risk aversion (DMRA). A DMRA agent is averse to situations where her action does no tangible good or, worse, reduces the amount of value in the world, so she prefers actions that are likely to make a difference. We’ve suggested DMRA might be extrinsically justified as a means of obtaining the greatest absolute good; we’ve also argued that difference-making might be intrinsically valuable, which would make it sensible for agents to be risk-averse about the difference that they make.
  • Likewise, one common method for moderating the fanatical effects of expected value maximization is to ignore very low probability outcomes, rounding them down to 0. Then, one maximizes EV across the remaining set of sufficiently probable outcomes. Though there are decision-theoretic objections to this approach, an epistemic defense of it is possible: we should (or are permitted to) round down low subjective credences that reflect uncertainty about how the world really is (see the sketch below).
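
To make the rounding-down procedure concrete, here’s a minimal sketch in Python. The threshold, probabilities, and payoffs are invented for illustration rather than taken from any of our models, and it implements just one simple variant of the idea: drop sub-threshold outcomes, renormalize what’s left, and then maximize EV.

```python
# A minimal sketch of "rounding down" low probabilities before maximizing EV.
# The threshold and the example numbers are illustrative assumptions, not values
# drawn from the Portfolio Builder or Moral Parliament Tools.

def rounded_down_ev(outcomes, threshold=1e-4, renormalize=True):
    """outcomes: list of (probability, value) pairs for one action."""
    kept = [(p, v) for p, v in outcomes if p >= threshold]  # ignore tiny-probability outcomes
    if not kept:
        return 0.0
    total_p = sum(p for p, _ in kept)
    if renormalize and total_p > 0:
        kept = [(p / total_p, v) for p, v in kept]  # redistribute the discarded probability mass
    return sum(p * v for p, v in kept)

# A "fanatical" option (tiny chance of astronomical value) vs. a safe option.
longshot = [(1e-7, 1e12), (1 - 1e-7, 0)]
safe_bet = [(0.6, 100), (0.4, 0)]

print(rounded_down_ev(longshot))  # 0.0  -- the astronomical payoff gets rounded away
print(rounded_down_ev(safe_bet))  # 60.0 -- the safe option maximizes rounded-down EV
```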

Second, we’ve tried to give structure to our ignorance, and thereby show how these uncertainties matter:

  • For example, our models show that unless we’re confident both that the returns on x-risk interventions are high and stable and that we ought to be EV maximizers, it can be optimal to diversify a giving portfolio to include fairly large amounts of spending on Animal Welfare and Global Health and Development.
  • Our models also indicate that whether a worldview favors a particular project depends on relatively small differences in the empirical assumptions and the ways we characterize the commitments of that worldview—e.g., small changes in the anticipated value of the future or the moral weights we assign to different species can cause significant changes in our first-order normative judgments.
  • Finally, it isn’t obvious whether the various strategies for handling moral uncertainty—e.g., using My Favorite Theory, maximizing expected choiceworthiness, or selecting one of several voting or bargaining options—will agree or disagree about the optimal portfolio. Sometimes they converge, sometimes they don’t, and the outcome depends on the details of the case (see the toy example below).
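
As a toy illustration of how these strategies can come apart, consider the sketch below. The credences and choiceworthiness scores are hypothetical, and we set aside worries about intertheoretic comparability; the point is only that My Favorite Theory defers entirely to the single most credible worldview, while maximizing expected choiceworthiness averages across worldviews, and the two can recommend different portfolios.

```python
# Hypothetical credences in two worldviews and their choiceworthiness scores for
# three candidate portfolios. All numbers are invented for illustration; they are
# not outputs of the Moral Parliament Tool.
credences = {"worldview_A": 0.6, "worldview_B": 0.4}

choiceworthiness = {
    "worldview_A": {"portfolio_1": 10, "portfolio_2": 9, "portfolio_3": 2},
    "worldview_B": {"portfolio_1": 1,  "portfolio_2": 8, "portfolio_3": 10},
}

options = ["portfolio_1", "portfolio_2", "portfolio_3"]

# My Favorite Theory: act on whichever worldview you find most credible.
favorite = max(credences, key=credences.get)
mft_choice = max(options, key=lambda o: choiceworthiness[favorite][o])

# Maximize expected choiceworthiness: credence-weighted average across worldviews.
def expected_cw(option):
    return sum(credences[w] * choiceworthiness[w][option] for w in credences)

mec_choice = max(options, key=expected_cw)

print(mft_choice)  # portfolio_1 -- worldview_A's top pick
print(mec_choice)  # portfolio_2 -- best on the credence-weighted average
```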

Given all this, it matters how confident we are in any particular combination of empirical, normative, and metanormative judgments.

How confident should we be in any particular combination of empirical, normative, and metanormative judgments?

We suggest: not very confident. The argument isn’t complicated. In brief, it’s already widely acknowledged that:

  • The relevant empirical and philosophical issues are difficult, inspiring deep and seemingly intractable disagreements about almost every dimension of global priorities research.
  • We’re largely guessing when it comes to most of the key empirical claims associated with Global Catastrophic Risks and Animal Welfare.
  • As a community, EA has some objectionable epistemic features—e.g., it can be an echo chamber—that should probably make us less confident of the claims that are popular within it.

Given these claims, our credence in any particular crux for portfolio construction—e.g., the cost curves for corporate campaigns for chickens, the plausibility of total hedonistic utilitarianism, the value of the future, the probability that a particular intervention will backfire, etc.—should probably be modest. It would be very surprising to be the people who had figured out some of the central problems of philosophy, tackled stunningly difficult problems in forecasting, and done it all while being members of a group that (like many others) isn’t always maximally epistemically virtuous.

The relevant empirical and philosophical issues are difficult

This point hardly needs any defense, but to drive it home, just consider whether there’s any interesting thesis in global priorities research that isn’t contested. Here are some claims that might seem completely obvious, and yet there are interesting, thoughtful, difficult-to-address arguments against them:

  • The future will be net positive. However, some worry that we’ll end up creating huge numbers of digital minds and/or animals who will have net negative lives, in which case the value of even a large and net positive human population could be swamped by the suffering of a huge number of nonhuman individuals.  
  • Animals matter. However, it’s also common to think (a) that animals matter if and only if they’re sentient and (b) that sentience requires some kind of metacognition that animals lack.
  • On balance, preventing children from dying has a positive effect on overall welfare. However, surviving children may go on to consume animal products derived from animals with net negative lives. If they eat enough of them, then saving children could be net negative.

To be clear, we are not criticizing the claims at the start of each bullet! Instead, we’re pointing out that even when we focus on claims that feel blindingly obvious, there are reasons not to be certain, not to assign a credence of 1. And if these claims are dubitable, then how much more dubitable are claims like:

  • The probability that a given AI governance intervention will succeed is greater than 0.003%.
  • On average, you can claim 10 years of counterfactual credit for corporate campaigns to improve layer hen welfare.
  • The probability that a given public health intervention will make things 2x worse than they are now is <0.046%.

In each of these latter cases, the empirical and normative issues are at least as complex as they are in the former cases. So, if we can’t be fully confident about the former, then we clearly can’t be fully confident about the latter. And since a string of such dubitable assumptions is required to justify any particular class of interventions, we should have fairly low confidence that any particular class of interventions deserves all our resources.
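
A stylized bit of arithmetic brings out the force of this point. Suppose the case for an intervention rests on five assumptions, each of which we hold with what looks like respectable confidence, and suppose (purely for simplicity) that they’re independent. The credences below are invented, but the pattern is general: confidence in the conjunction ends up modest.

```python
# Stylized illustration: the joint probability of several dubitable assumptions
# decays quickly. The credences are invented for illustration and assume
# independence; they are not estimates of any real intervention's cruxes.
credences = [0.9, 0.85, 0.8, 0.75, 0.7]

joint = 1.0
for c in credences:
    joint *= c

print(round(joint, 3))  # ~0.321 -- modest confidence in the conjunction
```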

We’re largely guessing when it comes to most of the key empirical claims associated with GCR and animal work

As before, this claim needs little defense. Anyone who has tried to BOTEC the cost-effectiveness of a GCR or animal intervention knows that there isn’t any rigorous research to cite in defense of many of the numbers. (Indeed, if you consider flowthrough effects, the same point probably applies to many GHD interventions too.) In fact, EAs are now so accustomed to fabricating numbers that they hardly flinch. Consider, for instance, Arepo’s response to concerns that the numbers you plug into his calculators are arbitrary:

The inputs… are, ultimately, pulled out of one’s butt. This means we should never take any single output too seriously. But decomposing big-butt numbers into smaller-butt numbers is essentially the second commandment of forecasting.

In other words: “Of course we’re just guessing!” Granted, he thinks we’re guessing in a way that follows the best methodological advice available, but it’s still guesswork.
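
For concreteness, here’s a minimal sketch of what that kind of principled guessing can look like: a headline cost-effectiveness figure decomposed into smaller estimates, each represented as a distribution rather than a point value, and propagated by simple Monte Carlo. Every distribution below is invented for illustration; nothing here is drawn from Arepo’s calculators or from any real cost-effectiveness analysis.

```python
import random

# A toy BOTEC decomposed into smaller guesses and propagated by Monte Carlo.
# Every distribution is an invented placeholder, not a real estimate.
random.seed(0)

def simulate_once():
    prob_success = random.uniform(0.01, 0.10)        # chance the intervention works
    animals_affected = random.lognormvariate(13, 1)  # animals affected if it works
    welfare_gain = random.uniform(0.1, 0.5)          # welfare improvement per animal
    cost = random.uniform(1e5, 5e5)                  # total cost in dollars
    return prob_success * animals_affected * welfare_gain / cost

samples = sorted(simulate_once() for _ in range(100_000))

print(samples[len(samples) // 2])         # median welfare units per dollar
print(samples[int(0.05 * len(samples))])  # 5th percentile
print(samples[int(0.95 * len(samples))])  # 95th percentile
```

Decomposing the estimate doesn’t make the inputs any less guessed, but it does make each guess visible and separately challengeable, which is the methodological point.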

As a community, EA has some objectionable epistemic features

Yet again, this claim needs little defense. We might worry that many people in EA are engaged in motivated reasoning or have social incentives not to share evidence, perhaps in defense of the idiosyncratic views of key funders. We might worry that EAs have some “creedal beliefs” that serve to signal insider status; so, insofar as that status is desirable, these beliefs may not be driven by a truth-tracking process. EA may also be an echo chamber, where insiders are trusted much more than outsiders when it comes to arguments and evidence. We might worry that EAs are at risk of being seduced by clarity, where assigning numbers to things can make us feel as though we understand more about a situation or phenomenon than we really do. And, of course, some argue that EA is homogeneous, inegalitarian, closed, and low in social/emotional intelligence.

We’re fellow travelers; we aren’t trying to demonize the EA community. Moreover, it’s hardly the case that other epistemic communities are vastly superior. Still, the point is that insofar as the EA community plays an important role in explaining why we have certain beliefs, these kinds of features should lower our confidence in those beliefs—not necessarily by a lot, but by some.

There are better and worse ways of dealing with uncertainty

The extent of our uncertainty is a reason for people to think explicitly and rigorously about the assumptions behind their models. It’s also a reason to build models more like the Portfolio Builder and Moral Parliament Tools and less like traditional BOTECs. This is because:

  • Our models allow you to change parameters systematically to see how those changes affect allocations, permitting sensitivity analyses (see the sketch after this list).
  • BOTECs don’t deliver optimizations.
  • BOTECs don’t systematically incorporate alternative decision theories or moral views.
  • Building a general tool requires you to formulate general assumptions about the functional relationships between different parameters. If you don’t build general tools, then it’s easier to make ad hoc assumptions (or ad hoc adjustments to your assumptions).
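
To gesture at what changing parameters systematically can look like, here’s a deliberately tiny stand-in for a portfolio optimizer, with made-up numbers. Each cause’s value is modeled with a simple diminishing-returns curve, and sweeping one assumption (the returns to x-risk work) shows how the optimal allocation shifts; neither the functional form nor the scales is taken from the Portfolio Builder itself.

```python
import math
from itertools import product

BUDGET = 100  # arbitrary units to allocate across three cause areas

def portfolio_value(allocation, scales):
    # scale * log(1 + dollars) is a simple stand-in for diminishing returns
    return sum(s * math.log(1 + x) for s, x in zip(scales, allocation))

def best_allocation(scales, step=5):
    candidates = [
        (a, b, BUDGET - a - b)
        for a, b in product(range(0, BUDGET + 1, step), repeat=2)
        if a + b <= BUDGET
    ]
    return max(candidates, key=lambda alloc: portfolio_value(alloc, scales))

# Sensitivity analysis: sweep the assumed returns to x-risk work and watch how
# the optimal split across (x-risk, animal welfare, GHD) changes. The scales
# are invented for illustration.
for xrisk_scale in [1, 2, 5, 10]:
    scales = (xrisk_scale, 3, 2)
    print(xrisk_scale, best_allocation(scales))
```

A BOTEC gives you one number for one set of guesses; a tool structured like this makes it cheap to rerun the whole optimization under many sets of guesses.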

We hope that our tools are useful first steps toward better models and more critical eyes on the assumptions that go into them. We also hope that they prompt us to elicit and structure the uncertainty we face.

In the future, we could improve and supplement these tools in several ways. For instance:

  • We could develop specific iterations of these tools for different organizations or grantmakers facing particular allocation problems—e.g., by adapting them to the cost curves of the projects that are most salient to their situation. We could lead workshops on how to get the most value out of these tools when using them for advising purposes. We could also add custom features, such as new risk distributions for specific projects, new worldviews, or new worldview attributes of interest to specific donors. In general, we could make it easier for people to explore the implications of different assumptions for the projects that matter to them.
  • We could supplement these tools with an interactive model of the value of movement-building.
  • On the more ambitious end of the “improve these tools” spectrum, we could incorporate the timing of spending into the Portfolio Builder, thereby giving it the capacity to inform debates over patient philanthropy.
  • On the ambitious end of the “supplement these tools” spectrum, we could develop a tool for handling correlated risks, which would improve within-cause prioritization—e.g., a tool that would allow you to set priorities within GCR, balancing riskier projects (e.g., AI capabilities research) with safer ones (e.g., biosecurity projects that aim to improve personal protective equipment).

Acknowledgments

The CRAFT Sequence is a project of the Worldview Investigation Team at Rethink Priorities. This post was written by Hayley Clatterbuck, Bob Fischer, and Arvo Muñoz Morán. Thanks to David Moss and Derek Shiller for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.

Comments

I enjoyed this post and this series overall. However, I would have liked more elaboration in the section about EA’s objectionable epistemic features. Only one of the links in this section refers to EA specifically; the others warn about risks from group deliberation more generally.

And the one link that did specifically address the EA community wasn't persuasive. It made many unsupported assertions. And I think it's overconfident about the credibility of the literature on collective intelligence, which IMO has significant problems.

Thanks for your question, Nathan. We were making programmatic remarks and there's obviously a lot to be said to defend those claims in any detail. Moreover, we don't mean to endorse every claim in any of the articles we linked. However, we do think that the worries we mentioned are reasonable ones to have; lots of EAs can probably think of their own examples of people engaging in motivated reasoning or being wary about what evidence they share for social reasons. So, we hope that's enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.

I am not seeing the issues posed by uncertainty implemented fully in your tools. I'd like to see an in-depth treatment (and incorporation into your tools) of the position stated by Andreas Mogensen in his paper 'Maximal Cluelessness', Global Priorities Institute Working Paper No. 2/2019:

"We lack a compelling decision theory that is consistent with a long-termist perspective and does not downplay the depth of our uncertainty while supporting orthodox effective altruist conclusions about cause prioritisation."

In my view, if one accepts 100% the implications of maximal cluelessness (which is ever more strongly supported by dynamical systems and chaos theory, the more longtermist the perspective), then the logical conclusion from that position is to fund projects randomly, with random amounts.

The RP team may wish to consider prioritising the study of complexity and dynamical systems etc. as part of their continuing professional development (CPD). I recommend the courses offered by the Santa Fe Institute. You can register for most courses at any time, but the agent-based modelling course requires registration and starts at the end of August: https://www.complexityexplorer.org/courses/183-introduction-to-agent-based-modeling

Stephen Hawking famously once said that the 21st century would be the century of complexity. I wholeheartedly agree. IMHO, in these non-linear times, it should be a part of every scientist's (and philosopher's) basic education.

Thanks, Deborah. Derek Shiller offered an answer to your question here.
