
TL;DR: rough estimates are better than no estimates. Refusals to quantify often hide that one is implicitly (and unjustifiably) counting some interests for zero.

Introduction

Inspired by Bentham’s Bulldog, I recently donated $1000 to the Shrimp Welfare Project. I don’t know that it’s literally “the best charity”—longtermist interventions presumably have greater expected value—but I find it psychologically comforting to “diversify” my giving,[1] and the prospect of averting ~500 hours[2] of severe suffering per dollar seems hard to pass up. If you have some funds available that aren’t otherwise going to an even more promising cause, consider getting in on the #shrimpact!

The train to #shrimpact

The fact that most people would unreflectively dismiss shrimp welfare as a charitable cause shows why effective altruism is no “truism”. Relatively few people are genuinely open to promoting the good (and reducing suffering) in a truly cause-neutral, impartial way. For those who are, we should expect the lowest-hanging fruit to be causes that sound unappealing. As a result, if someone gives exclusively to conventionally appealing causes, that’s strong evidence that they aren’t seriously trying to do the most impartial good. If you’re serious about doing more good rather than less, then you should be open to at least some weird-sounding stuff.[3]

And you should, of course, seriously try to do more good rather than less, at least some of the time, with some of your resources. (There are tricky questions about just how much of your time and resources should go towards optimizing impartial beneficence. But the correct answer sure ain’t zero.)[4]

A bad objection

In the remainder of this post, I want to discuss a terrible objection that people commonly appeal to when trying to rationalize their knee-jerk opposition to “weird” EA causes (like shrimp welfare or longtermism).

“Different things can’t be precisely quantified or compared”

This has got to be one of the most common objections to EA-style cost-effectiveness analyses, and it is so deeply confused. Oddly, I can’t recall seeing anyone else explain why it’s so confused. (Quick answer: rough estimates are better than no estimates.)

The problem, in a nutshell, is that quantification enables large-scale comparison, and such comparison is needed in order to make high-stakes tradeoffs in an informed way. Tradeoffs, in turn, are essential to practical rationality. We can’t avoid them: different values are in conflict, and can’t all be jointly satisfied. We have to choose, or “trade off”, between them. The only question is how. We can do so openly and honestly, by seriously trying to assess their comparative value or importance. Or we can do so dishonestly, with our heads in the sand, pretending that one of the values doesn’t have to be counted at all.

Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly “can’t be precisely quantified,” what they’re effectively doing is refusing to consider that thing at all. Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what’s emotionally appealing at a gut level. And many things that are difficult to precisely quantify (like the suffering of non-cute animals) lack emotional appeal. They’ll be completely neglected in a vibes-based analysis. That is, in effect, to give them precisely zero weight.

To address the objection, consider the datum:

(Less Wrong): It’s better to be slightly wrong than to be very wrong about moral weights and priorities.

Something I find frustrating is that many people seem to instead endorse:

(Ostrich Thinking): It’s better to ignore a question than to answer it imperfectly.

Ostrich Thinking is deeply unwise, because your unreflective assumptions could easily be even further from the truth than the imperfect answers you would reach by giving serious thought to a problem. Compared to ignoring numbers, even the roughest quantitative model or “back of the envelope” calculation can help us to be vastly less wrong.
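As a concrete illustration, the post’s own figures (roughly 500 hours of suffering averted per dollar from the introduction, or perhaps ~5,000 per footnote 2) can be turned into a rough back-of-the-envelope model in a few lines. The per-dollar figures are rough estimates, but even this crude sketch supports comparisons that pure vibes cannot:

```python
# Rough back-of-the-envelope model of the donation discussed in the post.
# The per-dollar figures are the post's own rough estimates, not precise data.
donation_dollars = 1_000
hours_per_dollar_low = 500      # estimate cited in the introduction
hours_per_dollar_high = 5_000   # footnote 2: if stunners last ~10 years

low_estimate = donation_dollars * hours_per_dollar_low
high_estimate = donation_dollars * hours_per_dollar_high

print(f"~{low_estimate:,} to ~{high_estimate:,} hours of severe suffering averted")
# ~500,000 to ~5,000,000 hours of severe suffering averted
```

An order-of-magnitude spread like this is still vastly more informative than no estimate at all: it tells you the intervention is not in the same ballpark as one averting, say, dozens of hours per thousand dollars.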

“Your analysis requires a lot of assumptions…”

An especially popular form of Ostrich Thinking combines:

  • Rational satisficing: the crazy view that there’s no reason to do more good once you’ve identified a “good enough” option; and
  • Certainty bias: preferring the near-certainty of some positive impact over an uncertain prospect with much greater expected value.

Combining these two bad views yields the result that you should definitely donate to a “safe” option like GiveWell-recommended charities, rather than longtermist or animal welfare causes that involve a lot more uncertainty.[5] This view might be expressed by saying something like, “Prioritizing X is awfully speculative / depends on a lot of questionable assumptions…” But it’s important to understand that this actually gets things backwards.

Firstly, note that we should not simply be aiming to do a little good with certainty. We should always prefer to do more good than less, all else equal; and we should tolerate some uncertainty for the sake of greater expected benefits. (Both rational satisficing and certainty bias are deeply unreasonable.) So, the question that properly guides our philanthropic deliberations is not “How can I be sure to do some good?” but rather, “How can I (permissibly) do the most (expected) good?”

You cannot offer an informed answer to this question without forming judgments on “speculative” matters (from AI safety to insect sentience). This renders these topics puzzles for everyone. In order to be confident that global health charities are a better bet than AI safety or shrimp welfare, you need to assign negligible credence to the assumptions and models on which these other causes turn out to be orders of magnitude more cost-effective. That’s a big assumption! It’s actually much more epistemically modest to say, “I split my credence across a wide range of possibilities, some of which involve so much potential upside that even moderate credence in them suffices to make speculative cause X win out.”

Conventional Dogmatism

It’s worth reiterating this point, because even smart people often seem to miss it. It’s very conventional to think, “Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff.” This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding. As I previously explained:

It’s essentially fallacious to think that “plausibly incorrect modeling assumptions” undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or reflect “plausibly incorrect” conditions or assumptions). If there’s even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.

Tarsney’s Epistemic Challenge to Longtermism is so much better at this [than Thorstad]. As he aptly notes, as long as you’re on board with orthodox decision theory (and so don’t disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after time of perils, etc.), reasonable epistemic worries ultimately aren’t capable of undermining the expected value argument for longtermism.
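The quoted point about discounting can be made concrete with a toy expected-value calculation. All of the numbers below are hypothetical placeholders, not actual cost-effectiveness figures:

```python
# Toy expected-value comparison (all numbers hypothetical).
def expected_value(probability, payoff):
    """Expected value of a prospect: credence in success times payoff if it succeeds."""
    return probability * payoff

# A "safe" option: near-certain, modest payoff.
safe = expected_value(0.99, 1_000)  # 990.0

# A "speculative" option: only 1% credence in its key assumptions,
# but a payoff four orders of magnitude larger if they hold.
speculative = expected_value(0.01, 10_000_000)  # 100,000.0

# Discounting by the 1% credence "lops off two zeros" -- yet the speculative
# option still comes out roughly 100x ahead in expectation.
print(speculative > safe)  # True
```

The point is not that these particular numbers are right, but that discounting for uncertainty is already built into expected-value reasoning; only near-zero credences make the speculative option lose.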

The case for shrimp welfare isn’t quite so astronomical, but the numbers are nonetheless large enough to accommodate plenty of uncertainty before the expected value dips below that of more typical charities. So it would seem similarly epistemically reckless to dismiss it as a cause area (compared to typical charities) without careful analysis.[6]

Conclusion

Strive for good judgment with numbers. Be wary of misleading appeals to complexity. Like the intellectual charlatans who use big words to hide their lack of ideas, moral charlatans send false signals of moral depth with their dismissive talk of “oversimplified quantitative models”—as though they had a more sophisticated alternative in their back pocket. But they don’t. Their alternative is unreflective vibes and Ostrich Thinking. They imagine that ignoring key factors—implicitly counting them for zero—is somehow more “sophisticated” or epistemically virtuous than a fallible estimate. Don’t fall for it. Better yet, share this corrective the next time you see such Ostrich Thinking in the wild: refusing to quantify is refusing to think.

While you’re at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible. Certainty bias can feel like you’re “playing it safe”—you’re minimizing the risk of failing to make any difference—but is that really the most important kind of risk? Be aware of other respects in which it can be quite wildly reckless to pass up better opportunities. For example, it can be morally reckless to ignore risks of extremely bad outcomes (e.g. extinction or long-term dystopias). And, as I’ve explained in this post, it can be epistemically “reckless”—really going out on a limb!—to assign extreme (near-zero) credence to plausible possibilities involving ultra-high impact. As long as you’re broadly open to expected value reasoning (as you plainly should be), even a fairly small chance of ultra-high impact can be well worth pursuing.

  1. ^

    I think it’s easier to give to high-EV “longshots” if you don’t feel like all your eggs are in one basket, even if the “one basket” approach technically has greater expected value. But YMMV.

  2. ^

    Or maybe it’s more like ~5000 hours, if the stunners are used for 10 years?

  3. ^

    Again, balance it out with a well-rounded charity portfolio if you need to. Whatever helps you to get higher expected impact than you otherwise would.

  4. ^

    If anyone’s aware of an argument to the contrary (that zero is better than even just, say, 1% optimizing impartial beneficence), I’d love to hear it. Many criticisms of EA rely upon the ‘all or nothing’ fallacy, and simply argue that utilitarianism (the most totalizing, extreme form that EA could conceivably take) is unappealing, as if that would somehow entail the wholesale rejection of optimizing impartial beneficence.

  5. ^

    To be clear, I’m a big fan of GiveWell and the charities it recommends! What I’m objecting to here is rather a particular pattern of reasoning that could lead one to mistakenly believe that GiveWell charities are clearly superior to animal welfare and longtermist alternatives. It’s fine to personally prefer GiveWell charities, but any minimally intelligent and reflective person should appreciate that there are difficult open questions surrounding cause prioritization, and good grounds for judging some alternatives to be even more promising. So I think it’s very unreasonable to be dismissive of any of the major EA cause areas.

  6. ^

    It’s not necessarily a problem to have extreme credences—some claims are very implausible, and should be assigned near-zero probability! But you should probably reflect carefully before forming such extreme views, especially when they’re wildly at odds with the views of many experts who have looked more closely into the matter.



Comments (9)

On multiple occasions, I've found a "quantified" analysis to be indistinguishable from a "vibes-based" analysis: you've just assigned those vibes a number, often one basically pulled out of your behind.  (I haven't looked enough into shrimp to know if this is one of those cases). 

I think it is entirely sensible to strongly prefer cause estimates that are backed by extremely strong evidence such as meta-reviews of randomised trials, rather than cause estimates based on vibes that are essentially made up. Part of the problem I have with naive expected value reasoning is that it seemingly does not take this entirely reasonable preference into account.

A vibes-based quantitative analysis has the virtue that it's easier to critique than a vibes-based non-quantitative analysis.

Yeah, I agree that one also shouldn't blindly trust numbers (and discounting for lack of robustness of supporting evidence is one reasonable way to implement that). I take that to be importantly different from - and much more reasonable than - the sort of "in principle" objection to quantification that this post addresses.

Another comment: regarding the value of longtermist interventions, while I understand the numbers can be very high, my main uncertainty is that I'm not even sure a lot of common interventions have a positive impact.

For instance, is working against X-risks good when avoiding an X-risk would allow factory farming to continue? The answer will depend on many questions (will factory farming continue in the future, what is the impact of humanity on wild animals, what will happen regarding artificial sentience, etc.), none of which have a clear answer.

Reducing S-risks seems good, though.

If I understand correctly, you’re arguing that we either need to:

  1. Put precise estimates on the consequences of what we do for net welfare across the cosmos, and maximize EV w.r.t. these estimates, or
  2. Go with our gut … which is just implicitly putting precise estimates on the consequences of what we do for net welfare across the cosmos, and maximizing EV w.r.t. these estimates.

I think this is a false dichotomy,[1] even for those who are very confident in impartial consequentialism and risk-neutrality (as I am!). If (as suggested by titotal’s comment) you worry that precise estimates of net welfare conditional on different actions are themselves vibes-based, you have option 3: Suspend judgment on the consequences of what we do for net welfare across the cosmos, and instead make decisions for reasons other than “my [explicit or implicit] estimate of the effects of my action on net welfare says to do X.” (Coherence theorems don’t rule this out.)

What might those other reasons be? A big one is moral uncertainty: If you truly think impartial consequentialism doesn’t give you compelling reasons either way, because our estimates of net welfare are hopelessly arbitrary, it seems better to follow the verdicts of other moral views you put some weight on. Another alternative is to reflect more on what your reasons for action are exactly, if not "maximize EV w.r.t. vibes-based estimates." You can ask yourself, what does it mean to make the world a better place impartially, under deep uncertainty? If you’ve only looked at altruistic prioritization from the perspective of options 1 or 2, and didn’t realize 3 was on the table, I find it pretty plausible that (as a kind of bedrock meta-normative principle) you ought to clarify the implications of option 3. Maybe you can find non-vibes-based decision procedures for impartial consequentialists. ETA: Ch. 5 of Bradley (2012) is an example of this kind of research, not to say I necessarily endorse his conclusions.

(Just to be clear, I totally agree with your claim that we shouldn’t dismiss shrimp welfare — I don’t think we’re clueless about that, though the tradeoffs with other animal causes might well be difficult.)

  1. ^

    This is also my reply to Michael's comments here and here.

Very interesting and well formulated! It highlights several hidden assumptions that can significantly reduce your ability to have an impact.

Indeed, from what I've seen, the (natural) tendency to give very low moral value to other animals (e.g. less than 1/1000th that of a human) often stems from gut feeling, with justifications added afterwards.

I sort of bounced off this one, Richard. I'm not a professor of moral philosophy, so some of what I say below may seem obviously wrong/stupid/incorrect - but I think that were I a philosophy professor I would be able to shape it into a stronger objection than it might appear at first glance.

Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly “can’t be precisely quantified,” what they’re effectively doing is refusing to consider that thing at all.

I don't think this would pass an ideological Turing Test. I think what people who make this claim are saying is often that previous attempts to quantify the good precisely have ended up having morally bad consequences. Given this history, perhaps our takeaway shouldn't be "they weren't precise enough in their quantification" and should be more "perhaps precise quantification isn't the right way to go about ethics".

Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what’s emotionally appealing at a gut level.

Again, I don't think this is true. Would you say that before the publication of Famine, Affluence, and Morality that all moral philosophy was just "vibes-based analysis"? I think, instead, all of moral reasoning is in some sense 'vibes-based' and the quantification of EA is often trying to present arguments for the EA position.

To state it more clearly, what we care about is moral decision-making, not the quantification of moral decisions. And most moral decisions that have ever been made were made without quantification. What matters is the moral decisions we make, and the reasons we have for those decisions/values, not what quantitative value we place on said decisions/values.

the question that properly guides our philanthropic deliberations is not “How can I be sure to do some good?” but rather, “How can I (permissibly) do the most (expected) good?”

I guess I'm starting to bounce off this because I now view this as a big moral commitment which I think goes beyond simple beneficentrism. Another view, for example, would be contractualism, where what 'doing good' means is substantially different from what you describe here, but perhaps that's a basic metaethical debate.

It’s very conventional to think, “Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff.” This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding.

I think this is confusing two forms of 'extreme'. Like in one sense the default 'animals have little-to-no moral worth' view is extreme for setting the moral value of animals so low as to be near zero (and confidently so at that). But I think the 'extreme' in your first sentence refers to 'extreme from the point of view of society'.

Furthermore, if we argue that quantitative expected-value models are the right way to do moral reasoning (as opposed to sometimes being a tool), then you don't have to accept the "even a 1% chance is enough" move; I could just decline to find a tool that produces such dogmatism at 1% acceptable. You could counter with "your default/status-quo morality is dogmatic", which, sure. But it doesn't convince me to accept strong longtermism any more, and I've already read a fair bit about it (though I accept probably not as much as you).

While you’re at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible.

One man's "conventional dogmatism" could be reframed as "the accurate observation that people with totalising philosophies promising ultra-high impact have a very bad track record of causing harm, and those with similar philosophies ought to be viewed with suspicion".


Sorry if the above was a bit jumbled. It just seemed this post was very unlike your recent Good Judgement with Numbers post, which I clicked with a lot more. This one seems to be you, instead of rejecting the ‘All or Nothing’ Assumption, actually going "all in" on quantitative reasoning. Perhaps it was the tone with which it was written, but it really didn't seem to actually engage with why people have an aversion to over-quantification of moral reasoning.

Thanks for the feedback! It's probably helpful to read this in conjunction with 'Good Judgment with Numbers', because the latter post gives a fuller picture of my view whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.

(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this - very different - 'steelman' position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it!  I'm arguing against the specific position specified in the post, i.e. holding that different kinds of values can't -- literally, can't, like, in principle -- be quantified.)

I think this is confusing two forms of 'extreme'.

I'm actually trying to suggest that my interlocutor has confused these two things. There's what's conventional vs socially extreme, and there's what's epistemically extreme, and they aren't the same thing. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.

This has got to be one of the most common objections to EA-style cost-effectiveness analyses, and it is so deeply confused. Oddly, I can’t recall seeing anyone else explain why it’s so confused.

I suspect you could mathematically prove that, given certain assumptions, a cost-effectiveness analysis is the correct thing to do in theory. My intuition is that if you make some set of decisions, then this forces you to assign numeric cost-effectivenesses to the expected outcomes of those decisions, except you're doing it implicitly instead of explicitly. I think the proof for this would look something like the proof of the VNM utility theorem.
