Foreword
Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don't have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone: the maths!
The point I raise here is closely related to the Two Envelopes Problem, which has been discussed before. I think some of this discussion can come across as 'too technical', which is unfortunate since I think a qualitative understanding of the issue is critical to making good decisions when under substantial uncertainty. In this post I want to try and demystify it.
This post was written quickly, and has a correspondingly high chance of error, for which I apologise. I am confident in the core point, and something seemed better than nothing.
Two envelopes: the EA version
A commonly-deployed argument in EA circles, hereafter referred to as the "Multiplier Argument", goes roughly as follows:
- Under 'odd' but not obviously crazy assumptions, intervention B is >100x as good as intervention A.
- You may reasonably wonder whether those assumptions are correct.
- But unless you put <1% credence in those assumptions, or think that B is negative in the other worlds, B will still come out ahead.
- Because even if it's worthless 99% of the time, it's producing enough value in the 1% to more than make up for it!
- So unless you are really very (over)confident that those assumptions are false, you should switch dollars/support/career from A to B.
I have seen this argument made with both Animal Welfare and Longtermism as B, usually with Global Health as A. As written, it is flawed. To see why, consider the following pair of interventions:
- A produces 1 unit of value per $, or 1000 units per $, with 50/50 probability.
- B is identical to A: independently, it will be worth 1 or 1000 units per $ with 50/50 probability.
We can see that B's relative value to A is as follows:
- In 25% of worlds, B is 1000x more effective than A
- In 50% of worlds, B and A are equally effective.
- In 25% of worlds, B is 1/1000th as effective as A
In no world is B negative, and we clearly have far less than 99.9% credence in A beating B, so per the Multiplier Argument, B being 1000x better than A in its favoured scenario seems like it should carry the day... but these interventions are identical!
What just happened?
The Multiplier Argument relies on mathematical sleight of hand. It implicitly calculated the expected ratio of impact between B and A, and the expected ratio in the above example is indeed way above 1:
E(B/A) = 25% * 1000 + 50% * 1 + 25% * 1/1000 ≈ 250.5
But the difference in impact, or E(B-A), which is what actually counts, is zero. In 25% of worlds we gain 999 by switching from A to B, in a mirror set of worlds we lose 999, and in the other 50% there is no change.
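The asymmetry is easy to check by brute force. A minimal Python sketch (using the toy numbers above, which are illustrative units rather than real estimates) enumerates the four equally likely worlds and computes the expected ratio in each direction alongside the expected difference:

```python
# Toy example: A and B each independently produce 1 or 1000 units
# of value per $, with 50/50 probability.
outcomes = [1, 1000]

e_b_over_a = 0.0   # E(B/A): what the Multiplier Argument implicitly computes
e_a_over_b = 0.0   # E(A/B): the same argument, run in reverse
e_b_minus_a = 0.0  # E(B-A): what actually matters for the decision

for a in outcomes:
    for b in outcomes:
        p = 0.25  # each (a, b) pair is equally likely
        e_b_over_a += p * (b / a)
        e_a_over_b += p * (a / b)
        e_b_minus_a += p * (b - a)

print(e_b_over_a)   # ~250.5: "switch to B!"
print(e_a_over_b)   # ~250.5: "switch to A!"
print(e_b_minus_a)  # exactly 0: no actual difference
```

Both expected ratios come out around 250.5, so the ratio-based argument recommends switching in both directions at once, while the expected difference is zero.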
TL;DR: Multiplier Arguments are heavily biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch away from whatever intervention you use as your base.
In fact, we could use a Multiplier Argument to construct a seemingly-overwhelming argument for switching from A to B, and then use the same argument to argue for switching back again! Which is essentially the classic Two Envelopes Problem.
Some implications
One implication is that you cannot, in general, ignore the inconvenient sets of assumptions where your suggested intervention B is losing to intervention A. You need to consider A's upside cases directly, and how the value being lost there compares to the value being gained in B's upside cases.
If A has a fixed value under all sets of assumptions, the Multiplier Argument works. One post argues this is true in the case at hand. I don't buy it, for reasons I will get into in the next section, but I do want to acknowledge that this is technically sufficient for Multiplier Arguments to be valid, and I do think some variant of this assumption is close-enough to true for many comparisons, especially intra-worldview comparisons.
But in general, the worlds where A is particularly valuable will correlate with the worlds where it beats B, because that high value is helping it beat B! My toy example did not make any particular claim about A and B being anti-correlated, just independent. Yet it still naturally drops out that A is far more valuable in the A-favourable worlds than in the B-favourable worlds.
Global Health vs. Animal Welfare
Everything up to this point I have high confidence in. This section I consider much more suspect. I had some hope that the week would help me on this issue; maybe the comments will. Otherwise, 'see you next time', I guess.
Many posts this week reference RP's work on moral weights, which came to the surprising-to-most "Equality Result": chicken experiences are roughly as valuable as human experiences. The world is not even close to acting as if this were the case, and so a >100x multiplier in favour of helping chickens strikes me as very credible if this is true.
But as has been discussed, RP made a number of reasonable but questionable empirical and moral assumptions. Of most interest to me personally is the assumption of hedonism.
I am not a utilitarian, let alone a hedonistic utilitarian. But when I try to imagine a hedonistic version of myself, I can see that much of the moral charge that drives my Global Health giving would evaporate. I have little conviction about the balance of pleasure and suffering experienced by the people whose lives I am attempting to save. I have much stronger conviction that they want to live. Once I stop giving any weight to that preference [2], my altruistic interest in saving those lives plummets.
To re-emphasise the above, down-prioritising Animal Welfare on these grounds does not require me to have overwhelming confidence that hedonism is false. For example a toy comparison could[3] look like:
- In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
- In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1.
Despite a 50%-likely 'hedonism is true' scenario where Animal Welfare dominates by 500x, Global Health wins on EV here.
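Working through the toy numbers above (illustrative units only, not real cost-effectiveness estimates), the expected values come out as:

```python
# Toy comparison from the bullets above: 50/50 credence in hedonism,
# with made-up payoffs in each world. Not a real BOTEC.
p_hedonism = 0.5

global_health = {"hedonism_true": 1, "hedonism_false": 1000}
animal_welfare = {"hedonism_true": 500, "hedonism_false": 1}

ev_gh = (p_hedonism * global_health["hedonism_true"]
         + (1 - p_hedonism) * global_health["hedonism_false"])
ev_aw = (p_hedonism * animal_welfare["hedonism_true"]
         + (1 - p_hedonism) * animal_welfare["hedonism_false"])

print(ev_gh)  # 500.5
print(ev_aw)  # 250.5
```

Global Health's upside in the 'hedonism is false' worlds outweighs Animal Welfare's 500x multiplier in the 'hedonism is true' worlds, which is exactly the downside case the Multiplier Argument invites you to discard.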
Conclusion
As far as I know, the fact that Multiplier Arguments fail in general and are particularly liable to fail where multiple moral theories are being considered - as is usually the case when considering Animal Welfare - is fairly well-understood among many longtime EAs. Brian Tomasik raised this issue years ago, Carl Shulman makes a similar point when explaining why he was unmoved by the RP work here, Holden outlines a parallel argument here, and RP themselves note that they considered Two Envelopes "at length".
It is not, in isolation, a 'defeater' of Animal Welfare, as a cursory glance at the prioritisations of the people above would tell you. I would, though, encourage people to think through and draw out their tables under different credible theories, rather than focusing on the upside cases and discarding the downside ones, as the Multiplier Argument pushes you to do.
You may go through that exercise and decide, as some do, that the value of a human life is largely invariant to how you choose to assign moral value. If so, then you can safely go where the Multiplier Argument takes you.
Just be aware that many of us do not feel that way.
- ^
Defined roughly as 'the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person'.
- ^
Except to the extent it's a signal about the pleasure/suffering balance, I suppose. I don't think it provides much information, though; people generally seem to have a strong desire to survive in situations that seem to me to be very suffering-dominated.
- ^
For the avoidance of doubt, to the extent I have attempted to draw this out, my balance of credences and values ends up a lot messier.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn't really justify his view in his comment thread. I've never read Zvi justify that view anywhere either. I've heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Overwhelming Hierarchicalism
Solely by virtue of our shared species, helping humans may be lexicographically preferential to helping animals, or perhaps their preferences should be given an enormous multiplier.
I use the term "overwhelming" because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you'd need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules' argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don't endorse that resolution.)
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There's just no prior for why that would be the case.
Denial of animal consciousness
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they'd have to be really certain that animals aren't conscious to endorse global health here. Even if there's a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they'd still merit a significant fraction of EA funding. (Probably still more than they're currently receiving.)
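That expected-value point can be sketched with placeholder numbers (the 100x multiplier below is a hypothetical illustration, not taken from any real BOTEC of corporate campaigns):

```python
# Hypothetical numbers only: suppose corporate chicken campaigns are
# 100x as cost-effective as a global health baseline *if* chickens are
# conscious, and worth nothing otherwise.
p_conscious = 0.10        # even a sceptical credence
multiplier_if_conscious = 100.0  # assumed, for illustration
baseline = 1.0            # global health cost-effectiveness, normalised

ev_campaigns = p_conscious * multiplier_if_conscious * baseline
print(round(ev_campaigns, 2))  # still ~10x the baseline despite 90% doubt
```

Under these assumptions, even 90% confidence that chickens are not conscious leaves the campaigns well ahead of the baseline in expectation.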
I think it's fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans' are, and act in all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while I'm not a consciousness expert at all, the New York Declaration on Animal Consciousness says that "there is strong scientific support for attributions of conscious experience to other mammals and to birds". Rethink Priorities' and Luke Muehlhauser's work for Open Phil corroborate that. So Yud's view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel Yud's Facebook post needed to meet a very high burden of proof to convince me. Instead, he kept explaining what his model (a higher-order theory of consciousness) predicts without actually justifying the model, asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), and admitted no moral uncertainty about any of it. So I didn't find anything about his Facebook post convincing.
Conclusion
To me, the strongest reason to believe that animals don't count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven't read anything remotely convincing that justifies that view on the merits. That's why I didn't even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
- ^
In 2017, Holden's personal reflections "indicate against the idea that e.g. chickens merit moral concern". In 2018, Holden stated that "there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not 'conscious' in a morally relevant way".