Foreword
Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don't have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone: the maths!
The point I raise here is closely related to the Two Envelopes Problem, which has been discussed before. I think some of this discussion can come across as 'too technical', which is unfortunate since I think a qualitative understanding of the issue is critical to making good decisions when under substantial uncertainty. In this post I want to try and demystify it.
This post was written quickly, and has a correspondingly high chance of error, for which I apologise. I am confident in the core point, and something seemed better than nothing.
Two envelopes: the EA version
A commonly-deployed argument in EA circles, hereafter referred to as the "Multiplier Argument", goes roughly as follows:
- Under 'odd' but not obviously crazy assumptions, intervention B is >100x as good as intervention A.
- You may reasonably wonder whether those assumptions are correct.
- But unless you put <1% credence in those assumptions, or think that B is negative in the other worlds, B will still come out ahead.
- Because even if it's worthless 99% of the time, it's producing enough value in the 1% to more than make up for it!
- So unless you are really very (over)confident that those assumptions are false, you should switch dollars/support/career from A to B.
I have seen this argument deployed with both Animal Welfare and Longtermism as B, usually with Global Health as A. As written, this argument is flawed. To see why, consider the following pair of interventions:
- A produces 1 unit of value per $, or 1000 units per $, with 50/50 probability.
- B is identical to A in distribution, but its value is determined independently: it is also worth 1 or 1000 units per $ with 50/50 probability.
We can see that B's relative value to A is as follows:
- In 25% of worlds, B is 1000x more effective than A
- In 50% of worlds, B and A are equally effective.
- In 25% of worlds, B is 1/1000th as effective as A
In no world is B negative, and clearly we have far less than 99.9% credence in A beating B, so B being 1000x better than A in its favoured scenario seems like it should carry the day per the Multiplier Argument...but these interventions are identical!
What just happened?
The Multiplier Argument relies on a mathematical sleight of hand. It implicitly calculates the expected ratio of impact between B and A, and in the above example that ratio is indeed way above 1:
E(B/A) = 25% * 1000 + 50% * 1 + 25% * 1/1000 ≈ 250.5
But the difference in impact, or E(B-A), which is what actually counts, is zero. In 25% of worlds we gain 999 by switching from A to B, in a mirror set of worlds we lose 999, and in the other 50% there is no change.
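If it helps to see this mechanically, here is a minimal sketch in Python, using only the per-$ values from the toy example above, that enumerates the four equally likely worlds and computes both quantities:

```python
# Four equally likely worlds from the toy example: A and B are each
# independently worth 1 or 1000 units of value per $.
worlds = [
    (1, 1),        # A low,  B low
    (1, 1000),     # A low,  B high
    (1000, 1),     # A high, B low
    (1000, 1000),  # A high, B high
]

expected_ratio = sum(b / a for a, b in worlds) / len(worlds)
expected_difference = sum(b - a for a, b in worlds) / len(worlds)

print(expected_ratio)       # ~250.5 -- the 'multiplier' view looks overwhelming
print(expected_difference)  # 0.0    -- but switching from A to B gains nothing
```

The same four numbers give a huge expected ratio and an expected difference of exactly zero, which is the whole trick.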
TL;DR: Multiplier Arguments are incredibly biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch away from whatever intervention you use as your base.
In fact, we could use a Multiplier Argument to construct a seemingly-overwhelming argument for switching from A to B, and then use the same argument to argue for switching back again! Which is essentially the classic Two Envelopes Problem.
Some implications
One implication is that you cannot, in general, ignore the inconvenient sets of assumptions where your suggested intervention B is losing to intervention A. You need to consider A's upside cases directly, and how the value being lost there compares to the value being gained in B's upside cases.
If A has a fixed value under all sets of assumptions, the Multiplier Argument works. One post argues this is true in the case at hand. I don't buy it, for reasons I will get into in the next section, but I do want to acknowledge that this is technically sufficient for Multiplier Arguments to be valid, and I do think some variant of this assumption is close-enough to true for many comparisons, especially intra-worldview comparisons.
But in general, the worlds where A is particularly valuable will correlate with the worlds where it beats B, because that high value is helping it beat B! My toy example did not make any particular claim about A and B being anti-correlated, just independent. Yet it still naturally drops out that A is far more valuable in the A-favourable worlds than in the B-favourable worlds.
Global Health vs. Animal Welfare
Everything up to this point I have high confidence in. This section I consider much more suspect. I had some hope that the week would help me on this issue. Maybe the comments will; otherwise, 'see you next time' I guess?
Many posts this week reference RP's work on moral weights, which came to the surprising-to-most "Equality Result": chicken experiences are roughly as valuable as human experiences. The world is not even close to acting as if this were the case, and so a >100x multiplier in favour of helping chickens strikes me as very credible if this is true.
But as has been discussed, RP made a number of reasonable but questionable empirical and moral assumptions. Of most interest to me personally is the assumption of hedonism.
I am not a utilitarian, let alone a hedonistic utilitarian. But when I try to imagine a hedonistic version of myself, I can see that much of the moral charge that drives my Global Health giving would evaporate. I have little conviction about the balance of pleasure and suffering experienced by the people whose lives I am attempting to save. I have much stronger conviction that they want to live. Once I stop giving any weight to that preference[2], my altruistic interest in saving those lives plummets.
To re-emphasise the above, down-prioritising Animal Welfare on these grounds does not require me to have overwhelming confidence that hedonism is false. For example a toy comparison could[3] look like:
- In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
- In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1.
Despite a 50%-likely 'hedonism is true' scenario where Animal Welfare dominates by 500x, Global Health wins on EV here.
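Spelling out the expected values per $ implied by that toy table: E(Global Health) = 50% * 1 + 50% * 1000 = 500.5, while E(Animal Welfare) = 50% * 500 + 50% * 1 = 250.5.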
Conclusion
As far as I know, the fact that Multiplier Arguments fail in general and are particularly liable to fail where multiple moral theories are being considered - as is usually the case when considering Animal Welfare - is fairly well-understood among many longtime EAs. Brian Tomasik raised this issue years ago, Carl Shulman makes a similar point when explaining why he was unmoved by the RP work here, Holden outlines a parallel argument here, and RP themselves note that they considered Two Envelopes "at length".
It is not, in isolation, a 'defeater' of animal welfare, as a cursory glance at how the people named above prioritise would tell you. I would, though, encourage people to think through and draw out their tables under different credible theories, rather than focusing on the upside cases and discarding the downside ones, as the Multiplier Argument pushes you to do.
You may go through that exercise and decide, as some do, that the value of a human life is largely invariant to how you choose to assign moral value. If so, then you can safely go where the Multiplier Argument takes you.
Just be aware that many of us do not feel that way.
[1] Defined roughly as 'the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person'.
[2] Except to the extent it's a signal about the pleasure/suffering balance, I suppose. I don't think it provides much information though; people generally seem to have a strong desire to survive in situations that seem to me to be very suffering-dominated.
[3] For the avoidance of doubt, to the extent I have attempted to draw this out, my balance of credences and values ends up a lot messier.
I can try, but honestly I don't know where to start; I'm well-aware that I'm out of my depth philosophically, and this section just doesn't chime with my own experience at all. I sense a lot of inferential distance here.
Trying anyway: That section felt closer to an empirical claim that 'we' already do things a certain way than to an argument for why we should do things that way, and I don't seem to be part of the 'we'. I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to 'why I don't buy this' than 'why I think you're wrong'.
***
I am most sympathetic to this if I read it as a cynical take on human morality, i.e. I suspect this is more true than I sometimes care to admit. I don't think you're aiming for that? Regardless, it's not how I try to do ethics. I at least try to change my mind when relevant facts change.
An example issue is that memory is fallible; you say that I have directly experienced human suffering, but for anything I am not experiencing right this second all I can access is the memory of it. I have seen firsthand that memory often edits experiences after the fact to make them seem substantially more or less severe than they did at the time. So if strong evidence showed me that something I remember as very painful was actually painless, the 'strength of my reason' to reduce that suffering would fall[1].
You use some other examples to illustrate how the empirical nature does not matter, such as discovering that serotonin is not what we think it is. I agree with that specific case. I think the difference is that your example of an empirical discovery doesn't really say anything about the experience, while mine above does?
Knowing what is going on during an experience seems like a major contributor to how I relate to that experience, e.g. I care about how long it's going to last. Looking outward for whether others feel similarly, It Gets Better and the phrase 'light at the end of the tunnel' come to mind.
You could try to fold this in and say that the pain of the dental drill is itself less bad because I know it'll only last a few seconds, or conversely that (incorrectly) believing a short-lived pain will last a long time makes the pain itself greater, but that type of modification seems very artificial to me and is not how I typically understand the words 'pain' and 'suffering'.
...But to use this as another example of how I might respond to new evidence: if you showed me that the brain does in fact respond less strongly to a painful stimulus when the person has been told it'll be short, that could make me much more comfortable describing it as less painful in the ordinary sense.
There are other knowledge-based factors that feel like they directly alter my 'scoring' of pain's importance as well, e.g. a sense of whether it's for worthwhile reasons.
I'm with your footnote here; it seems entirely conceivable to me that my own suffering does not matter, so trying to build ratios with it as the base has the same infinity issue, as you say:
Per my OP, I roughly think you have to work with differences not ratios.
***
Overall, I was left with a sense from the quote below and the overall piece that you perceive your direct experience as a way to ground your values, a clear beacon telling you what matters, and then we just need to pick up a torch, shine a light into other areas, and see how much more of what matters is out there. For me, everything is much more 'fog of war', very much including my own experiences, values and value. So - and this may be unfair - I feel like you're asking me 'why isn't this clear to you?' and I'm like 'I don't know what to tell you, it just doesn't look that simple from where I'm sitting'.
Though perhaps not quite to zero; it seems I would need to think about how much of the total suffering is the memory of suffering.