Rob Wiblin: One really important consideration that plays into Open Phil’s decisions about how to allocate its funding — and also it really bears importantly on how the effective altruism community ought to allocate its efforts — is worldview diversification. Yeah, can you explain what that is and how that plays into this debate?
Alexander Berger: Yeah, the central idea of worldview diversification is that the internal logic of a lot of these causes might be really compelling and a little bit totalizing, and you might want to step back and say, “Okay, I’m not ready to go all in on that internal logic.” So one example would be just comparing farm animal welfare to human causes within the remit of global health and wellbeing. One perspective on farm animal welfare would say, “Okay, we’re going to get chickens out of cages. I’m not a speciesist, and I think that a chicken suffering in a cage for a day is somehow very similar to a human suffering in a cage for a day, and I should care similarly about these things.”
Alexander Berger: I think another perspective would say, “I would trade an infinite number of chicken-days for any human experience. I don’t care at all.” If you just try to put probabilities on those views and multiply them through, you end up with this really chaotic process where you’re likely to either be 100% focused on chickens or 0% focused on chickens. Our view is that that seems misguided. It does seem like animals could suffer. It seems like there’s a lot at stake here morally, and that we have a lot of cost-effective opportunities to improve the world this way. But we don’t think that the correct answer is to either go 100% all in, where we only work on farm animal welfare, or to say, “Well, I’m not ready to go all in, so I’m going to go to zero and not do anything on farm animal welfare.”
Alexander Berger: We’re able to work on multiple things, and the effective altruism community is able to work on multiple things. A lot of the idea of worldview diversification is that, even though the internal logic of some of these causes might be totalizing, might be demanding, might ask so much of you, preserving the space to say, “I’m going to make some of that bet, but I’m not ready to make all of that bet,” can be a really important move at the portfolio level, both for people to make in their individual lives and for Open Phil to make as a big institution.
Rob Wiblin: Yeah. It feels so intuitively clear that when you’re to some degree picking these numbers out of a hat, you should never go 100% or 0% based on stuff that’s basically just guesswork. I guess the challenge seems to have been trying to make that philosophically rigorous, and coming up with a truly philosophically grounded justification for it has proved quite hard. But nonetheless, we’ve decided to go with something that’s a bit more like cluster thinking: a bit more about embracing common sense and refusing to do something that seems obviously mad.
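To make the 100%-or-0% dynamic Berger describes concrete, here is a toy sketch (my own illustration, with made-up numbers, not anything from the podcast) of how naive expected-value maximisation over moral uncertainty always picks a corner:

```python
# Toy model (all numbers made up): naive expected-value maximisation
# under moral uncertainty allocates the whole budget to one cause.

p_chickens_matter = 0.3   # credence that a chicken counts ~0.3 of a human
weight_if_true = 0.3
expected_weight = p_chickens_matter * weight_if_true  # 0.09

human_units_per_dollar = 1.0      # welfare bought by $1 of human-focused giving
chicken_units_per_dollar = 100.0  # chickens are far cheaper to help
chicken_value = chicken_units_per_dollar * expected_weight  # 9.0

# A pure expected-value maximiser puts the entire budget on the winner:
winner = "chickens" if chicken_value > human_units_per_dollar else "humans"
print(f"Allocate 100% to {winner}")
# Drop the credence from 0.3 to 0.003 and the whole budget flips to humans --
# the "chaotic" 100%/0% behaviour worldview diversification tries to avoid.
```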
This is also how I think about the meat eater problem. I have a lot of uncertainty about the moral weight of animals, and I see funding or working on both animal welfare and global development as a compromise position that is good across all worldviews. (Greater credence in the meat eater problem can reduce how much you want to fund global development on the margin, but not eliminate it altogether.)
Thanks for the follow-up!
Just to clarify, I only care about marginal cost-effectiveness. However, I feel like some people intrinsically care about spending/neglectedness independently of how it relates to marginal cost-effectiveness.
Note this also applies to animal welfare.
Thanks for explaining your views! Your moral weight is 1 % (= 10^-2) of mine[1], and I multiplied Saulius' mainline estimate of 41 chicken-years per $ by 0.2[2], which, under my moral weights, makes corporate campaigns for chicken welfare 1.51*10^3 times as cost-effective as GiveWell's top charities at the margin. So, ignoring other disagreements, your marginal cost-effectiveness would have to be 1.32 % (= 0.2/(1.51*10^3*0.01)) of the non-marginal cost-effectiveness linked to Saulius' mainline estimate for corporate campaigns for chicken welfare to be as cost-effective as GiveWell's top charities. Does this sound right? Open Phil did not share how they arrived at their adjustment factor of 1/5, and I agree it would be great to have more rigorous estimates of the cost-effectiveness of animal welfare interventions, so I would say your intuition here is reasonable, although I guess you are downgrading Saulius' estimate too much.
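Here is the calculation behind the 1.32 % figure, spelled out as a short script; the only inputs are the numbers already in this comment:

```python
# Sanity check of the arithmetic above, using only figures from this comment.
campaigns_vs_givewell = 1.51e3  # marginal cost-effectiveness of corporate campaigns,
                                # in multiples of GiveWell's top charities, my moral weights
your_weight_vs_mine = 0.01      # your chicken moral weight relative to mine (footnote 1)
marginal_vs_average = 0.2       # Open Phil's marginal adjustment factor (footnote 2)

# Under your moral weights, my marginal estimate shrinks to 15.1 times GiveWell,
# so for the campaigns to merely match GiveWell, the marginal-to-average ratio
# would have to be 15.1 times smaller than 0.2:
breakeven_ratio = marginal_vs_average / (campaigns_vs_givewell * your_weight_vs_mine)
print(f"{breakeven_ratio:.2%}")  # 1.32%
```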
On the other hand, I find it difficult to understand how one can get to such a low moral weight. How many times as large would your moral weight become conditional on (risk-neutral) expected total hedonistic utilitarianism?
Thanks for clarifying. Given i) 1 unit of welfare with certainty, and ii) 10x units of welfare with 10 % chance (i.e. x units of welfare in expectation), what is the x that would make you value i) as much as ii)? For me, the answer would be 1. Why not a higher/lower x? Are your answers to these questions compatible with your intuition that corporate campaigns for chicken welfare are 0.5 to 1.5 times as cost-effective as GiveWell's top charities? If it is hard to answer these questions, is there a risk that your risk aversion is not supported by seemingly self-evident assumptions[3], and is instead a way of formalising/rationalising your pre-formed intuitions about cause prioritisation?
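To make the question concrete, here is a sketch of how the answer x pins down a degree of risk aversion under expected utility. The isoelastic (CRRA) functional form is purely my illustrative assumption, not something you have endorsed:

```python
# Illustrative sketch (assumed CRRA utility over welfare, not your stated view):
# u(w) = w**(1 - eta) / (1 - eta) for 0 <= eta < 1, so u(0) = 0 and the
# 90%-chance-of-nothing branch drops out of the expected utility.
# Indifference between i) 1 unit for sure and ii) 10x units with 10% chance:
#   u(1) = 0.1 * u(10 * x)  =>  1 = 0.1 * (10 * x)**(1 - eta)

def implied_x(eta: float) -> float:
    """x that makes a 10% chance of 10x units exactly as good as 1 unit for sure."""
    # Solving 1 = 0.1 * (10 * x)**(1 - eta) for x:
    return 10 ** (1 / (1 - eta)) / 10

for eta in (0.0, 0.3, 0.5):
    print(f"eta = {eta}: x = {implied_x(eta):.2f}")
# eta = 0.0 (risk neutral): x = 1.00 -- my answer above.
# eta = 0.3:                x = 2.68
# eta = 0.5:                x = 10.00 -- a strongly risk-averse answer.
```

The point is just that a concrete x, via some such functional form, makes the degree of risk aversion explicit enough to check against the corporate-campaigns intuition.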
[1] I strongly endorse expected total hedonistic utilitarianism (here is your sneaky philosophical equivalence :), and I am happy to rely on Rethink Priorities' median welfare ranges.
[2] Since Open Phil thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis [which is linked just above]”.
[3] I think it makes perfect sense to be risk averse with respect to money, but risk neutral with respect to welfare, which is what is being discussed here.