Rob Wiblin: One really important consideration that plays into Open Phil’s decisions about how to allocate its funding — and one that also bears importantly on how the effective altruism community ought to allocate its efforts — is worldview diversification. Yeah, can you explain what that is and how it plays into this debate?
Alexander Berger: Yeah, the central idea of worldview diversification is that the internal logic of a lot of these causes might be really compelling and a little bit totalizing, and you might want to step back and say, “Okay, I’m not ready to go all in on that internal logic.” So one example would be just comparing farm animal welfare to human causes within the remit of global health and wellbeing. One perspective on farm animal welfare would say, “Okay, we’re going to get chickens out of cages. I’m not a speciesist, and I think that a chicken-day suffering in a cage is somehow very similar to a human-day suffering in a cage, and I should care similarly about these things.”
Alexander Berger: I think another perspective would say, “I would trade an infinite number of chicken-days for any human experience. I don’t care at all.” If you just try to put probabilities on those views and multiply them together, you end up with this really chaotic process where you’re likely to be either 100% focused on chickens or 0% focused on chickens. Our view is that that seems misguided. It does seem like animals could suffer. It seems like there’s a lot at stake here morally, and that there are a lot of cost-effective opportunities for us to improve the world this way. But we don’t think that the correct answer is to either go 100% all in, where we only work on farm animal welfare, or to say, “Well, I’m not ready to go all in, so I’m going to go to zero and not do anything on farm animal welfare.”
Alexander Berger: We’re able to work on multiple things, and the effective altruism community is able to work on multiple things. A lot of the idea of worldview diversification is to say that even though the internal logic of some of these causes might be totalizing and demanding, asking so much of you, preserving space to say, “I’m going to make some of that bet, but I’m not ready to make all of that bet,” can be a really important move at the portfolio level, both for people to make in their individual lives and for Open Phil to make as a big institution.
Rob Wiblin: Yeah. It feels so intuitively clear that when you’re to some degree picking these numbers out of a hat, you should never go 100% or 0% based on stuff that’s basically just guesswork. I guess the challenge here seems to have been trying to make that philosophically rigorous, and it does seem like coming up with a truly philosophically grounded justification for that has proved quite hard. But nonetheless, we’ve decided to go with something that’s a bit more cluster thinking, a bit more embracing of common sense, and refusing to do something that obviously seems mad.
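To see why Berger calls the straight expected-value approach "chaotic," here is a minimal sketch in code. All the numbers (the credence, the moral weight, the per-dollar figures) are made-up illustrations of mine, not Open Phil estimates:

```python
# Minimal sketch of why straight expected-value reasoning over uncertain
# moral weights produces corner solutions. Every number here is an
# illustrative assumption, not a real cost-effectiveness estimate.

def chicken_share(p_chickens_matter,
                  moral_weight=0.01,             # chicken-day : human-day, if chickens matter at all
                  chicken_days_per_dollar=1000,  # chicken-days improved per $ on animal welfare
                  human_days_per_dollar=5):      # human-days improved per $ on global health
    """Return the EV-maximizing fraction of a budget spent on chickens."""
    ev_chickens = p_chickens_matter * moral_weight * chicken_days_per_dollar
    ev_humans = human_days_per_dollar  # human welfare weighted at 1.0
    # Per-dollar EV is linear in spending, so the optimum is all-or-nothing.
    return 1.0 if ev_chickens > ev_humans else 0.0

for p in (0.4, 0.5, 0.6):
    print(f"P(chickens matter) = {p}: spend {chicken_share(p):.0%} on chickens")
```

Because the objective is linear, averaging over the two views only moves the threshold; the "optimal" portfolio still snaps from 0% to 100% on chickens the moment a guessed credence crosses it, which is exactly the instability worldview diversification pushes back on.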
This is also how I think about the meat eater problem. I have a lot of uncertainty about the moral weight of animals, and I see funding/working on both animal welfare and global development as a compromise position that is good across all worldviews. (Your credence in the meat eater problem can reduce how much you want to fund global development on the margin, but not eliminate it altogether.)
I want to be clear that I see risk aversion as axiomatic. In my view, there is no "correct" level of risk aversion. Various attitudes to risk involve biting various bullets (the St Petersburg paradox on one side, concluding that lives have diminishing marginal value on the other), but I view risk preferences as premises rather than conclusions that need to be justified.
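To make those two bullets concrete, here is a stylized example (the log utility below is an illustrative choice of mine, not a claim about anyone's actual preferences). With linear utility over lives saved, the St Petersburg gamble, which saves $2^k$ lives with probability $2^{-k}$, has infinite expected value:

$$\mathbb{E}[u] = \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} = \sum_{k=1}^{\infty} 1 = \infty,$$

so a risk-neutral agent should pay any finite price for it. With a concave utility like $u(n) = \log(1+n)$, the sum converges and the paradox dissolves, but now $u(2n) < 2\,u(n)$ for every $n > 0$: the second $n$ lives saved are worth less than the first. Either way you bite a bullet; which one you bite is a premise, not a theorem.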
I don't actually think moral weights are premises. However, I think that in practice our best guesses on moral weights are so uninformative that they don't admit any better strategy than hedging, given my risk attitudes. (That's the view expressed in the quote in my original comment.) This is not a bedrock belief. My views have shifted over time (in 2018 I would have scoffed at the idea of THL and AMF even being in the same welfare range), and they will probably continue to shift.
Yes, I am formalizing my intuitions about cause prioritization. In particular, I am formalizing my main cruxes with animal welfare: risk aversion and moral weights. (These aren't even cruxes with "we should fund AW"; they are cruxes only with "AW dominates GHD". I do think we should reallocate funding from GHD to AW on the margin.)
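Concretely, here is the kind of formalization I have in mind: a minimal sketch assuming a lognormal moral-weight distribution, log utility, and made-up per-dollar figures, all stand-ins for my actual credences and risk attitude rather than real estimates:

```python
# Sketch: with a concave (risk-averse) utility and wide uncertainty over
# the chicken:human moral weight, the optimal budget split is interior.
# The distribution, utility, and per-dollar numbers are all illustrative.
import math
import random

random.seed(0)
# "Uninformative" moral weight: a lognormal spanning many orders of magnitude.
weights = [math.exp(random.gauss(-4.0, 3.0)) for _ in range(10_000)]

CHICKEN_DAYS_PER_DOLLAR = 1000  # chicken welfare units per $ on AW
HUMAN_DAYS_PER_DOLLAR = 5       # human welfare units per $ on GHD

def expected_utility(share_aw):
    """Average log utility of a $1 budget split, over moral-weight draws."""
    total = 0.0
    for w in weights:
        value = (share_aw * w * CHICKEN_DAYS_PER_DOLLAR
                 + (1 - share_aw) * HUMAN_DAYS_PER_DOLLAR)
        total += math.log(value)
    return total / len(weights)

# Grid-search the budget split in 5% steps.
best = max((s / 20 for s in range(21)), key=expected_utility)
print(f"EU-maximizing share on animal welfare: {best:.2f}")
# The optimum lands strictly between 0 and 1: given risk aversion and an
# uninformative weight distribution, hedging beats going all in either way.
```

The same numbers fed into a risk-neutral (linear) utility would pick a corner; it is the concavity combined with the heavy tails of the weight distribution that makes the hedged portfolio optimal.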
Is my risk aversion just a guise for my preference that GHD should get lots of money? I comfortably admit that my choice to personally work on GHD is a function of my background and skillset. I was a person from a developing country, and a development economist, before I was an EA. But descriptively, risk aversion is a near-universal human preference; it shouldn't be a high bar to believe that I'm actually just a risk-averse person.
At the end of the day, I hold the normie belief that good things are good. Children not dying of malaria is good. Chickens not living in cages is good. Philosophical gotchas and fragile calculations can supplement that belief but not replace it.