
Bob Fischer

Bio

I'm a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals.

Sequences

The CURVE Sequence
The Moral Weight Project Sequence

Comments

Thanks for the idea, Pablo. I've added summaries to the sequence page.

Hi Ramiro. No, we haven't collected the CURVE posts as an epub. At present, they're available on the Forum and in RP's Research Database. However, I'll mention your interest in this to the powers that be!

I agree with Ariel that OP should probably be spending more on animals (and I really appreciate all the work he's done to push this conversation forward). I don't know whether OP should allocate most neartermist funding to AW as I haven't looked into lots of the relevant issues. Most obviously, while the return curves for at least some human-focused neartermist options are probably pretty flat (just think of GiveDirectly), the curves for various sorts of animal spending may drop precipitously. Ariel may well be right that, even if so, the returns probably don't fall off so much that animal work loses to global health work, but I haven't investigated this myself. The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now. (That being said, I'd love to see more extensive investigation into field building for animals! If EA field building in general is cost-competitive with other causes, then I'd expect animal field building to look pretty good.)

I should also say that OP's commitment to worldview diversification complicates any conclusions about what OP should do from its own perspective. Even if it's true that a straightforward utilitarian analysis would favor spending a lot more on animals, it's pretty clear that some key stakeholders have deep reservations about straightforward utilitarian analyses. And because worldview diversification doesn't include a clear procedure for generating a specific allocation, it's hard to know what people who are committed to worldview diversification should do by their own lights.

Thanks for all this, Hamish. For what it's worth, I don't think we did a great job communicating the results of the Moral Weight Project.

  • As you rightly observe, welfare ranges aren't moral weights without some key philosophical assumptions. Although we did discuss the significance of those assumptions in independent posts, we could have done a much better job explaining how those assumptions should affect the interpretation of our point estimates.
  • Speaking of the point estimates, I regret leading with them: as we said, they're really just placeholders in the face of deep uncertainty. We should have led with our actual conclusions, the basics of which are that the relevant vertebrates are probably within an OOM of humans, and that shrimps and the relevant adult insects are probably within two OOMs of the vertebrates (see the short sketch after this list). My guess is that you and I disagree less than you might think about the range of reasonable moral weights across species, even if the centers of my probability masses are higher than yours.
  • I agree that our methodology is complex and hard to understand. But it would be surprising if there were a simple, easy-to-understand way to estimate the possible differences in the intensities of valenced states across species. Likewise, I agree that "there are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence." But there are also tons of assumptions and biases that go into our intuitive assessments of the relative moral importance of various kinds of nonhuman animals. So, a lot comes down to how much stock you put in your intuitions. As you might guess, I think we have lots of reasons not to trust them once we take on key moral assumptions like utilitarianism. So, I take much of the value of the Moral Weight Project to be in the mere fact that it tries to reach moral weights from first principles.
  • It's time to do some serious surveying to get a better sense of the community's moral weights. I also think there's a bunch of good work to do on the significance of philosophical / moral uncertainty here. If anyone wants to support this work, please let me know!
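
For concreteness, here's a minimal sketch of what those OOM comparisons amount to; the species and point estimates below are illustrative placeholders, not the project's published numbers:

```python
import math

# Hypothetical welfare ranges relative to humans (human = 1.0).
# These values are illustrative stand-ins, not our actual estimates.
estimates = {
    "human": 1.0,
    "chicken": 0.3,            # a stand-in "relevant vertebrate"
    "black_soldier_fly": 0.005 # a stand-in "relevant adult insect"
}

def ooms_apart(a: float, b: float) -> float:
    """Orders of magnitude between two welfare ranges: |log10(a / b)|."""
    return abs(math.log10(a / b))

print(ooms_apart(estimates["chicken"], estimates["human"]))
# ~0.52: within one OOM of humans
print(ooms_apart(estimates["black_soldier_fly"], estimates["chicken"]))
# ~1.78: within two OOMs of the vertebrate
```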

Thanks for your question, Moritz. We distinguish between negative results and unknowns: the former are those where there's evidence of the absence of a trait; the latter are those where there's no evidence. We penalized species where there was evidence of the absence of a trait; we gave zero when there was no evidence. So, not having many negative results does produce higher welfare range estimates (or, if you prefer, it just reduces the gaps between the welfare range estimates).
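
To make that asymmetry concrete, here's a minimal sketch of how such scoring works; the traits, species profiles, and score values are hypothetical stand-ins, not the project's actual scoring scheme:

```python
# Hypothetical scoring rule: evidence a trait is present contributes
# positively, evidence of absence is penalized, and no evidence
# either way (an unknown) contributes zero.
SCORES = {"present": 1.0, "absent": -1.0, "unknown": 0.0}

def welfare_range_proxy(trait_evidence: dict[str, str]) -> float:
    """Aggregate per-trait evidence into a crude welfare-range proxy."""
    return sum(SCORES[evidence] for evidence in trait_evidence.values())

# Same four traits, different evidence profiles:
well_studied = {"nociceptors": "present", "play": "present",
                "grief": "absent", "mirror_test": "absent"}
poorly_studied = {"nociceptors": "present", "play": "unknown",
                  "grief": "unknown", "mirror_test": "unknown"}

print(welfare_range_proxy(well_studied))    # 0.0: negative results penalize
print(welfare_range_proxy(poorly_studied))  # 1.0: unknowns cost nothing
```

The point of the sketch is only the asymmetry: a gap in the literature is scored as zero rather than as a strike against the species, which is why sparse evidence narrows the gaps between species' estimates.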

Thanks so much for the vote of confidence, JWS. While we'd certainly be interested in working more on these assumptions, we haven't yet committed to taking this particular project further. But if funding were to become available for that extension, we would be glad to keep going! 

Hi Teo. Those are important uncertainties, but our sequences don't engage with them. There's only so much we could cover! We'd be glad to do some work in this vein in the future, contingent on funding. Thanks for raising these significant issues.

Hi David. There are two ways of talking about personal identity over time. There's the ordinary way, where we're talking about something like sameness of personality traits, beliefs, preferences, etc. over time. Then, there's the "numerical identity" way, where we're talking about just being the same thing over time (i.e., one and the same object). It sounds to me like either (a) you're running these two things together or (b) you have a view where the relevant kinds of changes in personality traits, beliefs, preferences, etc. result in a different thing existing (one of many possible future Davids). If the former, then I'll just say that I meant only to be talking about the "numerical identity" sense of sameness over time, so we don't get the problem you're describing in the intra-individual case. If the latter, then that's a pretty big philosophical dispute that we're unlikely to resolve in a comment thread!

Thanks for this. You're right that we don't give an overall theory of how to handle either decision-theoretic or moral uncertainty. The team is only a few months old and the problems you're raising are hard. So, for now, our aims are just to explore the implications of non-EVM decision theories for cause prioritization and to improve the available tools for thinking about the EV of x-risk mitigation efforts. Down the line---and with additional funding!---we'll be glad to tackle many additional questions. And, for what it's worth, we do think that the groundwork we're laying now will make it easier to develop overall giving portfolios based on people's best judgments about how to balance the various kinds and degrees of uncertainty.
