RogerAckroyd's Shortform

by RogerAckroyd, 9th Jan 2021

On the 80,000 Hours website they have a profile on factory farming, where they estimate that ending factory farming would increase the expected value of the future of humanity by between 0.01% and 0.1%. I realize one cannot hope for precision in these things, but I am still curious whether anyone knows more about the reasoning process that went into that estimate.

Note: I don't work for 80,000 Hours, and I don't know how closely the people who wrote that article/produced their "scale" table would agree with me.

For that particular number, I don't think there was an especially rigorous reasoning process. As they say when explaining the table for their scale metric, "the tradeoffs across the columns are extremely uncertain".

That is, I don't think that there's an obvious chain of logic from "factory farming ends" to "the future is 0.01% better". Figuring out what constitutes "the value of the future" is too big a problem to solve right now.

However, there are some columns in the table that do seem easier to compare to animal welfare. For example, you can see that a scale of "10" (what factory farming gets) means that roughly 10 million QALYs are saved each year.

So a scale of "10" means (roughly) that something happens each year which is as good as 10 million people living for another year in perfect health, instead of dying.

Does it seem reasonable that the annual impact of factory farming is as bad as 10 million people losing a healthy year of their lives? 

If you think that does sound reasonable, then a scale score of "10" for ending factory farming should be fine. But you might also think that one of those two things -- the QALYs, or factory farming -- is much more important than the other. That might lead you to assign a different scale score to one of them when you try to prioritize between causes.
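
Continuing the sketch above (same assumptions, same caveat that the numbers are purely illustrative), here is how a few hypothetical personal estimates of factory farming's annual harm would translate into scale scores:

```python
import math

# Hypothetical estimates of factory farming's annual harm, in QALY-equivalents.
# These are illustrative numbers only, not real data. Under the assumed
# logarithmic scoring, each factor of 10 relative to the 10-million-QALY
# anchor shifts the scale score by one point.
for estimate in (1_000_000, 10_000_000, 100_000_000):
    score = 10 + math.log10(estimate / 10_000_000)
    print(f"{estimate:>11,} QALY-equivalents/year -> scale score {score:.0f}")
```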

Of course, these comparisons are far from perfectly empirical. But at some point, you have to say "okay, outcome A seems about as good/bad as outcome B" in order to set priorities.