Nope - fixed. Thanks for pointing that out.
Thanks for sharing this!
I happen to have made a not-very-good model a month or so ago to try to get a sense of how much the possibility of future species that care about x-risks affects x-risk today. It's here, and it has a bunch of issues (like assuming that a new species would take as long to evolve from now as humans took to evolve since the first neuron, assuming that none of Ord's x-risks reduce the possibility of future moral agents evolving, etc.), and it possibly doesn't even get at the important things mentioned in this post.
But based on the relatively bad assumptions in it, it spat out that if we generally expect moral agents who reach Ord's 16% 100-year x-risk to evolve every 500 million years or so (assuming an existential event happens), and that most of the value of the future is beyond the next 0.8 to 1.2B years, then we ought to adjust Ord's figure down to 9.8% to 12%.
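For what it's worth, the core arithmetic can be sketched roughly like this. To be clear, this is my loose reconstruction, not the actual model: the function, its parameter names, and the illustrative 80% successor-failure probability are all my own assumptions.

```python
def adjusted_xrisk(p_xrisk, p_successor_fails, horizon_years, respawn_years):
    """Discount a per-century x-risk estimate by the chance that, even if
    we go extinct, some later species secures the far future anyway."""
    # How many times new moral agents could evolve within the value horizon,
    # assuming each "respawn" takes ~respawn_years (e.g. 500M years)
    n_retries = horizon_years // respawn_years
    # If most value lies beyond the horizon, an existential event today only
    # destroys that value if every successor species also fails
    frac_value_lost = p_successor_fails ** n_retries
    return p_xrisk * frac_value_lost

# Illustrative numbers only: Ord's 16%, a 1B-year horizon, a 500M-year
# respawn time, and a guessed 80% chance any given successor also fails
print(round(adjusted_xrisk(0.16, 0.8, 1_000_000_000, 500_000_000), 4))  # 0.1024
```

Under those guessed inputs the adjusted figure comes out around 10%, inside the range above, but the sketch obviously inherits all the bad assumptions I just listed.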
I don't think either the figure or the approach should be taken at all seriously though, as I spent only a couple of minutes on it and didn't think at all about better ways to do this - just writing this explanation of it has shown me a lot of ways in which it is bad. It just seemed relevant to this post, and I wasn't going to do anything else with it :).
Yeah, it's interesting to see that across the board. My sense is that wild animal welfare work (and farmed animal work) is very much funding constrained. Relevant to this - Open Philanthropy doesn't currently fund EA wild animal welfare work.
Thanks for this. I think for me the major lesson from the comments / conversations here is that many longtermists have much stronger beliefs in the possibility of future digital minds than I thought, and I definitely see how that belief could lead one to think that future digital minds are of overwhelming importance. However, I do think that for utilitarian longtermists, animal considerations might dominate in possible futures where digital minds don't happen or don't spread massively, so to some extent one's credence in my argument / concern for future animals ought to be determined by how much one believes or disbelieves in the possibility and importance of future digital minds.
As someone who is not particularly familiar with longtermist literature, outside a pretty light review done for this piece, and a general sense of this topic from having spent time in the EA community, I'd say I did not really have the impression that the longtermist community was concerned with future digital minds (outside EA Foundation, etc). Though that just may have been bad luck.
Ah - you're totally right - that was an oversight. I'm working on a followup to this piece focusing more on what animal-focused longtermism looks like, and talking about moral circle expansion, so I don't know how I dropped it here :).
I appreciate your thoughtful response to my post, and think I may have unintentionally come across harshly. You and I likely disagree on how much weight to give the moral worth of animals, and on what that entails about what we ought to do. But my discomfort with this post (I hope, though of course I have subconscious biases) is specifically with the unclarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort "I think that these folk X are worth less than these other folk Y" (not a direct quote from you, obviously) without stating precisely why one believes that or justifying the claim. That genuinely feels like a dangerous precedent, and without context, such statements ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different from me are worth 1/10th of people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.

One small side note - I feel confused about why surveys of how the general public views animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions. Most members of the public, myself included, aren't experts in either moral philosophy or animal sentience. And we also know that most members of the public don't view veganism as worthwhile to do. Using this data as evidence that animals have less moral worth strikes me as doing something analogous to saying "most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally".
This kind of survey provides information on what people think about animals, but is in no way evidence of the moral status of animals. But this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, not something assigned to them by others :).
While you're right that the Cambridge Declaration on Consciousness was signed by few people, they were mostly very prominent and influential researchers, which was the point of the thing. But yeah, it is weak evidence on its own, I agree.
I don't know of specific survey data, but based on both the declaration and its continued influence, and the wide variety of opinions, literature reviews, etc supporting the position, my impression is that there is somewhat of a consensus, though there are occasional outliers. I believe my "to some extent, consensus" accurately captures the state of the field. Though in either case it is beside the point since Jeff assumed them to be sentient for the post. Thanks for sharing! :)
I agree that I was assuming a certain moral framework in my post - I've updated it to refer explicitly to utilitarianism of some kind, since that's a fairly common view in EA.
Thanks for the moral trade idea!
Yeah, that's fair - I was not charitable in my original comment RE whether or not there is a rationale behind those estimates, when perhaps I ought to assume there is one. But I guess part of my point is that because this argument entirely hinges on a rationale, not providing it just makes this seem very sketchy.
While I don't think human experiences and animal experiences are comparable in this direct a way, as an illustration imagine me making a post that said, "I think humans in other countries are worth 1/10 of those in my own country, therefore it seems like more of a priority to help those in my own country", and providing no reasoning or clarification for that discount. You would be justified in being very skeptical of the argument I was making, and to view my argument as low quality, even though there might be a variety of other good reasons to prioritize helping those in my own country. I don't think that kind of statement is high enough quality on its own to be entertained or to support an argument. But at its core, that's the argument in this post. I'd be interested in talking about the reasons behind those discounts, but without them, there just isn't even a way to engage with this argument that I think is productive.
For the record, I generally don't think it is a major wrong to not be vegan, and wouldn't downvote / be this critical of someone voicing something along the lines of "I really like how meat tastes, so I'm not vegan," etc. I am more critical here because this is an attempt at a moral justification for not eating a vegan diet, and I think that argument not only fails, but also doesn't attempt to defend or explain its core premises and assumptions, especially when aspects of those premises seem contrary to some degree of scientific evidence / consensus - which I understand to be broadly taken seriously as part of community norms.
That being said, I think it's fully possible there are good justifications for having such large discounts on the moral worth of animals, and those discounts are worth discussing. But that was glossed over here, which is why I am responding more critically.
I downvoted this, and would feel strange not talking about why:
I think there are lots of good reasons, moral or otherwise, to not be vegan - maybe you can't afford vegan food, or otherwise cannot access it. Maybe you've never heard of veganism. Maybe there are good reasons to think that the animal products you're eating aren't causing additional harm. Maybe you just like animal products a lot, and want to eat some, even though you know it is bad.
But I don't think this argument is a particularly good one, and I don't think it engages well with questions of animal ethics:
1. "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" - this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness https://en.wikipedia.org/wiki/Animal_consciousness#Cambridge_Declaration_on_Consciousness). Though to be fair, you are assuming they do feel pain in this post.
2. Your weights for animals' lives seem fairly arbitrary. I agree that if those were good weights to use, maybe the moral trade-offs would be justified, but if you're just saying, with little basis, that a pig has 1/100th the moral worth of a human, I don't know how to evaluate it. It isn't an argument. It's just an arbitrary discount to make your actions feel justified from a utilitarian standpoint.
I also think these moral worth statements need more clarification - do you mean that while I (a human) feel things on a scale of -1000 to 1000, a pig only feels things on a scale of -10 to 10? Or do you mean a pig is somehow worth less intrinsically, even though it feels similar amounts of pain as me? I am skeptical of the first statement because of a lack of evidence for it, and the second seems unjustifiably biased against pigs for no particular reason.
I generally think factory farms are pretty bad, and maybe as bad as torture. Even setting cows aside, eating animal products requires (by the numbers you shared) 6.125 beings to be tortured per year per American. I personally don't think that is a worthwhile thing to cause, and randomly assigning small moral weights to those animals in order to feel justified seems unscientific and odd.
I think it seems fairly clear that there is a strong case - if you have the means and access to vegan food, and you're a utilitarian of some sort - for eating at least a mostly vegan diet. No one has to be perfectly moral all the time, and I think it's probably okay (on average) to often not be perfectly moral. But presenting arbitrarily assigned discounts on lives until your actions come out morally justified is a weak justification.