What is the evidence that bivalves are much less likely to be sentient than insects? They are also small animals, so when they are eaten, the numbers add up to be very large.
I found this really insightful, thank you for your research.
I also think there is something missing in this sentence: "Surveys that instead ask more specific questions such as whether or not the person has eaten the meat of an animal in the last week."
Yeah, I agree that it is not the most natural and straightforward thought experiment. Unfortunately, hedonic comparisons make the most sense to me when I can ask "would I prefer experience A or B?", and asking this question is much more difficult when you try to compare experiences across animals.
But at least it should be physically imaginable for me to be lobotomised so that my mental capacities are equivalent to those of a chicken. I would care much less about what happens to future me if my mental capacities were altered to be similar to those of an ant. But if my brain were altered to be similar to a chicken's brain, I would be much more afraid of being boiled alive, being crammed into a cage, etc.
I think the question "would you rather see one additional human life-year or 3 chicken life-years" conflates the hedonic comparison with special obligations to help human beings. One might prefer human experiences over non-human experiences, even when they are hedonically equivalent, because of special obligations. If we're exclusively interested in welfare, I think a better thought experiment would be to ask how you would feel about having these experiences yourself.
If God offered you an opportunity to have an extra year of average human life, and on top of that, 1 year of average layer hen life, 1 year of average broiler chicken life, 10 years of average farmed fish life, and 100 years of average farmed shrimp life, would you accept that offer? Of course that experiment is too artificial, but people go through extreme illnesses that cause them to have mental capacities similar to those of a chicken. I sometimes think about how afraid I would be of being reincarnated after my death, going through mental changes that leave my capacities equivalent to those of a chicken, and then going through all the average chicken experiences. I personally wouldn't take that risk in exchange for one additional year of human life.
I disagree with the following: "very strong evidence against "the world in 100 years will look kind of similar to what it looks like today"."
Growth is an important kind of change. Arguing against the possibility of some kind of extreme growth makes it more difficult to argue that the future will be very different. Let me frame it this way:
Scenario -> Technological "progress" under scenario
Most of the mainstream audience gives credence to scenarios 3, 4, and 5. Scenario 3 is the one with the highest technological progress. The blog post mostly refutes scenario 3 by explaining how difficult and rare such growth and technological change are. This argument leads people to give more credence to scenarios 4 and especially 5, rather than 1 and 2, since scenarios 1 and 2 also involve a lot of technological progress.
For these reasons, I'm more inclined to believe that an introductory blog post should focus more on assessing the possibility of scenarios 4 and 5 rather than scenario 3.
Arguing against scenario 3 is still important, as it is decision-relevant to the question of whether philanthropic resources should be spent now or later. But this topic doesn't seem to make a good intro blog post for AI risk.
I'm in favour of everyone donating to effective charities. Even according to deontological theories I think donating and avoiding harm are two different responsibilities and people doing harm still have responsibilities/opportunities to donate. Donating is an amazing thing to do regardless of what other actions a person might be undertaking.
Nonetheless, I'm also very much in favour of having true beliefs about things and taking moral uncertainty seriously. If something doesn't seem right to me under a somewhat plausible theory, I'm going to say so even if I don't believe in that theory myself. My language in the original comment is also appropriately hedged ("I suspect", "it might be the case").
I wouldn't want to discourage anyone from donating anywhere. But for offsetting I have uncertainties so I'm going to state them. I agree that one of the more important wrongdoings committed by consuming animal products is creating more demand.
But I'm not certain that eating meat doesn't wrong the animal eaten at all according to deontological theories.
1. I'm not sure that the right to bodily integrity ends after death. It might be the case that desecrating the bodies of dead individuals wrongs them. I'm aware that claiming that dead people can be wronged brings in a lot of problems in moral theorising, but I can't dismiss the claim entirely.
2. It seems very odd to me that if you hire an individual to kill X and X gets killed, you certainly wrong X; but if someone kills X in advance with the expectation that they will get paid for it and retroactively asks to get paid for killing X, paying them doesn't wrong X.
And if eating meat wrongs the animal being eaten, then offsetting is not a Pareto improvement, so the case for "offsetting" becomes weaker.
To be honest, you can view these implications as weaknesses of deontological theories; I personally do.
None of this weakens the case for donating to effective charities either. Donating money to effective charities is pretty robust according to many different moral theories.
This avoids the question but I suspect meat-eating offsets are morally dubious from a standpoint that takes moral uncertainty seriously.
Emitting carbon and then donating to offset it is, in some sense, an ex-ante Pareto improvement. It's rather easy to say "nobody was made worse off" (though it gets much more complicated when you consider the impact on people who are not born yet).
It might be the case that eating meat wrongs the animal being eaten, and that animal is not helped by the donation. So the case for offsetting is weaker here.
I also liked this quote from Obama on a similar theme. The advice is pretty common, for very good reasons, but hearing it from a former POTUS gave it more emotional force for me: "how do we sustain our own sense of hope, drive, vision, and motivation? And how do we dream big? For me, at least, it was not a straight line. It wasn't a steady progression. It was an evolution that took place over time as I tried to align what I believed most deeply with what I saw around me and with my own actions.
The first stage is just figuring out what you really believe. What's really important to you, not what you pretend is important to you. And what are you willing to risk or sacrifice for it? The next phase is then you test that against the world, and the world kicks you in the teeth. It says, "You may think that this is important, but we've got other ideas. And who are you? You can't change anything."
Then you go through a phase of trying to develop skills, courage, and resilience. You try to fit your actions to the scale of whatever influence you have. I came to Chicago and I'm working on the South Side, trying to get a park cleaned up or trying to get a school improved. Sometimes I'm succeeding, a lot of times I'm failing. But over time, you start getting a little bit of confidence with some small victories. That then gives you the power to analyze and say, "Here's what worked, here's what didn't. Here's what I need more of in order to achieve the vision or the goals that I have." Now, let me try to take it to the next level, which means then some more failure and some more frustration because you're trying to expand the orbit of your impact.
I think it's that iterative process. It's not that you come up with a grand theory of "here's how I'm going to change the world" and then suddenly it all just goes according to clockwork. At least not for me. For me, it was much more about trying to be the person I wanted to believe I was. And at each phase, challenging myself and testing myself against the world to see if, in fact, I could have an impact and make a difference. Over time, you'll surprise yourself, and it turns out that you can."
The problem with this advice is that many people in EA don't think we have enough time to slowly build up. If you think EA might take control of the future within the next 15 years, you don't have much time to build skills in the first half of your career and exercise power after you have 30 years of experience. There is an extreme sense of urgency, and I am not sure what the right response is.
To clarify, it wasn't morally supererogatory to boycott speaking with slave-owners. Often you have to speak with wrongdoers to convince them.
Lay also did a lot of things that were great. I focused on the example in the question.
I should note that Bentham too picked his fights to some extent, as he never published his writings on legalising homosexuality. His address to the French delegates on colonies also tries to frame emancipation as a win-win solution. But it's still very bold. In the context of existential risks, it doesn't seem to me that people make proposals to policy-makers that are as bold.