I haven't yet looked at the papers cited, but aren't they probably hopelessly confounded? This seems to be one of the areas where it's hardest to measure causal effects.
Answers to this question could be relevant: What are some artworks relevant to EA?
Undoubtedly these are interesting questions, and I don't have much to contribute right now. Your thought experiment reminds me of Timmerman's Drowning Children case from "Sometimes there is nothing wrong with letting a child drown". Timmerman uses this case to argue that we should reject the strong conclusion of "Famine, Affluence, and Morality".
I agree that the simple story of a producer reacting directly to changing demand is oversimplified. I think we differ in that, absent specific information, I think we should assume that any commonly consumed animal product's supply response to changing demand is similar to the estimates in Compassion, by the Pound. In other words, we should center our prior on impact around some of the numbers from there, and update from that starting point. I can explain why I think this in more detail if we disagree on this.

Leather example:
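To make the "supply response" idea concrete, here is a toy sketch of the textbook equilibrium adjustment: when one consumer reduces demand, production falls by the demand drop times the pass-through factor e_s / (e_s + |e_d|). The elasticity values below are illustrative placeholders, not the actual estimates from Compassion, by the Pound.

```python
def production_change(demand_drop, supply_elasticity, demand_elasticity):
    """Long-run fall in equilibrium production when one consumer
    reduces their demand by `demand_drop` units.

    Uses the standard pass-through factor e_s / (e_s + |e_d|).
    Elasticities here are hypothetical, for illustration only.
    """
    factor = supply_elasticity / (supply_elasticity + abs(demand_elasticity))
    return demand_drop * factor

# Hypothetical numbers: dropping 10 units of a product with
# supply elasticity 0.8 and demand elasticity -0.6
print(production_change(10, 0.8, -0.6))  # ~5.71 units less produced
```

The point of the sketch is just that the impact is positive but typically less than 1:1, with the exact ratio depending on the two elasticities.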
Sure, I chose this example to show how one's impact can be diluted, but I also think that decreasing leather consumption is unusually low-impact. I don't think the stories for other animal products are as convincing. To take your examples:
I'll need to think about it more, but as with two-candidate votes, I think that petitions can often have better than 1:1 impact.
Not an expert, but I think your impression is correct. See this post, for example (I recommend the whole sequence).
Late to the party here but I'd check out Räuker et al. (2023), which provides one taxonomy of AI interpretability work.
Thanks, this makes things much clearer to me.

I agree that this style of reasoning depends heavily on the context studied (in particular, the mechanism at play), and that we can't automatically carry numbers over from one situation to another. I also agree with what I take to be your main point: in many situations, the impact is less than 1:1 due to feedback loops and so on.

I'm still not sure I understand the specific examples you provide:
* Not exactly a byproduct, since sales of leather increase the revenue from raising a cow.
** This is not accounting for less direct impacts on demand, such as influencing others around oneself.
This position is commonly defended in consequentialist arguments for vegetarianism and veganism; see, e.g., Section 2 here, Section 2 here, and especially Day 2 here. The argument usually goes something like this: if you stop buying one person's worth of eggs, then in expectation the industry will produce roughly that many fewer eggs than it otherwise would have. Even if your purchase decision is not the tipping point that causes producers to cut production, under uncertainty you still have a positive expected impact. (I'm being a bit vague here, but I recommend reading at least one of the above readings -- especially the third one -- because they make the argument better than I can.)

In the case of animal product consumption, I'm confused about what you mean by "the expected impact still remains negligible in most scenarios" -- are you referring to different situations? I agree in principle that if the expected impact is tiny, then we don't have much reason on consequentialist grounds to avoid the behavior, but do you have a particular situation in mind? Can you give concrete examples of where your shift in views applies, or where you think the reasoning doesn't apply well?
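The expected-value argument above can be sketched with a toy threshold model (all numbers hypothetical, not taken from the cited readings): suppose the producer only adjusts output in large batches, so a single consumer rarely triggers any change, but when they do, the change is large.

```python
def expected_impact(batch_size, my_reduction):
    """Expected fall in production from one consumer's reduction,
    under a simple threshold model: the producer cuts output only
    in batches of `batch_size` units, and each unit of reduced
    demand has a 1/batch_size chance of crossing a threshold.

    Illustrative toy model only.
    """
    p_tipping = my_reduction / batch_size   # chance my reduction triggers a cut
    return p_tipping * batch_size           # expected units cut

# Rarely the tipping point, but the expected impact is still ~1:1
print(expected_impact(batch_size=1000, my_reduction=1))  # 1.0
```

The small probability of mattering and the large size of the batch cancel, which is why the expected impact can stay close to 1:1 even when any individual purchase almost certainly changes nothing.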
Thanks for the posts so far; I've briefly thought about trying some of these ideas but haven't had the courage to really go for them.

One thing I'm wondering: what "sample size" are you basing the takeaways of your posts on intro fellowships on? That is, how many semesters, and how many people participated?
Why is this post being downvoted? I seriously doubt that EAs working to prevent school shootings would be cost-effective, but I don't get why there are downvotes here -- it's a fair question.