Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Sorry if I haven't been clear.
I agree that the animal movement, individually and collectively, should take into account the entire counterfactual difference between someone being vegan and someone being an omnivore. This would include the harm caused by being an omnivore through increasing the demand for factory-farmed meat, as well as the absence of the positive effects of being vegan (such as normalizing veganism and increasing demand for vegan products). Ideally, in deciding one's dietary choices, someone concerned with animal welfare would consider both the harm avoided by being vegan and the good that being vegan causes. They would then quantify the cost for animal welfare charities to commensurately decrease the harm caused and effectuate the good that is not realized. This would probably be a better measure, and one could then ask, "OK, I'm donating 10% to effective charities already. Is it easier for me to pay the cost of the whole counterfactual difference on top of what I would otherwise donate? Or is it easier for me to be vegan?"
The other frame for offsetting, however, would be to make it match the psychological appeal of undoing the harm one has caused. If this is what motivates people to donate to animal welfare charities, then it would make more sense to include only the harms caused by being an omnivore (i.e., contributing to demand for factory-farmed meat). People may not feel morally obligated to make the positive difference, just not to cause the harm (or to undo it).
So, for the decision-making of individuals and of the movement, considering the positives of veganism as well as the negatives it avoids is definitely important. Whether "offsetting" should include them is a prudential question that really depends on the psychologies that cause people to offset.
An interesting question I have regarding offsetting is whether it should measure only the negative aspects of contributing to animal suffering by increasing demand for factory-farmed products, or whether it should also consider the positives foregone by not being vegan (signaling value, increasing the demand for vegan products, and possibly other things).
Because if one were deciding whether to be vegan or to donate $X, one should probably consider the full counterfactual (positives foregone as well as negatives caused).
I'm not drawing a metaphysical distinction between humans and animals. I care about welfare, full stop.
The difference is empirical, not metaphysical. Human suffering triggers compensatory responses from other humans that multiply the costs. People who learn hospitals might harvest organs stop going to hospitals. Communities that tolerate trafficking erode the trust structures enabling cooperation. Social fabric frays. These system-level effects make the total harm enormous and difficult to quantify. You can't reliably offset what you can't measure.
Farmed animals don't generate these dynamics. A chicken doesn't know some humans eat chickens while others donate to reduce chicken suffering. There's no institutional trust to erode, no behavioral adaptation that cascades through society. The welfare calculus is direct and measurable.
On the organ case: if you modify it enough to truly eliminate the systemic effects (no fear, no institutional erosion, no social knowledge of what occurred) then yes, I bite the bullet. Saving five lives at the cost of one is better than letting five die to keep one alive. If that conclusion seems monstrous, I'd suggest your intuition is tracking the systemic costs you've stipulated away, not the raw welfare math.
But we don't need to resolve exotic hypotheticals here. You're arguing from analogy to human cases where offsetting fails. It fails because of empirical features those cases have, not because human suffering can never be weighed against animal suffering.
Ultimately, for me, it all cashes out in the experiences of beings, whether human, chicken, or digital consciousness. That's what matters.
But there are important consequentialist reasons why the doctor killing patients fails in the real world. Once you live in a world in which people who go to hospitals are killed and their organs repurposed, people cease going to hospitals.
On the other hand, the differences in treatments in farmed animals are not going to trigger responses from said farmed animals that lead to such knock-on effects. You can simply look at the welfare consequences.
I think of it from the perspective I would have if I knew I would die and immediately be reborn as a chicken. Would I rather there be more Georges in the world, who are vegan and do not contribute directly to the demand that causes my torture, or more Henrys, who are omnivores and thus contribute directly to the torture, but donate an amount that neutralizes the effect and then some?
If we actually care about the welfare of animals more than we care about moral purity, we would rather there be more Henrys than Georges.
Glad to hear about your commitment to utilitarianism!
I would note, re the camper van, that minimizing costs so that you can give more is only one part of the equation. There may be productivity costs to setting the floor of your own well-being too low, such that it may make sense to spend a bit more on yourself.
It only relates to it insofar as someone could view your post (just looking at the title) as implying socialism and EA (at the broadest level, trying to do the most good we can with resources) are at odds. In reality, a lot of critics of EA are addressing the community's choice of priorities rather than EA at the broadest level. I would prefer it if such critics embraced the EA framework explicitly and made the case that their cause area or philosophy is actually the most EA, if this is pretty much what they are doing.
There's a lot of conflation between what the EA community is prioritizing at any given point and EA as a philosophy to guide moral behavior. I think this conflation probably does a lot of damage to EA's ability to proliferate.
It might make sense to offer the ability to toggle between a "harm negation" and a "total counterfactual expected difference" calculation. But you're right that many of the people to whom offsetting might appeal may not want to investigate these distinctions.