
Sometimes people seek to offset their harmful behaviours through donations. The counterfactual impact of a donation is often used in these offsetting calculations. This seems mistaken.

Assume the following situation:

A 1 dollar donation to an animal product reduction charity spares 1 animal from being born into factory farming.

Alice, Charles and Mike cooperate in this charity. The participation of all three is indispensable for the outcome. So each of them has a counterfactual impact of 1 animal.

If each of them assumed they had offset one instance of their past animal product consumption through this project, that would be triple counting: three offsets claimed for one animal spared. For this reason, the counterfactual values of donations shouldn't be used in offsetting calculations.
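To make the double counting concrete, here is a minimal Python sketch using the assumed numbers from the example above (the player names and the value function are just illustrations of the setup, not anything beyond it):

```python
# Hypothetical numbers from the example: Alice, Charles and Mike are each
# indispensable, and together their donation spares 1 animal.
players = ["Alice", "Charles", "Mike"]

def value(coalition):
    # The project succeeds only if everyone participates.
    return 1.0 if set(coalition) == set(players) else 0.0

# Counterfactual impact: what is lost if this one participant drops out?
counterfactual = {
    p: value(players) - value([q for q in players if q != p])
    for p in players
}
print(counterfactual)                # each participant: 1.0
print(sum(counterfactual.values()))  # 3.0 offsets claimed for 1 animal spared
```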

A better, but still unsatisfactory, approach would be to look at Shapley values. Here is a case where the Shapley value also gives an unsatisfying answer:

Two people cooperate on a project to spare one animal from being born into factory farming. The participation of either one alone is sufficient for the project to succeed. The counterfactual value of each participant is 0, while the Shapley value of each is 0.5.

Maybe min(Shapley value, counterfactual value) would be a better benchmark for offsetting. But I'm not sure of this.
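As a sketch of how these quantities come apart, here is a brute-force Shapley computation (averaging each participant's marginal contribution over all join orders) applied to the redundant two-person game above; the player labels P1/P2 are placeholders:

```python
from itertools import permutations

def shapley(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

def counterfactual(players, value):
    """Value lost if one player is removed from the full group."""
    return {p: value(players) - value([q for q in players if q != p])
            for p in players}

# Redundant game: either participant alone is enough to spare the animal.
redundant = lambda coalition: 1.0 if len(coalition) >= 1 else 0.0
players = ["P1", "P2"]

sv = shapley(players, redundant)
cf = counterfactual(players, redundant)
benchmark = {p: min(sv[p], cf[p]) for p in players}
print(sv)         # {'P1': 0.5, 'P2': 0.5}
print(cf)         # {'P1': 0.0, 'P2': 0.0}
print(benchmark)  # {'P1': 0.0, 'P2': 0.0}
```

On this game, min(Shapley, counterfactual) assigns each participant 0, matching the intuition that neither can claim the animal as an offset.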

How much difference does this make?

Many effective charities tend to do institutional work, and institutional work often involves a lot of people. In animal advocacy, welfare policies require mass support from the public. A petition can easily get more than 100,000 signatures. About 7.5 million people voted for Prop 12 in California.

However, any specific supporter from the public is less critical than the donor. Many projects wouldn't start at all without donor support, whereas Prop 12 would still have passed even if one fewer person had voted for it.

Nonetheless, quite a lot of veto players are involved in institutional animal welfare work. Assuming that there are 8 distinct individuals or coalitions with the power to kill a typical animal welfare project, the Shapley value might be an order of magnitude lower than the counterfactual value.
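Under that assumption the arithmetic is simple: in a symmetric all-veto game, the Shapley value splits the project's value equally among the veto players, while each player's counterfactual value is the whole project. A quick sketch with these assumed numbers (the 8 players and the normalised project value are the assumptions, not data):

```python
# Assumed: 8 symmetric veto players/coalitions, project value normalised to 1.
n_veto = 8
project_value = 1.0

shapley_each = project_value / n_veto   # 0.125, by symmetry and efficiency
counterfactual_each = project_value     # 1.0: removing anyone kills the project

print(counterfactual_each / shapley_each)  # 8.0, roughly an order of magnitude
```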

Comments

Thanks for posting this!

I think we can run into problems when we attempt to transfer cost-effectiveness analyses that were sound enough to answer "where should I donate?" to the harder question of "how much do I need to give to offset?" As you point out, assigning ~100% of the counterfactual good to the donor is . . . at a minimum, generous.

When we are asking where to donate, that often isn't a major problem. For example, if my goal is to save lives, I can often assume that errors in assigning "moral credit" will be roughly equal across (at least) GiveWell-style charities like AMF. Because the error term is similar for all giving opportunities, we can usually ignore it: it shouldn't change the relative ranking of the giving opportunities unless they are fairly close.

But offset situations pose a different question -- we are looking to morally claim a certain quantum of good to counterbalance the not-good we are producing elsewhere. That means we need an absolute measure (or at least an estimate) of that quantum. As a result, if we want to find the minimum amount necessary to offset, we must make judgments about how the available moral credit is distributed.

Some people might also want a confidence interval for their offsetting action -- e.g., "I want to be 99% confident that I am giving enough to actually offset my production of not-goods." This is likely impossible with some interventions. For instance, if I think there is a greater than 1% chance that the critics are correct that corporate campaigns are net-negative in the long run, then my 99% confidence interval will always include negative values. 

Someone who wants confidence in actual offset -- rather than offset in expectation -- would logically seek "safer" donation opportunities: those with more certain impact and a low spread of potential impacts. Perhaps a bundle of interventions could achieve the necessary confidence interval (such as 3 programs with an 80% chance of success and no appreciable risk of being net harmful, or a larger number at lower success probabilities).
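As a rough sketch of that bundle arithmetic, assuming the programs succeed or fail independently, that any single success provides the full offset, and that none can be net harmful (all strong assumptions):

```python
# Probability that at least one of n independent programs succeeds.
# Assumes independence, that any single success fully offsets, and
# no appreciable downside risk -- all strong assumptions.
def p_at_least_one_success(p_success: float, n: int) -> float:
    return 1.0 - (1.0 - p_success) ** n

print(p_at_least_one_success(0.80, 3))  # 0.992 -> clears a 99% bar
print(p_at_least_one_success(0.50, 7))  # ~0.992 with more, weaker programs
```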

I am wondering if assigning "moral credit" for offset purposes is too complex to do with an algorithm and instead requires context-specific application of judgment. A few possible examples:

  • Let's assume that most of the individuals who voted for Prop 12 consume animal products regulated by the measure, and that Prop 12 causes an increase in the cost of those products. By voting yes, these individuals voted to take money out of their own pockets to pay for the increase in animal welfare. While I'm fine with adjusting "moral credit" based on the risk undertaken, I'm uneasy with a system that gives donors orders of magnitude more moral credit than others who voluntarily bear costs to achieve the objective.
  • I also wouldn't give the Supreme Court any collective "moral credit" for voting to uphold Prop 12, such that at least the Justices in the majority should feel entitled to eat meat without offsetting. This holds despite the counterfactual value, and what I imagine the Shapley value would be, of each Justice's vote.
  • Moreover, every election cycle, the voters could repeal Prop 12. Getting a repeal measure on the ballot shouldn't be too difficult, and there are monied interests who would happily bear those costs. If they do not, it is likely because they have decided that the voters would shoot them down. So there are at least two necessary conditions for Prop 12's benefits to persist into Cycle 2: it was passed at the beginning of Cycle 1, and it wasn't repealed at the start of Cycle 2. It's true that nobody really did anything during Cycle 1 to protect Prop 12, but it's also true that the voters at the end of Cycle 1 have been judged willing to keep bearing Prop 12's costs in Cycle 2 to continue its benefits. It seems odd to attribute all of the benefits accruing in Cycle 2 to Cycle 1 activity. But how do we split the moral credit here?

Motivated reasoning is always a risk, and any moral-credit-granting analysis is more likely to be underinclusive (and thus to over-grant the available moral credit to the influences that were identified) than the reverse. In some or even many cases, it may be necessary to adjust the required offset upward beyond even what min(counterfactual value, Shapley value) would suggest, to account for these factors.

Thanks for this comment. It felt awkward to include all veto players in the Shapley value calculation while writing the post, and now I can see why. For offsetting, we're interested in making every single individual weakly better off in expectation compared to the counterfactual where you don't exist / don't move your body, etc., so that no one can complain about your existence. So instances of doing harm can only be offset by doing good. Meanwhile, the Shapley value doesn't distinguish between doing and allowing: it assigns credit to everyone who could have prevented an outcome, even if they haven't done any good.

Alice, Charles and Mike cooperate in this charity. The participation of all three is indispensable for the outcome. So each of them has a counterfactual impact of 1 animal.

If each of them assumed they had offset one instance of their past animal product consumption through this project, that would be triple counting: three offsets claimed for one animal spared. For this reason, the counterfactual values of donations shouldn't be used in offsetting calculations.

I'm not sure about this. Suppose that C & M are both committed to offsetting their past consumption, and also that both will count the present co-operative effort, should it go ahead, as a '+1 offset'. Then the counterfactual impact of Alice cooperating with them is saving 1 animal plus causing two future animals not to be saved, i.e. an overall negative effect.
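A tiny worked version of that accounting, with the hypothetical numbers from this comment (the displacement of exactly one future offset each by C and M is the assumption):

```python
# Hypothetical: if Alice cooperates, the project spares 1 animal now, but
# C and M each count it as their '+1 offset' and forgo one future
# offsetting action apiece.
spared_now = 1.0
future_offsets_displaced = 2.0  # one each for C and M

alice_counterfactual = spared_now - future_offsets_displaced
print(alice_counterfactual)     # -1.0: an overall negative effect
```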

So I think the counterfactual approach works fine, and is compatible with your observation that offsetting may be more difficult than would at first appear. (But it really depends upon the details--in particular, whether it's really true that your attempted offset will cause multiple others to do less good in future.)
