Suppose you are cooperating with someone. It seems like there would be good reason to keep an eye on your partner, to make sure they do not do very bad things. For example:

  • It would be bad for the victims of your partner if your partner did bad things
  • It would be bad for your reputation to be associated with people who do bad things
  • It would be bad for you if your partner got caught doing bad things and you were suddenly missing a partner you thought you could rely on

But how vigilant should you be about keeping an eye on your partners? And who should you keep an eye on?

Here's one proposal: to the extent that your partner helps you, and it later turns out that their help was funded by bad things, you should help your partner's victims as much as your partner helped you. For instance, if your partner stole $10,000 from a bank and then gave you $1,000, you should return that $1,000 to the bank.

This seems to me to create good, clear incentives. For instance, it neatly defines which people you must keep an eye on, and gives you proportional incentives to watch them based on how entangled they are with your organization. It also directly links diligence to cooperation, so you have a principled reason to give your partners for wanting extra checks when they offer to help you. And it also seems like it would help your reputation in case something genuinely does go wrong.

(I'm not sure the ratio should be exactly 1:1. An argument for paying back more than 1:1 is that helping victims after the fact is likely worth less to them than not being harmed in the first place, and also that you won't always catch bad actors, so paying back more would counterbalance this incentives-wise. An argument for paying back less than 1:1 is that otherwise you are much more likely to be unable to cover the cost. Probably any significant self-imposed cost for bad partners would massively improve the incentives.)
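As a rough sketch, the proposal's payback rule can be written down directly. The function name and the `payback_ratio` parameter are my own illustration (the post leaves the exact ratio open), assuming you never owe more than the victims actually lost:

```python
def restitution_owed(benefit_received: float, victim_losses: float,
                     payback_ratio: float = 1.0) -> float:
    """Amount to return to victims when a partner's help turns out
    to have been funded by wrongdoing.

    benefit_received: how much of your funding came from the bad acts
    victim_losses:    total harm done to the victims
    payback_ratio:    1.0 for the baseline 1:1 rule; >1 to offset
                      undetected fraud, <1 to limit self-inflicted cost
    """
    # Pay back in proportion to what you received, capped at the
    # victims' actual losses.
    return min(benefit_received * payback_ratio, victim_losses)

# The bank example from the post: partner stole $10,000, gave you $1,000.
print(restitution_owed(1_000, 10_000))  # 1000.0
```

Under a >1:1 policy, `restitution_owed(1_000, 10_000, payback_ratio=1.5)` would come to $1,500 instead.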

Inspired by discussion on Twitter.


Thanks for writing this! I think this is a good proposal worth seriously considering and engaging with (although I'm undecided on whether I'd endorse it all-things-considered).

One other consideration is that you may want to effectively negate the benefits obtained through fraud, to prove that "crime doesn't pay"; this means decreasing the budget of each cause by the amount granted/donated to it. Since FTX and associates were disproportionately funding longtermist interventions, and presumably value them more than other causes, if we don't pay out disproportionately from our longtermist budget, FTX and associates still get away with shifting the share and amount of funding towards longtermism, which is a win for them. (Of course, this doesn't necessarily mean the share of longtermist funding from outside FTX should decrease overall, since there are also reasons to increase it now.)

Another consideration is that we should pay more than the benefits, to account for fraud that isn't caught.

Both good points.

I think it would depend. For many charities, the ultimate cost of this sort of "strict liability" policy is borne by the intended beneficiaries. I would be hesitant to extend it, in certain cases, beyond what I think morality requires.

For a grad student receiving a micro-grant, asking them to return funds already earned is too much, and expecting significant vetting is unrealistic and inefficient.

The potential value, I think, would be for midsize and larger organizations with a diverse donor base. They could put, say, 5% of each year's donations into a buffer, releasing 1% back to programs for each year that passes without any red flags.

Very few nonprofits could absorb a reversal of a megadonor's gifts.
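The buffer scheme in the comment above could be sketched roughly as follows. The 5% holdback and 1%-per-year release are the commenter's example figures; interpreting the 1% as a fraction of the original year's donations (rather than of the remaining buffer) is my assumption, and the helper names are hypothetical:

```python
BUFFER_RATE = 0.05   # fraction of each year's donations held back
RELEASE_RATE = 0.01  # fraction of the original donations released per clean year

def yearly_split(donations: float) -> tuple[float, float]:
    """Split one year's donations into immediately usable funds
    and a holdback buffer."""
    buffered = donations * BUFFER_RATE
    return donations - buffered, buffered

def release_from_buffer(buffer: float, original_donations: float) -> tuple[float, float]:
    """After a year with no red flags, move 1% of the original
    donation amount from the buffer back to programs."""
    released = min(original_donations * RELEASE_RATE, buffer)
    return buffer - released, released

usable, buffer = yearly_split(1_000_000)  # $950,000 usable, $50,000 buffered
buffer, released = release_from_buffer(buffer, 1_000_000)  # $10,000 released
```

At these rates, a given year's holdback fully vests after five clean years; a red flag discovered in the meantime leaves up to 5% of that year's donations available for restitution.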

I should maybe have been more explicit in stating the actual policy proposal:

I don't think paying back necessarily needs to be done on the level of an individual project/grant. Insofar as the EA community is, well, a community, it might be viable to take responsibility on the level of the community.

For instance, in the Twitter discussion I linked to, the suggestion was that EAs set up a fund that they could donate to for the victims of FTX.

This would presumably still create strong community-wide incentives, as well as incentives for EA leaders, because nobody wants their community to waste a lot of resources by having worked with bad actors. But it would be much less burdensome for individual grant recipients.

Hello there,

I am not so sure this is a great suggestion. Traditional institutions, like banks and the government entities that approve non-profits, can filter out some of the bad actors; it is best to address the issue before donations or funds are accepted. Verifying that someone has no legal issues, conflicts of interest, or criminal record solves perhaps 90% of the cases where someone attempts a Robin Hood style of philanthropy. Just my thoughts on how to tackle the what-if situation.

All the best,


Hm, my understanding is that there is no traditional institution that will issue a "yep this person is good" document that works across contexts, including for e.g. people who work in crypto, so any approval process would require a lot of personal judgement?

That said, I don't disagree with using preexisting approval systems like criminal records; my suggestion is more about making sure that one does in fact use them in the correct proportions, and in particular about credibly committing to doing so in the future.

> Inspired by discussion on Twitter.

Hey, it's me.

I think I agree with this policy. The idea of internalising externalities feels very neat and elegant, and I think it creates the right incentives for all involved parties.
