You may have read recent reports that the U.S. Commodity Futures Trading Commission and Department of Justice have filed charges (announcements here and here) against the cryptocurrency exchange BitMEX and several people involved with the company. This includes Ben Delo, a major EA donor and a cofounder of BitMEX.
CEA, Effective Giving UK, and 80,000 Hours became aware of the charges yesterday, when the news was first reported. No findings have been made about these charges at this point. We will continue to watch how things unfold and learn more, and will continue to update the EA community.
[Conflict disclosure: A charity on whose board I sit, the Legal Priorities Project, has been funded by Ben Delo. I write solely in my personal capacity.]
I think most EA organizations should have a very high standard for outright rejecting donors' gifts on the basis of the donors' wrongdoings. In particular, even if Ben Delo is guilty of everything he's charged with doing, I don't think EA charities should reject his donations on that basis. (I know that CEA has not yet said it would do so or recommend doing so; just proactively voicing my opinion in case that's under consideration.)
For one thing, it seems pretty antithetical to EA—and the stakes of the work we purport to engage in—to reject money on such a basis. We care about doing good effectively, and more money is useful for doing that. We should be very reluctant to give up free money, especially if the counterfactual is that the money is spent less effectively or selfishly. If, as we often suppose, it costs about $3,000 to save a life through GiveWell, it seems very implausible to me that whatever good comes from rejecting a donation of that size could outweigh the good the donation itself would accomplish. Indeed, it's not clear to me that EA even has a legitimate inherent interest in caring about a donor's identity at all, without anything more.
I also think a standard of rejecting people on the basis of their wrongdoings is morally suspect. I really think we should avoid judging people on the basis of the worst thing they've done, even if that worst thing is criminal. Furthermore, as most people would probably agree, legality and criminality are imperfect indicators of morality, so making donor-relations decisions on that basis seems slippery without more. This is especially true for malum prohibitum crimes.
Furthermore, even criminals or overall-bad-people should be allowed—and encouraged!—to engage in morally good actions, including donating to effective charities. It's very unclear to me what good can come of a blanket policy of rejecting donations from such individuals, and such a policy risks depriving them of the opportunity to do good things, including as acts of penance or out of a genuine desire for moral growth.
Relatedly, EAs are often appropriately more risk-tolerant than others. Legal risk is one important type of risk; I don't see it as categorically different from other risks. I recall Tara Mac Aulay's 80K interview, where she noted:
I don't think we want to encourage a culture in which EAs strictly minimize all legal risks, because that can impede effectiveness! While we (and major EA organizations) may want to refrain from encouraging such risky behavior, I don't think assumption of such risks in pursuit of the greater good should be grounds for total rejection of donations (just as it wouldn't be for other forms of risk). We don't want to shun would-be Robin Hoods.
I do see a few reasons why a charity might reasonably worry about things like this:
(1) is of course a legitimate worry, but I think EAs rightfully don't weigh PR as highly as other charities/movements, especially when the PR concerns are not founded in good moral reasoning. If we think that accepting donations from people convicted of crimes is better for the world, I may prefer us to stand by our convictions. Regardless, anonymization can obviously mitigate some (most?) of these risks.
(2) is also a very legitimate worry, but it seems like it should be solvable through either anonymizing donations or having an ethical firewall between those who know major donors' identities and those who set organizational priorities. For people who have engaged in violent or otherwise repugnant behavior (such that employees, volunteers, or others have good reason not to want to be around them), simply excluding the donor (and even their name and likeness) from such spaces should be sufficient.
(3) is a bit more complex, but would also be solved with anonymization. I'm actually not sure that the worry is a well-founded one, since it seems like a person's good acts are morally relevant to assessing that person's overall goodness in light of bad acts. But if we don't want to be complicit in that process, then precluding attribution should solve the problem.
I'm not sure I have a good working theory of how to deal with (4), other than to say I think it would be perfectly reasonable for any charity to assume that any anonymous donation was not the fruits of some heinous crime (as is almost always true). Furthermore, even if it was, it's not clear to me what good for the world is accomplished by precluding a person who is by supposition bad from getting rid of her money: bad people should probably have less money, not more.
I'm not an expert on (5), so maybe that's what's driving this.
Thus, in all cases except (5), anonymization would seem to do a good job of protecting charities' legitimate interests while also allowing for more donations to go through. CEA's efforts to deanonymize donor identity are therefore a bit puzzling to me. Unless (4) and (5) are doing a lot of lifting, I'm not sure I see the reasons for CEA's worries in this particular case or any cases in the same ballpark.
Justifying potentially bad stuff with "the stakes of the work EA does" feels like a slippery slope and a bit fanatical. There should be principled reasons that hold true for all charities; the cost-benefit approach you use in the second part of your comment is better. Related: this thread on whether it's okay to work in the tobacco industry.
Some other reasons I am uncomfortable with rejecting donations on such a basis:
As an addendum to (2) and (4), FWIW, on the object level I'm not particularly convinced that Ben Delo has acted especially immorally (though I have not looked in detail at the allegations).
If we were to conflate morality with legality, we would also believe that, e.g., anti-animal-agriculture activists are evil terrorists, and that open science is similarly evil. Moreover, since there is not a particularly strong principled reason to privilege US/UK law as moral guidance over the laws of other countries, we should take seriously the possibility that we should revise our views based on the legal doctrines of other countries, which may have some counter-intuitive results.
Strongly agree. It’s what I was getting at with the malum prohibitum thing.
Anonymization would probably solve (3), but would, unfortunately, likely create PR risks of its own. Lawrence Lessig made a similar argument a while ago:
Unfortunately, from what I can remember, public response to this argument was overwhelmingly negative, and The New York Times (yes, that newspaper) published a story whose headline portrayed Lessig in a very bad light (Lessig subsequently filed a defamation lawsuit against the Times, which he withdrew after the headline was amended four months later). I personally would not have anticipated such a response, since the argument seems pretty reasonable to me, and I wonder if EAs as a whole may be apt to underestimate certain PR risks simply because they rely on their own subjective sense of an argument's merits to predict how the broader public and the media will react to it.
Yeah, this is a good point. But this is why I limited my position to setting "a very high standard" for rejecting donations (and to not rejecting donations from people "in the same ballpark" as Delo, assuming he is guilty, which we should not assume), rather than to "never."
Also, I think there are some salient differences with the Epstein case, beyond the enormous gulf in moral turpitude implicated by the two cases. Ito knew Epstein's identity, and IIRC Epstein had toured the Media Lab. A truly anonymized system should allow for neither of these.
(I also thought the Lessig article was perfectly reasonable.)
Following up on this: I had a conversation that updated me to believe that CEA is doing the right thing here. Unfortunately I can't disclose much about that conversation, but I am posting this here for accountability.
My very quick take:
Someone pointed out to me that due diligence for money-laundering prevention could be another reason, especially for conditional grants or for organizations like CEA that regrant.
Here’s an update from CEA's operations team, which has been working on updating our practices for handling donations. This also applies to other organizations that are legally within CEA (80,000 Hours, Giving What We Can, Forethought Foundation, and EA Funds).
Is the idea that such controls, had they been implemented in the past, would have prevented you from accepting Delo's donations?
Also, I am curious to see CEA's cost-benefit analysis behind this decision. Naively this seems like incurring a cost (staff time, consultant fees, lawyer fees, donor annoyance) in order to reduce a benefit (donations). Based on my cursory research (talking to a lawyer and reading this), I couldn't work out if this was actually legally required given CEA's situation, though it does seem to be reasonably common.
I am checking with operations staff about this.
I don’t want this comment to read as being all about Delo or BitMEX specifically; we're also thinking about how to be prepared for other situations that could arise. [Edited for clarity]
A lot of what’s happening here is CEA realizing that there are a lot of potential donors who make money in crypto or other emerging fields where society is still trying to figure out how to apply legal and ethical frameworks. We need better systems for thinking about that. Many of the steps CEA is taking or considering are not strictly legally required, but that’s not our only consideration.
EA has long included the idea that some ways of making money could create net negative impact even if you donate your earnings; see, for example, 80,000 Hours' post on why you should avoid harmful jobs even if you'll do more good.
There are other ways of making money that don’t reach that bar, but that involve enough harm that their overall effect could be really damaging to EA, for example by spreading a norm that it doesn’t really matter whether you make your money in an ethical way as long as you donate it afterwards.
CEA’s guiding principles include this section on integrity:
Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognize that our actions reflect on it.
Well, you did announce the policy change as a comment on an article about Delo!
I think (?) I may have pointed this out previously, but there are some significant issues with this article. For example, it suggests a $42,000 average social cost of jobs in finance:
But if you follow the source link, you can see that this estimate is actually for only the 10% most harmful jobs:
So the average harm is 10x less, i.e. $4,200.
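Spelled out, the implicit back-of-the-envelope calculation here (assuming, as an approximation not stated in the source, that the other 90% of finance jobs cause negligible harm) would be:

\[
0.10 \times \$42{,}000 \;+\; 0.90 \times \$0 \;\approx\; \$4{,}200 \text{ per job, averaged over all finance jobs.}
\]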
Even then, I think this is quite a poor estimate. It relies on ascribing all the expected costs of financial crises to financial workers. However, a great deal of the responsibility should surely be borne by other actors. Depending on your views of the causes of the crisis, some combination of these groups is quite responsible:
Furthermore, it does not assign any monetary value to the positive aspects of finance, even though these are probably very large:
Similarly, the article suggests that being a tobacco CEO is unacceptable, linking to this analysis. However, I think the Fermi calculation involved in this estimate was quite far off, as I explained here:
At the time Rob suggested he would think more on the issue, but to my knowledge the analysis was never updated.
Sorry, I mean my most recent comment specifically: we're considering these kinds of changes not just because of this one situation but also because of others that could arise. I'll edit to clarify.