
My understanding from the FTX fallout is that SBF was driven by an extreme form of total utilitarianism with little respect for virtues such as honesty. 

I think this kind of attitude is very rare in EA. 

To some, this may be reason not to worry too much about future PR risks to EA.

But I think it is normal across social, political and religious movements for PR risks to come from the most extreme views within the movement.

I think we've seen examples of this in:

  1. the effect of Salafist jihadism on the reputation of Islam
  2. the effect of "Antifa" on the reputation of the American left
  3. the effect of more extreme and authoritarian views on the reputation of the social justicey left

More generally, throughout history many political movements have developed a smaller violent wing that harms the reputation of the wider movement.

I think this has important implications for how EA tackles major PR risks going forward.

Firstly, CEA should bear in mind that as EA grows, the probability that some individual holds a very extreme interpretation of EA principles will increase.

Secondly, CEA and other EA orgs should pre-empt predictable extremism inside EA, and publicly and firmly discourage and reject it, for two reasons:

  1. To prevent harmful acts as an end in itself
  2. To protect EA’s reputation in the event of potential future harmful acts by EAs

“Predictable extremism” includes further cases of stealing-to-give / fraud-to-give, but also violence towards perceived bad actors. This kind of extremism, and more severe violations of deontological principles, might become more likely the closer we get to AGI / as people shorten their timelines, and may become particularly common amongst those with high P(doom).

 

Some concrete actions I'd like to see:

  1. Giving What We Can and Founders Pledge prominently and explicitly making it clear that they condemn and discourage stealing-to-give. This should be prominent enough that GWWC and FP can fairly claim to be at no fault whatsoever for potential future cases of stealing-to-give by signatories. 80,000 Hours should do the same on its pages about earning-to-give.
  2. CEA should add honesty and non-violence as separate fifth and sixth "principles which unite effective altruism" for greater emphasis. 
  3. Community builders should emphasise honesty as a key EA value.
  4. Community builders (and other people) should emphasise non-violence as an EA value more often, to the extent that EAs can fairly claim to be at no fault whatsoever for potential future cases of violence motivated by extreme interpretations of EA principles.

 

In particular, I'd like to stress that we should not wait for someone to commit an act of violence in the name of EA before we dedicate ourselves to non-violence, both for PR reasons and simply to prevent violence in the first place.

 

(Obviously, there are less PR-focused reasons to do these things, but I primarily wanted to focus on PR for the purposes of this post.)

 

EDIT:

This post previously conflated “fringes” with “extreme views” and has been edited to distinguish between them.

Comments

There is a basic valid point here, but I would really want to distinguish between "fringe" and "extreme" views. 

Sam was not a "fringe" member of the EA community. He was extremely central and occupied a pretty central node in the social graph (due to the connections associated with the FTX Future Fund and lots of EAs working at FTX). I agree with you that his beliefs were extreme. Indeed I think it's not that rare for the core of a community to have more extreme beliefs than the fringes, especially as a movement starts growing a lot and is appealing to broader swaths of people each year.

I think this is a large enough conflation that I would recommend changing the title.

Good point, thank you. I agree that it’s important to distinguish between “fringes” and “extreme views” - I’ll edit the post soon.

This is interesting and I broadly agree with you (though I think Habryka’s comment is important and right). On point 2, I’d want us to think very hard before adopting these as principles. It’s not obvious to me that non-violence is always the correct option — e.g. in World War 2 I think violence against the Nazis was a moral course of action.

As EA becomes increasingly involved in campaigning for states to act one way or another, a blanket non-violence policy could meaningfully and harmfully constrain us. (You could amend the clause to be “no non-state-sanctioned violence” but even then you’re in difficult territory — were the French resistance wrong to take up arms?)

I think there are similar issues with the honesty clause, too — it just isn’t the case that being honest is always the moral course of action (e.g. the lying to the Nazis about Jews in your basement example).

These are of course edge cases, and I do believe that in ~99% of cases one should be honest and non-violent. But formalising that into a core value of EA is hard, and I’m not sure it’d actually do much because basically everyone agrees that e.g. honesty is important; when they’re dishonest they just think (often incorrectly!) that they’re operating in one of those edge cases.

Regarding point 2, I'd argue that both "honesty" and "non-violence" are implied by the actual text of the fourth principle on the page:

> Collaborative spirit: It’s often possible to achieve more by working together, and doing this effectively requires high standards of honesty, integrity, and compassion. Effective altruism does not mean supporting ‘ends justify the means’ reasoning, but rather is about being a good citizen, while ambitiously working toward a better world.

I think this text, or something very similar, has been a part of this list since at least 2018. It directly calls out honesty as important, and I think the use of "compassion" and the discouragement of "ends justify the means" reasoning both point clearly towards "don't do bad things to other people", where "bad things" include (but are not limited to) violence.

I think honesty is clearly mentioned there, but I don’t think non-violence specifically is implied there.

Regardless, my case is for honesty and non-violence to both be listed separately as core principles for greater emphasis.

Agree that non-violence and honesty aren’t always the best option, but neither is collaboration, and collaborative spirit is listed as a core value. I think “true in 99% of cases” is fine for something to be considered a core EA value.

I’d also add that I think in practice we already abide by honesty and non-violence to a similar degree to which we abide by the collaborative spirit principle.

I do think honesty and non-violence should be added to the list of core principles to further promote these values within EA, but I think the case for adding these values is stronger from a “protection against negative PR if someone violates these principles” perspective.

I think EA would need quite a bit more to "fairly claim to be at no fault whatsoever for potential future cases of stealing-to-give by signatories." Talk is relatively cheap, and its existence will convince few outside of EA if there is another major fraud-to-give scandal. More emphasis on honesty by community builders and leaders would help more with prevention, but would be less legible to the public.

I'm thinking that the only successful approaches would include something like committing to full and prompt disgorgement of the amount of any donations that were discovered to be tainted by fraud, with interest and with a pretty long look-back period (e.g., ten years). That should have at least some deterrent effect on would-be fraudsters. [1]

Moreover, it is hard to present EA as having a credible anti-fraud commitment if it retains benefits derived from fraudulent activity. While it's true that most social movements haven't made such a commitment, most movements don't have SBF in their rear-view mirror.

  1. ^

    The amount of the benefit received by EA may be somewhat less than the amount the fraudulent donor gave. However, the difference will usually be small enough that disgorging the amount given (vs. the amount of benefit) would be reasonable on optics grounds alone.

> I'm thinking that the only successful approaches would include something like committing to full and prompt disgorgement of the amount of any donations that were discovered to be tainted by fraud, with interest and with a pretty long look-back period (e.g., ten years). That should have at least some deterrent effect on would-be fraudsters. [1]

People keep saying that, but I don't buy the causal mechanism. It sounds like you think insuring somebody's bad decisions would have a deterrent effect on them? This seems obviously convoluted and implausible by normal lights, and I don't find the specific weird altruistic game theory stuff to be that convincing in comparison.

To me, there are at least two important differences here from the classic insurance moral hazard scenario:

  • In the classic scenario, moral hazard exists primarily because the act of insurance is transferring risk from the insured to the insurer. Where the insured/fraudster is insolvent and owes more than he can possibly pay, the effect of insurance here is to transfer risk from third parties (e.g., depositors) to the insurer [edited typo here].
    • That is generally going to be the case here because the "insurance" would only cover EA donations, not all victims of the fraud. 
      • For example, it makes no practical difference for moral hazard purposes if SBF ends up owing $5B or $5.2B to his victims. As long as the sum is more than the person could ever pay, the deterrent effect created by risk of loss should be the same.[1]
  • In the classic scenario, the insured doesn't care about the insurer's interests. In contrast, someone who cared enough to donate big sums to EA probably does care that the monies to repay fraud-tinged donations are going to come out of future EA budgets.[2]

From the point of view of an act-utilitarian fraudster, the EV of fraudulent donations looks something like:

EV = (odds of getting away with it) × (benefits from donations) − (odds of getting caught) × (EA reputational damage − *funds EA will get to keep*)

A disgorgement policy ensures the italicized term is $0, and can increase the "odds of getting caught" (by extending the window in which funds will be returned if the fraud is detected after some delay).
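
As a rough numerical illustration of this deterrence argument (a minimal sketch; all probabilities and dollar values below are hypothetical, chosen only to show the direction of the effect):

```python
# Hypothetical EV calculation for an act-utilitarian fraudster,
# following the formula above. All numbers are invented for illustration.

P_CAUGHT = 0.5              # assumed odds the fraud is eventually detected
DONATION_BENEFIT = 100.0    # value (to the fraudster) of donations if undetected
REPUTATIONAL_DAMAGE = 80.0  # harm to EA's reputation if the fraud is exposed

def fraud_ev(funds_ea_keeps: float) -> float:
    """EV of fraudulent donations, from the fraudster's point of view."""
    return (1 - P_CAUGHT) * DONATION_BENEFIT - P_CAUGHT * (
        REPUTATIONAL_DAMAGE - funds_ea_keeps
    )

print(fraud_ev(30.0))  # no disgorgement, EA keeps some funds: EV = 25.0
print(fraud_ev(0.0))   # full disgorgement policy: EV = 10.0
```

Under these made-up numbers, committing to full disgorgement lowers the fraudster's EV (here from 25 to 10) without changing anything else about the scenario, which is the "at least some deterrent effect" being claimed.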

Of course, that's not going to perfectly deter anyone, but the claim was "at least some deterrent effect." Ensuring that the "getting caught" scenario has as little upside for the would-be fraudster as possible doesn't strike me as weird altruistic game theory stuff. Trying to ensure that the criminal accrues no net benefit from his crimes is garden-variety penological theory.

  1. ^

    And it would be easy to get subrogation rights from the repaid victims if desired, such that the EA "insurers" could probably collect the marginal $200MM from the fraudster anyway if for some reason he was able to pay off the $5B.

  2. ^

    I agree that insurance from an insurance firm would create a moral hazard risk. But it's unlikely non-EA insurance could be obtained for a risk like this at a reasonable price. 

I'm aware of this line of argument. I just don't buy it, and find the thinking around this topic somewhat suspicious. For starters, you shouldn't model potential fraudsters as pure act utilitarians, but as people with a broad range of motivations (or at least your uncertainty distribution should include a broad range of motivations). 

Which motivations might someone be worried about if they were caught committing fraud, of which some of the fraudulent money went to charity?

  • They might go through the criminal justice system and face criminal penalties
  • They might get sued, and face civil penalties
  • They might have a low reputation and eg have mean things written about them on the internet
  • Their friends and/or family might disown them, making their direct social lives worse
  • Crazed people with nothing to lose might be violent towards the fraudster and/or their family
  • Their freedom to act might be restricted, reducing their future abilities to accomplish their goals (altruistic or otherwise).
  • The charitable funds they donated might be returned (with or without penalties), making it harder for the fraudster's altruistic goals to be accomplished.

Given this wide array of motivations, we should observe that while some motivations (or maybe just one) suggest a credible commitment to return money would have a deterrent effect, most of these motivations should be neutral towards, or even favor, such an insurance scheme. So it's far from obvious how this all nets out, and I'm confused why others haven't brought up the obvious counterarguments before.

As another quick example, the two positions

  1. We should return all money for optics reasons, as the money is worth less than the PR hit and 
  2. We should credibly commit to always return donated grift money + interest when grift is caught, as this will create a strong deterrence for future would-be 'aligned' grifters

may well be internally consistent positions by themselves. But they're close to mutually exclusive[1], and again it's surprising that nobody else ever brought this up.

Generally, I find it rather suspicious that nobody else brought up the extremely obvious counterarguments before I did. I think I might be misunderstanding something, like perhaps there's a social game people are playing where the correct move to play is rational irrationality and for some reason I didn't "get the memo." [2](I feel this way increasingly often about EA discussions).

  1. ^

    Edit: to spell it out further, it seems like you are modeling SBF, or another theoretical fraudster, as genuinely interested in helping EA-related causes through donations. If you want to disincentivize a fraudster who wants to help the EA movement, you should theoretically precommit to doing something that would harm EA, not something that would help EA. So precommitting to returning donations doesn't make sense if you also, separately, think that the optics of the precommitment make it the best choice for EA.[3] You are effectively telling the fraudster: "If I find out you've done fraud, I'll do the thing you want me to do anyway." For the precommitment to make sense, you have to additionally assume that the fraudster disagrees with you about returning funds being net-positive given the circumstances, or has motives other than "helping EA-related causes."

  2. ^

    If someone who knows what's up did get the memo, feel free to pass it along and I will delete my comments on this topic.

  3. ^

    An even stronger precommitment than not returning money (and therefore getting the bad optics) would be to attempt to destroy the movement via infighting after such a case of grift has been revealed; one way to make the precommitment credible is of course to publicly do something similar in earlier cases.

The claim was "at least some deterrent effect," not "strong deterrence." I don't have to model a 100% act utilitarian to support the claim I actually made. 

I am not convinced that partial "insurance" would diminish other reasons not to commit fraud. In my own field (law), arguing for a lesser sentence because a charity acted to cover a fraction of your victim's losses[1] is just going to anger the sentencing judge. And as explained in my footnote above, the "insurers" could buy subrogation rights from the victims to the extent of the repayment and stand in their shoes to ensure civil consequences. 

"Crazed people with nothing to lose" are unlikely to be meaningfully less crazed because the fraudster's bankruptcy estate recovered (e.g.) 75% rather than 70% of their losses. Same for other social consequences. At some point, the legal, social, and other consequences for scamming your victims don't meaningfully increase with further increases in the amount that your primary victims ultimately lose.

  1. ^

    Although I haven't studied this, I suspect that the base rate of charity-giving fraudsters who give all -- or even most -- of the amounts they steal from their victims to charity is pretty low.

Flagging here where I think the next problem can arise from: AI doomers taking matters into their own hands to slow down AI progress, or to bring attention to the issue, through tactics/strategies reminiscent of those of the Earth Liberation Front in the late '90s/early '00s.

Perceiving something as existential & impending, and the typical options for change as inconsequential, creates a logic of drastic escalation. I've heard this sentiment expressed as strongly as tepid support for triggering a great power war that would likely decimate semiconductor production, because it would slow down AI development.

If you told me that five years from now a fringe EA did something like send a pipe bomb to an AI exec, I would not be surprised. We as a community should be on guard against doomer unilateralists doing something extreme in the name of an EA cause area.
