
Only the young and the saints are uncompromised.[1] Everyone else has tried to do something in the world and eventually slipped up (or just been associated with someone else who slipped up).

Say that you are compromised if it is easy for someone to shame you. This takes lots of forms:

  • "We are all sinners", say the Christians.
  • "We are all privileged", say the identitarians.[2]
  • "We all have some self-serving motives", says everyone sensible.
  • "Even just living quietly we destroy things", say the environmentalists.
  • "Even our noblest actions fall horribly short of the mark", say the EAs.[3]
     

Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.[4]

But this is the normal course of a life, including highly moral lives. (Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!)

But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.) Empirically, this risk level is high enough that nearly everyone eventually bites it.[5]

e.g.

  • The EU is a Nobel Peace Prize-winning organisation you might have heard of. But their Common Agricultural Policy causes billions of dollars of damage to poor-world farmers, and has been called a "crime against humanity".
  • Mother Theresa's well-resourced clinics and hospices were remarkably incompetent and rarely prescribed pain medication, apparently under the belief that suffering brings us closer to God.
  • Gandhi's (and Nehru's) economic policies perpetuated poverty to the tune of millions of dead children equivalents.[6]
  • The American labour hero Cesar Chavez sold out undocumented Mexicans and opposed immigration in a classic protectionist scheme.
  • The Vatican.
  • and so on.
     

Despite appearances, this isn't a tu quoque defence of FTX! The point is to set the occasionally appropriate recriminations of the last month in context. You will make mistakes, and people will rightly hold you to them.[7] It will feel terrible. If you join a movement it will embarrass you eventually. Sorry.

(Someone could use the above argument to license risky behaviour - "in for a penny". But of course, like anything, being compromised is a matter of degree. Higher degrees are to be avoided fervently, insofar as they are downstream of actual harm, which they probably are.[8])

  1. ^

    You might think that the idle (like the chattering classes) aren't compromised, but they are. They stood by while the millions suffered, despite their remarkable power to help.

  2. ^

    Quite true, since we are all living now rather than, say, under feudalism.

  3. ^

    Maybe this sounds like a strawman to you, but consider our disdain for MacKenzie Scott giving her wealth to poor and artsy Americans.

  4. ^

    Bentham is perhaps the second most-demonised consequentialist - and yet he strikes me as nearly uncompromised. His much-mooted imperialism is not one, for instance. The most you can say is that he was a bit naive about state power, privacy, legibility.

  5. ^

    What's a prior then, if the incidence is 99%?

    Say 70 years in which to disgrace yourself. How many actions per year? Well, one tweet can do it, so potentially thousands. Call it 300. 

    99% / 21,000 ≈ a 0.005% risk of compromise per action. (Modelling actions as independent trials instead gives roughly 0.02% per action.) Clearly a very fragile estimate.
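    The back-of-the-envelope above can be sketched in a few lines (a minimal sketch, taking the assumed 99% lifetime incidence, 70 years, and 300 actions per year at face value):

    ```python
    years = 70
    actions_per_year = 300   # assumed: "one tweet can do it", so call it 300
    lifetime_risk = 0.99     # assumed incidence of ending up compromised

    n = years * actions_per_year  # 21,000 actions in a lifetime

    # Naive estimate: spread the lifetime risk evenly over all actions.
    naive = lifetime_risk / n

    # Independent-trials estimate: solve 1 - (1 - p)^n = lifetime_risk for p.
    per_action = 1 - (1 - lifetime_risk) ** (1 / n)

    print(f"naive:       {naive:.4%}")        # ~0.0047% per action
    print(f"independent: {per_action:.4%}")   # ~0.0219% per action
    ```

    Either way the per-action figure is tiny and the estimate hinges entirely on the guessed inputs.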

  6. ^

    He was also very racist, but this isn't the sort of thing that can plausibly fall under understandable moral risk.

  7. ^

    also sometimes wrongly

  8. ^

    I await a quantification of compromise, so that we can integrate it into our pretend calculi.

Comments

Strong upvote. I remember when someone I knew was being dragged on the internet, and I found some of the things they'd said really upsetting to me and my moral sensibilities, and I found it really helpful (without it necessarily changing my mind on how bad those things were!) for a friend to help me reflect on how much I had or hadn't yet priced in the selection filter of "find the worst things anyone has ever written or that they've ever done". (Sometimes it's right to judge people on the worst thing they've ever done, but I suspect not often).

Similarly, in the last 8 months of working at an EA org / in the EA community, it's been really helpful to be able to understand what's unusual about the orgs and community, and what's incredibly standard boilerplate (which might be good or bad - lots of normal ways of doing things are stupid) - talking to people from journalism, politics, etc has been great for contextualization.

For some reason, this post about criticisms of the Gates foundation has been really sticking with me.

Tiny note - I think that link redirects from youtube to here: https://www.seattletimes.com/seattle-news/not-many-speak-their-mind-to-gates-foundation/. I suggest just replacing it. 

Oops, thanks!

Habryka

I am confused by your list of examples. From my perspective more than half of the organizations/people on your list are indeed quite terrible and have caused greatly more harm than good. If people use these examples as a reason to be less risk-averse about causing great harm, then I think that would be an update in the wrong direction (indeed, I think the right lesson to learn here is something closer to "the road to hell is paved with good intentions", e.g. yes, if you are ambitious and have strong ethical convictions, it is really important that you remain cognizant that you might instead be causing great harm, and should probably think about that a good amount).

To go one by one for your list of examples: 

  • EU:
    • By my lights the EU seems really quite bad. Their pandemic response alone might be enough to make it go down as one of the worst institutions in history, and their general legal framework is one of the greatest blockers to economic growth in Europe. (This is not an obvious state of affairs, but I am not going to write a whole case against the EU here, which would take a while, and doesn't seem top priority)
  • Mother Theresa
    • Indeed, my sense is the ethical commitments of Mother Theresa's clinics were bad enough to cause much more suffering than she prevented. When I last looked into this, it seemed like she primarily redirected resources that would have gone to more sane efforts.
  • Gandhi's (and Nehru's)
    • My guess is Gandhi did actually make the world better
  • Cesar Chavez
    • The National Farm Workers Association seems like a pretty terrible organization, and has had a hugely distortive effect on politics and has caused really quite great harm. Farm unions show up all over the place whenever I look into crazy regulations and policies. I haven't looked a ton into the history of this, but my prior here would be that it's quite bad.
  • The Vatican.
    • I don't know why I would even have a prior on the Vatican being good for the world. 

So, out of your list of 5 organizations, 4 of them were really very much quite bad for the world, by my lights, and if you were to find yourself to be on track to having a similar balance of good and evil done in your life, I really would encourage you to stop and do something less impactful on the world. 

Like, these are not institutions and individuals that have been "compromised" or "tainted". These are examples that indeed seem to have caused solidly more harm than they have caused good, and we should take them primarily as lessons on what not to do.

I don't think actually ending up net-negative like this is predetermined, nor is it just a result of getting unlucky. I think it was ultimately predictable that these people/institutions would end up quite bad for the world, and we can avoid going down a similar path. I do think that involves a good amount of actually modeling the risk and being careful, and being aware of the skulls along a road filled with similarly ideologically committed people like us.

I read it not as a list of good actors doing bad things, but as a list of idealistic actors [at least in public perception] not living up to their own standards [standards the public ascribes to them].

I understand your point, but I think you are especially harsh on these examples, which would all require a lot of complex investigation into good vs harm done before writing them off as net harm. At least you could list potential good done by those people/orgs as well as harm, and state that the EV is very unclear.

The EU is very complex. Horrendous harm done on the farming-subsidies front, but what about the good in making war almost inconceivable within the union? And in allowing poor labourers to move freely and get better lives? To be fair, you do acknowledge the difficulty of assessing the EU harm/good tradeoff.

Mother Theresa and her organisation cared for people who were often completely neglected, and were a small part of opening up a global revolution in caring for the dying. Yes, the medical care wasn't good enough, and many of those her organisation cared for could have been cured with proper medical care. I'm open to this being net harm, but my sense is it was probably net good (though again, so hard to know).

Chavez supported hundreds of thousands of workers in getting improved pay and important public health measures like hand-washing at sites, protection from pesticides, clean drinking water, etc. Sure, he will have done harm, but enough to offset that good?

The Vatican is the head office of a large religious body that has probably done both enormous good and enormous harm over its 2,000-year history. Writing them off without explanation, apart from a link to the clear and horrendous sexual abuse atrocities exposed only in the last 50 years, seems too harsh.

Nick - strongly agree, esp. regarding the Vatican.

If one is a secular agnostic (like me) who doesn't believe in God, Jesus, salvation, or an infinitely long, infinitely blissful afterlife in Heaven, then the Catholic Church over the last couple millennia probably looks like a net negative (arguably).

But if one believes in all these things, then insofar as the Catholic Church 'saved' hundreds of millions of souls, it's been an enormous net positive.

We EAs need to be a bit more epistemically humble about these religious issues. 

After all, many of us believe in the Simulation Hypothesis, which would imply that almost any conceivable theology that any human has ever believed, has a non-zero probability of being true -- and our assessments of total utility generated by any particular person, movement, or organization should be modulated accordingly.

PS Folks who disagree-voted with this comment -- I'm genuinely curious why you disagree? 

I didn't vote it down, but I think giving the Catholic Church the "benefit of the doubt" is off-base. You could say the same about anyone doing bad -- "Maybe they're right on some level." The Catholic Church has simply done tons and tons of bad. And I think I'm saying this not just because of my personal hatred of the Catholic Church. https://www.losingmyreligions.net/ 

Matt -- I'm not arguing that we should give the Catholic Church the 'benefit of the doubt'. Only that if -- big if -- their theology and metaphysics are correct, and if they actually managed to 'save some souls' (switching their fate from infinite suffering in hell to infinite bliss in heaven), then their net consequentialist impact in the afterlife would totally swamp any evil they've done on Earth. 

You may think there's zero % chance their theology and metaphysics are correct, but their beliefs are basically a variant of a Simulation Hypothesis, in which human actions 'in simulation' (during mortal life) determine rewards 'out of simulation' (in the 'real' afterlife).

There's obviously a variant of Pascal's wager that raises some thorny problems here. And it applies equally to every other religion that posits reincarnation or an afterlife....

Do you know any good articles or posts exploring the phenomenon of "the road to hell is paved with good intentions"? In the absence of a thorough investigation, I'm tempted to think that "good intentions" is merely a PR front that human brains put up (not necessarily consciously), and that humans deeply aligned with altruism don't really exist, or are even rarer than it looks. See my old post A Master-Slave Model of Human Preferences for a simplistic model that should give you a sense of what I mean... On second thought, that post might be overly bleak as a model of real humans, and the truth might be closer to Shard Theory where altruism is a shard that only or mainly gets activated in PR contexts. In any case, if this is true, there seems to be a crucial problem of how to reliably do good using a bunch of agents who are not reliably interested in doing good, which I don't see many people trying to solve or even talk about.

(Part of "not reliably interested in doing good" is that you strongly want to do things that look good to other people, but aren't very motivated to find hidden flaws in your plans/ideas that only show up in the long run, or will never be legible to people whose opinions you care about.)

But maybe I'm on the wrong track and the main root cause of "the road to hell is paved with good intentions" is something else. Interested in your thoughts or pointers.

This is a great question and I'm sorry I don't have anything really probative for you. Puzzle pieces:

  • "If hell then good intentions" isn't what you mean. You also don't mean "if good intentions then hell". So you presumably mean some surprisingly strong correlation. But still weaker than that of bad intentions. We'd have to haggle over what number counts as surprising. r = 0.1?
     
  • Nearly everyone has something they would call good intentions. But most people don't exploit others on any scale worth mentioning. So the correlation can't be too high.
     
  • Good things happen, sometimes, despite the odds. We have a good theory of how this can happen in a world without good intentions, so I don't want to use this as strong evidence for good intentions. But good things still happen without competition and counter to incentive gradients.
     
  • In general I have a pretty high bar for illusionism, eliminativism, accusations of false consciousness, etc (something something phenomenal conservatism). 

    • In this case: we clearly have more information than others about our own intentions. (This might not be a lot on an absolute scale though.)
       
  • I buy the 'moral licensing' idea, where people's sense of moral duty is (very) finite but their cupidity is way less bounded. So you can think that good intentions are real but just run out faster. Shard seems like a baroque but empirically adequate version of this.
     
  • I think I buy the PR spokesperson account of our internal narrative / phenomenal consciousness. But the spokesperson isn't limited to retconning naive egoism, since we know that other solutions are evolutionarily stable in the presence of precommitment and all the other dongles, and so it could be hiding those too.
     
  • I could look up the psychology literature but I'm not sure it would update either you or me.

I really think egoism strains to fit the data. From a comment on a deleted post:

[in response to someone saying that self-sacrifice is necessarily about showing off and is thus selfish]:

How does this reduction [to selfishness] account for the many historical examples of people who defied local social incentives, with little hope of gain and sometimes even destruction? 

(Off the top of my head: Ignaz Semmelweis, Irena Sendler, Sophie Scholl.)

We can always invent sufficiently strange posthoc preferences to "explain" any behaviour: but what do you gain in exchange for denying the seemingly simpler hypothesis "they had terminal values independent of their wellbeing"?

(Limiting this to atheists, since religious martyrs are explained well by incentives.)

The best you can do is "egoism, plus virtue signalling, plus plain insanity in the hard cases".

Pure selfishness can't work, since if everyone is selfish, why would anyone believe anyone else's PR? I guess there has to be some amount of real altruism mixed in, just that when push comes to shove, people who will make decisions truly aligned with altruism (e.g., try hard to find flaws in one's supposedly altruistic plans, give up power after you've gained power for supposedly temporary purposes, forgo hidden bets that have positive selfish EV but negative altruistic EV) may be few and far between.

Ignaz Semmelweis

This is just a reasonable decision (from a selfish perspective) that went badly, right? I mean if you have empirical evidence that hand-washing greatly reduced mortality, it seems pretty reasonable that you might be able to convince the medical establishment of this fact, and as a result gain a great deal of status/influence (which could eventually be turned into power/money).

The other two examples seem like real altruism to me, at least at first glance.

The best you can do is “egoism, plus virtue signalling, plus plain insanity in the hard cases”.

Question is, is there a better explanation than this?

Good point, thanks (though I am way less sure of the EU's sign). That list of examples is serving two purposes, which were blended in my head until your comment:

  1. examples of net-positive organisations with terrible mistakes (not a good list for this)
  2. examples of very well-regarded things which are nonetheless extremely compromised (good list for this)

You seem to be using compromised to mean "good but flawed", where I'm using it to mean "looks bad" without necessarily evaluating the EV.

Yet another lesson about me needing to write out my arguments explicitly.

Yeah, to be clear, my estimates of EU impact have pretty huge variance, so I also wouldn't describe myself as confident (though I do think these days the expected value seems more solidly in the negative). 

And yeah, that makes sense. 

So, out of your list of 5 organizations, 4 of them were really very much quite bad for the world, by my lights, and if you were to find yourself to be on track to having a similar balance of good and evil done in your life, I really would encourage you to stop and do something less impactful on the world. 

This view is myopic (doesn't consider the nth-order effects of the projects) and ahistorical (compares them to present-day moral standards rather than the counterfactuals of the time). 

I'm down with a lot of this, but I'm not sure about the EU. Given the history of war on the continent, I think the EU is a totally reasonable response. Hard to run the counterfactual.

By my lights the EU seems really quite bad. Their pandemic response alone might be enough to make it go down as one of the worst institutions in history, and their general legal framework is one of the greatest blockers to economic growth in Europe.

Why exactly do you think that greater economic growth would lead to higher wellbeing in Europe? From my understanding, in rich countries (US, UK, Germany, Australia), average subjective wellbeing has barely changed in recent decades, even though GDP greatly increased.

 

(However, environmental impact increased greatly in the same period, so this growth may have led to adverse consequences for other countries)

Thanks to Nina and Noah there's now a 2x2 of compromises which I've numbered:

The above post is a blend of all four.

Is this trying to make a directional claim? Like people (in the EA community? in idealistic communities?) should on average be less afraid / more accepting of being morally compromised? (On first read, I assume no, it seems like just a descriptive post about the phenomenon). 

FWIW, I think it's worth thinking about the 2 forms of "compromise" separately. (Being associated with something you end up finding morally bad / directly doing something you end up finding morally bad).  I think it's easier and more worthwhile to focus on avoiding the latter, but overall I'm not sure whether I've found a strong tendency that people overdo either of these things. 

There's some therapeutic intent. I'm walking the line, saying people should attack themselves only a proportionate amount, against this better reference class: "everyone screws up". I've seen a lot of over-the-top stuff lately from people (mostly young) who are used to feeling innocent and aren't handling their first shaming well.

Yes, that would make a good followup post.

I'm having trouble understanding this post. Maybe because I don't have a good sense of what 'being compromised' means, perhaps due to not being a native English speaker. I guess it's similar to 'being cancelled'? (Which to be fair isn't a clearly defined term either).

Also, just as Habryka pointed out, the examples leave me a bit confused about the message of the post. While clearly not your intention, parts of the post read as "Don't feel bad about making mistakes, some horrible organisations/people have made mistakes too!". I guess I just don't understand how some of the examples contribute to the point you want to make with this post.

Yeah it's not fully analysed. See these comments for the point.

The first list of examples is to show that universal shame is a common feature of ideologies (descriptive).

The second list of examples is to show that most very well-regarded things are nonetheless extremely compromised, in a bid to shift your reference class, in a bid to get you to not attack yourself excessively, in a bid to prevent unhelpful pain and overreaction. 

The first comment you linked makes things a lot clearer, thanks. But I'm still curious how exactly you define "being compromised".

Just shameable. 

That makes things even clearer, thank you!

Maybe people just aren't expecting emotional concerns to be the point of a Forum article? In which case I broke kayfabe, pardon.

As someone who only recently got involved with EA (I read some books in early 2022, then took an in-depth EA course last summer), I've been watching this and have a few general thoughts on what I've seen & what I think about EA going forward (Not just as an ideology but as a community). Sorry if this seems a bit off-topic since I can't say I feel particularly 'compromised'.

Personally, I was not surprised that FTX collapsed & was a fraudulent organization. I had been following the crypto space for a few years and SBF was an incredibly obvious con man in my view. I didn't even know that Future Fund was part of FTX until the situation blew up and I was like "oh great" when I heard EA come up in the news. 

I was rather disappointed in EA leadership for not being more cautious about SBF's intrusion into the space. When someone shows up from an industry well known for fraud and instantly becomes a meaningful % of overall movement funding, someone has to do an in-depth audit before distributing goodwill.

However, I think the way the EA community reacted to the situation on this forum was positive. I also think that leadership going through this experience will substantially reduce the likelihood of a similar (or worse) thing happening again in the future (both near and long term) since they will enforce much stronger regulations. It may have been good for EA to suffer a disaster that rattled everyone but has not destroyed the movement.

Overall, I was generally pleased with how most of the community responded (I think the infighting narrative was overblown). Going forward, I'm happy to describe myself as an Effective Altruist, and I think that we've got a bright future ahead. In fact, I think EA is likely to be more positively impactful than much of the rest of the trillion-dollar global nonprofit industry.

To people who have been beating themselves up about it: It wasn't your fault, don't beat yourselves up about it. I have seen most people react very intelligently and understand this for what it was: A major failure of leadership & regulations, and an opportunity to ensure that it doesn't happen again while also not getting derailed from the strong philosophical basis that EA has.

I'm mostly not talking about infighting; it's self-flagellation - but glad you haven't seen the suffering I have, and I envy your chill.

You're missing a key fact about SBF, which is that he didn't "show up" from crypto. He started in EA and went into crypto. This dynamic raises other questions, even as it makes the EA leadership failure less simple / silly.

Agree that we will be fine, which is another point of the list above.

You're missing a key fact about SBF, which is that he didn't "show up" from crypto. He started in EA and went into crypto. This dynamic raises other questions, even as it makes the EA leadership failure less simple / silly.

Ah, thank you, this does add good context. If I was an EA with any background in finance, I'd probably be very upset at myself about not catching on (a lot) earlier. Since he'd been involved in EA for so long, I wonder if he never truly subscribed to EA principles and has simply been 'playing the long game'. I've seen plenty of examples of SBF being a master at this dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us.

I had heard of him only a few times before the crash, and mostly in the context of YouTube clips where he basically described a Ponzi scheme, then said that it was 'reasonable'. The unfortunate thing is that FTX's exchange business model wasn't inherently fraudulent. There was likely no way for anyone outside the company to know he was lending out users' money against their own terms of service (apart from demanding a comprehensive audit).

Ultimately it doesn't look like he's going to get away with it, but it's good to be much more cautious with funders (especially those connected to an operating non-public company) going forward. 

Since he'd been involved in EA for so long,  I wonder if he never truly subscribed to EA principles and has simply been 'playing the long game'.

I explained in this comment and the comment reply below it why I think it's clear that he did believe in EA principles (except for the part of "EA principles" that are explicitly against fraud and so on).

I've seen plenty of examples of SBF being a master at this dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us

That's evidence that he's deceptive, but note that he meant to refer to stuff like corporate responsibility, not his utilitarianism.

Great post, and from a personal practical perspective this realisation can be so freeing and lifegiving.

Recognising this compromise and asking to be freed from the guilt, shame and paralysis it can bring, then being drawn back into a life of (hopefully) loving action is a key part of my life. 

"We have sinned through what we have done, and through what we have failed to do... Through ignorance, through weakness and through our own deliberate fault. We are truly sorry.... Forgive yourself, forgive others."

Say that you are compromised if it is easy for someone to shame you.
...
Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.[4]
...

You will make mistakes, and people will rightly hold you to them.[7] It will feel terrible.

I'm confused why you're including Guzey and Thiel in this list. It doesn't seem like Guzey's critique is a mistake that he should "feel terrible" about (although I only did a quick skim), and Torres mentions Thiel exactly once in that article:

Meanwhile, the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values.

Yes, if I was using the same implicature each time I should have said "MacAskill" for Guzey. Being associated with Thiel in any way is a scandal to some people, even though his far-right turn was after this talk.

It's not normative, it's descriptive - "shameable", not "ought to be ashamed".

bold to post memes on the EA forum. 

got karma to burn baby

Or did you mean ...?
[Image: a dog sitting with his coffee; flames in the background; foreground full of flowers and flowery bushes]

yes. The fire is in an entirely different room.

Thanks for writing this. It isn't exactly in the same line, but when I examine my career, I believe I have done more harm than good. It was writing Losing My Religions that really firmed up that conclusion.  I hope posts like these -- and the fascinating discussion -- help others to do more good than harm. https://www.losingmyreligions.net/ 

Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.[4] But this is the normal course of a life, including highly moral lives.... But the greater part of it being normal is that all action incurs risk, including moral risk.

It's not correct to say that action deserves criticism, but maybe correct to say that action receives criticism. The relevant distinction to make is why the action brought criticism on it, and that is different case-by-case. The criticism of SBF is because of alleged action that involves financial fraud over billions of dollars. The criticism of Singer with regard to his book Practical Ethics is because of distortion of his views on euthanasia. The criticism of Thiel with regard to his financial support of MIRI is because of disagreements over his financial priorities. And I could go on. Some of those people have done other things deserving or receiving criticism. The point is that whether something receives criticism doesn't tell you much about whether it deserves criticism. While these folks all risk criticism, they don't all deserve it, at least not for the actions you suggested with your links.

We're not disagreeing.

It could be that EA folks:

  1. risk criticism for all actions. Any organization risks criticism for public actions.
  2. deserve criticism for any immoral actions. Immoral actions deserve criticism.
  3. risk criticism with risky actions whose failure has unethical consequences and public attention. EA has drawn criticism for using expected value calculations to make moral judgments.

Is that the compromise you're alluding to when you write:

But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.) Empirically, this risk level is high enough that nearly everyone eventually bites it.

SBF claimed that, if events had gone differently, FTX would have recovered enough funds to carry on. In that hypothetical scenario, FTX's illegal dealing with Alameda would have gone unnoticed and would have had no adverse financial consequences. Then the risk-taking is still unethical but does not inspire criticism.

There is a difference between maximizing potential benefits and minimizing potential harms. It's not correct to say that minimizing unavoidable harms from one's actions has negative consequences for others and therefore those actions are immoral options, unless all one means by an immoral action is that the action had negative consequences for others.

I don't think there's unanimity about whether actions should be taken to minimize harms, maximize benefits, or some combination.

If all it means to "bite it" is that one takes actions with harmful consequences, then sure, everyone bites the bullet. However, that doesn't speak to intention or morality or decision-making. There's no relief from the angst of limited altruistic options in my knowing that I've caused harm before. If anything, honest appraisal of that harm yields the opposite result. I have more to dislike about my own attempts at altruism. In that way, I am compromised. But that's hardly a motive for successful altruism. Is that your point?

Good analysis. This post is mostly about the reaction of others to your actions (or rather, the pain and demotivation you feel in response) rather than your action's impact. I add a limp note that the two are correlated.

The point is to reset people's reference class and so salve their excess pain. People start out assuming that innocence (not-being-compromised) is the average state, but this isn't true, and if you assume this, you suffer excessively when you eventually get shamed / cause harm, and you might even pack it in.

"Bite it" = "everyone eventually does something that attracts criticism, rightly or wrongly"

You've persuaded me that I should have used two words:

  • benign compromise: "Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!"
  • deserved compromise: "all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.)"

Oh, I see. So by "benign" you mean shaming from folks holding common-sense but wrong conclusions, while by "deserved" you mean shaming from folks holding correct conclusions about consequences of EA actions. And "compromise" is in this sense, about being a source of harm.