The most efficient point of intervention on this issue is for confident insiders to point out when a behavior has unintended consequences or is otherwise problematic.
The post mentions this. It's hard to get stable, non-superficial buy-in for this from the relevant parties; everyone wants to talk the talk. But when you do get it, the effect is very different from what you get by hiring another diversity & inclusion officer.
I know of a few Fortune 500 companies that take the idea that this stuff affects their bottom line seriously enough for people in positions of power to act on it; EA, by contrast, seems more like a social club.
I don’t see many people who want to figure out how much of a problem there is, and then apply e.g. utilitarianism to decide what to do about that. That would count as acting seriously.
I like Michael's distinction between the style and core of an argument. I'm editing this paragraph to clarify how I'm using a few words. When I talk about whether an argument is actually combative or collaborative, I mean whether it is more effective at goal-oriented problem-solving or at achieving political ends. By politics, I mean something like "social maneuvers taken to redistribute credit, affirmation, etc. in a way that is expected to yield selfish benefit". For example, questioning the validity of sources would be...
summary: changes in people's "values" aren't the same as changes in their involvement in EA, and this analysis treats the two as the same thing; also, some observations from my own friend group on value changes vs. retention
Given the wording used in "The Data" section, it sounds like no distinction was made here between "lowered involvement in EA with a change in preferences" and "lowered involvement in EA while remaining equally altruistic".
I can think of 3 people I've known who were previously EAs (by your six-month...