I agree, but I don't think anyone involved here has advocated for excluding people from Manifest or ~anything else based on "genetics or other immutable characteristics"?
It just seems Orwellian to describe "person A doesn't want to associate with person B because of person B's beliefs" as "person B has exclusionary beliefs". Person A may or may not be justified, but obviously they are the one being exclusionary.
It seems like quite a leap from your premises (1) and (2) to “socialism”. You don’t clarify much what you mean by it in this post, but as far as I can tell I agree with your premises while not agreeing with socialism.
One reason is that I am personally interested in being altruistic, but:
So “try to increase the size of government” isn’t that attractive as an end goal.
Another reason is that the actual track record of well-intentioned communist revolutions seems extremely bad: neutral at best (Cuba?) and catastrophic at worst.
At some margin I think this would become an important consideration (e.g., advocating some policy that made being non-vegan super expensive), but at the current margin it seems like these costs are just extremely small relative to the suffering reduction such policies induce.
Is there a cost-effectiveness analysis that takes these costs into account? I don't think I've seen one.
What specifically in farmed animal welfare do you think beats GiveWell? (GiveWell is a specific thing you can actually donate money to; "farmed animal welfare" is not)
Farmed animal welfare is politically controversial in a way that GiveWell is not. This is potentially bad:
- Maybe people who don't care about farmed animals are correct
- Farmed animal advocacy is so cost-effective because, if successful, it forces other people (meat consumers? meat producers?) to bear the costs of treating animals better. I'm less comfortable spending other people's money to make the world better than spending my own money to make the world better
- Increased advocacy for farm animals might just provoke increased counter-advocacy from farms, so money gets burned on both sides rather than the world being improved
- It's hard to be as confident in political interventions - humans and groups of humans are much less predictable than, e.g., the malaria parasite
- Farmed animal welfare sometimes seems overly connected with dubious left-wing politics (e.g. https://forum.effectivealtruism.org/posts/5iCsbrSqLyrfP55ry/concerns-with-ace-s-recent-behavior-1)
This kind of deal makes sense, but IMO it would be better for it to be explicit than implicit, by actually transferring money to people with a lot of positive impact (maybe earmarked for charity), perhaps via higher salaries, or something like equity.
FWIW this loss of control over resources was a big negative factor when I last considered taking an EA job. It made me wonder whether the claims of high impact were just cheap talk (actually transferring control over money is a costly signal).
Yeah, that explanation seems right. But - the high-decoupler rationalists are the counterexample to your claim! That group is valuable to EA, and EA should make sure it remains appealing to them (including the ones not currently in EA - the world will continue to produce high-decoupler rationalists). Which is not really consistent with the strong norm enforcement you're advocating.
99% is really too high. It's more than 1% likely that you're just in a very strong ideological filter bubble (these are surprisingly common; for example, I know very few Republican voters even though they're roughly half the US). The fact that this is a strong social norm makes that more likely.
I already said this, but I don't really understand how you can be so confident in this given the current controversy. It seems pretty clear that a sizeable fraction of current members don't agree with "saying 'different races have different average IQs' is irredeemably racist". Doesn't that disprove your claim? (current members are at least somewhat representative of prospective members)
I think a historical strength of EA has been its ability to draw from people disconnected from mainstream social norms, especially because there is less competition for such people.
IMO this post violates its own proposed rule of avoiding discussion of race science on the EA forum.
Similarly, there is some tension between the ideas “EA should publicly disavow race science” and “EA should never discuss race science”. Normally, taking a stance invites discussion.