The Forum is getting a bit swamped with discussions about Bostrom's email and apology. We've created this thread as a place to discuss the topic.
All other posts on this topic will be marked as “Personal Blog” — people who opt in or have opted into seeing “Personal Blog” posts will see them on the Frontpage, but others won’t; they’ll see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)
(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)
Please also note that we have seen an influx of new accounts created to vote and comment over the past week, and we are aware that people who feel strongly about human biodiversity sometimes vote brigade on sites where the topic is being discussed. Keep in mind that voting and discussion on some topics may not be representative of the normal EA Forum user base.
If you choose to participate in this discussion, please remember Forum norms. Chiefly,
- Be kind.
- Stay civil, at the minimum. Don’t sneer or be snarky. In general, assume good faith. We may delete unnecessary rudeness and issue warnings or bans for it.
- Substantive disagreements are fine and expected. Disagreements help us find the truth and are part of healthy communication.
Please try to remember that most people on the Forum are here for collaborative discussions about doing good.
I wholeheartedly agree that EA must remain welcoming to neurodiverse people. Part of how we do that is being graceful and forgiving toward people who inadvertently violate social norms in pursuit of EA goals.
But I worry this specific comment overstates its case by (1) leaving out both the "inadvertent" part and the "in pursuit of EA goals" part, which implies that we ought to be fine with gratuitous norm violation, and (2) incorporating political bias. You say:
I don't want to speak for anyone with autism. However, as best I can tell, this is not at all a universal view. I know multiple people who thrive in lefty spaces despite seeming (to me at least) like high decouplers. So it seems more plausible to me that this isn't narrowly true of high decouplers in "woke" spaces; it's broadly true of high decouplers in communities whose political/ethical beliefs the decoupler does not share.
I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I'm not on the autism spectrum) the really big taboos - like race and intelligence - are usually obvious, as is the fact that you're supposed to be careful when talking about them. The text of Bostrom's email demonstrates he knows exactly what taboos he's violating.
I also think we should be careful not to mistake correlation for causation when looking at EA's success and the traits of many of its members. For example, you say:
There are valuable EA founders/popularizers who seem pretty adept at navigating taboos. For example, every interview I've seen with Will MacAskill involves him reframing counterintuitive ethics to fit with the average person's moral intuitions. This seems to have been really effective at popularizing EA!
I agree that there are benefits from decoupling. But there are clear utilitarian downsides too. Contextualizing a statement is often necessary to anticipate its social welfare implications. Contextualizing therefore seems necessary to EA.
Finally, I want to offer a note of sympathy. While I don't think I'm autistic, I do frequently find myself at odds with mainstream social norms. I prefer more direct styles of communication than most people. I'm a hardcore utilitarian. Many of the left-wing shibboleths common among my graduate school classmates I find annoying, wrong, and even harmful. For all these reasons, I share your feeling that EA is an "oasis." In fact, it's the only community I'm a part of that reaffirms my deepest beliefs about ethics in a clear way.
But ultimately, I think EA should not optimize to be that sort of reaffirming space for me. EA's goal is wellbeing maximization, and anything other than wellbeing maximization will sometimes - even if only rarely - have to be compromised.