

Enthusiastic utilitarian and moral realist.  I made this anonymous account to talk about the controversial stuff.


I agree!  When I say "wing" I mean something akin to "AI risk" or "global poverty" - i.e., an EA cause area that specific people are working on.

I agree!  Greater leniency across cultural divides is good and necessary.

But I also think that:

(1) That doesn't apply to the Bostrom letter

(2) There are certain areas where we might think our cultural norms are better than many alternatives; in these situations, it would make sense to tell the person from the alternate culture about our norm and try to persuade them to abide by it (including through social pressure).   I'm pretty comfortable with the idea that there's a tradeoff between cultural inclusion and maintaining good norms, and that the optimal balance between the two will be different for different norms.


I'm no cultural conservative, but norms are important social tools we shouldn't expect to entirely discard.  Anthropologist Joe Henrich's writing really opened my eyes to how norms pass down complex knowledge that would be inefficient for an individual to try to learn on their own.

I wholeheartedly agree that EA must remain welcoming to neurodiverse people.  Part of how we do that is being graceful and forgiving toward people who inadvertently violate social norms in pursuit of EA goals.

But I worry this specific comment overstates its case by (1) leaving out both the "inadvertent" part and the "in pursuit of EA goals" part, which implies that we ought to be fine with gratuitous norm violation, and (2)  incorporating political bias.  You say:

If we impose standard woke cancel culture norms on everybody in EA, we will drive away [neurodiverse people]. Politically correct people love to Aspy-shame.  They will seek out the worst things a neurodiverse person has ever said, and weaponize it to destroy their reputation, so that their psychological traits and values are allowed no voice in public discourse.

I don't want to speak for anyone with autism.  However, as best I can tell, this is not at all a universal view.  I know multiple people who thrive in lefty spaces despite seeming (to me at least) like high decouplers.  So it seems more plausible to me that this isn't narrowly true about high decouplers in "woke" spaces; it's broadly true about high decouplers in communities whose political/ethical beliefs the decoupler does not share.

I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I'm not on the autism spectrum) the really big taboos - like race and intelligence - are usually obvious, as is the fact that you're supposed to be careful when talking about them.  The text of Bostrom's email demonstrates he knows exactly what taboos he's violating.

I also think we should be careful not to mistake correlation for causation when looking at EA's success and the traits of many of its members.  For example, you say:

[if we punish social norm violation] we will drive away everybody with the kinds of psychological traits that created EA, that helped it flourish, and that made it successful

There are valuable EA founders/popularizers who seem pretty adept at navigating taboos.  For example, every interview I've seen with Will MacAskill involves him reframing counterintuitive ethics to fit with the average person's moral intuitions.  This seems to have been really effective at popularizing EA!

I agree that there are benefits from decoupling.  But there are clear utilitarian downsides too.  Contextualizing a statement is often necessary to anticipate its social welfare implications.  Contextualizing therefore seems necessary to EA.

Finally, I want to offer a note of sympathy.  While I don't think I'm autistic, I do frequently find myself at odds with mainstream social norms.  I prefer more direct styles of communication than most people.  I'm a hardcore utilitarian.  Many of the leftwing shibboleths common among my graduate school classmates I find annoying, wrong, and even harmful.  For all these reasons, I share your feeling that EA is an "oasis."  In fact, it's the only community I'm a part of that reaffirms my deepest beliefs about ethics in a clear way.

But ultimately, I think EA should not optimize to be that sort of reaffirming space for me.   EA's goal is wellbeing maximization, and anything other than wellbeing maximization will sometimes - even if only rarely - have to be compromised.

Lying to meet goals != contextualizing

It's hard for me to follow what you're trying to communicate.  Are you saying that high contextualizers don't/can't apply their morals universally while high decouplers can?  I don't see any reason to believe that.   Are you saying that decouplers are more honest?  I also don't see any reason to believe that.

I'd be very interested in seeing a more political wing of EA develop.   If folks like me who don't really think the AGI/longtermist wing is very effective can nonetheless respect it, I'm sure those who believe political action would be ineffective can tolerate it.

I'm not really in the position to start a wing like this myself (currently in grad school for law and policy) but I might be able to contribute efforts at some point in the future (that is, if I can be confident that I won't tank my professional reputation through guilt-by-association with racism).

I think this is a much needed corrective.

I frequently feel there's a subtext here that high decouplers are less biased (whether the bias is racial, confirmation, in-group, status-seeking, etc.).  Sometimes it's not even a subtext.

But I don't know of any research showing that high decouplers are less biased in all the normal human ways.  The only trait "high decoupler" describes is tending to decontextualize a statement.  And context frequently has implications for social welfare, so it's not at all clear that high decoupling is, on average, useful to EA goals.

I say all this while considering myself a high decoupler!

I think it is trivially true that we sometimes face a tradeoff between utilitarian concerns arising from social capital costs and epistemic integrity (see this comment).

But I don't think the Bostrom situation boils down to this tradeoff.  People like me believe Bostrom's statement and its defenders don't stand on solid epistemic ground.  But the argument for bad epistemics has a lot of moving parts, including (1) recognizing that the statement and its defenses should be interpreted to include more than their most limited possible meanings, and that its omissions are significant, (2) recognizing the broader implausibility of a genetic basis for the racial IQ gap, and (3) recognizing the epistemic virtue in some situations of not speculating about empirical facts without strong evidence.

All of this is really just too much trouble to walk through for most of us.  Maybe that's a failing on our part!  But I think it's understandable.  To convincingly argue points (1) through (3) above I would need to walk through all the subpoints made on each link.  That's one heck of a comment.

So instead I find myself leaving the epistemic issues to the side, and trying to convince people that voicing support for Bostrom's statement is bad on consequentialist social capital grounds alone.  This is understandably less convincing, but I think the case for it is still strong in this particular situation (I argue it here and here).

I'm arguing not for a "conflict of principles" but a conflict of impulses/biases.  Anecdotally, I see a bias for believing that the truth is probably norm-violative in rationalist communities.  I worry that this biases some people such that their analysis fails to be sufficiently consequentialist, as you describe.

Decoupling by definition ignores context.  Context frequently has implications for social welfare.  Utilitarian goals therefore cannot be served without contextualizing.

I also dispute the idea that the movement's founders were high decoupler rationalists to the degree that we're talking about here.  While people like Singer and MacAskill aren't afraid to break from norms when useful, and both (particularly Singer) have said some things I've winced at,  I can't imagine either saying anything remotely like Bostrom's statement, nor thinking that defending it would be a good idea.
