Scott Aaronson makes the argument that an Orthodox vs Reform analogy works for AI Alignment. I don't think I've really heard it framed this way before. He gives his takes on the beliefs of each group and there's a lot of discussion in the comments around that, but what's more interesting to me is thinking about what the benefits and risks of framing it this way might be.
Perhaps the Reform option gives people a way to take the arguments seriously without feeling like they are aligning themselves with something too radical. If your beliefs tend Reformist, you probably like differentiating yourself from those with more radical-sounding views. If your beliefs are more on the Orthodox side, maybe this is the "gateway drug" and more talent would find its way to your camp. This has a bit of the "bait-and-switch" dynamic I sometimes hear people complain about with EA (a complaint I don't at all endorse): that it pitches people on global health and animal welfare, but is really all about AI safety. As long as people genuinely do hold beliefs along Reformist lines, though, I don't see how that would be an issue.
Maybe the labels are just too sloppy: most people don't really fit into either camp, and it's bad to pretend that they do?
I'm not coming up with much else, but I'd be surprised if I weren't missing something.
I agree with most of this: the clusters are probably not very accurate, the religious terminology is divisive, and he's identifying with one of the camps while describing them.
Can you elaborate a bit more on why you think binary labels are harmful for further progress? Would you say they always are? How much of your objection is to these particular labels and how Scott defines them, and how much is that you don't think the space can be usefully divided into two clusters at all?
I find that, on topics that I understand well, I often object intuitively to labels on...