I don't have any strong views on whether this user should have been given a temporary ban vs. a warning, but (unless the ban was for a now-deleted comment or a private message, both of which are possible – feel free to correct me if so), from reading their public comments, I think it's inaccurate (or at least misleading) to describe them as "promoting violence". Specifically, they do not seem to have been advocating that anyone actually use violence, which is what I think the most natural interpretation of "promoting violence" would be. Instead, they appear to have been expressing that they'd emotionally want people who hypothetically did the thing in question to face violence, that (in the hypothetical example) they'd feel the urge to use violence, and so on.
I'm not defending their behavior, but it does feel importantly less bad than what I initially assumed from the moderator comment, and I think it's important to use precise language when making these sorts of public accusations.
Worth noting that in humans (unlike in most other primates), status isn't determined solely by dominance (e.g., control via coercion), but is also significantly influenced by prestige (e.g., voluntary deference due to admiration). While both dominance and prestige play a large role in determining status among humans, if anything prestige probably plays the larger role.
(Note – I'm not an expert in anthropology, and anyone who is should feel free to chime in, but this is my understanding given my level of knowledge in the area.)
Note to Israelis who may be reading this: I did not upvote/downvote this post and I do not intend to vote on such posts going forward. I think you should do the same.
You're free to vote (or refrain from voting) how you want, but the suggestion to others feels illiberal to me in a way that I think is problematic. Would you also suggest that any Palestinians reading this post refrain from voting on it? (Or, going a step further, would you suggest Kenyan EAs refrain from voting on posts about GiveDirectly?) Personally, I think both Israeli EAs and Palestinian EAs should feel comfortable voting on posts like this, and I'd worry about the norms in the community if we tell people not to vote/otherwise voice their perspective based on demographics (even more so if these suggestions are asymmetrical instead of universal).
Another group that naturally could be in a coalition with those two – parents who just want clean air for their children to breathe, from a pollution perspective unrelated to covid. (In principle, I think many ordinary adults should also want clean air for themselves to breathe due to the health benefits, but in practice I expect a much stronger reaction from parents who want to protect their children's lungs.)
My problem with the post wasn't that it used subpar prose or "could be written better", it's that it uses rhetorical techniques that make actual exchange of ideas and truth-seeking harder. This isn't about "argument style points", it's about cultivating norms in the community that make it easier for us to converge on truth, even on hard topics.
The reason I didn't personally engage with the object level is that I didn't feel like I had anything particularly valuable to say on the topic. I wasn't avoiding stating my object-level views (if he had written a similar post in a style I didn't take issue with, I wouldn't have responded at all), and I don't want other people in the community to avoid engaging with the ideas either.
I feel like this post is doing something I really don't like, which I'd categorize as something like "instead of trying to persuade with arguments, using rhetorical tricks to define terms in such a way that the other side is stuck defending a loaded concept and has an unjustified uphill battle."
For instance:
let us be clear: hiding your beliefs, in ways that predictably leads people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.
I mean, no, that's just not how the term is usually used. It's misleading to hide your beliefs in that way, and you could argue it's dishonest, but it's not generally what people would call a "lie" (or if they did, they'd use the phrase "lie by omission"). One could argue that lies by omission are no less bad than lies by commission, but I think this is at least nonobvious, and also a view that I'm pretty sure most people don't hold. You could have written this post with words like "mislead" or "act coyly about true beliefs" instead of "lie", and I think that would have made this post substantially better.
I also feel like the piece weirdly implies that it's dishonest to advocate for a policy that you think is second best. Like, this just doesn't follow – someone could, for instance, want a $20/hr minimum wage, and advocate for a $15/hr minimum wage based on the idea that it's more politically feasible, and this isn't remotely dishonest unless they're being dishonest about their preference for $20/hr in other communications. You say:
many AI Safety people being much more vocal about their endorsement of RSPs than their private belief that in a saner world, all AGI progress should stop right now.
but this simply isn't contradictory – you could think a perfect society would pause but that RSPs are still good and make more sense to advocate for given the political reality of our society.
That's fair. I also don't think simply putting a post on the forum is in itself enough to constitute a group being an EA group.
I don't think that's enough to consider an org an EA org. Specifically, if that were all it took for an org to be considered an EA org, I'd worry about how it could be abused by anyone who wanted to get an EA stamp of approval (which might be what happened here – note that that post is the founders' only post on the forum).
[Just commenting on the part you copied]
Feels way too overconfident. Would the cultures diverge due to communication constraints? Seems likely, though I could also imagine pathways by which significant divergence wouldn't happen, such as if a singleton had already been reached.
Would technological development diverge significantly, conditional on the above? Not necessarily, imho. If we don't have a self-sufficient colony on Mars before we reach "technological maturity" (e.g., with APM and ASI), then presumably not – tech would hardly progress further at all at that point.
Would tech divergence imply that each world couldn't truly track whatever weapons the other had? Again, not necessarily. Perhaps one world would have better tech and could simply surveil the other.
Would there definitely be a first-strike advantage? Again, that seems debatable.
Et cetera.
I think there's a debate to be had about when it's best for political decisions to be decided by what the public directly wants, vs. when it's better for the public to elect representatives who make decisions based on a combination of their personal judgment and deference to domain experts. I don't think this is obviously a case where the former makes more sense.
Sure, but the alternative isn't the money being spent half on AMF and half on the LTFF – it's instead some combination of other USG spending, lower US taxes, and lower US deficits. I suspect the more important factor in whether this is good or bad will instead be the direct effects of this on nuclear risk (I assume some parts of the upgrade will reduce nuclear risk – for instance, better sensors might reduce the chances of a false positive of incoming nuclear weapons – while other parts will increase the risk).
Not necessarily – the upgrade likely includes many measures to reduce the chances that a first strike from adversaries could nullify the US stockpile (efforts towards this goal could include both hardening and redundancy), thus preserving US second-strike capabilities.
I'm sure ~everyone involved considers nuclear war a negative-sum game. (They likely still think it's preferable to win a nuclear war than to lose it, but they presumably think the "winner" doesn't gain as much as the "loser" loses.)
Yeah, my sense is that multiple countries will upgrade their arsenals soon. I'm legitimately uncertain whether this will on net increase or decrease nuclear risk (largely I'm just ignorant here – there may be an expert consensus that I'm unaware of, but I don't think the immediate reaction of "spending more money on nukes increases nuclear risk" is obviously correct). Even if it would be better for everyone not to upgrade, it may be hard to coordinate on avoiding it (though it may still be worth trying).
I think it's not crazy to think there might be a relative policy window now to change course, given these reasons.