Clarification for anyone reading this comment without having read the article – the article calls for governments to adopt clear policies involving potential preemptive military strikes in certain circumstances (specifically, against a hypothetical "rogue datacenter", since such datacenters could be used to build AGI), but it is not calling for any specific military strike right now.
Sorry, after rereading my comment, it comes off as more hostile than I was intending (currently sleep-deprived, which sometimes has that effect on me). The intended tone of my comment was more like "this move feels like it could lead to epistemics slipping or to newcomers being confused" and not like "this move violates some important norm of good behavior".
Regarding your specific question – no, I'm obviously not expecting you to preface every statement with "in my opinion". Most writing doesn't include "in my opinion" before every claim, but most writing also doesn't set off a flag in my head of "huh, this statement is stated as a fact but is actually a matter up for debate", which is what happened here.
"given his incorrect beliefs about AI-risk"
I really don't like the rhetorical move you're making here. You (like many people on this forum) think his beliefs are incorrect; others on this forum think they're correct. Insofar as there's no real consensus on which side is right, I'd strongly prefer that people (on both sides) use language like "given his, in my opinion, incorrect beliefs" rather than simply stating as a matter of fact that he's incorrect.
I don't think the crux here is about nanofactories – I'd imagine that if Eliezer considered a world identical to ours except that nanofactories were impossible, he'd place (almost) as high a probability on doom (though he'd presumably expect doom to be somewhat more drawn out).
I feel like the power differential between community builders and new members decreases over time as the new member "graduates" into being a longer-term member, so perhaps the policy could apply only for the first few months of a member's involvement?
This assumes that the only benefit of better public perception is that it brings in more people. In many instances, better perception could also make various interventions work better (for example, when an intervention depends on societal buy-in).
Responding just to the comment about StrongMinds – mental health is an incredibly complicated issue, and depression in particular has many contributing causes, so even if some people in sub-Saharan Africa are depressed because of bad governance, others may be depressed for reasons that mental health services would alleviate. In any event, the fact that the prevalence of depression in sub-Saharan Africa is nowhere near 100% means the statement "I'd also be quite depressed if my government was as dreadful as most governments are in sub-Saharan Africa" is basically a non sequitur.
Yeah, my point is that it's (basically) disjunctive.
I notice that some of these forecasts imply different paths to TAI than others (most obviously, WBE assumes a different path than the rest). Does taking a linear average make sense in that case? Suppose, for instance, that you think WBE is likely moderately far away, while the other paths are more uncertain and could arrive very soon or very late. A constant weight on the WBE forecast then wouldn't match your actual views.
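To illustrate the kind of divergence I have in mind (using my own notation, not anything from the post): let $F_i(t)$ be path $i$'s forecast probability of TAI by year $t$, and $w_i$ a fixed weight. A linear average and a disjunctive combination give

$$P_{\text{avg}}(t) = \sum_i w_i\, F_i(t), \qquad P_{\text{disj}}(t) = 1 - \prod_i \bigl(1 - F_i(t)\bigr),$$

where the disjunctive form assumes the paths succeed (roughly) independently. As a toy example, if two paths each assign 50% by 2050, the linear average gives 50%, while the disjunctive combination gives $1 - 0.5 \times 0.5 = 75\%$.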
Yeah, I think that's better