There's a psychological phenomenon whose name I can't remember, in which a person subconsciously tries to make others around them feel stressed and anxious in order to mitigate their own stress and anxiety.
I see a lot of this in mainstream climate change reporting, and I'm starting to notice it more on here with regards to AI x-risk.
Basically, I find seeing posts with titles like "We're All Gonna Die with Eliezer Yudkowsky" extremely tough emotionally, and they make me use the forum less. I suspect I am not the only one.
Obviously talking about significant x-risks is going to be stressful. I do not support people self-censoring when trying to provide realistic appraisals of our current situation; that seems clearly counter-productive. I also understand that the stressful nature of dealing with x-risk means that some people will find it too mentally tough to contribute.
At the same time, there are emotional wins to be had, and avoiding the psychological phenomenon I mentioned at the start seems like one of them. I think a decent heuristic for doing so is asking 'what action am I asking readers to take as a result of this information', and making sure you have a good answer.
Sticking with the Eliezer theme, his letter to Time performs well on this metric: emotionally harrowing, but with a clear call to support certain political initiatives.
In summary: AI x-risk is emotionally tough enough already, and I think some effort to avoid unnecessarily amplifying that difficulty is a valuable use of forum authors' time. I would certainly appreciate it as a user!
tcelferact - when posting about X risk issues, I agree that we should be careful about what kinds of emotions we accidentally or intentionally evoke in readers.
When facing major collective threats, humans, as hyper-social primates, have a fairly limited palette of emotions that can get evoked, and that motivate collective action to address those threats.
Probably the least useful emotions are despair, resignation, depression, generalized anxiety, and 'black-pilled' pessimism. These tend to be associated with curling up in a fetal position (metaphorically), and waiting passively for disaster, without doing much to prevent it. It's a behavioral analog of 'catatonia' or 'tonic immobility' or 'playing dead'. (Which can be useful in convincing a predator to lose interest, but wouldn't be much use against OpenAI continuing to be reckless about AGI development.)
Possibly more useful are the kinds of emotions that motivate us to proactively rally others to our cause, to face the threat together. These emotions typically include anger, moral outrage, moral disgust, fury, wrath, indignation, a sense of betrayal, and a steely determination to hold the line against enemies. Of course, intense anger and moral outrage have some major downsides: they reinforce tribalism (us/them polarization), can motivate violence (that's kinda one of their main purposes), and they can inhibit rational, objective analysis.
But I think on balance, EAs tend to err a bit too far in the direction of trying to maintain rational neutrality in the face of looming X risks, and trying too hard to avoid anger or outrage. The problem is, if we forbid ourselves from feeling anger/outrage (e.g. on the grounds that these are unseemly, aggressive, primitive, or stereotypically 'conservative' emotions), we're not left with much beyond despair and depression.
In my view, if people in the AI industry are imposing outrageous X risks on all of us, then moral outrage is a perfectly appropriate response to them. We just have to learn how to integrate hot and strong emotions such as outrage with the objectivity, rationality, epistemic standards, and moral values of EAs.
I don't object to folks vocalizing their outrage. I'd be skeptical of 'outrage-only' posts, but people expressing their outrage while describing what they are doing, and what they wish the reader to do, would be in line with what I'm requesting here.