There's a psychological phenomenon whose name I can't remember, in which a person subconsciously tries to make others around them feel stressed and anxious in order to mitigate their own stress and anxiety.

I see a lot of this in mainstream climate change reporting, and I'm starting to notice it more on here with regard to AI x-risk.

Basically, I find seeing posts with titles like "We're All Gonna Die with Eliezer Yudkowsky" extremely tough emotionally, and they make me use the forum less. I suspect I am not the only one.

Obviously talking about significant x-risks is going to be stressful. I do not support people self-censoring when trying to provide realistic appraisals of our current situation; that seems clearly counter-productive. I also understand that the stressful nature of dealing with x-risk means that some people will find it too mentally tough to contribute.

At the same time, there are emotional wins to be had, and avoiding the psychological phenomenon I mentioned at the start seems like one of them. I think a decent heuristic for doing so is asking 'what action am I asking readers to take as a result of this information', and making sure you have a good answer.

Sticking with the Eliezer theme, his letter to Time performs well on this metric: emotionally harrowing, but with a clear call to support certain political initiatives.

In summary: AI x-risk is emotionally tough enough already, and I think some effort to avoid unnecessarily amplifying that difficulty is a valuable use of forum authors' time. I would certainly appreciate it as a user!

Comments



I want to push back a little against this. I care more about the epistemic climate than I do about the emotional climate. Ideally, in most cases, they don't trade off. Where they do, though, I would rather people prioritize the epistemic climate, since I think knowing what is true is core to EA, more so than the motivational aspect of it!

I agree with this. Where there is a tradeoff, err on the side of truthfulness.

tcelferact - when posting about X risk issues, I agree that we should be careful about what kinds of emotions we accidentally or intentionally evoke in readers.

When facing major collective threats, humans, as hyper-social primates, have a fairly limited palette of emotions that can get evoked, and that motivate collective action to address those threats.

Probably the least useful emotions are despair, resignation, depression, generalized anxiety, and 'black-pilled' pessimism. These tend to be associated with curling up in a fetal position (metaphorically), and waiting passively for disaster, without doing much to prevent it. It's a behavioral analog of 'catatonia' or 'tonic immobility' or 'playing dead'. (Which can be useful in convincing a predator to lose interest, but wouldn't be much use against OpenAI continuing to be reckless about AGI development.)

Possibly more useful are the kinds of emotions that motivate us to proactively rally others to our cause, to face the threat together. These emotions typically include anger, moral outrage, moral disgust, fury, wrath, indignation, a sense of betrayal, and a steely determination to hold the line against enemies. Of course, intense anger and moral outrage have some major downsides: they reinforce tribalism (us/them polarization), can motivate violence (that's kinda one of their main purposes), and they can inhibit rational, objective analysis.

But I think on balance, EAs tend to err a bit too far in the direction of trying to maintain rational neutrality in the face of looming X risks, and trying too hard to avoid anger or outrage. The problem is, if we forbid ourselves from feeling anger/outrage (e.g. on the grounds that these are unseemly, aggressive, primitive, or stereotypically 'conservative' emotions), we're not left with much beyond despair and depression.

In my view, if people in the AI industry are imposing outrageous X risks on all of us, then moral outrage is a perfectly appropriate response to them. We just have to learn how to integrate hot and strong emotions such as outrage with the objectivity, rationality, epistemic standards, and moral values of EAs. 

I totally agree with Dr. Miller. When we talk about AI risks, it's really important to find some balance between staying rational and acknowledging our emotions. Indeed, feeling down or hopeless can make us passive, but being angry or morally outraged can push us to face challenges together. The trick is to use these emotions in a productive way while still sticking to our values and rational thinking.

I don't object to folks vocalizing their outrage. I'd be skeptical of 'outrage-only' posts, but I think people expressing their outrage while describing what they are doing and wish the reader to do would be in line with what I'm requesting here.

Maybe there could be "AI risk: pessimistic/less actionable" and "AI risk: pessimistic and actionable" tags so that people who are feeling overwhelmed can reduce or even zero the weight that one or both of these tags have on their frontpage?

I think there's something epistemically off about allowing users to filter out only the bad AI news. The first tag doesn't have that problem, but I'd still worry about missing important info. I prefer the approach of just requesting that users be vigilant against the phenomenon I described.

My post has a long list of potential actions. "Steely determination to survive" (as per Geoffrey Miller's comment) is the vibe I'm going for.

Your post more than meets my requested criteria, thank you!
