There's a psychological phenomenon whose name I can't remember, but essentially, a person subconsciously tries to make others around them feel stressed and anxious in order to mitigate their own stress and anxiety.

I see a lot of this in mainstream climate change reporting, and I'm starting to notice it more on here with regard to AI x-risk.

Basically, I find seeing posts with titles like "We're All Gonna Die with Eliezer Yudkowsky" extremely tough emotionally, and they make me use the forum less. I suspect I am not the only one.

Obviously talking about significant x-risks is going to be stressful. I do not support people self-censoring when trying to provide realistic appraisals of our current situation; that seems clearly counter-productive. I also understand that the stressful nature of dealing with x-risk means that some people will find it too mentally tough to contribute.

At the same time, there are emotional wins to be had, and avoiding the psychological phenomenon I mentioned at the start seems like one of them. I think a decent heuristic for doing so is asking 'what action am I asking readers to take as a result of this information', and making sure you have a good answer.

Sticking with the Eliezer theme, his letter to Time performs well on this metric: emotionally harrowing, but with a clear call to support certain political initiatives.

In summary: AI x-risk is emotionally tough enough already, and I think some effort to avoid unnecessarily amplifying that difficulty is a valuable use of forum authors' time. I would certainly appreciate it as a user!

Comments

I want to push back a little against this. I care more about the epistemic climate than I do about the emotional climate. Ideally in most cases they don't trade off. Where they do, though, I would rather people prioritize the epistemic climate, since I think knowing what is true is incredibly core to EA, more than the motivational aspect of it!

I agree with this. Where there is a tradeoff, err on the side of truthfulness.

tcelferact - when posting about X risk issues, I agree that we should be careful about what kinds of emotions we accidentally or intentionally evoke in readers.

When facing major collective threats, humans, as hyper-social primates, have a fairly limited palette of emotions that can get evoked, and that motivate collective action to address those threats.

Probably the least useful emotions are despair, resignation, depression, generalized anxiety, and 'black-pilled' pessimism. These tend to be associated with curling up in a fetal position (metaphorically), and waiting passively for disaster, without doing much to prevent it. It's a behavioral analog of 'catatonia' or 'tonic immobility' or 'playing dead'. (Which can be useful in convincing a predator to lose interest, but wouldn't be much use against OpenAI continuing to be reckless about AGI development.)

Possibly more useful are the kinds of emotions that motivate us to proactively rally others to our cause, to face the threat together. These emotions typically include anger, moral outrage, moral disgust, fury, wrath, indignation, a sense of betrayal, and a steely determination to hold the line against enemies. Of course, intense anger and moral outrage have some major downsides: they reinforce tribalism (us/them polarization), can motivate violence (that's kinda one of their main purposes), and they can inhibit rational, objective analysis.

But I think on balance, EAs tend to err a bit too far in the direction of trying to maintain rational neutrality in the face of looming X risks, and trying too hard to avoid anger or outrage. The problem is, if we forbid ourselves from feeling anger/outrage (e.g. on the grounds that these are unseemly, aggressive, primitive, or stereotypically 'conservative' emotions), we're not left with much beyond despair and depression.

In my view, if people in the AI industry are imposing outrageous X risks on all of us, then moral outrage is a perfectly appropriate response to them. We just have to learn how to integrate hot and strong emotions such as outrage with the objectivity, rationality, epistemic standards, and moral values of EAs. 

I totally agree with Dr. Miller. When we talk about AI risks, it's really important to find some balance between staying rational and acknowledging our emotions. Indeed, feeling down or hopeless can make us passive, whereas being angry or morally outraged can push us to face challenges together. The trick is to use these emotions in a productive way while still sticking to our values and rational thinking.

I don't object to folks vocalizing their outrage. I'd be skeptical of 'outrage-only' posts, but I think people expressing their outrage while describing what they are doing and wish the reader to do would be in line with what I'm requesting here.

Maybe there could be "AI risk: pessimistic/less actionable" and "AI risk: pessimistic and actionable" tags so that people who are feeling overwhelmed can reduce or even zero the weight that one or both of these tags have on their frontpage?

I think there's something epistemically off about allowing users to filter out only the bad AI news. The first tag doesn't have that problem, but I'd still worry about missing important info. I prefer the approach of just requesting that users be vigilant against the phenomenon I described.

My post has a long list of potential actions. "Steely determination to survive" (as per Geoffrey Miller's comment) is the vibe I'm going for.

Your post more than meets my requested criteria, thank you!
