
Epistemic status: personal judgements based on conversations with ~100 people aged 30+ who were worried about AI risk "before it was cool", and observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events.

Summary: There appears to be something like inter-generational trauma among people who think about AI x-risk — including some of the AI-focussed parts of the EA and rationality communities — which is 

  • preventing the formation of valuable high-trust relationships with newcomers that could otherwise be helpful to humanity collectively making better decisions about AI, and
  • feeding the formation of small pockets of people with a highly adversarial stance towards the rest of the world (and each other).

[This post is also available on LessWrong.]

Part 1 — The trauma of being ignored

You — or some of your close friends or colleagues — may have had the experience of fearing AI would eventually pose an existential risk to humanity, and trying to raise this as a concern to mainstream intellectuals and institutions, but being ignored or even scoffed at just for raising it. That sucked.  It was not silly to think AI could be a risk to humanity.  It can.

I, and around 100 people I know, have had this experience.

Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”  

At least 30 people I've known personally have adopted that attitude in a big way, and I estimate many more have as well.  In the remainder of this post, I'd like to point out some ways this attitude can turn out to be a mistake.

Part 2 — Forgetting that humanity changes

Basically, as AI progresses, it becomes easier and easier to make the case that it could pose a risk to humanity's existence.  When people didn’t listen about AI risks in the past, that happened under certain circumstances, with certain AI capabilities at the forefront and certain public discourse surrounding them.  These circumstances have changed, and will continue to change.  It may not be getting easier as fast as one would ideally like, but it is getting easier.  As with the stock market, it may be hard to predict how and when things will change, but they will.

If one forgets this, one can easily adopt a stance like "mainstream institutions will never care" or "the authorities are useless".  I think these stances are often exaggerations of the truth, and if one adopts them, one loses out on the opportunity to engage productively with the rest of humanity as things change.

Part 3 — Reflections on the Fundamental Attribution Error (FAE)

The Fundamental Attribution Error (wiki/Fundamental_attribution_error) is a cognitive bias whereby you too often attribute someone else's behavior to a fundamental (unchanging) aspect of their personality, rather than considering how their behavior might be circumstantial and likely to change.  With a moment's reflection, one can see how the FAE can lead to

  • trusting too much — assuming someone would never act against your interests because they didn't the first few times, and also
  • trusting too little — assuming someone will never do anything good for you because they were harmful in the past.

The second reaction could be useful for getting out of abusive relationships.  The risk of being mistreated over and over by someone is usually not worth taking when the alternative is merely the cost of finding new people to interact with.  So, in personal relationships, it can be healthy to just think "screw this" and move on from someone when they don't make a good first (or tenth) impression.

Part 4 — The FAE applied to humanity

If one has had the experience of being dismissed or ignored for expressing a bunch of reasonable arguments about AI risk, it would be easy to assume that humanity (collectively) can never be trusted to take such arguments seriously.  But, 

  1. Humanity has changed greatly over the course of history, arguably more than any individual has changed, so it's suspect to assume that humanity, collectively, can never be rallied to take a reasonable action about AI.
  2. One does not have the opportunity to move on and find a different humanity to relate to.  "Screw this humanity who ignores me, I'll just imagine a different humanity and relate to that one instead" is not an effective strategy for dealing with the world.

Part 5 — What, if anything, to do about this

If the above didn't resonate with you, now might be a good place to stop reading :)  Maybe this post isn't good advice for you to consider after all.

But if it did resonate, and you're wondering what you may be able to do differently as a result, here are some ideas:

  • Try saying something nice and civilized about AI risk that you used to say 5-10 years ago, but which wasn’t well received.  Don’t escalate it to something more offensive or aggressive; just try saying the same thing again.  Someone new might take interest today, who didn’t care before.  This is progress.  This is a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.
  • Try Googling a few AI-related topics that no one talked about 5-10 years ago to see if today more people are talking about one or more of those topics.  Switch up the keywords for synonyms. (Maybe keep a list of search terms you tried so you don't go in circles, and if you really find nothing, you can share the list and write an interesting LessWrong post speculating about why there are no results for it.)
  • Ask yourself if you or your friends feel betrayed by the world ignoring your concerns about AI.  See if you have a "screw them" feeling about it, and if that feeling might be motivating some of your discussions about AI.
  • If someone older tells you "There is nothing you can do to address AI risk, just give up", maybe don't give up.  Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.
Comments (3)



This is super interesting!

You mentioned "observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events."

What are you seeing as the effects on the young people in the community?

Thank you for this post. I do agree that institutions can consider AI risk quite seriously (e.g. banning harmful AI and mandating human oversight in the most recent EU AI white paper) and that their regard can increase over time (comparing the 2021 white paper with the 2020 one, which focuses on 'winning the race'). Still, some institutions may have a way to go (e.g. the specifics of 'algorithmic bias' measurement in the EU).

As a European seeking to advance AI safety, I will offer an anecdotal story: my college agreed to subsidize ski trips to address disadvantaged groups' exclusion from skiing. Following this, angry flyers appeared in bathrooms demanding more subsidies. Nothing changed immediately (though the outdoor subsidy scheme did diversify the following year), but an approach like this can demotivate well-meaning entities.

A parallel can be drawn in AI safety advocacy. I think it is quite awesome that so much positive development has occurred in just one year. We can only work at the pace of the legislators' will. (Judging from the Commission's highly understanding response to the Citizens' Initiative on a confinement ban in animal agriculture, there are people who are willing to take innovative approaches to safety seriously.) Otherwise, we could demotivate well-meaning entities.

This parallel is, however, likely not applicable in reality. Legislators (EU, US) are looking for useful insights that follow from their questions, are relevant to AI safety, and are presented in a concise manner. I am aware of only one online submission by EA community members (CSER co-authors) to a governance body. Rather than concisely and actionably addressing the mandate of the Commission's project, that piece seems to talk broadly about somewhat related research that should be done. So, it is a step backward: it presumes that the Commission does not care and that some broad talk is needed to motivate the Commission to action. I would suggest that this is not due to trauma, considering the explicit welcomingness of the EU's HUMAINT and the EU's general reputation for being thoughtful about its legislation.

So, I can add to your recommendations that people can also review the development of an institution's (or department's, or project's, ...) thinking about a specific AI topic (it need not go back a full 10 years) and understand its safety objectives. Then, any support of the institution in AI safety can be much more effective, whether it comes from the reviewer or from an expert on (researching and) addressing the specific concerns.

I almost forgot a comment on your language: you repeatedly use 'screw' to denote what is perhaps the spirit of this post, a powerlessness to change others' thinking (which I argue is inaccurate, since institutions are already developing their safety considerations) that would be addressed by force or the threat of it. This framing is suboptimal because an allusion to the threat of force can reduce people's ability to think critically.

Alternatives include:

  1. thinking 'I cannot trust these institutions,'
  2. finding it healthy to think 'that one should leave the relationship,'
  3. noting that trying to 'screw together a different humanity' is not an effective strategy for dealing with the world, and
  4. asking yourself to 'see what emotions, due to which perceived approaches, you feel': resentment (due to ignorance, personal disrespect, closed-mindedness, unwillingness to engage in a rational dialogue, ...), hate (due to inability to gain power over another, inconsideration of certain individuals, non-confirmation of one's biases, ...), suspicion (due to the institution's previous limited engagement with a topic, or its limited deliberation of risky actions), shame (due to your limited ability to contribute usefully while increasing safety, or the institution's limited capacity to develop sound legislation, ...), or fear (due to the risk of reputational loss, limited ability to know what will be well received, harsh rejections of previous arguments, ...). Then, analyze whether these emotions are rationally justified and, if so, how they can best be addressed.

This is interesting! I'd like to add that there are reliable methods of resolving trauma and stuck thought patterns: 

  • Psychotherapy with a therapist who gets you and who knows effective therapeutic methods
  • Guided psychedelic-assisted psychotherapy
  • More niche methods, a few of which are addressed in The Body Keeps the Score by a well-regarded trauma researcher