
Epistemic status: personal judgements based on conversations with ~100 people aged 30+ who were worried about AI risk "before it was cool", and observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events.

Summary: There appears to be something like inter-generational trauma among people who think about AI x-risk — including some of the AI-focussed parts of the EA and rationality communities — which is 

  • preventing the formation of valuable high-trust relationships with newcomers that could otherwise be helpful to humanity collectively making better decisions about AI, and
  • feeding the formation of small pockets of people with a highly adversarial stance towards the rest of the world (and each other).

[This post is also available on LessWrong.]

Part 1 — The trauma of being ignored

You — or some of your close friends or colleagues — may have had the experience of fearing AI would eventually pose an existential risk to humanity, and trying to raise this as a concern to mainstream intellectuals and institutions, but being ignored or even scoffed at just for raising it. That sucked.  It was not silly to think AI could be a risk to humanity.  It can.

I, and around 100 people I know, have had this experience.

Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”  

At least 30 people I've known personally have adopted that attitude in a big way, and I estimate many more have as well.  In the remainder of this post, I'd like to point out some ways this attitude can turn out to be a mistake.

Part 2 — Forgetting that humanity changes

Basically, as AI progresses, it becomes easier and easier to make the case that it could pose a risk to humanity's existence.  When people didn’t listen about AI risks in the past, that happened under certain circumstances, with certain AI capabilities at the forefront and certain public discourse surrounding them.  These circumstances have changed, and will continue to change.  It may not be getting easier as fast as one would ideally like, but it is getting easier.  Like the stock market, it may be hard to predict how and when things will change, but they will.

If one forgets this, one can easily adopt a stance like "mainstream institutions will never care" or "the authorities are useless".  I think these stances are often exaggerations of the truth, and if one adopts them, one loses out on the opportunity to engage productively with the rest of humanity as things change.

Part 3 - Reflections on the Fundamental Attribution Error (FAE)

The Fundamental Attribution Error (https://en.wikipedia.org/wiki/Fundamental_attribution_error) is a cognitive bias whereby you too often attribute someone else's behavior to a fundamental (unchanging) aspect of their personality, rather than considering how their behavior might be circumstantial and likely to change.  With a moment's reflection, one can see how the FAE can lead to

  • trusting too much — assuming someone would never act against your interests because they didn't the first few times, and also
  • trusting too little — assuming someone will never do anything good for you because they were harmful in the past.

The second reaction can be useful for getting out of abusive relationships.  Risking being mistreated over and over by someone is usually not worth it, even accounting for the opportunity cost of finding new people to interact with.  So, in personal relationships, it can be healthy to just think "screw this" and move on from someone when they don't make a good first (or tenth) impression.

Part 4 — The FAE applied to humanity

If one has had the experience of being dismissed or ignored for expressing a bunch of reasonable arguments about AI risk, it would be easy to assume that humanity (collectively) can never be trusted to take such arguments seriously.  But, 

  1. Humanity has changed greatly over the course of history, arguably more than any individual has changed, so it's suspect to assume that humanity, collectively, can never be rallied to take a reasonable action about AI.
  2. One does not have the opportunity to move on and find a different humanity to relate to.  "Screw this humanity who ignores me, I'll just imagine a different humanity and relate to that one instead" is not an effective strategy for dealing with the world.

Part 5 – What, if anything, to do about this

If the above didn't resonate with you, now might be a good place to stop reading :)  Maybe this post isn't good advice for you to consider after all.

But if it did resonate, and you're wondering what you may be able to do differently as a result, here are some ideas:

  • Try saying something nice and civilized about AI risk that you used to say 5-10 years ago, but which wasn’t well received.  Don’t escalate it to something more offensive or aggressive; just try saying the same thing again.  Someone who didn’t care before might take interest today.  This is progress.  This is a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.
  • Try Googling a few AI-related topics that no one talked about 5-10 years ago to see if today more people are talking about one or more of those topics.  Switch up the keywords for synonyms. (Maybe keep a list of search terms you tried so you don't go in circles, and if you really find nothing, you can share the list and write an interesting LessWrong post speculating about why there are no results for it.)
  • Ask yourself if you or your friends feel betrayed by the world ignoring your concerns about AI.  See if you have a "screw them" feeling about it, and if that feeling might be motivating some of your discussions about AI.
  • If someone older tells you "There is nothing you can do to address AI risk, just give up", maybe don't give up.  Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.


Comments

This is super interesting!

You mentioned "observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events."

What are you seeing as the effects on the young people in the community?

Thank you for this post. I do agree that institutions can consider AI risk quite seriously (e.g. banning harmful AI and mandating human oversight in the most recent EU AI white paper) and that their regard can increase over time (comparing the 2021 white paper with the 2020 one, which focuses on 'winning the race'). Still, some institutions may have a way to go (e.g. the specifics of 'algorithmic bias' measurement in the EU).

As a European seeking to advance AI safety, I can offer an anecdotal story: my college agreed to subsidize ski trips to address disadvantaged groups' exclusion from skiing. Following this, angry flyers appeared in bathrooms demanding more subsidies. Nothing changed immediately (though the outdoor subsidy scheme diversified the following year), and an approach like this can demotivate well-meaning entities.

A parallel can be drawn in AI safety advocacy. I think it is quite awesome that so much positive development has occurred in just one year. We can only work at the pace of the legislators' will. (Judging from the Commission's highly understanding response to the Citizens' Initiative on a confinement ban in animal agriculture, there are people who are willing to take innovative approaches to safety seriously.) Otherwise, we could demotivate well-meaning entities.

However, this parallel is likely not applicable in reality. Legislators (in the EU and US) are looking for useful insights that respond to their questions, are relevant to AI safety, and are presented concisely. I am aware of only one online submission by EA community members (CSER co-authors) to a governance body. Rather than concisely and actionably addressing the Commission's project's mandate, that piece seems to speak broadly about somewhat related research that should be done. So, it is a step backward: it presumes that the Commission does not care and that broad talk is needed to motivate the Commission to action. I would suggest that this is not due to trauma, considering the explicit welcomingness of the EU's HUMAINT project and the EU's general reputation for being thoughtful about its legislation.

So, I can add to your recommendations that people can also review the development of an institution's (or department's, or project's, ...) thinking about a specific AI topic (looking back less than 10 years is fine) and understand its safety objectives. Then, any support of the institution on AI safety can be much more effective, whether offered by the reviewer or by an expert on (researching and) addressing the specific concerns.

I almost forgot a comment on your language: you repeatedly use 'screw' to denote what is perhaps the spirit of this post, a sense of powerlessness in changing others' thinking (which I argue is inaccurate, because institutions are already developing their safety considerations) that would be addressed by force or the threat of it. This framing is suboptimal, because an allusion to the threat of force can reduce people's critical thinking abilities.

Alternatives include: 1) thinking 'I cannot trust these institutions,' 2) thinking that it is healthy to 'leave the relationship,' 3) noting that trying to 'screw together a different humanity' is not an effective strategy for dealing with the world, and 4) asking yourself to 'see what emotions, due to which perceived approaches, you feel': resentment (due to ignorance, personal disrespect, closed-mindedness, unwillingness to engage in a rational dialogue, ...), hate (due to an inability to gain power over another, inconsideration of certain individuals, non-confirmation of one's biases, ...), suspicion (due to the institution's previous limited engagement with a topic, or its limited deliberation on risky actions), shame (due to your limited ability to contribute usefully while increasing safety, or the institution's limited capacity to develop sound legislation, ...), fear (due to the risk of reputational loss, limited ability to know what will be well received, harsh rejections of previous arguments, ...), etc. Then, analyze whether these emotions are rationally justified and, if so, how they can best be addressed.
