
Note: Since there may be a quick and easy answer to my question (e.g., a cache of sources I simply hadn’t encountered), I’ll keep the text of this post fairly short (especially relative to my own notes/thoughts on this question).


For some time, neither I nor the other people I've asked have known of good guides or resources for personally introducing and discussing AI risk as a legitimate "this-century" concern (as opposed to just linking people to articles/videos). Given how far outside the Overton window this claim seems to be, I’ve personally struggled to figure out how best to introduce the topic without coming across as a bit obsessive/wacky, with the result being some seemingly ineffective or half-baked conversations with co-workers and friends.

Especially given the apparent or potential increase in media and popular attention on EA, it seems that better communication about AI risk would be valuable. While I personally think that Rob Miles's videos and Cold Takes are good, I would probably prefer to have a better personal grasp of the arguments so that I don’t have to rely so heavily on “here are a bunch of links for you to check out.” (To be clear, that’s not what I’ve led with thus far.)

For me, there seem to be at least two key parts to this: 

  1. What are the “minimum viable arguments” or basic chains of points that go from “what is AI” to “AI has a non-trivial chance of causing an existential catastrophe this century”? This is just the bare epistemic foundation for the claims.
  2. What kinds of quotes, ideas, arguments, analogies, examples, etc. are fairly easy to introduce and at least somewhat effective at getting people to be open to the (admittedly quite bold) claim that “AI risk this century” should be taken seriously?

I would love to get into deeper aspects of epistemics and questions about persuasion theory (e.g., how should you adapt to audiences or discussion goals, how can you reduce the time cost or cognitive difficulty of evaluating AI risk arguments), but for now I’ll just leave my question at that and see if anyone knows of resources that might help answer these initial questions.

Answers

I think Is Power-Seeking AI An Existential Risk is probably the best introduction, though it's likely too long as a first read if the person isn't yet that motivated. It's also written as a list of propositions with probabilities attached, which might not appeal to many people.

I also listed some shorter examples in this post for the AI Safety Public Materials Bounty we're running, which might be more suitable as a first introduction. Here are the ones most relevant to people not versed in machine learning:

The competition is also intended to generate more such materials, because I think there is a lot more that can be done.

I'm also interested in seeing better knowledge translation in this area, particularly in the form of storytelling, which would make the topic less theoretical and give it more narrative traction.

Comments

Upvoted; I was considering making almost exactly the same post.

When philosophers[1] react this way to hearing the idea, it suggests a lot more work might have to be done on communication around AI risk.[2]

My giving has been directed by EA for almost a decade now, but I've only become familiar with the community, the Forum, and longtermism in the last year, so it's entirely possible that there are lots of amazing EAs working on this issue already.

That said, my experience was that it was very hard to find any concrete stories of how AGI becomes an existential threat, even when I was specifically looking for them. I read The Precipice last year partially in an effort to find them, and here were my thoughts:

"In the EA community, AI risk is a super normal thing to talk about, and has been at the forefront of people's minds for several years now as potentially the biggest existential risk we will face in the next century. It's a normal part of the conversation, and the pathways through which AGI could threaten humanity are understood, at least on a basic level. 

This is just not at all true for most people. Like, it's easy to imagine how an asteroid impact could end humanity, or a nuclear war, or a pandemic, etc. So there's no need to spend time telling a story of how, for example, a nuclear war might start. But it just isn't easy to see, especially for a normal person new to the idea of existential risk, how AGI actually becomes a threat.

Because of this, most people just dismiss it out of hand. Many think of Terminator, which makes them even more dismissive of the risk. Among all the risks in the book, AI risk stands out as 1) the most difficult to realistically imagine, 2) the one people have probably thought the least about, and 3) the one Ord believes poses the greatest risk (by a wide margin), so it confused me that so little time was spent actually explaining it."

I'd be very interested to hear what work has been done on this issue, because it seems quite important, at least to me. If a growing number of people are introduced to EA by being told it's a group of people who are scared of "robot overlords," that's bad.

And a few people associating EA with "people scared of robot overlords" carries some risk of becoming many people, of course.[3]

  1. ^

    Kate Manne is a philosopher and the author of Down Girl (which I highly recommend), but many of the likes are from other philosophers as well.

  2. ^

    I suppose there's a chance some philosophers would actually be more dismissive of the idea of an AGI takeover than the average person, but the fact that smart people trained in critical thinking reacted so emotionally and dismissively made me stop and think, "wow, maybe this is an even more dangerous communication/PR problem than I thought."

  3. ^

     My brain jumps to a scenario where John Oliver is introduced to EA in this way, dismisses all the other ideas out of hand, and does a segment that's more about Peter Singer, deworming, and repugnant conclusions than about EA, but is still disastrous for EA.
