
Note: Since there may be a quick and easy answer to my question (e.g., a cache of sources I simply hadn't encountered), I'll keep the text of this post fairly short (especially in relation to my own notes/thoughts on this question).

 

For some time, I and other people I've asked have been unaware of good guides or resources on personally introducing and discussing AI risk as a legitimate "this-century" concern (as opposed to linking people to articles/videos). Given how far outside the Overton window this claim seems to be, I've personally struggled to figure out how best to introduce the topic without coming across as a bit obsessive/wacky, resulting in some seemingly ineffective or half-baked conversations with co-workers and friends.

Especially given the apparent or potential increase in media/popular attention on EA, it seems that better communication about AI risk would be a good idea. While I personally think that Rob Miles' videos and Cold Takes are good, I would prefer to have a better personal grasp of the arguments so that I don't have to rely so heavily on "here are a bunch of links for you to check out." (To be clear, that's not what I've led with thus far.)

For me, there seem to be at least two key parts to this: 

  1. What are the “minimum viable arguments” or basic chains of points that go from “what is AI” to “AI has a non-trivial chance of existential risk this century”? This is just the bare epistemic foundation for the claims.
  2. What kinds of quotes, ideas, arguments, analogies, examples, etc. are fairly easy to introduce and at least are effective for getting people to be open to the (admittedly quite bold) claim that “AI risk this century” should be taken seriously?

I would love to get into deeper aspects of epistemics and questions about persuasion theory (e.g., how should you adapt to audiences or discussion goals, how can you reduce the time cost or cognitive difficulty of evaluating AI risk arguments), but for now I’ll just leave my question at that and see if anyone knows of resources that might help answer these initial questions.

3 Answers

I think Is Power-Seeking AI an Existential Risk? is probably the best introduction, though it's probably too long as a first introduction if the person isn't yet that motivated. It's also written as a list of propositions with probabilities attached, which might not appeal to many people.

I also listed some shorter examples in this post for the AI Safety Public Materials Bounty we're running, which might be more suitable as a first introduction. Here are the ones most relevant to people not versed in machine learning:

The competition is also trying to get more such materials, because I think there is a lot more that can be done.

I'm also interested in seeing better knowledge translation in this area, particularly in the form of storytelling to make the case less theoretical and give it more narrative traction.

Comments

Upvoted; I was considering making almost exactly the same post.

When philosophers[1] react this way to hearing the idea, it suggests a lot more work might have to be done on communication around AI risk.[2]

My giving has been directed by EA for almost a decade now, but I've only become familiar with the community, the forum, and longtermism in the last year, so it's entirely possible that there are lots of amazing EAs working on this issue already.

That said, my experience was that it was very hard to find any concrete stories of how AGI becomes an existential threat, even when I was specifically looking for them. I read The Precipice last year partially in an effort to find them, and here were my thoughts:

"In the EA community, AI risk is a super normal thing to talk about, and has been at the forefront of people's minds for several years now as potentially the biggest existential risk we will face in the next century. It's a normal part of the conversation, and the pathways through which AGI could threaten humanity are understood, at least on a basic level. 

This is just not at all true for most people. Like, it's easy to imagine how an asteroid impact could end humanity, or a nuclear war, or a pandemic, etc. So there's no need to spend time telling a story of how, for example, a nuclear war might start. But it just isn't easy to see, especially for a normal person new to the idea of existential risk, how AGI actually becomes a threat.

Because of this, most people just dismiss it out of hand. Many think of Terminator, which makes them even more dismissive of the risk. Among all the risks in the book, AI risk stands out as 1) the most difficult to realistically imagine, 2) the one people have probably thought the least about, and 3) the one Ord believes poses the greatest risk (by a wide margin), so it confused me that so little time was spent actually explaining it."

I'd be very interested to hear what work has been done on this issue, because it seems quite important, at least to me. If a growing number of people are introduced to EA by being told it's a group of people who are scared of "robot overlords," that's bad.

And a few people associating EA with "people scared of robot overlords" carries some risk of becoming many people, of course.[3]

  1. ^

    Kate Manne is a philosopher and the author of Down Girl (which I highly recommend), but many of the likes are from other philosophers as well.

  2. ^

    I suppose there's a chance some philosophers would actually be more dismissive of the idea of an AGI takeover than the average person, but the fact that smart people trained in critical thinking reacted so emotionally and dismissively made me stop and think, "Wow, maybe this is an even more dangerous communication/PR problem than I thought."

  3. ^

     My brain jumps to a scenario where John Oliver is introduced to EA in this way, dismisses all the other ideas out of hand, and does a segment that's more about Peter Singer, deworming, and the repugnant conclusion than about what EA actually is, but is still disastrous for EA.
