
If you just want a link to the video, watch it here!

What’s AI in Context?

AI in Context is the YouTube channel of 80,000 Hours’ video program, hosted by Aric Floyd. We’re trying to make high-production documentary storytelling about transformative AI and its risks. You can see our retrospective on our first two videos here.

Why this topic?

We loved making our first two videos. We’re Not Ready for Superintelligence recently crossed 10M views, and our video about MechaHitler crossed 3M.

But when we reflected, we felt like we’d been circling around the central message we wanted to share. The specific AI 2027 scenario is really interesting, and the MechaHitler story is illuminating, but we wanted to make sure that at least once, we’d gone through the whole argument, or at least one whole argument, for being concerned about existential risk via loss of control over AI.

Honestly, that made it really convenient that Nate Soares and Eliezer Yudkowsky wrote If Anyone Builds It, Everyone Dies.

If Anyone Builds It, Everyone Dies

Nate and Eliezer have thought about this issue for a long time – especially Eliezer, who was debating the Singularity via mailing list around when Aric was born. He and Nate have done a tremendous amount to turn AI safety from an eccentric worry into a real research field.

They spent decades doing research, but eventually came to believe that their work would be too little, too late. So they pivoted to communications: trying to get the world to wake up and start taking the risks of superintelligent AI seriously.

And they certainly got the word out. Here are two of the many blurbs they got:

A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.

—Ben Bernanke, Nobel-winning economist; former Chairman of the U.S. Federal Reserve

A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.

—Jon Wolfsthal, former Special Assistant to the President for National Security Affairs; former Senior Director for Arms Control and Nonproliferation, White House, National Security Council

We don’t agree with everything in the book, but we’re glad they wrote it, and we were super excited to present the ideas (and where we’re less confident) to our audience.

Here's what we’re going for:

We made a video that we think:

  • Tells a story: We take the fictional scenario at the center of the book, adapt it, and bring you into that world.
  • Explains the argument: It’s not the only argument for why AI risk might be catastrophic, but it’s one we want more people to hear.
  • Communicates what’s special about the worldview: Aric interviewed Nate Soares for five hours. We didn’t want to hear him summarise the book; we wanted to understand his entire worldview. (Don’t worry — we did make some cuts for the video.)
  • Grapples with the ideas live, and shows who else is: We were lucky enough to interview a sitting U.S. Congressman, Dean Ball (Donald Trump’s former senior AI advisor), and our rival influencer Dwarkesh Patel, who have a range of views on this book. We also get into where we personally agree and disagree with Soares and Yudkowsky.
  • Talks about current AI capabilities: It’s impossible to fully keep up with these things, but we aim to weave real research into the scenario.

We also tried harder this time to make the next steps really clear (we think we could have done better on this in the past). And we built a new landing page that’s meant to help people figure out what’s next based on what they need: more context, more skills, or simply to know what opportunities are available. We’re really proud of it; the link is here.

We’re trying to get better with every video. If you watch it, please tell us what you think: what worked, what didn’t, and what we missed.

Subscribing and sharing

Subscribe to AI in Context if you want to see future posts and videos.

If you like the video, and you want to help boost its reach, share it with people you think should watch it, like it and/or leave a comment (even a short one). All of this really helps us get the algorithm on our side – which is, of course, what AI safety is all about.

Team Update and Logistics

We now have our director and editor, Phoebe Brooks, working with us full time, and have contracted another editor and animator to really start building up our bench. We have also hired an operations associate, Sage Bergerson, which means we can be more efficient about production. We’re planning on having Phoebe, Aric and an external scriptwriter all work on scripts in parallel this and next quarter, and we’ll see how that goes!

We know this video took longer than our others; we decided not to publish in Q4 with all the holidays, and we ended up trying for something somewhat ambitious. We appreciate the patience.

Thanks so much!

We’re so grateful for all the support and goodwill from our community. The video space continues to be an amazing scene filled with tons of great people. In recent news, Mithuna Yoganathan is starting to make AI Safety videos, the Frame Fellowship has a killer set of content creators, and there’s a serious AI risk documentary on the horizon.

Thanks so much to our crew, editors and graphics folks: Nick Dolph, Andy Haney, David Jenkins, Ryan Tam, Zach Joseph, Daniel Recinto and Mila Graf.

Thanks also to our fantastic thumbnail artists: Nik Mastroddi, Ailbhe Treacy, along with Ignat Ignatov, Matthew Poppe and Giovanni Parks.

Special thanks to Dean Ball, Joe Carlsmith, Representative Bill Foster, Dwarkesh Patel, and Nate Soares for their interviews, and Steven Adler and Alex Lawsen for their technical advising.

And thanks to Bella Forristal, Arden Koehler, Clarissa Lam, Petr Lebedev, Gaetan Selle, Drew Spartz, Wrena Sproat, Rob Wiblin, Siliconversations, John Leaver, Ciel Creative Space and Lighthaven Recording Studio.

Let’s freaking go!


