Starring Leonardo DiCaprio and Jennifer Lawrence. 

Judging from the trailer alone, I don't think I've seen a better fictional movie about xrisks to date. 

The movie is scheduled to be released in theatres on December 10, 2021, prior to streaming on Netflix on December 24, 2021 (H/T Eliezer Yudkowsky).


A random idea on how this film could end by explicitly promoting existential risk awareness:

  • At the end of the film, imagine the comet impacts Earth and humanity goes extinct.
  • The audience is surprised that the movie actually ends in extinction and the heroes don't win.
  • The otherwise-comedic film ends on this serious, sad note.
  • The screen goes dark, and the following text appears:
  • "Experts estimate the chance of extinction via asteroid or comet impact within the next 100 years at only ~1 in 1,000,000."
  • "However, experts believe the chance of extinction from other causes is much higher."
  • The film cuts from black to Toby Ord in his office, reading the caption of Table 6.1 from his book The Precipice: "My best estimates for the chance of an existential catastrophe from each of these sources occurring at some point in the next 100 years..."
  • The film cuts to the table itself as he begins reading the risks aloud.
  • He closes with: "If you're wondering if this is a joke, it's not. The risks really do seem to be this high."
  • Cut to credits.

Toby Ord reading a table out loud sounds like a bridge too far, but it's not uncommon for movies to end with a link to some relevant real-world resource. If I knew the people behind this movie (I don't) and thought there might be time to change it (no idea), I'd probably advocate for something like this (many ways to improve the wording, I'm sure) before the credits:

This film isn't based on a true story. But it may become one.

Learn about risks to humanity, and how you can help: 

theprecipice.com

(More realistically, if I did have an in, I'd ask people like Toby Ord what message they'd want millions of random viewers to see.)

I could imagine them interviewing Toby Ord for a mockumentary, like Death to 2020.

How much would it cost to influence the film to make this happen?

I don't know; I doubt it's a problem where throwing money at it is the right answer. In any case, it's unclear to me whether doing this would actually be net positive. I imagine it would be quite controversial, even among EAs who are into longtermism. I just shared the idea because I thought it was interesting, not because I necessarily thought it was good.

Yeah, I agree that money is not the bottleneck. I think the strongest bottleneck is decision quality on whether this is a good idea, and a secondary bottleneck is whether our Hollywood contacts are good enough to make this happen conditional upon us believing it's actually a good idea.

Do you have a story for why this could be a bad idea?

Having popular presentations of our ideas in an unnuanced form may either a) give the impression that our ideas are bad/silly/unnuanced, or b) make them seem low-status, akin to how a lot of AI safety efforts are or were rounded off as "Terminator" scenarios.

Any predictions on whether the film will seem to be positive from an existential-risk-reducing perspective or not?

Or perhaps more constructively, what possible features of the film could make it seem positive from an xrisk perspective? Which possible features could make it seem negative?

Possible Positives:

  • If it makes viewers viscerally feel the importance of preventing extinction
  • If it reminds viewers of the importance of preventing extinction, such that they are more likely to take other action to reduce extinction risks in the future
  • If it educates people on coordination problems or other obstacles that humanity actually faces (or may actually face) when trying to deal with extinction risks

Possible Negatives:

  • If it leaves viewers with a false impression of the absolute risk of extinction due to asteroids
  • If it leaves viewers with a false impression of the relative risk of extinction due to asteroids (relative to other extinction risks)
  • If it reduces the chance that other (positive) Hollywood films about extinction risk get made in the coming years

It might give people language to describe their experiences. Like "when I watched this movie, it was just like how it was before Covid - people were either really scared or just laughed it off! I see people doing the same thing when it comes to [other risk]"

The premise of the film Seeking a Friend for the End of the World (2012) is that:

a mission to stop an incoming 70-mile wide asteroid known as "Matilda" has failed and that the asteroid will make impact in three weeks, destroying all life on Earth

This is taken as inevitable and accepted by the characters in the film. The film ends with the Earth getting destroyed, implying human extinction.

I'll be looking forward to seeing if/how they deal with the aftermath of the impact, and specifically with the agricultural collapse that would ensue, which is probably the most severe consequence of an asteroid/comet impact.

I just started watching it. My thoughts so far:

One thing that's not believable in the movie is that the media barely reacts to the two scientists' message when they break it to the New York Herald and the talk show; instead, there are just memes making fun of them. In 2020, social media was buzzing with memes about Trump's assassination of Soleimani starting WWIII; you'd think there'd be at least a similarly sized reaction to a warning that A GIANT COMET IS ABOUT TO STRIKE EARTH AND WE'RE ALL GONNA DIE.

Also, goddammit president, asking how much it will cost to stop the comet and bikeshedding with the scientists over whether it's 100% or 70% likely to hit. First of all, even if there's a 10% chance that it will strike Earth, we should be trying to deflect it! Second, preventing an existential catastrophe that is certain to happen is worth the entire value of the world economy!
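
To put rough numbers on that point, here is a minimal back-of-the-envelope sketch. The 10% impact probability is the hypothetical from above, and the ~$100 trillion figure is roughly one year of gross world product; it ignores the far larger value of the long-term future, so if anything it understates the case:

```latex
% Rough, illustrative numbers only: hypothetical impact probability,
% value at stake taken as ~one year of gross world product,
% and an assumed deflection-mission cost in the tens of billions.
\[
\text{Expected loss averted}
  \;\approx\; p_{\text{impact}} \times V_{\text{at stake}}
  \;\approx\; 0.1 \times \$100~\text{trillion}
  \;=\; \$10~\text{trillion}
  \;\gg\; C_{\text{mission}} \sim \$10~\text{billion}.
\]
```

On these (made-up) numbers, the expected benefit exceeds the mission cost by about three orders of magnitude, so quibbling over 70% vs. 100% is beside the point.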

My impression was that in early 2020, there were a lot of serious-sounding articles in the news about how worries about Covid were covering up the much bigger problem of the flu.

I think there could be some EA press written around this. I hope Toby Ord gets at least one interview out of it. 
