Effective altruists focussed on shaping the far future face a choice between different types of interventions. Of these, efforts to reduce the risk of human extinction have received the most attention so far. In this talk, Max Daniel makes the case that we may want to complement such work with interventions aimed at preventing very undesirable futures ("s-risks"), and that, among the sources of existential risk identified so far, this gives us particular reason to focus on AI risk.
In the future, we may post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact Aaron Gertler — he can help you get started.