Some decision theorists argue that when playing a one-shot prisoner's-dilemma-type game against a sufficiently similar opponent, we should cooperate to make it more likely that our opponent also cooperates. This idea, which Hofstadter calls superrationality, has strong implications when combined with the insight from modern physics that we probably live in a large universe or multiverse. If we care about what happens in civilizations located elsewhere in the multiverse, we can superrationally cooperate with some of their inhabitants. That is, by taking their values into account, we make it more likely that they do the same for us.
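The core argument can be sketched as a simple expected-value calculation. The payoff numbers and the correlation parameter `p` below are illustrative assumptions, not taken from the talk: `p` stands for the probability that a sufficiently similar opponent ends up making the same choice we do.

```python
# Illustrative one-shot prisoner's dilemma payoffs (assumed, standard ordering
# T > R > P > S): T = temptation, R = mutual cooperation reward,
# P = mutual defection punishment, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def expected_payoff(cooperate: bool, p: float) -> float:
    """Expected payoff, where p is the probability that a sufficiently
    similar opponent makes the same choice we do."""
    if cooperate:
        return p * R + (1 - p) * S  # opponent mirrors us -> R, otherwise -> S
    return p * P + (1 - p) * T      # opponent mirrors us -> P, otherwise -> T

# With high correlation, cooperating has the higher expected payoff:
assert expected_payoff(True, 0.9) > expected_payoff(False, 0.9)

# With no correlation, defection dominates, as in the classical analysis:
assert expected_payoff(True, 0.0) < expected_payoff(False, 0.0)
```

With these particular payoffs, cooperation comes out ahead whenever `p > 5/7`; the general threshold is `(T - S) / ((T - S) + (R - P))`. The superrational claim is that sufficient similarity between the players pushes `p` high enough to cross it.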

This talk attempts to assess the practical implications of this idea for effective altruists. It doesn't assume any specific prior knowledge, but it may be harder to follow if this is your first encounter with the prisoner's dilemma, Newcomb's problem, the orthogonality thesis, utility functions, or gains from trade.

In the future, we may post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact Aaron Gertler — he can help you get started.
