Written by LW user Wei_Dai.
This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.
Excerpt from the post:
It seems likely that under an assumption such as Tegmark's Mathematical Universe Hypothesis, there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we can leave some of them, for example through one of these mechanisms:
- Exploiting a flaw in the software or hardware of the computer that is running our simulation (including "natural simulations" where a very large universe happens to contain a simulation of ours without anyone intending this).
- Exploiting a flaw in the psychology of agents running the simulation.
- Altruism (or other moral/axiological considerations) on the part of the simulators.
- Acausal trade.
- Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano's recent When is unaligned AI morally valuable? contains an example of this, though the idea there only lets us escape to another universe similar to this one.) (Full post on LW)
Please feel free to: