Written by LW user Wei_Dai.

This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.

Excerpt from the post:

It seems likely that under an assumption such as Tegmark's Mathematical Universe Hypothesis, there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we can leave some of them, for example through one of these mechanisms:

  1. Exploiting a flaw in the software or hardware of the computer that is running our simulation (including "natural simulations" where a very large universe happens to contain a simulation of ours without anyone intending this).
  2. Exploiting a flaw in the psychology of agents running the simulation.
  3. Altruism (or other moral/axiological considerations) on the part of the simulators.
  4. Acausal trade.
  5. Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano's recent When is unaligned AI morally valuable? contains an example of this; however, the idea there only lets us escape to another universe similar to this one.)

(Full post on LW)

Please feel free to discuss in the comments.

Comments (2)

I feel like this post relies on an assumption that this world is (or likely could be) a simulation, which made it difficult for me to grapple with. I suppose maybe I should just read Bostrom's Simulation Argument first.

But maybe I'm getting something wrong here about the post's assumptions?

I think the excerpt is getting at: "Suppose all possible universes exist (the post assumes this rather than claiming it's likely). Then there are probably some universes -- with far more computational resources than ours -- running a simulation of our universe. The behaviour of that simulated universe is the same as ours (it's a good simulation!), and in particular the simulated versions of us behave exactly as we do. If that's true, our behaviour could, through the simulation, influence a much bigger and better-resourced world. If we value outcomes in that universe the same as outcomes in ours, maybe a lot of the value of our actions comes from their effects on that bigger world."

I'm not sure whether that counts as "the world likely could be a simulation" in the sense you meant. In particular, I don't think Wei Dai is assuming we are more likely in a simulation than not (or, as some put it, that we are "more in a simulation than not").
