Epistemic status: half-baked midnight thoughts after watching Black Mirror.

Summary:

It seems almost impossibly unlikely that we find ourselves alive at this point in history, right on the precipice of creating an intergalactic civilization.

It seems even more unlikely that we (longtermist EAs) just happen to be the first ones to recognize this and take it seriously.

If we think this indicates we are likely in a simulation, it might make sense to pursue hedonism. If we think we are not in a simulation, it would make sense to pursue longtermism.

Our situation seems reminiscent of Newcomb’s Paradox to me. Maybe someone in the far future is running a simulation to test out decision theories, and we are that simulation.

Detail:

Maybe we are in a simulation that is testing whether, upon recognizing that we are probably simulated, we will choose to hedonistically pursue maximum pleasure for ourselves, or whether we will instead go to the trouble of altruistically spending our time and energy trying to make the future go well, just in case we are actually in first-layer, non-simulated reality, however improbable that seems.

If we choose longtermism, then we are almost definitely in a simulation, because that means other people like us would also have chosen longtermism, and would then create countless simulations of beings in special situations like ours. This seems far more likely than that we just happened to be at the crux of the entire universe by sheer dumb luck.
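To make the counting explicit, here is a minimal sketch of the arithmetic behind that claim (Python, with purely made-up numbers; the simulation counts are illustrative assumptions, not estimates): if each surviving longtermist civilization runs many detailed simulations of its own hinge period, almost every observer who experiences a hinge period is simulated.

```python
# Minimal sketch of the simulation-argument counting, with made-up numbers.
# Assumption: each civilization that chooses longtermism and survives goes on
# to run `sims_per_civilization` detailed simulations of its own hinge period.

def fraction_simulated(sims_per_civilization: int) -> float:
    """Fraction of hinge-of-history observers who are inside a simulation,
    assuming one basement-reality hinge period per simulating civilization."""
    basement_observers = 1
    simulated_observers = sims_per_civilization
    return simulated_observers / (basement_observers + simulated_observers)

for n in (0, 1, 1_000, 1_000_000):
    print(f"{n:>9} simulations -> P(simulated) = {fraction_simulated(n):.6f}")
```

Even modest numbers of simulations push the fraction close to 1, which is the sense in which choosing longtermism is itself evidence that we are simulated.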

But if we choose indulgent hedonism, we sacrifice the entire future. We enjoy the moment, but we also probably weren't in a simulation in the first place, because other beings like ourselves would likely reason the same way and also choose hedonism, so longtermism would always implode and no simulations would ever be created.

Of course this doesn't take into account the fact that you may actually enjoy altruism and the longtermist mission, making it less of a sacrifice. But it seems like a wildly convenient cognitive bias to assume that what we are doing to maximize altruism also just happens to maximize hedonism as much as possible.

One resolution, however, may be that maximizing meaning ends up being the best way to maximize happiness, and in the future the universe is tiled with hedonium, which happens to be a simulation of the most important and therefore most meaningful century, the one we live in. If this analysis is right, then it might make sense that pursuing longtermism actually does converge with hedonism (enlightened self-interest).

The way I presented the problem also fails to account for the fact that it seems quite possible there is a strong apocalyptic Fermi filter that will destroy humanity, as this could explain why we seem to be so early in cosmic history (cosmic history is unavoidably about to end). This should skew us more toward hedonism.

The thought experiment also breaks somewhat if you assume there is a significant probability that a future civilization can't or won't create a large number of diverse simulations of this type, for some systemic and unavoidable reason. This skews in favor of longtermism.

I guess the moral of the story is that perhaps we should hedge our altruistic bets by aiming to be as happy as possible at the same time as being longtermists. I don’t think this is too controversial since happiness actually seems to improve productivity.

Would appreciate any feedback on the decision theory element of this. Is one choice (between hedonism and longtermism) evidential and one causal? I couldn't figure that part out. I'm not sure it is directly analogous to the one-box and two-box choices of Newcomb's Paradox.


Comments

Thanks for this post! I've been meaning to write something similar, and I'm glad you have :-)

I agree with your claim that most observers like us (who believe they are at the hinge of history) are in (short-lived) simulations. Brian Tomasik discusses how this shifts one, on the margin, toward valuing interventions with short-term effects.

In particular, if you think the simulations won't include other moral patients simulated to a high resolution (e.g. Tomasik suggests this may be the case for wild animals in remote places), you would instrumentally care less about their welfare, since acting to increase their welfare would only have effects in basement reality and in the more expensive simulations that do simulate such wild animals. At the extreme is your suggestion, where you are the only person in the simulation and so you may act as a hedonist! Given some uncertainty over the distribution of "resolution of simulations", it seems likely that one should still act altruistically.

I disagree with the claim that if we do not pursue longtermism, then no simulations of observers like us will be created. For example, I think an Earth-originating unaligned AGI would still have instrumental reasons to run simulations of 21st century Earth. Further, alien civilizations may have an interest in learning about other civilizations.

Under your assumptions, I don't think this is a Newcomb-like problem. I think CDT & EDT would agree on the decision,[1] which I think depends on the number of simulations and the degree to which the existence of a good longterm future hinges on your decisions. Supposing humanity only survives if you act as a longtermist, and simulations of you are only created if humanity survives, then you can't both act hedonistically and be in a simulation.
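As a rough illustration of why the two theories can agree here, consider a toy calculation (a sketch with made-up numbers, assuming the "I control my policy" lens from the footnote below, so that every copy of you, simulated or not, follows the same policy):

```python
# Toy expected-value comparison under the commenter's supposition, with
# made-up numbers. Assumption ("I control my policy"): choosing a policy fixes
# what every copy of you does, so it also fixes whether humanity survives and
# whether simulations of you exist at all.

N_SIMULATIONS = 1_000        # simulations run if humanity survives (assumed)
FUTURE_VALUE = 1e12          # value of a good long-term future (illustrative)
HEDONIC_BONUS = 100          # extra personal wellbeing from pure hedonism
LONGTERMIST_WELLBEING = 60   # personal wellbeing while working on longtermism

def policy_value(policy: str) -> float:
    """Total value across all copies of you, given the shared policy."""
    if policy == "longtermism":
        # Basement copy secures the future; N_SIMULATIONS simulated copies also exist.
        copies = 1 + N_SIMULATIONS
        return FUTURE_VALUE + copies * LONGTERMIST_WELLBEING
    # Hedonism: humanity does not survive, so no simulations; only the basement copy.
    return HEDONIC_BONUS

for p in ("longtermism", "hedonism"):
    print(f"{p}: total value = {policy_value(p):,.0f}")
```

With a much smaller FUTURE_VALUE, or a weaker link between your decisions and humanity's survival, the balance can flip, which is the dependence the comment points to.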

  1. ^

    When taking the lens of "I control my policy" as discussed here 

Thank you for this reply!

Yes, the resolution of other moral patients is something I left out. I appreciate you pointing this out because I think it is important. I was perhaps assuming that longtermists are simulated accurately and that everything else has much lower resolution, such that other people are only philosophical zombies, though as I articulate this I'm not sure it would work. We would have to know more about the physics of the simulation, though we could probably make some good guesses.

And yes, it becomes much stronger if I am the only being in the universe, simulated or otherwise. There are some other reasons I sometimes think the case for solipsism is very strong, but I never bother to argue for them, because if I’m right then there’s no one else to hear what I’m saying anyways! Plus the problem with solipsism is that to some degree everyone must evaluate it for themselves, since the case for it may vary quite a bit for different individuals depending on who in the universe you find yourself as.

Perhaps you are right about AI creating simulations. I’m not sure they would be as likely to create as many, but they may still create a lot. This is something I would have to think about more.

I think the argument with aliens is that perhaps there is a very strong filter such that any set of beings who evaluate the decision will conclude they are in a simulation, so anything with the level of intelligence required to become spacefaring would also be intelligent enough to realize it is probably in a simulation and decide it's not worth it. Perhaps this could even apply to AI.

It is, I admit, quite an extreme statement that no set of beings would ever come to the conclusion that they might not be in a simulation, or would never pursue longtermism on the off-chance that they are not in one. But on the other hand, it would be equally extreme not to allow the possibility that we are in a simulation to affect our decision calculus at all, since it does seem quite possible, though perhaps the expected value contributed by the simulation scenario is too small to have much of an effect, except in the case where the universe is tiled with meaning-maximizing hedonium simulating the most important time in history and we are it.

I really appreciate your comment on CDT and EDT as well. I felt like they might give the same answer, even though this also "feels" somewhat similar to Newcomb's Paradox. I think I will have to study decision theory quite a bit more to really get a handle on this.

I disagree with the claim that if we do not pursue longtermism, then no simulations of observers like us will be created. For example, I think an Earth-originating unaligned AGI would still have instrumental reasons to run simulations of 21st century Earth. Further, alien civilizations may have an interest in learning about other civilizations.

Maybe it is 2100 or some other time in the future, and AI has already become superintelligent and eradicated or enslaved us because we failed to sufficiently adopt the values and thinking of longtermism. They might be running a simulation of us at this critical period of history to see what would have led to counterfactual histories in which we adopted longtermism and thus protected ourselves from them. They would use these simulations to be better prepared for humans that might be evolving, or have already evolved, in distant parts of the universe they haven't accessed yet. Or maybe they still enslave a small or large portion of humanity and are using the simulations to determine whether it is feasible or worthwhile to set us free again, or even whether it is safe for them to let the remaining human prisoners continue living. In that case, choosing hedonism would be a more miserable prospect.

If we choose longtermism, then we are almost definitely in a simulation, because that means other people like us would also have chosen longtermism, and would then create countless simulations of beings in special situations like ours. This seems far more likely than that we just happened to be at the crux of the entire universe by sheer dumb luck.

Andrés Gómez Emilsson discusses this sort of thing in this video. The fact that our position in history may give us unique leverage to influence the far future may be strong evidence that we live in a simulation.

Robin Hanson wrote about the ethical and strategic implications of living in a simulation in his article "How to Live in a Simulation".  According to Hanson, living in a simulation may imply that you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.

If some form of utilitarianism turns out to be the objectively correct system of morality, and post-singularity civilizations converge toward utilitarianism and paradise engineering is tractable, this may be evidence against the simulation hypothesis. Magnus Vinding argues that simulated realities would likely be utopias, and since our reality is not a utopia, the simulation hypothesis is almost certainly false.  Thus, if we do live in a simulation, this may imply that either post-singularity civilizations tend to not be utilitarians or that paradise engineering is extremely difficult.

Assuming we do live in a simulation, Alexey Turchin created this map of the different types of simulations we may be living in. Scientific experiments, AI confinement, and education of high-level beings are possible reasons why the simulation may exist in the first place.

This falls under anthropics, in case you're interested in related writing. It doesn't seem very close to Newcomb's problem to me, but I'm not that well acquainted with these areas.

One question I'd have is: Who counts as an observer for these observer selection effects? Do they need to be sufficiently intelligent? If so, an alternative to us being in a simulation that's about to end or never making it off Earth is that we're in base reality and the future and/or simulations could be filled with unintelligent conscious beings (and possibly unconscious but sufficiently intelligent AI, if observers also need to be conscious), but not astronomically many intelligent conscious beings. Unintelligent conscious beings are still possible and matter, at least to me (others might think consciousness requires a pretty high level of cognition or self-awareness), so this argument seems like a reason to prioritize such scenarios further relative to those with many intelligent conscious beings. We might think almost all expected moral patients (by number and moral weight) are not sufficiently intelligent to count as observers.

Thank you! Yes, I'm pretty new here, and now that you say that, I think you're right: anthropics makes more sense.

I am inclined to think the main thing required to be an observer would be enough intelligence to ask whether one is likely to be the entity one is by pure chance, and this doesn't necessarily require consciousness, just the ability to factor the likelihood that one is in a simulation into one's decision calculus.

I had not thought about the possibility that future beings are mostly conscious, but very few are intelligent enough to ask the question. This is definitely a possibility. Though if the vast majority of future beings are unintelligent, you might expect there to be far fewer simulations of intelligent beings like ourselves, somewhat cancelling this possibility out.

So yeah, since I think most future beings (or at least a very large number) will most likely be intelligent, I think the selection effects do likely apply.

The simulation dilemma intuitively seems similar to Newcomb's Paradox. However, when I try to reason out how it is similar, I have difficulty. They both involve two parties, with one having a control and information advantage over the other. They both involve an option with a guaranteed reward (hedonism, or the $1,000) and one with an uncertain reward (longtermism, or the possible $1,000,000). They both involve an option that would exclude one of two possibilities. It is not clear, though, how the predictor's prediction in Newcomb's Paradox, which excludes one of two possibilities, maps onto the mutually exclusive possibilities in the simulation dilemma.
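One way to lay the analogy out side by side is as a pair of toy payoff tables (a sketch only; the simulation-side "payoffs" are informal stand-ins, not claims about actual values):

```python
# Side-by-side sketch of the two decision problems, with illustrative payoffs.
# In Newcomb, the predictor's accuracy links your choice to the box contents;
# in the simulation dilemma, the choices of observers like you link (on an
# evidential reading) to whether simulations like this one were ever created.

newcomb = {
    # (your choice, state the predictor has arranged): payoff in dollars
    ("one-box", "predictor foresaw one-boxing"): 1_000_000,
    ("two-box", "predictor foresaw two-boxing"): 1_000,
}

simulation_dilemma = {
    # (your choice, correlated state of the world): informal outcome
    ("longtermism", "observers like you choose longtermism, so many simulations exist"):
        "personal effort, but you are almost certainly one of the simulated copies",
    ("hedonism", "observers like you choose hedonism, so no simulations exist"):
        "guaranteed pleasure, but you are probably in basement reality and the future is lost",
}

for (choice, state), payoff in newcomb.items():
    print(f"Newcomb    | {choice:<11} | {state:<40} | ${payoff:,}")
for (choice, state), outcome in simulation_dilemma.items():
    print(f"Simulation | {choice:<11} | {state} | {outcome}")
```

The strain in the analogy is visible here: Newcomb's predictor physically fixes the box contents before you choose, whereas in the simulation dilemma the correlation runs through what observers like you tend to decide.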

Simulations might be useful for finding out which factors were important or unimportant, and for exploring alternative trajectories of critical periods of history. For that reason, it is an appealing idea that we are more likely to be in a simulation of our current period than actually living through it.

The way I presented the problem also fails to account for the fact that it seems quite possible there is a strong apocalyptic Fermi filter that will destroy humanity, as this could explain why we seem to be so early in cosmic history (cosmic history is unavoidably about to end). This should skew us more toward hedonism.

Anatoly Karlin's Katechon Hypothesis is one Fermi Paradox hypothesis that is similar to what you are describing. The basic idea is that if we live in a simulation, the simulation may have computational limits. Once advanced civilizations use too much computational power or outlive their usefulness, they are deleted from the simulation.
