My goal since I was quite young has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.
A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing various existential risks, making no assumptions about which risks were most likely.
Upon discovering Effective Altruism in January 2022, while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at AI X-risk, and moved to Berkeley to do longtermist community-building work.
I am now looking to close down a small business I have been running so that I can research AI safety and other longtermist crucial considerations full time. If any of my work is relevant to open lines of research, I am open to offers of employment as a researcher or research assistant.
Yes… so basically, what you’re saying is that this argument goes through if you take the summation over all bubble universes at any individual time step, but longtermist arguments would go through if you take a view from outside the multiverse and take the summation across all points of time in all bubble universes simultaneously?
I guess my main issue is that I’m having trouble philosophically or physically stomaching this. It seems to touch on a very difficult ontological/metaphysical/epistemological question: is it coherent to sum over all points in space-time across infinite time, as though the entire infinite future already “preexists” in some sense? It could be that taking such an “outside view” of infinite space-time, as though the calculation could be done “all at once,” is not an acceptable operation to perform, since such a calculation could never in reality be made by any observer, or at least could not be made at any given time.
I have a very strong intuition that infinity itself is incoherent and unreal, and that something like eternal inflation is therefore unlikely to be correct, or may not even be physically possible. However, I am certainly not an expert in this, and my feelings about the topic are not necessarily correct; still, my sense is that these sorts of questions are not fully worked out.
Part of what makes this challenging for me is that the numbers involved are so ridiculously much bigger than the numbers in longtermist calculations that even a very, very small chance the argument is correct would seem to warrant somewhat deeper consideration; at the very least, some specialists who work on these kinds of topics should weigh in on how likely it is that something like this could be correct.
Hey again quila, I really appreciate your incredibly detailed response. Unfortunately, I am again neglecting important things and really don’t have time to write a detailed reply; my sincere apologies for this! By the way, I’m really glad you got more clarity from the other post; I also found it very helpful.
Yes, I believe this is correct. I am pretty uncertain about this.
One reason for believing it might make more sense to say that what matters is the proportion of universes with greater positive versus negative value is that, intuitively, it feels like you should have to specify some time at which you are measuring the total amount of positive versus negative value across all universes, something we actually know how, in principle, to calculate at any given second. And at any given time along the infinite timeline of the multiverse, every younger second always has 10^10^34 times more weight than older seconds.
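To see why that weighting makes the youngest moment dominate, here is a minimal arithmetic sketch (my own illustration, not from the thread; the growth factor of 10^10^34 per second is taken from the paragraph above). Since the raw numbers overflow any float, the sketch works in log10 space.

```python
# If each younger second carries 10^(10^34) times the weight of the
# second before it, any weighted total is utterly dominated by the
# youngest second.  We track only log10 of the weights, since the raw
# numbers are far beyond floating-point range.

LOG10_GROWTH = 1e34  # log10 of the per-second weight ratio, i.e. 10^10^34


def log10_weight(seconds_ago: int) -> float:
    """log10 of the relative weight of a second `seconds_ago` in the past,
    normalized so the youngest second has weight 1 (log10 = 0)."""
    return -seconds_ago * LOG10_GROWTH


# The youngest second has weight 10^0 = 1; the second before it is
# already suppressed by a factor of 10^(10^34):
print(log10_weight(0))  # 0.0
print(log10_weight(1))  # -1e+34
```

The point of the sketch is just that, under this measure, every second in the past contributes a vanishingly small fraction of the total compared with "now," which is what drives the "proportion at a given time" framing.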
Nonetheless, it is totally plausible that you should calculate the total value of all universes that will ever exist as though from the perspective of an outside observer able to observe the infinity of universes in their entirety, all at once.
A very, very crucial point is that this argument is only trying to calculate what is best to do in expectation. Even if you have a strong preference for one or the other of these theories, you probably don’t have a preference stronger than a few orders of magnitude, so in terms of orders of magnitude it actually doesn’t make much difference which one you think is correct, as long as you have nonzero credence in the first method.
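The orders-of-magnitude claim can be made concrete with a tiny sketch (my own illustration, under the assumption from the thread that the value scale at stake is on the order of 10^10^34):

```python
import math

# Discounting the theory to 1% credence (vs. 100%) costs only two
# orders of magnitude, which is negligible against a value scale
# whose *exponent* is on the order of 10^34.

log10_full = 0.0                    # log10 of weight 1.0  (100% credence)
log10_partial = math.log10(0.01)    # log10 of weight 0.01 (1% credence)

orders_of_magnitude_lost = log10_full - log10_partial
print(orders_of_magnitude_lost)     # ~2 orders of magnitude

# Against a scale of 10^(10^34), i.e. 1e34 orders of magnitude, losing
# 2 of them is a rounding error of a rounding error:
print(orders_of_magnitude_lost / 1e34)
```

This is why the argument claims to go through at "nonzero credence": any non-vanishing weight on the theory preserves almost all of its expected-value magnitude.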
As a side point, I think that’s actually what is worrying/exciting about this theory as I think about it more: it’s hard to think of anything that could have more orders of magnitude of possible impact than this does, except of course theories on which you can either generate or fail to generate infinities of value within our universe. This theory does state that you are creating infinite value, since that value will last infinitely into the future universes, but if within this universe you create further infinities, then you have infinities of infinities, which trump singular or even just really big infinities.
Yes! I have been editing the post and added something somewhat similar before reading this comment; there are lots of weird implications related to this. Nonetheless, it always continues to be true that this theory might dominate many of the others in terms of expected value, so I think it could make sense to just add it as 1% of our portfolio of doing good (since 1% versus 100% would not even be a rounding error of a rounding error in terms of orders of magnitude), and hence we wouldn’t have to feel bad about otherwise ignoring it forever. I don’t know, maybe that’s silly. Yes, it certainly does seem like a theory that is unusually easy to compromise with!
And that’s a very interesting point about Boltzmann brains; I hadn’t thought of that before. I feel like this theory is so profoundly underdeveloped and uninvestigated that there are probably many, many surprising implications or crucial considerations hiding not too far away.
Sorry again for not replying in full, I really am neglecting important things that are somewhat urgent (no pun intended). If there is anything really important you think I missed feel free to comment again, I do greatly appreciate your comments, though just a heads up I will probably only reply very briefly or possibly not at all for now.
Hi Magnus, thank you for writing out this idea!
I am very encouraged that I am not the only one who thought of this (although perhaps, anthropically, I should be discouraged at not having been the first to discover it); also, see here.
I was thinking about running this idea by some physicists and philosophers to get further feedback on whether it is sound. It does seem like adding at least a small element of this to a moral parliament might not be a bad idea, especially considering that making it only 1% of the moral parliament would capture the vast majority of the value in terms of orders of magnitude. (Indeed, if at any given moment a single person encountering this idea just tried to “live in the moment,” or smiled for a second at the moment they thought of it, and then everyone forgot about it forever, we would still capture the vast majority [again, in orders-of-magnitude terms] of the value of the idea; and this continues to be true in every succeeding moment.)
Anyways, thanks for posting this, I am hoping to come back to my post sometime soon and add some things to it and correct a few mistakes I think I made. Let me know if you’d like to be involved in any further investigation of this idea! By the way, here’s the version I wrote in case you are interested in checking it out.
Hi Hans, I found your post incredibly helpful and validating, and much clearer than my own in some ways. I especially like the idea of "living in the moment" as a way of thinking about how to maximize value, I actually think this is probably correct and makes the idea potentially more palatable and less conflicting with other moral systems than my own framing.
Thank you, I appreciate your comment very much.
I realized upon reading your response that I was relying very heavily on people either watching the video I referenced or already being quite knowledgeable about this aspect of physics.
I apologize for not being able to answer the entire detailed comment, but I’m quite crunched for time, as I nerd-sniped myself into spending a few hours writing this post this morning when I had other important work to do haha…
Additionally, my response is relatively brief; I actually added it to the post itself, toward the beginning:
“Based on a comment below, to be clear, this is different from quantum multiverse splitting, as this splitting happens just prior to the Big Bang itself, causing the Big Bang to occur, essentially causing new, distinct bubble universes to form which are completely physically separate from each other, with it being impossible to causally influence any of the younger universes using any known physics as far as I am aware.”
That said, I think that in reference to the quantum multiverse, what you’re saying is probably true and a good defense against quantum nihilism.
For more detail on the multiple levels of multiverse I have in mind, see Max Tegmark’s “Mathematical Universe,” which is quite popular and, if I remember correctly, includes both of these in his four-level multiverse.
If I am mistaken in some way about this, though, please let me know!
On the meta stuff, however, I think you are probably correct and appreciate the feedback/encouragement.
I think when I have approached technical subjects that I’m not exceptionally knowledgeable about, I have at least once gotten a lot of pushback and downvotes, even though it soon after became clear that I was probably not mistaken and was likely even using the technical language correctly.
It seems this may have also occurred when I was not being appropriately uncertain and hesitant in my stylistic aesthetics or epistemic emphasis. Because of this, I have moved along the incentive gradient toward expressing higher uncertainty so as not to be completely ignored, though maybe I have moved too far in the other direction.
Intuitively though, I do feel this idea is a bit grotesque, and worry that if it became highly popular it might have consequences I actually don’t like.
While existential risks are widely acknowledged as an important cause area, some EAs, like William MacAskill, have argued that “trajectory change” may be highly contingent even if x-risk is solved, and so may be just as important for the long-term future. I would like to see this debated as a cause area.
Great, thank you!