
Epistemic status:

Here is the original post. Having since learned that other people have discovered this independently, and encouraged by other comments below, I now believe this idea may actually be very important. It needs to be corrected and expanded in several ways, so I will likely create a new, updated version later.

While I believe this idea’s consequences could be profound if correct, and it should be taken slightly more seriously than the April Fools’ post “Ultra-Near-Termism”, I consider it mostly a quirky novelty: I’ve spent relatively little time thinking about it and suspect it may have major flaws. At the very least, I would hate for it to be taken too seriously before more serious investigation, and would really appreciate anyone pointing out flaws in reasoning that would clearly invalidate it!

TL;DR

If we give the prominent “eternal inflation” theory of cosmology and evidential decision theory a non-zero credence, then what is morally important may be what we are able to influence in the smallest affectable unit of future time. This would have a lot of weird implications, but it seems there are reasonable, if also weird, arguments that may defeat it.

Eternal Inflation and the Youngness Paradox

In this video, between the start and 11:40, Matt O'Dowd explains Alan Guth's "Youngness Paradox": under "eternal inflation," a leading cosmological theory, the number of universes in existence grows by a factor of roughly 10^10^34 every second, eternally. In other words, every second, vastly more universes are born than were born in the previous second.
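To get a feel for how lopsided this weighting is, here is a minimal sketch in Python. This is an illustration only: the actual factor of 10^10^34 per second is far too large to compute with, so a small stand-in growth factor r is used; the structure of the calculation is the same.

```python
from fractions import Fraction

# If universes born in second n carry weight proportional to r**n (r being
# the per-second growth factor), this computes the exact share of the total
# weight held by the youngest generation. As r grows, that share approaches
# 1 - 1/r, i.e. the newest universes hold essentially all the weight.
def youngest_share(r: int, n: int) -> Fraction:
    total = sum(Fraction(r) ** k for k in range(n + 1))
    return Fraction(r) ** n / total

for r in (2, 10, 1000):
    print(r, float(youngest_share(r, 50)))
```

For r = 2 the youngest generation holds about half the total weight; for r = 1000, about 99.9%; for a growth factor of 10^10^34 the youngest generation's share would be indistinguishable from 100%.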

Based on a comment below, to be clear, this is different from quantum multiverse splitting: this splitting happens at the moment of the Big Bang itself, causing the Big Bang to occur, essentially causing new, distinct bubble universes to form which are completely physically separate from each other; as far as I am aware, it is impossible to causally influence any of the younger universes using any known physics. Essentially, these are two different levels of the four-level multiverse proposed by Max Tegmark.

Evidential Ultimate Neartermism

Anthropically/evidentially, this means that at any point in time, the sum of all younger universes carries exponentially more weight than the older universes, and almost all intelligent life that exists is that which is youngest across all universes. Therefore, if we are trying to maximize the amount of good across all universes, what we should evidentially care about is what is happening soonest in time across all universes, including our own, since what happens soonest in time is always mind-bogglingly weightier, and hence more valuable, than what happens later. If correct, this swamps longtermist arguments by a very, very large factor, and does so in expectation even if we give this theory and evidential decision theory an absurdly low credence.

If we follow this line of argument, then what we should altruistically hope for is that in every universe, everyone is trying to do whatever they can to maximize value in the next smallest affectable unit of time. This means there is at least some possibility that the morally correct thing to do is to try to maximize value in the next ~millisecond (most likely for yourself, as it seems difficult to affect others that rapidly), and to be doing this at all times; hence, "Ultimate Neartermism."

Possible escape routes and implications

To be clear, I think that the same arguments that I suggest defeat Pascal’s Mugging and actually support longtermism may also defeat the case for blindly accepting this idea. Namely, we should give a non-zero credence that we might eventually figure out how to create a perpetual motion machine, or other ways that we might create infinite value, such as if we accept a cyclic universe or Penrose's Conformal Cyclic Cosmology model, or various infinite multiverses or many other weird anthropic or highly speculative theoretical scenarios (although I can't say for certain any of these aren't also subject to the same youngness paradoxes due to different sizes of infinity). Another weird possibility, for example, is that even if we give an absurdly low but nonzero credence that we might be able to influence all of the younger universes being created by eternal inflation, then perhaps that is what we should actually try to pursue: increasing the goodness of the exponentially huge number of future younger universes.

It seems to me that we should give this theory at least some credence, but its consequences are bizarre enough, and potentially horrible enough from other perspectives, that it seems worth thinking about carefully rather than blindly accepting it. Doing further research on the value, likelihood, and consequences of this idea would, of course, mean losing a factor of 10^10^34 of the value of the universe for every second this research takes (unless this research is, moment to moment, the most intrinsically valuable thing we could do). However, because this loss of value continues indefinitely, if the theory is correct, doing the research is still worth incalculably more than other courses of action if it leads us to discover the idea is correct and act on it from then on.

Interestingly, from the viewpoint of ultimate neartermism itself, researching how to maximize value in the next millisecond is of comparatively infinitesimal value. You lose so much value by doing the research that even a practically nonexistent credence that you could instead increase your next-millisecond value by a virtually negligible amount outweighs the value of that research, by a factor many, many, many times larger than the difference in value between this research and the entire value of the future of our universe.

In fact, according to this theory, virtually 100% of the value of the eternal inflation level of the multiverse is determined by how valuable the very first instant of consciousness is, on average, across all universes. Everything after that initial moment loses a factor of 10^10^34 in value for every second that passes. Let’s hope that the very first spark of consciousness is a happy one!

That may make you feel like your impact is very small, since we are this late in the universe, and the entire sum of all moments across all older universes in the entire multiverse is always approximately 10^10^34 times less valuable than the first sparks of consciousness appearing in the very youngest universes in which consciousness is appearing in just this moment (of course, in the next moment those sparks of consciousness are old news and become just as approximately irrelevant as the rest of the older multiverse.)

However, you can take some comfort in the fact that the amount of impact you are having must be at least infinite, in expectation, since the “eternal” in “eternal inflation”, if I understand correctly, implies that the eternal inflation level of the multiverse is eternally expanding and never stops. It’s just that there are different sizes of infinity, and the amount of infinite impact you are having is much, much smaller than the total infinite impact occurring; and in a sense what really matters is the ratio of infinite impact between all universes (see: averagism.)

One of the most bizarre consequences of this idea is that it is extremely anti-memetic: if you accept it and are altruistic, then you should immediately start trying to maximize value in the following millisecond and continue doing so as long as you are able. This means you should put zero effort into spreading this theory, as plans that far in the future have virtually zero value compared to the evidential impact beings like you will have in the succeeding millisecond. Due to this fact, and the unappealing nature of the idea, it seems unlikely to catch on, unfortunately, even if correct.

Comments

Some prior discussion here.

I've also considered the implications of combining inflation and evidential decision theory for moral priorities: https://substack.com/@hansgundlach/p-138186179

this is very interesting, thank you!

my thoughts, ~ in order:
(they all are conditional on this described cosmic inflation theory being true)

  1. this seems to imply there's a constant 'time' factor operating over all worlds at once, and that it's coherent to say that the arisal of a new universe happens at the 'same time as' some specific point in {time of a specific universe}.
    • i don't study physics, so i guess it could be true (and we can imagine programs which share that structure)! my intuition says it could also be that each universe happens 'all at once', relative to the arisal of universes, i.e. that there's a separate meta-time that universe generation happens along. if so, longtermism is preferred again.

      possibly, in the actual physics theory, there's no 'meta' or ontologically-fundamental separation between universes, just causal separation (like the 'unobservable universe' in standard cosmology). this would probably imply a shared time / lack of 'meta-time'

    • (the below thoughts are as if that implication is true)
  2. for any set of universes in which the short-term-preferring tradeoff is made, that set will be worse off in the long-term. so this does, in a sense, trade short-term happiness for a larger amount of long-term suffering; it's just that that larger amount, by the time it happens, is already outweighed by the short-term happiness in a vastly larger amount of future universes (which itself will also soon be outweighed in those same universes, but not before.. and so on).
    • this opens the door to two possible values: those which care about always-not-yet-infinite-time, and those which care about 'infinite' time as if over the entirety of this neverending pattern. for the latter, longtermism is optimal.
      • if you're doing anthropic reasoning, i.e without grounding in any particular world, it's always the case that your instantiations are more common in the 'last' generation. but there is no 'last' generation, because the pattern is neverending.
        • my guess is that there's no 'correct anthropic solution' in response to this, and it just depends on what algorithm an agent is running, but i can imagine some which intake this situation and reason over it as an infinite rather than always-increasing-finite set as a result.[1]
  3. there's a way to almost-maximize both this and the parts of our values which go against this:

    first, and immediately, take the action which results in the most well-being (or other value) in the immediate term. (alas, i was not able to figure out how to make myself feel happy in time)

    anything done beyond that brief period is so very exponentially small in comparison. (also, the first such brief period was actually way before we even existed, as is most life[2])

    given that, you've achieved the vast majority of value that will ever exist under those assumptions (or values). you may now spend the rest of time maximizing your other values (in particular, the 'infinite-time' framing of value described in 2).

    • though, i notice another odd way in which "anything done beyond that brief period is so very exponentially smaller in comparison" could be not quite true. if earlier in-universe-time is so much exponentially larger, then we might expect it to contain an exponentially large amount of boltzmann-brain-like-situations[3] occurring at the very start of times. to bite this bullet would suggest one of two things: 
      • always acting as an immediate-termist, so that a correlated action is taken in the earliest boltzmann-situations
        • (also see fn3[3])
        • or for some values, if extreme suffering is, say, of a different class than lighter suffering, always more important than any amount of light suffering; acting to minimize the soonest nearby instance of that.
      •  or a different strategy that is probably much harder for humans, but maybe possible for ASIs; if, somehow, your choices don't just acausally correlate to the actions of versions of you in boltzmann context, but also correlate to which boltzmann contexts arise to begin with. (because your actions are determined by the past in general, so what past is coherent depends on what choice you make)

(reminder: these are just thoughts that i had conditional on the physics theory in the post being true, and i don't have an inside-view belief about whether it is.)

  1. ^

    (slightly edited from fn1 of my earlier comment that misunderstood OP):

    possible values respond differently to infinite quantities.

    for some, which care about quantity, they will always be maxxed out along all dimensions due to infinite quantity. (at least, unless something they (dis)value occurs with exactly 0% frequency, implying a quantity of 0 - which could, i think, be influenced by portional acausal trade in certain logically-possible circumstances. (i.e maybe not the case in 'actual reality' if it's infinite, but possible at least in some mathematically-definable infinite universes; as a trivial case, a set of infinite 1s contains no 0s. more fundamentally, an infinite set of universes can be a finite set occurring infinite times.))

    other values might care about portion - that is, portion of / percentage-frequency within the infinite amount of worlds - the thing that determines the indexical probability of an observation in that context - rather than quantity. (e.g., i think my altruism still cares about this, though it's really tragic that there's infinite suffering).

    note this difference is separate from whether the agent conceptualizes this world as finite-increasing or infinite

  2. ^

    (as an aside, this means that us existing now does not resolve the 'youngness paradox'; that would require we exist at the first moment of the first observer)

    i would rather dissolve the youngness paradox by saying that the probability of our existence is still logically guaranteed (i.e. 1), even if it's exponentially small in comparative frequency (rather than logical probability); and to the extent it could answer the fermi paradox, answer that instead with mutual anthropic capture

  3. ^

    not necessarily lone brains, whose actions don't affect what observations they receive next as they soon dissolve; but maybe, in particular if it's required for our actions to have a correlated effect, situations where at least some brief period of action is possible before dissolution.

    maybe we'd select for the smallest possible context in which there's a copy of us, where it can take some form of action, such as 'thinking the happiest thought it can' rather than something requiring physical movement, on the basis that such smaller configurations will randomly occur exponentially more often; and then take that smallest action ourselves for the correlated effect.

    implies the correct action would be to try to have happy thoughts immediately

Hey again quila, really appreciate your incredibly detailed response, although again I am neglecting important things and unfortunately really don’t have any time to write a detailed response, my sincere apologies for this! By the way, really glad you got more clarity from the other post, I also found this very helpful.

  1. Yes, I think there is a constant time factor. It is all one unified, single space-time, as I understand it (although this also isn’t an area of very high expertise for me). I think that what causally separates the universes is simply that space is expanding so fast that the universes are separated by incredible amounts of space and don’t have any possibility of colliding again until much, much later in the universes’ timelines.
  2. Yes, I believe this is correct. I am pretty uncertain about this.

    A reason for believing it might make more sense to say that what matters is the proportion of universes with greater positive versus negative value is that, intuitively, it feels like you should have to specify some time at which you are measuring the total amount of positive versus negative value in all universes, something which we actually know how to calculate, in principle, at any given second; and at any given time along the infinite timeline of the multiverse, every younger second always has 10^10^34 times more weight than older seconds.

    Nonetheless, it is totally plausible that you should calculate the total value of all universes that will ever exist as though from an outside observer perspective that is able to observe the infinity of universes in their entirety all at once.

    A very, very crucial point is that this argument is only trying to calculate what is best to do in expectation, and even if you have a strong preference for one or other of these theories, you probably don’t have a preference that is stronger than a few orders of magnitude, so in terms of orders of magnitude it actually doesn’t make much of a difference which you think is correct, as long as there is nonzero credence in the first method.

    As a side point, I think that’s actually what is worrying/exciting about this theory as I think about it more: it’s hard to think of anything that could have more orders of magnitude of possible impact than this does, except of course any theories where you can either generate or fail to generate infinities of value within our universe. This theory does state that you are creating infinite value, since this value will last infinitely into the future universes, but if within this universe you create further infinities, then you have infinities of infinities, which trump singular or even just really big infinities.

  3. Yes! I have been editing the post and added something somewhat similar before reading this comment; there are lots of weird implications related to this. Nonetheless, it always continues to be true that this theory might dominate many of the others in terms of expected value, so I think it could make sense to just add it as 1% of our portfolio of doing good (since 1% versus 100% would be not even a rounding error of a rounding error in terms of orders of magnitude), and hence we don’t have to feel bad about ignoring it forever. I don’t know, maybe that’s silly. Yes, it certainly does seem like it’s a theory which is unusually easy to compromise with!

    And that’s a very interesting point about the Boltzmann brains, I hadn’t thought of that before. I feel like this theory is so profoundly underdeveloped and uninvestigated that there are probably many, many surprising implications or crucial considerations hiding not too far away.

Sorry again for not replying in full, I really am neglecting important things that are somewhat urgent (no pun intended). If there is anything really important you think I missed feel free to comment again, I do greatly appreciate your comments, though just a heads up I will probably only reply very briefly or possibly not at all for now.

it's okay if you don't reply. my above comment was treating this post as a schelling point to add my thoughts to the historical archive about this idea.

about 'living in the moment' in your other comment: if we ignore influencing boltzmann brains/contexts, then applying 'ultimate neartermism' now actually looks more like being a longtermist to enable eventual acausal trades with superintelligence* in a younger universe-point. (* with 'infinite time' values, so the trade is preferred to them)

A very, very crucial point is that this argument is only trying to calculate what is best to do in expectation, and even if you have a strong preference for one or other of these theories, you probably don’t have a preference that is stronger than a few orders of magnitude, so in terms of orders of magnitude it actually doesn’t make much of a difference which you think is correct, as long as there is nonzero credence in the first method.

i'm not sure if by 'these theories' you meant different physics theories, or these different possible ways of valuing a neverending world (given the paragraphs before the quoted one). if you meant physics theories, then i agree that such quantitative differences matter (this is a weak statement as i'm too confused about infinite universes with different rates-of-increasing to have a stronger statement).

if you meant values:

  • that's not how value functions have to be. in principle example: there could be a value function which contains both these and normalizes the scores on each to be within -1 to 1 before summing them.
  • i don't think it's the case that the former function, unnormalized, has a greater range than the latter function. intuitively, it would actually be the case that 'infinite time' has an infinitely larger range, but i suspect this is actually more of a different kind of paradox and both would regard this universe as infinite.
    • paradox between 'reason over whole universe' and 'reason over each timestep in universe'. somehow these appear to not be the same here.

      i don't actually know how to define either of them. i can write a non-terminating number-doubling-program, and i guess have that same program also track the sum so far, but i don't know what it actually means to sum an (at least increasing) infinite series.

      actually, a silly idea comes to mind: (if we're allowed to say[1]) some infinite series like [1/2 + 1/4 + 1/8 + ...] sum to a finite number (1 in that case), then we can also represent the universe going backwards with a decreasing infinite series. i.e., [1 + (1 ÷ 10^10^34) + (1 ÷ 10^10^34^2) + ...], where the first term represents the size of the end rather than start of the universe. this way, the calculation at least doesn't get stuck at infinity. this does end up more clearly implying longtermism, while maintaining the same ratio between size of universe at different times.

      but it's also technically wrong, if the universe has a start but no end, rather than an end but no start.

      (though in my intuitive[2] math system, these statements are true: [1/inf > 0], [1/inf × inf = 1], [2/inf × inf = 2]. this could resolve this by letting the start of the universe be represented as 1/10^10^34^inf, so that the increasing infinite series starting from here has the same sum as the decreasing infinite series above.)
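(a quick check that the decreasing series above really does stay finite: treating it as geometric with ratio 1/r, its partial sums approach the closed form r/(r-1). a minimal python sketch, with a small toy r standing in for 10^10^34:)

```python
from fractions import Fraction

# the decreasing series 1 + 1/r + 1/r**2 + ... is geometric, so its partial
# sums converge to the finite value r/(r-1). a toy r stands in for the real
# factor 10**10**34, which is far too large to compute with directly.
r = Fraction(10)
partial = Fraction(0)
for k in range(30):
    partial += 1 / r**k

print(float(partial))      # partial sum after 30 terms
print(float(r / (r - 1)))  # closed form the series converges to
```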

  1. ^

    (i'm not a mathematician). i don't understand how an infinite series can be writable in existing formal languages - it seems like it would require a '...' ('and so on...') operation in the definition itself, but '...' is not {one of the formally allowed operations}/defined.

  2. ^

    meant as a warning that this is not formal or well-understood by me. not meant as legitimation.

    that said, i think a formal system which allows these along with other desirable math is possible in principle (and this looks related), maybe in a trivial way

    as a simpler intuition for why such x/inf statements can be useful: if there is a sequence of infinite 0s which also contains, somewhere in it, just one 1, the portion of 1s is not 0 but 1 in infinity or 1/inf. similar: an infinite sized universe with finite instances of something (which is also trivially possible, e.g a unique center with repeatingly infinite area outwards from it)

Hi Hans, I found your post incredibly helpful and validating, and much clearer than my own in some ways. I especially like the idea of "living in the moment" as a way of thinking about how to maximize value, I actually think this is probably correct and makes the idea potentially more palatable and less conflicting with other moral systems than my own framing.

Thanks for the feedback! I'll try to share the post more. As for the in-depth feedback above, I don't have any quick way to synthesize my thoughts, but I'll try to update any future posts. Glad cosmological ideas are getting discussed on the EA forum.

(edit: the author meant something different and more interesting than what i thought they did when writing this and my next reply. see my reply there for my thoughts on what they really meant)

thanks for sharing this idea. it has a premise that seems conditionally true to me (and that i hadn't considered before). after that, i think there's a basic logical mistake. (i appreciate that you posted this even under uncertainty).

short version of my objection: the 'ultimate neartermism' argument would apply if the amount of worlds instead started very large, and exponentially diminished over time. because the future of an early world is instead expected to be exponentially many, this fact would instead increase the importance of taking actions which influence the futures of that early world. (the amount of worlds after x days is the same regardless of whether you optimize the near-term or the long-term; but the qualities of these futures differ based on your choice, such as the portion which contain life)

(the original drawn-out-example-case version of my objection, which may be less intuitive to follow, is in a footnote)[1]

i think the premise, that influencing the future of one[2] earlier world is more valuable than influencing the future of one later one, is true (if two assumptions are[3]) and interesting. this would be action-relevant in the following situation:

  • you're uncertain about how long the past is. you think it could either be {possibility 1: the past is 1n years long} or {possibility 2: the past is 2n years long}
  • you would want to act differently in either possibility, for some reason.
    • (e.g., let's say that earlier on there's a higher risk of asteroids or something) 
  • the future of possibility 1 being larger favors taking the actions you would in that world.
    • also account for the footnote[2]

finally, and more meta:

While I believe this idea’s consequences could be profound if correct, and it should be taken slightly more seriously than April Fool’s post “Ultra-Near-Termism”,  I consider it mostly a quirky novelty

i disagree with the part i italicized. ideas whose consequences are profound if true, and where one doesn't see a logical reason for the idea to be false, warrant a correspondingly large amount of investigation. an idea being 'a quirky novelty' as you put it, or weird-seeming, or not what other EAs seem to be thinking about, does not, in principle, mitigate its importance.

two other factors in the importance of thinking about such ideas:

  • they might be harder for someone to think about, which could reduce the likelihood that thinking about it more would improve one's beliefs about it.
    • (though, other cases of 'hard to think about' could mean there's really useful deconfusion to be had, particularly if something violates one's background ontology and so is confusing but only at first.)
  • others might be unlikely to think of such an idea. this increases its neglectedness / makes it more important to raise it for consideration like you did.

(i've been in a mindset before where 'what other people think' is automatically more relevant than it should be, and i diminish ideas i have as, e.g., 'probably not important because others would have thought of them' or 'probably not true because they imply a common background assumption is false'; i think it's liberating to be free of this mentality.)

  1. ^

    conditional on 'exponentially branching worldstates' like you described, actions which influence the futures of an earlier single world are more impactful than actions which influence the futures of a later single world. note the wording, 'influence the futures of', rather than the present.

    as an explanatory toy model, let's say there are copies of you in only one worldstate, and the world branches by a factor of 2 each day (after tomorrow there will be two similar worlds each containing a copy of you, then 4 the next day, and so on). if you maximize 'what is happening soonest in time' each day, then on day 16 there will be 32768 worlds, with a (probably pretty similar) version of you in each. if you keep on maximizing 'what is happening soonest in time', then eventually a static % of worlds start to become lifeless due to unreduced x-risks.

    if you instead spend each day trying to first reduce x-risks, then, although there is the same amount of worlds and versions of you after 16 days, the x-risks stop happening sooner (or happen at a lower rate), resulting in a greater total portion of live futures. (the amount of worlds after x days is the same regardless of your choice, but not the portion which are alive)
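(the toy model in this footnote can be run directly. a minimal sketch; the 5%-per-day and 1%-per-day extinction rates are made-up numbers purely for illustration:)

```python
# toy model: worlds double each day, so by day 16 there are 2**15 = 32768
# worlds under *either* strategy. branching multiplies live and dead worlds
# alike, so it cancels out of the live *portion*; only the per-day x-risk
# rate (left unreduced vs. reduced) changes that portion.
def live_portion(days: int, daily_risk: float) -> float:
    alive = 1.0
    for _ in range(days):
        alive *= 1.0 - daily_risk
    return alive

worlds = 2 ** 15  # world count by day 16, identical for both strategies
print(worlds, live_portion(15, 0.05), live_portion(15, 0.01))
```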

  2. ^

    (i use this odd 'one [earlier/later] world' phrasing because it's possible that, for example, on day 2 the actions of both copies of you are correlated, so your choice is still influencing the rest of the future, not just half of it)

  3. ^

    the assumptions:

    1. the world is branching
    2. total time* is finite

    * by total time, i mean time during which (dis)value happens; this could mean pre-'heat death'

    as i have not studied physics, i'm uncertain about both of these.

Thank you, I appreciate your comment very much.

I realized upon reading your response that I was relying very heavily on people either watching the video I referenced or already being quite knowledgeable about this aspect of physics.

I apologize for not being able to answer the entire detailed comment, but I’m quite crunched for time, as I nerd-sniped myself into spending a few hours writing this post this morning when I had other important work to do haha…

Additionally, I think the response I have is relatively brief, I actually added it to the post itself toward the beginning:

“Based on a comment below, to be clear, this is different from quantum multiverse splitting, as this splitting happens just prior to the Big Bang itself, causing the Big Bang to occur,  essentially causing new, distinct bubble universes to form which are completely physically separate from each other, with it being impossible to causally influence any of the younger universes using any known physics as far as I am aware.”

That said, I think that in reference to the quantum multiverse, what you’re saying is probably true and a good defense against quantum nihilism.

For more detail on the multiple levels of multiverse I have in mind, see Max Tegmark’s “Mathematical Universe,” which is quite popular and, if I remember correctly, includes both of these in his four-level multiverse.

If I am mistaken in some way about this, though, please let me know!

On the meta stuff, however, I think you are probably correct and appreciate the feedback/encouragement. 

I think when I have approached technical subjects that I’m not exceptionally knowledgeable about, I have at least once gotten a lot of pushback and downvotes, even though it soon became clear that I was probably not mistaken and was even likely using the technical language correctly. 

It seems this may have also occurred when I was not being appropriately uncertain and hesitant in stylistic aesthetics or epistemic emphasis; because of this, I have moved along the incentive gradient toward expressing higher uncertainty so as to not be completely ignored, though maybe I have moved too far in the other direction. 

Intuitively though, I do feel this idea is a bit grotesque, and worry that if it became highly popular it might have consequences I actually don’t like.

this is different from quantum multiverse splitting, as this splitting happens just prior to the Big Bang itself, causing the Big Bang to occur,  essentially causing new, distinct bubble universes to form which are completely physically separate from each other, with it being impossible to causally influence any of the younger universes using any known physics as far as I am aware.

to paraphrase what i think you mean: "new universes are eternally coming into existence at an exponentially increasing rate, and no universe can be causally influenced by actions in other ones". in that case:

  • because they're all causally separated, we can ignore which are newer or older and just model the portions between them.
    • (it's true that most copies of us would exist in later universes)
  • given causal separateness: apart from acausal trade, the best action is the same as if there were only one world: to focus on the long term (of that single universe).
    • (considerations related to acausal trade and infinite universe amount in footnote)[1]

i don't see where this implies ultimate-neartermism. below i'll write where i think your reasoning went wrong, if i understood it correctly. (edit: i read hans' post, and i now see that you indeed meant something different! i'll leave the below as an archive.)

[there are exponentially more younger (where younger means later) universes, therefore...] if we are trying to maximize the amount of good across all universes, what we should evidentially care about is what is happening soonest in time across all universes

i could have misinterpreted this somehow, but it seems like a mistake mainly of this form:

  1. (premise) statement A is true for set Y.
  2. statement A being true for set Z would imply statement B is true for set Z.
  3. therefore statement B is true for set Z.

the inference to (3) is invalid, because it has not been established that statement A is true of set Z, only that it's true of set Y.

Applying this to the quote:

  1. for Y:[the set of all possible universes], A:[most universes are younger (existing later in time)].
  2. ~A:[most moments[2] are younger (beginning later in time)] being true for Z:[moments within a single universe] implies B:[the majority of moments are the last[3] possible one] for Z
  3. therefore B is true for Z

(my original natural language phrasing: though there are vastly more younger [later] universes, this does not imply younger [later] points in time within a single universe's time are quantitatively more than those at earlier points.)

  1. ^

    i think these are both orthogonal to your argument for 'ultimate neartermism'.

    for acausal trade considerations, just model the portions of different utility across worlds and make the trade accordingly.

     

    though, new universes coming into existence 'eternally' (and at a non-diminishing rate) implies an infinite amount. possible values respond differently to this.

for some, which care about quantity, they will always be maxed out along all dimensions due to infinite quantity* - at least, unless something they care about occurs with exactly 0% frequency - which could, i think, be influenced by portional acausal trade in certain logically-possible circumstances. (i.e. maybe not possible for 'actual reality', but possible at least in some mathematical universes)

other utility functions might care about portion - that is, portion of / frequency within the infinite amount of worlds - rather than quantity. (e.g., i think my altruism still cares about this, though it's really tragic that there's infinite suffering). these ones acausally trade with each other.

    * actually, that's not necessarily true. it can be reasoned that the amount is never actually infinite, only exponentially large, no matter how long it continues (2^x never does reach infinity), in which case at any actual point in time, quantity can still be increased / decreased.

  2. ^

    (also, a slightly different statement A is used in (2): about moments rather than universes)

  3. ^

it seems to me that another understandable language mistake was made: a 'younger universe' (i.e. a universe which began to exist after already-existing (older) ones) sounds like it would, when translated to a single universe, mean 'an earlier point in time within that universe'; after all, a universe where less time has passed is younger. but 'younger' actually meant 'occurs later', in that context, plus we're now discussing moments rather than universes.

This sounds like one of those puzzles of infinities. If you take the limits in one way then it seems like one infinity is bigger than another, but if you take the limits a different way then the other infinity seems bigger.

A toy version: say that things begin with 1 bubble universe at time 0 and proceed in time steps, and at time step k, 10^k new bubble universes begin. Each bubble universe lasts for 2 time steps and then disappears. This continues indefinitely.

Option A: each bubble universe has a value of 1 in the first time step of its existence and a value of 5 in its second time step. (Then it disappears, or forever after has value 0.)

Option B: each bubble universe has a value of 3 in the first time step of its existence and a value of 1 in its second time step. (Then it disappears, or forever after has value 0.)

This has the same basic structure as the setup in the post, though with much smaller numbers.

We could try summing across all bubble universes at each time step, and then taking the limit as the total number of time steps increases without bound. Option B is 3x as good in the zeroth time step (value of 3 vs. 1), 2.125x as good through the next time step (value of 34 vs. 16), about 2.072x as good through the next time step (value of 344 vs. 166), and in the limit as the number of time steps increases without bound it is 2.0666... times as good (31/15). That is how this post sets up its comparison of infinities (with larger numbers so the ratio would be much more lopsided).

Instead, we could try summing within each bubble universe across all of its time steps, and then sum across all complete bubble universes. Each bubble universe has a total value of 6 in Option A vs. 4 in Option B, so Option A is 1.5x as good for each of them. Option A is 1.5x as good for the first bubble universe that appears (6 vs. 4), and for the first 11 bubble universes it is 1.5x as good (66 vs. 44), and for the first 111 bubble universes it is 1.5x as good (666 vs. 444), and if you take the limit as the number of bubble universes increases without bound it is 1.5x as good. This matches the standard longtermist argument (which has larger numbers so the ratio would be more lopsided).
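The two orders of summation in this toy model can be checked directly. The sketch below (function names are my own, not from the comment) reproduces the cumulative totals and ratios quoted above for each option:

```python
# Toy model from the comment above: at time step k, 10**k new bubble
# universes begin, and each bubble lasts exactly two time steps.
# Option A is worth 1 in its first step and 5 in its second;
# Option B is worth 3 in its first step and 1 in its second.
from fractions import Fraction

def cumulative_by_time(first, second, steps):
    """Sum value across all bubbles alive at each time step, cumulatively."""
    total = 0
    for k in range(steps):
        born_now = 10**k                         # bubbles in their first step
        born_prev = 10**(k - 1) if k > 0 else 0  # bubbles in their second step
        total += born_now * first + born_prev * second
    return total

# Ordering 1: sum across bubbles at each time step, then let steps grow.
a = [cumulative_by_time(1, 5, n) for n in (1, 2, 3)]  # [1, 16, 166]
b = [cumulative_by_time(3, 1, n) for n in (1, 2, 3)]  # [3, 34, 344]
ratios = [Fraction(x, y) for x, y in zip(b, a)]       # 3, 2.125, ~2.072
print(ratios)  # approaches 31/15 = 2.0666... in the limit

# Ordering 2: sum within each complete bubble first, then across bubbles.
# Every bubble is worth 6 under A and 4 under B, so the ratio is fixed.
print(Fraction(1 + 5, 3 + 1))  # 3/2, i.e. Option A is 1.5x as good
```

The per-step terms make the limiting ratio visible: from step 1 onward, each step adds 15·10^(k-1) under Option A and 31·10^(k-1) under Option B, hence 31/15 in one ordering, while the per-bubble ordering gives 6/4 = 3/2.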

Yes… So basically what you’re saying is that this argument goes through if you sum across all bubble universes at each individual time step, but longtermist arguments go through if you take a view from outside the multiverse and sum across all points of time in all bubble universes simultaneously?

I guess my main issue is that I’m having trouble philosophically or physically stomaching this. It seems to touch on a very difficult ontological/metaphysical/epistemological question: is it coherent to sum over all points in space-time across infinite time, as though all of the infinite future already “preexists” in some sense? On the other hand, taking such an “outside view” of infinite space-time, as though the calculation could be done “all at once,” may not be an acceptable operation to perform, since such a calculation could never in reality be made by any observer, or at least could not be made at any given time.

I have a very strong intuition that infinity itself is incoherent and unreal, and therefore that something like eternal inflation is not actually likely to be correct, or may not even be physically possible. However, I am certainly not an expert in this, and my feelings about the topic are not necessarily correct; my sense is that these sorts of questions are not fully worked out.

Part of what makes this challenging for me is that the numbers are so ridiculously much bigger than the numbers in longtermist calculations that even a very, very small chance of this being correct would make me think it should get somewhat deeper consideration; at the very least, some specialists who work on these kinds of topics should weigh in on how likely it seems that something like this could be correct.
