Content Warning: Death, eschatology, axiology

~10^80 years from now.

We failed to overcome the laws of thermodynamics. Entropy wins. We scavenged all the energy in the affectable universe. For all our aestivation and Dyson spheres around black holes—everything must come to an end. Ideas were getting harder to find: even our AI couldn't build quantum bridges to other expanding cosmological branches. It might have worked in base reality, but better not to cling to imaginings of the unattainable.

And so we have ever less energy to simulate our brains. It gets ever harder. With less energy, we are forced to slowly shut systems down.

First, some consciousnesses have to go. It’s all a bit blurry in terms of who is who since we created the hive mind, but it’s noticeable that some people are missing. I’m beginning to see the end—of how it all goes down between me and them. We shut more people down. It’s not unlike genocide, but ultimately more cooperative. Until, in the end, just one simulation is left. It’s you.

But then soon it’s like Alzheimer’s. You slowly need to shut down cognitive systems. You start with memories. At first, it’s useless trivia, but at some point, you need to let go of things like the sciences. Then the senses. Eventually, you’ll have to delete the meta-awareness of your futile and increasingly infantile state. You’re surprised how refreshingly emotional and raw you feel.

Finally, there it is: you, born so many years ago and, thanks to advances in longevity in the 21st century, still around, think the last conscious thoughts of the universe, just before, at last, there’ll be nothing but eternal darkness:

“What a ride we had… oh well. But what ought I to do now? The long reflection didn’t prepare me for this moment... Rational judgement. Virtuous action. Willing acceptance of what I can't change—now, at this very moment—wasn’t that all I ever needed?

But my life is departing and I've only one thought left. I could create just a little more hedonium... Free this world of pain and complete this cosmic puzzle as I have been taught... but how? How to be consequentialist about everything? For this last action has no consequences. I know I should not waste it. But what I think now is that…

Answers

...it was beautiful. And that is good.

~fin

"I personally would like the last experience in the universe to be something akin to being wrapped in a warm blanket and very slowly falling asleep, an experience of comfort (and not much discursive thought)" - Niplav

That is very cute :-)

H/t to Qualia Computing

"Desiring that the universe be turned into Hedonium is the straightforward implication of realizing that everything wants to become music.

The problem is… the world-simulations instantiated by our brains are really good at hiding from us the what-it-is-likeness of peak experiences. Like Buddhist enlightenment, language can only serve as a pointer to the real deal. So how do we use it to point to Hedonium? Here is a list of experiences, concepts and dynamics that (personally) give me at least a sort of intuition pump for what Hedonium might be like. Just remember that it is way beyond any of this:

Positive-sum games, rainbow light, a lover’s everlasting promise of loyalty, hyperbolic harmonics, non-epiphenomenal bliss, life as a game, fractals, children’s laughter, dreamless sleep, the enlightenment of emptiness, loving-kindness directed towards all sentient beings of past, present, and future, temperate wind caressing branches and leaves of trees in a rainforest, perfectly round spheres, visions of a giant yin-yang representing the cosmic balance of energies, Ricci flow, transpersonal experiences, hugging a friend on MDMA, believing in a loving God, paraconsistent logic-transcending Nirvana, the silent conspiracy of essences, eating a meal with every flavor and aroma found in the quantum state-space of qualia, Enya (Caribbean Blue, Orinoco Flow), seeing all the grains of sand in the world at once, funny jokes made of jokes made of jokes made of jokes…, LSD on the beach, becoming lighter-than-air and flying like a balloon, topological non-orientable chocolate-filled cookies, invisible vibrations of love, the source of all existence infinitely reflecting itself in the mirror of self-awareness, super-symmetric experiences, Whitney bottles, Jhana bliss, existential wonder, fully grasping a texture, proving Fermat’s Last Theorem, knowing why there is something rather than nothing, having a benevolent social super-intelligence as a friend, a birthday party with all your dead friends, knowing that your family wants the best for you, a vegan Christmas Eve, petting your loving dog, the magic you believed in as a kid, being thanked for saving the life of a stranger, Effective Altruism, crying over the beauty and innocence of pandas, letting your parents know that you love them, learning about plant biology, tracing Fibonacci spirals, comprehending cross-validation (the statistical technique that makes statistics worth learning), reading The Hedonistic Imperative by David Pearce, finding someone who can truly understand you, realizing you can give up your addictions, being set free from prison, Time Crystals, figuring out Open Individualism, G/P-spot orgasm, the qualia of existential purpose and meaning, inventing a graph clustering algorithm, rapture, obtaining a new sense, learning to program in Python, empty space without limit extending in all directions, self-aware nothingness, living in the present moment, non-geometric paradoxical universes, impossible colors, the mantra of Avalokiteshvara, clarity of mind, being satisfied with merely being, experiencing vibrating space groups in one’s visual field, toroidal harmonics, Gabriel’s Oboe by Ennio Morricone, having a traditional dinner prepared by your loving grandmother, thinking about existence at its very core: being as apart from essence and presence, interpreting pop songs by replacing the “you” with an Open Individualist eternal self, finding the perfect middle point between female and male energies in a cosmic orgasm of selfless love, and so on."

Comments

Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I'm confused by the line "I could create just a little more hedonium". My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?

I ended up interpreting things as if "hedonium" was meant to mean "utility", and the narrator is deciding what their last thought should be - how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly - or if I was incorrect, I hope this feedback is helpful  :)

As Roland Barthes said, 'the author is dead', but in my book your interpretation is right on the money.

I liked your interpretation of how to create hedonium in such a circumstance!

I really hope that humanity and its descendants think about what they want to experience last a very long time before it actually happens. I personally would like the last experience in the universe to be something akin to being wrapped in a warm blanket and very slowly falling asleep, an experience of comfort (and not much discursive thought), but I really hope we'll talk about this beforehand!

Minor nitpick:

And so we have ever less energy to simulate our brains. It gets ever harder. With less energy, we are forced to slowly shut systems down.

I think that the problem is not lack of energy, but lack of energy gradients or negentropy to get any useful work out of.

I personally would like the last experience in the universe to be something akin to being wrapped in a warm blanket and very slowly falling asleep, an experience of comfort (and not much discursive thought)

Thanks! I've added this as an answer above.

I think that the problem is not lack of energy, but lack of energy gradients or negentropy to get any useful work out of.

This seems connected to the energy gradients between our (Markov?) blankets and the cold world.
