Imagine two valleys. One valley leads to billions and billions of years of extreme joy, while the other valley leads to billions and billions of years of extreme suffering. A superintelligence comes to you and tells you that there is a 98% chance that you will end up in the valley of joy for billions of years and a 2% chance that you will end up in the valley of suffering for billions of years. It also gives you the option of choosing non-existence. Would you take the gamble, or would you choose non-existence?

The argument presented in this post occurred to me several months ago. Since then, I have spent time thinking about it and discussing it with AI models, and I have not found a satisfactory answer to it given the real-world situation. The argument can be formulated for things other than advanced AI, but given the rapid progress in the AI field, and since the argument was originally formulated in the context of AI, I will present it in that context.

Now apply the valley reasoning above to AGI/ASI. AGI could be here in about 15 months, with ASI not long after. Advanced AI(s) could prolong human life to billions and billions of years, take over the world, and remake the world in their image - whatever that might be. People give various estimates of how likely it is that AGI/ASI will go wrong, but one thing many of them keep saying is that the worst-case scenario is that it will kill us all. That is not the worst-case scenario. The worst-case scenario is that it will cause extreme suffering or torture us for billions or trillions of years.

Let's assume better odds than 2%; say the chance of hell is 0.5%. Would you be willing to take the gamble between heaven and hell even when the odds of hell are 0.5%? And if not, at what odds would you be willing to take the gamble instead of choosing non-existence?

If you would say that you are willing to take the gamble at a 0.5% chance of a living hell, consider that 0.5% is 1 in 200: would you be willing to spend 1 hour in a real torture chamber now for every 199 hours that you are in a positive mental state and not there? An advanced ASI could not only create suffering based on the levels of pain humans can currently experience, which are already horrible, but could increase the human capacity for pain to unimaginable levels (for whatever reason - misalignment, indifference, evilness, sadism, counterintuitive ethical theories, whatever it might be).

If you do not assume a punitive afterlife for choosing non-existence, and if choosing non-existence is an option, what odds would you need in order to take the gamble between an almost literal heaven and hell instead of choosing non-existence? I have asked myself this question, and when it comes to extreme suffering lasting billions or trillions of years, my answer is that the odds of hell would have to be very, very close to zero. What are your thoughts on this? If you think this argument is not valid, can you show me where the flaw is?
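To make that threshold concrete, here is a minimal sketch of the underlying arithmetic, assuming a simple weighted expected-value model in which non-existence has utility 0 and hell is weighted K times as bad as heaven is good. The parameter K and the function break_even_probability are illustrative assumptions of mine, not something established by the argument itself; the point is only that as the weight on extreme suffering grows, the acceptable odds of hell shrink toward zero.

```python
def break_even_probability(k: float) -> float:
    """Probability of hell at which the gamble's expected value equals
    non-existence (utility 0), when hell is weighted k times as bad as
    heaven is good.

    Expected value of the gamble: (1 - p) * U - p * k * U.
    Setting this to zero and solving gives p = 1 / (1 + k).
    """
    return 1.0 / (1.0 + k)

# Symmetric stakes: you would accept up to a 50% chance of hell.
print(break_even_probability(1))    # 0.5
# Hell weighted 199x worse: break-even at exactly 0.5%
# (the 1-hour-per-199-hours trade described above).
print(break_even_probability(199))  # 0.005
# If extreme suffering is weighted astronomically worse,
# the acceptable odds go to "very, very close to zero".
print(break_even_probability(1e9))  # ~1e-9
```

On this toy model, refusing the gamble at any non-negligible probability of hell is equivalent to assigning extreme suffering an enormous weight K relative to extreme joy.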
