In a recent report, Toby Ord and I introduce the idea of 'existential hope': roughly, the chance of something extremely good happening. Decreasing existential risk is a popular cause area among effective altruists who care about the far future. Could increasing existential hope be another useful area to consider?

Trying to increase existential hope amounts to identifying something which would be very good for the expected future value of the world, and then trying to achieve that. This could include getting more long-term focused governance (where perhaps the benefit is coming from reduced existential risk after you reach that state), or effecting a value-shift in society so that it is normal to care about avoiding suffering (where the benefit may come from much lower chances of large amounts of future suffering).
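To make the expected-value framing a little more concrete, here is a toy sketch (my own illustration, not a formula from the report; the symbols are assumptions for the example). Write $p$ for the probability of an existential catastrophe, $q$ for the probability of an existential eucatastrophe, and $V_{\text{cat}} < V_{\text{def}} < V_{\text{eu}}$ for the value of the corresponding futures:

$$\mathbb{E}[V] \;=\; p\,V_{\text{cat}} \;+\; q\,V_{\text{eu}} \;+\; (1 - p - q)\,V_{\text{def}}.$$

On this picture, reducing existential risk means lowering $p$, and increasing existential hope means raising $q$; both raise the expected value of the future, even if we never observe which term a particular intervention actually moved.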

What other existential hopes could we aim for?

Technical note: the idea of increasing existential hope is similar to that of a trajectory change, as explained in section 1.1.2.3 of Nick Beckstead's thesis. It is distinct in that it is extremely hard to tell when a trajectory change has occurred, because we don't know what the long-term future will look like; in contrast, we can have a much better idea of how our expectations have changed.

Comments (15)



Thanks for the paper, Owen.

Existential hope sounds like an opposite to existential despair, rather than to existential risk, and could increase the already common confusion regarding that!! Of course, it's only a private paper, but since it's designed to establish terminology, it's something to think about.

When I first heard of Bostrom's phrase "existential risk", I felt it was overly philosophical because it sounded like a concept in existentialism. I agree with Owen+Toby's paper that "extinction risk" is already adequate when talking about extinction.

Words are sticky, so it may be hard to ditch "existential risk", but if we were doing it over again, I'd choose something else, like "astronomical risks" and "astronomical benefits".

Thanks Ryan, I hadn't actually spotted that issue with the term "existential hope". I don't think it's necessarily enough to sink the term, but it's worth being aware of.

English doesn't have a word for the chance of a good thing, which makes it awkward to find the right term. Earlier drafts just stuck to "existential eucatastrophe", which has the right meaning. However, it was pointed out that 'eucatastrophe' is very obscure, and most people would see the word 'catastrophe' inside it and assume it meant something bad. We wanted a term which would give something of the right impression.

Perhaps you could use the phrase 'existential reward' instead of 'existential eucatastrophe'?

That falls a bit flat for me -- neither the right connotations (as 'existential dream' more or less has) nor the right actual meaning (as 'existential eucatastrophe' has).

I like "x-dream"!

"Existential dream" is an interesting alternative. It carries some helpful connotations. However, it doesn't sound like the kind of thing you can increase or decrease, which makes it less good as the term mirroring "existential risk".

You seem to be using it closer to the sense of "existential eucatastrophe". I admit that it's more grokkable than that!

Okay, I thought x-hope/eucatastrophe were the same thing. I was thinking of "existential dream" for the latter.

For a positive parallel to "risk" that can increase or decrease, I'd use "potentiality". Potential/potentiality is generally used to refer to something good/beneficial, but I can't say that "existential potentiality" exactly rolls off the tongue!

'Existential opportunity'?

Everything I can think of sounds mercantile: 'Existential profit', 'Existential gain'.

This is the kind of nugget I visit this forum for! I have x-dreams for veganism and EA to become the norm globally. Other x-dreams I can think of are: making electricity out of nothing (or close to it); high-yield, drought-resistant crops; a technological breakthrough that will permit people to work less, or not at all; Islam going the way of Christianity in giving up violence; a strong African Union that keeps peace on the continent; a way of preventing global warming or of cooling the earth; an end to the practice of girl-killing in Asia that is responsible for huge gender imbalances; compassion replacing domination as the prevailing worldview/lifestyle choice; and an end put to anonymous shell companies and secret bank accounts that enable the corrupt.

I think all those are great. But I am more skeptical that taking work out of the equation will improve society: how will we ensure that the surplus is distributed reasonably?

My x-hopes are really a set of success criteria for the movement: a culture built around evidence, scientific reasoning, and trial and error in policy, medicine, and other important areas; a culture that prioritises which problems to solve based on suffering, human flourishing, and equality (and that includes putting brakes on trial and error in some areas, such as new technologies); and a global economic/political system that comes from, or whose creation is the genesis of, those two things, and that is immensely more effective at improving the human condition than what we have currently.

Probably the most important "good things that can happen" after FAI are:

  • Whole brain emulation. It would allow eliminating death, pain, and physical violence, not to mention ending discrimination and social stratification on the basis of appearance (although serious investment in cybersecurity would be required).

  • Full automation of the labor required to maintain a comfortable standard of living for everyone. Avoiding a Malthusian catastrophe would still require a reasonable reproduction culture (especially given immortality due to e.g. WBE).

It seems like the development of these would increase expected value massively in the medium term. I'm not sure what the effect on long term expected value would be (because we'd expect to develop these at some point anyway in the long term).

Good point. In the long run, the important thing is to reach the best attractor point / asymptotic trajectory. We need to develop much better understanding of the space of attractors ("possible ultimate fates") and the factors leading to reaching one rather than another. I'd say that the things I mentioned are definitely on the wish list of the attractor we want to select.
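
One toy way to formalise this (my framing, not the commenter's; the notation is illustrative): if long-run value is dominated by which attractor state the world eventually settles into, then roughly

$$\mathbb{E}[V_{\text{long}}] \;\approx\; \sum_i \Pr(A_i)\, V(A_i),$$

where the $A_i$ are the possible attractors ("ultimate fates"). Accelerating a development that would arrive eventually mostly shifts value earlier, so it raises medium-term expected value, but it changes long-run expected value only insofar as it shifts the probabilities $\Pr(A_i)$ of reaching one attractor rather than another.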
