March 17 - 23 will be Existential Choices Debate Week on the EA Forum. We’ll be discussing the debate statement "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]"
Add this and future events to your calendar

Like the last two debate weeks (1, 2), during the event you’ll find a banner on the front page with an axis going from “strong disagree” to “strong agree”, where forum users can place their avatar and attach a comment explaining their position.
We’ll also be hosting a “symposium” with Will MacAskill — a chance to join in a live debate with experts on the issue. Provisionally, this will happen on the Monday of debate week, in the comments of a symposium post[4].
If you’d like to take part, you can start thinking about writing a post to be published during the week. Posts considered a part of the event don’t have to answer the question directly — the rule of thumb is that if a post could plausibly help someone decide how to vote, it’s relevant for the week.
As always, message me if you have any questions, or would benefit from writing support.
Why this topic?
In recent years, we’ve somewhat moved away from the longtermism label (I at least see it pop up far less often on the Forum). Partially this is for reasons well phrased by Scott Alexander in his post "Long-Termism" [sic][5] vs. "Existential Risk".
A movement whose priorities can, descriptively, be summed up as extinction risk reduction may as well just say that, rather than appealing to a philosophy which concludes that extinction risk reduction is important. However, if we don’t discuss longtermism, we also won’t discuss a range of potential projects not captured by common sense morality or extinction reduction. These are projects that aim for trajectory change, or a better future.
Now, terms like “post-AGI-governance” are starting to pop up… attempts to seed projects which hope to improve the long-term future in ways other than merely (haha) ensuring that it exists.
This seems like a point when it is important to ask the question — should we be doing this now? Are there promising projects to be sought out, researched, or directly worked on, which are more important than extinction reduction? Is this area of research and action a tangent, or a necessity?
Clarifications
Extinction is far narrower than “existential risk”: The extinction of “earth originating intelligent life” means a future devoid of any value[6] which could have come from those lives. But “existential risk” is a broader term which includes irreversibly bad futures — full of suffering, led by an immortal dictator, trapped in pre-industrial scarcity forever. The future’s value could be below 0. In this debate, the ‘value of the future’ side, not the ‘extinction’ side, incorporates these risks which don’t route through extinction.
Edit: There has been some discussion of the haziness of the extinction definition in the comments. The solution to this is difficult - "extinction" seems like it should be easier to define than "existential risk", but it has its own issues. One odd scenario is a future where morally valuable, conscious AI systems are living good lives, but at some point in the past, they killed off all the humans. Under the definition we are using in this debate, this would not count as extinction (even though humans are gone, the value of the future is still secured by the descendant AI systems). For the purpose of our week-long debate, I'll treat this as a feature, not a bug. If you think this would be a bad and likely outcome, then that might be a reason to vote disagree on the statement.
We are treating extinction reduction and increasing the value of the future as mutually exclusive: In order to make this a debate of (at least) two sides, we are separating interventions which reduce the risk of extinction from interventions which increase the value of the future via means other than extinction reduction. Otherwise, a substantive position in the argument, that extinction risk reduction is the best way to increase the value of the future, would be recorded as a neutral vote rather than a strong agree.
Tractability is a part of this conversation. Including “on the margin” in the debate statement means that we can’t avoid thinking about tractability — i.e., where extra effort would actually do the most good, today. This makes the debate harder, but more action-relevant. Remember though — even though the core debate question relies on claims about tractability, you can still write about anything that could meaningfully influence a vote.
Please do ask for clarification publicly below - I'll add to this section if multiple people are confused about something.
How to take part:
Vote and comment
The simplest way to contribute to the debate week (though you can make it as complex as you like) is to vote on the debate week banner and comment on your vote, describing your position. This comment will be attached to your icon on the banner, but it’ll also be visible on the debate week discussion thread, which will look like this.
Post
Everyone is invited to write posts for debate week! All you need to do is tag them with the “Existential Choices Debate Week” tag. Posts don’t have to end with a clear position on the debate statement to be useful. They can also:
- Summarise other arguments and classify them.
- Bring up considerations which might influence someone’s position on the statement.
- Crosspost content from elsewhere which contributes to the debate.
Message me if you have questions or would like feedback (anything from “is this post suitable?” to “does this post make my point clearly enough?”)
Turn up for the Symposium
We’ll hold the “Symposium” on the Monday of debate week. It’ll (probably)[7] be a post, like an AMA, where a conversation happens in the comments between Will MacAskill and other experts. If you log onto the Forum at that time, you can take part in the debate directly, as a commenter.
We reserve the right to announce different moderation rules for this conversation. For example, we’ll consider hiding comments that aren’t on topic to make sure the discussion stays valuable.
Further reading:
A helpful term, “Maxipok”, comes from this paper by Nick Bostrom. In it, he writes:
- “It may be useful to adopt the following rule of thumb for moral action; we can call it Maxipok: Maximize the probability of an okay outcome [bolding mine], where an “okay outcome” is any outcome that avoids existential disaster. At best, this is a rule of thumb, a prima facie suggestion, rather than a principle of absolute validity, since there clearly are other moral objectives than preventing terminal global disaster.”
Also:
- Toby Ord’s work modelling future trajectories, and providing a taxonomy of the different ways we might influence, or think we are influencing, the long-term future.
- A perhaps relevant reading list on the “long reflection” — a characterisation of a time in the future when questions of value are solved and consolidated so that the best decisions can be made.
- This post, which argues for the value and persistence of locking the future into a regime of “good reflective governance”, a state which, ideally, would lead to the best futures.
- Brian Tomasik’s essay, warning of astronomical future suffering risks, also known as S-risks.
- "The option value argument doesn't work when it's most needed" - a post which argues, considering s-risks, that at least some effort should be devoted to better futures, not just extinction-avoidance. I take this post as representing a disagree vote on this debate axis.
- A post which represents an even stronger disagree vote on our debate axis - arguing that suffering risks and the potential for changing the trajectory of the future mean that extinction-risk reduction is not the highest expected value path to longtermist impact.
If there are other posts you think more people should read, please comment them below. I might highlight them during the debate week, or before.
- ^
‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
- ^
‘our’ = earth-originating intelligent life (i.e. we aren’t just talking about humans because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing)
- ^
Through means other than extinction risk reduction.
- ^
This may change based on the preferences of the participants. More details will come soon.
- ^
Sorry - I’m reading a book right now (The Power Broker) with some really snarkily placed [sic]s and I couldn’t help it. Longtermism is the philosophy, long-termism is the vibe of Long Now, The Long View, the Welsh Future Generations Commission, etc…
- ^
Or very close to zero when compared to other future trajectories. For example, worlds where only a small population of intelligent life exists on earth for a relatively short time are often treated as extinction scenarios when compared to worlds where humans or their descendants occupy the galaxy.
- ^
This depends on the preferences of the participants, which are TBC.
How about 'On the margin, work on reducing the chance of our extinction is the work that most increases the value of the future'?
As I see it, the main issue with the framing in this post is that the work to reduce the chances of extinction might be the exact same work as the work to increase EV conditional on survival. In particular, preventing AI takeover might be the most valuable work for both. In which case the question would be asking to compare the overall marginal value of those takeover-prevention actions with the overall marginal value of those same actions.
(At first glance it's an interesting coincidence for the same actions to help the most with both, but on reflection it's not that unusual for these to align. Being in a serious car crash is really bad, both because you might die and because it could make your life much worse if you survive. Similarly with serious illness. Or, for nations/cities/tribes throughout history, losing a war where you're conquered could lead to the conquerors killing you or doing other bad things to you. Avoiding something bad that might be fatal can be very valuable both for avoiding death and for the value conditional on survival.)