This week, we are highlighting Forethought's Better Futures series. To make the future go better, we can either work to avoid near-term catastrophes like human extinction or improve the futures where we survive. This series from Forethought explores that second option.
Fin Moorhouse (@finm), who authored two chapters in the series (Convergence and Compromise, and No Easy Eutopia) along with @William_MacAskill, has agreed to answer a few of your questions.
You can read (and comment) on the full series on the Forum. In order, the chapters are:
* Introducing Better Futures
* No Easy Eutopia
* Convergence and Compromise
* Persistent Path-Dependence
* How to Make the Future Better
* Supplement: The Basic Case for Better Futures
Leave your questions and comments below. Note that Fin isn't committing to answer every question, and if you see someone else's question you can answer, you're free to.
Better Futures
To make the future go better, we can either work to avoid near-term catastrophes like human extinction or improve the futures where we survive. This series from Forethought explores that second option. The essays are designed to be read in order, beginning with "Introducing Better Futures".
This week, Fin Moorhouse, one of the authors of these essays, will be available to answer your questions in the discussion thread.
This is a crosspost of the full text of Introducing Better Futures from Forethought's website, made for the EA Forum's Better Futures Highlight Week. There's more discussion at the original summary post of this article here.
----------------------------------------
1. The basic case
Suppose we want the future to go better. What should we do?
One prevailing approach is to try to avoid roughly zero-value futures: reducing the risks of human extinction or of misaligned AI takeover.
This essay series will explore an alternative point of view: making good futures even better. On this view, it’s not enough to avoid near-term catastrophe, because the future could still fall far short of what’s possible. From this perspective, a near-term priority — or maybe even the priority — is to help achieve a truly great future.
That is, we can make the future go better in one of two ways:
1. Surviving: Making sure humanity avoids near-term catastrophes (like extinction or permanent disempowerment).[1]
2. Flourishing: Improving the quality of the future we get if we avoid such catastrophes.
This essay series will argue that work on Flourishing is in the same ballpark of priority as work on Surviving. The basic case for this appeals to the scale, neglectedness and tractability of the two problems, where I think that Flourishing has greater scale and neglectedness, but probably lower tractability. This section informally states the argument; the supplement (“The Basic Case for Better Futures”) makes the case with more depth and precision.
Scale
First, scale. As long as we're closer to the ceiling on Surviving than we are on Flourishing — if there is more room for improvement on the latter — then Flourishing has greater scale.
To illustrate, suppose you think that our chances of survival this century are reasonably high (greater than 80%) but that, if we survive, we should expect a future that falls far short of how good it could be (less than 10% as good as the best feasible future).
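To see why numbers like these imply greater scale for Flourishing, here is a minimal numeric sketch using the SF model defined in the supplement (EV(future) = S × F). The 80% and 10% figures are just the illustrative ones above, not estimates:

```python
# Illustrative figures from the paragraph above (not forecasts):
S = 0.8   # P(survival this century)
F = 0.1   # expected value of the future given survival (best feasible = 1)

ev_future = S * F                    # 0.08 under the SF model

# Most we could gain by perfecting Survival, holding Flourishing fixed:
gain_from_surviving = (1 - S) * F    # 0.02

# Most we could gain by perfecting Flourishing, holding Survival fixed:
gain_from_flourishing = S * (1 - F)  # 0.72

print(ev_future, gain_from_surviving, gain_from_flourishing)
```

On these illustrative numbers, the headroom on Flourishing is 36 times the headroom on Surviving, which is the sense in which Flourishing has greater scale.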
This is a crosspost of the full text of No Easy Eutopia from Forethought's website, made for the EA Forum's Better Futures Highlight Week. There's more discussion at the original summary post of this article here.
----------------------------------------
1. Introduction
The basic argument for the "better futures" perspective relied on the idea that we are closer to the ceiling on Surviving than we are on Flourishing. If, however, we are very likely to get to a near-best future given survival, then there's more to gain from ensuring we survive, and there's less potential upside from improving those futures where we do survive.
Surviving represents the probability of avoiding a near-zero value future this century (an "existential catastrophe"), while Flourishing represents the expected value of the future conditional on Surviving.
We could be close to the ceiling of Flourishing for a couple of reasons. First, eutopian futures could present a big target: that is, society would end up reaching a near-best outcome across a wide variety of possible futures, even without deliberately and successfully honing in on a very specific conception of an extremely good future. We call this the easy eutopia view.
Second, even if the target is narrow, society might nonetheless hone in on that target — maybe because, first, society as a whole accurately converges onto the right moral view and is motivated to act on it, or, second, some people have the right view and compromise between them and the rest of society is sufficient to get us the rest of the way.[1]
As an analogy, we could think of reaching a near-best future as an expedition to sail to an uninhabited island. The expedition is more likely to reach the island to the extent that:
1. The island is bigger, more visible, and closer to the point of departure;
2. The ship's navigation systems work well, and are aimed toward the island;
3. The ship's crew can send out smaller reconnaissance boats, and not everyone onboard the ship needs to reach the island for the expedition to succeed.
This is a crosspost of the full text of Convergence and Compromise from Forethought's website, made for the EA Forum's Better Futures Highlight Week. There's more discussion at the original summary post of this article here.
----------------------------------------
1. Introduction
The previous essay argued for "no easy eutopia": that, without serious, coordinated efforts to promote the overall best outcomes, only a narrow range of likely futures captures most achievable value. A naive inference from no easy eutopia would be that mostly great futures are therefore very unlikely, and that the expected value of the future is barely above zero.
That inference would be mistaken. Very few ways of shaping metal amount to a heavier-than-air flying machine, but powered flight is ubiquitous, because human designers honed in on that narrow target. Similarly, among all the possible genome sequences of a certain size, only a tiny fraction codes for organisms with functional wings. But flight evolved in animals more than once, because of natural selection. Likewise, people in the future might hone in on a mostly-great future, even if that's a narrow target.
In the last essay, we considered an analogy where trying to reach a mostly-great future is like an expedition to sail to an uninhabited island. We noted the expedition is more likely to reach the island to the extent that:
1. The island is bigger, more visible, and closer to the point of departure;
2. The ship's navigation systems work well, and are aimed toward the island;
3. The ship's crew can send out smaller reconnaissance boats, and not everyone onboard the ship needs to reach the island for the expedition to succeed.
The previous essay considered (1), and argued that the island is small and far away. This essay will consider ideas (2) and (3): whether future humanity will deliberately and successfully hone in on a mostly-great future. Mapping onto scenarios (2) and (3), we consider two ways in which that might happen:
* First, convergence: society as a whole accurately converges on the right moral view and is motivated to act on it.
* Second, compromise: some people have the right view, and compromise between them and the rest of society is sufficient to get us the rest of the way.
This is a crosspost of the full text of Persistent Path-Dependence from Forethought's website, made for the EA Forum's Better Futures Highlight Week. There's more discussion at the original summary post of this article here.
----------------------------------------
1. Introduction
One of the most common objections to working on better futures is that, over sufficiently long time horizons, the effects of our actions will 'wash out'.[1] This is often combined with the view that extinction is a special case, where the impacts of our actions really could persist for an extremely long time. Taken together, these positions imply that it's much more important, from a longtermist perspective, to work on reducing extinction risk than to work towards better futures. The future we'll get given survival might only be a fraction as good as it could be, but we might just be unable to predictably improve on the future we get. So we should focus on Surviving rather than Flourishing.
In this essay, I'll argue against this view. There are a number of events that are fairly likely to occur within our lifetimes that would result in extremely persistent path-dependent effects of predictable expected value. These include the creation of AGI-enforced institutions, a global concentration of power, the widespread settlement of space, the first immortal beings, the widespread design of new beings, and the ability to self-modify in significant and lasting ways.
I'm not confident that such events will occur, but in my view they're likely enough to make work on better futures high in expected value from a long-term perspective. To be more precise, my view is that, in expectation, the variance in the value of the future will fall by about a third this century, with the majority of that reduction coming from things other than the risk of human extinction or disempowerment by AI.
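To unpack what a claim like this means, here is a toy Monte Carlo sketch; all parameters are assumptions for illustration, not the essay's model. If X is an event resolved this century and V is the long-run value of the future, the share of variance "settled" once X is known is Var(E[V | X]) / Var(V):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a binary event X resolved this century (e.g. whether some
# lock-in occurs) shifts the long-run value V, and later developments
# add the remaining noise. All numbers here are made up for illustration.
n = 1_000_000
x = rng.binomial(1, 0.5, n)                # event happens or not
cond_mean = np.where(x == 1, 0.2, 0.6)     # E[V | X]
v = cond_mean + rng.normal(0.0, 0.28, n)   # V = E[V | X] + later noise

# By the law of total variance, Var(V) = Var(E[V|X]) + E[Var(V|X)],
# so this ratio is the share of variance resolved once X is known:
share_settled = cond_mean.var() / v.var()
print(f"share of variance settled this century: {share_settled:.2f}")  # ~0.34
```

In this toy setup, roughly a third of the variance in V resolves when X does, which is the kind of quantity the claim above is about.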
In section 2 of this essay, I'll explain why the skeptical argument I'm considering is more complicated than it first appears.
This is a crosspost of the full text of How to Make the Future Better from Forethought's website, made for the EA Forum's Better Futures Highlight Week. There's more discussion at the original summary post of this article here.
----------------------------------------
1. Introduction
In the last essay, we saw reasons why, at least in principle, we can take actions that have predictably path-dependent effects on the long-run future. But what, concretely, can we do to have a positive long-term impact? Ultimately, the case for better futures work stands or falls with how compelling the concrete actions one can take are. So this essay tries to give an overview of what you could do in order to make the future go better, given survival.[1]
I'll caveat that these are all just potential actions, at this stage. They are briefly described, they aren't deeply vetted, and I expect that many of the ideas I list will turn out to be misguided or even net-negative upon further investigation. The point of this essay is to give ideas and show proof of concept — that there's lots to do from a better futures perspective, even if we haven't yet worked out the ideas in detail, don't know whether all of them are tractable, and don't know which actions are highest-value. In many cases, the most important next step is further research. The ideas I list are also presented merely from the better futures perspective: some might be in tension with existential risk reduction, whereas others are actively complementary; some might be good from a short-term perspective, whereas others might not. When deciding what to do, we should consider all the effects of our actions.
In section 2 of this essay, I discuss ways in which we can keep our options open, by delaying events that risk forcing civilisation into one trajectory or another. These include:
* Preventing post-AGI autocracy
* Delaying decisions around space governance
* Making new global governance arrangements explicitly temporary
* Generally trying to
This is a crosspost of the full text of Supplement: The Basic Case for Better Futures from Forethought's website, made for the EA Forum's Better Futures Highlight Week.
----------------------------------------
1. Introduction
This report introduces a simplified model for evaluating actions aimed at producing long-term good outcomes: the "SF model", where the expected value of the future can be approximated by the product of two variables, Surviving (S) and Flourishing (F). Surviving represents the probability of avoiding a near-total loss of value this century (an "existential catastrophe"), while Flourishing represents the expected value of the future conditional on our survival. Using this model and the "scale, neglectedness, tractability" framework, we argue that interventions aimed at improving Flourishing are of comparable priority to those focused on Surviving.
The SF model
We'll define value as the difference that Earth-originating intelligent life makes to the value of the universe.
Insofar as we're aiming to do as much good as possible, we want to maximise expected value. We can break the expected value of an action into two components:[1]
EV = EV(near-term) + EV(future)
We'll let "near-term" refer to "between now and 2100". If we accept longtermism,[2] we accept that the best action we can take must be near-best with respect to the latter component.
We can further decompose EV(future) as follows:
EV(future) = P(survival this century) * EV(future | survival this century) + P(not-survival this century) * EV(future | not-survival this century)
We'll come back to the definition of "survival this century", but for now we'll say that: (i) by stipulation, outcomes involving the total extinction of Earth-originating life are of 0 value; (ii) the best feasible long-run outcomes are of value 1;[3] and (iii) "survival this century" occurs if nothing has happened by 2100 to lock us into a 0-value future. That is, EV(future | not-survival this century) = 0. So our expression simplifies to EV(future) = P(survival this century) * EV(future | survival this century), which we write as S * F.
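As a quick check on this algebra, here is a minimal sketch of the decomposition; the 0.8 and 0.1 passed in below are arbitrary illustrative inputs, not estimates:

```python
def ev_future(s: float, f: float, ev_not_survival: float = 0.0) -> float:
    """EV(future) = P(survival) * EV(future | survival)
                  + P(not-survival) * EV(future | not-survival)."""
    return s * f + (1 - s) * ev_not_survival

# With EV(future | not-survival) stipulated to be 0, the decomposition
# collapses to the SF model's product, EV(future) = S * F:
assert ev_future(0.8, 0.1) == 0.8 * 0.1
```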
