Human history can be viewed as the continuous evolution of the relative power of different value systems, and the interactions between agents that embody them. This process may reach a steady state (also sometimes called a “lock-in”) at some point, representing the end of history.
Assuming that a steady state occurs eventually – which is unclear – it can take one of the following forms1:
1. Singleton: A single actor holds all power and shapes the universe according to its values. (Note that these values can still be a compromise over many value systems, e.g. in a world government that implements a negotiated agreement.)
2. Multipolar outcome: Power is distributed among several stable centres of power with different values. We can further distinguish two cases:
   - A fully cooperative multipolar outcome where all actors agree to optimise a compromise value system to reap gains from trade. (This is equivalent to a singleton with compromise values.)
   - A not fully cooperative outcome that entails cooperation problems or even open conflict between actors.
3. Extinction: No actors are able to shape the universe according to their values, e.g. because Earth-originating intelligence goes extinct.
In cases 1 and 2, we can further ask whether (and to what degree) the powerful actors face natural constraints in their attempt to shape the universe. For instance, large-scale space colonisation is not feasible with today’s level of technology, regardless of the distribution of power and values. However, the technological capacity of human civilisation will likely expand further in the future – just as there has been fairly consistent (accelerating) technological progress and economic growth over the course of human history.
From the perspective of long-termist consequentialism, and s-risk reduction in particular, there are precautionary reasons to assume:
- that a steady state other than extinction will eventually be reached. Influencing the resulting steady state is most relevant, as it will persist over cosmic timescales – billions of years – and will therefore affect an astronomical number of sentient beings.2
- that the steady state is technologically mature: the level of technological capacity available to actors is close to the theoretical maximum. This is because powerful technology is a risk factor for s-risks, as it allows actors to optimise the universe to a much greater degree, for better or for worse.
The question is: how can we influence the resulting steady state? It is hard to accurately predict the future trajectory of human civilisation over long timespans. And even if one could predict the future, it may still be hard to influence it. For instance, an effective altruist in the year 1700 would have struggled to find specific actions to alter the outcome of the industrial revolution, even if armed with the (highly unusual) knowledge of what was to come.
One possible answer is to prevent human extinction, which is a lever to change the resulting steady state from extinction to some distribution over non-extinction steady states. Whether this is a priority depends on one’s view on population ethics, since an empty universe, while not containing anything of value, would also not contain any suffering.3 (For two different perspectives on this, see here and here.)
I’d like to emphasise that even if one were to believe that extinction is preferable to what happens in the future, it would be clearly wrong to try to increase existential risk. Any such action would be extremely adversarial toward other value systems and would likely lead to increased polarisation and conflict – which is another risk factor for s-risks. There are many good reasons to be nice to other value systems, and even pure consequentialists should adopt some quasi-deontological rules, such as not using or advocating violence in any form.
Still, from my perspective – having suffering-focused values – it is better to try to improve the resulting steady state (if it’s not extinction) than to attempt to shift probability mass between extinction and non-extinction steady states.
—
This brings us back to the question of when such a steady state is reached. It is possible that this happens soon, in which case we can have a direct and lasting impact on what the resulting steady state will be. (A concrete example that comes up frequently is that artificial general intelligence will be built this century, soon achieve a decisive strategic advantage, and form a singleton.)
However, I worry about a potential bias towards overestimating our impact: this requires the belief that our generation is in a unique position to have a disproportionate influence, and such beliefs should, at least a priori, be penalised. It seems more likely that the end of history is not near. If I had to guess, I’d say it is less than 50% likely that a steady state will be attained within 1000 years, assuming no extinction.
This could lead to the frustrating conclusion that our influence over what happens in the long-term future may be limited, since the values of future actors will mutate in many ways before a steady state is reached. While our actions have ripple effects on the future, these effects are hard (if not impossible) to predict, thwarting attempts to deliberately steer the long-term future in a particular direction. (Robin Hanson has also argued that influencing the future is hard and that value drift seems unavoidable at this point.)
But since our impact as effective altruists would be much smaller in that case, there are precautionary reasons to assume that we do have a reasonable chance of influencing the resulting steady state:
- Our impact is larger if a steady state is very close after all. (But I don’t see a very plausible scenario for this – in particular, I’m sceptical about the specific claim that advanced artificial intelligence is around the corner.)
- It is possible that the general pace of change will speed up drastically in the near future (e.g. later this century) due to technological advances, such as whole-brain emulation or powerful biological enhancement. In this case, the equivalent of thousands of years of history at the current pace may happen within a relatively short physical timespan, allowing contemporary humans to have a larger influence.
- We can hope that there are ways to reliably influence the result even if a steady state is far away. For instance, if we can effect changes in values that are fairly sticky over longer timescales, then moral advocacy is a promising lever.
This may seem like a Pascalian wager, and there are good reasons not to put too much weight on such arguments. But while some form of the “frustrating conclusion” is fairly likely in my opinion, the listed alternatives are not too far-fetched either, so I think this is an acceptable form of precautionary reasoning.