Many of the biggest companies in the world are racing to build superintelligence — artificial intelligence that far exceeds the capability of the best humans across all domains. This will not merely be one more invention. The magnitude of the transformation will be beyond that of the printing press, or the steam engine, or electricity; more on a par with the evolution of Homo sapiens, or of life itself.
Yet almost no one has articulated a positive vision for what comes after superintelligence. Few people are even asking, “What if we succeed?” Even fewer have tried to answer.[1]
The speed and scale of the transition mean we can’t just muddle through. Without a positive vision, we risk defaulting to whatever emerges from market and geopolitical dynamics, with little reason to think that the result will be anywhere close to as good as it could be. We need a north star, but we have none.
This essay is the first in a series that discusses what a good north star might be. I begin by describing a concept that I find helpful in this regard:
Viatopia: an intermediate state of society that is on track for a near-best future, whatever that might look like.[2]
Viatopia is a waystation rather than a final destination; etymologically, it means “by way of this place”. We can often describe good waystations even if we have little idea what the ultimate destination should be. A teenager might have little idea what they want to do with their life, but know that a good education will keep their options open. Adventurers lost in the wilderness might not know where they should ultimately be going, but still know they should move to higher ground where they can survey the terrain. Similarly, we can identify what puts humanity in a good position to navigate towards excellent futures, even if we don't yet know exactly what those futures look like.
In the past, Toby Ord and I have promoted the related idea of the “long reflection”: a stable state of the world where we are safe from calamity, and where we reflect on and debate the nature of the good life, working out what the most flourishing society would be. Viatopia is a more general concept: the long reflection is one proposal for what viatopia would look like, but it need not be the only one.[3] [4]
I think that some sufficiently-specified conception of viatopia should act as our north star during the transition to superintelligence. In later essays I’ll discuss what viatopia, concretely, might look like; this note will just focus on explaining the concept.
We can contrast the viatopian perspective with two others. First, utopianism: that we should figure out what an ideal end-state for society is, and aim towards that. Needless to say, utopianism has a bad track record.[5] From Plato’s Republic onwards, fiction and philosophy have given us scores of alleged utopias that look quite dystopian to us now. Members of every generation have been confident they understood what a perfect society would look like, and they have been wrong in ways their descendants found obvious. We should expect our situation to be no different, such that any utopia we design today would look abhorrent to our more-enlightened descendants. We should have more humility than the utopian perspective suggests.
The second perspective, which futurist Kevin Kelly called “protopianism” and Karl Popper decades earlier called “piecemeal engineering”, is motivated by the rejection of utopianism.[6] On this alternative perspective, we shouldn’t act on any big-picture view of where society should be going. Instead, we should just identify whatever the most urgent near-term problems are, and solve such problems one by one.[7]
There is a lot to be said in favour of protopianism, but it seems insufficient as a framework to deal with the transition to superintelligence. Over the course of this transition, we will face many huge problems all at once, and we’ll need a way of prioritising among them. Should we accelerate AI, to cure disease and achieve radical abundance as fast as possible? Or should we slow down and invest in increased wisdom, security, and ability to coordinate? Protopianism alone can’t help us; or, if it does, it might encourage us to grab short-term wins at the expense of humanity’s long-term flourishing.
Viatopianism offers a distinctive third perspective. Unlike utopianism, it cautions against the idea of having some ultimate end-state in mind. Unlike protopianism, it attempts to offer a vision for where society should be going. It focuses on achieving whatever society needs to be able to steer itself towards a truly wonderful outcome.
What would a viatopia look like? To answer this question, we need to identify what makes a society well-positioned to reach excellent futures. John Rawls coined the idea of primary goods: things that rational people want whatever else they want.[8] These include health, intelligence, freedom of thought, free choice of occupation, and material wealth. We could suggest an analogous concept of societal primary goods: things that it would be beneficial for a society to have, whatever futures people in that society are aiming towards.
What might these societal primary goods be? They could include:
- Material abundance
- Scientific knowledge and technological capability
- The ability to coordinate to avoid war and other negative-sum competition
- The ability to reap gains from trade
- Very low levels of catastrophic risk
Beyond societal primary goods, we should also favour conditions that enable society to steer itself towards the best states, and away from dystopias. This could include:
- Preserving optionality, so a wide variety of futures remain possible.
- Cultivating people's ability and motivation to reflect on their values.
- Structuring collective deliberations so that better arguments and ideas win out over time.
- Designing decision-making processes that help people realize what they value as fully as possible.
- Ensuring sufficient stability that these viatopian structures cannot be easily overturned.
But this list is provisional: intended to illustrate what viatopia might look like, rather than define it.
The transition to superintelligence will be the most consequential period in human history, and it is beginning now. During this time, people will need to make some enormously high-stakes decisions, which could set the course of the future indefinitely. Aiming toward some narrow conception of an ideal society would be a mistake, but so would just trying to solve problems in an ad-hoc and piecemeal manner. Instead, I think we should make decisions that move us towards viatopia: a society that, even if it doesn't know its ultimate destination, has equipped itself with the resources, wisdom, and flexibility it needs to steer itself towards a future that’s as good as it could be.
- ^
AI company leaders have typically pointed to particular ways in which AI will be beneficial for society. Dario Amodei describes this at greatest length in Machines of Loving Grace; Sam Altman in Moore’s Law for Everything and Planning for AGI and beyond; Demis Hassabis and Elon Musk have made comments across various interviews (see e.g. here and here for Hassabis and here and here for Musk). Some of the named benefits include curing disease, improving mental health, radical abundance and prosperity, and very high-quality education.
But this is a far cry from a complete positive vision for a post-AGI future. AGI won’t result in a world that’s just like ours except we’re richer and have better health; it will transform society. Such a vision needs to grapple with the many changes that AGI would bring about; I give an overview of these challenges in Preparing for the Intelligence Explosion (co-authored with Fin Moorhouse).
There are some other limited exceptions that tackle parts of the problem. For example, Nick Bostrom’s Letter from Utopia describes just how good things could get in a post-AGI world. In Deep Utopia, Bostrom has an extended and interesting discussion of how life could be meaningful once survival, work, and progress no longer require us.
And Eric Drexler has introduced the concept of Paretopia. He powerfully makes the case that (i) AI-driven abundance means that everyone, by working together, can get vastly more of what they want and that (ii) for most people, as long as they get some share of the post-AI abundance, ensuring that such abundance occurs at all is much more important than trying to get an even larger share if it does come about.
- ^
More precisely: a viatopia is a society whose expected value is at least 50% that of a guarantee of a best feasible outcome.
A best feasible outcome is an outcome at the 99.99th percentile in terms of how well things could go, judged from today. The probabilities invoked here are epistemic probabilities: the subjective credences that a highly intelligent and well-informed observer would have.
I define an “outcome” as the whole history of a society. So, for example, one could hold the characteristically nonconsequentialist view that any future for society that is achieved via a bad process (e.g. a dictator seizes power and then implements their benevolent will) could not amount to a near-best outcome.

I intend for the concept of viatopia to be useful for those with many different moral perspectives, including non-consequentialism; in some cases that might require minor departures from the above definition. For views that reject the idea that value can be cardinal, we could define viatopia directly as a state that has a very high probability of resulting in a near-best outcome and a very low probability of resulting in an astronomically bad outcome. Some forms of non-consequentialism reject the idea of impartial value altogether; on such views, we could talk about the expected choiceworthiness of different states of society instead.
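One way to state this definition in symbols (the notation $V$, $V^*$, and $s$ is mine, not part of the definition above): letting $V$ denote the value of the whole history of society and $Q_{0.9999}$ the 99.99th-percentile quantile,

$$V^* = Q_{0.9999}(V), \qquad s \text{ is a viatopia} \iff \mathbb{E}[V \mid s] \ge 0.5 \cdot V^*,$$

where both the quantile and the expectation are taken with respect to the epistemic probabilities just described, judged from today.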
- ^
And, in particular, given the sheer scale of cognitive abundance that superintelligence could unlock, the reflective process might not need to last very long in calendar time. So I think it’s unwise to bake in the idea that the viatopian state needs to last a long time.
- ^
Another account that you could interpret as a proposal for viatopia is Robert Nozick’s idea of a “meta-utopia”, where many different communities pursue different utopian visions, which people are free to leave as they wish, and where no one can impose their utopian vision on others (Anarchy, State, and Utopia, p. 312). Scott Alexander’s concept of “Archipelago” is similar, as is my concept of a “morally exploratory world” in What We Owe the Future. In my account, at least, the core idea is that individual free choice would lead to the best societies winning out over time.
- ^
And it has a bad track record even if we put aside the atrocities that have been committed in the name of utopian ideals, and utopianism’s tendency towards totalitarianism.
- ^
It’s also related to the ideal of “liberal neutrality” in political philosophy: that the state should have no view on the moral good.
- ^
This is like the idea of “hill-climbing” algorithms: take whatever small actions will improve things from where you currently are, rather than trying to work out which hill in the landscape is highest and walking straight towards it, even if that means going downhill initially.
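As an illustration, here is a minimal sketch of greedy hill-climbing (my own toy example, not from any particular library; `score` and `neighbors` are placeholders for whatever is being optimised):

```python
import random

def hill_climb(state, score, neighbors, steps=1000):
    """Greedy local search: accept any neighbouring state that scores
    better than the current one. The algorithm never plans a route to
    the highest peak; it only takes locally improving steps, so it can
    get stuck on a merely local optimum."""
    for _ in range(steps):
        candidate = random.choice(neighbors(state))
        if score(candidate) > score(state):
            state = candidate  # take the small improving step
    return state
```

The analogy with protopianism lies in the acceptance rule: each step improves on the status quo, but nothing in the procedure guarantees that the highest hill is ever reached.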
- ^
John Rawls, A Theory of Justice (Harvard University Press, 1971).
Also, I notice there are no references to anything about the concentration of power or wealth here. Isn’t that probably something we want to avoid if we want to reach a good destination, at least all else being equal?
Even if we are bad at answering the "what would utopia look like" question, what's the reason to think we'd be any better answering the "what would viatopia look like" question? If we are just as bad or worse at answering the second question, it's either useless or actively counterproductive to switch from utopian to viatopian planning.
It seems like the wrong framing to talk about a "positive vision" for the transition to superintelligence, if that transition involves immense risks and is generally a bad idea. If you think the transition could be “on a par with the evolution of Homo sapiens, or of life itself” but compressed into years, then that surely involves immense risks (of very diverse kinds!).
From what I've heard you say elsewhere, I think you basically agree with this. But then, surely you must agree that the priority is to delay this process until we can make sure it's safe and well-controlled. And if you are going to talk about positive visions, then I would say it's really important that such visions come with an explicit disclaimer that they are talking about a future we should be actively trying to avoid. I'm afraid that otherwise these articles might give people the wrong idea.
Edit: to make my point clearer, I think a good analogy would be to think of yourself right before the development of nuclear power (including the nuclear bomb). Suppose other people are already talking about the risks, and it seems likely to happen anyway, so maybe it’s worth thinking about how we can make a good future with nuclear. OK. But given the risks (and that many people still aren’t aware of them), talking about a good nuclear future without flagging that the best course of action would be to delay developing this technology until we’re sure we can avoid catastrophe seems like a potential infohazard.
Hey Will, very excited to see you posting more on viatopia; I couldn’t agree more that some conception of viatopia might be an ideal north star for navigating the intelligence explosion.
As crazy as this seems, just last night I wrote a draft of a piece on what I have been calling primary and secondary cruxes/crucial considerations (in previous work I also used a perhaps even more closely related concept of “robust viatopia proxy targets”), which seems closely related to your “societal version of Rawls’ primary goods”, though I had not previously been aware of this work by Rawls. I continue to be quite literally shocked at the convergence of our research, in this case profoundly. (If you happen to be as incredulous as I am, I do by chance have my work on this time-stamped through a few separate modalities, which I’d be happy to share.)
I believe figuring out primary goods and primary cruxes should be a key priority of macrostrategy research: we don’t need to figure out everything, we just need to get the right processes and intermediate conditions in place to move us progressively in the right direction.
I think what is ultimately most important is that we reach a state of what I have been calling “deep reflection”: a state in which we have both comprehensively reflected on how to achieve a high-value future and are positioned such that society is likely to act on that knowledge. This is not quite the same as viatopia, as it’s more of an end state that would occur right before we actualize our potential; hence I think it can act as another useful handle for the kind of thing we should hope viatopia is ultimately moving us toward.
I’m really looking forward to seeing more essays in your series!