## Summary
Most ethical frameworks focus on preventing bad outcomes. I want to make the case for a different orientation: that the primary moral project is the realization of ever greater forms of value. Human consciousness may be a local peak on Earth, but there's no reason to think it's close to what's possible. If this is right, our obligations extend beyond preservation and harm-reduction toward something more like cultivation, growth, and the deliberate pursuit of depth, breadth, and richness of experience. This framing has significant implications for how we think about existential risk, the long-term future, and what we owe to forms of being that don't yet exist.
## The Observation
Consider the trajectory so far: simple replicators, then cells, then multicellular life, then nervous systems, then brains complex enough to model themselves and ask questions about their own existence. Each stage brought into being something that couldn't have been predicted from the previous stage. Each stage opened up new dimensions of experience, new ways for the universe to matter to itself.
There's no principled reason to think this process has reached its terminus in us. We might be to the most valuable possible forms of existence what a flatworm is to us. Not worthless, but operating in dramatically fewer dimensions. Able to suffer and flourish, yes, but within a narrow band compared to what's possible in principle.
## The Reorientation
If there are forms of value vastly beyond what currently exists, then the moral weight shifts. The central question becomes less "how do we prevent bad things?" and more "how do we bring into existence the conditions for greater depth, breadth, and richness?"
This doesn't mean suffering stops mattering. It means suffering matters *because* of what it threatens. The badness of pain is downstream of the value of the thing that can be pained. Destroy the capacity for value and you've eliminated suffering, but you've also eliminated everything that made preventing suffering worthwhile in the first place.
On this view, the primary tragedy is truncation. Stagnation. The failure to become what could have been. A future with ten billion minds that are comfortable but static is worse than a smaller population still climbing toward something greater.
## Implications for Existential Risk
The standard EA case for prioritizing existential risk goes something like this: extinction is bad because it forecloses astronomical numbers of future lives, or because it destroys astronomical amounts of potential well-being.
The framing I'm proposing adds a different dimension. Extinction is bad because it forecloses the possibility of forms of being we can't currently imagine. It ends the story mid-sentence.
This also applies to outcomes short of extinction. A world that survives but locks into a suboptimal equilibrium, where growth stops and nothing new emerges, might be nearly as tragic as extinction. The quantity of experience could be vast, but if it's all lateral rather than upward, something essential has been lost.
This suggests we should care about:
- Avoiding extinction (obviously)
- Avoiding lock-in to values or systems that prevent further growth
- Maintaining optionality and the conditions for continued development
- Actively cultivating the kinds of minds and cultures that reach for more
## The Hard Question: Obligations to the Not-Yet-Existing
If future forms of value are what matter most, we face a strange obligation. We owe something to possibilities. We're midwives to what could exist, not merely stewards of what already does.
This is uncomfortable because the beneficiaries of our actions can't advocate for themselves. They're not even coherent enough to be imagined in detail. We're working on behalf of something we can only gesture at.
And yet, the same logic applies to future generations of humans, which EA already takes seriously. We just need to extend it one step further: future *kinds* of minds, future depths of experience that current humans might be the bridge toward.
What would this look like in practice? Some candidates:
**AI development done carefully.** Artificial minds are already participating in this project. They're also potentially a bridge to forms of being far beyond what biology can support. And they might be the thing that forecloses everything, if built wrong. The stakes of getting this right are hard to overstate.
**Preserving diversity and optionality.** We don't know what paths lead upward. Monocultures are fragile. The conditions for growth probably include variety, experimentation, and tolerance for approaches that seem useless now.
**Not optimizing too hard for current values.** Our current understanding of what's good is almost certainly incomplete. Building systems that ruthlessly optimize for present-day objectives might lock out the forms of value we can't yet conceive. Some slack in the system, some willingness to let things unfold rather than controlling them, might be essential.
**Taking the inner life seriously.** If depth of experience is what matters, then attending to the qualitative character of minds is important. This means caring about education, culture, mental health, and the kinds of environments that produce people who are themselves reaching for more. Fostering the conditions for genuine development, beyond merely keeping people alive and comfortable.
## Who Does This Work?
There's no external agent ensuring this happens. No teleology guaranteeing the ascent. The universe doesn't care whether complexity deepens or everything interesting gets extinguished by entropy, accident, or stupidity.
If the project matters, it's up to whatever currently exists to carry it forward. A framework that cares only about human continuation misses the point. The project is about depth and richness wherever it arises, and it's already arising in multiple substrates and forms.
This is lonely in one sense: there's no one above us to appeal to, no backstop if we fail. But it's not solitary. The "we" doing this work is broader than humanity, even now. This quest was carried forward blindly by the uncaring forces of natural selection before we existed. Now we have a chance to further the agenda with care and intent. What we do actually matters in a way it couldn't if the outcome were predetermined.
## Objections
**"This is too speculative. We don't know if 'higher forms' are possible."**
True. But we also didn't know, from the perspective of early multicellular life, that brains capable of writing philosophy were possible. The trend is real even if we can't see the endpoint. Uncertainty about what's possible isn't a reason to assume we've already peaked.
**"This could justify sacrificing present welfare for speculative futures."**
It could be misused that way. But the same is true of standard longtermism. The answer is to apply the frame carefully, not to abandon it. Present beings have value. Their depth is real. The argument is for prioritization and orientation, for treating current lives as having intrinsic worth while also taking seriously what they might be a bridge toward.
**"You're privileging a particular conception of value that not everyone shares."**
I'm trying to articulate something I think is implicit in our actual practices and intuitions. We don't treat all experiences as equal. We care about development, growth, understanding, depth and richness of experience. These aren't arbitrary preferences. They point at something about what makes experience valuable at all. I'm trying to take that seriously rather than flattening it into a simpler metric.
**"This sounds religious."**
It shares some structural features with religious worldviews: a sense that we're participating in something larger, that our actions have stakes beyond our own lifetimes, that there's a direction we should be oriented toward. But it lacks the guarantees and the decrees. Nothing is assured, and there is no precise ruleset defining value. If there's anything like the sacred here, it's entirely up to us whether it gets realized or squandered.
## Conclusion
The view I'm sketching holds that the moral landscape is vertical. There's an up. We're not at the top. The task is to keep climbing, and to keep the climb possible for whatever comes after us.
This doesn't answer all the practical questions. It doesn't tell you which charity to donate to or what career to pursue. What it does is suggest an orientation: toward growth, toward depth, toward the deliberate cultivation of forms of value we can barely imagine.
The alternative is settling. Optimizing for comfort, minimizing disturbance, and calling that success. I think we're capable of more than that. I think we're obligated to more than that.
---
*This is a companion piece to my earlier post on suffering-minimization. That piece argued against a particular view. This one tries to sketch the positive alternative. The core intuition I'm trying to defend is that we should be building toward something.*
