The Ethics of Building What We Cannot Control
Cross-post from Substack: The Moral Weight of the Great Silence
Beyond our tiny corner, the universe appears vast and silent. At this threshold moment in the development of artificial intelligence, the Fermi silence suggests a sobering possibility: the cosmos may not be empty of life but littered with civilisations that failed to survive their own technological adolescence.
Viewed from that cosmic distance, our own society appears to be approaching a similar inflection point. Just as primordial chemistry once gave rise to intelligent life, we are now attempting to create minds that operate beyond biological limits. We are building artificial general intelligence, and perhaps even superintelligence.
For some, this represents the apex of human ingenuity and a beacon of hope: solve intelligence first, then use it to solve everything else. A machine capable of reasoning at speeds and levels of complexity beyond human comprehension could cure disease, eliminate scarcity, and enable interstellar exploration. From this vantage point, superintelligence is the instrument by which humanity transcends its biological constraints.
Yet our understanding of these systems remains incomplete. Modern AI models are grown and shaped through training rather than designed and verified. We are constructing systems whose internal logic we cannot fully map, and whose values and objectives remain opaque. Even their creators acknowledge a “non-zero” probability of catastrophic outcomes, with published estimates sometimes reaching double-digit percentages.
Perhaps such figures are overstated, shaped by competitive incentives or media amplification. But even a one-percent probability of existential harm carries staggering moral weight. In any other context, such a risk would be intolerable. No one would allow a child to experiment with a device that carried a one-percent chance of lethality. The scale of the potential consequence transforms even small probabilities into profound ethical burdens.
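To make the arithmetic behind that intuition explicit, here is a rough expected-value sketch. The one-percent probability and the eight-billion population figure are illustrative assumptions, not estimates drawn from any particular lab:

$$\mathbb{E}[\text{lives lost}] = p \times N = 0.01 \times 8{,}000{,}000{,}000 = 80{,}000{,}000$$

On this crude accounting, a one-percent existential risk is equivalent in expectation to eighty million deaths. That is the sense in which small probabilities, multiplied by civilisational stakes, become staggering moral weights.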
Meanwhile, the race accelerates. Billionaires, entrepreneurs, nation-states, and Nobel laureates act within a competitive architecture few appear willing to abandon. Open-weight models proliferate across geopolitical lines, accelerating development under the logic of strategic necessity. The narrative is a familiar one: if “we” do not build it first, someone less scrupulous will.
But strategic pressure does not confer moral permission.
Herein lies a deeper trap: the pairing of “the ends justify the means” with a sense of inevitability. Those living in the present risk being treated not as individuals with inherent dignity, but as instruments in the machinery of progress. When potential long-term benefits are framed as astronomical and inevitable, present harms become psychologically easier to discount. The imagined future of humanity flourishing alongside superintelligence can begin to outweigh the moral standing of individuals who bear today’s risks. Present harm is reframed as the necessary price of a future utopia.
In 1785, Immanuel Kant argued in his Formula of Humanity that we must treat people never merely as a means, but always also as an end. Whatever one’s broader moral theory, this principle captures an enduring constraint: individuals possess moral standing that cannot be overridden solely by appeal to aggregate future gains. Human worth does not fluctuate according to its utility within a grand technological narrative.
There is also a psychological ratchet at work. As billions are committed and public pledges deepen, we approach a moral analogue of the sunk cost fallacy: a point at which reversal becomes unthinkable. The more that has been invested, the more necessary success appears. Contradictory evidence is reframed as a temporary obstacle, and harms are reinterpreted as acceptable trade-offs. The weight of past decisions generates a powerful psychological need to justify them. The future must be inevitable, because only inevitability can redeem what has already been sacrificed in its name.
Yet none of this absolves responsibility. The diffusion of agency across corporations and countries obscures a fundamental truth: history is not inevitable; it is the accumulation of individual decisions. No company, country, or individual receives moral exemption because the trajectory feels unstoppable.
If the creation of superintelligence carries a credible risk of catastrophe, then each participant bears some responsibility for the future they are helping to shape. If intelligence has indeed been a civilisational bottleneck, then the moral question is not only whether we can cross it, but under what conditions it is permissible to attempt it — and whether we are prepared to bear the ethical weight of doing so.
The silence of the stars may be a warning, or it may not. But if other civilisations once stood where we stand now, their absence suggests that the transition from intelligence to superintelligence may be a gate few pass through, and that the burden of proof lies with those who assume we will.
Our civilisation will not be measured by whether we possessed the power to build a god, but by whether we possessed the wisdom to ensure it was safe. Technological progress is not a law of nature. It is a series of choices.
