Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.
What is the Windfall Clause?
The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By “extremely large profits,” or “windfall,” we mean profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities. It is unlikely, but not implausible, that such a windfall could occur; as such, the Windfall Clause is designed to address a set of low-probability future scenarios which, if they come to pass, would be unprecedentedly disruptive. By “ex ante,” we mean that we seek to have the Clause in effect before any individual AI firm has a serious prospect of earning such extremely large profits. “Donate” means, roughly, that the donated portion of the windfall will be used to benefit humanity broadly.
Properly enacted, the Windfall Clause could address several potential problems with AI-driven economic growth. The distribution of profits could compensate those left unemployed, through no fault of their own, by advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable. It provides AI labs with a credible, tangible mechanism to demonstrate their commitment to pursuing advanced AI for the common global good. Finally, it offers a concrete proposal that may stimulate further discussion about how best to mitigate AI-driven disruption.
Motivations Specific to Effective Altruism
Most EA resources devoted to AI have, to date, focused on extinction risks from AI. One might wonder whether the problems addressed by the Windfall Clause are really as pressing as these.
However, a long-term future in which advanced forms of AI, such as artificial general intelligence (AGI) or transformative AI (TAI), arrive but primarily benefit a small portion of humanity is still highly suboptimal. Failure to ensure that advanced AI benefits all could "drastically curtail" the potential of Earth-originating intelligent life. Intentional or accidental value lock-in could result if, for example, a TAI does not cause extinction but is programmed to primarily benefit shareholders of the corporation that develops it. The Windfall Clause thus represents a legal response to this sort of scenario.
There remain significant unresolved issues regarding the exact content of an eventual Windfall Clause, and the way in which it would be implemented. We intend this report to spark a productive discussion, and recommend that these uncertainties be explored through public and expert deliberation. Critically, the Windfall Clause is only one of many possible solutions to the problem of concentrated windfall profits in an era defined by AI-driven growth and disruption. In publishing this report, our hope is not only to encourage constructive criticism of this particular solution, but more importantly to inspire open-minded discussion about the full set of solutions in this vein. In particular, while a potential strength of the Windfall Clause is that it initially does not require governmental intervention, we acknowledge and are thoroughly supportive of public solutions.
We hope to contribute an ambitious and novel policy proposal to an already rich discussion on this subject. More important than this policy itself, though, we look forward to continuously contributing to a broader conversation on the economic promises and challenges of AI, and how to ensure AI benefits humanity as a whole. Over the coming months, we will be working with the Partnership on AI and OpenAI to push such conversations forward. If you work in economics, political science, or AI policy and strategy, please contact me to get involved.