Over the long run, technology has improved the human condition. Nevertheless, the economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the possibility of extreme disruption, and act to mitigate its negative impacts. This report introduces a new policy lever to this discussion: the Windfall Clause.
What is the Windfall Clause?
The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By “extremely large profits,” or “windfall,” we mean profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities. It is unlikely, but not implausible, that such a windfall could occur; as such, the Windfall Clause is designed to address a set of low-probability future scenarios which, if they come to pass, would be unprecedentedly disruptive. By “ex ante,” we mean that we seek to have the Clause in effect before any individual AI firm has a serious prospect of earning such extremely large profits. “Donate” means, roughly, that the donated portion of the windfall will be used to benefit humanity broadly.
Motivations
Properly enacted, the Windfall Clause could address several potential problems with AI-driven economic growth. The distribution of profits could compensate those rendered faultlessly unemployed due to advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable. It provides AI labs with a credible, tangible mechanism to demonstrate their commitment to pursuing advanced AI for the common global good. Finally, it provides a concrete suggestion that may stimulate other proposals and discussion about how best to mitigate AI-driven disruption.
Motivations Specific to Effective Altruism
Most EA AI resources to date have been focused on extinction risks from AI. One might wonder whether the problems addressed by the Windfall Clause are really as pressing as these.
However, a long-term future in which advanced forms of AI like AGI or TAI arrive but primarily benefit a small portion of humanity is still highly suboptimal. Failure to ensure advanced AI benefits all could "drastically curtail" the potential of Earth-originating intelligent life. Intentional or accidental value lock-in could result if, for example, a TAI does not cause extinction but is programmed to primarily benefit shareholders of the corporation that develops it. The Windfall Clause thus represents a legal response to this sort of scenario.
Limitations
There remain significant unresolved issues regarding the exact content of an eventual Windfall Clause, and the way in which it would be implemented. We intend this report to spark a productive discussion, and recommend that these uncertainties be explored through public and expert deliberation. Critically, the Windfall Clause is only one of many possible solutions to the problem of concentrated windfall profits in an era defined by AI-driven growth and disruption. In publishing this report, our hope is not only to encourage constructive criticism of this particular solution, but more importantly to inspire open-minded discussion about the full set of solutions in this vein. In particular, while a potential strength of the Windfall Clause is that it initially does not require governmental intervention, we acknowledge and are thoroughly supportive of public solutions.
Next steps
We hope to contribute an ambitious and novel policy proposal to an already rich discussion on this subject. More important than this policy itself, though, we look forward to continuously contributing to a broader conversation on the economic promises and challenges of AI, and how to ensure AI benefits humanity as a whole. Over the coming months, we will be working with the Partnership on AI and OpenAI to push such conversations forward. If you work in economics, political science, or AI policy and strategy, please contact me to get involved.
Thanks very much for sharing this. It is nice to see some innovative thinking around AI governance.
I have a bunch of different thoughts, so I'll break them over multiple comments. This one mainly concerns the incentive effects.
I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not about a short-term distributional effect. (The standard counter-argument to Chamley-Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs.)
This is sort of covered in B.1., but I do not think the responses are very persuasive. The main response is rather glib:
There are a lot of desirable investments which would be rendered uneconomic. The fact that some investment will continue at a reduced level does not mean that the forgone projects are not a great cost! For example, a 20% pre-tax return on investment for a moderately risky project is highly attractive - but after ~25% corporate taxes and ~50% windfall clause, this is a mere 5% return* - almost certainly below the firm's cost of capital, and hence society will probably miss out on the benefits. Citation 231, which seems like it should be doing most of the work here, instead references a passing comment in a pop-sci book about individual taxes:
But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.
I also think the sub-responses are unsatisfying. You mention that the clause will be voluntary:
But this does not mean it won't reduce incentives to innovate. Firms can rationally take actions that reduce their future innovation (e.g. selling off an innovative but risky division for a good price). A firm might voluntarily sign up now, when the expected cost is low, but then see its incentives dramatically curtailed later, when the cost is large. Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example, a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at a cost to the firm.
Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.
The report then goes on to discuss externalities:
Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax. 'Being very profitable' is not a negative externality, so a tax on profits is not an effective way of minimising negative externalities. Similarly, the Malicious Use paper is mainly about specific bad use cases, rather than size qua size being undesirable. Moreover, size has little to do with Seb's argument, which is about estimating the costs of specific research proposals when applying for grants.
I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.
Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical):
and
However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:
\* for clarity of exposition I am assuming the donation is not tax deductible, but the point is not dramatically altered if it is.
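For concreteness, the arithmetic in that footnote can be sketched in a few lines (the function name and numbers are illustrative; as in the footnote, the donation is assumed not to be tax deductible, so both levies apply to the same pre-tax profit):

```python
def post_clause_return(pre_tax_return, corporate_tax_rate, clause_rate):
    """Return left to shareholders after corporate tax and the windfall
    donation, assuming the donation is not tax deductible (both are
    levied on the same pre-tax profit)."""
    tax = pre_tax_return * corporate_tax_rate
    donation = pre_tax_return * clause_rate
    return pre_tax_return - tax - donation

# The worked example above: 20% pre-tax return, ~25% corporate tax,
# ~50% windfall clause.
r = post_clause_return(0.20, 0.25, 0.50)
print(f"{r:.0%}")  # → 5%
```

If the donation were instead deductible, the clause would apply to post-tax profit and the residual return would be slightly higher (7.5% in this example), which is why the footnote notes the point is not dramatically altered.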