
We recently launched a Working Paper Series on SSRN to disseminate our research more widely within the legal academic community. The series includes several recent pieces by our teammates and affiliates that may be of interest to forum readers. Below you can read the abstracts of the pieces currently included in the series. For a complete list of published and ongoing research, you can visit our website. Earlier this year, we also published our research agenda (see this EA Forum post), which is likewise available on SSRN.

Protecting Future Generations: A Global Survey of Legal Academics

By Eric Martínez and Christoph Winter

Abstract

The laws and policies of today may have historically unique consequences for future generations, yet their interests are rarely represented in current legal systems. The climate crisis has shed light on the importance of taking into consideration the interests of future generations, while the COVID-19 pandemic has shown that we are not sufficiently prepared for some of the most severe risks of the next century. What we do to address these and other risks, such as from advanced artificial intelligence and synthetic biology, could drastically affect the future. However, little has been done to identify how and to what degree the law can and ought to protect future generations. To respond to these timely questions of existential importance, we sought the expertise of legal academia through a global survey of over 500 law professors (n=516).

This Article elaborates on the experimental results and their implications for legal philosophy, doctrine, and policy. Our results strongly suggest that law professors across the English-speaking world widely consider the protection of future generations to be an issue of utmost importance that can be addressed through legal intervention. Strikingly, we find that law professors desire for humans living in the far future (100+ years from now) more than three times the legal protection they perceive them to currently receive, a desired level roughly equal to the perceived current protection for present generations. Furthermore, a large majority of law professors (72%) responded that legal mechanisms are among the most predictable, feasible mechanisms through which to influence the long-term future, with environmental and constitutional law seen as particularly promising. These findings hold independent of demographic factors such as age, gender, political affiliation, and legal training. Although future generations have not been granted standing in any cases to date, responses indicated that law professors believe there is a plausible legal basis for granting standing to future generations and other neglected groups, such as the environment and non-human animals, in at least some cases. Other topics surveyed included which constitutional mechanisms were perceived as better able to protect future generations.

Finally, we outline some limitations of the study and potential directions for future research. For example, one might doubt legal scholars’ ability to estimate the long-term impact of law and legal systems, due to the conjunction fallacy and availability bias. In that regard, future research could survey forecasters, who have expertise in evaluating and predicting the future more generally.

Experimental Longtermist Jurisprudence

By Eric Martínez and Christoph Winter

Abstract

Recent scholarship has revealed a seemingly stark mismatch between the value of future generations and the lack of protection afforded to them under present legal systems. Although climate change, pandemics, nuclear war, and artificial intelligence pose greater threats to the future of humanity than any previous risk (Ord, 2020), legal systems fail to grant future generations democratic representation in the legislature, standing to bring a lawsuit in the judiciary, and serious consideration in cost-benefit analyses in the executive. What is the source of this disconnect, is it justified, and, to the extent that it is not justified, what might one do about it?

Here we discuss how a new research field within experimental jurisprudence, which we refer to as experimental longtermist jurisprudence, might help address these questions and in turn help determine the appropriate level and form of legal protection for future generations.

The chapter is divided into three parts. In Part I, we provide an overview of the substantive and methodological underpinnings of experimental longtermist jurisprudence. In Part II, we introduce three research programs within experimental longtermist jurisprudence, and in Part III, we discuss the normative implications of each of these research programs.

Antitrust-Compliant AI Industry Self-Regulation

By Cullen O’Keefe

Abstract

The touchstone of antitrust compliance is competition. To be legally permissible, any industrial restraint on trade must have sufficient countervailing procompetitive justifications. Usually, anticompetitive horizontal agreements like boycotts (including a refusal to produce certain products) are per se illegal.

The “learned professions,” including engineers, frequently engage in somewhat anticompetitive self-regulation through professional standards. These standards are not exempt from antitrust scrutiny. However, Supreme Court opinions have held that some forms of professional self-regulation that would otherwise receive per se condemnation can instead receive more favorable antitrust analysis under the “Rule of Reason,” which weighs procompetitive and anticompetitive impacts to determine legality. To receive rule-of-reason review, such professional self-regulation would need to:

  1. Be promulgated by a professional body;
  2. Not directly affect price or output level; and
  3. Seek to correct some market failure, such as information asymmetry between professionals and their clients.

Professional ethical standards promulgated by a professional body (e.g., one comparable to the American Medical Association or American Bar Association) that prohibit members from building unsafe AI could plausibly meet all of these requirements.

This paper does not argue that such an agreement would clearly win in court, that it would be legal, or even that it would survive rule-of-reason review. It argues only that there is a colorable argument for analyzing such an agreement under the Rule of Reason rather than a per se rule. This could therefore be a plausible route to an antitrust-compliant horizontal agreement not to engineer AI unsafely.

The Rise of the Constitutional Protection of Future Generations

By Renan Araújo and Leonie Koessler

Abstract

Many comparative constitutional law scholars have catalogued constitutional rights and studied their historical development. However, as new waves of constitution-making arise, new rights emerge too. This article argues that future generations are a new holder of legal interests in constitutions worldwide, a consequential phenomenon that the literature has thus far overlooked. By examining all national written constitutions, historical and contemporary, we present a chronology of the constitutionalization of future generations and show how such provisions grew from a handful of constitutions to 41% of all constitutions as of 2021 (81 out of 196). Through content analysis, we show how they have gradually become part of a modern, universalist language of constitution-making and have reframed older rights from abstraction into the protection of people in the future. We also assess the strength of these provisions, analyzing their de jure intensity and de facto repercussions, the latter through case studies from around the globe.

Defining the Scope of AI Regulations

By Jonas Schuett

Abstract

The paper argues that policy makers should not use the term artificial intelligence (AI) to define the material scope of AI regulations. The argument is developed by proposing a number of requirements for legal definitions, surveying existing AI definitions, and discussing the extent to which they meet the proposed requirements. It is shown that existing definitions of AI do not meet the most important requirements for legal definitions. Next, the paper suggests that policy makers should instead adopt a risk-based approach: rather than using the term AI, they should focus on the specific risks they want to reduce. It is shown that the requirements for legal definitions can be better met by considering the main causes of relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.

AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

By Peter Cihon, Moritz J. Kleinaltenkamp, Jonas Schuett, and Seth D. Baum

Abstract

As artificial intelligence (AI) systems are increasingly deployed, principles for ethical AI are also proliferating. Certification offers a method both to incentivize adoption of these principles and to substantiate that they have been implemented in practice. This paper draws from the management literature on certification and reviews current AI certification programs and proposals. Successful programs rely on both emerging technical methods and specific design considerations. To avoid two common failures of certification, program designs should ensure that the substance behind the certification’s symbol is implemented in practice and that the program achieves its stated goals. The review indicates that the field currently focuses on self-certification and third-party certification of systems, individuals, and organizations, to the exclusion of process management certifications. Additionally, the paper considers prospects for future AI certification programs. Ongoing changes in AI technology suggest that AI certification regimes should be designed to emphasize governance criteria of enduring value, such as ethics training for AI developers, and to adjust technical criteria as the technology changes. Overall, certification can play a valuable role in the portfolio of AI governance tools.

The Challenges of Artificial Judicial Decision-Making for Liberal Democracy

By Christoph Winter

Abstract

The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks that introducing AI into judicial decision-making poses to liberal democratic values. This article sets out to fill this void by identifying and engaging with challenges arising from artificial judicial decision-making, focusing on three pillars of liberal democracy: equal treatment of citizens, transparency, and judicial independence. Methodologically, the work takes a comparative perspective between human and artificial decision-making, using the former as a normative benchmark to evaluate the latter.

The chapter first argues that AI that would improve the equal treatment of citizens has already been developed but not yet adopted. Second, while the lack of transparency in AI decision-making poses severe risks that ought to be addressed, AI can also increase the transparency of the options and trade-offs that policy makers face when considering the consequences of artificial judicial decision-making. Such transparency of options offers tremendous benefits from a democratic perspective. Third, the overall shift of power from human intuition to advanced AI may threaten judicial independence, and with it the separation of powers. While improvements regarding discrimination and transparency are available or on the horizon, it remains unclear how judicial independence can be protected, especially given the potential development of advanced artificial judicial intelligence (AAJI). Working out the political and legal infrastructure needed to reap the fruits of artificial judicial intelligence in a safe and stable manner should become a priority for future research in this area.

Longtermist Institutional Reform

By Tyler John and William MacAskill

Abstract

In all probability, future generations will outnumber us by thousands or millions to one. In the aggregate, their interests therefore matter enormously, and anything we can do to steer the future of civilisation onto a better trajectory is of tremendous moral importance. This is the guiding thought that defines the philosophy of longtermism. Political science tells us that the practices of most governments are at stark odds with longtermism, but the problems of political short-termism are neither necessary nor inevitable. In principle, the state could serve as a powerful tool for positively shaping the long-term future. In this chapter, we make some suggestions about how to align government incentives with the interests of future generations. First, in Section II, we explain the root causes of political short-termism. Then, in Section III, we propose and defend four institutional reforms that we think would be promising ways to extend the time horizons of governments: 1) government research institutions and archivists; 2) posterity impact assessments; 3) futures assemblies; and 4) legislative houses for future generations. Section IV concludes with five additional reforms that are promising but require further research; to fully resolve the problem of political short-termism, we must develop a comprehensive research programme on effective longtermist political institutions.

Empowering Future People by Empowering the Young

By Tyler John

Abstract

A number of recent writers have argued that the obligations of modern states to people who will exist in the future may far outstrip their obligations to their present citizens, given the vast number of people who will exist in the future and whose livelihoods depend on our actions (Beckstead 2013, Greaves and MacAskill 2019, John 2020, Tarsney 2019). And yet modern states do precious little on behalf of future generations, choosing to allow and incentivize destructive practices such as the widespread burning of fossil fuels, while failing to take preventive measures that could avert global pandemics and other catastrophes.

The state is plagued with problems of political short-termism: the excessive priority given to near-term benefits at the cost of future ones (González-Ricoy and Gosseries 2016b). By the accounts of many political scientists and economists, political leaders rarely look beyond the next 2-5 years and into the problems of the next decade. There are many reasons for this, from time preference (Frederick et al 2002, Jacobs and Matthews 2012) to cognitive bias (Caney 2016, Johnson and Levin 2009, Weber 2006) to perverse re-election incentives (Arnold 1990, Binder 2006, Mayhew 1974, Tufte 1978), but all involve forgoing costly short-term action (e.g. increasing taxes, cutting benefits, imposing regulatory burdens) that would have larger moderate- to long-run benefits. Such behavior fails not only the generations of people to come, but also the large number of existing citizens who still have much of their lives left to lead.

One type of mechanism for ameliorating political short-termism that has received much attention involves apportioning greater relative political influence to the young. As the story goes: younger citizens generally have greater remaining life expectancy than older citizens, so it is reasonable to expect that their preferences extend further into the future. If we apportion greater relative political influence to the young, it therefore seems that our political system as a whole will show greater concern for the future.

In light of this story, a number of particular mechanisms have been proposed for apportioning greater relative political influence to the young, including lowering the voting age (Piper 2020), weighting votes inversely with age (MacAskill 2019, Parijs 1998), disenfranchising the elderly (Parijs 1998), and instituting youth quotas in legislatures (Bidadanure 2016, MacKenzie 2016).

In what follows, I argue that merely apportioning greater political power to the young is unlikely to make states significantly less short-termist, but that underexplored age-based mechanisms may be more successful. In particular, states might mitigate short-termism by employing age-based surrogacy and liability-incentive mechanisms within a deliberative body of young people charged with representing the young.

Protecting Sentient AI: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection of Sentient Artificial Intelligence

By Eric Martínez and Christoph Winter

Abstract

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n=1061) on their views regarding granting (a) general legal protection, (b) legal personhood, and (c) standing to bring a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest of any group surveyed, and participants rated the desired level of protection for sentient AI lower than that for all groups other than corporations. We further observed political differences in responses: liberals were more likely than conservatives to endorse legal protection and personhood for sentient AI. Taken together, these results suggest that laypeople are not, by and large, in favor of granting legal protection to AI, and that the ordinary conception of legal status, like codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous findings regarding political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration as well.
