
TLDR: Even if transformative AI systems are technically aligned and deployed with good intentions, there will likely be an “uncanny valley” between the time such systems are created and the time their value is equitably distributed. In the interim, there is a risk of mass suffering being inflicted on those least proximate to the creation of these systems. This creates an argument for a research and policy agenda on “AI adaptation”, which pushes resources and policy ideas toward ensuring that the disruptive effects of transformative AI systems are minimized while we cross to the other side of the uncanny valley.

Note of Thanks

I am thankful for the following papers, which helped inspire me to write this post: the thoughts of Markus Anderljung on AI misuse, the work of David Krueger and Andrew Critch on AI research considerations, the chapter by Ben Garfinkel on AI in historical perspective, and various papers by Allan Dafoe, including those on cooperative AI, AI governance opportunities, and technological determinism. Amongst many papers from CSER and GCRI, I am also grateful for this paper on transformative AI and this paper on resilience to global catastrophe. Thank you also to others who discussed this idea with me and helped me refine this post; your advice was invaluable.

Alignment, Deployment, Adaptation

Members of the Effective Altruism, X-risk, AI Safety, security, and policy (and overlapping) communities have dedicated a laudable amount of effort and resources to ensuring that our collective future can best take advantage of the benefits of advanced artificial intelligence while minimizing its potential existential risks. These efforts can be (broadly) divided into two buckets:

  1. Alignment: There is significant attention devoted to ensuring that AI systems are technically aligned such that they follow the intentions of their human inventors and supervisors in letter and in spirit. 
  2. Deployment: There is considerable focus on guaranteeing that powerful and technically aligned AI systems are deployed in a fashion that protects widely-held values and minimizes damage to the world. Part of this research involves ensuring that AI systems are not developed by actors with malicious incentives who intend to abuse the power of advanced AI systems. This also includes research on figuring out what we want from AI systems in the first place.

Efforts in this space are, as most people reading this post would agree, critical and should be supported in whatever way each of us can. 

At the same time, I feel that this intense focus on alignment and deployment might come at the expense of attention to a set of issues that are potentially less important yet nonetheless vital. I call this set of issues “AI Adaptation”. Borrowing from the vast literature on climate adaptation, I define AI adaptation as the project of adjusting to the expected disruptive effects of advanced artificial intelligence in order to moderate or avoid harm from these disruptions.

The argument here is as follows. Even if one can assume that: 

  • transformative AI systems are likely to be technically aligned,

and

  • these systems are likely to be deployed in a fashion that is not intentionally harmful,

there is still considerable work to be done to fashion global economic, political, and social systems that are prepared for the transition to this fundamentally different world.

There is a growing amount of research adjacent to this adaptation space, such as that mentioned at the top of this post (the paper by Jess Whittlestone and Ross Gruetzemacher is particularly relevant). There is also an increasing amount of research that seeks to ensure we pay considerable attention to the risks from artificial intelligence systems on the road to transformative artificial intelligence, such as work focusing on disinformation, developments in biotechnology, and lethal autonomous weapons.

I believe an important aspect of research that is still under-considered is the disruptive economic and political impact of transformative artificial intelligence, in particular on those who live outside the United States and Western Europe. I believe that in the absence of a project dedicated to funding and researching adaptation to the disruptive effects of transformative artificial intelligence, there may be mass suffering in many parts of the world.

Relevant Assumptions

In arguing for this project, I am making the following assumptions: 

  • First, I am assuming that transformative artificial intelligence is most likely to emerge from either a lab or a government-run facility in the United States. (This premise is not strictly necessary to the argument; it illustrates the more general premise that “transformative AI will emerge in some powerful country”.)
  • Second, (for argument’s sake) I am assuming that the actor who develops this system has fairly good intentions (however we define them) and has also been successful in technically aligning this system. 
  • Third, I am assuming that the development of such an AI system will be, by definition, transformative, and that it will rapidly claim immense amounts of economic and technological value in the global system.
  • Fourth, I assume that while initial ownership of this value will be retained by the inventors of such a system, there will be efforts made to distribute the dividends of this technology to those across the world. 
  • Fifth, and most importantly, I believe there will be an “uncanny valley” between the time that this system is created and starts generating value and the time that its dividends are distributed equitably to those across the world. 
  • Sixth, on the other side of this uncanny valley, AI systems of such intelligence will be able to provide tractable and efficient solutions to global poverty, disease, and other relevant problems faced by many members of our global community. 

An Illustration

As someone who has grown up and spent most of his life in a non-Western country without significant international influence, I am acutely concerned about this uncanny valley. In particular, I am worried that mass economic disruption is likely to inflict suffering on hundreds of millions of people who are least responsible for the technology’s creation, least proximate to its benefits, and most vulnerable to its disruptive effects. Here is a possible illustration of my argument:

  • An advanced AI system is developed by a company within the United States. This system is transformative: it rapidly generates immense economic and technological value, leading to an explosion in the company’s worth. The US government steps in to regulate this company, which is fully cooperative with the government’s position. Together, the government and the company attempt both to realize the value being generated by this system and to craft structures to distribute this value equitably across the world. For a range of reasons, including competing constituency priorities, political inefficiencies, limited human capital, and a lack of global coordination, it takes one to five years to set up a structure that all relevant actors can agree on, even as the American economy takes full advantage of this development and inter-state inequality skyrockets. By the time benefits from this system are distributed to the world’s poor, hundreds of millions have died or experienced serious suffering as a result of this economic disruption.

This illustration is, hopefully, concerning. 

As the tone of this post makes clear, the intention of this illustration is not to provide an argument against the development of advanced AI systems, and it is definitely not an argument against investing resources towards technical alignment and responsible deployment of AI systems. 

Instead, the intention here is to argue for resources and attention (being, for now, agnostic as to how much) to be devoted towards AI Adaptation, to ensure that ‘best-case’ advanced AI scenarios account for the potentially transformative negative effects of advanced AI systems on those least likely to be protected against economic and political disruptions. The use of the term ‘adaptation’ here is intentional: I am assuming an inevitability to the development of transformative artificial intelligence, in the same way that many now treat significant disruptions from climate change as more or less inevitable, an assumption that has incentivized efforts to adapt to a warmer climate.

Some Other Relevant Factors 

In writing this post, I have attempted to be fairly generous in my assumptions, but it is important to note some ways things could go worse, each of which would make the case for adaptation much stronger:

  1. Transformative Artificial Intelligence is achieved “soon”. I am not an expert in this space, and I am agnostic as to whether such a benchmark is reached in 2040, 2050, 2070, or some later date. However, the prospect of this benchmark being reached sooner rather than later is especially alarming, as we are likely to be that much less prepared to guard communities against its disruptive effects.
  2. Takeoff speeds are very hard (fast). Again, I am completely unsure whether we will go from ‘human-level’ AI systems to superintelligence in a few days, a decade, or anything in between. But for reasons similar to point 1, a hard takeoff makes disruption more likely and strengthens the case for working on adaptation with a greater sense of urgency.
  3. Transformative AI may not have to be that intelligent. If advanced AI systems become transformative long before we are close to achieving artificial superintelligence, then we have much less time to guard against disruptions than if AI could only be transformative once it approached our definitions of superintelligence.
  4. The Uncanny Valley is an Uncanny Canyon. If this valley (the time between transformative AI starting to generate value and that value being distributed across the world) is much wider than current estimates, suffering from disruption is likely to be greater, strengthening the argument for innovating on and improving adaptation structures. (A toy numerical sketch of this factor, combined with takeoff speed, follows below.)
  5. Finally, AI systems are not technically aligned or are poorly deployed. If this happens, we may have bigger problems on our hands, but this too bolsters the argument for adaptation: if the general trend of history holds (when global cataclysms happen, they disproportionately affect the most vulnerable), there is a further need to provide these communities with a line of defense.

These factors, and many others, are likely to have a significant bearing on the case for and nature of adaptation and each deserve further independent inquiry of their own in this context. 
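To make factors 2 and 4 concrete, here is a deliberately crude back-of-envelope sketch in Python. Every quantity in it (the affected population, the annual harm rate, the takeoff multiplier) is a placeholder assumption chosen for illustration, not an estimate; the only point is that expected harm scales multiplicatively with how wide the valley is and how fast the takeoff is.

```python
# Toy model: all numbers below are illustrative assumptions, not estimates.

def disruption_harm(valley_years: float,
                    takeoff_multiplier: float,
                    affected_population: float = 2e9,
                    annual_harm_rate: float = 0.01) -> float:
    """Rough person-years of serious harm during the 'uncanny valley'.

    valley_years:        time between value creation and equitable distribution
    takeoff_multiplier:  extra disruption from faster capability gains (>= 1)
    affected_population: people exposed to the disruption (assumed)
    annual_harm_rate:    fraction seriously harmed per year of disruption (assumed)
    """
    return affected_population * annual_harm_rate * takeoff_multiplier * valley_years

# A narrow valley with a slow takeoff versus a wide valley with a fast takeoff:
print(f"{disruption_harm(valley_years=1, takeoff_multiplier=1):.1e}")  # 2.0e+07
print(f"{disruption_harm(valley_years=5, takeoff_multiplier=3):.1e}")  # 3.0e+08
```

Even with these made-up numbers, moving from a one-year valley with a slow takeoff to a five-year valley with a fast takeoff changes the harm by more than an order of magnitude, which is the intuition behind treating valley width and takeoff speed as first-order inputs to any adaptation agenda.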

Some Tentative Policy Suggestions

While this project of adaptation requires much deeper thought and reflection – as well as institutional resources devoted to its inquiry – I think the following tentative policies could be of interest to those who find value in researching this problem: 

  1. Universal basic income. “Universal” in this case is taken literally: a basic stipend could be provided to every human on Earth (perhaps pegged to purchasing power terms), funded by dues paid by affluent governments and potentially by companies at the lead of the AI race. (A rough arithmetic sketch of this idea follows this list.)
  2. Construction of national and global social safety nets. These could rely on similar sources to those mentioned in the first recommendation. A discussion of AI adaptation also has the potential to push national governments themselves to re-orient budgetary priorities towards adaptation efforts in the form of bigger and better-constructed safety nets. 
  3. Global fund for economic disruption. The World Bank’s Financial Intermediary Fund is a good (albeit timid) example of a system that could be massively expanded to include financial support to poor communities across the world who may suffer in the interim as the world adapts to transformative artificial intelligence. 
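As a very rough sketch of the arithmetic behind the first recommendation, the snippet below divides a hypothetical pool of dues across the world’s population and applies a purchasing-power adjustment. Every figure in it (the dues amounts, the population, the PPP factor) is invented for illustration and should not be read as a proposal.

```python
# Minimal arithmetic sketch of a globally funded stipend.
# All figures are hypothetical placeholders, not proposals or estimates.

WORLD_POPULATION = 8e9  # approximate, for illustration

def annual_stipend_per_person(government_dues: float,
                              ai_company_dues: float,
                              population: float = WORLD_POPULATION) -> float:
    """Nominal annual stipend before any purchasing-power adjustment."""
    return (government_dues + ai_company_dues) / population

def ppp_adjusted_transfer(nominal_stipend: float, ppp_factor: float) -> float:
    """Transfer needed to deliver equivalent local purchasing power.

    A ppp_factor above 1 means a dollar buys more locally than in the
    base country, so a smaller nominal transfer has the same real value.
    """
    return nominal_stipend / ppp_factor

# E.g. $2T in dues from affluent governments plus $0.5T from leading AI firms:
nominal = annual_stipend_per_person(2e12, 5e11)
print(nominal)                              # 312.5 dollars per person per year
print(ppp_adjusted_transfer(nominal, 2.5))  # 125.0 for the same local purchasing power
```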

As those focused on global governance, economics, politics, and many other fields are aware, each of these proposals has significant problems, tractability chief among them. The intention is to pitch them tentatively, as a starting point for crafting a policy and research agenda that can aid adaptation efforts.

Parting Thoughts

I believe such a project would be of interest to those vested in reducing the risks from emerging technologies, as well as those dedicating their lives to reducing global poverty and improving global health and well-being. It may also provide an additional general argument against the rapid development of advanced AI systems without careful thought of the consequences. 

Comments

Great stuff! Agree on the importance of this. I think that the odds of this type of disruption being harmful are largely a function of the pace at which increasingly capable systems are deployed. If you go from e.g. 20% task-automation capabilities to 100% over the course of 50 years, that will be a far less disruptive and more equitable transition than one that happens over 3 years. In the fast-takeoff case, I would argue that there probably is not a social safety net program that could adequately counter the social, political, and economic disruptions caused by that pace of deployment. So while we should plan to build societal resilience via institution building and shoring up safety nets for sure, we may want to consider adding “figure out optimal deployment speeds for aligned, not dangerous, misuse-proof AI” and “figure out the right regulatory mechanisms to enforce those timelines on AI labs” to this research agenda as well.
