Introduction
The question: What policies will impact AI risks?
This post looks to answer the following challenge: given that we are still at an early stage of AI development and the implications of Transformative AI are highly uncertain, what kinds of policies might we want to focus on now? And how certain can we be that these policies will impact the development of Transformative AI?
Scope of post
This document is focused on domestic (not international) government (not corporate) policy that could positively affect the introduction of Transformative AI (TAI, as explained here). The conclusions should be globally applicable, although they are based on my understanding of and research into UK policy, and all the examples are from the UK.
Summary
If we want to ensure a world where the transition to TAI goes well, we need to focus now on suggesting policies that build robust institutions and processes of checks and balances, so that governments can make good, flexible decisions about long-term issues and AI issues. This is because such institutional-design policies:
- Are best practice. Building institutions and processes is one of the main ways that democratic governments have a significant effect on the long term.
- Are likely to impact TAI development but avoid the challenges of information hazards and of developing policies directly about TAI.
- Are urgent. Governments are already developing the charters, strategies and institutions that will form the basis for future decisions on TAI.
Examples of such policies are:
- Future generations policies (Eg: the Welsh Future Generations Act),
- Well designed AI/tech regulators (Eg: modelled on the Office for Nuclear Regulation),
- Accountability checks on military use of AI (Eg: the Defence Safety Authority).
1. Knowing the long-term impacts of today’s policies
Any details of TAI development are highly uncertain. If we want to affect TAI development with policy now, we should look at how policy makers already develop policy that deals with uncertain future scenarios.
Ensuring the long-term impacts of any policy
There are a range of solutions that policy makers apply to develop policy for uncertain futures:
- Understand the future. Work closely with science, academia, foresight experts and the private sector in policy development. Eg: the CCUS Cost Challenge Taskforce.
- Put in place flexible laws and commitments. Eg: the Climate Change Act sets targets for carbon removal but does not set out how the carbon is to be removed.
- Build policy institutions. Eg: parliaments or regulators or commissions can often adapt and change rules as they go.
- Put in place expectations for good policy making processes. Eg: the expectation for policy makers to consult before introducing new policy means that public views will be considered.
- Implement processes that improve themselves. Eg: the Nuclear Non-Proliferation Treaty (NPT) has regular review conferences. Eg: most bills have a requirement for them to be reviewed.
- Build future governments' access to knowledge. Ensure ongoing monitoring and evaluation of policies and collection of useful data. Eg: the work of the Office for National Statistics.
- Build future governments' expertise and ensure skilled staff work in or closely with government. Eg: the Open Innovation Team brings scientists into government.
- Set up future-focused financial instruments. Eg: pension schemes or reinsurance.
In general, the process for developing good future-focused policy is to do one or both of:
- Hold constant the things that should remain constant and build flexibility into the system so future policy makers can address uncertainty.
- Create incentive structures so future policy makers are likely to make good decisions (including ensuring the future policy makers will be equipped with useful resources and information).
Further research: More in-depth research could be done on best practice for long-term decision making in policy or in business.
Applying this to TAI
Ideally we want to develop policies to impact TAI with a reasonable level of certainty that the expected value of such policies on TAI development is positive and non-negligible. The methods above give some guidance on how to do this. It is unclear at this stage exactly what we might want to hold constant about TAI policy (although if people have ideas let me know). So it is likely that for policy makers today (Eg: if you were currently a head of state) the best way to ensure good outcomes for future TAI scenarios would be to put in place the incentive structures and resources to guide future decision makers.
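One way to make this expected-value framing precise (my own sketch, not a standard formulation): let s range over possible TAI scenarios, with P(s) the probability of scenario s and V(p, s) the value of policy p in scenario s. Then we want policies where:

```latex
% Expected value of a policy p over uncertain TAI scenarios s
\mathbb{E}[V(p)] = \sum_{s} P(s)\, V(p, s) > 0
```

Since P(s) is deeply uncertain, it is safer to favour policies for which V(p, s) is non-negative in (nearly) every scenario, i.e. robustly positive policies, rather than policies whose sign depends on getting P(s) right.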
Conclusion 1: Focus on building flexible policies, institutions and tools and systems of checks and balances to support decision makers on AI issues further down the line.
2. General vs specific policies
I have found the following model useful for considering policies that could affect TAI development. It is a scale ranging from general policies that improve altruistic incentives to very specific TAI-focused policies.
General policies. If we lived in a world with global prosperity, perfect decision systems and coordinated value-aligned actors, then the risks from TAI and other future technologies would be reduced. Creating such a world is a difficult task. Yet there are numerous ways to improve policy decision-making processes and spread good institutional design. However, not everything that leads to this end goal is clearly positive. For example, state actors having better access to up-to-date science might lead to actors building more dangerous weapons.
Specific policies. It does not seem implausible that there are small policy changes that a single key government individual could implement that would have a clear effect on TAI development. However, currently, there are:
- Difficulties and uncertainties in developing such policies. We do not know how TAI development will proceed.
- Information hazards. When impressing upon policy makers the case that TAI could be powerful, there is a risk that messages of power will dominate over messages of loss-of-control risk, leading to poor decisions, such as AI arms races.
It is hard to draw conclusions from this. Some considerations:
- There is no strong case to assume that the best policies to implement all clump at one point of this spectrum. The difference in expected value between individual policy suggestions is likely high, with good and bad suggestions across the spectrum.
- General heuristics about specificity and diffuseness would push us towards more specific, targeted policies having a greater actual impact on TAI.
- Most, but certainly not all, campaign groups focus on their specific issue rather than on high-level improvements to decision making. This implies that if we want to effect change we should lean towards more specific policies.
- In general we are unlikely to currently be able to identify many good “specific policies” due to the aforementioned difficulties and information hazards.
- In general people concerned with existential risks should consider the value of trying to improve institutional decision making mechanisms.
Further research: It could be useful to map out example cases where high-level general policies influenced specific policies, especially where the high-level changes were pushed by groups external to government.
3. Delaying and urgency
Unless we think TAI is imminent, why not delay and take more time to develop policy suggestions?
Current state of global AI policy
The rising interest in AI is leading states from China to Tunisia to adopt AI policies or national AI strategies. These strategies cover topics such as supporting the AI industry, developing or capturing AI skills, improving digital infrastructure, funding R&D, data use, safeguarding public data, AI use by government and AI regulation. In each case these strategies sit alongside work on related topics such as supporting the tech industry, software regulation, privacy and data use. States are also developing more targeted AI policies looking at how AI could be used in the military, transport, healthcare and so on.
Within this process states have recognised the need to understand and prepare for the sociological, economic, ethical and legal implications that may stem from the widespread adoption of AI technology. There has been minimal action at the state level to address the implications of TAI.
Further research: It could be useful to map out in more detail exactly what policies are in development or likely to be implemented soon. It would also be useful to map policies that are not working or are poorly designed, and what might be going wrong.
What this means for TAI
For many potential policies relating to TAI there is no hurry to suggest and develop policies, because the topics are so far from the eyes of policy makers that decisions are not being made. Additionally, for risk-related reasons you may wish to delay pushing for a policy. For example, it would be prudent to delay pushing for any policy where the sign of its impact depends on an uncertain crucial consideration. (I will be considering risk in a separate paper, draft here.)
However, as set out above, on some topics policy makers are already setting the policy. It would be prudent to consider the TAI implications of the policies being made and, where current decisions might impact future decisions on TAI issues, to work with policy makers to ensure that good policy is developed.
Conclusion 2: Focus on the AI policies that are being implemented that could impact decisions on TAI. (Which is largely high level strategies on AI and some regulation of digital technology and data use.)
Conclusions: policies to implement
So far we have discussed the connection between high-level general policies and specific policies and drawn the following two conclusions:
Conclusion 1: Focus on building flexible policies, institutions and tools and systems of checks and balances to support decision makers on AI issues further down the line.
Conclusion 2: Focus on the AI policies that are being implemented that could impact decisions on TAI. (Which is largely high level strategies on AI and some regulation of digital technology and data use.)
Overall, from this, I conclude that it would be useful to focus efforts on:
Optimising the policy decision-making processes that could have an effect on TAI development
This should include the adoption of best practice decision-making processes in institutions such as governments, tech regulators and militaries. (Although this should not be to the exclusion of pushing for any clear specific policies that are identifiable and beneficial.)
The rest of this section explores this idea in more detail.
Examples of the kinds of policies this implies
- Ensuring that any AI regulators have a reasonable amount of autonomy from government, expert staff, public support, the flexibility to adapt to new technological developments, and follow regulatory best practice such as outcome-focused regulation. (Like the Office for Nuclear Regulation or the Human Fertilisation and Embryology Authority.)
- Policies that set expected standards for good governance in the tech industry, including the expertise and ethical behaviour of senior officials at large firms and the need for clear lines of responsibility. (Like the corporate governance rules for the banking industry.)
- Future generations policies that encourage concern for future wellbeing. (Like the Wellbeing of Future Generations (Wales) Act 2015.)
- Having a check and balance on the use and development of AI by the military. (Like how the UK's Defence Nuclear Safety Regulator and the UK's commitments to international nuclear treaties provide checks on the safety of the UK military's development of nuclear capabilities.)
Further research: I will be writing in more detail about the kinds of decision-making reforms I would like to see, in particular on technology regulation and improving long-term planning, with a UK focus. I would be interested in others doing similar research, especially on other policy areas or other countries.
Further considerations
Based on my experience, I would add the following qualifiers to consider when improving institutional policy decision-making processes:
- Trade-offs will be made in designing systems. For example, requiring more people to be involved in a decision process may lead to a better outcome, but the process will also take longer.
- The focus should be on reforming checks, balances and incentive structures, such as by boosting concern for citizen wellbeing, the long-run future and global cooperation (rather than providing leaders or the state with the strategic capability to do as they wish).
- Improvements happen iteratively and slowly. Over the course of years, systems are adopted, mistakes are made, improvements are identified, and systems are adapted. No system is perfect, but for the sake of learning, I would support adopting systems that work as well as possible, as soon as possible.
- Given the challenge of developing good policy and the existing knowledge from best practice, I personally (and maybe controversially) would be very wary of anyone looking to implement entirely new, untested or highly innovative policy processes in this space (Eg: regulatory markets, age-weighted voting, etc).
Conclusions: How certain can we be that this will help?
There is still a high level of uncertainty (as with any work relating to TAI or the long-run future). My anecdotal experience suggests that good, well-designed institutions make good decisions and that a lack of checks and balances leads to very bad decisions. This seems to be a widely supported view.
It is possible to look at evidence from international development, where developing-world governance reforms and anti-corruption measures have been a focus of interventions for the last few decades. The evidence I have come across seems to suggest that:
- Improving policy governance processes works to create change.
- There is no one-size-fits-all approach. Change-makers need to be cautious about what reforms they suggest and work with governments to implement bespoke processes that work for them.
It is also worth considering that, on the whole, humanity knows how to do this. Systems design, organisational design, regulation, policy making, etc are well-researched and oft-applied disciplines. There is a host of best practice and relevant examples to draw from. (That said, as above, solutions need to be adapted to circumstances, and making new processes can present a challenge.)
Overall I think we can be reasonably confident that such policies are robustly positive and have a non-negligible expected impact on TAI development.
Further research: The above is a qualitative argument for the value of this policy work. It would be good to see a quantitative cost-effectiveness estimate.
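For illustration, such an estimate might take a shape like the sketch below. Every number is a made-up placeholder of my own, included purely to show the structure, not a claim about actual costs or probabilities.

```python
# Illustrative shape of a cost-effectiveness Fermi estimate for this kind of
# policy work. Every number below is a hypothetical placeholder, not a claim.

cost_of_campaign = 1_000_000    # placeholder: cost of a policy campaign (GBP)
p_policy_adopted = 0.10         # placeholder: chance the campaign changes policy
p_affects_tai = 0.05            # placeholder: chance the policy then affects TAI outcomes
value_if_affects_tai = 10 ** 9  # placeholder: value of a better TAI outcome (GBP-equivalent)

# Expected benefit is the product of the probabilities and the conditional value
expected_benefit = p_policy_adopted * p_affects_tai * value_if_affects_tai
print(f"Expected benefit: {expected_benefit:,.0f} GBP-equivalent")
print(f"Benefit-cost ratio: {expected_benefit / cost_of_campaign:.1f}")
```

A real estimate would of course need defensible inputs and sensitivity analysis; the point here is only that the argument above could, in principle, be quantified.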
Further research: I would love to see research on exactly how important good institutions are and research from experts in international development governance reforms on what we can learn about how to shape domestic institutions.
Why you might think this is incorrect
One might argue that:
- The evidence above that good institutions lead to good decisions is too weak.
- TAI policy is different from all other policy and therefore just because we can normally trust institutions to make good decisions we cannot do so for TAI policies.
- There are still information hazards.
- Good decision making might allow one country to act in the benefit of that country rather than in the global benefit or allow a leader to succeed with increasing totalitarianism.
- There may be unintended consequences. Eg: an independent check on the military use of AI could hamper a military at a key moment in a global conflict, leading to civilisational collapse.
- We may design the wrong institutions.
- Best practice would be adopted anyway, so it is a waste of effort to push for it.
- Pushing for things can cause the opposite effect. This could happen if a new institution becomes so powerful that it gets closed down, or if an issue becomes politicised.
- There may be other unintended consequences.
- We should not adopt good decision-making practices now, because making mistakes now means we will learn more and so can do better later.
- Better decisions are slower decisions. AI-focused institutions could slow down development of TAI in countries that could better handle TAI, leading to TAI being developed somewhere worse.
- There are better alternative approaches to influencing government TAI policy that do not involve influencing policy in the short term, such as doing more foundational research or building credibility and relationships within government on broader emerging technology issues.
Epistemic status of conclusion
Medium.
The points in this document look sensible to me. My main concern is that this document appears to be me making the case that the areas of domestic policy I understand best (regulation and institutional decision making) are the most relevant to TAI. I am concerned that this has been driven by motivated reasoning. (When you have a hammer, everything looks like a nail).
Next steps
People looking to influence TAI policy or considering careers in AI policy should also consider shifting into policy areas to do with institutional design or regulation.
Institutions and individuals who are concerned about TAI (or more generally about existential risks) and have experience of policy development and working with governments should be writing, publishing and talking to government on matters of institutional design.
Further research is needed on the areas listed above and on the areas out of scope (such as international AI policy and corporate AI governance).
Everyone should feel free to provide feedback, comments and questions, or get in touch with me at policy@ealondon.com.
I will be writing something on the specific policy suggestions I would like to see.
Further research
I hope to write some stuff on:
- Specific policy suggestions I would like to see that relate to TAI
- Promoting long-termism in policy
- Improving institutional policy-making systems and processes
- The EA policy space (consideration of risks, mapping of the space, measuring the impact of funding policy interventions, etc)
Let me know if there is anything you think you would find particularly useful or not useful.
With thanks to Seb and Julia for comments. All views here are my own and are not representative of anyone else.