- This is a tentative overview of the current main paths to impact in EU AI Policy. There is significant uncertainty regarding the relative impact of the different paths below
- The paths mentioned below were cross-checked with four experts working in the field, but the list is probably not exhaustive
- Additionally, this post may be of interest to those considering work in EU AI Policy. In particular, consider the arguments against impact here
- This article doesn’t compare the potential value of US AI policy careers with those in the EU for people with EU citizenship. A comparison between the two options is beyond the scope of this post
People seeking to have a positive impact on the direction of AI policy in the EU may consider the following paths to impact:
- Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulation
- Working on the current AI Act draft (possibility to have impact immediately)
- Working on technical standards and auditing services of the AI Act (possibility to have impact immediately)
- Making sure the EU AI Act is enforced effectively (possibility to have impact now and 1–2+ years from now)
- Working on a revision of the AI Act (possibility to have impact >5 years from now), or on (potential) new AI-related regulation (e.g. possibility to have impact now through the AI Liability Directive)
- Working on export controls and using the EU’s soft power (possibility to have immediate and longer-term impact)
- Using career capital gained from a career in EU AI Policy to work on different policy topics or in the private sector
While the majority of the impact of some paths above is expected to be realised in the medium to long term, building up career capital now is probably a prerequisite for having an impact through these paths later on. It is advisable to create your own personal “theory of change” for working in the field. The list of paths below is not exhaustive, and individuals should do their own research into different paths and speak to multiple experts.
Paths to impact
Working on (enforcement of) the AI Act, related AI technical standards and adjacent regulation
Since the EU generally lacks cutting-edge developers of AI systems, the largest impact of the EU AI Act is expected to follow from a Brussels Effect of sorts. For the AI Act to have a positive effect on the likelihood that safe, advanced AI systems are developed within organisations outside the EU, the following assumptions need to hold:
- Advanced AI systems are (also) being developed within private AI labs that (plan to) export products to EU citizens.
- The AI-developing activities within private AI labs need to be influenced by the EU AI Act through a Brussels Effect of some sort (de facto or de jure). This depends on whether advanced AI is developed within a product development process in which AI companies take the EU AI Act into account.
- Requirements on these companies have either (1) a slowing effect on advanced AI development, buying more time for technical safety research or better regulation in the countries where AI is developed, or (2) a direct effect, e.g. through risk management requirements that increase the odds of safe AI development
Working on the final draft of the AI Act
Current status of AI Act
The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. On April 21, 2021, the Commission published a proposal to regulate artificial intelligence in the European Union. The proposal will become law once both the Council (representing the 27 EU Member States) and the European Parliament agree on a common version of the text.
The Czech Presidency has presented its full compromise text to the Council, in which it attempts to settle previously contentious issues, namely the definition of an AI system, the classification of AI systems as high-risk, governance and enforcement, and the national security exclusion (see timeline). This text was approved by ministers from the EU Member States on December 6. In the European Parliament, negotiations on a joint position are still ongoing. Recently, the European Parliament’s rapporteurs circulated a new batch of compromise amendments redesigning the enforcement structure of the AI Act. An agreement in the European Parliament is expected by mid-2023, but a final text agreed between the EU institutions is not likely before 2024.
Paths to impact AI Act
Depending on the role, people could still have some impact by improving the current draft of the EU AI Act, which could have an international effect through the Brussels Effect. The following organisations could still have a positive impact on the final draft:
- Think tanks and NGOs advising the Council and Parliament
- Member states can improve the quality of the text in trilogue negotiations through the Council. Sweden could be especially influential given its upcoming presidency of the Council in the first half of 2023.
- Assistants to relevant MEPs (although it is hard to acquire such a role in the relevant timeframe without pre-existing career capital and a degree of luck)
Working on the technical standards of the AI Act
Current status of standard setting of the AI Act
The AI Act’s high-risk obligations will be operationalised by technical standards bodies. These bodies need to be filled with technical experts from national standard-setting bodies (e.g. VDE in Germany), working through CEN/CENELEC/ETSI in JTC21. At the request of the European Commission, this process runs in parallel with that of the AI Act and is currently ongoing. Standards play such a critical role in bringing down compliance costs that they have been described as the ‘real rulemaking’ in an influential paper on the EU’s AI rulebook.
Paths to impact in standard setting
Technical standards can have an international effect through the Brussels Effect. The following organisations and their personnel can influence technical standards:
- In addition to private sector organisations, NGOs and think tanks are invited by member state standard-setting bodies to provide their input
- National standard-setting bodies usually appoint experts to participate in the national committees during the negotiations of technical standards. With sufficient career capital, people can help their national committee
People with more technical (governance) expertise could help ensure that the technical standards actually make AI development safer and more ethical, preventing them from becoming a box-ticking exercise. There seems to be the most room for impact through measures on:
- Risk Management Systems: The AI Act imposes requirements regarding internal company processes, such as requiring there be adequate risk management systems and post-market monitoring.
- Documentation: The AI Act introduces requirements on documentation of companies’ AI systems, to be shared with regulators and users. Similar to “model cards,” an AI system should be accompanied by information about its intended purpose, accuracy, performance across different groups and contexts, and likely failure modes.
- Requirements on accuracy, robustness, and cybersecurity of AI systems
Working on EU AI Act enforcement
Current status of enforcement
Making sure the EU AI Act is enforced is a prerequisite for impact and for the Brussels Effect coming into existence. The Act will be overseen by a European AI Board (or, possibly, an AI office as requested by some policymakers). However, the Board’s role is advisory, and enforcement will be primarily the responsibility of national market surveillance authorities. The European AI Board collects and shares best practices, takes positions on emerging issues regarding implementation and enforcement of the regulation, and will also monitor the latest trends in AI.
The most recent compromise text by EP rapporteurs gives the national supervisory authorities the power to conduct unannounced on-site and remote inspections of high-risk AI, acquire samples related to high-risk systems to reverse-engineer them and acquire evidence to identify non-compliance.
Paths to impact in enforcement
The following paths to impact on enforcement are identified:
- NGOs, think tanks and Members of Parliament could lobby for better budgets and better use of them (possibility to have impact now). The GDPR showed that Member State surveillance authorities often lack the resources, or don’t use them wisely, to prevent backlogs in their processes.
- Working in enforcement (possibility to have impact in a few years). In order to work on enforcement in a few years, it is probably good to first build up experience:
- Individuals could already join existing national and European supervisory authorities, e.g. working on enforcement of the Digital Services Act at the Commission. As the Digital Services Act comes into force earlier than the AI Act, its enforcement team may also end up working on AI Act enforcement. Individuals could then use this experience to switch over to the AI Act once it comes into force, either at the level of the European Artificial Intelligence Board (EAIB), chaired by the Commission, or at the level of the national supervisory authorities.
- Working on audits for cutting-edge AI systems now; these can be used for third-party conformity assessments and enforcement when the regulation comes into force
Working on adjacent AI regulation and a revision of the EU AI Act
Over the coming years, more regulation on AI and adjacent technologies is expected. The European Commission recently released a proposal for an AI Liability Directive to change the legal landscape for companies developing and implementing AI in EU Member States. This would require Member States to implement rules that would significantly lower evidentiary hurdles for victims injured by AI-related products or services to bring civil liability claims. In addition, AI-related R&D regulation is expected at some point, since R&D is exempted from the EU AI Act.
There will also be work at the member state level for the technical bodies that will work on EU AI Act secondary legislation. There is a good chance some provisions of the AI Act will be decided by an ‘implementing act’.
Finally, most EU laws undergo a revision several years after being created. The AI Act will probably be adopted in 2024 and will only come into force around 2026 (to give organisations time to implement the requirements). Citing Article 84 of the current EC proposal: “By [three years after the date of application of this Regulation] ... and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.” There is path dependency, so the structure of the law is very unlikely to change after this, but working on this evaluation and subsequent changes seems a viable path to impact in a few years.
Working on export controls and using the EU’s soft power
Current status export controls and the EU’s soft power
The entire high-end semiconductor supply chain relies on the Netherlands/EU-based ASML, because its EUV lithography machines are currently the only ones in the world that enable the creation of the most advanced chips. ASML continues to innovate, and it will be hard for non-EU-based organisations to catch up. In July, the US government pressed the Dutch government into banning exports of the latest generation of ASML EUV machines. In a recent interview, the Dutch Minister of Foreign Trade said that the Dutch government will only engage in stricter export controls on its own terms.
On the other side, there is international collaboration, mainly between the US and the EU via the TTC (Trade and Technology Council). The goal of the TTC is to harmonise AI regulation between the EU and the US. Working on the EU side would give people the opportunity to help incorporate important governance measures into the US legal system. The EU recently opened an office in San Francisco to reinforce its digital diplomacy. European Commission and US Administration leaders will gather in Washington on December 5 for the latest EU-US TTC summit.
Finally, diplomatic discussions on AI governance are ongoing at the OECD and other multilateral organisations. This is probably the space where measures outside of the consumer product space could be proposed (e.g. on regulating autonomous weapons or on compute governance):
- OECD.AI is a platform to share and shape trustworthy AI
- The UN is starting to think more about future generations, e.g. through its Declaration for Future Generations (part of Our Common Agenda). Looking at the track record of, e.g., the UN Sustainable Development Goals, Our Common Agenda is expected to influence the way jurisdictions deal with topics regarding technology development.
Paths to impact in export controls and the EU’s soft power
As mentioned above, the US government plays an important role in pressing the Dutch government into stricter export controls. The following paths to impact regarding export controls (and other forms of compute governance) can be identified:
- Think tank research into the desirability of different forms of export controls on EUV machines. People could also research other forms of compute governance.
- Working on export controls at the Dutch Ministry of Foreign Trade. It will probably be hard to have an impact without career capital, and luck is required to end up in the right position
Regarding making impact through international organisations and treaties (see also this post):
- TTC: People need some luck to be in the right place in the EU team to have an impact here, since there is no “dedicated TTC organisation”. According to experts, this seems a relatively hard route to impact
- OECD.AI: It sometimes hires permanent staff, and there are also networks of experts that more senior people can join
- The UN is a platform for international coordination outside of EU-US relations and could be seen as a way to keep China in the loop on topics of (international) tech governance. It is possible to positively impact the UN in multiple ways:
- Directly, from certain positions within the UN
- Through member states’ permanent representations or as EU Ambassador
- From within think tanks that play a major role in advising the UN
- There are also specific specialised think tanks on China-EU relations, sometimes touching upon the topic of the semiconductor industry
Using career capital from EU AI Policy to work on different policy topics or in the private sector
Career capital within (tech) policy is highly transferable. Experience in regulating one specific technology will probably yield career capital for working on the regulation of other risky technologies (e.g. biosecurity, nanotechnology, or a completely “new” risky technology that’s currently not on our radar). Even for people who want to switch to policy work outside the tech space (e.g. animal welfare or foreign aid), tech policy career capital could still prove valuable. This makes starting a career in tech policy somewhat robust to scenarios in which regulation of other technologies becomes more pressing.
Some impact-focused individuals have switched from government regulation to private sector organisations to work on self-regulation. It is important to consider whether it is good to work for an organisation that also accelerates AI capabilities.