I work on AI governance and have run the European Governance Research Network (EGRN) since October 2019. The EGRN is a small community of EAs interested in European governance, especially with regard to AI.



Will the EU regulations on AI matter to the rest of the world?

Thanks for your questions, Max! Hope this helps:

Regarding whether the EU regulations will matter, I suppose many if not most important people in AI governance will learn about them, and this will at least somehow affect their thinking, plans, and recommendations for their governments, right?

I don't think this is the case: most actors involved are not incentivized to care about balance, transferability, and effectiveness. To make a gross generalization, civil society groups are concerned about the issues their (often uninformed) donors care about. Industry groups, on the other hand, hope to minimize the short-term costs of regulatory requirements, given pressure from financial markets, either by reducing the law's scope of application or by weakening its requirements for whatever remains in scope. (There are more stakeholders and angles than just civil society and industry, of course, but in practice most discussions nowadays end up being about whether a solution is pro- or anti-innovation.) Policymakers themselves generally do not specialize in AI governance but instead deal with a broad range of topics. Their concerns revolve around political positioning, and they care about balance, transferability, and effectiveness only to the extent that these improve their positioning.

Of course, policymakers and the individuals within civil society and industry groups have their own beliefs, which affect their actions (that is why having EA or longtermist individuals in these roles would matter). They therefore sometimes sacrifice their party's or organization's mission in favor of doing what they think is right, particularly in civil society, where there aren't enough resources to monitor compliance with HQ's talking points and where "doing the right thing" can be argued to be part of the staff's mandate.

In this system, balance is generally achieved by both "sides" pushing as hard as they can in opposite directions: a vibrant civil society and democratically elected policymakers can offset the industry's resource advantage in lobbying. Moreover, transferability is an increasing function of balance. So neither balance nor transferability needs to be anyone's primary concern.

Effectiveness, however, seems to be purely accidental: apart from the occasional individual stretching the interpretation of their mandate to push for effectiveness, there is little incentive or pressure in the system that would lead to effective policies.


Do you think it's possible that ineffective laws by the EU might lead European governments to invest more in their own AI regulation efforts?

The EU AI Act will reduce political demand for national AI regulations among Member States and beyond. As it is a Regulation (as opposed to a Directive), it requires all Member States to apply it in the same way, so additional national AI regulations would literally be layered on top of, rather than complement or substitute for, the EU rules. Countries outside the EU would also see less demand for regulation because of a potential de jure Brussels effect, though this effect would have to offset the hypothetical "regulatory competition" effect, i.e. lawmakers trying to be the first to have invented a legislative framework for topic X. Ineffective EU laws will reduce political demand less than effective ones, but not by enough to offset the primary effect.

So maybe that would actually end up being good?

"Effectiveness" to the longtermist/EA community is different from "effectiveness" to the rest of society. For example, AGI-concerned individuals care more about requirements related to safety and alignment than about measures to foster digital skills in rural areas. So it is possible that whatever we call ineffective will be hailed as a major success by decisionmakers and will cut the demand for further policymaking for the next 20 years. I am very interested in the topic of experimentation and adaptiveness/future-proofing in policy, but since it requires decisionmakers to i) acknowledge ignorance, or admit that current decisions might not be the best ones, and ii) consider time horizons of more than 8 years, it is politically difficult to achieve in representative democracies.

Common conceptual framework to talk about progress on AGI Safety?

11. Actor relevance

In this conceptual framework, various actors will have more or less influence (through their own actions) at different points along the series of actions. There is therefore a subset of actors with disproportionate influence on the development, deployment, and safety of AGI/CAIS/TAI (e.g. the biggest tech investors, heads of government, the OECD.AI executive director, the DeepMind safety team, regulators, government AI procurement officials, standard-setting bodies' representatives, "Jane" in the explanations above, …). Depending on the costs involved, identifying the most relevant actors and altering their actions to ensure a safe outcome could be an effective strategy.

AI policy careers in the EU

In the "where to aim long-term" section, you do not mention Commissioners' cabinets of advisors. I'd be curious to know why, as I thought these roles were quite impactful (though very political). My understanding is that Commissioners' cabinet members steer policy issues, not only shaping the substance of legislative proposals but also during negotiations with the Parliament and the Council.

My guess would be that the DGs' and Secretariat-General's freedom to influence a policy area is inversely proportional to how much the relevant Commissioner's cabinet cares about it: if the adviser in charge makes it his or her pet topic, with clear ambitions, DG staff will be constrained. For example, the GDPR is reportedly the brainchild of mostly one advisor to the President of the Commission. AI has become so politicized in the EU that VP Vestager and Commissioner Breton have already been quite vocal about it, and their cabinet members are therefore following the DGs' work on the matter very closely (of all the files in their pipeline, it is possibly the closest to a career-defining policy issue).