
We just published a ~90-page report called The Brussels Effect and AI: How EU regulation will impact the global AI market. This post supplements the report: we speculate on related topics and draw out implications outside the report’s scope. We provide a very brief summary of the report, discuss why a potential Brussels Effect might matter and whether a strong Brussels Effect in AI is desirable, and list some suggestions for potential follow-on work. 

 

What does the report do? 

The report aims to answer the question of whether EU AI regulation will affect AI systems deployed outside the EU, even though the regulation only applies to systems in the EU (i.e. whether the regulation will produce a “Brussels Effect”). These impacts might arise either via companies voluntarily complying outside the EU (a de facto Brussels Effect) or via other jurisdictions adopting EU-esque regulation (a de jure Brussels Effect). We conclude there will be at least a partial Brussels Effect. 

Here’s the abstract: 
“The European Union is likely to introduce among the first, most stringent, and most comprehensive AI regulatory regimes of the world’s major jurisdictions. In this report, we ask whether the EU’s upcoming regulation for AI will diffuse globally, producing a so-called “Brussels Effect”. Building on and extending Anu Bradford’s work, we outline the mechanisms by which such regulatory diffusion may occur. We consider both the possibility that the EU’s AI regulation will incentivise changes in products offered in non-EU countries (a de facto Brussels Effect) and the possibility it will influence regulation adopted by other jurisdictions (a de jure Brussels Effect). Focusing on the proposed EU AI Act, we tentatively conclude that both de facto and de jure Brussels effects are likely for parts of the EU regulatory regime. A de facto effect is particularly likely to arise in large US tech companies with AI systems that the AI Act terms “high-risk”. We argue that the upcoming regulation might be particularly important in offering the first and most influential operationalisation of what it means to develop and deploy trustworthy or human-centred AI. If the EU regime is likely to see significant diffusion, ensuring it is well-designed becomes a matter of global importance.”

The report also includes a summary of current and forthcoming AI-relevant regulation in the EU (e.g. the AI Act) as well as a table summarizing our conclusions. 

 

Why might this work matter?

A stronger Brussels Effect increases the value of affecting EU AI regulation

One of our primary motivations for this work was to better understand the value of AI policy work in the EU aimed at addressing risks from advanced AI. A particularly important argument for such work being valuable runs as follows (see e.g. Stefan Torges’ How Europe might matter for AI governance and Nicolas’ Should you work in the European Union to do AGI governance?): 

  1. The EU regulatory regime will impact advanced AI outcomes, at least in the jurisdictions it affects.
  2. It is possible to influence EU AI regulation to produce better impacts from advanced AI systems, at least in the jurisdictions it affects.
  3. EU regulation of AI will diffuse globally via the Brussels Effect.
  4. If 1, 2, and 3 hold, then people interested in shaping AI’s impact on the world should try to positively affect the regulation.

Therefore, people interested in shaping AI’s impact on the world should try to positively affect the regulation. 

We thought premises 1, 2, and 4 seemed likely to be true, but were unsure about 3, which motivated the report. Briefly on the other premises: Premise 1 seemed likely because the EU appears set to introduce relatively stringent regulation. Though it’s difficult to tell exactly how, this regulation seems likely to have some impact on how advanced AI systems are developed and deployed. For example, the legislation may push industry to focus more on e.g. transparency than it would have otherwise, in turn shaping global R&D. Premise 2 also seemed likely to hold: though it may be difficult to identify good regulation, it seemed likely that EU regulation could be improved upon with sufficient effort. Beyond premise 3, the trickiest question is whether premise 4 holds. It could be that other opportunities for impact are significantly greater, e.g. if it is only possible to marginally improve the quality of EU regulation. 

A stronger Brussels Effect may suggest other jurisdictions can have significant influence on AI regulation beyond their borders
If there is a strong AI Brussels Effect, there could be a decent argument for folks to focus on AI policy in California or other US states likely to adopt more stringent AI regulation than the rest of the country. We think the most likely path to federal US AI regulation involves states passing such regulation first. The story would roughly go: 

  1. States such as California adopt some piece of EU-inspired regulation, potentially following a de facto Brussels Effect.
  2. As a result, companies across the US experience a stronger de facto effect as the size of the market covered by EU-inspired regulation has increased.
  3. The chance that federal EU-inspired AI regulation is passed increases as a result.

Step 3 holds partly because US big tech companies and companies operating across many US states would be incentivized to lobby in favor of federal US regulation, to ensure they are not put at a disadvantage compared to companies operating only in states without the more stringent regulation. In the report, we call this the “De Facto Channel” to a de jure effect. It could also hold because state representatives would be particularly keen to see their state-level regulation reflected in federal regulation, e.g. because companies in their states have already borne the transition costs of becoming compliant. 

If the above is correct, that would be an argument in favor of influencing state-level regulatory efforts in useful directions and potentially trying to increase the chance that they lead to federal regulation. This mechanism could be undermined if states adopt regulation that is inconsistent with one another’s and/or with EU regulation. 

Studying the Brussels Effect can also inform thinking on the extent to which Chinese AI regulation will diffuse (here’s a good summary of recent developments). It could also suggest that certain standards are more likely to have a global impact. For example, if big tech companies prefer having similar risk management procedures globally, we might expect the US NIST AI Risk Management Framework to have global influence. 

A stronger Brussels Effect suggests the global AI industry will be more heavily regulated
The stronger the Brussels Effect in AI, the more stringent one should expect global regulation to be. It could be one of the main mechanisms pushing against regulatory races to the bottom. Also, all else being equal, the stricter one expects regulation to be, the stronger the case in favor of figuring out what good AI regulation looks like and trying to influence policymakers in that direction. 

The strength of the Brussels Effect is an important consideration for policymakers
We were also interested in the question because it seemed decision-relevant for policymakers. Many EU policymakers are excited about the AI Act because they think it will lead to a Brussels Effect. Other jurisdictions are trying to get a sense of the extent to which their regulatory regimes for AI should follow the EU’s example. If other jurisdictions expect a large de facto effect, they might want to avoid introducing incompatible regulation, as that could put their AI industry at a disadvantage. In particular, they would face incentives not to impose inconsistent requirements on AI products likely to be used both in their jurisdiction and in the EU. Divergence from EU regulation could, for example, take the form of imposing less strict requirements or of defining risk categories differently. 
 

Is a stronger Brussels Effect with regards to AI desirable?

There are three ways you could try to influence EU regulation. First, you could try to make it higher quality. Second, you could try to make it more likely to diffuse. Third, you could try to make it less likely to diffuse. Whether the second or the third pathway is promising depends on a number of factors:

  • Route to more similar and more stringent AI regulation globally. People often worry that competitive pressures will lead to weaker regulation and push companies to comply with the least stringent regulation possible. The de facto Brussels Effect produces the opposite effect, making it advantageous for companies to comply with more stringent regulation than they strictly need to. It can also significantly reduce the extent to which jurisdictions risk hampering their AI industry by introducing AI regulation. Further, it might be one of the main mechanisms pushing toward AI regulation being similar across the world. This could reduce global competitive pressures to skimp on safety-enhancing measures and could help create a common understanding of risks and mitigation strategies across labs and countries at the frontier of AI development. These positive effects could be somewhat counteracted if convergence makes it more difficult to adjust regulatory regimes should they prove inadequate, hindering regulatory experimentation and learning.
  • Effects on AI progress. AI regulation can plausibly affect both the speed and the direction of the AI industry. Our prior should be that regulation slows down the speed of AI development, though some mechanisms discussed in the report could push the other way (e.g. increasing regulatory certainty). Further, we should expect regulation to incentivize the development of AI systems that can more cheaply and effectively comply with EU requirements, which seems useful if the requirements are well designed.
  • Brussels Effect for whom? Another important question is how the Brussels Effect might differentially spread to different jurisdictions or regions. If the de facto effect is more likely to reach US than Chinese tech companies, should that be a worry?
  • The relative quality of the EU regulation. What is the quality of the EU regulatory regime for AI? By spreading, would it stifle other possible regulatory regimes for AI? If there were significantly better possible regulatory regimes, a de facto effect might be worrying.
  • Aggregation. How can the above factors be weighed against each other?

The above factors determine the value of a stronger Brussels Effect. One also needs to evaluate how tractable it is to change the diffusion of EU AI regulation and how the expected value of that work compares to other work people could engage in. 

Overall, we think a stronger Brussels Effect is desirable, but we’re not fully convinced (perhaps ~75% confidence). We are more confident that increasing the quality of the EU regulation is a good idea and would expect such work to be of higher value.

 

Suggestions for additional work

The report suggests there is value in trying to improve EU AI regulation (though it is a negative update if you previously expected EU regulation to be fully complied with globally). This could be done by engaging with the policymaking process of the AI Act (which is likely to continue for at least another year) or with the standard-setting processes started at CEN and CENELEC, which are likely to have a big impact on how the AI Act is interpreted and implemented in practice. A particularly important question currently under deliberation is whether general-purpose AI systems should be covered by the AI Act. See the AI Act website and newsletter from Risto Uuk (Future of Life Institute) or Charlotte Stix’s EuropeanAI newsletter for news, updates, and commentary. You could also review this guide to EU AI policy careers from 2020.

Further, we hope others will explore related research questions. Below are a few ideas we’re interested in. Feel free to reach out to us if you’d be interested in pursuing any of them.

  • Regulatory diffusion in other domains, from the EU and other jurisdictions. Will other parts of the EU regulatory regime for AI see diffusion, e.g. the Digital Services Act and the Digital Markets Act? Would regulation of the compute supply chain, say of cloud compute providers, see regulatory diffusion? What about regulation of biotech companies, animal farming, space law, or other industries of interest to effective altruists?
  • Will we see a California Effect (an overview and description of the research questions here) or a Beijing Effect in AI? For what types of regulation would that be the case? To what extent should we expect state-level AI regulation in the US to increase the chance of and shape eventual federal US AI regulation?
  • If we expect a de facto Brussels Effect to be likely, how does that update us on the importance of corporate self-governance efforts in AI? It could suggest that self-governance efforts, too, could see global diffusion and become more important, especially if they can in turn affect future regulation.
  • Empirical work tracking the extent to which there is likely to be a Brussels Effect. Most of the research on regulatory diffusion focuses on cases where diffusion has already happened. It seems interesting to instead look for leading indicators of regulatory diffusion. For example, you could analyze relevant parliamentary records or conduct interviews to gain insight into the potential global influence of the EU AI Act, of the EU itself, and of the legal terms and framings of AI regulation first introduced in the discussions leading up to the Act.
  • Is a Brussels Effect desirable? Did we capture the relevant considerations above? How do they all stack up against each other?
  • Work on what good AI regulation looks like from a TAI/AGI perspective seems particularly valuable. Questions include: What systems should be regulated? Should general-purpose systems be a target of regulation? Should regulatory burdens scale with the amount of compute used to train a system? What requirements should be imposed on high-risk systems? Are there AI systems that should be given fiduciary duties (see Aguirre et al 2020)?
Comments (7)

Great work! I think this is a really important report -- especially with so many regulatory entities only recently starting to put AI regulations into writing (I'm not at my computer right now, but a few that come to mind are the US's NIST and the British Department for Digital, Culture, Media, and Sport), it's really important that we get these regulations right.

Also, I'm currently working on a paper/forum post looking into which legislative pathways could produce a California Effect for AI, with a first draft (hopefully) finished in a week or so. Without giving too much away from that, it feels to me as though California can have a disproportionately large effect on AI, not only because of a state-to-state or state-to-federal CA effect (which would still be huge), but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc.) is happening in California. 

Thanks!

That sounds like really interesting work. Would love to learn more about it. 

"but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc) is happening in California." Do you have a take on the mechanism by which this leads to CA regulation being more important? I ask because I expect most regulation in the next few years to focus on what AI systems can be used in what jurisdictions, rather than what kinds of systems can be produced. Is the idea that you could start putting in place regulation that applies to systems being produced in CA? Or that CA regulation is particularly likely to affect the norms of frontier AI companies because they're more likely to be aware of the regulation? 

Just as a caveat, this is me speculating and isn't really what I've been looking into (my past few months have been more "would it produce regulatory diffusion if CA did this?"). With that said, the location in which the product is being produced doesn't really affect whether regulating that product produces regulatory diffusion -- Anu Bradford's criteria are market size, regulatory capacity, stringent standards, inelastic targets, and non-divisibility of production. I haven't seriously looked into it, but I think that, even if all US AI research magically switched to, say, New York, none of those five factors would change for CA (though I do think any CA regulation merely targeting "systems being produced in CA" would be ineffective for a similar reason -- with remote work becoming more and more acceptable, and with all these companies (maybe aside from OpenAI) having myriad offices outside CA, AI production would be too elastic). In this hypothetical, though, CA still has a huge consumer market (both individuals and corporations -- >10% of 2021's Fortune 500 list is based in CA), it still has more regulatory capacity and stricter regulations than any other US state, and I think that certain components of AI production (e.g. massive datasets, the models themselves) are inelastic and non-divisible enough that CA regulation could still produce regulatory diffusion. 

When it comes to why the presence of AI innovation in California makes potential California AI regulation more important, I imagine it being similar to your second suggestion, that "CA regulation is particularly likely to affect the norms of frontier AI companies," though I don't necessarily think awareness is the right vehicle for that change. After all, my intuition is that any company within an order of magnitude or two of Google or Meta would have somebody on staff whose job it is to stay abreast of regulation that affects them. I'm far from certain about it, but if I had to put it in words, I'd say that CA regulation could affect the norms of the field more broadly because of California's unique position at the center of technology and innovation. 

To use American stereotypes as analogies, CA enacting AI regulations would feel to me like West Virginia suddenly enacting landmark coal regulation, or Iowa suddenly doing the same with corn. It seems much bigger than New Jersey regulating coal or Maine regulating corn, and it seems to me that WV wouldn't regulate coal unless it was especially important to do so. (This is a flawed analogy, though, since coal/corn is bigger for WV/IA than AI is for CA.) 
Either way, if California, the state which most likely stands to reap the greatest share of AI profits, home to Berkeley and Stanford and the most AI innovation in the US (maybe in the world? don't quote me on that) were to regulate AI, it would send an unmistakable signal about just how important they think that regulation is. 

Do you think that makes sense?

I suspect that it wouldn't be that hard to train models at datacenters outside of CA (my guess is this is already done to a decent extent today: 1/12 of Google's US datacenters are in CA according to wiki). Models are therefore a pretty elastic regulatory target. 

Data as a regulatory target is interesting, in particular if it transfers ownership or power over the data to data subjects in the relevant jurisdiction. That might e.g. make it possible for CA citizens to lodge complaints about potentially risky models being trained on data they've produced. I think the whole domain of data as a potential lever for AI governance is worthy of more attention. Would be keen to see someone delve into it. 

I like the thought that CA regulating AI might be seen as a particularly credible signal that AI regulation makes sense and that it might therefore be more likely to produce a de jure effect. I don't know how seriously to take this mechanism though. E.g., to what extent is it overshadowed by CA being heavily Democratic? The most promising way to figure this out in more detail seems to me to be talking to other state legislators and looking at the extent to which previous CA AI-relevant regulation or policy narratives have seen any diffusion. Data privacy and facial recognition stand out as most promising to look into, but maybe there's also stuff wrt autonomous vehicles. 

Yeah, I'm really bullish on data privacy being an effective hook for realistic AI regulation, especially in CA. I think that, if done right, it could be the best option for producing a CA effect for AI. That'll be a section of my report :)

Funnily enough, I'm talking to state legislators from NY and IL next week (each for a different reason, both for reasons completely unrelated to my project). I'll bring this up.

Great! Looking forward to seeing it!

Congratulations! So glad this is out.