
Epistemic Status: Mainly a summary of what think tanks and news outlets have put out on the subject, plus my own thoughts.

 

Summary

This is an exploration of the likelihood that the EU AI Act (AIA) produces a Brussels Effect, given its status as early and expansive AI policy. The Brussels Effect is usually conceived first in terms of de jure effects, in which other countries' legal systems adopt or imitate aspects of EU policy; this seems possible through certain multilateral channels but perhaps unlikely to be taken up independently by other nations. There are also de facto effects, in which EU policy shapes behavior in other countries even though the policies are not officially in place there, notably through market pressure. It's worth noting that even widespread proliferation of the AIA's rules might not significantly affect AI timelines: the regulations are based on intended use, and because general capabilities research rarely reaches the average consumer directly, the AIA does little to limit it, classifying it as limited or minimal risk.

 

Brief Overview of How the AIA Works: 

The AIA classifies AI systems into levels of risk based mainly on how they are deployed rather than how they are built, though there are requirements regarding training data. High-risk systems include products that could cause physical harm (heavy machinery or biotech in many forms), software used to determine people's access to employment, education, or financial services, and highly manipulative algorithms that could lead people to make decisions that 'materially impact their lives.' Lower-risk applications include chatbots, deepfakes, and other algorithms used in less manipulative contexts. Pre-market, companies are required to examine their own products and services to make sure they meet standards focused on transparency, security, and human oversight; if these standards aren't upheld, heavy fines are levied on the company. Post-market, companies are required to continue to monitor their products.
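To make the tiering logic concrete, here is a minimal sketch of the use-based classification as summarized above. The category strings and obligations are simplified paraphrases of this summary, not the Act's legal text, and the example function is purely illustrative:

```python
# Illustrative sketch of the AIA's use-based risk tiers, as summarized above.
# Category names are simplified paraphrases, not the Act's legal definitions.

def aia_risk_tier(intended_use: str) -> str:
    unacceptable = {"social scoring"}  # on the Act's prohibited-practices list
    high_risk = {
        "safety component of a physical product",  # e.g. heavy machinery, biotech
        "access to employment",
        "access to education",
        "access to financial services",
        "materially manipulative algorithm",
    }
    limited_risk = {"chatbot", "deepfake", "social media recommendation"}

    if intended_use in unacceptable:
        return "unacceptable: prohibited"
    if intended_use in high_risk:
        return "high: self-assessment for transparency, security, human oversight"
    if intended_use in limited_risk:
        return "limited: transparency obligations toward the consumer"
    return "minimal: no new obligations"  # covers most general capabilities research


print(aia_risk_tier("access to employment"))
# -> high: self-assessment for transparency, security, human oversight
```

The key design point the sketch highlights is that classification keys on intended use, not on the model architecture or training method.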


 

De Jure Effects

The most notable reason that any of the AIA's regulations would be adopted by other nations is the EU's strong ties to a variety of international standard-setting bodies, such as the EU-US Trade and Technology Council, the Organisation for Economic Co-operation and Development's AI Policy Observatory, the Global Partnership on Artificial Intelligence, and the International Organization for Standardization. The EU has sway within these bodies due to its geopolitical power and large market. Earlier EU regulations have served as blueprints for international bodies and other nations, so it seems likely that those bodies would adopt EU regulatory policy for AI products sold on international markets. Multilateral cooperation could also happen through more informal diplomatic channels, rather than only through formalized cooperative bodies. The fact that the EU's regulations are the most stringent of any nation's so far would set a relatively firm precedent for just how far AI regulations can go. Generally, well-written regulatory frameworks have been shown to decrease regulatory cost and increase companies' compliance, which would further incentivize other nations to use the AIA as a model for AI regulation.[1]

In particular, the UK seems poised to adopt some form of regulation, given its high levels of both industry and academic ML research. The UK has launched several parliamentary investigations into AI regulation, and AI featured strongly in recent budgets. There is, as always, hesitation to overregulate and limit the private sector's potential for growth, but initial steps are being taken. In addition, the UK retained the EU's GDPR in domestic law after Brexit in 2020, indicating an openness to regulating tech.

The United States seems to be a much more difficult case. On one hand, Congress has long been hesitant to rein in any major tech company, and American politics favor the free market. Recent government literature on AI has largely used language suggesting more interest in investing in AI that is safe and productive than in regulating it, and the CHIPS Act funneled billions into AI research and the hardware underpinning it.[2] Competition with China disincentivizes any limitation on companies' ability to develop powerful AI. Additionally, the base rate for a Brussels Effect in US policy is relatively low.[3] On the other hand, the Biden administration has begun to express interest in applying antitrust law to large tech companies, particularly around data usage, which is crucial both to developing general-capability AI and to applying many algorithms. The AIA labels algorithms used in social media as limited risk, requiring internal regulation by companies and transparency for the consumer; this type of regulation could be a good framework for responding to misinformation on social media platforms, particularly election manipulation. Several bipartisan congressional committees looking into AI regulation have also formed, and California adopted GDPR-like privacy legislation (the CCPA), indicating burgeoning interest in regulation. Moreover, policy debate on the subject has often touched on the relationship between automation and unemployment, particularly among less-educated workers. Republicans, while normally unwilling to limit the free market, have sponsored legislation discouraging the offshoring of low-skill jobs, and with the party's recent turn to populism and protectionism, they might be more open to regulating AI.

China is also a complicated case. There is strong precedent for China adopting regulatory frameworks from the EU, but even before the AIA, China was writing its own legislation on the subject. Moreover, the AIA's strong emphasis on consumer protection grounded in a liberal conception of human rights, as well as the EU's stricter rules on data protection, cuts against much of what China has already put out in terms of AI regulation. China's motivations are also different: given censorship, many applications of deep learning algorithms to the internet are already prohibited, and unlike the US and the UK, the government is incentivized to restrict the power of large tech companies.

De Facto Effects 

Frequently, what drives the Brussels Effect isn't the EU's geopolitical sway but the size of its market. For multinational corporations operating both inside and outside the EU, what matters is the cost of differentiation: is it worth developing two separate products, one for the regulated EU market and one for everywhere else? Generally, the answer seems to be no. The US and the EU together account for about 50% of market spending on AI, so for a company whose AI falls under the AIA's regulated categories, it would rarely be profitable to build a second product complying with the stricter rules while leaving the original as is, especially given the cost of compute for training. Elasticity also seems relevant: the average consumer is unlikely to change their internet habits in response to stricter regulations.
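As a toy illustration of that trade-off, here is a back-of-the-envelope comparison. All numbers are hypothetical, invented purely to make the comparison concrete; nothing here comes from the AIA or from market data:

```python
# Hypothetical numbers illustrating the cost-of-differentiation decision.
eu_profit = 50.0            # annual profit from the EU market (arbitrary units)
non_eu_profit = 50.0        # annual profit from everywhere else
compliance_drag = 0.05      # profit lost outside the EU by shipping the stricter version
second_product_cost = 20.0  # yearly cost of maintaining a separate non-EU product

# Option A: one AIA-compliant product sold everywhere.
single_product = eu_profit + non_eu_profit * (1 - compliance_drag)  # 97.5

# Option B: a compliant EU product plus a separate looser product elsewhere.
two_products = eu_profit + non_eu_profit - second_product_cost      # 80.0

print("differentiate?", two_products > single_product)
# -> False: the uplift (2.5) is far smaller than the cost of a second product (20.0)
```

Differentiation only pays if the profit recovered outside the EU exceeds the ongoing cost of a second product line, and for expensive-to-train AI systems that bar is high.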

It’s also worth noting that many of the uses of AI deemed high-risk, and thus heavily restricted by the AIA, are in fields already thoroughly vetted by EU regulatory policy, most saliently medical equipment and biotech, or automated heavy machinery such as industrial equipment and airplanes. This means the process used to regulate these technologies won’t change; there will simply be more requirements to comply with.[4] In most cases, the EU regulatory bodies that already directly oversee certain products will also start checking AI-related criteria. For example, the EU body that oversees aviation would also start making sure that AI systems on airplanes labeled risky to humans are up to code. The robustness of oversight of existing products thus increases, but the main difference may be for new types of machinery coming to market: self-driving cars, for instance, will be heavily impacted. The onus to guarantee that they meet stringent safety standards will be on the manufacturer; if those standards aren’t met, products can’t go to market and companies face heavy fines.

 

The most heavily regulated areas, those deemed high-risk or unacceptable-risk, deal with things like the justice system, loan evaluation, actuarial systems, education, and other personal rights. However, these systems all involve policy unique to the EU: an understanding of the minutiae of EU law surrounding, say, employment or criminal justice is built into the AI, so it couldn’t simply be deployed in other countries. Some of the AI systems most heavily regulated by the AIA are just too local to drive a strong Brussels Effect, and the corresponding local systems in other nations have little reason to take up the same regulation.

Large tech companies, including IBM and Google, still have a chance to influence the structure of the AIA; both have lobbied for legislation that moves liability from the manufacturer of a given tool to its deployer, and for regulating narrow applications more heavily than capabilities research.[5]

For companies with large markets in the EU, such as US biotech firms or Chinese manufacturers of automated heavy machinery, losing the EU market would be a major blow, so even though they are not as international as something like Facebook, the cost of differentiation is still relatively high. Between the cost of differentiation, the size of the EU market, inelastic demand, and the international footprint of many companies that employ AI, the market seems to lean naturally towards a de facto Brussels Effect.
 

Many thanks to the London EA Hub for assistance. 

1. ^
2. ^
3. ^
4. ^
5. ^ Commentary from IBM and Google
