
The European Union (EU) could start or join new AI development and governance collaborations with the USA, China, and other countries, and maintain the collaborations it already has. This could matter because the EU wants to be a leader in trustworthy AI development and could influence other countries to develop AI applications with that mindset and those values.[1] Many effective altruist organizations consider the EU's approach broadly correct, so diffusing this approach abroad might be an effective strategy.[2] On the other hand, if the EU's approach turns out to be detrimental to AGI safety, for example following heavy lobbying from anti-safety industry interests, it would be useful to prevent that approach from spreading elsewhere. Either way, if the EU or EU actors are party to collaborations relevant to AGI safety, having EAs positioned to affect their behavior is useful. We list some of these collaborations below.

Current collaborations

Public sector and governments

In her speech at the World Leader for Peace and Security Award ceremony, European Commission President Ursula von der Leyen proposed starting work on a transatlantic AI agreement[3] aimed at aligning AI with human rights, pluralism, inclusion, and the protection of privacy. The speech was followed by the launch of the EU-US Trade and Technology Council (TTC), which "serves as a forum for the EU and the US to coordinate approaches to key global trade, economic and technology issues, and to deepen transatlantic trade and economic relations based on shared democratic values."[4] The US and the EU plan to cooperate on AI standards, as well as other AI policy, through the TTC; the working groups most relevant to AI are WG 1, WG 4, WG 5, and WG 6. The creation, dissemination, and enforcement of international standards can build trust among participating researchers, labs, and states.

The Global Partnership on AI has 25 international partners that aim to "guide the responsible development and use of artificial intelligence, grounded in human rights, inclusion, diversity, innovation and economic growth." The current members are Australia, Belgium, Brazil, Canada, the Czech Republic, Denmark, France, Germany, India, Ireland, Israel, Italy, Japan, the Republic of Korea, Mexico, the Netherlands, New Zealand, Poland, Singapore, Slovenia, Spain, Sweden, the United Kingdom, the United States, and the EU.
 

In 2019, OECD countries, more than half of which are EU members, endorsed the OECD AI Principles, marking the first time that the USA and like-minded democracies committed to common AI principles. Many non-OECD countries subsequently committed to those principles as well. That same year, the G20 (which also includes China, Russia, India, and the EU) adopted the "G20 AI Principles", drawn from the OECD AI Principles.[5]
 

Private sector and other

In 2019, Facebook and the Technical University of Munich established the TUM Institute for Ethics in Artificial Intelligence.[6] In 2018, Facebook announced that it would invest €10m in its French AI center to grow the number of AI scientists and increase its funding of PhD candidates.[7]

In 2017, Amazon and the Max Planck Society started a collaboration on artificial intelligence research. As part of the collaboration, Amazon opened a research center in the German university city of Tuebingen, adjacent to the Max Planck Society campus, where it planned to create over 100 high-skilled machine learning jobs within five years, with renowned Max Planck scientists Bernhard Schölkopf and Michael J. Black supporting the center.[8][9]

CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) has set up an international advisory board with members from the US, including Francesca Rossi (AI Ethics Global Leader and Distinguished Research Staff Member at IBM Research) and Manuela Veloso (Head of AI Research at J.P. Morgan, Professor at Carnegie Mellon University, and former president of AAAI).[10]

Most notably, the EU has set up InTouchAI.eu, a project run by a private-sector consortium tasked with supporting the European Commission in developing responsible leadership in global discussions on AI; creating the conditions for the uptake of policies, good practices, and standards that ensure an appropriate ethical and legal framework for AI; and improving public awareness of the challenges and opportunities associated with AI.

Interest in future collaboration

The 2020 US AI strategy indicated that the US wants to engage actively in many international venues relevant to AI, including the G7, G20, NATO, the EU, and the OECD.[11] Close EU-US cooperation is not easy, however: the EU sees the US as its main competitor in AI, and while the US wants to join forces against China on AI, European interest in doing so is weak.[12]

At the end of 2020, the European Commission and China signed the Comprehensive Agreement on Investment (CAI), an investment deal intended to significantly reduce barriers to investment flows between the EU and China. On AI, the agreement includes provisions on the prohibition of forced technology transfer, standard setting, authorisations, transparency, and sustainable development, which could indicate growing interest in collaboration on trustworthy artificial intelligence.[13] Since then, the deal has faced obstacles: the European Parliament froze its work on it over the treatment of Uyghurs in China and in response to Chinese sanctions on MEPs.[14] Even so, while EU-China relations have deteriorated, they remain better than US-China relations. As such, the EU might serve as a pathway for influence on Chinese AI governance.

Conclusion

Taken together, these initiatives and collaborative efforts suggest that the EU is at the center of key AI governance conversations and institutions, including the OECD, the G7 and G20, NATO, and various industry partnerships. The OECD's AI Policy Observatory indicates that EU countries and institutions have arguably been among the most prolific policymakers in the world, alongside countries like the USA and Australia. As an early mover in shaping conversations on policy and principles (e.g., trustworthy or ethical AI), the EU can foster norm cascades, path dependency, and the diffusion of its ideas through mechanisms associated with the Brussels effect and policy diffusion more broadly. Fostering collaborative initiatives that center decision-making power and participation among EU actors could therefore help alleviate power asymmetries and increase the likelihood of the EU shaping these pathways in a beneficial fashion.
