Mark Brakel

Mark is FLI’s Director of European Policy, leading our advocacy and policy efforts with the EU institutions in Brussels and in European capitals. He works to limit the risks from artificial intelligence to society, and to expand European support for a treaty on lethal autonomous weapons.

Before joining FLI, Mark worked as a diplomat at the Netherlands' Embassy in Iraq and on Middle East policy from The Hague. He studied Arabic in Beirut, Damascus and Tunis, holds a bachelor's degree in Philosophy, Politics and Economics from the University of Oxford, and a master's degree from the Johns Hopkins School of Advanced International Studies (SAIS).

Comments

AMA: Future of Life Institute's EU Team

Hi aogara, we coordinate with other tech NGOs in Brussels and have also backed this statement by European Digital Rights (EDRi), which addresses many concerns around bias, discrimination and fairness: https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf

Despite some of the online polarisation, I personally think work on near-term and future AI safety concerns can go hand in hand, and I agree with you that we ought to bridge these two communities. Since we started our Brussels work in May last year, we have tried to engage all actors and, although many are not aware of long-term AI safety risks, I have generally found people to be receptive.

AMA: Future of Life Institute's EU Team

Hello Neil, thank you for these questions - I have sent you a DM with some additional answers. 

AMA: Future of Life Institute's EU Team

Hi @JBPDavies, thank you for your questions, and I'm happy to comment on examples from my home country.

Our current focus in Europe is on two priorities: i) strengthening the AI Act, and ii) building support among European countries for a treaty on autonomous weapons systems. For the first priority, we work mainly with the EU institutions. For the second, our focus is on Member State capitals (due to the limited EU influence over security issues, as you rightly point out).

We regularly evaluate our choice of projects, and are currently conducting an evaluation of our 'policy platform', which we hope to share on our website later this year. For now, we focus on the AI Act because it is the first piece of AI legislation by a major regulator anywhere, and because it could set regulators around the world on a path that shapes how we deal with increasingly powerful AI systems.

Our focus on autonomous weapons is partly driven by the Asilomar principles, which FLI helped coordinate, and among which the principle of avoiding an AI arms race (#18) received the most support from the attending experts working on beneficial AI. This line of effort also helps us understand global coordination problems, because autonomous weapons may be an early example of AI that we would want to regulate.

In reply to your clarifying questions, and as I mentioned earlier, our advocacy on autonomous weapons is mainly targeted at the Member State level (please do note that this will not be the main focus of the job for which we are currently advertising!). We are a small team, but we do build coalitions with other civil society organisations, businesses and academics where we can. A recent example of this is the open letter we coordinated among German AI researchers in which they called for a more progressive stance on this issue in the German coalition agreement (https://autonomewaffen.org/FAZ-Anzeige/). 

Hope this is helpful, but please let me know if you have further questions or if anything is unclear.