Hello everyone,
We, Mark Brakel and Risto Uuk, are the two current members of the Future of Life Institute's (FLI) EU team, and we are hiring a new member for our team: an EU Policy Analyst!
We have also announced several other vacancies and although we may not be able to answer your questions about those, we are happy to direct you to the right colleague – the full FLI team can be found on our website.
Through this thread, we would like to run a Europe-focused Ask Me Anything and will be answering questions starting today, Monday, 31 January.
About FLI
FLI is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement.
In the last few years, our main focus has been on the benefits and risks of AI. FLI created one of the earliest sets of AI governance principles – the Asilomar AI principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary General's Digital Cooperation Roadmap. FLI also recently announced a €20M multi-year grant program aimed at reducing existential risk. The first grant program under that umbrella, the AI Existential Safety Program, launched at the end of 2021.
We expanded to the EU in 2021 with the hiring of our current two EU staff members. FLI has two key priorities in Europe: i) mitigating the (existential) risk of increasingly powerful artificial intelligence and ii) regulating lethal autonomous weapons. You can read some of our EU work here: a position paper on the EU AI Act, a dedicated website providing easily accessible information on the AI Act, feedback to the European Commission on AI liability, and a paper about manipulation and the AI Act. Our work has also been covered in various media outlets in Europe: Wired (UK), SiècleDigital (France), Politico (EU), ScienceBusiness (EU), NRC Handelsblad (Netherlands), Frankfurter Allgemeine, Der Spiegel, Tagesspiegel (Germany).
About Mark Brakel
Mark is FLI’s Director of European Policy, leading our advocacy and policy efforts with the EU institutions in Brussels and in European capitals. He works to limit the risks from artificial intelligence to society, and to expand European support for a treaty on lethal autonomous weapons.
Before joining FLI, Mark worked as a diplomat at the Netherlands' Embassy in Iraq and on Middle East policy from The Hague. He studied Arabic in Beirut, Damascus and Tunis, holds a bachelor's degree in Philosophy, Politics and Economics from the University of Oxford, and a master's degree from the Johns Hopkins School of Advanced International Studies (SAIS).
About Risto Uuk
Risto is a Policy Researcher at FLI, focused primarily on researching AI policy-making to maximize the societal benefits of increasingly powerful AI systems.
Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support at the Berkeley Existential Risk Initiative on European AI policy. He completed a master's degree in Philosophy and Public Policy at the London School of Economics and Political Science and holds a bachelor's degree from Tallinn University in Estonia.
Ask Us Anything
We, Mark and Risto, are happy to answer any questions you might have about FLI's work in the EU and the role we are currently hiring for. So please fire away!
If you are interested in learning more about FLI broadly, sign up for our newsletter, listen to our podcast, and follow us on Twitter.
Regarding your second question, I think it is an important argument, and it's good that some people are thinking through the arguments both for and against working on EU AI governance. That said, there are many ways for EU AI governance to play a major role regardless of whether the EU is an AI superpower. Some of these are mentioned in the post you referred to, such as the Brussels Effect and the excellent opportunities for policy work right now. Others are mentioned in the comments under the post about the EU not being an AI superpower, such as the value of policy experimentation in the EU and the EU's role in the semiconductor supply chain. Personally, I am far better placed to work on EU AI governance than on comparable work in the US, China, or elsewhere in the world. Even if other regions were more important in absolute terms, given how neglected this space is, I think the EU matters a lot. And many other Europeans would be much better placed to work on this than to, say, try to become Americans.