Hello everyone,
We, Mark Brakel and Risto Uuk, are the two current members of the Future of Life Institute's (FLI) EU team, and we are hiring a new person for our team: an EU Policy Analyst!
We have also announced several other vacancies and although we may not be able to answer your questions about those, we are happy to direct you to the right colleague – the full FLI team can be found on our website.
Through this thread, we would like to run a Europe-focused Ask Me Anything and will be answering questions starting today, Monday, 31 January.
About FLI
FLI is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement.
In the last few years, our main focus has been on the benefits and risks of AI. FLI created one of the earliest sets of AI governance principles – the Asilomar AI principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary General's Digital Cooperation Roadmap. FLI also recently announced a €20M multi-year grant program aimed at reducing existential risk. The first grant program under that umbrella, the AI Existential Safety Program, launched at the end of 2021.
We expanded to the EU in 2021 with the hiring of our current two EU staff members. FLI has two key priorities in Europe: i) mitigating the (existential) risk of increasingly powerful artificial intelligence and ii) regulating lethal autonomous weapons. You can read some of our EU work here: a position paper on the EU AI Act, a dedicated website providing easily accessible information on the AI Act, feedback to the European Commission on AI liability, and a paper about manipulation and the AI Act. Our work has also been covered in various media outlets in Europe: Wired (UK), SiècleDigital (France), Politico (EU), ScienceBusiness (EU), NRC Handelsblad (Netherlands), Frankfurter Allgemeine, Der Spiegel, and Tagesspiegel (Germany).
About Mark Brakel
Mark is FLI’s Director of European Policy, leading our advocacy and policy efforts with the EU institutions in Brussels and in European capitals. He works to limit the risks from artificial intelligence to society, and to expand European support for a treaty on lethal autonomous weapons.
Before joining FLI, Mark worked as a diplomat at the Netherlands' Embassy in Iraq and on Middle East policy from The Hague. He has studied Arabic in Beirut, Damascus, and Tunis, holds a bachelor's degree in Philosophy, Politics and Economics from the University of Oxford, and holds a master's degree from the Johns Hopkins School of Advanced International Studies (SAIS).
About Risto Uuk
Risto is a Policy Researcher at FLI, focused primarily on researching AI policy-making to maximize the societal benefits of increasingly powerful AI systems.
Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support on European AI policy at the Berkeley Existential Risk Initiative. He completed a master's degree in Philosophy and Public Policy at the London School of Economics and Political Science and holds a bachelor's degree from Tallinn University in Estonia.
Ask Us Anything
We, Mark and Risto, are happy to answer any questions you might have about FLI's work in the EU and the role we are currently hiring for. So please fire away!
If you are interested in learning more about FLI broadly, sign up to our newsletter, listen to our podcast, and follow us on Twitter.
Thank you! Yes, it would be great if all manipulative techniques were banned, but I would include not only targeting people in moments of vulnerability but also:

1) using negative, often fear- and/or shame-based, biases and imagery to assume authority,
2) presenting unapproachable images intended to portray authority,[1]
3) physical and body shaming,
4) using sexual appeal in non-sexual contexts, especially when it can be assumed that the viewer is not interested in such appeal,
5) allusions to intrusion into physical or personal space, especially when the advertisement assumes or encourages the assumption of the viewer's vulnerability,
6) hierarchies of entitlement to the attention of persons who are not looking to share it, based on the portrayal of commercial models,
7) manipulative use of statistics and graphs,
8) use of contradictory images and text that diminish enjoyment of close ones,
9) generally demanding attention when viewers are not interested in giving it,
10) evoking other negative emotions, such as despair, guilt, fear, shame, and hatred (including self-hatred), undermining confidence in one's own worthiness of enjoyment and respect, and fostering a sense of limited enjoyment of one's circumstances,
11) shopping processes that can be experienced as betrayal or abuse,
12) normalization or glorification of throwing up, and allusions to it in unrelated contexts,
13) onomatopoeic expressions that appeal to impulsive behavior,

and other negatively manipulative techniques, as practices that should be banned, or else regulated and explained alongside the ad.
From the AI Act, it may be apparent that the EU seeks to do the minimum in commercial regulation so as not to lose competitiveness in this area (or perhaps because resolving this issue seems challenging and would require decision-makers to admit that they themselves are subject to potentially suboptimal advertisements), and to focus instead on updating existing systems, such as those related to administration in various government branches.
Thus, the optimal solution may be to develop a public resource on recognizing and ignoring manipulative advertisement while preserving economic activity (such as an ad analysis and blocking app) that even public officials may find appealing, and to gather data that allow for modeling economic growth and wellbeing development under scenarios with different advertisement regulations. Specific ad analysis and blocking suggestions could then be presented to the government to inform optimal regulatory decisions.
An alternative could be using imagery that evokes caring engagement, with the objective of developing the intellectual and emotional capacity of the viewer.