
Hello everyone,

We, Mark Brakel and Risto Uuk, are the two current members of the Future of Life Institute's (FLI) EU team, and we are hiring a new person for our team: an EU Policy Analyst!

We have also announced several other vacancies and although we may not be able to answer your questions about those, we are happy to direct you to the right colleague – the full FLI team can be found on our website.

Through this thread, we would like to run a Europe-focused Ask Me Anything and will be answering questions starting today, Monday, 31 January.

About FLI

FLI is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement.

In the last few years, our main focus has been on the benefits and risks of AI. FLI created one of the earliest sets of AI governance principles – the Asilomar AI Principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary General’s Digital Cooperation Roadmap. FLI also recently announced a €20M multi-year grant program aimed at reducing existential risk. The first grant program under that initiative, the AI Existential Safety Program, launched at the end of 2021.

We expanded to the EU in 2021 with the hiring of our current two EU staff members. FLI has two key priorities in Europe: i) mitigating the (existential) risk of increasingly powerful artificial intelligence and ii) regulating lethal autonomous weapons. You can read some of our EU work here: a position paper on the EU AI Act, a dedicated website providing easily accessible information on the AI Act, feedback to the European Commission on AI liability, and a paper about manipulation and the AI Act. Our work has also been covered in various media outlets in Europe: Wired (UK), Siècle Digital (France), Politico (EU), ScienceBusiness (EU), NRC Handelsblad (Netherlands), and Frankfurter Allgemeine, Der Spiegel, and Tagesspiegel (Germany).

About Mark Brakel

Mark is FLI’s Director of European Policy, leading our advocacy and policy efforts with the EU institutions in Brussels and in European capitals. He works to limit the risks from artificial intelligence to society, and to expand European support for a treaty on lethal autonomous weapons.

Before joining FLI, Mark worked as a diplomat at the Netherlands’ Embassy in Iraq and on Middle East policy from The Hague. He has studied Arabic in Beirut, Damascus and Tunis, holds a bachelor’s degree in Philosophy, Politics and Economics from the University of Oxford, and a master’s degree from the Johns Hopkins School of Advanced International Studies (SAIS).

About Risto Uuk

Risto is a Policy Researcher at FLI and is focused primarily on researching policy-making on AI to maximize the societal benefits of increasingly powerful AI systems.

Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support at Berkeley Existential Risk Initiative on European AI policy. He completed a master’s degree in Philosophy and Public Policy at the London School of Economics and Political Science. He has a bachelor’s degree from Tallinn University in Estonia.

Ask Us Anything

We, Mark and Risto, are happy to answer any questions you might have about FLI's work in the EU and the role we are currently hiring for. So please fire away!

If you are interested in learning more about FLI broadly, sign up to our newsletter, listen to our podcast, and follow us on Twitter.

Comments
  1. DG Connect proposed the AI Act. Rauh's (2019) study of 2,200 proposals for regulations and directives that the Commission tabled between 1994 and 2016 suggests that for DG Connect proposals there is a 40-55% chance that the adopted law will match what the Commission originally proposed.
    Does your team have any internal estimates of how likely it is that the final law will match the original proposal? Or, perhaps more importantly, how likely it is that the sections of the original proposal you like will remain intact?
  2. Given the importance of the Council and qualified majority voting, do you have a sense of how many countries (weighted by their share of the EU population) support or oppose the amendments to the AI Act that you prefer (you mention France & Finland above)?
  3. Do you only focus on the EU level (Commission, Parliament, Councils) or do you also work directly at the EU member state level (if so which ones and why)?
  4. How did you decide which channels of influence to pursue? You mention above submitting feedback to the European Commission, why do you think your submission will be taken seriously? How do you rate the various channels for engaging with the Commission: stakeholder conferences, DG meetings, online consultations (restricted/open)?
  5. Which of the three major institutions (Commission, Parliament, Councils) have you found most receptive to your preferred policies?






     

Thank you, a lot of great questions. In response to question (3), some of our work focuses on EU member states as well. Because we are a small team, our ability to cover many member states is limited, but hopefully, with the new hire, we can do a lot more on this front. If you know anybody suitable, please let us know. For example, we have engaged with Sweden, Estonia, Belgium, the Netherlands, France, and a few other countries. Right now, the Presidency of the Council of the EU is held by France; next up are Czechia and Sweden, so work at the member state level in these countries is definitely important.


Hello Neil, thank you for these questions - I have sent you a DM with some additional answers. 

Thank you for hosting this - always love the opportunity to put questions to knowledgeable & interesting people!

I am particularly interested in learning more about your theory of change. Would you be able to elaborate on what activities you focus on, with which actors, and why? How will this lead to your stated goals?

Some questions in my head which may help to clarify what I mean:

  1. The future of the EU as a global security actor appears uncertain at best (thinking for example of the EU's recent exclusion from Ukraine negotiations). If member states choose to pursue security/defence related issues primarily outside of the realm of the EU, would that shift your advocacy focus away from Brussels and (further) onto member states (i.e. the ones who may be directly funding/developing AI for security purposes)? What is the value that FLI sees in the EU here as a pressure point for advocacy?
  2. Within the Netherlands (for example) we see a lot of technical work being carried out by coalitions of public agencies, private commercial actors and university working groups. Do you account for these kinds of actor level dynamics in your advocacy (i.e. are you mapping and targeting these non-state actors within EU member states too), or are you more focused on leveraging the regulatory power of the state to achieve an impact?

Hopefully this is not too vague! Many thanks.


Hi @JBPDavies, thank you for your questions. I'm happy to comment on examples from my home country.

Our current focus in Europe is on two priorities: i) strengthening the AI Act, and ii) building support among European countries for a treaty on autonomous weapons systems. For the first priority, we work mainly with the EU institutions. For the second, our focus is on Member State capitals (due to the limited EU influence over security issues, as you rightly point out).

We regularly evaluate our choice of projects and are currently conducting an evaluation of our 'policy platform', which we hope to share on our website later this year. Nevertheless, we currently focus on the AI Act because it is the first piece of AI legislation by a major regulator anywhere, and because it could set regulators around the world on a path that shapes how we deal with increasingly powerful AI systems.

Our focus on autonomous weapons is partly driven by the Asilomar principles, which FLI helped coordinate and in which the principle of avoiding an AI arms race (#18) received the most support from the attending experts working on beneficial AI. This line of effort also helps us understand global coordination problems, because autonomous weapons may be an early example of AI that we would want to regulate.

In reply to your clarifying questions, and as I mentioned earlier, our advocacy on autonomous weapons is mainly targeted at the Member State level (please do note that this will not be the main focus of the job for which we are currently advertising!). We are a small team, but we do build coalitions with other civil society organisations, businesses and academics where we can. A recent example of this is the open letter we coordinated among German AI researchers in which they called for a more progressive stance on this issue in the German coalition agreement (https://autonomewaffen.org/FAZ-Anzeige/). 

Hope this is helpful, but please let me know if you have further questions or if I am unclear.

Great that you are doing this! 

Here are some questions: 

What do you consider to be the biggest bottlenecks in the EU AI policy space? How do you think that might change in the coming 5-10 years? 

Do you consider the argument that the EU is not an AI superpower to be important for whether the EU can play a major role in governance? (As discussed here)

Thank you for the questions. I think the biggest bottleneck right now is that very few people work on the issues we are interested in (listed here). We are trying to contribute to this by hiring a new person, but the problems are vast and there's a lot more room for additional people. Another issue is the lack of policy research that considers the longer-term implications while remaining very practical. We are happy that, in addition to the Future of Life Institute, a few other organizations such as the Centre for the Governance of AI, the Centre for Long-Term Resilience, and some others are contributing more here or starting to do so. I'm not sure about the next 5-10 years, so I'll leave that to someone else who might have some tentative answers.

Regarding your second question, I think it is an important argument, and it's good that some people are thinking through the arguments both for and against working on EU AI governance. That said, there are many ways for the EU to play a major role in AI governance regardless of whether it is an AI superpower. Some of these are mentioned in the post you referred to, like the Brussels Effect and the excellent opportunities for policy work right now. Other ideas are mentioned in the comments under that post, like the importance of experimenting in the EU and its role in the semiconductor supply chain. Personally, I am much better placed to work on EU AI governance than on this type of work in the US, China, or elsewhere in the world. Even if other regions were more important in absolute terms, considering how neglected this space is, I think the EU matters a lot. And many other Europeans would be much better placed to work on this than to, say, try to become Americans.

Hey, thanks for hosting this. A few questions about your timelines for AI progress:

How long do you expect progress as usual before we see superintelligent behavior from AI systems across a wide range of human domains?

Which technical areas will drive the most growth in AI over the next fifty years: compute, algorithmic improvements within the deep learning paradigm, or new paradigms that replace neural networks?

Which economic industries will see the greatest disruption by artificial intelligence over the next fifty years? Natural language processing, image recognition, and unsupervised learning by RL agents have all seen great progress under the deep learning paradigm of the last 20 years. Would you expect AI progress in these domains to outpace developments in other popular technologies such as virtual reality, efficient energy storage, or blockchain?

Where do your opinions differ most from those of academics and policymakers around you?

Thank you, these are some really big questions! Most of them are beyond what we work on, so I'm happy to leave these to other people in this community and have them guide our own work. For example, the Centre for Long-Term Resilience published the Future Proof report in which they refer to a survey where the median prediction of scientists is that general human-level intelligence will be reached around 35 years from now. 

I'll try to answer the last question about where our opinions might differ. Many academics and policymakers in the EU probably still don't think much about the longer-term implications of AI, don't think that AI progress can have as significant an impact (negative or positive) as we do, or don't think that it is reasonable to focus on it right now. That said, I don't think there is necessarily a very big gap between us in practice. For example, many people who are interested in bias, discrimination, fairness, and other issues that are already prevalent can also be concerned about the more general-purpose AI systems that will become more available on the market in the future, as these systems can present even bigger challenges and have more significant consequences in terms of bias, discrimination, fairness, etc. The paper On the Opportunities and Risks of Foundation Models states: "Properties of the foundation model can lead to harm in downstream systems. As a result, these intrinsic biases can be measured directly within the foundation model, though the harm itself is only realized when the foundation model is adapted, and thereafter applied."

Thank you for the quick reply! Totally understand the preference to focus on FLI's work and areas of specialty. I've been a bit concerned about too much deference to a perceived consensus of experts on AI timelines among EAs, and have been trying to form my own inside view of these arguments. If anybody has thoughts on the questions above, I'd love to hear them!

> Many academics and policymakers in the EU probably still don't think much about the longer-term implications of AI, don't think that AI progress can have as significant an impact (negative or positive) as we do, or don't think that it is reasonable to focus on it right now.

Right, this sounds like a very important viewpoint for FLI to bring to the table. Policymaking seems like it's often biased towards short term goals at the expense of bigger long run trends. 

Have you found enthusiasm for collaboration from people focused on bias, discrimination, fairness, and other alignment problems in currently deployed AI systems? That community seems like a natural ally for the longtermist AI safety community, and I'd be very interested to learn about any work on bridging the gap between the two agendas. 

[anonymous]2y8
0
0

Hi aogara, we coordinate with other tech NGOs in Brussels and have also backed this statement by European Digital Rights (EDRi), which addressed many concerns around bias, discrimination, and fairness: https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf

Despite some of the online polarisation, I personally think work on near-term and future AI safety concerns can go hand in hand, and I agree with you that we ought to bridge these two communities. Since we started our Brussels work in May last year, we have tried to engage all actors and, although many are not aware of long-term AI safety risks, I have generally found people to be receptive.

Hey, I am reading the Communication Artificial Intelligence for Europe (other languages). It seems that the EU is very enthusiastic about attracting further investment in AI in order to keep its economic competitiveness in this sector. Although more socially beneficial uses (such as healthcare in advanced economies) are introduced at the beginning, the application specifics are not extensively examined. Ethics are covered toward the end of the document, which suggests building general awareness of algorithms. What would be necessary for this public awareness of algorithms to prevent advertisement based on negative emotions from being effective, and thus from being used by companies?[1]

In conjunction with increases in public awareness of algorithms, what else would support the EU in gaining wellbeing competitiveness? Could this coincide with measures that support global advancement and prevent catastrophic risks, such as developing supportive institutions in industrializing nuclear powers?

  1. ^

    For example, if people understood 'OK, this advertisement shows a bias that induces fear and then assures the viewer of the company's protection, thus motivating purchases of the advertised product, but it is only manipulation; the product's influence on one's wellbeing, net of effects related to impulsive behavior, does not change,' would people seek less value added by marketing and more health and leisure? Would this development be aligned with the EU's objectives?

Thank you for the questions. Regarding emotions-based advertisement, you might find our recent EURACTIV (a top EU policy media network) op-ed about AI manipulation relevant and interesting: The EU needs to protect (more) against AI manipulation. In it, we invite EU policymakers to expand the definition of manipulation and also consider societal harms from manipulation in addition to individual psychological and physical harms. And here's a bit longer version of that same op-ed. 

Thank you! Yes, it would be great if all manipulative techniques were banned, but I would recognize not only targeting people in moments of vulnerability but also the following as techniques that should be banned, or regulated and explained alongside the ad:

  1. using negative, often fear- and/or shame-based, biases and imagery to assume authority,
  2. presenting unapproachable images that are meant to (thus) portray authority,[1]
  3. physical and body shaming,
  4. using sexual appeal in non-sexual contexts, especially when it can be assumed that the viewer is not interested in such appeal,
  5. allusions to intrusion into physical/personal space, especially when the advertisement assumes or encourages the assumption of the viewer's vulnerability,
  6. hierarchies in entitlement to the attention of persons who are not looking to share it, based on the reflection of commercial models,
  7. manipulative use of statistics and graphs,
  8. use of contradictory images and text that motivate decreased enjoyment of close ones,
  9. generally demanding attention when viewers are not interested in giving it,
  10. evoking other negative emotions, such as despair, guilt, fear, shame, and hatred (including self-hatred), decreasing confidence in one's own worthiness of enjoyment and respect, and motivating the feeling of limited enjoyment of one's situation,
  11. shopping processes that can be understood as betrayal or abuse,
  12. normalization or glorification of throwing up, and allusions to it in unrelated contexts,
  13. onomatopoeic expressions that appeal to impulsive behavior,

and other negatively manipulative techniques.

From the AI Act, it may be apparent that the EU seeks to do the minimum in commercial regulation so as not to lose competitiveness in this area (or perhaps because resolving this issue seems somewhat challenging and would require decision-makers to admit that they are subject to potentially suboptimal advertisements), and to focus instead on updating existing systems, such as those related to administration in various government branches.

Thus, the optimal solution may be to develop a public resource on recognizing and ignoring manipulative advertisement while preserving economic activity (such as an ad analysis and blocking app), one that even public officials could find likeable and that gathers data allowing economic growth and wellbeing development to be modeled under scenarios with different advertisement regulations. Then, specific ad analysis and blocking suggestions could be presented to the government so that it can make optimal regulatory decisions.

  1. ^

    The alternative could be to use imagery that evokes caring engagement, with the objective of developing the intellectual and emotional capacity of the viewer.
