I work on AI governance and have run the European Governance Research Network (EGRN) since October 2019. The EGRN is a small community of EAs interested in European governance, especially with regard to AI. I also work for The Future Society.
Thank you for sharing your experience so clearly, and for mentioning the remaining uncertainties. This post has helped me better understand the role of political advisors, which had been a blind spot in my understanding of the Parliament!
Thank you for taking the time to explain this so clearly; I understand now. I will edit the title and link to this comment to add a disclaimer about the framing.
That's a good point
Thanks - I see what you mean re advocacy towards corporate actors - that would make it one of the actions in industry norms building. However, I originally had congressional grillings in mind as part of the law-making process: they serve both to express discontent with companies' behavior and to inform/signal among policymakers themselves about the need for better policy (or better enforcement of existing policy) and about the prioritization of the issue (high enough on the agenda to make a public fuss about it).
This is great, thanks for sharing this! I had not come across your comment before - not sure if Caroline had - but it's quite reassuring that despite different approaches, different objectives, and different authors, the overlapping descriptive part of the "steps in the chain" matches almost exactly. I will edit the post to link to this.
It is not coming from another language; I just wanted to broaden the concept of "audit trail" (which is much more established) to encompass all compliance-related items. I also didn't want to imply it was as sequential/neat as a trail, so I went for "trace". I am not a native English speaker, so perhaps "trace" is not the right term.
Thanks for this! I added them as examples of actions. Could you explain how "setting of corporate policy" differs from corporate compliance & corporate governance? I also didn't add the last one, because it seems quite rare that policymakers would lobby corporate actors to change their policy, while I believe all the other activities are fairly common.
Thanks Elliot - these are good points. In addition to the popular culture-shaping activities (by media actors and others) that you mention (i), I would add
ii) education activities (training future AI developers and other relevant individuals, either through the education system or extra-curricular programs),
iii) profession-shaping activities (carried out e.g. by a professional association of AI developers), and
iv) ideology-shaping activities (e.g. by religious actors).
We decided against including them because I had not witnessed significant influence from these actors on AI governance and therefore didn't feel confident explaining that influence. (Yes, to a certain extent the Cambridge Analytica scandal or the Frances Haugen leaks have been talked about a lot, but they only shifted the mindset of relevant actors on AI because advocates/lobbyists/think tanks/advisors kept using these scandals as justification for altering a policy stance. For many policymakers, the scandal boiled down to "Facebook = bad company", not "bad AI".) One additional source of complexity is that, given their indirect impact, it is difficult to ensure these activities result in a net-positive impact. For example, Unsafe at Any Speed triggered the American consumer movement, but it also triggered the Powell Memorandum, which arguably resulted in a capture of US policymaking by for-profit interests that persists to this day. However, uncertainty doesn't mean these activities shouldn't be considered for impact, especially since they are quite versatile (it's plausible a skilled journalist could write impactful pieces about AI, pandemics, animal welfare, effective philanthropy, and global development over the course of her or his career).

Likewise, we faced the question of whether to include EA community-building and grant-making as meta-activities influencing AI governance, but that again stretched beyond "traditional" non-EA definitions of AI governance.
I think there is a bit of confusion on several fronts here and in your main comment.

On title/framing: The post aims to be descriptive of factors relevant for decision-making, rather than prescriptive of a decision (hence the absence of "should" and of any judgment of the arguments). As mentioned in the intro of the post, I am hoping to inform or trigger a conversation and further research, rather than to presume directly that people should make a decision about it. I trust readers will derive the decision-relevant aspects themselves (is that what I am wrong about?).

Relevance vs. value & importance:
I agree this magnifies the importance of the EU, but only if we assume that the EU is relevant.
I am not sure I understand. The way I see it, the value of EU governance work and the importance of the EU are a function of the relevance of the EU for AGI governance.
The concept of relevance: I am confused by "relevance" in your comment:
This point only seems like an argument for the EU's relevance if we assume (a) that the EU is relevant
These all seem like arguments for EU AI governance work being in some way valuable; I don't see how any of these are arguments for the EU being relevant to AI's trajectory.
Given that your comment mentions AI's trajectory, is it possible you understood the post as being about AI's trajectory rather than about the way we govern that trajectory? Also, to be sure we are on the same page: in the post, relevance is not a binary concept, and it is directly related to the actions leading up to AGI governance. If you have time, please let me know what I misunderstood.
Agreed on the halo effect, but besides one's prestige, I think one's region-specific knowledge and network matter a lot and do not transfer. As a result, if the EU is less relevant, building up prestige in the EU might not be as efficient as building it up in China or the US, given that in parallel you'd be developing a network and region-specific knowledge that would be more helpful for being impactful overall. (That being said, even though I wanted to avoid anchoring the reader by expressing my opinion in the post, I expect the EU to be most relevant right now for AGI governance, given the institutional precedents it sets. I believe the lack of investment of EA time & money there is an unfortunate mistake. So the "if the EU is less relevant" scenario should be disregarded, in my opinion.)