I'm a PhD candidate at Mila (https://mila.quebec/en/mila/).  I've spoken with a lot of people there (and some elsewhere) who want to work on AI governance but don't know where to begin.  At the same time, I believe there is significant interest among governance experts in working on AI, but their lack of technical expertise makes it less likely that their work will be relevant.  Bruce Schneier talks about the need for a "public interest technologist" role / career path with the same prominence and prestige as "public interest law".  This seems on-point.

Since before I entered the field (in 2013), I've been convinced that improvements in governance (including fundamental changes to institutions, e.g. as envisioned by RadicalXChange) would be important to making AI go well (whether or not it turns out to be a significant source of x-risk!).  This puts me in apparent contrast with a large proportion of the AI alignment community, who seem more optimistic about purely technical solutions than I am.

Increasingly, people in AI -- especially those concerned with its social impact and the power of big tech companies -- are deciding that AI governance is necessary.  Companies seem to have failed to "self-regulate" effectively.  They have invested substantially in "FATE-ML" (fairness, accountability, transparency, and ethics in machine learning), which is often criticized (rightly, I believe) as too focused on "solutioneering" and as failing to address the underlying structural issues that are the root cause of the socio-technical problems with how AI systems are deployed.

This investment in FATE-ML can serve as a form of "ethics-washing".  It is telling that companies are investing heavily in promoting these lines of research while not doing the same for AI governance.  I believe there is substantial untapped interest among technical AI researchers in working on AI governance.  What is missing is mentorship from people with governance expertise, institutional support, and funding.  EA orgs could potentially help with all three.

I know there have already been some big moves in this space (e.g. CSET).  But I think there is appetite for a lot more -- and for more interdisciplinary work with heavy involvement from ML experts!  I think most large academic AI/ML research groups could support such a governance research group/center/institute/whatever.  AI governance also seems to be gathering steam.  I think people in EA thinking about AI governance have typically been overly concerned about alienating the AI research community, including the many people working for big tech, and have thus emphasized hypothetical regulations at some unspecified future point.  I believe AI governance is going to move forward with or without involvement from the EA community, and now is a good time to start trying to get ahead of this trend.

Comments

Hello, I work at the Centre for the Governance of AI (GovAI) at FHI.  I agree that more work in this area is important.  At GovAI, for instance, we have a lot more talented folks interested in working with us than we have absorptive capacity.  If you're interested in setting something up at Mila, I'd be happy to advise if you'd find that helpful.  You could reach out to me at markus.anderljung@governance.ai
