Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.
GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide some fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.
My view is that most of our policy work to date has been fairly (small c) conservative: it has seldom passed judgment on whether there should be more or less regulation, or praised specific actors. You can sample some of that previous work here:
We haven't yet decided how we'll manage potential conflicts of interest. Thoughts on what principles we should adopt are welcome. Below is a subset of measures that are likely to be put in place:
Thanks! I agree that using a term like "socially beneficial" might be better. On the other hand, it might be helpful to couch self-governance proposals in terms of corporate social responsibility, as it is a term already in wide use.
Some brief thoughts (just my quick takes. My guess is that others might disagree, including at GovAI):
Happy to give my view. Could you say something about what particular views or messages you're curious about? (I don't have time to reread the script atm)
Thanks Michael! Yeah, I hope it ends up being helpful.
I'm really excited to see LTFF being in a position to review and make such a large number of grants. IIRC, you're planning on writing up some reflections on how the scaling up has gone. I'm looking forward to reading them!
Thanks for pointing that out, Michael! Super helpful.
You can find the talk here.
Thanks for the catch :) Should be updated now
Hello, I work at the Centre for the Governance of AI at FHI. I agree that more work in this area is important. At GovAI, for instance, we have many more talented folks interested in working with us than we have the absorptive capacity to take on. If you're interested in setting something up at MILA, I'd be happy to advise if you'd find that helpful. You could reach out to me at email@example.com
That's exciting to hear! Is your plan still to head into EU politics for this reason? (not sure I'm remembering correctly!)
To make it maximally helpful, you'd work with someone at FHI in putting it together. You could consider applying for the GovAI Fellowship once we open up applications. If that's not possible (we do get a lot more good applications than we're able to take on), getting plenty of steer / feedback seems helpful (you can feel free to send it past me). I would recommend spending a significant amount of time making sure the piece is clearly written, such that someone can quickly grasp what you're saying and whether it will be relevant to their interests.