AI governance is an enormous challenge. The paper we'll be discussing today addresses one part of it: suppose standards are in place governing the training of large-scale ML projects. How can we verify that those standards are actually being followed? And can we do so while preserving privacy and security?
After a round of introductions and catching up on any AI news, we'll do a presentation on this paper (https://arxiv.org/abs/2303.11341) with plenty of opportunities for questions and discussion.
Some key questions to think about:
- the framework is ambitious and requires the cooperation of many actors. What would it take to build political appetite for this kind of approach? Do we need to wait for some kind of disaster?
- is it technically feasible?
- the framework covers only the training of ML systems, not their deployment. How do you distinguish "good" from "bad" AI at the training stage?
- are very large-scale training runs still going to be relevant in a few years' time?
- is there a different approach we can take instead?
