I have written a new GovAI blog post - link here.
How should labs share large AI models? I argue for a "structured access" approach, where outsiders interact with the model at arm's length. The aim is to both (a) prevent misuse, and (b) enable safety-relevant research on the model. The GPT-3 API is a good early example, but I think we can go even further. This could be a promising direction for AI governance.
I would be interested to hear people's thoughts :)