OpenAI put out a new safety paper: summary (a), paper
The abstract:
In this paper, we argue that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and have a positive social impact.
Ensuring that AI systems are developed responsibly may therefore require preventing and solving collective action problems between companies.
We note that there are several key factors that improve the prospects for cooperation in collective action problems. We use this to identify strategies to improve the prospects for industry cooperation on the responsible development of AI.
My first thought is to wonder why prizes aren't more common: e.g. awards for fostering cross-organizational coordination, either at the object level (direct cross-org efforts that produce research) or the meta level (platforms, conferences, etc.). One guess is that prize grantors don't gain enough from granting them. Grantors might also have a systematic aversion to paying for things that have already happened, without much guarantee that doing so will incentivize further desired behavior.