This policy brief provides policymakers with concrete recommendations for how governments can manage AI risks.
Policy recommendations:
1. Mandate robust third-party auditing and certification.
2. Regulate access to computational power.
3. Establish capable AI agencies at the national level.
4. Establish liability for AI-caused harms.
5. Introduce measures to prevent and track AI model leaks.
6. Expand technical AI safety research funding.
7. Develop standards for identifying and managing AI-generated content and recommendations.
Shouldn't #6 be at the top? It seems to be one of the most critical factors for drawing larger numbers of hardcore-quantitative people (e.g. polymaths, string theorists) into AI safety research, and it will probably be one of those people, not anyone else, who makes a really big discovery.
There are already several orders of magnitude more hardcore-quant people dithering about on pointless things like solving physics, and most of what we have so far comes from the tiny fraction of them who ended up in AI safety research.
Trevor -- yep, reasonable points. Alignment might be intractable in general, and it seems especially intractable for complex black-box networks with hundreds of billions of parameters and very poor interpretability tools.