Daniel Tan


One concern I haven't seen raised elsewhere: PauseAI was started during the 'era' of scaling training-time compute, and it seems predicated on the assumption that we can stop the development of more advanced AIs by preventing big labs from training bigger models.

However, the paradigm has shifted. As Ilya Sutskever discussed in his NeurIPS 2024 talk, we've hit slowdowns in scaling pre-training and post-training compute. (You could say that AI development along these lines has already 'paused' naturally, not due to political pressure but due to technical and economic factors.)

Instead, gains are increasingly coming from scaling inference-time compute. And this is not some future prospect: it can be done straightforwardly today, as long as you have the compute to run queries. So I feel there's a need to also govern inference-time compute.

(FWIW: I recognise that two things can be important at once.)