Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
1/6 might be high, but perhaps not too many orders of magnitude off. There is an interview on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/) about a forecasting contest in which experts and superforecasters estimated the risk of AI-caused extinction this century at 1% to 10%. And after all, AI is likely to dominate the overall extinction-risk estimate.
Commenters are also confusing 'should we give PauseAI more money?' with 'would it be good if we paused frontier models tomorrow?'
I think it is reasonable to say that we should give PauseAI more money only if (as necessary conditions) (1) we think pausing AI is desirable, and (2) PauseAI's methods are relatively likely to achieve that outcome, conditional on having the resources to do so. Many of the comments highlight that neither assumption is clear to many forum participants. In fact, I think it is reasonable to stress disagreement with (2) in particular.
This reminds me of quantum computers or fusion reactors — we can build them, but the economics are far from working.
A quantum research scientist here: I would actually argue that is a misleading model for quantum computing. The main issue right now is technical, not economic. We still have to figure out error correction, without which you are limited to roughly 1,000 gates per circuit. Far too few to do anything interesting.
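For intuition on where that gate-count ceiling comes from, here is a rough back-of-the-envelope sketch. The per-gate error rate of 1e-3 is my own illustrative assumption (roughly the ballpark of good current hardware), not a figure from the comment above, and it treats gate errors as independent.

```python
import math

# Assumed per-gate error rate (illustrative only).
p = 1e-3

for n_gates in (100, 1_000, 10_000):
    # Probability that every gate succeeds, assuming independent errors:
    # (1 - p)^n, which is approximately exp(-p * n).
    p_success = (1 - p) ** n_gates
    print(f"{n_gates:>6} gates -> circuit success probability ~ {p_success:.3g}")

# Roughly: 0.9 at 100 gates, 0.37 at 1,000 gates, and essentially zero at
# 10,000 gates. Beyond ~1,000 gates the output is mostly noise, which is why
# error correction is the bottleneck rather than economics.
```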
I think you should explain in this post what pledge people would actually be taking :-)
I am particularly interested in how to make the pledge more concrete. I have always thought that the 10% pledge is somewhat incomplete because it does not take one's career into account; that said, I think it would be useful to make the career pledge more actionable.