EAs concerned about AI alignment need to challenge much more actively the dominant AI-apologist narratives that (1) 'AGI is coming no matter what we do, given the geopolitical and corporate arms races driving AI development', (2) 'Any strong opposition to advancing AI capabilities reflects reactionary luddite ignorance about AI', and (3) 'AGI must be worth developing, despite all the risks, because it can and will solve all of our other problems'.
All three of these narratives strike me as fatalistic cope -- as if EAs can help solve every major problem in the world, with the unique exception of having zero power to slow down AI development if we decide that it's prudent to slow it down.
We have to confront the possibility that we live in a world where radically slowing down AI capability development is one of the most important strategies for (1) giving us time to seriously assess whether AGI alignment is even possible, in principle (and it might not be), and (2) planning specific strategies for AGI alignment, if it seems possible.
I basically agree, but following this advice would require lowering one's own status (relative to the counterfactual). So it's not surprising that people don't follow the advice.
Andrea - I strongly agree.