To be honest, I don't even really know what "AI alignment" is. After skimming the Wikipedia page on it, it sounds like a very broad term for a wide range of problems that arise at very different levels of abstraction. But I do know a smidgen about machine learning and a fair amount about math, and it seems like "AI alignment" is getting a ton of attention on this forum, with loads of people here trying to plan their careers around working on it.
Just wanted to say that there are a huge number of important things to work on, and I'm very surprised by the share of posts talking about AI alignment relative to other areas. Obviously AI is already making an impact and will make a huge impact in the future, so it seems like a good area to study, but something tells me there may be a bit of a "bubble" going on here with the share of attention it's getting.
I could be totally wrong, but I just figured I'd say what occurred to me as an uneducated outsider. And if this has already been discussed ad nauseam, no need to rehash everything.
Echoing my first sentence about different levels of abstraction, it may be worth considering whether the various things currently going under the heading of AI alignment should be lumped together under one term. Some of them seem like problems where a few courses in machine learning would be enough to start making progress. Others strike me as quixotic to even think about without many years of intensive math/CS learning under your belt.
Lixiang - thanks for your post; I can see how it may look like EA over-emphasizes AI alignment relative to other issues.
I guess one way to look at this is, as you note, 'AI alignment' is a very broad umbrella term that covers a wide variety of possible problems and failure modes for advanced cognitive technologies. Just as 'AI' is a very broad umbrella term that covers a wide variety of emerging cognitive technologies that have extremely broad uses and implications.
Insofar as 21st-century technological progress might be dominated by these emerging cognitive technologies, 'AI' basically boils down to 'almost every new computer-based technology that might have transformative effects on human societies' (which is NOT restricted just to Artificial General Intelligence). And 'AI alignment' boils down to 'almost everything that could go wrong (or right) with all of these emerging technologies'.
Viewed that way, 'AI alignment' is basically the problem of surviving the most transformative information technologies in this century.
Of course, there are plenty of other important and profound challenges that we face, but I'm trying to express why so many EAs put such emphasis on this particular issue.