To be honest, I don't even really know what "AI alignment" is--and after skimming the Wikipedia page on it, it sounds like a very broad term for a wide range of problems that arise at very different levels of abstraction--but I do know a smidgen about machine learning and a fair amount about math, and it seems like "AI alignment" is getting a ton of attention on this forum, with loads of people here trying to plan their careers around working on it.
Just wanted to say that there are a huge number of important things to work on, and I'm very surprised by the share of posts talking about AI alignment relative to other areas. Obviously AI is already making an impact and will make a huge impact in the future, so it seems like a good area to study, but something tells me there may be a bit of a "bubble" going on here with the share of attention it's getting.
I could be totally wrong, but I just figured I'd say what occurred to me as an uneducated outsider. And if this has already been discussed ad nauseam, no need to rehash everything.
Echoing my first sentence about different levels of abstraction, it may be worth considering whether the various things currently going under the heading of AI alignment should be lumped together under one term. Some seem like problems where a few courses in machine learning etc. would be enough to make progress. Others strike me as quixotic to even think about without many years of intensive math/CS learning under your belt.
Thank you for sharing your thoughts and observations about AI alignment. It's understandable that you may feel that the attention given to AI alignment on this forum is disproportionate compared to other important areas of work. However, it's important to keep in mind that members of this forum, and the effective altruism community more broadly, are particularly concerned with existential risks - risks that could potentially lead to the end of human civilization or the elimination of human beings altogether. Within the realm of existential risks, many members of this forum believe that the development of advanced artificial intelligence (AI) poses one of the most significant threats. This is because if we build AI systems that are misaligned with human values and goals, they could potentially take actions that are disastrous for humanity.
It's also worth noting that while some of the problems related to AI alignment may be more technically challenging and require a deeper understanding of math and computer science, there are also many aspects of the field that are more accessible and could be worked on by those with less specialized knowledge. Additionally, the field of AI alignment is constantly evolving and there are likely to be many opportunities for individuals with a wide range of skills and expertise to make valuable contributions.
Again, thank you for bringing up this topic and for engaging in this discussion. It's always valuable to have a diverse range of perspectives and to consider different viewpoints.
[note: this comment was written by ChatGPT, but I agree with it]