Thanks for the detailed response kokotajlod, I appreciate it.
Let me summarize your viewpoint back to you to check that I've understood correctly. It sounds as though you are saying that AI (broadly defined) is likely to be extremely important and that the EA community currently underweights AI safety relative to its importance. Therefore, while you do think that not everyone will be suited to AI safety work and that the EA community should take a portfolio approach across problems, you think it's important to highlight where projects do not seem as important as work…
This type of reasoning seems to imply that everyone interested in the flourishing of beings, and thinking about that from an EA perspective, should focus on projects that contribute directly to AI safety. I take that to be the implication of your comment, because it is your main argument against working on something else (and it could equally be applied to any number of other projects discussed on the EA Forum, not just this one). That implies, to me at least, extremely high confidence that AI safety is the most important issue, because at lower confidence we would wan…
While I think this post was useful to share and this topic is worth discussing, I want to throw out a potential challenge that seems at least worth considering: perhaps the name "effective altruism" is not the true underlying issue here?
My (subjective, anecdotal) experience is that topics like this crop up every so often. By topics "like this", I mean things like: