Thank you for these references, I'll take a close look at them. I'll write a new comment if I have any thoughts after going through them.

Before reading them, I want to say that I'm interested in research on risk estimation and AI progress forecasting. General research on possible AI risks that doesn't assign them any probabilities is not very useful for determining whether a threat is relevant. If anyone has papers specifically on that topic, I'm very interested in reading them too.

I do agree that there is some risk, and it's certainly worth some thought and research. However, in the EA context, cause areas should have effective interventions. Given all this uncertainty, AI risk seems like a very low-priority cause, since we cannot be sure whether the research and other projects funded have any real impact. It would seem more beneficial to spend the money on interventions that have been proven effective. That is why I think EA is the wrong platform for AI risk discussion.