Why should EAs be risk-neutral?
When people talk about being risk-neutral, are they referring only to DALYs/QALYs, or also to monetary returns?
For example, if you plan to earn to give and can either take a high-paying salaried position or join a startup, and both options have equal expected value (EV), why shouldn't you prefer the less risky option?
My understanding of why individuals should be risk-averse with respect to money is that money has diminishing marginal returns. Doesn't money also have diminishing marginal returns when used to help other people, so EAs should be somewhat risk-averse when earning to give?
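To make the question concrete, here is a minimal sketch (all of the numbers and the square-root "good done" function are made-up assumptions, purely for illustration) of how diminishing marginal returns on donations would favor the less risky option even when the expected values are equal:

```python
import math

# Hypothetical numbers: a salaried role lets you donate a certain $200k,
# while a startup lets you donate $2M with 10% probability and $0 otherwise.
# Both options have the same expected value of $200k.
salary_donation = 200_000
startup_outcomes = [(0.10, 2_000_000), (0.90, 0)]

def good_done(dollars):
    """Toy concave 'good done' function: diminishing marginal returns,
    modeled here as sqrt(dollars). The real shape is an open question."""
    return math.sqrt(dollars)

ev_startup = sum(p * x for p, x in startup_outcomes)
eu_salary = good_done(salary_donation)
eu_startup = sum(p * good_done(x) for p, x in startup_outcomes)

print(f"expected dollars  -> salary: {salary_donation:,}, startup: {ev_startup:,.0f}")
print(f"expected good done -> salary: {eu_salary:.1f}, startup: {eu_startup:.1f}")
# With a concave good_done, the certain option wins despite equal EV,
# i.e. diminishing returns imply some degree of risk aversion.
```

If you swap in a linear good_done function, the two options come out identical, which I assume is the risk-neutral case people usually have in mind.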
It seems that AI safety discussions assume that once general intelligence is achieved, recursive self-improvement makes superintelligence inevitable.
How confident are safety researchers about this point?
At some point, the difficulty of further improvements should exceed the capability they add, and the AI will no longer be able to improve itself. Why do safety researchers expect that plateau to lie at a vast superintelligence rather than something only slightly smarter than a human?
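To illustrate what I mean, here is a toy sketch (the gain functions and constants are arbitrary assumptions, not a model anyone actually uses) of recursive self-improvement under two different returns-to-improvement regimes:

```python
# Toy model of recursive self-improvement, illustrative only. Capability c
# grows each step by gain(c); the gain function encodes how hard further
# improvements are. Which regime real AI development falls into is exactly
# the open question asked above.

def run(gain, steps=50, start=1.0):
    capability = start
    for _ in range(steps):
        capability += gain(capability)
    return capability

# Constant proportional returns: each unit of capability buys a proportional
# improvement, so capability grows exponentially ("intelligence explosion").
explosive = run(lambda c: 0.2 * c)

# Diminishing returns: the gain shrinks as capability rises and hits zero at
# c = 10/3, so the process stalls only a few times above where it started.
plateau = run(lambda c: max(0.0, 1.0 - 0.3 * c))

print(f"constant-returns model after 50 steps:    {explosive:,.1f}")
print(f"diminishing-returns model after 50 steps: {plateau:,.1f}")
```

The question is which of these regimes, if either, safety researchers expect to describe actual AI development, and how confident they are in that expectation.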