
Nihal

5 karma · Joined May 2022

Comments: 3

Answer by Nihal · Oct 13, 2022

It seems that AI safety discussions assume that once general intelligence is achieved, recursive self-improvement makes superintelligence inevitable.

How confident are safety researchers about this point?

At some point, the difficulty of further improvements should exceed the intelligence gains they produce, and the AI will no longer be able to meaningfully improve itself. Why do safety researchers expect that point to be a vast superintelligence rather than something only slightly smarter than a human?
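
To make my confusion concrete, here's a toy sketch (entirely made-up numbers and decay schedules, not anyone's actual model): whether repeated self-improvement plateaus just above the starting level or climbs arbitrarily far seems to hinge entirely on how quickly the per-step gains shrink.

```python
# Toy model (made-up numbers): capability after repeated rounds of
# self-improvement, where each round's gain shrinks because further
# improvements get harder. Whether the process plateaus near its starting
# point or climbs arbitrarily far depends entirely on how fast the gains
# shrink; neither decay schedule below is anyone's actual forecast.

def run_improvement(initial, gain_at_step, steps):
    """Apply `steps` rounds of self-improvement and return final capability."""
    capability = initial
    for n in range(1, steps + 1):
        capability += gain_at_step(n)
    return capability

human_level = 1.0

# Geometric decay: gains halve each round, so the total gain is bounded
# (0.1 * (0.5 + 0.25 + ...) = 0.1) and the system stalls just above human level.
plateau = run_improvement(human_level, lambda n: 0.1 * 0.5 ** n, steps=100)

# Harmonic decay: gains shrink, but slowly enough that their sum diverges,
# so with enough rounds the system climbs arbitrarily far past human level.
runaway = run_improvement(human_level, lambda n: 0.1 / n, steps=1_000_000)

print(f"geometric decay, 100 rounds:      {plateau:.3f}")   # ~1.100
print(f"harmonic decay, 1,000,000 rounds: {runaway:.3f}")   # ~2.439
```

So the question is really: what makes researchers confident the returns to self-improvement diminish slowly enough for the "runaway" case rather than the "plateau" case?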

Answer by Nihal · Oct 13, 2022

How are DALYs/QALYs determined?

Life years are pretty objective, but how are the disability/quality adjustments made?
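
My rough understanding of the arithmetic, which may well be wrong, is something like the sketch below; the disability weight and the condition are invented for illustration.

```python
# A minimal sketch of the DALY arithmetic as I understand it (the 0.25
# disability weight and the 20-year duration are made up for illustration):
# DALYs = years of life lost (YLL) + years lived with disability (YLD),
# where YLD scales years lived with a condition by a weight between
# 0 (full health) and 1 (equivalent to death).

def dalys(years_of_life_lost, years_lived_with_condition, disability_weight):
    """DALYs = YLL + YLD, with YLD = years lived with condition * weight."""
    yld = years_lived_with_condition * disability_weight
    return years_of_life_lost + yld

# Hypothetical condition: no premature death, lived with for 20 years
# at an invented disability weight of 0.25.
print(dalys(years_of_life_lost=0,
            years_lived_with_condition=20,
            disability_weight=0.25))   # -> 5.0 DALYs
```

So my question is really about where a number like 0.25 comes from in practice, both for disability weights and for QALY quality weights.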

Answer by Nihal · Oct 13, 2022

Why should EAs be risk-neutral? 

When people talk about being risk-neutral, are they only referring to DALYs/QALYs, or are they also referring to monetary returns?

For example, if you plan to earn to give and can either take a high-paying salaried position or join a startup, and both options have equal expected value (EV), why shouldn't you prefer the less risky option?


My understanding of why individuals should be risk-averse with respect to money is that money has diminishing marginal returns. Doesn't money also have diminishing marginal returns for helping other people, so shouldn't EAs be somewhat risk-averse when earning to give?
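
To make this concrete, here's a toy calculation with made-up salary numbers, using log utility as a stand-in for "diminishing marginal returns" (whether over personal income or over the good done by donations): both careers have the same expected value, but the certain salary has higher expected utility.

```python
import math

# Toy illustration of the diminishing-returns argument above, with made-up
# salary numbers and log utility standing in for "diminishing marginal
# returns". Both careers have the same expected income, but the risky one
# has lower expected utility, which is the sense in which diminishing
# returns seem to imply some risk aversion even when earning to give.

def expected_value(outcomes):
    """Expected payoff of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, utility=math.log):
    """Expected utility of a list of (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

salary  = [(1.0, 150_000)]                    # certain $150k
startup = [(0.9, 50_000), (0.1, 1_050_000)]   # 0.9*50k + 0.1*1,050k = $150k EV

print(expected_value(salary), expected_value(startup))   # 150000.0 150000.0
print(expected_utility(salary))    # ~11.92  (higher despite equal EV)
print(expected_utility(startup))   # ~11.12
```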