Thanks, that link definitely touches on many of the same points!
Where my proposal is more concrete: the models would learn morals through evolutionary pressures / RL rewards designed, using game theory, to push toward cooperation and tit-for-tat strategies.
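To make the game-theoretic side a bit more tangible, here's a minimal sketch of the kind of reward structure I mean, using the classic iterated prisoner's dilemma payoffs (purely illustrative, not the actual training setup):

```python
# Iterated prisoner's dilemma: per-round payoffs act as RL-style rewards.
# Standard payoff matrix: (my move, their move) -> (my reward, their reward)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, get exploited
    ("D", "C"): (5, 0),  # I defect, exploit them
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; cumulative payoffs are the reward signal."""
    history_a, history_b = [], []  # each entry: (own move, opponent move)
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(history_a)
        b = strategy_b(history_b)
        ra, rb = PAYOFFS[(a, b)]
        score_a += ra
        score_b += rb
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): sustained cooperation pays best
print(play(tit_for_tat, always_defect))   # (9, 14): defection wins short-term, scores lower overall
```

The point being that an agent trained against rewards like these is pressured toward reciprocal cooperation, because mutual cooperation (3+3 per round) outproduces mutual defection (1+1) over repeated play.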