Pauline Kuss

0 karma · Joined Sep 2022

Comments (1)

I am intrigued by your point that superhuman intelligence does not imply an AI's superhuman power to take over the world. Highlighting the importance of connecting information-based intelligence with social power, including mechanisms for coordinating and influencing humans, suggests that AI risks ought to be considered not from a purely technical but from a socio-technical perspective. Such a socio-technical framing raises the question of how technical factors (e.g. processing power) and social factors (e.g. the rights and trust vested in the system by human actors; the AI's social standing) interrelate in the creation of AI risk scenarios. Do you know of current work in the EA community on the mechanisms and implications of such a socio-technical understanding of AI risks?