
samjm

9 karma · Joined November 2022

Comments (2)

Great stuff! Agree on the importance of this. I think the odds of this type of disruption being harmful are largely a function of the pace at which increasingly capable systems are deployed. Going from, e.g., 20% task automation capabilities to 100% over the course of 50 years would be a far less disruptive, and more equitable, transition than one that happens over 3 years. In the fast takeoff case, I would argue that there is probably no social safety net program that could adequately counter the social, political, and economic disruption caused by that pace of deployment. So while we should certainly plan to build societal resilience via institution building and shoring up safety nets, we may also want to add “figure out optimal deployment speeds for aligned, non-dangerous, misuse-proof AI” and “figure out the right regulatory mechanisms to enforce those timelines on AI labs” to this research agenda.

Thanks for posting these! It’s helpful both as a nudge to participate and as a way of keeping up with research on human challenge trials.