Steven Cuppen

The FTX contest description listed "two formidable problems for humanity": 

"1. Loss of control to AI systems
Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.

2. Concentration of power
Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future."

My sense is that the contest is largely framed around (1) to the neglect of (2). Nick Beckstead's rationale for his current views is built around a scenario involving power-seeking AI, whereas scenarios related to (2) arguably don't require the existence of AGI in the first place, even though AGI is central to the main forecasting question. It seems that AI developments short of AGI could be enough to cause all sorts of disruptive changes with catastrophic consequences, for instance in geopolitics.

Based on my limited understanding, I'm often surprised how little focus there is within the AI safety community on human misuse of (non-general) AI. In addition to not requiring controversial assumptions about AGI, these problems also seem more tractable, since we can extrapolate from existing social science and have a clearer sense of what the problems could look like in practice. This might mean we can forecast more accurately, and my current sense is that it's not obvious that AI-related catastrophic consequences are more likely to come from AGI than from human misuse of non-AGI systems.

Maybe it would be helpful to frame the contest more broadly around catastrophic consequences resulting from AI.