Risks from malevolent actors

In a pioneering essay on the topic, David Althaus and Tobias Baumann have proposed a number of interventions aimed at reducing risks from malevolent actors.[1] Their proposals include advancing the science of malevolence, both by aligning constructs with the morally relevant forms of malevolence and by developing manipulation-proof measures of it, and promoting political reforms to make the rise of malevolent actors less probable. Because malevolent actors likely pose the gravest risks in scenarios where they gain access to transformative technology, the authors also propose shaping the development of such technologies to mitigate the harms these individuals could cause, for example by reducing the presence of malevolent traits in AI training environments and by ensuring that candidates for whole brain emulation score low on measures of malevolence.

Malevolence may be operationalized as the general factor that accounts for the observed correlations between negative personality traits, the so-called "dark core" of personality.[2] People who score unusually high on this general factor could pose serious risks to the long-term future of humanity, including existential risks and risks of astronomical suffering. Such people appear more likely than average to attain positions of power in government and industry, and, conditional on gaining this influence, they are also much more likely to cause major harm.