Abstract: Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical of current scholarship, which often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack the reflexivity and self-formative characteristics inherent in the notion of the subject. Drawing upon a recent dialogue between Foucault and phenomenology, I suggest four techno-philosophical desiderata that would address the gaps in this search for a technological subjectivity: embodied self-care, embodied intentionality, imagination and reflexivity. I thus propose that advanced AI be reconceptualised as a subject capable of “technical” self-crafting and reflexive self-conduct, opening new pathways to grasp the intertwinement of the human and the artificial. This reconceptualisation holds the potential to render future AI technology more transparent and responsible in the circulation of knowledge, care and power.

Published: AI & Society, 9th April.

Note: this is a bit of a shameless plug, since I wrote this paper, but it is relevant to the core tenets of effective altruism. It approaches AI ethics from a continental philosophical perspective, among other things reconstructing the alignment problem as one of mutual alignment, so it should read quite differently from the analytic language we typically see here (and perhaps a bit alien).
