"Part one of our challenge is to solve the technical alignment problem, and that’s what everybody focuses on, but part two is: to whose values do you align the system once you’re capable of doing that, and that may turn out to be an even harder problem", Sam Altman, OpenAI CEO (Link).
In this post, I argue that:
1. "To whose values do you align the system" is a critically neglected space I termed “Moral Alignment.” Only a few organizations work for non-humans in this field, with a total budget of 4-5 million USD (not accounting for academic work). The scale of this space couldn’t be any bigger - the intersection between the most revolutionary technology ever and all sentient beings. While tractability remains uncertain, there is some promising positive evidence (See “The Tractability Open Question” section).
2. Given the first point, our movement must attract more talent and funding to address it. The goal is to align AI's values with caring about all sentient beings: humans, animals, and potential future digital minds. In other words, I argue we should invest far more in promoting sentient-centric AI.
The problem
What is Moral Alignment?
AI alignment focuses on ensuring AI systems act according to human intentions, emphasizing controllability and corrigibility (the system's openness to correction as human preferences change). However, traditional alignment largely ignores the ethical implications for sentient beings other than humans. Moral Alignment, as part of the broader AI alignment and AI safety spaces, is the field concerned with which values we aim to instill in AI. I argue that our goal should be to ensure AI is a positive force for all sentient beings.
Currently, as far as I know, no overarching organization, term, or community unifies Moral Alignment (MA) as a field with a clear umbrella identity. Specific groups focus individually on animals, humans, or digital minds (for example, AI for Animals, which does excellent community-building work around AI and animal welfare), but no single effort covers the whole sentient-centric space.