Trying to make transformative AI go less badly for sentient beings, regardless of species or substrate
Bio:
I can help with:
1. Connections with the animal advocacy/activism community in London, and with the AI safety advocacy community (especially/exclusively PauseAI)
2. Ideas on moral philosophy (sentience- and suffering-focused ethics, painism), social change (especially transformative social change) and leadership (partly from my education and experiences in the British Army)
Durrell added - I wish all those protesting against animals living in zoos, and claiming animals lead far happier lives in the wild - I wish they all saw this!
I agree we shouldn't assume that animals lead far happier lives in the wild, but I don't think that means we should support zoos (which, unlike sanctuaries, exist for the benefit of humans rather than of the animals, and typically rely on breeding animals).
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being
This implies that preferences matter insofar as satisfying them produces well-being (positively-valenced sentience).
I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
This implies that what matters is revealed preferences (irrespective of well-being/sentience/phenomenal consciousness).
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
This implies that what matters is intrinsic preferences as opposed to revealed preferences.
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
This (I think) is a circular argument.
I don't have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world.
This implies that cognitive complexity and intelligence are what matter. But one could probably describe a corporation (or a military intelligence battalion) in these terms, and one probably couldn't describe newborn humans in them.
If it's coherent to talk about a particular mind "wanting" something, then I think it matters from an ethical point of view.
I think we're back to square one, because what does "wanting something" mean? If it means "having preferences for something", which preferences count (revealed, intrinsic, meaningful)?
My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that's all that matters morally in the world.
To be clear, which preferences do you think are morally relevant/meaningful? I'm not seeing a consistent thread through these statements.
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being
...
I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
...
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
In other words, from a moral standpoint, what matters are the preferences of the individual humans involved in the corporation, not the revealed preferences of the corporation itself as a separate entity.
It's not obvious to me how this perspective (which assigns moral weight to the intrinsic preferences of individuals) is compatible with what you wrote in an earlier comment, which downplayed the separateness of individuals and emphasised revealed preferences over phenomenal consciousness (even though phenomenal consciousness sounds similar to having intrinsic preferences?):
- I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
- I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
We should probably be more painist: