Bio


Trying to make transformative AI go less badly for sentient beings, regardless of species and substrate

Interested in:

  • Sentience- & suffering-focused ethics; sentientism; painism; s-risks
  • Animal ethics & abolitionism
  • AI safety & governance
  • Activism, direct action & social change

Bio:

  • From London
  • BA in linguistics at the University of Cambridge
  • Almost five years in the British Army as an officer
  • MSc in global governance and ethics at University College London
  • One year working full time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising
  • Now pivoting to the (future) impact of AI on biologically and artificially sentient beings
  • Currently lead organiser of the AI, Animals, & Digital Minds conference in London in June 2025


How I can help others

I can help with:

1. Connections with the animal advocacy/activism community in London, and with the AI safety advocacy community (especially/exclusively PauseAI)

2. Ideas on moral philosophy (sentience- and suffering-focused ethics, painism), social change (especially transformative social change) and leadership (partly from my education and experiences in the British Army)

Comments

We should probably be more painist:

[painism is…] the theory that moral value is based upon the individual’s experience of pain (defined broadly to cover all types of suffering whether cognitive, emotional, or sensory), that pain is the only evil, and that the main moral objective is to reduce the pain of others, particularly that of the most affected victim, the maximum sufferer. (Ryder 2010, p. 402)

  • I support PauseAI much more because I want to reduce the future probability and prevalence of intense suffering (including but not exclusively s-risk) caused by powerful AI, and much less because I want to reduce the risk of human extinction from powerful AI
  • However, couching demands for an AGI moratorium in terms of "reducing x-risk" rather than "reducing suffering" seems
    • More robust to the kind of backfire risk that suffering-focused people at e.g. CLR are worried about
    • More effective in communicating catastrophic AI risk to the public

Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness because it is more morally important to mitigate suffering than to create happiness.

To shrimps and other sentient non-humans, we are a misaligned superintelligence

Durrell added - I wish all those protesting to animals living in zoos and claiming animals lead far happier lives in the wild - I wish they all saw this!

I agree we shouldn't assume that animals lead far happier lives in the wild, but I don't think that means we should support zoos (which, unlike sanctuaries, exist for the benefit of humans rather than the animals, and typically rely on breeding animals).

To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being

This implies preferences matter when they cause well-being (positively-valenced sentience).

I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.

I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.

This implies that what matters is revealed preferences (irrespective of well-being/sentience/phenomenal consciousness).

In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.

...

These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.

This implies that what matters is intrinsic preferences as opposed to revealed preferences.

These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.

This (I think) is a circular argument.

I don't have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world.

This implies that cognitive complexity and intelligence are what matter. But one could probably describe a corporation (or a military intelligence battalion) in these terms, and probably couldn't describe a newborn human in these terms.

If it's coherent to talk about a particular mind "wanting" something, then I think it matters from an ethical point of view.

I think we're back to square one, because what does "wanting something" mean? If you mean "having preferences for something", which preferences (revealed, intrinsic, or meaningful)?

My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that's all that matters morally in the world.

To be clear, which preferences do you think are morally relevant/meaningful? I'm not seeing a consistent thread through these statements.

To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being

...

I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.

I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.

...

In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.

...

These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.

In other words, from a moral standpoint, what matters are the preferences of the individual humans involved in the corporation, not the revealed preferences of the corporation itself as a separate entity.

It's not obvious to me how this perspective (which assigns weight to the intrinsic preferences of individuals) is compatible with what you wrote in an earlier comment, downplaying the separateness of individuals and emphasising revealed preferences over phenomenal consciousness (which sounds similar to having intrinsic preferences?):

  1. I subscribe to an eliminativist theory of consciousness, under which there is no "real" boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
  2. I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.