Bio

Alongside my role at CRS, I am co-organising Sentient Futures Summit London 2026, which will run from Friday 22nd to Sunday 24th May (the weekend before EA Global London).

My career goal is to prevent, reduce and alleviate the most intense forms of suffering, with a focus on the intersection of powerful AI and sentient nonhumans (biological and artificial).

Interested in:

* Sentience- & suffering-focused ethics; sentientism; painism; s-risks
* Animal ethics & abolitionism
* AI safety & governance
* Activism, direct action & social change

Bio:

* From London
* BA in linguistics at the University of Cambridge, 2014-17
* Almost five years in the British Army as an officer, 2018-22
* MSc in global governance and ethics at University College London, 2022-23
* One year working full time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising, 2023-24
* Lead organiser of AI, Animals, & Digital Minds 2025 in London
* Now working part-time on fundraising and external relationships at CRS, and part-time co-organising Sentient Futures Summit London 2026

How others can help me

You can help by filling CRS's funding gap for 2026 (between $25k and $125k), either by donating or by putting me in touch with donors.

Comments

I guess the causal mechanism I'm thinking of here is:

  1. Most humans feel at least a little sad when they see a baby gazelle being eaten alive by hyenas
  2. AGI is so powerful that humans can order it to do things like "stop baby gazelles being eaten alive whilst retaining the beauty of nature and the complexity of ecosystems" and then it'll just go away and do it somehow

Maybe this is foolish and naive on my part! And maybe I'm wrong to think our moral preferences/intuitions will be so robust to the disruption of AGI, even if AGI goes well for us.

Some really cool points here, Lee, and I think I mostly agree with you.

Crux: how many actors have terminal preferences for suffering? Agency may be amplified for animal advocates, but it could also be amplified for malevolent actors.

This could be very important. I'm not sure what it means for AGI to go well for humans if some of those humans have terminal preferences for suffering / are sadistic. If the AGI protects the rest of us from the sadists, is AGI going well for the sadists?

EDIT: as well as sadists, we can consider humans who think animal agriculture, testing, etc. have enough aesthetic/historical/cultural value to be worth continuing in a post-AGI world of abundance.

I need to think about b) more. I see arguments in both directions.

I don't think I can properly imagine what it's like to be tortured or eaten alive, and yet the thought of each happening to me or someone else makes me feel some combination of horror, fear, upset and compassion. And the idea of suffering more intense than torture or being eaten alive (if future artificially sentient beings have wider welfare ranges than we do) is terrifying to me.

But if I could never suffer worse than a pinprick, maybe I would stop caring about the most intense forms of suffering. Concerning stuff.

What kinds of values will humans have post-AGI, if AGI goes well for us? We don't need to be scope-sensitive utilitarians to want to adopt even radical preferences like ending animal exploitation and solving wild animal suffering, no? (Most humans don't like factory farming or the idea of cute animals being eaten alive.)

This makes sense. I would worry about the purely corrigible AGI being used by actors in such a way that we never get to instil the correct/good/post-long-reflection values in AGI/ASI down the line.

I should have sketched this out more.

In my view, AGI going well for humans should see:

  1. Intense (and perhaps even moderate) human suffering eradicated
  2. Probably, humans remaining empowered
    1. Probably, our species isn't disempowered by AGI; and
    2. Probably, there isn't severe inter-human inequality, specifically inequality of power: we don't have a political elite determining how all other human lives go

Some kind of AGI-driven technological innovation should be able to achieve 1); it's not clear to me how we get to 2), as we'll probably need some kind of pro-democracy political innovation (I don't think our existing political institutions will get us there).

What this world actually looks and feels like is very unclear to me! But if we do both those things, it seems more likely than not that we humans will both want to and be able to help animals by abolishing animal exploitation and solving wild animal suffering.

Toby, would you be more optimistic for animals if we can align AGI to specific values rather than just making it corrigible to humans' preferences and commands?

My impression is that pro-animal views are (dramatically?) overrepresented at Anthropic relative to the rest of society. If Anthropic gets to AGI first and instils and locks in pro-animal values in that AGI, that seems better for animals than if whoever gets to AGI first just makes it purely corrigible, because most humans who operate the purely corrigible AGI won't be as pro-animal.

My position statement (50% agree with the statement "If AGI goes well for humans, it'll go well for animals")

  • As a suffering-focused ethicist who generally rejects moral aggregation across individuals (I am most sympathetic to painism), I have a higher bar for “AGI going well for humans” than many others do; it’s not clear to me that previous technological advances went well for humans
    • Agricultural revolution’s “luxury trap”: going from hunting-gathering to farming allowed humans to consolidate unprecedented wealth and power, but at the cost of the wellbeing/welfare/rights of very many humans
    • Perhaps similar arguments can be made for the industrial and digital revolutions
    • Even AGI Omelas is not an instance of AGI going well
  • “AGI going well” necessarily leaves many humans with the stated preference to help animals (which might look like “abolishing animal exploitation and solving wild animal suffering”), and it certainly gives us the means and opportunity to do so
  • I happen to think that AGI going well for humans is unlikely, even by the lights of someone who is more upside-focused
    • We're on track for creating something that is more intelligent than us (better at understanding the world and achieving goals within it) – and probably something with awareness, autonomy, agency, and the capacity for recursive self-improvement and self-replication – without understanding how it works, how to make it do what we want, or what it is we even want it to do
  • So, between these normative and empirical claims, I believe a world in which AGI goes well for humans is a very small fraction of the possibility space
  • And when I try to think about what this AGI-going-well-for-humans world looks like, mostly I don’t really know, but it seems likely that in this world:
    • We retain and develop our moral wisdom (the most fundamental tenet of which is plausibly “non-maleficence and compassion towards all sentient beings”)
    • And we have the means to enact this moral wisdom
    • So, we abolish animal exploitation and solve wild animal suffering
    • Thus, AGI goes well for animals as well as humans!
