Alongside my role at CRS, I am co-organising Sentient Futures Summit London 2026, which will be Friday 22nd to Sunday 24th May (the weekend before EA Global London).
My career goal is to prevent, reduce and alleviate the most intense forms of suffering, with a focus on the intersection of powerful AI and sentient nonhumans (biological and artificial).
Interested in:
* Sentience- & suffering-focused ethics; sentientism; painism; s-risks
* Animal ethics & abolitionism
* AI safety & governance
* Activism, direct action & social change
Bio:
* From London
* BA in linguistics at the University of Cambridge, 2014-17
* Almost five years in the British Army as an officer, 2018-22
* MSc in global governance and ethics at University College London, 2022-23
* One year working full-time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising, 2023-24
* Lead organiser of AI, Animals, & Digital Minds 2025 in London
* Now working part-time on fundraising and external relationships at CRS, and part-time co-organising Sentient Futures Summit London 2026
If you can help fill CRS's funding gap for 2026 (between $25k and $125k) – by donating or by putting me in touch with donors – please get in touch.
I guess the causal mechanism I'm thinking of here is:
Maybe this is foolish and naive on my part! And maybe I'm wrong to think our moral preferences/intuitions will be so robust to the disruption of AGI, even if AGI goes well for us.
Some really cool points here Lee, and I mostly agree with you I think.
Crux: how many actors have terminal preferences for suffering? Agency may be amplified for animal advocates, but it could also be amplified for malevolent actors.
This could be very important. I'm not sure what it means for AGI to go well for humans if some of those humans have terminal preferences for suffering / are sadistic. If the AGI protects the rest of us from the sadists, is AGI going well for the sadists?
EDIT: as well as sadists, we can consider humans who think animal agriculture, testing etc. has enough aesthetic/historical/cultural value that it's worth continuing to do it in a post-AGI world of abundance.
I need to think about b) more. I see arguments in both directions.
I don't think I can properly imagine what it's like to be tortured or eaten alive, and yet the thought of each happening to me or someone else makes me feel some combination of horror, fear, upset and compassion. And the idea of suffering more intense than torture or being eaten alive (if future artificially sentient beings have wider welfare ranges than we do) is terrifying to me.
But if I could never suffer worse than a pinprick, maybe I would stop caring about the most intense forms of suffering. Concerning stuff.
What kinds of values will humans have post-AGI, if AGI goes well for us? We don't need to be scope-sensitive utilitarians to want to adopt even radical preferences like ending animal exploitation and solving wild animal suffering (WAS), no? (Most humans don't like factory farming or the idea of cute animals being eaten alive.)
I should have sketched this out more.
In my view, AGI going well for humans should see:
Some kind of AGI-enabled technological innovation will be able to do 1); it's not clear to me how we get to 2), as we'll probably need some kind of political pro-democracy innovation (I don't think our existing political institutions will get us there).
What this world actually looks and feels like is very unclear to me! But if we do both those things, it seems more likely than not that we humans will both want and be able to help animals by abolishing animal exploitation and solving wild animal suffering.
Toby, would you be more optimistic for animals if we can align AGI to specific values rather than just making it corrigible to humans' preferences and commands?
My impression is that pro-animal views are (dramatically?) overrepresented at Anthropic relative to the rest of society. If Anthropic gets to AGI first and instils and locks in pro-animal values in that AGI, that seems better for animals than if whoever gets to AGI first just makes it purely corrigible, because most humans who operate the purely corrigible AGI won't be as pro-animal.
My position statement (50% agree with the statement "If AGI goes well for humans, it'll go well for animals")
Thanks for organising Toby!