Best of: AGI & Animals Debate Week

We had 18 dedicated posts written for AGI & Animals Debate Week. For this highlight reel, I’m forcing myself to choose only five. Opinions may differ, so if you haven’t at least glanced at the posts yet, check out the full list here.

My top five, in no particular order, were:

Cultivated meat isn’t necessarily a solved problem under AGI (@Hannah McKay🔸 )

Without this post, it'd be all too easy to say "well, if we had AGI, it'd solve cultivated meat, so there wouldn't be farm animal suffering". Hannah McKay problematises this assumption in a way that allows her to outline many possible futures, rather than restricting herself to a best guess or most plausible path. Hannah was also part of our symposium, which you can read here.

List of ideas for improving animal welfare in light of transformative AI (@MichaelDickens)

Michael Dickens was absolutely blasting out relevant (and good) posts during the debate week. I chose this one to share here because it covers an angle that was lacking in a lot of the debate, i.e., what we can actually do about aligning AI for animal welfare, and whether it is tractable. 

Check out his other posts here; he wrote one for every day of the week.

AI Safety and Cross-Species Robustness: A brief critical review (@Jim Buhler)

Jim Buhler's post is an example of well-practised equipollence: the sceptical skill of balancing arguments just right, so that the thinker is left unmoved in either direction. I think Jim justifies his 0% agree/disagree vote on the debate statement, and he might also justify yours.

If AGI goes well for humans, will it go well for (wild) animals? (@Simon Eckerström Liedholm)

Thanks to Simon Eckerström Liedholm for giving us a post specifically about wild animals, who, as some commenters on the discussion thread pointed out, represent most animals. The post is a thoughtful defence of a specific credence: a 30% chance that if AGI goes well for humans, it'll go well for animals. 

Animal Welfare is Just Part of AI Alignment Now (@Aidan Kankyoku)

Aidan Kankyoku's post is very practically directed. In it, he argues that if you assign some credence (he gives 10% in one example) to the view that an AGI that goes well for humans will not go well for animals, you'd best be thinking about how to make that number go down. As such, he argues that Animal Welfare and AI Alignment are (and/or should be) both part of a broader 'Make AI Go Well' movement.

Again, this is my opinion rather than the result of a survey. I'm almost definitely missing some gems... which you can find here.