Thanks! I agree that this issue is very important - this is why intersubstrate welfare comparisons are one of the four main AI welfare research priorities that I discuss in the post. FYI, Bob Fischer (who you might know from the moral weight project at Rethink Priorities) and I have a paper in progress on this topic. We plan to share a draft in late July or early August, but the short version is that intersubstrate welfare comparisons are extremely important and difficult. The main question is whether they are tractable; Bob and I think this is an open question, but we also see several reasons for cautious optimism, which we discuss in the paper along with a call for more research on the topic.
With that said, one minor caveat: Even if you think that (a) all systems are potential welfare subjects and (b) we should give moral weight to all welfare subjects, you might or might not think that (c) we should give moral weight to all systems. The reason is that you might or might not think that we should give moral weight to extremely low risks. If you do, then yes, it follows that we should give at least some moral weight to all systems, including systems with an extremely low chance of being welfare subjects at all. If not, then it follows that we should give at least some moral weight to all systems with a non-negligible chance of being welfare subjects, but not to systems with only a negligible chance of being welfare subjects.
Good question! I think that the best path forward requires taking a "both-and" approach. Ideally we can (a) slow down AI development to buy AI ethics, safety, and sentience researchers time and (b) speed up these forms of research (focusing on moral, political, and technical issues) to make good use of this time. So, yes, I do think that we should avoid creating potentially sentient AI systems in the short term, though as my paper with Rob Long discusses, that might be easier said than done. As for whether we should create potentially sentient AI systems in the long run (and how individuals, companies, and governments should treat them to the extent that we do), that seems like a much harder question, and it will take serious research to address it. I hope that we can do some of that research in the coming years!
Yes, I think that assessing the moral status of AI systems requires asking (a) how likely particular theories of moral standing are to be correct and (b) how likely AI systems are to satisfy the criteria for each theory. I also think that even if we feel confident that, say, sentience is necessary for moral standing and AI systems are non-sentient, we should still extend AI systems at least some moral consideration for their own sakes if we take there to be at least a non-negligible chance that, say, agency is sufficient for moral standing and AI systems are agents. My next book will discuss this issue in more detail.
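To make the structure concrete with purely illustrative numbers (these are not estimates from the post, and I assume independence for simplicity): suppose we assign only a 10% credence to agency being sufficient for moral standing and a 20% credence to near-term AI systems being agents in the relevant sense. Then, roughly,

$$P(\text{moral standing}) \geq P(\text{agency suffices}) \times P(\text{AI systems are agents}) = 0.1 \times 0.2 = 0.02,$$

and a 2% chance of moral standing is plausibly non-negligible, which on this kind of view is enough to warrant at least some moral consideration.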
Thanks! I share your concern about sadism. Insofar as AI systems have the capacity for welfare, one risk is that humans might mistakenly see them as lacking this capacity and, so, might harm them accidentally, and another risk is that humans might correctly see them as having this capacity and, so, might harm them intentionally. A difficulty is that mitigating these risks might require different strategies. I want to think more about this.
I also share your concern about objectification. I can appreciate why AI labs want to mitigate the risk of false positives / excessive anthropomorphism. But as I note in the post, we also face a risk of false negatives / excessive anthropodenial, and the latter risk is arguably worse (more likely and/or severe) in many contexts. I would love to see AI labs develop a more nuanced approach to this issue that mitigates these risks in a more balanced way.
No, but this would be useful! Some quick thoughts:
A lot depends on our standard for moral inclusion. If we think that we should include all potential moral patients in the moral circle, then we might include a large number of near-term AI systems. If, in contrast, we think that we should include only beings with at least, say, a 0.1% chance of being moral patients, then we might include a smaller number.
With respect to the AI systems we include, one question is how many there will be. This is partly a question about moral individuation. Insofar as digital minds are connected, we might see the world as containing a large number of small moral patients, a small number of large moral patients, or both. Luke Roelofs and I will be releasing work about this soon.
Another question is how much welfare they might have. No matter how we individuate them, they could collectively have a lot, whether because a large number of them each have a small amount, because a small number of them each have a large amount, or both. I discuss possible implications here: https://www.tandfonline.com/doi/abs/10.1080/21550085.2023.2200724
It also seems plausible that some digital minds could process welfare more efficiently than biological minds because they lack our evolutionary baggage. But assessing this claim requires developing a framework for making intersubstrate welfare comparisons, which, as I note in the post, will be difficult. Bob Fischer and I will be releasing work about this soon.
Thanks Fai! Our year one goals include producing a research agenda and set of research priorities, so we still have an open mind about the details here. But generally speaking, I expect that our early research will focus on foundational questions that matter for both populations, and that insofar as we prioritize between them, our early research will prioritize AIs. (With that said, MEP is one of two new programs that we plan to launch this year, and the other one is more on the animal side. That one will be announced next week, so stay tuned for that!)
Yes, thanks for noting this Ben! Very, very excited about The Edge of Sentience - Jonathan and I traded drafts a while ago, and I think that his book is going to be a big deal. And happily the books will pair well together; they both argue for moral circle expansion on precautionary grounds, but they have different areas of focus in a way that makes them nicely complementary.
Thanks for preordering my book as well! :)