Sometime in the next few decades, and potentially much sooner, the world is likely to see explosive economic growth from artificial intelligence that can automate human labor.
One consequence of this massive scaling up of economic activity will be a large increase in the stock of computing hardware. Today, a great many computers are owned by private citizens, who have a great deal of freedom in what programs they choose to run on them. To the extent that private citizens share in the wealth generated by this new economy, that arrangement will probably continue.
In addition to an increase in hardware, another consequence of the AI revolution will be more kinds of software to run on that hardware. This may include computer programs that constitute moral patients. Even if for some reason you don't think the kinds of AIs that end up generating most economic value will be conscious, there may be other kinds of programs available that will be: uploads of humans or other animals, perhaps, or AIs more closely modeled on the particular anatomy of mammalian brains. A narrow focus on the rights of highly capable AI workers misses the fact that the future will probably be very BIG and contain lots of different kinds of minds.
Now, unless there is something to prevent them, some people will choose to subject digital minds running on computers they own to horrific abuses. I suspect one reason futurists haven't thought much about this issue is that they've expected AI to bring about either a singleton utopia or human extinction. But I think intermediate, decentralized outcomes are more likely.
It is not too difficult to find people online talking about how they enjoy subjecting LLM-generated characters to gory or psychologically stressful scenarios. I am not taking a position on whether this is already immoral, but at some point in the development of digital minds it will be. Because of their rarity, effective altruists don't focus on deviant, criminal acts of extreme sadism, but accounts of them are among the most disturbing things to read about. A world in which perpetrating these kinds of crimes is as easy as, say, distributing child pornography is in the internet age would be a catastrophe.
When I've brought up this issue in person, I've gotten the response that it is already pretty easy for someone who wants to torture animals to do so in secret. What makes what I'm talking about so different from the current world? For one, digital minds could be optimized to experience far higher levels of suffering than is realizable by biological organisms. The duration of their suffering could also probably be manipulated more easily, though the relation between computer speeds and the speeds of minds running on them is a subtle issue that isn't clear to me.
I care most about relieving the most intense forms of suffering, like cluster headaches, and the degree of control people may have over digital minds makes me think that the worst suffering of the future will, sadly, be deliberately engineered.
More Attention Needed!
Here I'll just list some topics I think deserve a closer look:
Getting more quantitative about the post-singularity world: is it possible to develop a model that gives more precise estimates of some of the quantities I've mentioned in this post? For example, a "large increase in the stock of computing hardware": how much should we actually expect, how much will be available to private consumers, and how many digital minds could be run on it? (A toy sketch of what such a model might look like follows this list.)
Technical feasibility of enforcement: if governments wanted to pass laws preventing abuses, how could they enforce them? It seems like full prevention would require some kind of intensive monitoring regime. How could this work, in detail? (Why not leave this to future intelligences to figure out? Well, I'm worried we might get "locked in" to an equilibrium where it's too late to implement a good plan. Having a very clear idea of what the target should be might help prevent this.)
Political feasibility of enforcement: effective policing could raise serious privacy and concentration-of-power concerns. Are there good ways of addressing these concerns without sacrificing robust protections for digital minds? Additionally, are there other political causes that could be effectively allied with this one? For example, perhaps effective biosecurity will also eventually require intensive monitoring.
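As a very rough illustration of the first item above, here is a toy Fermi model in Python. Every parameter (current global compute stock, post-automation growth factor, consumer share of hardware, compute needed to run one digital mind) is a placeholder assumption chosen purely for illustration, not an estimate I'm endorsing; a serious model would need defensible values and uncertainty ranges for each.

```python
# Toy Fermi estimate: how many digital minds could consumer-owned hardware run
# after a period of AI-driven growth? All parameter values are placeholder
# assumptions for illustration only.

def digital_minds_runnable(
    current_compute_flops: float = 1e21,   # assumed current global compute stock (FLOP/s); placeholder
    annual_growth_factor: float = 3.0,     # assumed yearly growth in compute after AI automation; placeholder
    years: int = 10,                       # time horizon in years; placeholder
    consumer_share: float = 0.1,           # assumed fraction of compute owned by private consumers; placeholder
    flops_per_mind: float = 1e16,          # assumed compute to run one digital mind in real time; placeholder
) -> float:
    """Return a crude estimate of digital minds runnable on consumer hardware."""
    future_compute = current_compute_flops * (annual_growth_factor ** years)
    consumer_compute = future_compute * consumer_share
    return consumer_compute / flops_per_mind


if __name__ == "__main__":
    n = digital_minds_runnable()
    print(f"Toy estimate: ~{n:.2e} digital minds runnable on consumer hardware")
```

Even a sketch this simple makes the open questions concrete: which of these parameters are we most uncertain about, and which dominate the answer?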
