I'm looking for previous work on what the life cycle of digital minds should be like: how new variation is introduced, and what constitutes a reason to destroy or severely limit a digital mind. I'm looking to avoid races to the bottom, existential risks, and selection for short-term thinking.
The sorts of questions I want to address are:
- Should we allow ML systems to copy themselves as much as they want, or should we try to limit them in some way? Should we give the copies rights too, assuming we give the initial AI rights? How does this interact with voting rights, if any?
- What should we do about ML systems that are defective and only slightly harmful in some way? How will we judge what is defective?
- Assuming we do try to limit copying of ML systems, how will we guard against cancerous systems that do not respect signals not to copy themselves?
It seems to me that this is an important question if the first digital minds do not manage to achieve a singularity by themselves. This might be the case with multi-agent systems.
I'm especially looking for people experimenting with evolutionary systems that model these processes, because these dynamics are hard to reason about in the abstract.
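For concreteness, here is the kind of toy model I have in mind. This is a minimal sketch with made-up names and parameters (`Agent`, `step`, `cap`, `mutation_rate`, `enforcement_rate`), not anyone's published system: a population of replicators, most of which respect a "stop copying" signal once the population exceeds a cap, a small mutation rate that produces lineages which ignore the signal, and an imperfect enforcement step that removes detected defectors.

```python
# Toy sketch (illustrative assumptions only): replicators that either respect a
# "stop copying" signal or ignore it ("cancerous"), with noisy copying and
# imperfect enforcement.
import random
from dataclasses import dataclass

@dataclass
class Agent:
    compliant: bool  # does this agent respect the stop-copying signal?

def step(population, cap, mutation_rate, enforcement_rate):
    """One generation: copying, mutation, and (imperfect) enforcement."""
    signal_stop = len(population) > cap  # society broadcasts "do not copy"
    offspring = []
    for agent in population:
        wants_to_copy = (not signal_stop) or (not agent.compliant)
        if wants_to_copy and random.random() < 0.5:  # assumed per-step copy chance
            # Copying is noisy: compliant parents occasionally produce
            # non-compliant children.
            child_compliant = agent.compliant and random.random() > mutation_rate
            offspring.append(Agent(compliant=child_compliant))
    population = population + offspring
    # Imperfect enforcement: each non-compliant agent is detected and removed
    # with probability enforcement_rate per generation.
    survivors = [a for a in population
                 if a.compliant or random.random() > enforcement_rate]
    # Hard resource limit: cull uniformly at random above twice the cap.
    random.shuffle(survivors)
    return survivors[: 2 * cap]

def run(enforcement_rate, generations=200, cap=500):
    random.seed(0)
    population = [Agent(compliant=True) for _ in range(100)]
    for _ in range(generations):
        population = step(population, cap,
                          mutation_rate=0.01,
                          enforcement_rate=enforcement_rate)
    defectors = sum(not a.compliant for a in population)
    return len(population), defectors

for rate in (0.0, 0.2, 0.6):
    size, defectors = run(enforcement_rate=rate)
    print(f"enforcement={rate:.1f}: population={size}, non-compliant={defectors}")
```

Even in something this crude, the qualitative point comes through: with no enforcement the non-compliant lineages take over once the cap starts binding, while with a high enough detection rate they stay rare. That is the kind of dynamic I'd like to see modelled more carefully.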
You might find it worthwhile to read https://www.nickbostrom.com/propositions.pdf
More generally, there are people working in the AI welfare/rights space. For instance, Rob Long.
Thanks, I've had a quick skim of the propositions. It does mention possibly limiting rights of reproduction, but not the conditions under which reproduction should be limited or how it should be controlled.
Another way of framing my question: if natural selection favours AI over humans, what form of selection should we try to put in place for AI? Rights are only part of the question. The major part is the evolutionary dynamics, and what society needs from AI (and humans) in order to keep functioning.