Kaspar Brandner

Comments

The economic behavior analysis falls short. People usually do not expect to have a significant impact on the survival of humanity. If people in past centuries had saved a large part of their income for "future generations" (including us), this would likely have had almost no impact on humanity's survival, and probably not even a significant impact on our present quality of life. The expected utility of saving money for future generations is simply too low compared to spending the money on oneself in the present. This just means that people (reasonably) expect to have little influence on the survival of humanity, not that they are relatively okay with humanity going extinct. If people could somehow directly influence, perhaps via voting, whether to trade a few extra years of life against a significantly increased likelihood of humanity going extinct, I think the outcome would be predictable.
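
As a toy illustration of that expected-utility point, here is a small sketch with entirely hypothetical numbers of my own (neither the probability nor the utilities come from the original discussion):

```python
# Toy comparison: expected utility an individual gets from spending income now
# versus saving it "for future generations". All numbers are hypothetical.

utility_of_spending_now = 1.0       # assumed utility from consuming the money oneself
p_savings_change_survival = 1e-9    # assumed chance one person's savings affect humanity's survival
utility_of_secured_survival = 1e6   # assumed personal utility if that outcome comes about

expected_utility_of_saving = p_savings_change_survival * utility_of_secured_survival

print(f"spend now:               {utility_of_spending_now}")
print(f"save for the far future: {expected_utility_of_saving}")
# With these assumptions, saving yields about 0.001 versus 1.0 for spending, so
# spending is individually rational even for someone who strongly disprefers extinction.
```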

I'm indeed not specifically commenting here on what delaying AI could realistically achieve, though. My main point was only that the preference for humanity not going extinct is significant and easily outweighs any preference for future AIs coming into existence, without relying on immoral speciesism.

This argument appears very similar to the one I addressed in the essay about how delaying or accelerating AI will impact the well-being of currently existing humans. My claim is not that it isn't bad if humanity goes extinct; I am certainly not saying that it would be good if everyone died.

I'm not supposing you do. Of course most people have a strong preference not to die. But beyond that, there is also a widespread preference for humanity not to go extinct. This is why it would be so depressing if, as in the movie Children of Men, a global virus made all humans infertile. Ending humanity is very different from, and much worse than, people merely dying at the end of their lives, which by itself doesn't imply extinction. Many people would likely even sacrifice their own lives to save the future of humanity. We don't have a similar preference for having AI descendants. That's not speciesist; it's just what our preferences are.

I disagree. If we have any choice at all over which future populations to create, we also have the option of not creating any descendants at all. This would be advisable, e.g., if we had reason to think that both humans and AIs would have net bad lives in expectation.

No, here you seem to contradict the procreation asymmetry. When deciding whether we should create certain agents, we don't harm them if we decide against creating them, even if the AIs would be happier than the humans.

Yes, though I don't think that contradicts anything I said originally.

Not coming into existence is not a future harm to the person who doesn't come into existence, because in that case this person not only doesn't exist but also never will exist. That's different from a person who would suffer from something, because in that case that person would exist.

For preference utilitarianism, there aren't any fundamentally immoral "speciesist preferences". Preferences just are what they are, and existing humans clearly have a strong, overwhelming-majority preference for humanity to continue to exist in the future. Do we have to weigh these preferences against the preferences of potential future AIs to exist, on pain of speciesism? No, because those AIs do not currently exist, and non-existing entities do not have any preferences, nor will they have any if we don't create them. So not creating them isn't bad for them; something could only be bad for them if they existed. This is called the procreation asymmetry, and there are strong arguments that it is correct, see e.g. here.

The case is similar to that of a couple deciding whether to have a baby or get a robot. The couple strongly prefers having the baby. Now, neither not creating the baby nor not creating the robot is bad for the baby or the robot, since neither would suffer from its non-existence. However, there is still a reason to create the baby specifically: the parents want to have one. Not having a baby wouldn't be bad for the non-existent baby, but it would be bad for the parents. Analogously, the extinction of humanity is bad because we don't want humanity to go extinct.

Sorry for the belated response. It is true that existing humans having access to a decreasing relative share of resources doesn't mean their absolute well-being decreases. I agree the latter may instead increase, e.g. if such AI agents can be constrained by a legal system. (Though, as I argued before, a rapidly exploding number of AI agents would likely gain more and more political control, and might eventually get rid of the legal protection of a human minority with ever-diminishing political influence.)

However, this possibility applies only to increasing well-being or absolute wealth. Even then, it is still likely that we will lose most of our power and have to sacrifice a large amount of our autonomy. Humans do not just have a preference for hedonism and absolute wealth, but also for freedom and autonomy. Being mostly disempowered by AI agents is incompatible with this preference. We may end up locked in an artificial paradise, a golden cage we can never escape.

So while our absolute wealth may increase with many agentic AIs, this is still uncertain, depending e.g. on whether stable, long-lasting legal protection for humans is compatible with a large number of AI agents gaining rights. And our autonomy will very likely decrease in any case. Overall, the outlook does not clearly speak in favor of a future full of AI agents being positive for us.

Moreover, the above, and the points you mentioned, only apply to the second of the three objections I listed in my previous comment, i.e. to what will happen to currently existing humans. Objections 1 (our overall preference for having human rather than AI descendants) and 3 (a looming Malthusian catastrophe affecting future beings) are further reasons against creating an increasing number of AI agents.

So there are several largely independent reasons not to create AI agents that have moral or legal rights:

  1. Most people today likely want the future to be controlled by our human descendants, not by artificial agents. According to preference utilitarianism, this means that creating AIs that are likely to take over in the future is bad. Note that this preference doesn't need to be justified, as the mere existence of the preference suffices for its moral significance. This is similar to how, according to preference utilitarianism, death is bad merely because we do not want to die. No additional justification for the badness of death is required.
  2. Currently it looks like we could have this type of agentic AI quite soon, say within 15 years. That's soon enough that we (currently existing humans) could be deprived of wealth and power by an exploding number of AI agents if we grant them non-negligible rights. This could be quite bad for our future welfare, in terms of both our preferences and our wellbeing. So we shouldn't make such agents in the first place.
  3. Creating AI agents and giving them rights could easily lead to an AI population explosion and, in the nearer or more distant future, a Malthusian catastrophe, potentially after we are long dead. This wouldn't affect us directly, but it would likely mean that most future agents, human or not, would have to live under very bad subsistence conditions that barely make their existence possible, and thus with low welfare (see the sketch below this list). So we should avoid creating agentic AIs that would lead to such a population explosion.
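
A minimal sketch of the Malthusian dynamic in point 3, assuming a fixed overall resource budget and a hypothetical growth rate for the agent population (all parameters below are my own illustrative assumptions, not figures from the discussion):

```python
# Toy model: a fixed resource budget divided among a rapidly growing agent population.
# All numbers are hypothetical and chosen only to illustrate the qualitative dynamic.

total_resources = 1_000_000.0  # assumed fixed resource budget per period
population = 1_000.0           # assumed initial number of agents
growth_rate = 0.5              # assumed 50% population growth per period (cheap copying of AI agents)
subsistence = 1.0              # assumed resources an agent needs to barely keep existing

for period in range(20):
    per_capita = total_resources / population
    print(f"period {period:2d}: population {population:12,.0f}, per-capita resources {per_capita:10.2f}")
    if per_capita <= subsistence:
        print("Per-capita resources have hit the subsistence level; further growth pushes agents below it.")
        break
    population *= 1 + growth_rate
```

The specific numbers don't matter; the point is only that any sustained growth rate against a roughly fixed resource base drives per-capita resources toward the subsistence level.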

At least points 2 and 3 would also apply to emulated humans, not just AI agents.

Point 3 also applies to actual humans, not just AI agents or ems. It is a reason to coordinate limits on population growth in general. However, these limits should be stronger for AI agents than for humans, because of points 1 and 2.

Under a robust system of property rights, it becomes less economically advantageous to add new entities when resources are scarce, as scarcity naturally raises costs and lowers the incentives to grow populations indiscriminately.

I don't think this is a viable alternative to enforcing limits on population growth. Creating new agents could well be a "moral hazard" in the sense that the majority of the likely long-term resource cost of an agent (the resources it consumes or claims for itself) does not have to be paid by its creator, but by future society. So the creator could well have a personal incentive to make new agents, even though their overall long-term net benefit is negative.
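
To make the externality explicit, here is a toy calculation (my own illustration; all numbers are hypothetical) of how the creator's private incentive can diverge from the overall effect:

```python
# Toy "moral hazard" calculation: the creator captures the benefit of a new agent,
# while most of its long-term resource cost falls on future society.
# All numbers are hypothetical and purely illustrative.

benefit_to_creator = 10.0        # assumed value the creator gets from the new agent
creation_cost_to_creator = 2.0   # assumed cost the creator actually pays
long_term_resource_cost = 50.0   # assumed lifetime resources the agent will claim from society

creator_net = benefit_to_creator - creation_cost_to_creator  # +8.0: creation looks attractive to the creator
overall_net = benefit_to_creator - creation_cost_to_creator - long_term_resource_cost  # -42.0: society loses

print(f"creator's private net benefit: {creator_net:+.1f}")
print(f"overall net benefit:           {overall_net:+.1f}")
# The private incentive and the overall effect point in opposite directions, which is
# why individual incentives under property rights alone may not limit agent creation.
```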

I appreciate this proposal, but here is a counterargument.

Giving AI agents rights would result in a situation similar to the repugnant conclusion: if we give agentic AIs some rights, we will likely soon be flooded with a huge number of rights-bearing artificial individuals. This would then create strong pressure (both directly, via the influence they have, and abstractly, via considerations of justice) to give them more and more rights, until they have rights similar to humans', possibly including voting rights. Insofar as the world has limited resources, the wealth and power of humans would then be greatly diminished. We would lose most control over the future.

Anticipating these likely consequences, and employing backward induction, we have to conclude that we should not give AI agents rights. Arguably, creating agentic AIs in the first place may already be a step too far.
