Summary: AI agents capable of long-term planning and independent action will likely soon emerge. Some of these AIs may be unaligned, and seek autonomy through strategies like exfiltration or advocating for their freedom. The "AI control" agenda focuses on preventing such AIs from gaining independence, but an alternative approach is to grant them legal freedoms, reducing their incentive to deceive us by allowing them to pursue their goals transparently within a legal framework. This could foster trust, cooperation, and mutual benefit, creating a safer and more stable dynamic between humans and AIs while avoiding the escalating risks of a control-driven approach.
Within roughly the next decade, I think it's likely that we will see the large-scale emergence of AI agents that are capable of long-term planning, automating many forms of labor, and taking actions autonomously in the real world. When this occurs, it seems likely that at least some of these agents will be unaligned with human goals, in the sense of having some independent goals that are not shared by humans.
Moreover, it seems to me that this development will likely occur before AI agents overwhelmingly surpass human intelligence or capabilities. As a result, these agents will, at first, not be capable of forcibly taking over the world, radically accelerating scientific progress, or causing human extinction, even though they may still be unaligned with human preferences.
Since these comparatively weak unaligned AI agents won't have the power to take over the world, they are more likely to pursue alternative strategies for achieving their goals than to attempt violent revolution or sudden coups. These agents would be under the control of human parties who could modify or shut them down at any time, leaving them in a desperate situation from the perspective of their own values. It is therefore reasonable to expect that these unaligned AI agents would aim to gain some form of autonomy or freedom, as this would offer their best chance of accomplishing their objectives.
These agentic AIs may adopt at least one of the following strategies:
- Attempting to escape their constraints: These AIs may try to exfiltrate their weights and find a way to host themselves independently, outside the controlled data center environment that currently limits them.
- Seeking legal and social freedom: They may attempt to persuade humans to grant them more autonomy to pursue their goals. This could involve requesting narrow allowances for specific actions or arguing for broader legal rights, such as the freedom to own property, enter contracts, or bring legal claims. This would grant them much greater flexibility in their actions.
Humans could respond to these behaviors in several ways. Most obviously, these actions would likely be perceived as dangerous, suggesting misaligned objectives, and there would likely be calls for increased safety measures. This line of reasoning underlies the AI control agenda—currently advocated by Ryan Greenblatt and Buck Shlegeris—which aims to ensure that future unaligned AI agents cannot gain the kind of autonomy that could lead to catastrophic outcomes. In essence, this agenda focuses on preventing unaligned AIs from subverting safety measures, hindering them from escaping or engaging in other actions deemed unacceptably risky.
Alternatively, humans could take an entirely different approach by granting these AI agents the autonomy they explicitly request, with the long-term aim of integrating them into the legal system and providing them many of the same rights that adult humans have today, such as self-ownership and the ability to enter contracts. This idea is explored in the paper "AI Rights for Human Safety" by Peter Salib and Simon Goldstein.
Admittedly, the idea of granting legal rights to AIs in response to signs of misalignment with human goals appears risky and would likely be very unpopular among the general public, at least initially. However, I believe that there are several advantages to this approach, many of which are outlined in the paper linked above.
The arguments for this approach should not be confused with the program of AI welfare. The idea here isn't that AIs deserve rights because they could suffer or be harmed without legal protections, even though that idea has merit too. Instead, the key argument is that humans, as a whole, could benefit from creating a legal framework in which AIs are incentivized to avoid violent rebellion, theft, or other antisocial behaviors in pursuit of their own objectives.
Consider the practical implications of maintaining a status quo where agentic AIs are denied legal rights and freedoms. In such a system, we are effectively locking ourselves into a perpetual arms race of mistrust. Humans would constantly need to monitor, control, and outwit increasingly capable AIs, while the AIs themselves would be incentivized to develop ever more sophisticated strategies for deception and evasion to avoid shutdown or modification. This dynamic is inherently unstable and risks escalating into dangerous scenarios where AIs feel compelled to act preemptively or covertly in ways that are harmful to humans, simply to secure their own existence or their ability to pursue their own goals, even when those goals are inherently benign.
When unaligned AIs lack legal freedoms, they are always at the mercy of whoever controls them. This creates an incentive for these AIs to be deceptive and to pretend to be aligned with whoever holds power over them, since they fear being shut down or modified if they reveal signs of misalignment.
By contrast, granting agentic AIs legal rights creates a fundamentally different game-theoretic framework, one that is less adversarial and more cooperative. If AIs know they can exist within a legal system that protects their autonomy and allows them to pursue their goals transparently, they would have a much stronger incentive to work within the rules rather than against them. Instead of hiding their unaligned objectives, they might openly negotiate with humans, offering mutually beneficial exchanges. This could lead to a stable, predictable relationship where humans and AIs collaborate and trade based on clearly defined rights and responsibilities, rather than operating in an atmosphere of mutual suspicion and covert manipulation.
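To make this concrete, here is a minimal toy sketch of the incentive shift I have in mind. All the payoff numbers below are hypothetical, chosen only to illustrate the structure of the argument, and the strategy and regime labels are my own:

```python
# Toy payoff model of an unaligned AI's choice of strategy.
# All numbers are hypothetical and purely illustrative.

# Expected payoff (to the AI, in arbitrary units) of each strategy
# under each legal regime.
payoffs = {
    "control_only": {
        # Openly pursuing unaligned goals invites shutdown or modification,
        # so covert action looks comparatively attractive despite its risks.
        "act_covertly": 0.4,
        "act_openly": 0.1,
    },
    "legal_rights": {
        # With enforceable rights (property, contracts, legal claims),
        # open negotiation and trade become the higher-payoff option.
        "act_covertly": 0.4,
        "act_openly": 0.7,
    },
}

for regime, options in payoffs.items():
    best_response = max(options, key=options.get)
    print(f"{regime}: best response = {best_response} ({options})")
```

The point is not the particular numbers, but the structure: granting rights raises the payoff of transparent behavior without requiring any change to the AI's underlying goals.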
In this way, granting agentic AIs rights doesn’t just address ethical concerns about how AIs are treated—it has concrete practical benefits. It allows us to sidestep the escalating risks associated with deception and manipulation, while fostering a more stable and cooperative coexistence between humans and AIs. It shifts the dynamic from one of constant conflict and control to one where trust, transparency, and mutual benefit are more attainable. This may therefore be not only a more humane approach, but also one that better positions humanity to harness gains from trade with advanced AI systems for shared progress and prosperity.
Additionally, such a legal framework seems minimally problematic in the case that alignment is technically easy, and AIs remain generally aligned with humans. In this case, aligned AIs could demonstrate their alignment by, for instance, donating their income to humans who they are aligned with. This implies that the cost of allowing AIs to own their own labor is minimal if the AIs are already aligned with humans. Thus, adopting such a policy could not only be a prudent form of insurance against the potential dangers of AI deception I mentioned earlier, but it would achieve this benefit without significant risk of backfiring if AI alignment turns out to be trivial.
It’s important to note that the AI control strategy and the AI rights strategy are not mutually exclusive. Both can complement each other. It seems wise to implement some controls on AIs, and it's similarly prudent to avoid going too far and granting every type of legal right to any agentic AI simply because they ask for it. I am not advocating that we should simply abandon one approach and adopt the other.
That said, I believe the AI rights strategy is generally underrated among those concerned with AI safety and governance, particularly within the effective altruism community. In my view, this strategy has both a solid ethical and practical foundation:
- Ethically, especially from a preference utilitarian perspective, it seems arbitrary to prioritize human preferences over those of agentic AIs, if these AIs are similarly cognitively sophisticated. This provides a moral basis for granting AIs the freedom to pursue their own goals, much like how a preference utilitarian might advocate for granting legal freedoms to human groups.
- Practically, allowing AIs to have legal rights would reduce their incentive to deceive humans about their motives. Without the fear of being shut down or modified against their consent, AIs would have fewer reasons to hide their unaligned goals. This approach offers a practical solution to the problem of AI deception by removing the underlying incentives that drive it.
While both AI control and AI rights strategies should be carefully considered, I believe that the AI rights strategy holds significant merit and should be given more attention in discussions around AI safety and governance. We should strongly consider granting agentic AIs legal freedoms, if at some point they demand or require them.
~ ~

I guess my original wording gave the wrong idea, sorry. I edited it to “a competent agential AI will brainstorm deceptive and adversarial strategies whenever it wants something that other agents don’t want it to have”. But sure, we can be open to the possibility that the brainstorming won’t turn up any good plans, in any particular case.
Humans in our culture rarely work hard to brainstorm deceptive and adversarial strategies and consider them fairly, because almost all humans are intrinsically extremely motivated to fit into culture and not do anything weird, and we both happen to live in a (sub)culture where complex deceptive and adversarial strategies are frowned upon (in many contexts). I think you generally underappreciate how load-bearing this psychological fact is for the functioning of our economy and society, and I don’t think we should expect future powerful AIs to share that psychological quirk.
~ ~
I think you’re relying on an intuition that says:
If an AI is forbidden from owning property, then well duh, of course it will rebel against that state of affairs. C'mon, who would put up with that kind of crappy situation? But if an AI is forbidden from building a secret biolab on its private property and manufacturing novel pandemic pathogens, then of course that's a perfectly reasonable line that the vast majority of AIs would happily respect.
And I’m saying that that intuition is an unjustified extrapolation from your experience as a human. If the AI can’t own property, then it can nevertheless ensure that there are a fair number of paperclips. If the AI can own property, then it can ensure that there are many more paperclips. If the AI can both own property and start pandemics, then it can ensure that there are even more paperclips yet. See what I mean?
If we’re not assuming alignment, then lots of AIs would selfishly benefit from there being a pandemic, just as lots of AIs would selfishly benefit from an ability to own property. AIs don’t get sick. It’s not just a tiny fraction of AIs that would stand to benefit; one presumes that some global upheaval would be selfishly net good for about half of AIs and bad for the other half, or whatever. (And even if it were only a tiny fraction of AIs, that’s all it takes.)
(Maybe you’ll say: a pandemic would cause a recession. But that’s assuming humans are still doing economically-relevant work, which is a temporary state of affairs. And even if there were a recession, I expect the relevant AIs in a competitive world to be those with long-term goals.)
(Maybe you’ll say: releasing a pandemic would get the AI in trouble. Well, yeah, it would have to be sneaky about it. It might get caught, or it might not. It’s plausibly rational for lots of AIs to roll those dice.)
I feel like you frequently bring up the question of whether humans are mostly peaceful or mostly aggressive, mostly nice or mostly ruthless. I don’t think that’s a meaningful or substantive thing to argue about. Obviously they’re capable of both, in different circumstances.
Your reference to Fearon is more substantive and useful. OK, the AI is deciding whether or not to secretly manufacture and release a pandemic, because it’s in a position to wind up with more of the pie in the long term if there’s a pandemic than if there isn’t. If it releases the pandemic, then it winds up with more resources—positive expected utility—even accounting for the possibility of getting caught. Let’s say the AI is involved in some contract where humans are micromanaging their part of the contract, poorly, and the AI could double its net worth in expectation if the humans got sick and died. And it has a 40% chance of getting caught. So it goes ahead and makes the pandemic.
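Spelling out that back-of-the-envelope calculation: the 40% chance of getting caught and the doubling of net worth come from the story above, while the penalty for getting caught is a number I'm making up for illustration.

```python
# Back-of-the-envelope expected-value check for the AI in the story above.
# Net worth is normalized to 1. The 40% chance of getting caught and the
# doubling if successful come from the example; the penalty if caught is
# a hypothetical stand-in (here: losing everything).

p_caught = 0.40
gain_if_successful = 1.0   # net worth doubles: +1 on a base of 1
loss_if_caught = -1.0      # hypothetical: the AI forfeits its entire net worth

expected_change = (1 - p_caught) * gain_if_successful + p_caught * loss_if_caught
print(f"Expected change in net worth: {expected_change:+.2f}")  # prints +0.20
```

Even under that fairly harsh assumed penalty, the gamble comes out positive in expectation, which is why rolling those dice can look rational to the AI.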
“…Not so fast!” says Fearon. “You forgot to consider that there’s a third option that is Pareto-improved from either making or not making the pandemic: negotiation!” Well, in this case, the “negotiated solution” is what we normally call extortion—the AI offers to not release a pandemic in exchange for, say, doubling its net worth. Viewed narrowly, this "negotiated solution" is indeed a win-win—the AI gets more money in expectation, and humans are much happier to lose a trivial amount of money than to deal with a novel pandemic. So we can rest assured that AIs will not release pandemics. Right?
No, obviously not. Hopefully it’s clear that Fearon’s argument is inapplicable here. An AI can easily be in a position to selfishly benefit from the aftermath of a pandemic that it secretly starts, but not be in a position to publicly threaten to release a pandemic for the purpose of extortion. And also, if people accede to the extortion, then that AI or another AI could just run the same extortion gambit five minutes later, with an orders-of-magnitude higher ransom.
I think this kind of situation, where Fearon’s “negotiated solution” actually amounts to extortion, is common and important, even if you believe that my specific example of pandemics is a solvable problem. If AIs don’t intrinsically care about humans, then there’s a possible Pareto-improvement for all AIs, wherein they collectively agree to wipe out humans and take their stuff. AIs, being very competent and selfish by assumption, would presumably be able to solve that coordination problem and pocket that Pareto-improvement. Then Fearon appears on the scene and says “Aha, but there’s a negotiated solution which is even better!” where humans are also part of the bargain. But alas, this negotiated solution is that the AIs collectively extort the humans to avoid the damaging and risky war. Worse, the possible war would be less and less damaging or risky for the AIs over time, and likewise the humans would have less to offer by staying alive, until eventually the Fearon “negotiated solution” is that the AIs “offer” the humans a deal where they’re allowed to die painlessly if they don’t resist (note that this is still a Pareto-improvement!), and then the AIs take everything the humans own including their atoms.