What if the most significant privacy harms from AI are not caused by bad actors or data leaks — but by systems that simply don't work well enough to help you without demanding more than you meant to share? This essay argues that inclusive design is not a supplement to privacy engineering. It is privacy engineering.
In 1890, Louis Brandeis, writing with Samuel Warren and decades before his appointment to the Supreme Court, defined privacy as the right to be let alone. It remains the most elegant articulation of the principle we have. But there is an assumption embedded in it that almost no one examines: to be let alone, you must first be able to act alone.
A person who cannot enter a building without asking a stranger to open the door has lost something more than physical access. They have been forced to disclose their presence, their destination, and their need to someone they did not choose. A person who cannot complete a digital transaction without calling a support line must narrate their financial situation to an agent. A person who cannot write a prompt that an AI model understands must either give up on the task or recruit an intermediary who will see their query, their intent, and their confusion.
In each case, the environment's failure to accommodate the person results in forced disclosure. The person did not choose to share this information. The architecture was chosen for them. This is not a failure of access. It is an intrusion into privacy, imposed by design.
* * *
The Theft of Wholeness
We are living through the rise of agentic AI. A business owner who uses an AI agent to write code is called innovative. A developer who delegates a routine task to an automated pipeline is called efficient. A student who asks ChatGPT to outline an essay is called resourceful. In each case, the technology is understood as an extension — something added to an already complete person. The person plus the tool equals more capability. No one questions their autonomy. No one asks what they are lacking. The tool is a gain, and the person remains whole.
Yet when a person with a disability uses the same technology for a daily task, the framing inverts. The technology is no longer an extension. It is a compensation. It is not something added to a whole person but something that fills a gap in an incomplete one. The system has decided, before the person has said a word, that they are defined by an absence — and that the technology exists to cover it. One user gained new hands. The other was given a prosthesis for hands that they supposedly do not have.
This is not a question of fairness, though it is unfair. It is a violation of integrity. The system has divided the person into “who they are” and “what they lack,” and classified the technology as a patch on a deficit rather than a tool in the hands of a complete human being. The person did not ask to be defined this way. The architecture defined them. And in doing so, it stole something more fundamental than data: it stole the person’s right to be whole.
This theft has privacy consequences that compound. A person classified as “assisted” rather than “augmented” must continually justify their use of the tool. Each justification requires disclosure: of their condition, their limitations, their medical history, their daily struggles. The business owner who uses an AI agent is never asked why. The person with a disability is always asked why. And each time they answer, they are forced to perform the very incompleteness that the system projected onto them in the first place.
* * *
Access Is Not a Point. Access Is a Route.
Most accessibility frameworks treat access as a binary question. Can the person enter the building? Yes or no. Can the person use the website? Yes or no. Does the bus route exist? Yes or no. If yes, compliance is achieved. If no, remediation is needed. This logic is not accidental. It persists because it is easy to administer. A college can report that a building has an accessible restroom and place a checkmark on an audit form, even if the automatic door button has been broken for months. Fixing the button is operationally difficult. Reporting its existence is trivially easy. Point-based compliance rewards the report, not the repair.
But access is not a point. Access is a topology — a route from intention to outcome, with friction at every node. And the friction is where the privacy cost accumulates.
Consider a bus that runs, but whose schedule forces a rider to arrive forty minutes early or fifteen minutes late to every destination. The bus company, when confronted, says: "This is not an accessibility problem; the service exists." And they are correct by the logic of point-based access. The route exists. The stop is there. The rider can board.
But the rider’s life has been restructured around the system’s constraints. Their autonomy has been quietly transferred from them to the schedule. They did not choose to spend forty minutes waiting. The architecture was chosen for them. And in those forty minutes, they are visible, stationary, and exposed in ways they would not be if the system actually worked.
Now consider a customer service bot that confirms an order four times but cannot answer when a refund will arrive. The customer gives up and calls a human agent. To the human, they must re-narrate the entire situation: what they bought, why they returned it, what the bot said, and what they need. The bot had access to all of this. The customer already disclosed it once. But the architecture’s failure forced a second disclosure, to a second party, with no mechanism for the customer to limit what they share. The bot did not violate a privacy policy. The architecture violated a privacy principle: it demanded more disclosure than the task required, because it was not designed well enough to complete the task with the disclosure already given.
There is a concept in communication research that illuminates what is happening here: the distinction between perceived support and received support. The system — the bot, the bus schedule, the AI assistant — perceives itself as providing support. The service exists. The response was generated. The route is running. But support that does not meet the user where they actually are is not received as support. It is received as an additional burden. The bot that confirms an order four times without answering the actual question has not helped the customer. It has consumed the customer’s time, patience, and willingness to engage — and then forced them to start over with a human. The gap between what the system believes it is providing and what the user actually experiences is where the privacy harm accumulates. Because every failed interaction is not neutral. It costs the user another disclosure, another explanation, another moment of being seen in a way they did not choose.
This is data minimization failure by design, not by breach. And it is invisible to any compliance framework that asks only whether the point of access exists.
* * *
The Prompt Gap
Prompt engineering is being treated as a skill users must acquire before they can access the full value of AI systems. This framing is familiar. Before natural language search, users had to learn query syntax to find information in databases. Before graphical user interfaces, users had to learn command lines to operate computers. In each case, the technology eventually adapted to the human, rather than requiring the human to adapt to the technology. We are at the same inflection point with AI.
For a native English speaker with a technical background and familiarity with how language models work, writing an effective prompt is straightforward. For a non-native speaker, a neurodiverse user, an elderly person unfamiliar with AI conventions, or anyone who simply thinks differently than the model expects, the gap between what they mean and what the model understands can be large. And that gap has privacy consequences.
When a model misunderstands a query, the user reformulates. Each reformulation reveals more context, more intent, more personal information than the original query contained. A user who wanted to ask a simple question about medication side effects, but whose initial prompt was too vague, may end up disclosing their specific diagnosis, their dosage, and their concerns across three or four attempts to be understood. The model did not request this information. The model’s inability to understand the first prompt created the conditions under which the user felt compelled to provide it.
This is involuntary disclosure driven by interaction design. And it disproportionately affects the users who are least equipped to manage the privacy implications of what they reveal.
* * *
The Architecture of Involuntary Intimacy
There is a phrase that captures what happens when environments are not designed for the people who use them: involuntary intimacy. It describes the condition of having your life made legible to others not because you chose to share it, but because the world was not built for you to navigate privately.
A person with a visible disability navigating an inaccessible city is in a state of involuntary intimacy with every stranger who holds a door, every passerby who stares, every clerk who speaks louder than necessary. Their daily life is a series of micro-disclosures they never consented to. The inaccessible environment did not steal their data. It did something worse: it made their existence into a public performance.
Digital systems reproduce this dynamic with precision. A poorly designed AI assistant that forces a user through repeated clarification loops is creating involuntary intimacy between the user and the system. A smart toothbrush that requires an app, a Bluetooth connection, and a cloud account to function has turned a private act of hygiene into a data relationship the user never meaningfully chose. A content moderation system that blocks a legitimate query forces the user to rephrase, re-explain, and reveal more of their intent than they would have if the system had simply worked.
In each case, the user’s privacy was not breached by a bad actor. It was eroded by an architecture that was not inclusive enough to let them accomplish their goal without friction. The friction is the intrusion.
Communication privacy management theory offers a precise way to understand what is lost here. Privacy is not a binary state — silent or disclosed. It is a process of boundary management. A person with full control over their privacy boundaries decides what to share, with whom, when, in what depth, and on what terms. This is agency. It is the difference between choosing to tell a doctor about a symptom and being forced to describe that symptom to a stranger on a bus because the ramp does not work and you need to explain why you cannot use the stairs.
When a system is not inclusive, it does not merely force disclosure. It forces disclosure at a depth the user would never have chosen. To get help with a shoe return, you should not have to explain your foot condition to a second agent. To request a wheelchair-accessible vehicle, you should not have to describe the specifics of your paralysis. To get an AI model to understand a health question, you should not have to reveal your diagnosis, your medication, and your fears across four attempts at being understood. The task required a surface-level interaction. The architecture demanded intimate detail. This is not a data minimization failure in the regulatory sense. It is a dignity failure. The system required more vulnerability than the situation warranted, because it was not designed well enough to function without it.
* * *
What This Means for AI Development
Companies building AI systems invest heavily in privacy compliance: data protection impact assessments, consent flows, encryption, access controls, retention policies. These are necessary. They are not sufficient.
If a model’s safety filters reject a legitimate query, forcing the user to seek help from a human intermediary, the privacy cost of that interaction is real, even though no data protection regulation was violated. If an AI agent’s interface is designed for fluent, technical users and creates friction for everyone else, the resulting cycle of reformulation and over-disclosure is a privacy harm, even though no consent was bypassed. If a system designed to protect a vulnerable user does so by restricting their access, the restriction itself may create the dependency and exposure it was meant to prevent.
Privacy audits that check only whether data is properly collected, stored, and processed miss the deeper question: does the architecture of this system compel users to disclose more than they need to, more often than they should, to more parties than necessary? Not because of a policy failure, but because of a design failure?
This is especially urgent because privacy, for many users, is not a regulatory abstraction. It is a matter of physical safety. A wheelchair user who pays online does so not out of preference but because withdrawing cash from an ATM and carrying it home is a security risk they cannot afford. For them, a functional, inclusive payment system is not a convenience. It is protection. And when that system fails — when it glitches, demands repeated authentication, or forces a call to support — it does not merely inconvenience the user. It pushes them back toward the physical risk they were trying to avoid. Privacy compliance frameworks built around legal liability and consumer complaints do not capture this. They ask whether the user can file a grievance. They do not ask whether the user is safe.
Inclusive design is not a supplement to privacy engineering. It is privacy engineering. A system that works for a user on the first attempt, without requiring reformulation, without forcing escalation to a human, without demanding that the user explain themselves — that system has achieved something no consent form can provide. It has given the user the ability to be let alone.
* * *
Privacy Is Not Isolation
There is an objection that must be addressed. If privacy means being let alone, does protecting it mean disconnecting? If the architecture of inclusion requires that a person can move through the world without being seen, does that mean the ideal state is invisibility — and invisibility is just another word for isolation?
No. And the distinction matters more than almost anything else in this argument.
Isolation is absence. It is the phone switched off, the account deleted, the person who opted out of every system and, in doing so, opted out of participation. Isolation protects privacy by eliminating presence. It works the way refusing to leave your house protects you from traffic accidents. But isolation is also deprivation. A person who disconnects to protect their privacy has not gained safety. They have lost access to resources — to navigation, to communication, to services, to community.
Privacy is something else entirely. Privacy is access to resources without paying for that access with vulnerability. It is the phone switched on but not tracked. It is the model that helps you without profiling you. It is being in the room, at the table, in the system — on your own terms, visible to those you choose, legible only to the degree you decide. Privacy is not the absence of connection. It is the ability to control the terms of connection.
Current architectures overwhelmingly present connection and privacy as a binary choice. You are connected and observed, or you are disconnected and safe. Your phone is on and your location is shared, or your phone is off and you have no map. The AI model assists you and sees your queries, or you close the browser and get no help. Inclusion is offered, but only at the price of exposure.
This is the false trade-off that inclusive design must break. The goal is not to make users invisible. It is to make users present, capable, connected, and whole — without requiring them to pay for that presence with their privacy. A system that achieves this has not merely complied with a regulation. It has done something much harder: it has made participation and dignity compatible.
* * *
The Right to Remain Whole
There is one more dimension to this argument, and it may be the most important.
The UN Convention on the Rights of Persons with Disabilities enshrines two principles that, taken together, describe exactly what inclusive design must protect. Article 17: the right to integrity of the person. Article 19: the right to live independently and be included in the community. These are not abstract aspirations. They are a precise description of what exclusionary architecture destroys.
When a system forces a person to explain their diagnosis to one agent, repeat their transaction history to another, and describe their physical limitations to a third, it does not merely collect data. It divides the person. Their wholeness — their integrity — is broken into fragments distributed across systems, each of which holds a piece but none of which sees a complete human being. The person becomes a set of tickets, a sequence of disclosures, a trail of explanations. They are no longer whole. They have been partitioned by architecture.
Integrity, in both senses of the word, is what is at stake. Physical integrity: the body is not disassembled into symptoms for inspection. Informational integrity: the person is not fragmented into data points distributed across agents and systems. A person who can accomplish their goal in a single interaction, without being forced to decompose themselves into legible parts for an audience they did not choose, has retained their integrity. They have remained whole.
This is what inclusive design protects. Not access in the narrow sense of whether a door is open or a button is clickable. But the deeper condition of being able to move through the world — physical and digital — without being divided, without being made to perform your limitations, without surrendering pieces of yourself at every point of friction.
That is what privacy means. And that is why inclusive design is not a matter of social responsibility, corporate ethics, or regulatory compliance. It is the infrastructure that makes privacy possible. Without it, the right to be let alone is available only to those for whom the system was already built. And a right that requires wholeness but is offered only to those the architecture has not yet fragmented is not a right at all. It is a privilege, maintained by design.
* * *
Nikita Trafimovich
PhD Candidate, Interpersonal Communication, The University of Texas at Austin
MA, Conflict Resolution, Brandeis University
CIPP/US | CIPM | AIGP – in progress.
