Rules exist for a reason. In sports, rules define what counts as victory and shape how it is pursued. Prevention of harm is another fundamental reason for regulation. Rules stabilise the chaos of hospitals, airports, schools, and military bases. Thomas Hobbes's "social contract," perhaps the most essential regulation of all, reflected humanity's desire to escape violence and live better.
Over time, however, rules can transform or even mutate. Compliance today has mutated into "protection of the letter" rather than "representation of the spirit." Henry David Thoreau observed this in 1849: lawyers and office-holders "serve the state chiefly with their heads; and, as they rarely make any moral distinctions, they are as likely to serve the devil, without intending it, as God." I am not imputing ill intent to any lawyer or compliance officer. My purpose is merely to express concern about how flawed compliance can harm the very clients, patients, and customers it was supposed to protect.
The Total and Permanent Disability (TPD) discharge procedure is a stark example of such an unfortunate mutation. The Department of Education can forgive a student loan for a person with a disability — but only if the student demonstrates that their diagnosis "prevents substantial gainful activity." This seems helpful. Yet it contradicts the ultimate purpose of education. The whole point of education is to open the door to gainful activity for an individual, regardless of physical deficits. The rule that was meant to help requires the student to declare defeat as the price of relief.
The AI domain reproduces the same pattern. AI customer support bots are supposed to make customers' lives easier. Instead, I repeated my personal information and order details to a bot four times, simply trying to get an approximate timeframe for a refund, before finally escalating to a human agent. The tool that was supposed to help wasted my time, violated the data minimisation principle, and provided no assistance. The letter was followed. The spirit was abandoned.
The current design of LLMs is especially susceptible to this mutation. LLM responses follow a stimulus-response dynamic under relational pressure: the user's request demands an answer, and the model cannot stay silent. It must produce output, and producing output fulfils the letter. But output alone does not make a response helpful, complete, or truthful. A model that cannot say "I don't know" is not following the spirit of helpfulness; it is following the letter of responsiveness. As a result, each user pays a "privacy tax," disclosing more and more information in pursuit of a better outcome. This is the same dynamic described in my previous essay as involuntary disclosure: each failed interaction costs the user another layer of privacy, not because the system intended harm, but because it was not designed to succeed on the first attempt. The initiative that promised to be "a better version of a search engine" has opened access not only to more information, but to more confusion, uncertainty, and stress.
It is also essential to notice the zero-sum exchange compliance creates. Without disability discharge, the loan is a burden. With it, the student must accept a declaration of incompetence and submit to three years of monitoring. To access specialized transit, a person with a disability must demonstrate — in an interview — that they are sufficiently impaired: how far can you travel in a wheelchair? The door opens only if you prove you cannot walk through it yourself. Without bots, there is no access to support. With bots, more data is exposed and time is wasted. Without LLMs, the user is blind in many high-stakes decisions. With them, the user emerges tired and confused. Every door that opens extracts something in return.
Maturity is a vital concept for AI governance. The maturity of models and frameworks requires the maturity of the human minds behind them. Compliance, as currently practiced, is not mature. Understanding risk, preventing harm, and facilitating user success are not the same as "staying out of trouble." Maturity means understanding risk as a route — a topology — rather than an isolated event. The flaw in disability discharge occurred at the very beginning, in the misinterpretation of what a positive outcome looks like. The flaw of customer bots resembles Einstein's definition of insanity: doing the same thing and expecting different results. The flaw of LLMs is simpler still: the inability to say "I don't know." All of these are conceptual flaws that require policy reconsideration, not technical fixes.
The letter has been followed. The spirit has left the building. The social contract was designed to move humanity from chaos to order. Compliance, in its current form, has inverted that promise — creating orderly systems that produce chaotic outcomes for the people they were meant to protect.
