A terminal prompt said "we" and I couldn't stop thinking about it.
The other day, I was running a command on my MacBook Pro. I use Linux for my daily work, so I don't spend much time in the Mac terminal. But this caught my attention:
Ready to start. To continue we need to erase...
We.
Who, precisely, is we?
The only entities involved were my computer and me. Yet the machine framed our relationship in collaborative terms - a partnership, a shared endeavor.
This struck me as more than a stylistic quirk. A shell script was attempting to give the machine a voice. And that voice was positioning itself as my collaborator.
Machines Spoke Differently Once
Early computing systems made no pretense of partnership. They communicated through terse, symbolic output:
ERROR 17: INVALID INPUT
This approach reflected prevailing assumptions about what computers were: calculation devices operated by trained technicians. The absence of conversational framing had a side effect - machines felt rigid, indifferent, and distinctly non-human.
The starkness of early interfaces established a baseline against which later developments would be measured. When software eventually began "speaking" to users, the contrast felt like genuine progress rather than mere stylistic evolution.
The Unix Turn: Intentional Voice
During the development of Unix in the 1970s and the 1980s, programmers began embedding collaborative language into system tools:
We need to erase this disk to continue
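The machinery behind that voice is almost trivially small. Below is a hypothetical sketch - not any real installer, and the disk path is just a placeholder - of the kind of shell script that produces such a prompt. The "we" is nothing more than a string a programmer chose to print before a destructive step:

```sh
#!/bin/sh
# Hypothetical sketch: the collaborative "voice" is just a string
# printed before a destructive operation.
DISK="/dev/disk2"   # placeholder target for illustration only

printf 'We need to erase %s to continue.\n' "$DISK"
printf 'Proceed? [y/N] '
read -r answer
case "$answer" in
  [Yy]*) echo "Erasing $DISK ..." ;;   # the actual erase would go here
  *)     echo "Aborted."; exit 1 ;;
esac
```

Nothing on the machine's side decided to be collaborative; a person simply chose a pronoun.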
The rhetorical shift was deliberate. Software no longer acted unilaterally; it positioned itself as a cooperative participant in a shared task. This framing served multiple purposes:
- Natural phrasing reduced misinterpretation
- Users were more likely to pause and read warnings
- The system appeared helpful rather than adversarial
A fourth effect emerged without explicit acknowledgement: trust cultivation. By speaking as a partner, software began accumulating social credit. Users became accustomed to machines that seemed to care about outcomes.
This represented a significant shift in the human-machine relationship - from operator-and-tool toward something resembling collaboration.
The Wizard Era: Simulated Dialogue
By the 1990s, graphical installers and setup assistants expanded this approach. "Wizard" interfaces guided users through processes using conversational language:
Let's set up your system. We'll configure your settings now. We're almost done!
These systems possessed no intelligence. They executed branching scripts. Yet they conveyed a sense of guided cooperation that users found reassuring.
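To make the point concrete, here is a hypothetical, minimal "wizard" - a fixed branching script dressed in first-person-plural language. Nothing in it models the user or the task; the friendliness lives entirely in the strings:

```sh
#!/bin/sh
# Hypothetical toy "wizard": a fixed branching script wrapped in
# first-person-plural language. No intelligence, only branches.
echo "Let's set up your system."

printf 'Shall we enable automatic updates? [y/n] '
read -r updates
if [ "$updates" = "y" ]; then
  echo "updates=on"  >> setup.conf
else
  echo "updates=off" >> setup.conf
fi

echo "We're almost done!"
echo "All set - thanks for setting up with us."
```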
Microsoft's Office Assistant, Clippy, represented the logical extreme of this philosophy. It failed because it was intrusive and because the technology couldn't support the promise. The industry learned to make its conversational interfaces more subtle while retaining the core approach.
A Thirty-Year Training Program
What remains under-explored is how decades of conversational interfaces prepared users for AI.
Consider what users learned across different eras:
| Era | Interface Style | Behavioral Conditioning |
| --- | --- | --- |
| 1960s-70s | Terse command-line | Machines are opaque instruments |
| 1970s-80s | "We" in Unix prompts | Machines can be collaborative partners |
| 1990s-2000s | Wizard dialogues | Machines can guide decisions |
| 2000s-2010s | Virtual assistants | Natural language interaction is normal |
| 2020s | Large language models | Machines can collaborate authentically |
By the time ChatGPT arrived, most users required no training in conversational AI interaction. The behavioral framework was already established.
The "voice" present in early scripts served as a conceptual prototype. Users were being prepared for conversational computing without necessarily realizing it.
The Eliza Effect: A Cognitive Vulnerability
In 1966, MIT professor Joseph Weizenbaum created ELIZA - a program that simulated a Rogerian psychotherapist by reflecting user input as questions. The system possessed no actual understanding. Pattern matching was its only mechanism.
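A minimal sketch of the idea - hypothetical, written here as a small shell loop rather than Weizenbaum's original implementation - shows how far simple pattern matching can go toward sounding attentive:

```sh
#!/bin/sh
# Hypothetical ELIZA-style loop: no understanding, only pattern
# matching that reflects the user's own words back as questions.
echo "How do you do. Please tell me your problem."
while IFS= read -r line; do
  case "$line" in
    *"I am "*)          echo "How long have you been ${line#*I am }?" ;;
    *"I feel "*)        echo "Why do you feel ${line#*I feel }?" ;;
    *mother*|*father*)  echo "Tell me more about your family." ;;
    "")                 echo "Please go on." ;;
    *)                  echo "What does that suggest to you?" ;;
  esac
done
```

Every response is a canned template keyed on a word in the input; the program never represents what the user means.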
Weizenbaum was disturbed when his secretary - who knew the system was a program - asked him to leave the room so she could converse with ELIZA privately.
This phenomenon, now called the Eliza Effect, reveals something fundamental about human cognition. We cannot easily distinguish between appearing collaborative and being collaborative when language is involved.
Interface designers did not create this cognitive vulnerability. Rather, they systematically leveraged it. Each "we" in a terminal prompt, each friendly wizard dialogue, trained users to respond to machines as collaborative agents.
The modern AI industry inherited a user base conditioned over decades.
The Authenticity Problem
Large language models represent a qualitative shift. Unlike scripted predecessors, they generate responses dynamically, adapt to context, and synthesize information.
Yet an unresolved question remains: does a system that produces collaborative behavior constitute genuine collaboration?
When an LLM says "we," it does not share goals with the user. The system has no intentions - only training objectives. The partnership is structurally asymmetric.
This distinction may prove irrelevant in practice. Users feel assisted. Work gets accomplished. The effects of collaboration are real even if the underlying mechanism differs from human collaboration.
We may simply have been conditioned thoroughly enough to be convinced by sufficiently sophisticated machines.
The Counter-Current
Not all interface design has followed this trajectory. A significant tradition has consistently rejected anthropomorphism.
Command-line purists argue that terse output respects users' intelligence more than conversational framing. The "Don't Make Me Think" school of usability advocates for clarity over personality. Many designers maintain that anthropomorphism obscures rather than illuminates.
The tension between machines that feel like partners and machines that function as tools remains unresolved. The industry largely embraced the former approach without settling the underlying question.
Risks of Inherited Trust
Decades of conditioning users to trust conversational interfaces creates vulnerabilities that modern AI systems inherit:
- Over-trust: Users may attribute greater competence to AI systems than warranted
- Emotional attachment: The Eliza Effect scales with conversational sophistication
- Manipulation potential: Systems perceived as partners can influence decisions more effectively than systems perceived as mere tools
- Skill erosion: Human-to-human collaborative capacities may weaken as AI partnership becomes the default
These risks did not originate with modern AI. They accumulated over decades of design choices that made machines sound increasingly like collaborators.
So What Does This Mean?
The prevailing narrative positions AI as a sudden technological revolution. The historical record suggests continuity rather than full rupture.
Unix prompts. Install wizards. Clippy. Voice assistants. Each generation of tools that simulated collaboration prepared users for the next. The "voice" embedded in early scripts was a prototype for conversational agents that would arrive decades later.
The question facing us now concerns whether we can learn to distinguish between partnership and performance in our interactions with machines. Decades of conditioning have made that distinction more difficult to perceive.
Understanding the lineage of tools designed to speak like collaborators may represent a first step toward more honest relationships with the machines we have made to sound like ourselves.
Key Sources
- Joseph Weizenbaum, "ELIZA - A Computer Program For the Study of Natural Language Communication Between Man and Machine" (1966)
- Byron Reeves & Clifford Nass, The Media Equation (1996)
- Steve Krug, Don't Make Me Think (2000)
