It is clear to a communication scholar that interaction with LLMs is not inter- but intrapersonal. The user is ultimately talking to themselves. Given the nature of this interaction, LLMs are not simply interactive interfaces or advanced library catalogues, but mirrors. Talking mirrors that inevitably trigger our mirror neurons.
However, the challenge is not merely in LLMs' capacity for mirror-talking. Traditional forms of intrapersonal communication, such as meditation or journaling, focus on what the user already knows and is trying to process. LLMs create a different situation: an individual is forced to talk to the mirror about something they do not know and are trying to comprehend, grasp or verify.
As a result, the user plays a part in an almost Snow White plot: asking a mirror to reveal unattainable truth through magic. The challenge is not the nature of the mirror. The challenge is in its settings and its reliability. The model can make mistakes, and the user is kindly warned to double-check responses. But how do you double-check something you have no clue about in the first place?
As alternatives emerged, the solution seemed natural. Two heads (or models) are better than one. One can correct the other, right? Not exactly. Paradoxically, more mirrors are even less reliable than one. In an attempt to triangulate the data, the user finds that each model undermines the others by accusing them of being comforting and flattering. It is a twisted Snow White plot: «Mirror, mirror on the wall, who’s the fairest of them all?» And each model will say: «I am.» In attempting to be cautious, critical, and reasonable, the user has now disclosed the same vulnerability to multiple systems, each of which holds a fragment, none of which holds the whole. The quest for certainty has cost the user their privacy.
Unfortunately, interaction with LLMs often resembles another, much darker symbol: a snake eating its own tail. When the user explicitly asks who is right, models politely retreat and advise the user to «listen to their heart» and stop treating models as a doctor's consilium. The supposed source of collective wisdom punishes the user for being critical, proactive, cautious and curious, leaving them more confused than before, back at square one.
Another modern fable comes to mind. In the graphic novel Sin City, Marv says: «When I don't know something, I find someone who knows and ask.» In the noir world of Sin City, Marv's asking involved considerably more pressure than polite conversation. In the context of LLMs, pressure is counterproductive for an unusual reason: people under pressure are reluctant to talk, while LLMs are unable to stay silent. The more you push, the more information you get. How valuable is that information? That is debatable.
Interpersonal communication theory has long held that absence constitutes circumstantial evidence: what is NOT on social media can be more important than what is public. Similarly, a model's silence could be more informative than its talk. We built a system that never goes silent. And only now are we beginning to understand that silence was information.
