
How generative AI changes the economics of knowing together.

SECTION 1

The deeper problem is not falsehood

The dominant concern about generative AI is scale: more false content, produced more quickly, at lower cost. That concern is real. But it misidentifies what is changing. The deeper shift is not simply in the supply of falsehood. It is in the economics of evaluation.

The relevant asymmetry is straightforward: generation is becoming much cheaper than verification. Generative systems can now produce plausible text, images, and arguments far faster than individuals or institutions can meaningfully assess them. Verification does not disappear. It fails to scale at the same rate as production.

Catalini, Hui, and Wu identify the same structural constraint: as the cost of cognitive production falls, the limiting resource shifts toward verification capacity rather than output. The bottleneck is no longer generating claims. It is deciding where scarce attention should go.

That changes the structure of public reasoning. When the flow of plausible content consistently exceeds what can be examined, epistemic systems do not stop evaluating. They begin evaluating in a different order.
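The arithmetic behind this asymmetry is easy to make concrete. The following is a minimal sketch, not a model of any real platform; the budgets, the flat verification cost, and the rate at which generation gets cheaper are all illustrative assumptions.

```python
# Toy model of the generation-verification asymmetry.
# Every number here is an illustrative assumption, not an empirical estimate.

GENERATION_BUDGET = 1_000.0    # resources spent producing claims (constant)
VERIFICATION_BUDGET = 1_000.0  # resources spent checking claims (constant)
COST_TO_VERIFY = 10.0          # cost to meaningfully assess one claim (flat)

cost_to_generate = 10.0        # starts equal to the verification cost
for year in range(6):
    claims_produced = GENERATION_BUDGET / cost_to_generate
    claims_checkable = VERIFICATION_BUDGET / COST_TO_VERIFY
    coverage = min(1.0, claims_checkable / claims_produced)
    print(f"year {year}: {claims_produced:9,.0f} claims produced, "
          f"{coverage:6.1%} can be examined")
    cost_to_generate /= 4      # generation gets cheaper; verification does not
```

Nothing in the sketch requires the share of false claims to rise. Coverage collapses purely because production scales while assessment does not, which is the shift the rest of this piece traces.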

SECTION 2

The rise of two-stage epistemics

When verification becomes scarce, epistemic systems do not simply slow down. They reorganize around a two-stage process in which attention is allocated before scrutiny begins.

The first stage is gating. Before a claim is examined for accuracy, it must clear an earlier threshold: should it receive attention at all? That threshold is set by signals that are only loosely related to truth. Source, reputation, fluency, and style all help determine whether a claim enters the limited space where evaluation is even possible.

The second stage is direct assessment. Once a claim has been admitted, institutions and individuals may ask whether it is true, well-supported, or worth acting on. But this stage is increasingly conditional on the first. Evaluation does not disappear. It becomes gated.

That distinction matters. The shift is not from evaluation to non-evaluation but from direct evaluation to gated evaluation. Under conditions of abundance, attention becomes the scarce resource, and scarcity pushes epistemic systems to allocate attention before they allocate scrutiny.

This helps explain why simple labels often fail. Recent work by Gallegos and colleagues finds that AI labeling alone does not reliably reduce persuasion or restore calibration. The reason is structural: labels operate on the second stage of judgment, but the deeper problem is often upstream, in the stage that decides whether judgment happens at all.

Once attention is filtered before it is scrutinized, the decisive question is no longer only what is true, but what is admitted for evaluation in the first place.
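The two stages can be made explicit in a small simulation. This is a sketch under assumed distributions: each claim carries a truth value and a stage-one signal (fluency, source, style) that is only weakly correlated with truth, and stage-two scrutiny can reach only a small fraction of claims.

```python
# Gated evaluation as a toy simulation.
# The distributions and the weak truth-signal link are assumptions.
import random

random.seed(0)
N_CLAIMS = 10_000
REVIEW_CAPACITY = 500  # stage two can examine only 5% of claims

claims = []
for _ in range(N_CLAIMS):
    is_true = random.random() < 0.5
    # Stage-one signal: only loosely related to truth (small mean shift).
    signal = random.gauss(0.2 if is_true else 0.0, 1.0)
    claims.append((signal, is_true))

# Stage one: gate on the signal. Stage two never sees anything else.
admitted = sorted(claims, reverse=True)[:REVIEW_CAPACITY]
true_share = sum(t for _, t in admitted) / REVIEW_CAPACITY

print(f"claims that receive no evaluation of any kind: {N_CLAIMS - REVIEW_CAPACITY}")
print(f"share of admitted claims that are true: {true_share:.1%} (base rate 50%)")
```

Under these assumptions the gate does almost all of the work: the admitted pool is only modestly enriched for truth (roughly 60% against a 50% base rate), and the other 9,500 claims are never assessed at all.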

SECTION 3

When filters become targets

Whatever controls the gate controls what gets evaluated at all. The signals used to decide what deserves review no longer function only as shortcuts for judgment. They become incentives.

In earlier information environments, credibility signals emerged partly from costly processes. Institutional affiliation, reputation, consistency, formatting, and endorsement patterns were difficult to imitate at scale. Their value came partly from the friction involved in producing them.

Generative systems weaken that friction directly, as a consequence of the generation-verification asymmetry established above. They make it easier to reproduce the surface features that many epistemic systems use as proxies for legitimacy: coherent language, confident tone, professional presentation, and plausible endorsement patterns. Zhao and Mastorakis show that AI-generated reviews can mimic the markers of trust closely enough to remain persuasive even when manipulation is expected.

The result is not that reputation fails. It is that reputation becomes optimizable. Once a signal determines whether something gets through the gate, that signal becomes worth optimizing against. The system need not abandon reputation, style, or fluency as signals. It only needs to face content optimized to pass those filters rather than to merit passing them. In overloaded systems, the question slowly shifts from "what is true?" to "what reliably passes?"
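The Goodhart dynamic can be bolted onto the previous sketch. Here a producer who knows roughly what the gate rewards tunes content to the signal rather than to truth; the boost size, volumes, and truth rates are all assumed for illustration.

```python
# When the gate's scoring is known, it becomes an optimization target.
# A toy extension of the gating sketch; every parameter is an assumption.
import random

random.seed(1)

def make_claims(n, p_true, optimized):
    claims = []
    for _ in range(n):
        is_true = random.random() < p_true
        base = 0.2 if is_true else 0.0       # weak, honest truth signal
        boost = 1.5 if optimized else 0.0    # tuned to the filter, not to truth
        claims.append((random.gauss(base + boost, 1.0), is_true, optimized))
    return claims

pool = (make_claims(9_000, p_true=0.5, optimized=False)    # ordinary content
        + make_claims(1_000, p_true=0.1, optimized=True))  # mostly-false, gamed

admitted = sorted(pool, reverse=True)[:500]  # same 5% review capacity as before
gamed_share = sum(1 for *_, opt in admitted if opt) / len(admitted)
true_share = sum(is_true for _, is_true, _ in admitted) / len(admitted)

print(f"optimized content in the admitted pool: {gamed_share:.0%}")
print(f"share of admitted claims that are true: {true_share:.1%} (base rate 46%)")
```

The gate runs exactly as before, yet under these assumptions more than half of what it admits is content built to pass it, and the admitted pool ends up less truthful than a random sample would be. Optimizing against the proxy erases the proxy's value.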

SECTION 4

Coordination can survive truth-loss

When filters replace verification as the primary sorting mechanism, coordination does not stop. It is rerouted through whatever signals remain widely legible.

Coordination does not require every participant to verify the same reality directly. In many systems, collective action depends only on a shared sense of what counts as credible, what deserves attention, and what others are likely to treat as credible. Those shared cues can sustain behavior even when the underlying claims being sorted are increasingly unexamined.

That makes the adaptation easy to misread. The danger is often framed as epistemic collapse: institutions can no longer distinguish truth from fabrication, so coordination breaks down. That can happen. But it is not the only failure mode. Many systems respond to verification scarcity by adapting: leaning more heavily on signals that preserve coordination even when those signals track reality less reliably. A failing system attracts repair. An adapting system can remain stable long enough for its epistemic deterioration to become normal.

Financial markets offer a familiar example. As Shiller has argued, markets often coordinate around narratives as much as fundamentals. The mechanism is recognizably Keynesian: participants do not need to verify the same underlying data in order to move together; they need sufficiently shared expectations about what others are likely to believe. When those expectations are shaped by content optimized to look credible rather than to be accurate, markets can continue clearing, pricing, and allocating capital while becoming less anchored to the conditions they are supposed to reflect.

The same logic extends to systems designed to resist it. Mann and colleagues document how AI-assisted submission volumes are straining peer review: when claims arrive faster than qualified reviewers can examine them, gatekeeping increasingly substitutes for scrutiny. The system continues to function (papers are reviewed, knowledge is certified), but the certification process operates under the same asymmetry that affects less formal epistemic systems.

The pattern holds wherever shared signals organize attention. The danger is not that coordination disappears. It is that coordination may continue while becoming progressively less dependent on truth.
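Shiller's observation has a simple formal skeleton, familiar from Keynesian beauty-contest models. The sketch below assumes its weights, noise levels, and the drift of the shared narrative; the point is only that agents who anchor on a common signal stay tightly coordinated even as that signal detaches from the underlying state.

```python
# Coordination without truth-tracking: a toy common-signal model.
# Weights, noise levels, and the narrative drift are all assumptions.
import random

random.seed(2)
N_AGENTS = 200
W_SHARED = 0.9   # weight agents place on the shared narrative signal

truth = 0.0
drift = 0.0      # gap between the shared narrative and the true state
for rnd in range(1, 51):
    truth += random.gauss(0, 1)       # the underlying state evolves
    drift += random.gauss(0.2, 0.1)   # the narrative slowly detaches
    shared_signal = truth + drift
    actions = [W_SHARED * shared_signal
               + (1 - W_SHARED) * (truth + random.gauss(0, 1))  # private view
               for _ in range(N_AGENTS)]
    mean = sum(actions) / N_AGENTS
    spread = (sum((a - mean) ** 2 for a in actions) / N_AGENTS) ** 0.5
    if rnd % 10 == 0:
        print(f"round {rnd:2d}: agent spread {spread:4.2f} (still coordinated), "
              f"consensus gap from truth {mean - truth:5.1f}")
```

Agreement among agents stays tight throughout, because everyone is anchored to the same signal. What grows is the distance between the consensus and the state the consensus is nominally about, which is exactly the failure mode the collapse framing misses.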

CONCLUSION

The immediate response to generative AI often focuses on detecting false content more efficiently. That matters. But detection alone addresses only the visible surface of the problem. The deeper challenge is preserving the conditions under which verification still shapes coordination.

When generation becomes cheap and evaluation remains costly, epistemic systems default toward whatever can be processed fastest, not toward whatever can be known most reliably. In that environment, the central question shifts from identifying falsehood to preserving forms of judgment that cannot be fully automated or cheaply simulated.

That requires more than better classifiers. It requires preserving institutional friction: points in public reasoning where claims must still pass through costly human scrutiny before they can scale into collective belief.

The central challenge may not be producing more knowledge. It may be preserving the conditions under which truth still has to be paid for.

REFERENCES

Catalini, C., Hui, X., & Wu, J. (2026). Some simple economics of AGI (MIT Sloan Research Paper No. 6298838). SSRN. https://doi.org/10.2139/ssrn.6298838

Gallegos, I. O., Shani, C., Shi, W., Bianchi, F., Gainsburg, I., Jurafsky, D., & Willer, R. (2026). Labeling messages as AI-generated does not reduce their persuasive effects. PNAS Nexus, 5(2), Article pgag008. https://doi.org/10.1093/pnasnexus/pgag008

Mann, S. P., Aboy, M., Seah, J. J., Lin, Z., Luo, X., Rodger, D., Zohny, H., Minssen, T., Savulescu, J., & Earp, B. D. (2025). AI and the future of academic peer review. arXiv. https://doi.org/10.48550/arXiv.2509.14189

Shiller, R. J. (2019). Narrative economics: How stories go viral and drive major economic events. Princeton University Press.

Zhao, Y., & Mastorakis, S. (2025). Method and multi-domain benchmark for detecting AI-generated reviews. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP Findings). ACL Anthology.
