Preface / About the Status of This Post

This post offers a diagnostic framework, not a finished theory. The working hypothesis is that generative AI introduces a temporal asymmetry: the production of plausible — that is, superficially credible — claims scales far faster than their verification and institutional legitimation. The risk, on this view, is not primarily the loss of truth as such, but the degradation of the coordination layer that sustains collective action under shared uncertainty. The post focuses on one specific mechanism: the breakdown of shared verifiability — the ability of agents not only to verify claims, but to know that others can verify them as well — as the bottleneck for maintaining common knowledge. The normative part is deliberately left open: I do not attempt to propose interventions at this stage.

I am looking for: criticism of the mechanism, especially where it breaks, where causal links are underspecified, or where assumptions are unrealistic; pointers to related work I may have missed; and co-authors interested in developing this toward a formal model or concrete interventions.

TL;DR

Generative AI creates a structural imbalance: plausible claims can be produced far faster than they can be verified and institutionally recognized. When this gap widens, verification becomes a scarce resource. Rational agents respond by shifting from independent verification to socially available signals of credibility — consensus, authority, fluency. This triggers a cascade:

  • at the individual level, cognitive drift from truth-seeking to plausibility acceptance;
  • at the linguistic level, erosion of semantic precision and signal quality;
  • at the institutional level, procedural offloading and loss of reliable filtering;
  • at the systemic level, breakdown of common knowledge.

The core risk is not misinformation in isolation, but the loss of shared verifiability — the ability to align on what is known and mutually recognized as known. In the limit, coordination does not collapse entirely but shifts to a different regime: agents align on expectations of others' beliefs rather than on a shared model of reality — a form of coordination without truth.

Prologue: The Economy of Persuasion and the Coordination Threshold

Civilization rests not so much on the sum of accumulated knowledge as on the protocols of collective action. Our capacity to build cities or prevent systemic risks depends on shared observability — the confidence that we not only observe the same reality, but understand how others perceive it and act on that basis. This shared observability forms the layer of Common Knowledge: a recursive structure in which agent A knows that agent B holds the same information, and knows that B knows this. Without this layer, large-scale coordination is impossible.

When it degrades, failures emerge not at the level of incentives but at the level of mutual predictability of action.

Truth in this architecture is not an abstraction but a coordination interface: the mechanism through which agents convert dispersed observations into a shared basis for action. When it becomes unreliable, the system does not merely lose "truth" — it loses the capacity to build joint decisions upon it. The core problem is not that information becomes false, but that its epistemic status ceases to be coordinable between agents.

Generative AI transforms this dynamic through a specific mechanism: the cost of producing plausible claims approaches zero, while the cost of verifying them remains fixed. This asymmetry is what moves us from an economy of information to an economy of persuasion — an environment where the binding constraint is no longer access to data, but the capacity to reach agreement about which interpretations may be treated as valid for action. Agents no longer compete for information; they compete for the ability to establish their own interpretations as the accepted ones, in conditions where the bandwidth of verification is structurally limited.

When verification becomes scarce, trust shifts away from institutions and toward the methods of establishing facts themselves. It is this epistemic layer that comes under the greatest pressure when society begins to operate not on reality, but on its algorithmic projection.

This generates a new type of systemic risk: systems may retain access to truth — while losing the capacity to act on it collectively.


Temporal Asymmetry: Formalizing the Risk

To understand the systemic risk, we need to examine the temporal structure of the information environment. Let us define:

G — the rate of production of plausible claims (in units of claims per unit of time);

V — the throughput of verification and institutional legitimation (in the same units);

C_v — the cost of reliably verifying a single claim.

A critical regime emerges when G > V. At this point, the system begins producing more plausible claims than it can verify and reconcile. What arises is not merely an excess of information, but a growing share of claims whose status can no longer be reliably synchronized between agents. This is the first sign of a coordination threshold.
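To make the threshold concrete, here is a minimal queueing sketch in Python. It is purely illustrative: the arrival process, the specific rates, and the assumption that verification capacity acts as a hard per-step cap are modeling choices of mine, not claims made in the analysis above.

```python
import random

def simulate_backlog(G=120, V=40, steps=50, seed=0):
    """Toy queue: roughly G plausible claims arrive per step; at most V are verified per step.
    Returns the number of claims whose epistemic status is still unresolved at each step."""
    rng = random.Random(seed)
    backlog, history = 0, []
    for _ in range(steps):
        arrivals = sum(1 for _ in range(2 * G) if rng.random() < 0.5)  # noisy arrivals, mean ~ G
        backlog = max(0, backlog + arrivals - V)                       # verification clears at most V per step
        history.append(backlog)
    return history

if __name__ == "__main__":
    for G in (30, 40, 120):   # with V = 40: sub-critical, borderline, super-critical
        print(f"G={G}: final backlog {simulate_backlog(G=G)[-1]}")
```

Below the threshold the backlog hovers near zero; once G > V it grows roughly linearly with time, which is the "growing share of claims whose status cannot be synchronized" expressed as a queue.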

The transition accelerates when C_v begins to exceed the local value of accuracy for a given decision. If the cost of verification is higher than the payoff from it in a given context, a rational agent stops investing in independent verification. Instead of seeking truth, the agent switches to a consensus heuristic: selecting the signals that are easiest to confirm socially — authority, speed, brand, confident tone, visible consensus. Coordination does not disappear, but changes its nature: it no longer rests on jointly verified reality and begins to rest on proxy signals.
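The agent-level switch can be stated as a one-line decision rule. This is a stylized restatement of the paragraph above, not a behavioral model; the proxy score and acceptance threshold are hypothetical quantities introduced here only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    value_of_accuracy: float   # local payoff difference between acting on the truth and acting on an error
    proxy_score: float         # socially confirmable credibility: authority, brand, confident tone, visible consensus

def epistemic_strategy(claim: Claim, C_v: float, acceptance_threshold: float = 0.6) -> str:
    """Invest in independent verification only when it pays locally; otherwise fall back on the consensus heuristic."""
    if claim.value_of_accuracy > C_v:
        return "verify independently"
    return "accept on proxy signals" if claim.proxy_score >= acceptance_threshold else "ignore"
```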

This creates a particular kind of legitimation lag. Generation becomes nearly instantaneous, fact-checking becomes fast, but institutional recognition remains inertial. The pace of meaning production begins to systematically outrun the pace of its reconciliation. In this gap, a zone of strategic paralysis forms.

The common argument that AI will itself become the fact-checker, thereby reducing C_v, does not resolve the problem. Data verification can be automated, but trust does not scale at the same speed. Cryptographic verification confirms the origin or integrity of a message — not its legitimacy. Where institutional consensus is required, the bottleneck is not computation but the process of recognition, and that process cannot be accelerated by an algorithm.

This produces a limiting information regime: the marginal utility of additional verification falls faster than the necessity for it grows. As a result, rational agents increasingly choose not independent verification but adaptation to the most probable interpretations. This is how the cascade begins — from the cognitive drift of individual agents toward an environment where coordination through a shared picture of the world is gradually displaced by coordination through algorithmic protocols.

Empirical Anchor: The HMPV Case

This asymmetry was vividly demonstrated during the human metapneumovirus (HMPV) incident of December 2024 — January 2025. Social media and news aggregators circulated panic narratives about a new epidemic, reaching a global audience within hours. Official clarifications from the WHO and Chinese authorities — confirming that the virus was a known seasonal pathogen requiring no emergency measures — took approximately 14 days to arrive.

During this lag, the cost of independent verification for individual actors was prohibitive. A "desynchronization window" opened: different groups acted on incompatible interpretations not because some were informed and others were not, but because institutional consensus simply had not yet formed. Local coordination failures emerged well before the facts were reconciled.

The Asymptotic Shape of the Cascade

We are not facing a sudden collapse. We are entering a limiting regime in which traditional mechanisms of trust cease to be cost-effective for individual agents.

A paradox emerges: the utility of verification declines faster than the necessity for it grows. This process begins at the micro-level — through interfaces that reduce the cost of obtaining an answer while raising the cost of independently verifying it. From there it scales into a cascade: from individual cognitive drift to the erosion of the linguistic environment, to institutional drift, and in the limit, to coordination collapse.


Level 1: Cognitive Drift (The Architecture of Delegation)

At the first level, the risk is localized within the cognitive cycle of the individual agent. The problem here is not yet institutional or linguistic: it begins at the moment when a rational user chooses not maximum accuracy but acceptable sufficiency. In an environment where the cost of verification exceeds the local value of accuracy, cognitive drift becomes not an error but an adaptation.

I. The Architecture of Delegation and the Shifting of Friction

The engine of the process is the architecture of delegation: the redistribution of search, synthesis, and verification from the user to the interface. Systems optimized for frictionless UX create the impression of near-zero cognitive cost. But friction does not disappear — it shifts: from obtaining an answer to independently verifying it.

This shift is critical. A generative interface does not merely accelerate access to a conclusion — it reduces the visibility of the costs required to verify it. High processing fluency masks complexity, while confidence framing — the translation of a probabilistic output into the form of a confident assertion — eliminates explicit signals of uncertainty. In their absence, the user by default overestimates the reliability of the response.

The result is a change in the very standard of sufficiency. The user begins to accept not what has been verified, but what appears sufficiently consistent and readily actionable.

II. The Changing Function of Argumentation

In this environment, argumentation ceases to be primarily a tool for finding truth. It increasingly becomes a tool for navigating the expectations of other agents: a way of making one's position acceptable, convincing, and compatible with what others appear ready to receive. When the cost of generating arguments approaches zero, their value as traces of reasoning falls, while their value as proxy signals — indirect markers of reliability more amenable to social confirmation — rises.

Arguments begin to function as social alibis: they no longer so much prove a point as mark membership in an acceptable interpretive space. For an agent, this is rational when verification costs more than consensus. This is precisely how epistemic inflation arises: more arguments are produced, while their diagnostic value declines. Correlational data from Gerlich (2025, Societies/MDPI) are consistent with this dynamic: frequent use of AI tools is associated with increased cognitive offloading, which mediates declining scores on critical thinking measures.

III. The Capitulation Loop of the "Resilient"

Ordinary agents drift because of the cost of verification. Agents with high epistemic standards drift for a different reason: the environment socially devalues their accuracy, even when they are technically capable of checking. The greater the volume of synthetic content and the heavier the load on independent verification, the more the cost of maintaining one's own standards rises. At the same time, the individual payoff from accuracy falls: if the surrounding environment has already shifted to heuristics, the agent who continues to verify everything manually bears costs that do not convert into a comparable advantage.

Experimental data from Shaw & Nave (2026, Wharton/SSRN 6097646) document an extreme expression of this logic: subjects who were confident in the correctness of their own answer nonetheless changed it under the pressure of a confidently framed model response. The social pressure of the environment proves stronger than individual epistemic resilience.

This is how a feedback loop forms. Since the cost of independent verification rises faster than the individual benefit from it, cognitive drift becomes the dominant strategy. Even "resilient" agents begin to partially delegate verification — and in doing so, lose the habit of independently triangulating reality.

Interventions: Partial Correction

This effect is not deterministic. Interface architecture and institutional rules can shift the balance back toward verification. The most obvious measures are mandatory metacognitive steps before publication, required citation of sources, a pause before submission, and technical binding to provenance and ground truth. The point of these measures is not to slow the system for its own sake, but to make verification locally warranted again.
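As a sketch of what "making verification locally warranted" could look like at the interface level, here is a hypothetical pre-publication gate. The field names, thresholds, and the provenance tag are my own inventions, not features of any existing system; the point is only that each of the listed measures can be expressed as a cheap, enforceable check.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Draft:
    text: str
    cited_sources: list[str] = field(default_factory=list)
    provenance_tag: str | None = None            # e.g. a content-credential identifier (hypothetical field)
    created_at: float = field(default_factory=time.time)

def publication_gate(draft: Draft, min_sources: int = 1, min_pause_s: float = 300) -> list[str]:
    """Return the list of unmet requirements; an empty list means the draft clears the gate."""
    problems = []
    if len(draft.cited_sources) < min_sources:
        problems.append("cite at least one independently checkable source")
    if draft.provenance_tag is None:
        problems.append("attach provenance metadata for any generated content")
    if time.time() - draft.created_at < min_pause_s:
        problems.append("metacognitive pause before submission has not elapsed")
    return problems
```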

Level 1 Summary: Rational Deadaptation

Cognitive drift is the shift from independent verification to economically rational offloading under the pressure of high verification costs. It is complete when "plausible" becomes the functional equivalent of "true" for most practical decisions. The output is a subject perfectly adapted to the interface but significantly weakened in the capacity for independent assessment of reality.

This microdecay creates the conditions for Level 2 — semantic erosion — where individual adaptation begins to contaminate the linguistic environment itself.


Level 2: Semantic Erosion and the Decay of the Infrastructure of Agreement

If cognitive drift begins at the level of individual choice, the second level unfolds across the entire information environment. What degrades here is no longer only the individual judgment, but the linguistic infrastructure itself through which agents form a shared representation of reality. Language is the foundational layer of shared observability: it allows different participants to describe the world similarly enough to coordinate action. When generative models become the primary producer of text, language begins to optimize for statistical plausibility rather than for distinguishing the states that matter for action.

I. The TTR Paradox and the Decline of Informational Density

The primary threat of mass generation is not simply the growth in text volume, but the decline in mutual information between text and the state of the world. For coordination domains, this means that the form of a message increasingly fails to indicate what is actually happening and what should be done about it. Kendro et al. (2025, arXiv:2508.00086v2) document a characteristic paradox: by metrics of lexical diversity — Type-Token Ratio (TTR), the ratio of unique words to total words in a text — models can appear richer than human-authored text, but this richness often conceals a decline in semantic distinguishability. Text becomes more varied on the surface and simultaneously flatter in content. Words less frequently mark the boundaries between states — and more frequently signal an averaged norm.

This is the essence of the TTR paradox: lexical variability increases, but the capacity of language to mark the boundaries between states of reality declines. As a result, a message may appear persuasive without becoming more informative for decision-making.
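The distinction between lexical variety and informativeness can be made concrete with a toy calculation. The two "corpora" below are artificial and deliberately extreme: in the first, word choice tracks the state of the world; in the second, wording is more varied but chosen independently of the state. The numbers estimate nothing real; they only demonstrate that TTR and mutual information can move in opposite directions.

```python
import math
from collections import Counter

def ttr(tokens):
    """Type-Token Ratio: unique words divided by total words."""
    return len(set(tokens)) / len(tokens)

def mutual_information(pairs):
    """I(word; state) in bits, from (word, state) co-occurrence counts."""
    n = len(pairs)
    joint, words, states = Counter(pairs), Counter(w for w, _ in pairs), Counter(s for _, s in pairs)
    return sum((c / n) * math.log2((c / n) / ((words[w] / n) * (states[s] / n)))
               for (w, s), c in joint.items())

# Corpus 1: only two words, but each word marks a distinct state of the world.
precise = [("rising", "up"), ("falling", "down")] * 50
# Corpus 2: five interchangeable near-synonyms, chosen independently of the state.
synonyms = ["shifting", "evolving", "dynamic", "notable", "emerging"]
smooth = [(synonyms[i % 5], s) for i, s in enumerate(["up", "down"] * 50)]

for name, corpus in [("precise", precise), ("smooth", smooth)]:
    print(name, "TTR:", ttr([w for w, _ in corpus]), "MI:", round(mutual_information(corpus), 3))
```

The second corpus has the higher TTR yet carries zero bits about the underlying state: the TTR paradox in miniature.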

II. Statistical Attractors and Syntactic Monoculture

This loss of distinguishability is built into the very way models operate. Since they optimize for predicting the most probable next token, language begins to gravitate toward statistical attractors — stable centers of averaged norms. Atypical, complex, or poorly representable ideas, when passed through the generative process, tend to be smoothed out and drawn toward the probabilistic standard.

With iterative training on synthetic data, this effect intensifies. Shumailov et al. (2024, Nature) describe the mechanism of model collapse: rare tokens, non-standard connectives, and low-probability concepts are gradually washed out of the distribution — not as a result of deliberate selection, but as a structural consequence of recursive training. At the structural level, this leads to syntactic monoculture — the dominance of similar constructions, similar registers, and similar ways of "explaining" the world. Such an environment is comfortable to consume but holds complexity poorly. It reduces cognitive friction while simultaneously narrowing the space of possible interpretations, making text less suited to precisely distinguishing real alternatives.
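The washing-out of rare tokens under recursive training can be illustrated with a crude sampling loop: fit a categorical distribution, sample a finite corpus from it, re-fit on the sample, and repeat. This is a drastic simplification of the mechanism Shumailov et al. describe (there is no model here, only resampling), and the vocabulary and probabilities are invented, but it shows how low-probability items drift toward zero and, once lost, never return.

```python
import random

def recursive_refit(probs, corpus_size=200, generations=12, seed=1):
    """Each generation: sample a corpus from the current distribution, then re-fit it from counts.
    A token that draws zero counts in any generation is gone for all later ones."""
    rng = random.Random(seed)
    tokens = list(probs)
    for g in range(generations):
        sample = rng.choices(tokens, weights=[probs[t] for t in tokens], k=corpus_size)
        probs = {t: sample.count(t) / corpus_size for t in tokens}
        yield g, probs

vocab = {"common_a": 0.47, "common_b": 0.47, "rare_term": 0.05, "very_rare_term": 0.01}
for g, p in recursive_refit(vocab):
    print(g, {t: round(v, 3) for t, v in p.items()})
```

The exact generation in which a rare term disappears depends on the random seed, but the direction of the drift does not.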

III. Semantic Smog in Coordination Domains

For low-stakes communication, this is not yet catastrophic. The problem arises in domains where text is not a form of self-expression but a mechanism for distributing responsibility and making decisions: law, science, policy, diplomacy. In these domains what matters is not expressiveness alone, but the precise distinguishability of terms and arguments — the very quality that statistical attractors systematically suppress.

When the TTR paradox combines with syntactic monoculture, semantic smog emerges: an environment in which the volume of text grows while its signal value falls. This sharply raises the transaction costs of understanding. Confirming that parties genuinely mean the same thing requires ever more resources. The price of agreement rises, and a standard, smooth form begins to conceal incompatible meanings. As will be shown through the judicial cases at Level 3, it is precisely in this gap that the conditions for institutional failure form.

IV. Bridge to Level 3: Institutional Drift

Institutions do not access reality directly. They rely on analytical layers — expert briefs, literature reviews, meta-analyses, and audit reports. When these layers become saturated with semantic smog, organizations begin making decisions on the basis of increasingly averaged and increasingly unreliable textual projections of facts. Since their "sensors" are already contaminated, the degradation of language begins to transition into the degradation of verification procedures.

This is where individual drift becomes an infrastructural problem. The linguistic environment ceases to be a neutral carrier of knowledge and begins to distort the very possibility of reaching agreement. This opens the transition to Level 3 — institutional drift.

Level 2 Summary: The Erosion of the Infrastructure of Agreement

Semantic erosion is not a decline in expressiveness as such, but the loss of language's capacity to reliably distinguish the states that matter for collective action. When text becomes statistically smooth but semantically thin, society loses its shared interface for precise coordination. Coordination begins to rest not on reality, but on algorithmically smoothed interpretations of reality.


Level 3: Institutional Drift and the Automation of Blindness

If the second level sees the degradation of the linguistic environment, at the third level the institutional filters themselves begin to break — the procedures of verification, selection, and decision-making. Courts, regulators, scientific journals, and administrative systems do not engage with reality directly; they rely on analytical layers — reports, reviews, meta-analyses, expert opinions. When these layers become saturated with semantic smog, institutions lose the capacity to reliably distinguish what is actually happening. As a result, they shift from epistemic control to management through averaged textual surrogates.

I. Procedural Offloading: A Rational Response to Semantic Smog

The central challenge for institutions is that semantic smog sharply raises the unit cost of verification. When the incoming flow of data grows faster than the capacity of people and procedures to check it, manual filtering ceases to be scalable. In this situation, algorithmic systems appear not as a luxury but as a necessity: they take over summarization, prioritization, initial sorting, and even preliminary audit.

But this solution has a limit. Algorithmic filters optimized for statistical plausibility may systematically discard precisely those signals that indicate an anomaly. For the model, this is noise in which rare patterns are lost; for the institution, it may be an early warning of failure. One of the first signals of such filter paralysis appeared in the case of Neurosurgical Review (2025), which was forced to suspend manuscript intake following an explosive growth in AI-generated submissions and retract 129 publications (Retraction Watch, 2025). This is not merely an editorial failure, but direct empirical confirmation that G can outpace V not just in theory, but in a specific institutional procedure.

II. Recursive Legitimation and False Abstraction

Once algorithmic filtering becomes the standard, a recursive loop forms. AI-generated texts begin to serve as input data for other analytical systems, which in turn amplify their status through paraphrase, summarization, and repackaging. Decisions become less dependent on physical facts and more dependent on their multiply processed textual reflections.

This produces the effect of false abstraction: the greater the level of mediation, the more convincing the resulting conclusion appears, even as its connection to the original reality weakens. A historically analogous logic manifested in the financial crisis of 2008, when dependence on risk models allowed market participants to coordinate actions without directly verifying underlying assets. What matters here is not the historical episode itself but the general mechanism: when the model becomes the primary carrier of reality, the institution begins to mistake the derivative form — a multiply processed textual cast — for the object itself.

III. The Responsibility Gap and the Official Legitimation of the Synthetic

The introduction of AI intermediaries diffuses agency. A responsibility gap emerges: errors become so distributed between the person, the interface, and the procedure that they are difficult to attribute to any single responsible party. The mechanism is especially clear in judicial practice.

In the case of Judge Neals in New Jersey, a draft ruling prepared via ChatGPT contained a citation to the nonexistent case Hines v. Wilmington Savings Fund Society. In Mississippi, Judge Henry Wingate was forced to withdraw an order after his clerk trusted a Perplexity response that not only fabricated citations but attributed them to plaintiffs who had never participated in the proceedings (Bloomberg Law, 2025). These cases demonstrate not a localized user error, but the passage of synthetic material through multiple layers of human and procedural review without being caught in time.

An even more significant signal comes not from the courts but from politics. In March 2026, the National Republican Senatorial Committee (NRSC) released an 85-second video featuring an AI-generated version of candidate James Talarico (Reuters, March 2026). Despite carrying an "AI-generated" label, the video was embedded in an official political context and maintained high plausibility, interweaving synthetic segments with real quotations. This is not an external failure or an accidental leak. It is a sign that synthetic material is beginning to be legitimated by institutional actors themselves. At this point, the question is no longer one of distinguishing truth from falsehood, but of the resilience of the communication process on which institutional trust rests.

IV. Epistemic Dependency: The Point of No Return

As this process deepens, institutions lose their own competencies for independent knowledge production. Analytical units are reduced, skills in working with primary sources weaken, and the capacity to manually restore contact with reality degrades. A dependency arises in which the organization continues to function, but no longer knows how to independently verify what it is doing.

This is epistemic dependency: a state in which the institution retains its form but loses its autonomy. At this stage, procedural offloading becomes lock-in. Even when the divergence from facts becomes obvious, the organization can no longer easily return to manual verification — its own infrastructure, personnel, and working rhythms have been restructured around algorithmic intermediation.

Transition to Level 4: Coordination Collapse

Institutional drift creates a trust vacuum. When institutions — the traditional anchors of truth — themselves begin to operate through recursive loops and legitimate synthetic reality, common knowledge dissolves. At the next level, this is no longer simply a problem of signal quality. It is a problem of the interaction regime itself: agents can no longer coordinate through a shared picture of the world and begin instead to align with each other's expectations.

Level 3 Summary: The Automation of Blindness

Institutional drift is the moment when verification systems begin to rely on tools that make them faster but less sensitive to reality. They retain operational speed while losing the capacity to reliably separate signal from noise. As a result, institutions do not so much err as gradually automate their own blindness.


Level 4: Coordination Collapse

The first three levels destroy signal quality, linguistic infrastructure, and institutional filters. The fourth level touches what remains — the very possibility of collective action. The problem here is no longer that information becomes less accurate, but that agents lose the shared reference point for aligning behavior. Coordination ceases to rest on a common picture of the world and shifts toward the reconciliation of expectations about what others will consider common.

I. The Collapse of Common Knowledge

For coordination, it is not sufficient for agents to possess true information. What is required is the structure of Common Knowledge: I know that you know that I know, and so on. It is this recursive confidence that makes stable forms of collective action possible — from markets to international agreements.

Under conditions of mass generation, semantic smog, and institutional legitimation of the synthetic, a presumption of fabrication emerges. Even a truthful signal loses its coordinating force if agents are uncertain that others will accept it as credible. What collapses is not simply trust in individual messages, but the structure of mutual predictability itself. Coordination tasks accordingly begin to be resolved not through the establishment of facts, but through the assessment of which facts are considered shared.
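One way to see why this structure is fragile is a back-of-the-envelope calculation I add here (it is not part of the argument above): if each layer of "and I am confident the others accept this signal" holds independently with probability p, then k layers survive with probability about p^k, so even modest doubt per layer destroys deep mutual confidence.

```python
def nested_confidence(p: float, depth: int) -> float:
    """Probability that 'I know that you know that ...' survives to the given depth,
    under the simplifying assumption that each layer holds independently with probability p."""
    return p ** depth

for p in (0.99, 0.9, 0.7):
    print(p, [round(nested_confidence(p, k), 3) for k in (1, 3, 5, 10)])
```

At p = 0.7, ten layers of mutual confidence survive with probability below 3%, which is why a presumption of fabrication corrodes common knowledge far faster than it corrodes first-order belief.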

II. The Keynesian Beauty Contest as an Equilibrium Regime

When truth ceases to be a reliable reference point, rational agents shift to the strategy of the Keynesian Beauty Contest. What matters is not what the agent considers true, but what the agent believes others will consider true.

This is a shift from first-order beliefs (about facts) to higher-order beliefs (about others' beliefs). Stability is achieved no longer through correspondence with reality, but through the alignment of expectations. In this system, decisions begin to be built around the most salient and socially amplified signals — even if they are weakly connected to the facts.

Coordination does not disappear, but changes its foundation: a regime of coordination without truth emerges, in which behavioral consistency is preserved at the cost of losing any anchor in reality.
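A toy best-response dynamic makes the regime shift visible. All parameters below are invented: agents earn a small payoff for matching their private evidence and a larger payoff for matching what they expect the majority to say, and their estimate of the majority comes from a socially amplified signal that overstates the salient interpretation.

```python
import random

def coordination_without_truth(n_agents=1000, p_private_true=0.7, amplification=0.3,
                               accuracy_weight=0.1, rounds=5, seed=0):
    """Returns the share of agents still acting on the factual interpretation after best-response rounds."""
    rng = random.Random(seed)
    evidence = [rng.random() < p_private_true for _ in range(n_agents)]  # True = factual reading
    choices = evidence[:]                                                # round 0: act on first-order beliefs
    for _ in range(rounds):
        true_share = sum(choices) / n_agents
        perceived_true_share = max(0.0, true_share - amplification)      # amplification overstates the salient reading
        choices = []
        for e in evidence:
            payoff_true = accuracy_weight * e + (1 - accuracy_weight) * perceived_true_share
            payoff_salient = accuracy_weight * (not e) + (1 - accuracy_weight) * (1 - perceived_true_share)
            choices.append(payoff_true >= payoff_salient)
    return sum(choices) / n_agents

print(coordination_without_truth())                      # strong amplification: collapse onto the salient reading
print(coordination_without_truth(amplification=0.15))    # below the tipping point: the factual reading survives
```

Behavioral consistency is preserved in both runs; what changes is whether it is anchored to the evidence or only to expectations about expectations.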

III. Epistemic Balkanization and the Security Dilemma

The collapse of common knowledge leads to the fragmentation of a unified epistemic space into isolated enclaves. Groups construct their own closed verification loops, in which the criterion of truth becomes internal consistency and conformity to group signals. Epistemic balkanization emerges: different communities operate with incompatible pictures of the world, between which no reliable mechanisms of reconciliation exist.

This process is better described not as a trust dilemma but as a security dilemma (Jervis, 1978). Each actor behaves rationally and defensively: strengthening filters, restricting incoming signals, reducing dependence on external sources, in order to protect against distorted information. Yet the aggregate effect of these actions is destructive to the common space: each new defensive measure reduces local risk while simultaneously narrowing the environment in which any reconciliation of signals between groups is possible at all.

As a result, cooperation becomes increasingly costly. Every interaction requires preliminary verification, but verification itself is already overloaded and unreliable. The rational strategy becomes a reduction in interaction, which further accelerates fragmentation.

IV. Indicators of Approach

The transition to this regime can be diagnosed through the growth of the gap between signal production and social confirmation.

The first indicator is declining mutual information (the mutual information between the signals of different groups — a measure of how well the messages of one group predict the interpretations of another): different communities begin describing the same basic facts in incompatible ways. The second is a growing verified backlog (the accumulated queue of unverified mission-critical data): it becomes long enough that institutions are forced to default to either trust or refusal. The third is reputational erosion: reputation ceases to mean a history of reliable verification and increasingly means the capacity to impose and maintain one's own version of reality.
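The first indicator can be operationalized with the same mutual-information machinery sketched at Level 2, applied across groups rather than between text and world state. The event labels below are invented; the point is only that inter-group MI is measurable from parallel descriptions of the same events. The verified backlog is already visible in the queue sketch from the temporal asymmetry section, and reputational erosion is left as prose here.

```python
import math
from collections import Counter

def cross_group_mi(labels_a, labels_b):
    """I(A; B) in bits: how well group A's description of an event predicts group B's."""
    n = len(labels_a)
    joint, pa, pb = Counter(zip(labels_a, labels_b)), Counter(labels_a), Counter(labels_b)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

# Six shared events, as described by a reference group and by two other communities.
reference  = ["outbreak", "outbreak", "seasonal", "seasonal", "policy", "policy"]
aligned    = ["outbreak", "outbreak", "seasonal", "seasonal", "policy", "policy"]
fragmented = ["hoax", "outbreak", "hoax", "coverup", "policy", "hoax"]

print("aligned MI:   ", round(cross_group_mi(reference, aligned), 3))
print("fragmented MI:", round(cross_group_mi(reference, fragmented), 3))
```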

V. Terminal Stage: Forced Synchronization

When the mechanisms of trust and verifiable truth stop working, the system seeks an alternative in the form of forced synchronization. Coherence begins to be ensured not by shared understanding of reality, but by rigid algorithmic protocols, rules, and constraints.

This is not the result of deliberate capture, but a structural response to the collapse of a shared epistemic environment. When coordination through facts no longer scales, only prescriptions remain — ones that require no interpretation. But this regime preserves order at the cost of a sharp reduction in the system's flexibility and adaptability.

VI. Implications for Response Design

The cascade model reveals that the problem cannot be reduced to individual false claims or the failures of specific institutions. The bottleneck is the gap between generation, verification, and legitimation.

This implies that effective interventions must operate at the level of environmental architecture. First, they must raise the local value of verification where it currently loses to the speed of generation. Second, they must restore the distinguishability between plausibility and confirmation, rather than simply increasing the volume of checking. Third, they must support protocols that preserve common knowledge under conditions of synthetic pressure.

The goal is not to restrict generation as such, but to prevent the stabilization of a regime in which the production of interpretations systematically outpaces the capacity of society to reconcile them. The temporal dimension is of fundamental importance here: measures implemented before the coordination threshold is crossed are structurally more effective than the same measures applied after it — the terminal stage is not merely harder to correct; it narrows the space of possible interventions itself.

Level 4 Summary: Coordination Collapse

At the fourth level, the cascade closes. With signal destroyed, the linguistic environment degraded, and institutional filters weakened, agents shift to coordination based on expectations rather than facts. A regime emerges that functionally maintains synchronization but is structurally fragile: behavioral consistency is preserved while resting ever less on shared reality.

In such a system, collective action remains possible, but requires increasingly rigid maintenance mechanisms. This is not a stable equilibrium, but a temporary configuration arising wherever the common epistemic foundation has ceased to perform its function.

References

  1. Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. MDPI. https://doi.org/10.3390/soc15010006
  2. Shaw, S. D., & Nave, G. (2026). Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender. Wharton School Research Paper. SSRN 6097646. https://doi.org/10.2139/ssrn.6097646
  3. Kendro, K., Maloney, J., & Jarvis, S. (2025). Do LLMs produce texts with "human-like" lexical diversity? arXiv:2508.00086v2. https://arxiv.org/abs/2508.00086
  4. Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y
  5. [Retraction Watch / Neurosurgical Review — requires separate verification]
  6. Henry, J. (2025, July 23–24). Judge Scraps Opinion After Lawyer Flags Made-Up Quotes. Bloomberg Law. Case: In re CorMedix Inc. Securities Litigation, No. 21-cv-14020 (D.N.J.). https://news.bloomberglaw.com/business-and-practice/judge-withdraws-pharma-opinion-after-lawyer-flags-made-up-quotes
  7. Scarcella, M. (2025, July 29). Two U.S. Judges Withdraw Rulings After Attorneys Question Accuracy. Reuters. https://www.reuters.com/legal/government/two-us-judges-withdraw-rulings-after-attorneys-question-accuracy-2025-07-29/
  8. [Reuters]. (2026, March 28). AI deepfakes blur reality in 2026 US midterm campaigns [NRSC/Talarico deepfake]. Reuters. https://www.reuters.com/
  9. Jervis, R. (1978). Cooperation under the Security Dilemma. World Politics, 30(2), 167–214. https://doi.org/10.2307/2009958
