I. Introduction
How can we account for fear, both publicly and privately, when making decisions in the critical field of AI Safety?
To scrutinize every decision for traces of fear risks deepening it, especially once we realize that recognizing fear is not the same as mastering it. We all labor to manage a psychological load much heavier than what our ancestors evolved to bear. It’s especially challenging for those in the EA ecosystem, where the work itself is intertwined with attempting to interpret, predict, and prevent worldwide tragedies.
Fear is intrinsic to human nature, but in its modern scale and constancy, it requires a careful balance. Too little attention invites distortion, yet too much invites paralysis. Psychological research suggests that when perceived threat severity is extreme and perceived efficacy is unclear, decision-making shifts in patterned ways – sometimes towards stewardship, other times towards symbolic action, excessive deference, rivalry, or avoidance.
This paper argues that effective AI governance needs to treat the psychological climate as part of the policy environment itself. If sustained exposure to abstract catastrophic risk reliably biases judgment, then accounting for that bias becomes integral to safeguarding the systems we seek to govern. Rather than attempting a comprehensive empirical analysis, this is a theoretical piece that draws on prior U.S. technological regimes and political psychology to examine how prolonged high-consequence risk shapes collective efficacy and institutional behavior.
The central claim is modest but consequential: governance frameworks must not only manage technical systems, but preserve the conditions under which institutions can act steadily over time. That requires institutional architectures that sustain efficacy under uncertainty. In an ideal world, those architectures would include calibrated technical fluency among lawmakers, empowered and accountable expert bodies, legible regulatory thresholds, and public engagement oriented toward stewardship rather than panic.
Without these conditions, we can’t expect to make the right decisions. The emotional and institutional foundations upon which democratic governance depends gradually erode, regardless of whether the political, social, and economic risks are being addressed. This is not because AI is uniquely inscrutable, but because persistent exposure to existential risk can distort the capacities required to govern it.
II. Historical Governance Patterns: Governing Under Partial Understanding
Technological complexity and the fear that surrounds it have rarely been absent from major regulatory challenges in the US. In each transformative regime – railroads, electrical utilities, nuclear energy, financial derivatives, and internet platforms – lawmakers operated under conditions of incomplete and evolving technical understanding. The relevant constraint was not ignorance alone, but uncertainty: limited visibility into downstream harms, contested expert interpretations, and shifting assessments of systemic risk.
Historical experience suggests that technical opacity becomes politically consequential only when it interacts with institutional confidence and public perception. Where harms became legible and governance mechanisms were perceived as workable, regulation emerged despite limited mastery of underlying mechanisms. Where uncertainty obscured risk, diffused responsibility, or undermined perceived efficacy, institutional response faltered. The cases that follow illustrate this distinction.
Railroads & Electricity
The late nineteenth-century expansion of railroads and the rise of electrical utilities in the early twentieth century both involved infrastructure of enormous technical complexity. Legislators did not master rail engineering, grid physics, or system optimization. What they did identify were tangible harms, such as monopolistic pricing, unsafe conditions, and unreliable service, along with clear structural features such as natural monopoly.
In both domains, regulation focused on outcomes rather than internal mechanics. Congress established rate fairness standards and anti-discrimination rules for railroads through the Interstate Commerce Act, delegating oversight to the Interstate Commerce Commission. Electrical power was governed through rate regulation, safety standards, and reliability requirements administered by expert commissions. Technical opacity did not prevent regulation because harms were legible, leverage points were centralized, and durable institutions accumulated expertise over time.
Nuclear Technology
Nuclear governance followed a related but more precautionary path. Legislators lacked expertise in reactor physics, but they didn’t need it to understand the risks. The destructive potential of nuclear technology was immediately legible as catastrophic. Governance emphasized centralized authority, strict licensing, and containment protocols. Here, limited understanding did not produce paralysis because risk was both visible and politically salient, justifying strong institutional constraint.
The Internet & Financial Derivatives
In contrast, internet platforms and complex financial derivatives reveal the limits of this model. Platform harms, such as information manipulation, economic concentration, and social polarization, emerged gradually and diffusely. The passage of Section 230 prioritized innovation while underestimating network effects and algorithmic amplification. Similarly, derivatives markets grew in opacity under the Commodity Futures Modernization Act, with regulatory authority constrained and deference to industry prevailing. In both cases, technical complexity interacted with delayed or poorly measured harms and weak institutional counterweights.
Across these regimes, governance efficacy did not correlate with legislative technical mastery. Instead, three conditions recur in effective cases:
- Legible harms tied to observable outcomes.
- Enforceable leverage points rooted in centralized infrastructure or authority.
- Delegated expertise embedded within durable regulatory bodies.
Failures emerged when harms were diffuse, nonlinear, or politically minimized, especially when institutional design failed to compensate for opacity. Notably, in successful regimes, uncertainty about technical mechanisms did not erode collective efficacy. Harms were visible enough, and governance tools credible enough, to sustain confidence in oversight itself.
The historical record therefore complicates any claim that the complexity of AI’s risk alone would make regulation impossible. Technical ignorance has been a constant in democratic governance. The more consequential question is how to prevent fear and uncertainty from producing the regulatory inaction we witnessed with the internet and derivatives regimes.
III. The Missing Variable: Emotional Processing Capacity Under Threat
To further break down the component of fear in AI Safety, there are a few important lessons from psychological research to highlight. They allow us to better understand the behavioral dynamics that constantly constrain institutions and individuals.
Fear and Perceived Efficacy
The Extended Parallel Process Model (EPPM), developed by Kim Witte, provides a foundational framework for understanding how fear influences behavior. According to EPPM, fear-based messaging produces constructive action only when two conditions are met simultaneously: perceived threat must be high, and perceived efficacy must also be high. Individuals have to believe both that the danger is serious and that the proposed response is likely to succeed.
When threat is high but efficacy is low, individuals engage in what Witte calls “fear control” rather than “danger control.” Instead of addressing the problem, they manage their emotional discomfort through avoidance, denial, or disengagement.
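To make the quadrant logic concrete, here is a minimal sketch of the EPPM’s predictions, assuming hypothetical 0-to-1 scales and cutoff values that are illustrative rather than drawn from Witte’s work:

```python
from enum import Enum


class Response(Enum):
    DANGER_CONTROL = "danger control"    # constructive action aimed at the threat itself
    FEAR_CONTROL = "fear control"        # managing discomfort: avoidance, denial, disengagement
    LOW_ENGAGEMENT = "low engagement"    # threat not perceived as serious enough to act on


def eppm_response(perceived_threat: float, perceived_efficacy: float,
                  threat_cutoff: float = 0.5, efficacy_cutoff: float = 0.5) -> Response:
    """Map perceived threat and efficacy (hypothetical 0-1 scales) to the
    qualitative response the EPPM predicts. Scales and cutoffs are
    illustrative placeholders, not parameters from the model itself."""
    if perceived_threat < threat_cutoff:
        return Response.LOW_ENGAGEMENT   # low threat: the message is largely ignored
    if perceived_efficacy >= efficacy_cutoff:
        return Response.DANGER_CONTROL   # high threat + high efficacy: act on the problem
    return Response.FEAR_CONTROL         # high threat + low efficacy: manage the fear instead


# Severe perceived risk but little confidence in the available response:
print(eppm_response(perceived_threat=0.9, perceived_efficacy=0.2))  # Response.FEAR_CONTROL
```

The point of the sketch is simply that the outcome flips on efficacy, not threat: raising perceived threat without raising perceived efficacy only moves people deeper into fear control.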
This framework has direct implications for technology governance. If policymakers perceive AI risks as severe but lack confidence in regulatory tools, institutional capacity, or expert consensus, the predictable response is not decisive action but deferral, fragmentation, or rhetorical posturing.
The Affect Heuristic and Risk Distortion
Work by Paul Slovic and his colleagues on the “Affect Heuristic” further complicates the role of fear in policy formation. They provide solid evidence for a lesson we’re all familiar with: strong emotional reactions often substitute for probabilistic reasoning. High-dread risks tend to compress nuance, increase polarization, and crowd out calibrated judgment.
In technological contexts characterized by uncertainty, emotional amplification can therefore degrade decision quality. You could argue that AI discourse has already become dominated by existential framing without operational clarity, and the result is brittle or overbroad policy proposals, diminished trust among stakeholders, and politicized backlash.
Cognitive Narrowing and Strategic Capacity
Barbara Fredrickson's broaden-and-build theory of emotion suggests another constraint. Negative emotions such as fear narrow cognitive scope, directing attention toward immediate threats and away from creative or integrative thinking. Positive moral emotions, such as hope, responsibility, and elevation, broaden cognition and support long-term resource building.
Governance under uncertainty requires exactly the capacities that chronic fear suppresses: probabilistic reasoning, institutional design creativity, coalition-building, and iterative learning. A political environment saturated with existential alarm may impair the very strategic thinking required for effective oversight.
Sustained Motivation and Self-Determination
Self-Determination Theory, developed by Edward Deci and Richard Ryan, identifies three psychological needs that sustain long-term engagement: autonomy, competence, and relatedness. Movements or policy environments that erode these needs generate burnout and withdrawal.
Chronic existential framing risks undermining all three. Policymakers who feel technically incompetent, politically constrained, and socially polarized are unlikely to sustain durable governance efforts. Similarly, advocates who perceive the problem as overwhelming and solutions as fragile may disengage or radicalize rather than build broad coalitions.
Sustained institutional effort requires not only awareness of risk but a felt sense of agency.
Lessons from Climate Anxiety
Research on climate anxiety offers a contemporary parallel. High awareness of catastrophic climate risk, combined with perceived political stagnation, has been associated with emotional exhaustion, avoidance, and reduced civic engagement, particularly among younger populations. Studies suggest that hope-oriented and agency-centered framing produces more durable engagement than fear amplification alone.
AI governance may be vulnerable to a similar dynamic. If discourse emphasizes existential threat without articulating credible institutional pathways, the result may be moral intensity paired with declining efficacy.
What all of this means for AI Governance
Taken together, these frameworks point to a missing variable in AI governance: emotional processing capacity under sustained threat. Political systems do not simply respond to objective risk; they respond to risk as filtered through perceived control, institutional competence, and collective efficacy. This is likely why a Global Moratorium is the position so many individuals and institutions in the field lean towards: it’s the only immediate solution that seems highly efficacious.
The historical record and psychological evidence converge on a simple principle: when threats are abstract and high-consequence, governance design must actively preserve the conditions for agency. Without deliberate efforts to sustain autonomy, competence, and trust, chronic exposure to existential framing may erode the capacities upon which effective oversight depends.
IV. Designing AI Governance for Collective Efficacy
If artificial intelligence represents a boundary case for democratic governance, the response cannot rely solely on historical analogy or threat amplification. Effective AI governance has to operate on two levels simultaneously: structural calibration and psychological sustainability.
1. Calibrated Technical Fluency as Institutional Capacity
Lawmakers don’t need to master the mathematics of machine learning. But artificial intelligence differs from prior regulatory domains in that its legal thresholds are technical parameters. Compute scale, capability evaluations, autonomy in deployment, and integration into critical infrastructure are not rhetorical categories; they are engineering variables. If legislation cannot internalize those variables, it cannot meaningfully govern them.
Recent U.S. efforts, such as the advisory framework created under the National Artificial Intelligence Initiative Act and the formation of the National AI Advisory Committee, reflect recognition that episodic hearings are insufficient. Yet advisory bodies without embedded authority remain structurally peripheral. Their recommendations inform deliberation; they do not define regulatory thresholds.
Congress has previously institutionalized technical foresight. The Office of Technology Assessment was created to provide sustained, nonpartisan analysis of emerging technologies capable of shaping legislation itself. High-risk domains such as nuclear energy are overseen not through testimony alone but through permanent regulators like the Nuclear Regulatory Commission, where engineering expertise is integrated into licensing authority. Financial stability is monitored through analytic institutions such as the Government Accountability Office, whose evaluative capacity is structurally embedded within oversight power. These precedents show the US is capable of acknowledging that governance under technological acceleration requires more than awareness of risk. It requires durable institutions capable of translating technical knowledge into binding constraints. When regulatory parameters are themselves technical artifacts, consultation is insufficient, and embedded epistemic capacity is required.
To make legislative technical fluency operational rather than rhetorical, Congress would need:
- Statutory permanence insulated from partisan volatility
- Independent authority to audit and evaluate frontier AI systems
- Formal integration of technical analysis into regulatory drafting and threshold-setting
- Specialized pay authorities and rotational fellowships to attract frontier expertise
- Protected funding streams to sustain long-horizon analytic capacity
If democratic institutions intend to remain authors of the systems that increasingly structure economic, military, and informational life, they must build internal capacity commensurate with those systems’ complexity. Anything less concedes structural authority to the regulated domain itself.
2. Legible Regulatory Thresholds
It's critical that abstract risk be translated into operational criteria, yet current approaches reveal how difficult and politically fraught the process is. In the United States, governance frameworks often hesitate to define firm thresholds, balancing concerns about innovation with the reality that fixed limits can quickly become outdated.
Existing efforts reflect this tension. The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence ties reporting requirements to large-scale training runs, but compute is only a proxy for risk. Comparable capabilities can increasingly be achieved with less compute, while more direct capability-based evaluations remain difficult to standardize and verify. Even proposals like SB 1047 gesture toward hybrid models without fully resolving this gap.
The result is a system that is legible but incomplete: measurable thresholds that do not fully capture risk, paired with risk-relevant evaluations that lack enforcement infrastructure. In practice, thresholds risk functioning more as signals of intent than as mechanisms of control.
This produces an important question: which is more dangerous, imperfect enforceable thresholds or no federally enforceable thresholds at all? If thresholds are critical to maintaining faith in a system that governs catastrophic risk, then the absence of clear federal standards potentially creates more risk than imperfect boundaries ever would. Establishing baseline, nationwide thresholds, however provisional, would likely create a foundation for oversight, coordination, and accountability that does not currently exist.
These thresholds should be explicitly designed to evolve. Imperfect but adaptive thresholds, credibly enforced, are far more effective than theoretically optimal ones that are never implemented. Only by prioritizing deployment over precision can governance begin to match the scale and urgency of the risk.
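As a rough illustration of what an adaptive, hybrid threshold could look like, the sketch below combines a compute proxy with capability-evaluation triggers. The specific numbers and evaluation names are assumptions made for illustration (the compute figure is on the order of the reporting trigger in the 2023 Executive Order; the capability cutoffs are placeholders, since standardized evaluations do not yet exist):

```python
from dataclasses import dataclass, field


@dataclass
class ModelProfile:
    """Hypothetical disclosure record for a frontier training run."""
    training_compute_flop: float
    capability_eval_scores: dict[str, float] = field(default_factory=dict)


# Illustrative values only, intended to be revised as evaluations mature.
COMPUTE_TRIGGER_FLOP = 1e26
CAPABILITY_TRIGGERS = {"autonomous_replication": 0.5, "cyber_offense": 0.5}


def oversight_required(profile: ModelProfile) -> bool:
    """Hybrid threshold: crossing either the compute proxy or any
    capability-evaluation cutoff triggers reporting and oversight."""
    if profile.training_compute_flop >= COMPUTE_TRIGGER_FLOP:
        return True
    return any(
        profile.capability_eval_scores.get(name, 0.0) >= cutoff
        for name, cutoff in CAPABILITY_TRIGGERS.items()
    )


# A run below the compute proxy can still trigger oversight on capabilities,
# which is exactly the gap a compute-only threshold leaves open.
run = ModelProfile(training_compute_flop=3e25,
                   capability_eval_scores={"cyber_offense": 0.7})
print(oversight_required(run))  # True
```

The design choice being illustrated is that the compute proxy stays legible and verifiable while the capability triggers carry the risk-relevance, and both sets of values are meant to be updated as evidence accumulates.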
3. Delegated Expertise with Public Accountability
Durable governance requires delegated expert institutions capable of accumulating technical knowledge over time. AI oversight bodies, whether within NIST, a potential AI Safety Institute, or internal corporate teams, need to conduct evaluations, update standards, and adapt to empirical evidence.
Yet delegation is currently uneven. Companies often retain key evaluation frameworks internally, producing voluntary disclosures with little independent verification. Oversight bodies in government are limited in authority, funding, and statutory mandate.
To truly preserve collective efficacy, delegated expertise has to be empowered, transparent, and accountable. Enforcement mechanisms need to be credible. Oversight cannot be an aspirational function or a public relations gesture; it must visibly constrain high-risk activity and maintain public and private confidence.
4. Incremental Wins and Demonstrable Control
Efficacy is reinforced by visible, tangible success. Concrete measures like standardized safety evaluations, mandatory reporting, and controlled deployment of high-autonomy systems demonstrate that oversight works.
Currently, many initiatives are aspirational. Pilot programs often lack public reporting, deployment oversight is uneven, and evaluations are limited in scope. Governance must prioritize interventions that are small enough to be actionable but meaningful enough to be observable, strengthening confidence in both policymakers and the public.
5. Framing Governance as Stewardship Rather Than Panic
Fear-based rhetoric can generate short-term urgency but risks long-term erosion of engagement if solutions don’t appear powerful enough. Framing AI governance as strategic stewardship emphasizes responsible management, national capability, infrastructure modernization, economic resilience, and human dignity.
While some public messaging attempts this (e.g., the “safe, secure, trustworthy AI” framing in U.S. policy), alarmist narratives remain prevalent in both industry and media discourse. Additionally, the public organizations that do try to lead with stewardship narratives are often still not broadly trusted by the public.
Institutions have to consistently reinforce stewardship to maintain cognitive and coalitional breadth and avoid the traps of panic-driven policy, and even that practice will likely only be effective in combination with greater public faith in the system itself.
6. Sustained Public Demand Without Burnout
Public engagement is critical, but AI’s abstract risks complicate mobilization. Transparency mechanisms like the public reporting of model capabilities, evaluation outcomes, and regulatory decisions can maintain attention without escalating existential alarm.
Current disclosure practices are often voluntary, inconsistent, and difficult for the public to interpret. To sustain engagement, these mechanisms must make oversight legible, understandable, and linked to tangible action, rather than producing anxiety or perceived helplessness.
7. Toward a Hybrid Model
A hybrid governance model synthesizes these elements:
- Legislative technical fluency embedded in institutional processes, not episodic advisory panels
- Delegated expert enforcement institutions with statutory authority, funding, and transparency
- Clear, adaptable, and verifiable regulatory thresholds
- Framing strategies that reinforce agency rather than panic
- Public engagement tied to observable oversight outcomes
This model doesn’t eliminate uncertainty, nor does it require predictive perfection. Its goal is to sustain perceived efficacy under abstract, high-consequence threat.
Across prior technological regimes, limited technical understanding was survivable when institutions were designed to learn and when publics believed governance mechanisms were actionable. AI demands a similar architecture… but with deliberate attention to the credibility, visibility, and enforceability of governance, both nationally and within organizations.
V. Conclusion
Fear is not a variable to eliminate in AI governance, but one to understand and properly calibrate. The central lesson is neither that fear should be suppressed, nor that universal technical mastery is required. Rather, effective democratic governance depends on aligning perceived severity with a credible sense of agency. When institutions communicate both the scale of risk and the plausibility of meaningful response, collective efficacy becomes possible.
This argument has clear limitations. Increasing technical fluency among lawmakers is inherently slow and uneven, and this analysis draws primarily from U.S. historical examples and a selective range of psychological research. Broader comparative work and deeper empirical grounding could refine or challenge these conclusions. However, the aim here is not to provide a comprehensive account, but to surface a framework for understanding how psychological dynamics shape governance under conditions of sustained, high-consequence risk.
The implication is straightforward: the challenge of AI governance is not only technical, but psychological. It requires structures that make risk legible without overwhelming, and responses credible without overstating control. Without this alignment, even well-designed policies may fail. Let's prioritize making sure we're giving them (and ourselves) a chance to succeed.
Thank you so much for reading! Please, feel free to leave any critiques, comments, or general thoughts.
