
 AI safety funders invest substantial resources (overview) but don't require grantees to document how they make decisions. Simple governance documentation could improve decision quality and create a shared library of what works. LLMs now make this tractable. 


The Gap

When you apply for AI safety grants (examples: Open Philanthropy's Technical AI Safety RFP and AI Governance RFP), funders ask for:

  • What you'll research
  • How you'll spend money
  • What you accomplished

What they don't ask:

  • How you decide what to work on
  • How you handle safety concerns
  • Whether you're drifting from priorities
  • How you resolve disagreements

For most research this doesn't matter. For AI safety work with dual-use risks and catastrophic downside, it seems important.

This Already Exists in High-Stakes Fields

This isn't novel. Fields where mistakes are catastrophic already require governance documentation:

Clinical trials: Documented decision logs, safety monitoring, and audit trails (Good Clinical Practice standards)

Nuclear facilities: Safety analyses, decision records, and justifications for changes

Space missions: Multi-agency safety reviews with documented objections and resolutions

AI safety research has comparable stakes. Why not comparable documentation?

(Disclaimer: I haven't worked in these fields; this is based on public documentation. If you have domain expertise, corrections are welcome.)

Why Now?

LLMs can help:

  • Brainstorm objections you might miss
  • Suggest mitigations
  • Stress-test decisions
  • Draft documentation

What used to take hours now takes minutes, which is what makes this newly tractable. You would probably want to standardize and document the prompts, both to keep the output concise and to make records comparable across teams.
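To make this concrete, here is a minimal sketch of what a machine-readable decision record and a documented, reusable objection prompt could look like. All names here (`DecisionRecord`, `objection_prompt`, the example fields) are hypothetical illustrations, not a proposed standard; the actual LLM call is left out since any provider would do.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DecisionRecord:
    """One documented research decision, with objections and mitigations.

    Hypothetical schema for illustration only.
    """
    decision: str
    rationale: str
    objections: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    recorded_on: str = field(default_factory=lambda: date.today().isoformat())

def objection_prompt(record: DecisionRecord) -> str:
    """Build the documented, reusable prompt for LLM-assisted red-teaming."""
    return (
        "You are reviewing a research governance decision.\n"
        f"Decision: {record.decision}\n"
        f"Rationale: {record.rationale}\n"
        "List the three strongest objections, one sentence each."
    )

# Example entry for the shared library
record = DecisionRecord(
    decision="Publish full fine-tuning code for the eval suite",
    rationale="Reproducibility benefit outweighs misuse risk for this artifact",
)
print(objection_prompt(record))
print(json.dumps(asdict(record), indent=2))  # the shareable log entry
```

A funder could ask for nothing more than a folder of JSON entries like this per grant period; the objections and mitigations fields are filled in after the LLM pass and a human review.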

Why This Matters

This would build up over time: 

  • A shared library of real objections/mitigations
  • Patterns of what works/doesn't
  • Common failure modes visible
  • New teams learn from others

Better decisions on high-stakes research. This seems worth the overhead.

Questions for the Community

Funders: Would you consider this for 2026 grants? Could you coordinate with other major funders?

Researchers: Would this help or hinder your work? What format would be useful vs annoying?

Governance folks: What am I missing? What could go wrong?
