Independent AI safety researcher. Built BiasClear, an open-source tool for detecting structural persuasion in LLM outputs. 20+ years in banking operations and risk management. Published "Persistence of Information Theory" (DOI 10.5281/zenodo.18676405).
Seeking feedback on BiasClear's detection architecture, introductions to researchers working on output safety or AI evaluation frameworks, and compute resources for cross-model validation studies.
Offering an institutional risk management perspective on AI deployment, operational audit infrastructure design, and practical experience deploying AI safety tooling.