htetkokonaing

Independent Researcher

Bio

I am an independent researcher interested in AI safety, AI governance, and runtime oversight. My current work focuses on the Signal-Time-Authority (STA) framework, a pre-commitment controllability lens for asking when AI oversight is still control-relevant before harmful actions become committed.

I hold a B.A. in English, and I am currently developing public research notes, toy simulations, and explanatory materials around runtime AI oversight, authority governors, tool-use agents, output release gates, and physical AI safe-stop systems.

How others can help me

I would welcome feedback on the Signal-Time-Authority (STA) framework, especially from people familiar with AI safety, AI governance, runtime assurance, control systems, safety cases, or agent/tool-use systems.

I am especially interested in criticism of where the framework may fail, whether the Signal-Time-Authority-Policy framing misses important governance dimensions, and how to better evaluate pre-commitment control in real AI deployments.

How I can help others

I can discuss ideas related to runtime AI oversight, pre-commitment control, authority governors, tool execution gates, output release gates, multi-agent cascade risk, and AI governance framing.

I can also share my STA papers, toy simulation materials, diagrams, and draft explanations if they are useful for someone thinking about AI safety, governance, or oversight design.