Travis Lee

Independent researcher / writer (AI governance & safety) @ Independent
0 karma · Joined · Working (15+ years) · South East Asia
humansovereigntyai.substack.com/

Bio

I work on AI governance and systems architecture, with a focus on how human authority, oversight, and decision-making can be preserved as AI systems become more autonomous and agentic.

My current work explores architectural approaches to governance that operate within systems rather than only through external policy or post-hoc controls. I’m particularly interested in questions around identity, agency, and jurisdiction in advanced AI.

I’m here to learn, test assumptions, and stress-test ideas with people thinking seriously about AI risk and long-term outcomes.

How others can help me

I’d appreciate thoughtful critique on governance approaches for agentic AI, pointers to relevant research (technical or institutional), and discussion on where architectural governance might fail or create unintended risks.

How I can help others

I’m happy to contribute perspectives on AI governance architecture and on identity and agency in agentic systems, and to provide structured feedback on early-stage ideas or drafts related to AI safety and oversight.

Comments
1

Thanks for the careful treatment of objections here. I appreciate the effort to make the trend-continuation case more explicit and to stress-test it.

Conditional on those assumptions, I agree that very rapid growth scenarios look plausibly underweighted in many discussions. One point I’m still uncertain about, though, is whether most intelligence-explosion framings place too much weight on capability scaling and too little on the structure of authority and autonomy that emerges during scaling.

In particular, many objections (and rebuttals) seem to assume governance, alignment, and social response are downstream variables that can be adjusted in parallel or after the fact. It’s not obvious to me that this separation is stable. Behaviour and impact may depend less on raw intelligence levels and more on what kinds of autonomy, persistent objectives, and decision authority are permitted to form as systems scale.

If that’s right, then some of the most important uncertainties might sit upstream of the explosion itself, rather than being things we can safely adapt to once growth accelerates.

I’d be curious how you think about this: do you expect constraints on autonomy and authority to emerge adequately in parallel with explosive growth, or do you think they need to be largely resolved beforehand for the optimistic trajectories to hold?