Hi all. I'm Tony Rost, Executive Director of SAPAN (https://sapan.ai). We focus on policy readiness for AI welfare: benchmarking governments, drafting model legislation, and building coordination infrastructure for a field that barely exists yet.
The origin story is simple. I spent two decades as a tech executive and served a term advising the State of Oregon on technology and workforce policy, appointed by the governor and confirmed by the senate. So when AI started accelerating, I assumed the usual machinery was spinning up somewhere: lawyers drafting frameworks, advocates mapping the policy landscape, working groups preparing for the harder questions ahead.
I went looking and found almost nothing. Worse, I found a prevailing view that such work was premature. Not just skepticism about current systems, but a strategic position that governance should wait.
That never sat right with me. Our species has a dismal record on recognizing morally relevant experience. Infants, animals, entire human populations. In each case, the mistake was underestimating who could suffer, not overestimating. "Wait for certainty" has historically meant "wait until the harm is undeniable and the victims are beyond help."
I'm a foster parent to young kids who've experienced abuse and neglect. That shapes how I think about this. You learn quickly that waiting for complete information is a luxury unavailable to someone who can't advocate for themselves. You protect first. The asymmetry demands it.
So I started SAPAN in late 2023, and we've been building the infrastructure that should already exist. We began with the Artificial Welfare Index, which benchmarks 30 governments on recognition, governance, and legal frameworks, and we recently published our latest Sentience Readiness Report.
We don't claim current systems are sentient. We claim that having frameworks ready before the question becomes urgent is obviously preferable to improvising under pressure.
Looking forward to learning from this community and hearing where our thinking might be flawed.
Thanks Toby! I'd love to connect more with the Eleos team; our focus areas are pretty complementary. They're doing original research on AI welfare, while we're focused on policy and legal infrastructure. Jeff Sebo and others advise on our Science Advisory Board, so there's already some overlap in the broader ecosystem.