J.S.

Human Morality, Machine Ethics, and AI Alignment Researcher
8 karma · Retired

Bio

I have spent most of my life studying how humans make moral choices. After decades of work on human moral dilemmas, I brought that experience into AI alignment and machine ethics, where I am building a framework for moral verification grounded in years of thinking about how people actually reason about right and wrong.

I am here to contribute, learn, and connect with people trying to solve the same hard problems in AI safety.

How others can help me

Feedback, Collaboration, Insights, Guidance, Funding.  

Comments (2)

Maybe it's just me, but this looks like a win for Anthropic. Bad actors will do bad things, but I wonder why they would choose Anthropic over a Chinese AI of their own, where I would assume the safeguards are less rigorous, at least toward their own state actors, no? I had Claude quickly dig this up for me, and from what he said, the activity was detected as far back as mid-September 2025, which would indicate the timing of this release was intentional. Anthropic chose to announce during peak AI governance discussion, framing the story to emphasize both the threat and the defensive value of their systems. The gap between the September detection and the November announcement gave them time to craft a narrative that positions Claude as both the problem and the solution, which is classic positioning for regulatory influence. Nothing wrong with that, I suppose...?