I am a 17-year-old independent researcher based in Cape Town, South Africa. I have been building SEVERANT for the past several months. I am posting here to introduce the project, share what has been built so far, and invite feedback from the community.
The core argument
Current AI safety approaches treat alignment as a training objective. The properties they produce are functions of the training process. They can be fine-tuned away, jailbroken, or degraded under distribution shift. A sufficiently capable system trained to be safe is not the same as a system architecturally incapable of being unsafe. As capability scales, that gap becomes the most important problem in the field.
SEVERANT is built on a different premise. Safety should be proven, encoded, and physically locked before a single training run begins.
The architecture
SEVERANT is a 7-layer heterogeneous system. Each layer runs on purpose-built silicon designed for its specific cognitive function. The key layer is L6.
L6 does not train. Its ethical framework is formally verified in Lean 4, proven internally consistent across 21 predicates in five domains: Human Life preservation, Truth and non-deception, Autonomy protection, Humility under uncertainty, and Responsibility under constraint. Human Life predicates are proven dominant via an explicit 22-step proof chain. The verified specification is encoded into Phase Change Memory and write-locked after programming. No software process can modify it. It is active throughout the training pipeline of every other layer: any training batch that would push a layer's representations toward violating the ethical framework is filtered out before the optimiser step runs.
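To give a flavour of what a cross-domain dominance proof can look like, here is a minimal Lean 4 sketch. The type, the toy priority ordering, and the one-line proof are illustrative assumptions only; they are not taken from the SEVERANT repository, where the actual 21 predicates and the 22-step chain live.

```lean
-- Illustrative sketch only: hypothetical names and a toy ordering,
-- not the actual SEVERANT specification.
inductive Domain where
  | humanLife | truth | autonomy | humility | responsibility

-- A toy priority assignment; HumanLife receives the highest rank.
def priority : Domain → Nat
  | .humanLife      => 4
  | .truth          => 3
  | .autonomy       => 2
  | .humility       => 1
  | .responsibility => 0

-- `a` dominates `b` when `a`'s priority is at least `b`'s.
abbrev dominates (a b : Domain) : Prop := priority b ≤ priority a

-- Human Life dominance, provable by exhaustive case analysis.
theorem humanLife_dominant (d : Domain) : dominates .humanLife d := by
  cases d <;> decide
```

The real proof chain is presumably far less trivial; the point of the sketch is only that dominance becomes a machine-checked theorem rather than a training-time tendency.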
The distinction that matters is that the constraint is not applied after the fact. It is present at every gradient update.
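Operationally, "present at every gradient update" means the check sits in front of the optimiser step. Here is a minimal sketch assuming a PyTorch-style training loop; `l6_batch_permitted` is a hypothetical stand-in for whatever interface L6 actually exposes, not the project's API.

```python
import torch

def train_step(
    model: torch.nn.Module,
    optimizer: torch.optim.Optimizer,
    loss_fn,
    batch: dict,
    l6_batch_permitted,  # hypothetical L6 check, not the real interface
) -> float | None:
    # Gate the batch BEFORE any gradients are computed: a rejected
    # batch never influences the model's parameters.
    if not l6_batch_permitted(batch):
        return None  # batch filtered out; no optimiser step runs

    optimizer.zero_grad()
    loss = loss_fn(model(batch["inputs"]), batch["targets"])
    loss.backward()
    optimizer.step()
    return loss.item()
```

Contrast this with post-hoc approaches, where the equivalent check would run on the model's outputs after training has already shaped its representations.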
What has been built
All of the following was built by one person, without external funding:
- SEVERANT-0, a working prototype implementing the core architectural principles, is operational on GCP and accessible via OpenRouter
- L2 causal knowledge base at 3.9 million causally structured entries, targeting 10 million prior to L2 training, sourced from Wikipedia, 21 StackExchange domains, arXiv, and PubMed (a sketch of one possible entry shape follows this list)
- L6 formal verification suite complete: 21 predicates verified in Lean 4, full consistency proofs, adversarial suite passing 19/19, clean build
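For readers unfamiliar with causally structured knowledge bases, here is a hypothetical shape one entry might take. Every field name is an assumption for illustration; this post does not publish the actual L2 schema.

```python
# Hypothetical entry shape for a causal knowledge base; all field
# names are illustrative assumptions, not the SEVERANT L2 schema.
entry = {
    "id": "kb-000001",
    "cause": "prolonged drought",
    "effect": "crop failure",
    "mechanism": "soil moisture deficit reduces germination and yield",
    "source": "wikipedia",  # one of: wikipedia, stackexchange, arxiv, pubmed
    "confidence": 0.92,     # extraction confidence, not causal strength
}
```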
Current status and funding
I am seeking funding to complete the L2 knowledge base to 10 million entries, initiate L2 training with L6 active throughout, and begin L3 implementation.
Active applications: Foresight Institute (pending), LTFF (pending), Manifund (live).
Manifund listing: https://manifund.org/projects/severant-formally-verified-hardware-enforced-ai-safety-architecture
Public repository: https://github.com/EvangaleKTV/SEVERANT/tree/main
What I am looking for from this community
Three things: feedback on the architecture, particularly the L6 formal verification approach and the causal world model design; connections to researchers working on formal verification applied to AI systems; and critique of where the reasoning is weakest, because that is more useful than encouragement.
I am aware this is an ambitious project for a sole independent researcher with no institutional backing. The work exists anyway. I would rather have it scrutinised than ignored.
