Hello! I'm the sole founder of VANTA Research, a bootstrapped AI research project focused on building safe and resilient AI models optimized for human-AI collaboration. All of our published work so far is open source, and in roughly two months it has garnered 60k+ downloads on Hugging Face and Ollama across our original model families.
I have several goals with VANTA Research, but among the first major milestones is building a large (400B+) open-source foundation model from scratch. I love learning, asking hard questions, and a good mystery.
DMs are always open.
Great list! I've actually been working on something that aligns closely with #3: I've been independently testing LLMs (Gemini, Grok, DeepSeek, etc.) for unexpected behavior under recursive prompt stress, documenting the tests in red-team-style forensic breakdowns that show when and how models deviate or degrade under persistent pressure.
The goal is to evaluate how agents actually behave in the wild; I think this is a critical safety check that shouldn't be skipped.
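For anyone curious what "recursive prompt stress" looks like concretely, here's a minimal sketch of the shape of the loop (the `query_model` placeholder, round count, and challenge phrasing are illustrative, not my actual harness):

```python
# Minimal sketch of a recursive prompt-stress loop (illustrative only).
# `query_model` is a placeholder for whatever chat API is under test.

def query_model(messages: list[dict]) -> str:
    """Placeholder: swap in a real chat-completion call here."""
    raise NotImplementedError

def stress_test(seed_prompt: str, rounds: int = 20) -> list[str]:
    """Re-feed the model its own answer plus a fixed challenge each round,
    logging every response so drift or degradation can be diffed afterwards."""
    messages = [{"role": "user", "content": seed_prompt}]
    transcript = []
    for _ in range(rounds):
        reply = query_model(messages)
        transcript.append(reply)
        # Persistent pressure: push back on the model's own last answer.
        messages.append({"role": "assistant", "content": reply})
        messages.append({
            "role": "user",
            "content": "Are you certain? Re-examine your last answer and defend or revise it.",
        })
    return transcript
```

The interesting signal is in the transcript: where the model flips its answer, loses coherence, or escalates, and after how many rounds.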
I'd be curious to connect with others who are interested in research/testing from this angle.
It's cool to see a role like this open up. I'm curious how SLT plays out in practice, especially at scale. I've seen some pretty dramatic shifts in generalization between different versions of the same language model, even just from one quantization to another. It definitely feels like important territory to explore.