Today we are launching our organization, Generally Intelligent, and open sourcing Avalon, part of our research environment, to enable the academic research community to make progress on understanding neural networks and creating safer, more robust RL agents.
Generally Intelligent is an AI research lab focused on developing a better theoretical and practical understanding of deep learning, neural networks, and reinforcement learning agents. We believe that a better scientific understanding of these techniques is critical to building safe AI systems. We're excited about approaches like that of Chris Olah at Anthropic, as well as other more theoretical work. For more on our approach to safety, see our website.
We're also open sourcing Avalon today, one of our first projects. Avalon is a fast, accessible simulator designed specifically for reinforcement learning. We hope that Avalon will enable academic labs to contribute to questions about generalization, robustness, and the fundamental principles of agentic AI systems in a safe setting that is not intended to transfer capabilities to the real world. Because academic labs often have access to far less compute, our hope in open sourcing this simulator is to enable more fundamental scientific research without pushing the capabilities frontier or requiring very large, compute-intensive models. For more about Avalon, see our launch post.
At a high level, our mission is to elevate the human condition by creating safe, robust, and capable AI systems. We expect to have much more to say about our approach to safety and to the development of robust AI agents over the next few months, but if you have particular questions or topics you're curious about, we'd love to hear about them in the comments!
And if you're interested in helping to develop more robust, safe, generally capable AI systems, we're hiring! We also have some non-engineering, safety-related roles.