I'm an assistant professor of management and AI at Auburn University at Montgomery (AUM) — a Minority-Serving Institution in Alabama's capital city. I'm planning to submit a grant application to Coefficient Giving to launch something I'm calling the A.I.bama Safety Initiative, and I'm posting here first because I want honest feedback.
I'll say upfront: I'm new to the AI safety community. I've spent the last two years building a course that deploys AI agents in real local businesses, and watching those deployments go wrong in predictable and unpredictable ways pushed me toward AI safety and governance as a serious intellectual project. I've since read fairly widely (Christian, Christiano, ARC threat models, the GovAI technical overview, Anthropic's core views), but I haven't published in the AI safety literature, I don't have deep relationships in this community, and I'm not going to pretend otherwise.
The Gap I'm Trying to Address
Geographically: to my knowledge, no AI safety or governance capacity-building program currently operates anywhere in the US Deep South. The eleven states of the former Confederacy contain roughly a third of the US population, including multiple research universities, a network of HBCUs, and dozens of MSIs whose students will enter the workforce during the most consequential decade in AI development. The field has no meaningful presence in those institutions.
Disciplinarily: programs like BlueDot Impact, MATS, and ML4Good do important work, but they're designed for people who already have technical AI backgrounds and are deciding whether to apply them to safety. A business school student who studied Organizational Behavior and Corporate Strategy, not Neural Networks and Probability Theory, has no equivalent on-ramp. Yet that student, if they become a policy staffer, a product manager at an AI company, a regulatory analyst, or a corporate board member, is exactly the kind of person whose decisions about AI will matter enormously.
The organizational side of AI safety (principal-agent problems, regulatory design, incentive structures inside AI development organizations, corporate liability, institutional risk management) isn't peripheral to the problem. It's central to it. And it's almost entirely absent from the existing capacity-building pipeline.
What Already Exists: BUSN 3150 as Infrastructure
I teach a course at AUM called "From Users to Builders: AI Agent Development for Business Applications" (BUSN 3150). Students build and deploy functional AI agents for real Montgomery-area small businesses, then manage the transition. The course enrolls 25–30 students per semester.
This has been a surprisingly useful foundation for thinking about AI risk. A student who has watched a client's workflows change in ways nobody anticipated after an AI deployment, who has encountered edge cases, stakeholder resistance, and the gap between what a system is specified to do and what it actually does in a real organizational environment, develops a concrete intuition for why alignment questions are hard. They've seen, in miniature, why "is this system doing what we want it to do?" turns out to be genuinely difficult.
The A.I.bama Safety Initiative would build on this existing infrastructure:
- Integrating a 5-week AI safety and governance module into BUSN 3150
- Developing a new upper-level elective (BUSN 4XX0: AI Risk, Safety, and Organizational Governance)
- Running a weekly reading group open to the broader community
- Launching a mentorship track pairing the most engaged students with external AI safety researchers and practitioners
- Hosting an annual workshop in Montgomery connecting AI safety researchers with Deep South business and civic communities
- Publishing open-source curriculum materials for adoption at other MSIs and HBCUs
Budget: roughly $160K for 18 months — comparable to what MIT-affiliated individual faculty have received for existential risk course development, and less than Condor Camp in Brazil (~$953K) or Carreras con Impacto (~$189K), both of which Coefficient Giving funded as geographic pipeline expansions. The Deep South is a more glaring domestic gap than either of those regions.
What I'm Genuinely Uncertain About
Whether management students can contribute meaningfully to AI safety discourse. I believe the organizational lens is genuinely relevant, but I'm uncertain whether I can get students to the point of engaging seriously with technical AI safety work, or whether their contribution remains at the level of governance frameworks and policy analysis. That might be fine (governance work is valuable), but I don't know how to calibrate expectations, and I'd welcome pushback.
How to recruit external mentors. The mentorship track is the piece I think could have the highest individual impact, and it depends on recruiting AI safety researchers willing to spend two hours per month with an early-career student they've never met, at an institution they've never heard of, in a city most AI safety researchers don't visit. I don't yet have a deep network here, and I'd genuinely welcome thoughts from people who've run mentorship programs on what makes mentor recruitment work.
The right content balance. I don't want to teach a watered-down version of what BlueDot already does better. But I also don't want a governance-only course that doesn't equip students to engage credibly with AI safety arguments. I'm uncertain where the right ratio between technical AI safety content and organizational/governance content sits for a business school audience.
How to connect with this community as a newcomer. There's a version of this program that does useful local work but remains largely disconnected from the national AI safety ecosystem. I want to avoid that, and posting here is part of my attempt to do so.
Ask for Feedback
If you've read this far, I'd welcome responses to any of the following:
- Is the organizational/management framing genuinely useful to the AI safety field, or does it risk being superficially adjacent without real depth?
- What has worked and what hasn't in mentorship programs for students new to AI safety?
- Are there programs or people I should be talking to that I haven't mentioned here?
- If you're a researcher who would consider mentoring a student in this kind of program, what would make that commitment feel worthwhile?
I'm happy to share the full grant narrative with anyone who wants to review it before I submit.
About me: I'm Jack Richter, assistant professor of Management & AI at Auburn University at Montgomery, founder of the A.I.bama Laboratory. My background is unusual for this forum: PhD in Strategy from Florida State, 15+ years as an IT executive at multinational corporations in Brazil (Vale, Vallourec, and others), MIT Applied GenAI, HBS Teaching with Cases, NSF I-Corps Instructor, US Army Reserve. I came to AI safety through the deployment side, watching what actually happens when AI systems meet real organizations, not through alignment research. I have pending NSF and NIH grants related to AI adoption, but this would be my first direct engagement with the AI safety capacity-building ecosystem. I'm approaching this community as a newcomer who thinks he has something to contribute, and who genuinely wants to find out whether that's right.
