This is exactly the kind of response I was hoping the post would generate; thank you, genuinely. I was familiar with Cecil's work at ILINA and the African Hub at UCT, but Sumaya's CASA centre is new to me, and I am going down that rabbit hole right now. The Oxford AIGI connection is particularly interesting given the intergovernmental policy angle.
What strikes me reading these is that the ecosystem is more alive than it appears from the outside. The problem is not that the work does not exist; it is that it is not visible enough to the broader EA and AI safety community, which is part of what I was trying to address with the post. These efforts deserve to be in the same conversations as the Anthropic safety teams and the GovAI fellows, not operating in parallel universes.
I will reach out to Cecil and Sumaya directly. If anyone reading this thread is working on connecting these dots more systematically, building the network between African AI safety researchers and the global safety ecosystem, I would love to talk. That connective tissue is precisely what I am trying to build through AI Safety Nigeria, and collaboration is worth far more than duplication.
Thank you again for these pointers. This thread is already doing what good EA Forum threads should do.
Cross-posted from my personal notes. I'm sharing this because I think the EA/AI safety community needs to hear it, and because I've been living it.
I lead AI safety work in Nigeria. When I tell people this, the most common reaction is a polite pause, the kind that says: "That's interesting, but is that really AI safety?"
I want to argue that it is. And that the gap it represents is one of the most neglected problems in the entire AI safety ecosystem.
This is the right framing, and I think about it constantly. The retrofit approach, fine-tuning a Western-trained base model on local data, is better than nothing, but it is architecturally compromised from the start. You are trying to correct value misalignment at the surface while the deep structure of the model remains shaped by the corpus it was originally trained on. It is like translating a concept that does not exist in the target language and wondering why something is lost.
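To make the retrofit concrete, here is a minimal sketch of what that approach typically looks like in practice, assuming a HuggingFace-style stack (transformers, peft, datasets). The model name and corpus path are placeholders, not a real pipeline:

```python
# Minimal sketch of the "retrofit" approach: parameter-efficient
# fine-tuning of a Western-trained base model on local-language data.
# The model name and corpus file below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # pretrained overwhelmingly on English/Western text
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # many base tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains only small adapter matrices; the base weights, and the
# distributional assumptions baked into them, stay frozen.
model = get_peft_model(model, LoraConfig(
    task_type="CAUSAL_LM", r=8, lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
))

# Hypothetical local-language corpus, one {"text": ...} record per line.
corpus = load_dataset("json", data_files="hausa_corpus.jsonl")["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=corpus.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="retrofit-out",
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point the sketch makes is structural: everything the adapters learn is expressed in the representation space the original pretraining corpus produced. That is the surface-level correction I mean, and it is why no amount of local fine-tuning data changes what the deep layers already encode.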
On who is working on this seriously: ILINA under Cecil Abungu is doing some of the most rigorous thinking on African-context AI development rather than adaptation. The Masakhane community has been building African NLP infrastructure from the ground up for years and is probably the closest thing to what you are describing in practice. My own work on GENSCORE, which builds culturally situated mental health NLP for Hausa, Yoruba, and Pidgin English communities from lived-experience corpora rather than translated Western instruments, is a small piece of this puzzle applied to a specific domain.
But to be honest, no one is doing this at the scale the problem demands. The compute cost of training a frontier model from scratch puts that option out of reach for most Global South research groups without major institutional backing, which is part of why the governance and funding conversation matters as much as the technical one. If the resources to build genuinely diverse foundation models only flow to labs in San Francisco and London, the alignment problem remains a Western problem with a Western solution applied everywhere else.
This is worth a much longer conversation. What is the context for your question?