
Cross-posted from my personal notes. I'm sharing this because I think the EA/AI safety community needs to hear it, and because I've been living it.

I lead AI safety work in Nigeria. When I tell people this, the most common reaction is a polite pause, the kind that says: that's interesting, but is that really AI safety?

I want to argue that it is. And that the gap it represents is one of the most neglected problems in the entire AI safety ecosystem.

The Situation on the Ground

I'm based in Ibadan, Nigeria. I am part of the AI Safety Fundamentals: AI Governance cohort for Nigeria under AI Safety Nigeria, and I hold an AI Lead appointment under the ITU — the UN's agency for ICT. In my day-to-day work, I deploy AI systems in low-resource healthcare settings, build multilingual NLP tools for communities that global AI largely ignores, and try to translate AI safety discourse into something meaningful for African researchers and policymakers.

Here's what I observe from that vantage point:

The global AI safety community is, with very few exceptions, a Western conversation. Its canonical texts were written in Oxford and Berkeley. Its conferences happen in San Francisco and London. Its implicit assumptions about who builds AI, who governs it, and who is harmed or helped by it are shaped almost entirely by high-income, high-resource contexts.

Meanwhile, Africa has 1.4 billion people, the world's youngest and fastest-growing population, and some of the most acute governance vacuums on the planet. Frontier AI is not arriving after Africa figures out its institutions. It is arriving now, into contexts with limited regulatory capacity, under-resourced civil society, and almost no AI safety literacy among the researchers, policymakers, and civil servants who will have to manage its consequences.

This is not a small gap. It is a civilisational-scale oversight.

Why This Is an AI Safety Problem, Not Just an "AI for Good" Problem

I want to be precise here, because I think the distinction matters.

A lot of Global South AI work is about deploying AI to solve local problems: better crop yields, faster disease diagnosis, smarter financial inclusion. That work is valuable. But it is not what I'm describing.

What I'm describing is this: the safety and alignment of advanced AI systems will be shaped, in part, by the governance frameworks, regulatory norms, and institutional capacity that exist when those systems arrive. If Africa, a continent with 54 countries, significant geopolitical weight, and rapidly growing AI adoption, has no seat at that table, the frameworks we build will be incomplete. Worse, they may actively fail African populations in ways that go unnoticed because African researchers aren't in the room to flag them.

A few concrete examples of what I mean:

Alignment to whose values? The majority of RLHF and value alignment work uses predominantly Western annotators, Western-language corpora, and Western ethical frameworks. I work directly on this problem, building culturally situated NLP tools for Yoruba, Hausa, and Pidgin English communities. When I do this work, I encounter, repeatedly, the fact that what an AI system "learns" about mental distress, appropriate behaviour, or social norms from Western training data is often systematically wrong for West African contexts. This is not just an accuracy problem. It is an alignment problem. It is a question of whose values get encoded.

Governance vacuums as catastrophic risk factors. One underappreciated pathway to AI-related catastrophe runs not through a misaligned superintelligence but through the gradual erosion of meaningful human oversight in contexts where regulatory institutions are weak. Africa is full of such contexts. Authoritarian-adjacent governments are already adopting AI surveillance tools. Disinformation systems are already exploiting low-information-literacy environments. The slow-burn risk of AI undermining democratic institutions and human oversight is more acute here, not less, and almost no one in the safety community is working on it.

The absence of African voices at critical junctures. Right now, the norms, standards, and frameworks being negotiated at the ITU, the UN, and in bilateral AI agreements will shape the global AI governance architecture for decades. African countries participate in these processes with thin technical capacity and almost no exposure to AI safety thinking. I sit in some of these rooms. The gap is stark. And the window to build that capacity before the critical junctures pass is closing.


What I'm Actually Doing About It

I don't want this to be an abstract lament. Here's what the work looks like in practice:

I help run structured AI safety and governance cohorts in Nigeria, bringing together researchers, policymakers, and practitioners and walking them through alignment, oversight failure, and catastrophic risk. Most participants have never encountered this framing before. The response is consistently: why has nobody brought this to us?

I'm working on research that interrogates how AI systems encode culturally embedded representations of human experience, and what governance responsibilities arise from that. This sits at the intersection of technical alignment and AI welfare in a way that I think is genuinely underexplored.

I'm using my ITU role to inject AI safety thinking into intergovernmental policy dialogue, trying to ensure that "AI governance" in international fora doesn't just mean "economic regulation" but includes meaningful engagement with catastrophic risk.

None of this is easy. There's almost no funding for it. There's almost no community for it. The people doing AI safety work in Africa can be counted on two hands, and most of them are doing it alongside other work, with no dedicated support.  
