I have just completed the AGI Strategy course by BlueDot Impact, and I wanted to share a few reflections, especially from the standpoint of someone working on science and innovation in Nigeria. The course is essentially an entry point into one of the most important questions of our time: how do we shape the trajectory of advanced AI systems so that things actually go well? In roughly 25 hours, it moves quite deliberately from understanding the landscape to thinking about what one can do, not just what one can say.
A few things stayed with me:
1. Strategy, not abstraction: What I appreciated most is that the course treats AGI not just as a technical issue but as a strategic one. Incentives, coordination failures, institutional capacity: these are the real drivers. It forces you to think less about models in isolation and more about the decision-making systems around them.
2. From risk to intervention: There is a strong push toward actually building something. You’re not allowed to just sit with the ideas; you have to translate them into a project, whether that’s research, a policy proposal, or an organization. That shift from thinking to doing is important.
3. The narrowing window: There’s an underlying sense, never overstated but quite clear, that the window to shape outcomes may not remain open for long. That naturally pushes you toward more focused, high-leverage work.
4. A structural blind spot, the Global South: Many AGI strategy discussions assume strong institutions, reliable infrastructure, and coordinated governance systems. That assumption simply doesn’t hold in many parts of the world.
Practical questions
From where I sit, this raises some practical questions:
- What does “AGI preparedness” look like in infrastructure-constrained environments?
- How do we avoid creating governance blind spots in regions without centralized systems?
- What would low-cost, decentralized monitoring systems look like in these contexts?
My project coming out of the course
For the course deliverable, I worked on a project that tries to engage directly with this gap:
https://docs.google.com/document/d/1N1eI0STfS-Yfm66h64LqM-EmJMCCMuaGDIc6CsKaK3c/edit?tab=t.0#heading=h.hsyis9sy4428
The core idea is to build sentinel detection systems in surveillance-blind regions, drawing on:
- Air metagenomics
- Decentralized biosurveillance
- AI-enabled anomaly detection
The motivation is quite straightforward. In many places there is no wastewater infrastructure and no centralized monitoring, so entire regions become invisible to early warning systems. That’s already a problem in biosecurity, and it may become a bigger one in other domains. I’d be glad to hear from anyone interested in collaborating on this or in funding a pilot.
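To make the anomaly-detection piece a bit more concrete, here is a toy sketch of what the simplest version might look like at a single low-cost sampling node. Everything in it is an assumption rather than part of the project: the read counts are simulated, the rolling-median/MAD detector is just a stand-in for whatever model a real pilot would use, and the window and threshold values are arbitrary illustrative choices.

```python
# Toy sketch: flagging anomalous pathogen-marker read counts from one
# low-cost air-sampling node. Purely illustrative: simulated data and a
# simple robust baseline, not a production biosurveillance pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily read counts for one pathogen marker: noisy background,
# with a small injected "outbreak" signal in the final week (assumption).
days = 120
counts = rng.poisson(lam=20, size=days).astype(float)
counts[-7:] += np.linspace(10, 60, 7)  # hypothetical emerging signal

def flag_anomalies(series, window=28, threshold=4.0):
    """Flag days that deviate strongly from a trailing robust baseline.

    Uses a rolling median and median absolute deviation (MAD) so a single
    spike does not inflate the baseline. The threshold is a modified
    z-score cutoff; 4.0 is an arbitrary illustrative choice.
    """
    flags = np.zeros_like(series, dtype=bool)
    for t in range(window, len(series)):
        history = series[t - window:t]
        med = np.median(history)
        mad = np.median(np.abs(history - med)) or 1.0  # avoid divide-by-zero
        score = 0.6745 * (series[t] - med) / mad       # modified z-score
        flags[t] = score > threshold
    return flags

alerts = flag_anomalies(counts)
print("Days flagged:", np.flatnonzero(alerts))
```

The point this is meant to illustrate is that the detection logic itself can be cheap and run locally on modest hardware; the genuinely hard parts are the sampling and sequencing infrastructure and getting a flagged signal to someone who can act on it.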
Where this leads
I find myself increasingly drawn to work at the intersection of the following:
- AGI strategy and AI risk
- Biosecurity and early warning systems
- Infrastructure for underrepresented regions
More broadly, I think distributed sentinel systems could become a useful general model for risk detection, not just in biology, but potentially beyond.
Open question
If AGI is ultimately a global coordination problem, then leaving large parts of the world out of the picture may not just be an equity issue; it may be a strategic one. I’d be interested to hear from others thinking along these lines, particularly those working outside the usual US/EU policy and lab ecosystems. If helpful, I’m also open to feedback on the project itself or conversations around collaboration.
