Safe Minds, Safe Machines
We are a collective of Africans passionate about reducing catastrophic risks from AI and shaping its future by contributing to AI safety research.
We conduct interdisciplinary research at the intersection of computer science, policy, and African studies.
Advancing foundational work in AI alignment, interpretability, and robustness, with particular attention to contexts of technological scarcity.
Developing evidence-based frameworks for AI governance that balance innovation with safety, informed by African legal traditions.
Training the next generation of AI safety researchers through rigorous fellowship programs, mentorship, and collaborative opportunities.
Safe AI Lab Africa addresses one of the most pressing challenges of our time: ensuring that artificial intelligence systems remain safe, beneficial, and aligned with developers' intentions as they become increasingly capable. We recognize that the development of transformative AI technologies will profoundly affect every society around the globe.
Through rigorous scientific research, policy analysis, and community engagement, we work to strengthen institutional capacity across the continent. Our work spans technical AI safety, governance frameworks, upskilling, sociotechnical systems analysis, and the unique challenges of deploying advanced AI in resource-constrained environments such as those found across Africa.
Our research agenda addresses critical questions in AI safety through both theoretical investigation and practical application.
An intensive six-month research fellowship that trains exceptional scholars in technical AI safety, governance, and policy.
Investigating how cultural values, linguistic diversity, and local institutional structures should inform the design of aligned AI systems.
Developing actionable policy recommendations for African governments and institutions based on rigorous analysis of AI risks.
Advancing methods for understanding and explaining AI decision-making processes, with applications to high-stakes domains.
Systematically investigating potential failure modes in advanced AI systems, with a focus on scenarios affecting developing nations.
Translating complex research findings for policymakers, civil society, and the public through workshops and publications.
Whether you're a researcher, policymaker, or concerned citizen, we welcome collaboration and engagement.
Connect with researchers, students, and safety advocates in real time on our community server.
Join Our Discord