
Nigeria is on the verge of redefining how artificial intelligence is governed on the continent, as policymakers move closer to approving the National Digital Economy and E-Governance Bill. Expected to pass by March 2026, the legislation would make Nigeria one of the first African countries to introduce binding, enforceable AI regulations, shifting the conversation from strategy documents to hard law.
At the heart of the framework is an expanded mandate for the National Information Technology Development Agency (NITDA), which would gain formal authority over AI systems, algorithms, and data governance. The bill adopts a risk-based regulatory approach, placing stronger obligations on AI deployed in sensitive areas such as finance, surveillance, healthcare, and public administration. Under the proposal, developers of high-risk AI systems would face mandatory annual audits, while all AI providers would be required to obtain licences before deployment.
Enforcement is a defining feature of the proposed rules. Regulators would have the power to demand technical information, issue compliance directives, and block AI systems deemed unsafe or unethical. Financial penalties are designed to bite, with fines of up to ₦10 million or 2 percent of annual revenue, signalling that non-compliance would carry real commercial consequences rather than symbolic warnings.
Despite its firm tone, the bill attempts to strike a balance between control and creativity. Provisions for regulatory sandboxes are intended to give startups and researchers room to experiment under supervised conditions, reflecting a recognition that over-regulation could stifle innovation. The goal, according to policymakers, is not to slow AI adoption but to guide it toward responsible and transparent use.
If passed, Nigeria’s move would place it ahead of peers such as Egypt, Benin, and Mauritius, which currently rely on AI strategies without binding legal force. Beyond national borders, the bill could serve as a continental reference point for AI governance in Africa. As NITDA Director-General Kashifu Abdullahi has noted, regulation may not outrun innovation, but it shapes behaviour—setting the rules that determine whether AI becomes a tool for broad societal good or unchecked risk.