AI-Driven Errors in Nigerian Banking: Legal Accountability and Data Subject Rights Under the Nigeria Data Protection Act, 2023

INTRODUCTION

Nigerian banks are rapidly adopting Artificial Intelligence (AI) and automation to improve their efficiency and services. While these technologies offer benefits, they also bring risks, such as systemic errors and data privacy concerns, especially under the Nigeria Data Protection Act (NDPA) 2023. AI mistakes, potentially caused by biased data or flawed design, can lead to financial losses, customer harm, and legal penalties. For example, miscalculated account statements can inflict real financial and reputational harm on customers. In Bonje v. Guaranty Trust Bank Plc (Unreported),[1] the High Court of Lagos State, per Honourable Justice O. A. Oresanya, in a judgment delivered on 18th September, 2025, underscored that distorted customer data violates the NDPA and infringes the data subject’s right to privacy. Likewise, the Federal High Court’s recent Domino’s Pizza decision[2] confirms that data breaches or errors violate consumers’ privacy under the NDPA and attract damages. These cases signal that automated banking errors may attract both common-law and statutory liability.

This article explores how the NDPA 2023 governs AI and the use of automation in banking, focusing on legal accountability for errors and the rights of individuals whose data is used. We will also examine NDPA’s specific rules for AI, and strategies for banks to manage these risks responsibly. Furthermore, we will interrogate how Nigerian courts are likely to allocate liability for AI and automation failures under tort law and the NDPA 2023. This analysis is vital for the banking sector and legal professionals navigating AI integration and compliance in Nigeria.

 

1. THE RISE OF AI IN BANKING

Nigeria’s financial sector is undergoing a profound metamorphosis, transitioning rapidly from traditional branch-based operations towards a digitally-driven ecosystem.[3] This transformation is significantly accelerated by the adoption of Artificial Intelligence, which banks are increasingly deploying across various functions to gain a competitive edge and meet evolving customer demands. The integration of AI is no longer a futuristic concept but a present-day reality, becoming progressively entrenched in the core operations of leading Nigerian financial institutions.[4] Research indicates that banks view AI investment as a key strategy to improve return on equity and overall financial performance, signalling a strong commitment to leveraging these advanced technologies.[5]

Several key applications of AI and automation are already making a tangible impact within the sector. For instance, customer interaction is being revolutionized through AI-powered chatbots and virtual assistants, exemplified by platforms reportedly used by institutions like United Bank for Africa (UBA) and Zenith Bank.[6][7] These tools handle customer inquiries, provide support, and facilitate basic transactions 24/7, enhancing accessibility and reducing operational loads on human staff. Beyond customer service, AI and automation are streamlining back-office functions through Robotic Process Automation (RPA), automating repetitive tasks and improving overall efficiency.[8]

Perhaps the most critical applications lie in risk management and decision-making. AI algorithms are increasingly employed for sophisticated credit scoring, enabling faster and potentially more data-driven loan origination processes.[9] Simultaneously, AI plays a crucial role in bolstering security through real-time fraud detection and prevention systems.[10] By analyzing vast datasets and identifying anomalous patterns indicative of illicit activities, AI assists banks in meeting stringent Anti-Money Laundering (AML) and Know Your Customer (KYC) compliance requirements.[11] Furthermore, AI facilitates personalized banking experiences, analyzing customer data to offer tailored product recommendations, targeted marketing, and customized financial advice, sometimes delivered via robo-advisers.[12]

The rationales behind this widespread adoption are multifaceted. Nigerian banks are harnessing AI to achieve significant operational efficiencies, reduce costs, and improve the speed and accuracy of critical processes like credit analysis.[13] Enhancing the customer experience through seamless digital interactions and personalized services is another major objective. Competitive pressures within the dynamic fintech environment also compel banks to innovate continuously. Moreover, AI is seen as a vital tool in advancing national goals such as financial inclusion, with organizations like the Shared Agent Network Expansion Facilities (SANEF) exploring AI to bridge gaps in financial access by enhancing efficiency and personalizing services for underserved populations.[14] As AI continues to mature, its integration into the Nigerian banking system is set to deepen, further reshaping service delivery, risk management paradigms, and the overall structure of the financial sector.

It is against this backdrop of deep operational reliance on AI and automation that questions of legal responsibility, foreseeability of harm, and regulatory compliance become unavoidable.

2. AI SYSTEM FAILURES AND REAL-WORLD CONSEQUENCES

While the integration of AI into the Nigerian banking system heralds significant advancements, it simultaneously introduces complex vulnerabilities and the potential for substantial harm when systems falter. AI failures are not merely theoretical risks; they can manifest in tangible ways with severe real-world consequences for financial institutions, their customers, and the broader economy. Understanding the nature and impact of these failures is crucial for effective risk management and regulatory compliance.

2.1. ALGORITHMIC BIAS, THE “BLACK BOX” AND DATA QUALITY

Beyond formal rules, AI brings qualitative risks with legal dimensions. Algorithmic bias, where models encode stereotypes, can produce discriminatory outcomes (e.g., systematically denying credit to certain groups). Such bias could violate the NDPA’s fairness and purpose-limitation principles (and potentially anti-discrimination laws), giving harmed customers a legal claim. Likewise, the “black box” problem (AI opacity) undermines accountability: if a model’s reasoning is inscrutable, a customer who is wronged may struggle to prove fault. Nigerian courts have not yet faced this scenario, but it could be argued that opacity exacerbates negligence: a prudent actor must understand its tools. Finally, poor data quality (incomplete, outdated, or skewed training data) can lead the bank to breach its duty of care. For example, if an AI credit-scoring tool routinely rejects low-income applicants because its training data lacked representation of them, affected customers could claim arbitrary and unfair decision-making.

These technical issues feed back into legal standards. The NDPA imposes data accuracy and purpose-limitation principles; violations can lead to liability. In one industry study, AI systems amplified existing social biases, penalizing entire sectors (e.g., lower scores for women-owned businesses despite identical fundamentals). When such errors become systematic, victims may demand redress. Courts have begun to acknowledge these risks, and commentators note that Nigeria’s new law implicitly recognizes the need to govern AI’s hidden biases. Banks should therefore implement algorithmic auditing and “explainable AI” mechanisms. Not only does this reduce legal risk, but it also aids compliance with data-subject rights (enabling contestation of automated decisions).
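The algorithmic auditing recommended above can be illustrated in code. The following is a minimal, illustrative sketch of a disparate-impact check for a credit-scoring model, assuming the common “four-fifths” rule of thumb as a flagging threshold and hypothetical decision data; neither the threshold nor the group labels are drawn from the NDPA or NDPC guidance.

```python
# Illustrative only: a minimal fairness audit for a credit-scoring model.
# The 0.8 (four-fifths) threshold and the sample data are assumptions
# made for this sketch, not requirements of the NDPA.

def approval_rate(decisions):
    """Fraction of approvals; `decisions` is a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one.

    A common rule of thumb treats ratios below 0.8 as a signal of
    potentially discriminatory outcomes that warrant investigation.
    """
    low, high = sorted([approval_rate(group_a), approval_rate(group_b)])
    return low / high if high else 1.0

# Hypothetical audit data: loan outcomes split by a protected attribute.
group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

ratio = disparate_impact_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Flag for human review: disparate impact ratio {ratio:.2f}")
```

An audit of this kind, run periodically and documented, is one way a bank could evidence the “reasonable care” and contestability measures discussed in this section.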

2.2. AI AND AUTOMATION FAILURES: TORTS AND NEGLIGENCE

Banks deploying AI and automation in Nigeria face familiar legal frameworks. Under common law, liability for harm turns on negligence or strict duty rules. As in Donoghue v Stevenson[15] (the “neighbour” case), a duty of care is owed to those foreseeably harmed by one’s actions. It could be argued that a bank deploying an automated system owes a duty to avoid reasonably foreseeable harm to its customers or even third parties. Whether a court will recognize a novel “duty to ensure safe AI” is unsettled, but precedents on product liability and negligence offer analogies. For example, if an AI-driven credit tool wrongfully denies a loan, the bank (or vendor) could be negligent for failing to test or monitor it properly. In Union Bank v Ogbonna (Unreported),[16] the National Industrial Court of Nigeria, in 2023, held the bank negligent for failing to provide safe working equipment, because an employer owes a duty to take reasonable care of those it employs.

By analogy, one might expect a bank to have a duty to take reasonable care that its automated systems do not inflict harm. Thus, courts would likely apply the familiar “duty, breach, causation, damage” test to AI failures. It remains to be seen whether Nigerian courts will treat AI mistakes as they would traditional negligence cases, but the neighbour principle set out in Donoghue v. Stevenson and affirmed by the Supreme Court in a plethora of cases[17] suggests they should. Moreover, if an AI error causes direct financial loss to a customer, a claim in negligence could proceed much as if a teller had erred.

Criminal or injunctive sanctions are also possible. In one US case,[18] lawyers were sanctioned for submitting a ChatGPT-generated brief with bogus citations: the judge fined them and required notification to the affected judges. This underscores that courts will not tolerate “automation” as a complete defence: legal actors are responsible for supervising AI. By parity, a Nigerian court might similarly penalize officers who blindly rely on an AI without due oversight.

2.3. VENDOR AND VICARIOUS LIABILITY

Banks often rely on third-party vendors for AI and automation. Who bears liability when those systems err? If a vendor acts as an employee or agent, the bank may be vicariously liable for the vendor’s torts (just as employers are liable for employees’ negligence). Indeed, commentators note that deployers can be treated as vicariously liable for their AI “agents.” For example, if an outsourced robo-advisor misleads a customer, the bank might be held liable as if the AI were its employee. However, most vendors are independent contractors, and ordinarily a principal is not liable for an independent contractor’s negligence unless the principal retained control. In practice, Nigerian banks should assume responsibility: Section 29(1) of the NDPA requires controllers to put technical and organizational measures in place when using processors. Likewise, Section 29(2) of the NDPA mandates a Data Processing Agreement: any bank that “engages a data processor” must have a written contract ensuring the processor complies with the law. A well-drafted contract can allocate liability, for instance, by obliging the vendor to indemnify the bank for breaches. Failure to enter into such agreements carries regulatory risk: the NDPC may fine or sanction controllers that neglect processor controls.

It is yet to be litigated whether Nigerian courts will carve out special rules for AI vendors. But one can analogize existing law. Under the Consumer Protection Act and product liability standards, a manufacturer of a defective product can be held strictly liable even where negligence is absent. Liability in such a case does not depend on proof of fault, carelessness, or breach of a duty of care. Once it is established that the product is defective, that the defect existed at the time it was placed into circulation, and that the defect caused the injury or loss complained of, strict liability may arise notwithstanding the manufacturer’s compliance with industry standards. An AI system embedded in a bank’s services may be deemed a “product.” It could be argued that a vendor who supplies a defective or harmful algorithm owes a duty to end-users under these principles. Nigerian courts have not yet decided an AI case, but they have enforced strict duties in other contexts. In any event, the deploying bank cannot escape oversight, as the NDPA makes the controller ultimately accountable for data handling (even if a processor misbehaves).

3. DATA PROTECTION AND AUTOMATED DECISIONS (NDPA 2023)

The Nigeria Data Protection Act 2023 (NDPA) explicitly governs automation in data processing. Among other things, Section 37 of the NDPA guarantees that individuals “have the right not to be subject to a decision that is solely based on automated processing of personal data… which produces legal or similarly significant effects.”

In practice, this means a bank cannot use an AI/automation tool to make a high-stakes decision (like loan approval/denial) with zero human oversight, unless an exception applies. (For example, fully automated processing is allowed if it is necessary to perform a contract with the individual, or if explicit consent was given.) Even when automation is permitted, NDPA requires “human involvement, the opportunity to present [one’s] point of view, and the right to contest” the AI decision. In short, NDPA enshrines a “right to a human in the loop.”

In practical terms, banks must design systems so that automated decisions can be reviewed by people. Judges will likely interpret Section 37 of the NDPA with reference to traditional duty rules. For instance, by analogy to Donoghue’s case, a court could view a wholly automated process as akin to a manufacturer’s defect: if it foreseeably harms others, liability follows. It could be argued that maintaining an adverse automated outcome after a customer contests it (e.g., persisting in a wrongful loan denial) is itself a wrongful act independent of contract, opening the door to tort. Nigerian law also recognizes statutory privacy guarantees: the Federal High Court has already hinted that exposing personal data can violate the constitutional right to privacy (Section 37 of the Constitution) as well as NDPA duties. In Odunola v. Vesti (filed 2025),[19] a customer claims that a fintech’s public disclosure of her withdrawal history violated her constitutional privacy and Sections 24(1)(a) and 30 (confidentiality provisions) of the NDPA. If the court upholds these rights, it would reinforce strict data duties: Section 24(1)(a) of the NDPA indeed requires controllers to implement safeguards ensuring accuracy, integrity, and confidentiality.

Further, Part VI of the NDPA confers other relevant rights. For example, Section 34(1)(a)(viii) mandates that data subjects be informed before collection of “the existence of automated decision-making, including profiling, the significance and envisaged consequences” for them. This “right to notice” is designed to prevent hidden AI harm. The NDPA also gives individuals the right to withdraw consent, to access or correct their data, and ultimately to seek redress at the Data Protection Commission or courts if rights are violated. In Polaris Bank v Olatokun (Unreported), the High Court of Lagos State, per Honourable Justice Y. A. Adesanya, in a judgment delivered on 5th December, 2024,[20] held that continuing to send marketing emails after a customer’s request violated NDPA privacy rights, a sign that courts will enforce the Act vigorously. By analogy, if an AI system processes data in breach of NDPA principles, victims could claim damages or injunctions.

4. PRIVACY BY DESIGN, DPIAS, AND HIGH-RISK PROCESSING

The NDPA embraces modern data-protection principles. Controllers owe a statutory “duty of care” and must integrate privacy at every stage. Notably, Section 28 requires a Data Privacy Impact Assessment (DPIA) before any processing likely to pose a high risk to individuals. The NDPC’s GAID (2025) clarifies that high-risk triggers include “automated decision-making with legal or similar significant effects.” Thus, before deploying an AI model for loan decisions or fraud detection, a bank must perform a DPIA and report it to the regulator. The DPIA must “contain measures which guarantee privacy by design and by default.”

These requirements tie into tort law as well. Compliance with Section 28 can be seen as the benchmark of reasonable care. If a bank skips the DPIA or ignores its findings and harm ensues, a court might consider that it has breached its duty of care. Conversely, documenting a DPIA (and following it) can help the bank defend itself by showing it took precautions. The NDPA’s design obligations reinforce legal accountability: an AI system built without due regard to privacy, bias, or security may expose the bank to liability.
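The pre-deployment discipline described in this section can be expressed as a simple gate. The following is a minimal, illustrative sketch in which deployment is blocked until a DPIA is on file for high-risk processing; the trigger list paraphrases the GAID example cited above, and the feature labels and function names are assumptions for the sketch.

```python
# Illustrative pre-deployment gate: hold back rollout of a processing
# activity until a DPIA exists for high-risk uses. The trigger strings
# and API are hypothetical, not drawn from the NDPA text.

HIGH_RISK_TRIGGERS = {
    "automated_decision_legal_effect",  # e.g. AI loan approval/denial
    "large_scale_profiling",
    "processing_of_sensitive_data",
}

def dpia_required(processing_features: set[str]) -> bool:
    """True if any declared feature matches a high-risk trigger."""
    return bool(processing_features & HIGH_RISK_TRIGGERS)

def may_deploy(processing_features: set[str], dpia_completed: bool) -> bool:
    """Deployment proceeds only if no DPIA is needed or one is on file."""
    return (not dpia_required(processing_features)) or dpia_completed

# A credit-scoring model making fully automated decisions:
features = {"automated_decision_legal_effect"}
print(may_deploy(features, dpia_completed=False))  # blocked until DPIA done
```

A gate like this also generates the documentation trail that, as noted above, can help a bank show it took precautions if its conduct is later tested against the duty of care.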

 

CONCLUSION

The rise of AI and automation in the Nigerian banking system is checked by existing legal frameworks. Tort principles (duty of care, negligence, and product liability) apply to technology-induced harm and may lead to lawsuits if banks or their vendors fail to act responsibly. Meanwhile, the NDPA 2023 provides concrete obligations: restricting purely automated decisions without human review, requiring DPIAs and embedding privacy-by-design, and safeguarding data subject rights (notice, access, correction, and objection).

Nigerian courts have already begun enforcing these rules (as in the Polaris Bank case), and more cases will emerge. It could be argued that a bank’s liability for an AI error should mirror its liability for any negligent system it adopts. After all, Donoghue’s neighbour is now digital. It remains to be seen exactly how courts will adjudicate novel AI disputes in banking, but lawyers and bankers alike should treat automation with the same caution as any other potentially hazardous tool.







Footnotes

[1] view link

[2] view link 

[3] Proshare (Date Unknown). Artificial Intelligence Revolutionising Nigeria’s Banking Sector. Available at: view link

[4] Southern African Times (Apr 28, 2025). Nigeria’s Banking and Financial Services Sector in 2030. Available at: view link

[5] International Academic Conference on Information Systems (2024). Effect of artificial intelligence investments on the performance of deposit money banks in Nigeria. Available at: view link

[6] Ibid

[7] ResearchGate (Dec 6, 2024). Chatbots and Virtual Assistants: AI Innovations in Nigerian Banking Services. Available at: view link

[8] ResearchGate (Dec 23, 2024). Effect Of Artificial Intelligence (AI) On Fraud Detection In Deposits Money Banks In South East Nigeria. Available at: view link

[9] NDIC Quarterly (2021). Artificial Intelligence in Risk Management and Financial Stability. Available at: view link  also available at AKSU Journal of Arts & Cogent Social Sciences (PDF, Date Unknown). Integration of Artificial Intelligence Applications for Financial Process Innovation by Commercial Banks in Nigeria. Available at: view link

[10] Central Bank of Nigeria (2024). Risk-Based Cybersecurity Framework and Guidelines. Available at: view link  see also Oracle Nigeria (Aug 28, 2024). Anti–Money Laundering AI Explained. Available at: view link

[11] Ibid

[12] Ibid

[13] Scalefocus Blog (Sep 26, 2024). AI in the Banking Sector: Risks and Challenges. Available at: view link

[14] Ibid

[15] (1932) A. C. 562 at 580, per Lord Atkin.

[16]  Ngozi Ogbonna v Union Bank of Nigeria Plc (National Industrial Court of Nigeria, NICN/ABJ/241/2020, unreported).

[17] Nig. Bottling Co. Ltd. v Ngonadi (1985) 1 NWLR (Pt. 4) 739.

[18] Mata v. Avianca, Inc., view link.

[19]view link 

[20] view link

