The Nigerian Religious Coalition on Artificial Intelligence (NRCAI) has called for responsible development and use of artificial intelligence in Nigeria, stressing the need for ethical standards that reflect the country’s moral and religious values.
The call was made during a training programme for journalists on effective reporting of the coalition’s position on reducing the risks associated with artificial general intelligence.
The training, organised by the Christian Council of Nigeria (CCN) and Jama'atu Nasril Islam (JNI), was also designed to strengthen collaboration between religious bodies and the media in shaping Nigeria's AI policy framework. Participants were encouraged to publicise faith-based AI education materials and amplify credible voices on responsible technology use, particularly to help guide what children consume online.
Speaking at the event, Rt. Rev. Dr. Evans Onyemara, Secretary of the Christian Council of Nigeria, said the training was aimed at ensuring that media reports on artificial intelligence align with the moral perspectives of faith communities. "It is key that whatever information [is] reported as journalists reflect[s] what we, as religious people, are saying concerning the issue at hand," he said, adding that accurate reporting was essential to prevent distortion or misrepresentation of religious concerns about AI.
NRCAI said its focus is on responsible AI development as outlined in the National Artificial Intelligence Strategy (NAIS) 2024, with the objective of building public trust and establishing ethical frameworks. The coalition noted that religious leaders have a duty to provide moral guidance on the direction of AI in Nigeria. "It is highly necessary and important that religious leaders provide wise, moral leadership about the direction of AI in Nigeria," the group stated.
Outlining its core priorities, NRCAI proposed the creation of a well-respected and diverse AI Ethics Expert Group to provide independent oversight. “There should be a group that will be checking whatever is being produced by AI to ensure that our values and our values system are not being sidelined,” the coalition said. It also called for a clear set of ethical principles addressing fairness, transparency, accountability, privacy and human well-being, as well as a standardised assessment process for evaluating AI projects.
Delivering a keynote session, the Very Rev. Kolade Fadahunsi, Director of the CCN Institute of Church and Society, Ibadan, spoke on the National Artificial Intelligence Strategy 2024, with emphasis on its relevance to religion.
“We must be at the table. This strategy is becoming a case of ‘nothing for us without us’,” he said, warning that young Nigerians were becoming overly dependent on technology. “They press their phones everywhere, even in churches. When you give assignments, they go to ChatGPT and copy answers. You think your child is learning, but they are not.”
Fadahunsi cautioned that unchecked AI use could undermine learning and public trust in technology. “If this continues to spread in Nigeria, it will have serious consequences,” he said. He added that Nigeria risked becoming a consumer of foreign technology without contributing to its development. “We jump on things because of entertainment, and then we become the market, buying what we did not produce or contribute to.”
He further argued that a comprehensive AI ethics assessment framework would help address moral concerns before deployment. “The framework will assess the moral implications of AI technologies throughout their life cycles, not just in the middle of the game when goals are already achieved,” he said, citing rising cases of fake content and loss of originality in academic work as evidence of misuse.
International journalist Ms. Vanessa Adie trained participants on the practical use of AI tools in journalism, urging reporters to combine AI assistance with human judgement.
“After you gather information from AI, review it carefully and give it to an expert to read,” she advised, stressing the importance of fact-checking and humanising AI-generated content to avoid misinformation.
Veteran journalist Tope Oluwaleye also addressed participants on the basics of reporting religion and AI, warning against assumptions and bias. “You cannot afford to miss details when the facts are there and can be verified,” he said. He added that journalists must understand religious traditions before reporting on them. “If you don’t know, ask the experts before you publish. Avoid assumptions and be aware of your own biases,” he warned.
The organisers said the training marked a step toward building a responsible media culture around AI in Nigeria, anchored on ethical values and professional standards. They expressed hope that continued engagement with journalists would promote balanced reporting and support the development of an AI framework that respects both technological progress and religious sensitivities.
NRCAI reiterated that its position is anchored on responsible artificial intelligence development under Pillar 4 of the National Artificial Intelligence Strategy (NAIS) 2024, with the aim of building public trust through strong ethical safeguards. Its proposed ethics framework combines the independent and diverse AI Ethics Expert Group, clear ethical principles covering fairness, transparency, accountability, privacy and human well-being, and a standardised assessment tool to ensure that all AI projects comply with these principles.
NRCAI added that religious leaders must play a central moral leadership role in shaping the direction of AI in Nigeria, while faith communities should be mobilised to engage emerging technologies through structured dialogue and information-sharing mechanisms. The coalition further disclosed plans to collaborate with the National Information Technology Development Agency (NITDA) under sections 4.1.1 to 4.1.3 of the National AI Strategy to support the creation of a high-level national AI ethics body or commission made up of diverse and inclusive stakeholders.
