AI in Ugandan universities, and what should be done to ensure lecturers assess students properly

AI in Ugandan universities is quickly changing how students learn, research, and complete coursework. When I met Raphael Porro, a first-year nursing student at the Islamic University in Uganda, he openly shared how artificial intelligence tools like ChatGPT have become part of his academic life.
He says he first came across ChatGPT through a friend, and he has not looked back since.
“I often use AI when answering multiple-choice questions. Many of them can be difficult, and going to textbooks takes a lot of time,” Porro explains. “So, I go to AI and simply type in the question, and it provides an answer within a very short time, and I feel very okay.”
His colleague, Nalukooya Sharifah, a law student, sees things differently. She admits she was introduced to AI tools on her very first day at university, yet she still refuses to rely on them for coursework. “I feel it discourages me from thinking,” she says. “As a future lawyer, you have to think and read many sources.”
Here are two students from the same campus with opposite relationships to the same technology. In the gap between Porro and Nalukooya lies the central challenge facing every university in Uganda today.
The debate over AI use in educational institutions often revolves around cheating versus learning, integrity versus convenience. However, several experts argue that such a framing is too narrow, even dishonest.
AI is not something that can be resisted by making test rules stricter or writing plagiarism policies more forcefully. It is infrastructure, the calculator of this generation. The real question is not whether students should use it, but whether our institutions are prepared to teach them how.
Godfrey Kyazze, an education expert and lecturer at African Bible University, makes the point that when combine harvesters replaced hand ploughing, farmers did not stop farming; they farmed differently. When calculators arrived, mathematics education had to adapt to them.
“What teachers are saying today, that AI is a threat,” Kyazze observes, “I want to relate it to what farmers possibly said when tractors came.”
Kyazze is right, but the analogy also carries a warning. A farmer who hands over the plough and stops learning the land loses touch with it; in the same way, a student who outsources every question to an algorithm stops developing the muscle of thought. The danger is not the tool. The danger is dependency without understanding.
Dr. Adam Ali, the academic registrar of Islamic University in Uganda, suggests that institutions of higher learning need to encourage both lecturers and students to accept that AI tools like ChatGPT are not the enemy.
“We find it is very important,” he says, “for our lecturers and students to know that it is not wrong to use AI tools to make themselves effective and efficient.”
He further urges a shift away from the traditional model of the lecturer as the sole custodian of knowledge, toward one that equips students with the skills to progress in a world where information is generated and accessed in real time.
This is not naivety; it is necessity. But it requires more than new school rules. It demands a new way of testing students and a clear understanding of what a university is really for.
If the purpose of a degree is merely to show that a student can read and remember information, then AI has already rendered it useless. But if a degree is to prove that a person can reason amid ambiguity, judge evidence, defend an argument, and solve problems that have no direct answers, then AI has not reduced its value at all. Instead, it has made that purpose even clearer.
A 2025 review published by the U.S. National Center for Biotechnology Information adds another view to this conversation that Ugandan educators cannot afford to ignore. While AI offers genuine advantages in higher education, such as personalized learning and improved communication, it also carries documented risks: loneliness, digital stress, reduced interpersonal skills, and social isolation.
Nalukooya Sharifah may not be able to explain the science behind her discomfort with AI, but she has sensed something true. Thinking is a practice, and like anything unused, it weakens. If we build academic systems that reward students for submitting AI-generated work without ever developing the capacity to evaluate, critique, or extend it, we are not producing graduates; we are producing prompts.
What, then, should Ugandan universities actually do?
First, they must stop denying reality. Pretending that students are not using AI, or banning what cannot be controlled, only costs institutions their credibility. Dr. Ali’s honest acceptance that AI cannot be stopped, but should instead be guided, is a far more useful starting point.
Second, assessment must change. Essays and multiple-choice questions that a chatbot can answer in seconds are no longer adequate measures of student ability. Universities need to invest in formats that are genuinely AI-resistant and demand the kinds of human judgment machines cannot replicate: oral defenses, supervised practical work, interactive projects with documented reasoning, and presentations where students must respond in real time to probing questions.
Third, and most importantly, students must be taught to think alongside AI rather than beneath it. The goal is not to forbid the tool but to ensure that the student who uses it remains the author, capable of questioning the output, identifying its errors, and producing something that comes from a genuinely engaged mind.
The writer is a Ugandan journalist and communication and media specialist exploring media ethics, religion, politics, culture and society.

This has been very legitimate research, and I can say it has helped me, since it captures the views of the academic registrar of the Islamic University in Uganda main campus, and I am a student there too.
I also submit that AI should be a guide to understanding, not a learner’s crutch for faster answers. It should be a starting point for relating and contrasting issues through evaluation before implementation.
I appreciate all the ideas created from here.
CLIS YEAR 2 STUDENT AT IUIU
This is very educative research that speaks to the current confusion at universities about the use of AI.
As a tutor at higher learning institutions, I agree that students should be taught how to use AI so that they can ask the right questions and reason with AI to arrive at the best answers. It is worth noting that AI is GIGO (garbage in, garbage out): most students ask AI the wrong questions and end up with answers that are out of context, so teaching them to reason first before asking AI will go a long way in improving the output they get.
In addition, tutors are still hesitant to use AI to supplement their teaching, yet AI has broader knowledge bases than the content tutors hold. Tutors should therefore also be taught how to use AI to supplement their teaching.
Thank you for the feedback. The discussion about AI should be open, because it is an interesting one.