‘Use AI as a teaching tool rather than a cheating tool,’ says UWI professor
A university lecturer is suggesting that one way to ease the apprehension in academic circles that students will use artificial intelligence (AI) to cheat is to incorporate the technology into the teaching process.
Professor Gunjan Mansingh, senior lecturer and head of the Department of Computing at The University of the West Indies, Mona, made the recommendation during a recent forum on AI and its global impact hosted by the Jamaica Technology and Digital Alliance.
“Right now in academia there is a big concern [about] whether these things can be used for cheating. And my thing is that: How do we use it as a teaching tool rather than a cheating tool? So we have to integrate it in our learning process,” Professor Mansingh said.
“We have to change how academia is currently functioning, we have to change the assessment methods, we have to even change the teaching methods,” Mansingh, who is also deputy chair of the National ICT Advisory Council, told the forum.

Noting that AI, like literacy, impacts every person, Mansingh said that even if individuals use the technology to assist them, there is still a need for a personal knowledge base.
“As humans we have to understand what is being created by these machines. I think the challenge for us right now in the computer science department is to see how to use it in the learning process because people will try to find shortcuts. But the truth is they need to know both. They need to be that subject matter expert and they also need to use this tool and see how it merges together,” she stated.
“For certain other disciplines, I think the assessment methods will have to change; it can’t all be about memory recall kind of things.
“And the students have to be mindful that this is not a search engine, this is generating content — words that belong together — in a predictive way. Now the words that it is picking up as belonging together, do they really make sense in the context of the question you are being asked? You have to be mindful about that,” she cautioned.
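Professor Mansingh's caution, that these systems predict "words that belong together" rather than look facts up in a search index, can be illustrated with a toy sketch. The bigram model below is a hypothetical, deliberately simplified stand-in for a real language model: it only ever emits a word that has followed the previous word somewhere in its training text, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy illustration of predictive text generation (a hypothetical bigram
# model, far simpler than any real AI system): the "model" learns which
# word tends to follow which, then chains predictions together.
corpus = "the model predicts the next word the model does not search the web".split()

# Count, for each word, the words observed to follow it in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, n=6, seed=0):
    """Generate up to n words by repeatedly predicting a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = following.get(words[-1])
        if not options:  # no known continuation; stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is always locally plausible, since every adjacent pair of words really does occur together in the training text, yet nothing checks whether the sentence as a whole makes sense, which is exactly the gap Mansingh urges students to be mindful of.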
Joel Dean, a senior software engineer at Automattic (owner of WordPress.com), commenting on the issue, noted that AI’s ability to generate fictitious details cannot be ignored.
“There are good sides to that and there are bad sides to that. I would say the good side is that currently we have seen a lot of users and people who are pretty excited about generating essays and helping it to do work for them to accelerate the pace at which they work — I am able to generate codes that would take me hours to build. But on the flip side of that we are looking at certain challenges where artificial intelligence is capable of a lot of devious tasks,” Dean said.
“One of the core things we have been using to govern our usage is that AI in its current form is here to augment the human experience, to augment knowledge work — not automate it — and I think this is a big part of the discussion, because a lot of folks think it is here to automate the work that we do,” added Dean.
He told the forum that while the excitement around AI cannot be ignored, there is need for “legislation and policy that we can put in place, from a Government standpoint, to ensure that we control its usage and deployment”.
Trevor Forrest, CEO of 876 Technology Solutions, in his remarks, said, “While AI can make people better at what they do, there has to be some degree of subject matter knowledge”.
He further noted that responsible use of the technology needs to be promoted.
“As we use it, we need to create systems that fact-check the AI so that, in much the same way that ChatGPT created something that could check whether or not something that was written was written by AI, we need to make sure that the information that was put out could be verified,” stated Forrest as he pointed out that the innovation carries implications for media houses as well.
“This has some very serious implications as it relates to information and the media, because the era of social media has created hundreds of thousands of media houses (aka bloggers), and the notion that everybody is a credible source has come into interesting focus. A similar kind of thing can happen with AI, it’s just that you are not seeing the source of the information in front of you,” said Forrest.
Meanwhile, he called on policymakers to involve subject matter experts to help guide their deliberations on the use of AI and the guardrails required.