Deep-fake warning
Researcher urges ban on use of generative AI in political ads
Artificial intelligence (AI) researcher Matthew Stone is urging a ban on the use of generative AI in political ads for the upcoming general election while warning of the irreparable damage that could result from the use of deep-fakes.
Stone, who is CEO of Stone Technologies Limited — a robotics and AI company — and president of the Jamaica Artificial Intelligence Association, made the recommendation during a recent episode of virtual talk show Heart to Heart.
He told host Tyrell Morgan that given the difficulties in tracing the source of AI-generated fake images or videos (deep-fakes), deterrence is the best approach.
“Sometimes it’s hard to even know who started it…so you can’t even punish the people who started it; it could start from the dark web, which makes it a lot harder to trace the source. Anybody could have posted it, reshared it; the laws should, as much as possible, deter people from that.
“So one of the suggestions I make, especially when it comes to the use of political ads, [is] ban the use of generative AI for that,” Stone told Morgan, adding that, if left unchecked, political rivals could use the technology against each other at the polls, with lingering negative effects.
“Generative AI is the ability for AI to create new information, so we are not talking about taking information from people…with generative AI technology it could literally create a fake video of, for example, the prime minister or the Opposition leader saying something they didn’t actually say.
“So a political opponent could literally use an AI-generated video, accuse the person of doing something, create the video of them doing it with AI and it will look exactly — you can’t tell the difference — and even if the person denies it and it comes out that it wasn’t actually the person, the fact is that it’s out there, it’s on social media. People are going to share it and the narrative has already been played out and things like that can definitely swing an election,” Stone said.
Noting that the tool has been used in electoral run-offs in other countries such as the United States, Stone said, “I’m sure if the laws don’t exist right now to ban the use of that, a political party or a representative of a political party is going to use generative AI at some point in the upcoming elections, I am almost sure”.
In the meantime, Stone said he is wary of the even more surreptitious ways that AI can be used by the parties contesting the upcoming parliamentary polls which are constitutionally due by this September.
“In 2015 there was a company called Cambridge Analytica, a data mining company… which essentially gathered a lot of information from Facebook and was able to utilise this technology to figure out who to target certain political advertisements at. There were many claims as to this having a large impact on the result of the elections. I am not going to say this is what caused whichever party to win, but it definitely is a very powerful tool and it’s definitely something that we must look out for even in our own elections,” added Stone.
Last February, the Government, in a statement issued on its official Jamaica House website, said, “The emergence of AI-generated content, particularly deep-fakes, has been recognised as a significant threat that could undermine the democratic process, especially as the country approaches crucial electoral milestones”.
Senator Dr Dana Morris Dixon, minister with responsibility for skills and digital transformation in the Office of the Prime Minister, commented: “Central to the Administration is our commitment to preserving democracy and the democratic process. The Government understands the critical importance of maintaining trust and transparency in our electoral system.”
The statement went on to note that the Government was actively taking steps to formulate a comprehensive response to the “emerging threat”, pointing to the establishment, in July 2023, of the National AI Task Force, comprising experts and stakeholders from the private sector, academia, civil society, and the public sector.
That task force was mandated to explore effective strategies for mitigating the risks posed by AI and deep-fakes while at the same time safely unlocking opportunities.