Deepfake victims could find relief in Cybercrimes Act
As Jamaican lawmakers ponder the possibility of regulation or legal sanction for people who use artificial intelligence (AI)-generated content to create mischief, a senior prosecutor has offered that there is room for relief in the Cybercrimes Act.
Deputy Director of Public Prosecutions Andrea Martin-Swaby pointed to one of the provisions in the Act when the Jamaica Observer asked for a response to the views expressed by parliamentarians on the issue last Sunday.
Members of Parliament (MPs) Dr Christopher Tufton, Fitz Jackson, and Julian Robinson had pointed to the growing threat to the democratic process posed by deepfakes and AI-generated content.
The three MPs broadly agreed that the island’s political parties need to reach a consensus on the potential abuse of the technology. At the same time, their views varied on the nature of any sanction to deal with the issue.
Dr Tufton had suggested that regulation is urgently needed as the problem “will only get more intense, either from the local government election but also a pending national election going forward over the next year and a half”.
Robinson agreed, but said any move at regulation would need to “balance freedom of expression versus where somebody is clearly putting out something that is fake news, and those things can have great consequences in an environment where news travels at lightning speed”.
However, Jackson was more direct on the need for legislation, arguing that, “The technology without the appropriate regulations [and] safeguards can be very dangerous, and misrepresentation that can tarnish and damage persons’ reputation and well-being is something that ought not to be taken lightly.”
Martin-Swaby, who heads the Cybercrime Unit in the Office of the Director of Public Prosecutions, told the Sunday Observer that there is no criminal liability for disseminating material which constitutes deepfake or AI-generated misrepresentation of facts.
“However, where such material is published and causes damage, for example, where it is defamatory, civil remedies could be pursued within the courts,” she explained.
“We believe that criminal liability may only arise if the material fits within the parameters of Section 9 of the Cybercrimes Act, where it is obscene, threatening in nature, and sent with a view to cause harm. If it doesn’t fall within such a category, civil remedies would have to be pursued where any damage is caused by the dissemination. The treatment of such material in general may be similar to the treatment of material which constitutes fake news,” Martin-Swaby added.
While the use of deepfake content is not widespread in Jamaican politics, its impact is being felt in other jurisdictions, raising fears about its ability to sway voters, especially given that 2024 is a busy election year.
Two weeks ago, Agence France-Presse (AFP) reported that in the United States the Federal Communications Commission (FCC) declared illegal scam “robocalls” made using AI-generated voices.
The phenomenon gained attention last month when a robocall impersonation of US President Joe Biden urged people to not cast ballots in the New Hampshire primary.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” AFP quoted FCC chairwoman Jessica Rosenworcel as saying in a news release. She added that state attorneys general “will now have new tools to crack down on these scams”.
The FCC unanimously ruled that AI-generated voices are “artificial” and thus violate the Telephone Consumer Protection Act (TCPA).
The TCPA is the primary law the FCC uses to curb junk calls, restricting telemarketing calls and the use of automated dialling systems.
The ruling makes voice cloning used in robocall scams illegal, allowing those behind such operations to be prosecuted, according to the FCC.