Official wants protection for potential victims of AI scamming
A senior technology official is calling for safeguards for individuals negatively affected by the growing use of AI-generated text, audio, images and videos, known as deep fakes.
President of the Jamaica Technology and Digital Alliance (JTDA), Adrian Dunkley, wants Jamaica to put legal guardrails in place soon.
“They need to expand the scope of the regulations because if a scammer is using an AI tool to scam, it’s the same crime effectively. The difference is the tool that is utilised; but we may not have specific laws in place that speak to using generative AI to impersonate someone’s voice and automatically scam them,” said Dunkley, who is also founder and CEO of StarApple AI, speaking at a forum on AI and its global impact hosted by the Alliance in early July.
Dunkley said that while there may be legislation to penalise individuals who scam others by impersonating a contact, there is now a need to address situations where AI technology is used to commit the same offence.
“You may have [laws which address] when you call a person [and] impersonate someone else as a human and you scam them out of that. So, we need to expand it and look more at the effect and then tie it to the cause.
“AI is just a tool. There is a big situation going on with deep fakes being used to harass women abroad, influencers, and none of them have been able to get any form of recourse because the laws don’t speak to using deep fakes to harass women; but it’s still harassment, it’s still preying on them,” he said.
Emphasising the point, Dunkley said: “It’s the same bad actors; they just have a new tool. They are stealing money, stealing assets, stealing people, taking away some form of freedom. They might say it’s just a fake image, but it is still representing you in a bad light. So, I think that’s something they really need to look at.”
Professor Gunjan Mansingh, senior lecturer and head of the Department of Computing at The University of the West Indies, Mona, commenting on the issue during the forum, said the media should exercise caution in the current open environment.
“I think media is also very concerned about generative AI and its capabilities, and I think they have to play a bigger role, because social media should remain for social purposes, but when it comes to real news it should be with the media houses. They should take responsibility for what they are putting out, double-checking, triple-checking if it’s AI-generated or not AI-generated. So those are some of the things media houses will have to play a bigger role with, and I think they are mindful of that,” she said.
Joel Dean, senior software engineer at Automattic (owner of WordPress.com), said countries that failed to develop and release policies alongside the technology as it was being deployed are now playing catch-up because of how quickly the technology is moving.
“The capability of multi-modal models will soon be here, where these models will not only be consuming text but also voice and video and images. So, the models will be able to see. They will watch YouTube, for example, to get relevant information based on current trends. We’re hoping that the creators of these models and the policies of governments will put guardrails in place,” he said.
Last Tuesday, Police Commissioner Major General Antony Anderson said legislation should be crafted to address digital public mischief.
“We’re witnessing a growing trend where individuals exploit social media to create fear, spread chaos and disorder,” Anderson said while speaking at a press briefing hosted by the constabulary.
“As the digital environment evolves, we’re increasingly seeing the need for some specific legislation that defines new offences within the context of social media, artificial intelligence, deep fakes and wider use of the cyber space,” the police commissioner said.
“These offences would range from digital public mischief, to more serious and damaging violations,” he added.