A new era of trust and safety
Navigating the challenges of AI transformation
The full extent to which artificial intelligence (AI) will transform everyday life is not yet known. The only certainty is that innovation now occurs at the speed of thought, driven by the mantra, “move faster”. In these circumstances, it is not acceptable for society’s governing frameworks to lag behind technological innovation.
Furthermore, the consequences of inaction are already being felt. The spread of misinformation and disinformation has eroded trust in institutions, governments, and digital platforms. Low levels of trust in governments are a significant challenge in many developing countries. According to the United Nations, 70 per cent of the Latin America and the Caribbean (LAC) region’s population distrust their government. This lack of trust will be worsened by AI’s opaque decision-making processes.
The risks of unregulated AI
These concerns are only some of the risks associated with AI, and they underscore the need for a more comprehensive, nuanced approach to governing AI development and deployment.
Yet, regrettably, momentum is building in some quarters for a retreat from regulation, based on the premise that regulation stifles innovation. This is particularly concerning when it comes to content moderation, where Big Tech is shifting responsibility to users. While democratising platform moderation is an attractive aspiration, it presumes a certain level of digital literacy, consensus on values, and trust in institutions, none of which can be taken for granted.
Moreover, the proposal by Meta to rely more on machines for content regulation is unsettling. Our normal human relations are already complicated by language, geography, culture, gender, race, religion, literacy, emotions, and the like. It does not take much contemplation to discern the tension and complexity that would arise if we insert a human-like technological species into this social arrangement.
The intersection of AI and neurotechnology
In addition to these concerns, AI and neurotechnology are advancing, in parallel, at breakneck speed. We must anticipate the profound implications as these technologies intersect. At stake is our mental autonomy, the foundation of our identity as a unique species.
The brain, a complex and vulnerable operating system, must be safeguarded against harmful or involuntary intrusion by neural computation. Yet, current regulatory frameworks and industry practices are ill-equipped to handle this emerging complexity, leaving our cognitive, emotional, and experiential integrity vulnerable. Regulation must include a new jurisprudence of the mind that safeguards the brain — our last bastion of freedom and independence.
A three-pronged solution
To address these challenges, a three-pronged solution is suggested, particularly for global majority countries. Firstly, they must craft moderate regulations that are culturally rooted. This should include user-centric guidelines and standards that promote transparency in AI decision-making processes, as far as practicable.
In conjunction with this, there should be incentives for industry to self-regulate, provided there is transparency, public accountability, and sensitivity to cultural differences. This should be supported by regulatory sandboxes, where innovative solutions can be tested and refined with oversight.
Finally, governments must prioritise investments in digital, media, and information literacy from the earliest stages of the educational system. People must be competent in the essential literacies for today’s digital economy and society — media literacy, information literacy, information and communications technology literacy, and digital literacy.
A call to action for the Caribbean
The Caribbean has been actively engaging in the global conversation on AI governance. At a recent event on the sidelines of the Paris AI Summit, Marsha Caddle, Barbados’s minister of industry, innovation, science, and technology, emphasised the importance of a sustainable data infrastructure to support AI development and the need for use cases to determine AI priorities. This approach ensures that AI development is aligned with the country’s unique needs and challenges. Minister Caddle’s perspective is indicative of the region’s commitment to harnessing AI for economic development, social empowerment, and human well-being.
That commitment is also reflected in the Caribbean AI Initiative, which was launched in 2020 to raise awareness, promote reflection, and stimulate actions to harness the opportunities and address the challenges posed by AI. The resulting Caribbean AI Policy Roadmap, updated in 2023, is a valuable regional resource to guide AI deployment as a tool to break free of the region’s developmental traps.
As we navigate this new era of digital transformation, the Caribbean is presented with an opportunity to be a leader. By prioritising digital literacy, innovation, and regulation, we can create an environment of trust in which public interest AI is harnessed to drive economic development, social empowerment, and overall human well-being. The time for action is now.
Cordel Green is an attorney-at-law and executive director of the Broadcasting Commission of Jamaica. He also serves as vice-chair of the UNESCO Information For All Programme (IFAP) and chair of the IFAP Working Group on Information Accessibility. Email: info@broadcom.org