Managing risks associated with the use of artificial intelligence
MANY organisations now use artificial intelligence (AI) to enhance product and service delivery. One common example is the use of natural language processing, which allows chatbots to respond to customer queries without human assistance.
AI technology comes with a unique set of risks, however. In March 2016 Microsoft tested a new chatbot named Tay on Twitter. Within a few hours of being launched, Tay began tweeting highly abusive and offensive comments, including racist and anti-Semitic ones. Needless to say, Microsoft had to suspend the account. In another example, Amazon had to update its Alexa voice assistant after it challenged a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.
If your organisation uses a chatbot to deliver services to customers, or uses AI to sift through data or adjudicate on other matters, there are legal and reputational risks you should consider safeguarding against.
Unlike traditional programming, in which a human tells the computer what to do, AI uses machine learning and other techniques to create its own set of rules. AI thereby mimics human cognition, performing tasks usually done by people, including perceiving, analysing and processing data to make informed decisions. AI is now being used to, among other things, make medical diagnoses, predict the outcome of tax liability cases, and review contracts.
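For readers curious about the distinction, here is a minimal sketch in Python contrasting a rule written by a human with a rule a machine learns from past examples. The loan-approval scenario, the figures and the use of the scikit-learn library are illustrative assumptions, not drawn from any particular system.

```python
# Illustrative sketch only: a hand-written rule versus a learned rule.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: a human states the rule explicitly.
def approve_loan_by_rule(income):
    return income > 50_000

# Machine learning: the model infers its own rule from past examples.
past_incomes = [[20_000], [35_000], [60_000], [80_000]]   # hypothetical training data
past_outcomes = [0, 0, 1, 1]                              # 0 = declined, 1 = approved
model = DecisionTreeClassifier().fit(past_incomes, past_outcomes)

print(approve_loan_by_rule(55_000))    # True: rule written by a human
print(model.predict([[55_000]]))       # [1]: rule derived from the data
```

The practical consequence for risk management is that the learned rule is only as sound as the data it was trained on, which is why the monitoring discussed below matters.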
Since AI is not recognised as a separate legal entity, the actions taken by AI (although sometimes unforeseeable) may make your organisation liable. In the absence of legislation specifically governing AI in Jamaica, here are a few things you should consider in addressing AI-related liability.
Transparency and Human Oversight
The first mechanism for mitigating risk is to make adequate disclosures. If AI is being used in aspects of a business relevant to customers and other stakeholders, they should be informed that this is the case. Although Jamaica has no equivalent rules, guidelines emanating from the European Union require documentation and record keeping, transparency and the provision of information to users, and human oversight.
Behind the scenes, businesses should ensure that AI systems are continually tested so that malfunctions are addressed as they arise. The swift action by Microsoft and Amazon in the examples above may have saved them millions of dollars in damages from potential civil claims. There are also several well-known incidents in which AI produced biased results because of the data used to train it. It is imperative that AI is monitored to avoid potential discrimination claims and civil liability.
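By way of illustration only, one simple monitoring check is to compare outcome rates across groups and flag large gaps for review. The sample data, group labels, function names and 20-percentage-point threshold below are hypothetical assumptions, not a legal standard.

```python
# Hedged sketch of a basic bias-monitoring check.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += int(outcome)
    return {g: approved[g] / totals[g] for g in totals}

def needs_review(rates, max_gap=0.20):
    """Flag if approval rates across groups differ by more than max_gap."""
    return max(rates.values()) - min(rates.values()) > max_gap

sample = [("A", True), ("A", True), ("A", False),
          ("B", False), ("B", False), ("B", True)]
rates = approval_rates(sample)
print(rates)                 # roughly {'A': 0.67, 'B': 0.33}
print(needs_review(rates))   # True: the gap exceeds the chosen threshold
```

A flagged gap is not proof of unlawful discrimination, but it is the kind of early signal that lets a business investigate before a claim arises.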
In some instances the appropriate solution may be to leave the final decision-making to humans. If you fear that the AI decision-maker used in your business is likely to produce biased or incomplete results, you should ensure that there is some human oversight of the decision-making process.
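One common pattern for such oversight, sketched below under assumed names and thresholds, is to let the AI handle only high-confidence cases and route everything else to a person.

```python
# Minimal human-in-the-loop sketch. The threshold, function names and
# toy model are assumptions for illustration only.

def toy_model(case):
    """Stand-in for a real AI model: returns (decision, confidence)."""
    return ("approve", 0.75 if case.get("borderline") else 0.98)

def human_review(case):
    """Stand-in for escalation to a human decision-maker."""
    return "referred to human reviewer"

def decide(case, threshold=0.9):
    decision, confidence = toy_model(case)
    if confidence >= threshold:
        return decision              # AI handles routine, high-confidence cases
    return human_review(case)        # a person makes the final call otherwise

print(decide({"borderline": False}))  # approve
print(decide({"borderline": True}))   # referred to human reviewer
```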
Use of Contracts
The unique advantage of using contracts to govern AI-related liability is that they allow the parties to allocate risks before the loss-causing event occurs. For example, a contract could contain an indemnity provision covering loss arising from AI. Such a clause may be useful where a third-party AI provider is responsible for testing and monitoring: if sued, your business could rely on that provision to be indemnified for losses caused to customers by AI malfunctions.
Contracts can also govern other types of AI-related risks. For example, they may be used to impose obligations on AI providers to, among other things, maintain the privacy and confidentiality of the data on which the AI is trained. This is especially important in light of the data protection obligations imposed by Jamaican legislation.
Insurance Coverage
Admittedly, AI liability is not at present readily insurable. Aspects of loss caused by AI may, however, be covered under some business-interruption policies. Your organisation may wish to consider whether insurance coverage exists for the particular types of risks associated with its use of AI. As AI becomes more mainstream, broader insurance coverage will likely follow. In the United Kingdom, for example, legislation requires owners of autonomous vehicles (operated by AI) to obtain and maintain insurance for any loss resulting from such vehicles.
Legal & Regulatory Considerations
Your organisation may also wish to consider whether there is non-AI-specific legislation, or whether there are regulations, that could have implications for AI use within your business. If you operate in an industry that uses AI to sort and process customer data, you should consider whether there are data privacy or confidentiality issues to safeguard against. For example, if you are processing personal data, you should ensure that all actions taken comply with Jamaica's Data Protection Act.
Every business should consider the legal and reputational risks likely to arise from its use of AI, and safeguard against them. Your organisation should seek legal advice to identify the unique set of risks facing your business.
Litrow Hickson is an associate at Myers, Fletcher & Gordon and is a member of the firm’s Litigation Department. Litrow may be contacted via litrow.hickson@mfg.com.jm or www.myersfletcher.com. This article is for general information purposes only and does not constitute legal advice.