Future-proofing pensions: Navigating the AI revolution
THE incorporation of artificial intelligence (AI) technology within pension management systems has gained momentum because of its potential to support better decision-making, streamline operations, and improve the overall retirement savings journey.
In keeping with the theme of this article, I thought it best to put ChatGPT to the test in defining artificial intelligence. I asked ChatGPT: “What is a very simple definition of artificial intelligence?” and in less than 3 seconds ChatGPT provided this definition:
Think of artificial intelligence (AI) as teaching computers to think and learn like humans do. It’s about making machines smart enough to understand, reason, and solve problems without needing constant human guidance. AI helps computers learn from experiences, adjust to new information, and perform tasks that typically require human intelligence, such as recognising speech, playing games, or making decisions.
AI has been used in the financial services industry for many years in deciding whether to grant homeowner loans, determining credit scores, conducting risk monitoring, and many other tasks. However, the release of ChatGPT in November 2022 moved AI out of the shadows, repositioning it from a tool for software engineers to one that ordinary people can use without any need for technical expertise.
AI will undoubtedly revolutionise many industries, and the pension sector is no exception. With the growing silver economy — all those economic activities, products and services designed to meet the needs of people over 50 — the desire for more curated and personalised retirement options, and the increasing complexity of financial markets, AI offers promising solutions. Let us look at some of the advantages.
Benefits and Advantages
1. Customer Service and Engagement – The most common AI-related engagement tool is the chatbot, of which Amazon’s Alexa is one well-known example. In the pensions sector, these technologies could offer immediate, personalised responses, whether to basic queries such as account balances or to more complex ones involving investment options or retirement planning. AI can also enable the chatbot to evolve from a reactive service (eg, I have a question and need help) into a proactive one, informed and activated by broader participant milestones such as salary raises.
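To make the reactive-versus-proactive distinction concrete, here is a minimal sketch of a milestone-triggered nudge. The class, field names and thresholds (Participant, contribution_rate, the 5 per cent raise trigger) are hypothetical illustrations, not any provider’s actual system.

```python
# Hypothetical sketch: a proactive chatbot rule triggered by a participant milestone.
# Names (Participant, salary_change_pct, proactive_prompt) are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    salary: float
    previous_salary: float
    contribution_rate: float  # fraction of salary contributed to the pension

def salary_change_pct(p: Participant) -> float:
    """Percentage change between the previous and current salary."""
    return (p.salary - p.previous_salary) / p.previous_salary * 100

def proactive_prompt(p: Participant) -> str | None:
    """Return a nudge when a salary raise suggests reviewing the contribution rate."""
    if salary_change_pct(p) >= 5 and p.contribution_rate < 0.10:
        return (f"Hi {p.name}, congratulations on your raise! "
                "Would you like to review your pension contribution rate?")
    return None  # no milestone detected; stay quiet

member = Participant("Alex", salary=110_000, previous_salary=100_000, contribution_rate=0.05)
message = proactive_prompt(member)
if message:
    print(message)
```

A production chatbot would draw such signals from payroll and administration systems and hand over to a human advisor wherever regulated advice is involved.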
2. Investment Management Performance – AI could also ease some of the concerns about the investment performance delivered by external investment managers. Japan’s Government Pension Investment Fund (GPIF), the largest retirement fund in the world with approximately US$1.5 trillion in assets, commissioned a study to explore an AI system that would help it select and monitor fund managers. The system detected and compared investment styles against expected performance in real time, based on select data such as trading items, timing, volume, and unrealised gains and losses. The initial results gave GPIF the capability to detect and compare the investment styles of the 16 fund managers evaluated, and then to determine which managers best suited the fund.
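As a rough illustration of the idea, the sketch below compares a manager’s observed trading profile with the style it was mandated to follow and flags drift. The features, numbers and threshold are invented purely for illustration; this is not GPIF’s actual system.

```python
# Illustrative sketch only: comparing a manager's observed trading profile with the
# style it was mandated to follow. The feature names and threshold are assumptions.

import math

def normalise(vec: dict[str, float]) -> dict[str, float]:
    """Scale a feature vector to unit length so profiles are comparable."""
    length = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {k: v / length for k, v in vec.items()}

def style_similarity(observed: dict[str, float], declared: dict[str, float]) -> float:
    """Cosine similarity between observed trading behaviour and the declared style."""
    a, b = normalise(observed), normalise(declared)
    return sum(a[k] * b.get(k, 0.0) for k in a)

# Hypothetical features derived from trade data: turnover, average holding period,
# small-cap tilt, and unrealised gain ratio.
declared_value_style = {"turnover": 0.2, "holding_days": 0.9, "small_cap": 0.3, "unrealised_gain": 0.4}
observed_this_quarter = {"turnover": 0.8, "holding_days": 0.2, "small_cap": 0.3, "unrealised_gain": 0.1}

similarity = style_similarity(observed_this_quarter, declared_value_style)
if similarity < 0.8:  # assumed tolerance for style drift
    print(f"Possible style drift: similarity {similarity:.2f} below threshold")
```

A real system would learn the style representation from far richer trade-level data, but the principle of turning trading behaviour into a comparable profile is the same idea described above.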
3. Improved Communication with Participants – Language can dramatically affect plan participants’ engagement and behaviour. Investment managers can use AI to customise plan communications so as to maximise positive participant responses. A recent study conducted by Invesco using AI-generated results showed that simple modifications, such as saying “staying on track” rather than “managing risk”, can meaningfully improve participant engagement and increase levels of trust. Other examples were positive phrases such as “Plan the retirement you deserve” and “Save enough today to enjoy a comfortable future”, which scored higher than prevention statements such as “Unexpected expenses can derail you in retirement.”
4. Fraud Prevention – Pensioners are particularly susceptible to online fraud. They are often poorly equipped to deal with identity theft and at risk of accepting unsolicited offers online. A UK Financial Conduct Authority study (2021) found that 72 per cent of pensioners could not identify a common sign of a pension scam. AI, however, can monitor fraud risks in real time, verify individuals’ identities, or limit access to accounts, thereby providing an extra layer of security.
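A minimal sketch of what real-time monitoring can look like is shown below, assuming a simple rule-based score over a withdrawal request. The field names, weights and escalation threshold are illustrative assumptions rather than any administrator’s actual controls.

```python
# A minimal sketch of real-time transaction screening using rule-based scoring.
# Thresholds, field names, and weights are illustrative only.

from datetime import datetime, timezone

def risk_score(txn: dict) -> int:
    """Score a pension account transaction; higher means more suspicious."""
    score = 0
    if txn["amount"] > 3 * txn["average_monthly_withdrawal"]:
        score += 2          # unusually large withdrawal
    if txn["new_payee"]:
        score += 2          # funds going to a bank account never used before
    if txn["country"] != txn["home_country"]:
        score += 1          # request originates from an unfamiliar location
    hour = txn["timestamp"].hour
    if hour < 6 or hour > 22:
        score += 1          # outside the member's normal activity window
    return score

txn = {
    "amount": 25_000,
    "average_monthly_withdrawal": 4_000,
    "new_payee": True,
    "country": "RU",
    "home_country": "JM",
    "timestamp": datetime(2024, 3, 1, 2, 30, tzinfo=timezone.utc),
}

if risk_score(txn) >= 4:   # assumed escalation threshold
    print("Hold transaction and contact the member through a verified channel")
```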
There are, of course, many other advantages such as:
• Predictive Analytics for Market Trends – AI can estimate a pensioner’s life expectancy, taking into account factors such as lifestyle, health records, and environment. This would help in making better investment decisions and therefore enhance portfolio management (see the sketch after this list).
• Compliance and Regulatory Adherence – AI can continuously monitor transactions and operations for compliance issues. This proactive approach could help reduce the risk of costly regulatory fines and penalties.
• Cost Reduction – Automating administrative tasks with AI, such as data entry, paperwork processing, and basic customer interactions, can lead to significant cost savings and allow team members to focus on more complex matters.
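On the predictive analytics point above, the following is a toy sketch of a longevity-adjusted planning horizon. The baseline and the adjustment factors are invented purely for illustration; a real model would be fitted to mortality, health and lifestyle data.

```python
# Illustrative longevity sketch: the baseline and adjustments below are invented
# for illustration, not actuarial assumptions from any real scheme.

BASELINE_LIFE_EXPECTANCY = 84.0  # assumed baseline in years

ADJUSTMENTS = {                  # illustrative additive adjustments, in years
    "smoker": -4.0,
    "regular_exercise": +2.0,
    "chronic_condition": -3.0,
    "urban_air_quality_poor": -1.0,
}

def estimated_life_expectancy(profile: dict[str, bool]) -> float:
    """Adjust the baseline by the factors present in the member's profile."""
    adjustment = sum(delta for factor, delta in ADJUSTMENTS.items() if profile.get(factor))
    return BASELINE_LIFE_EXPECTANCY + adjustment

def drawdown_horizon(profile: dict[str, bool], retirement_age: int = 65) -> float:
    """Years over which the pension pot should be planned to last."""
    return estimated_life_expectancy(profile) - retirement_age

member = {"smoker": False, "regular_exercise": True, "chronic_condition": False}
print(f"Plan for roughly {drawdown_horizon(member):.0f} years of retirement income")
```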
Challenges and Risks
The benefits of AI in the pensions context need, however, to be balanced against the challenges and risks.
1. Efficient use of AI tools – “One challenge present in developing all AI tools is whether the right questions are being asked and therefore answered by the AI tool. After all, different questions will lead to different answers.” (Mercer Global Pension Index, 2023).
The data fed into AI models shape what they produce. That applies not only to the questions a customer asks, but also to the data supplied by the developers of the AI. The quality, diversity, and relevance of the training data directly influence how effectively AI models can formulate personalised pension strategies. Prompt engineering and the training of AI tools are fast becoming complementary industries.
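To illustrate how much the framing matters, here are two ways of putting the same member question to an AI tool. The templates and member fields are hypothetical, and no particular chatbot product or API is implied.

```python
# A minimal illustration of how prompt framing shapes what an AI tool is actually asked.
# The member data and templates are invented for illustration.

MEMBER = {"age": 42, "balance": 3_200_000, "contribution_rate": 0.05, "target_age": 65}

GENERIC_PROMPT = "Am I saving enough for retirement?"

ENGINEERED_PROMPT = (
    "You are a pensions assistant. Using only the data provided, explain whether the "
    "member appears on track and list questions a licensed advisor should follow up on.\n"
    f"Member data: age {MEMBER['age']}, current balance {MEMBER['balance']}, "
    f"contribution rate {MEMBER['contribution_rate']:.0%}, target retirement age {MEMBER['target_age']}.\n"
    "Do not give regulated financial advice; flag uncertainty explicitly."
)

# The same member question, but the second version constrains the data the model may
# use, states the task precisely, and sets guardrails, so the answers will differ.
print(GENERIC_PROMPT)
print(ENGINEERED_PROMPT)
```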
2. Accuracy & Reliability – The complexity of financial markets and the unpredictable nature of economic events mean that AI models may not accurately predict or adapt to sudden market shifts. There is also the risk of over-reliance on historical data, which may not adequately capture unprecedented events or changes in economic conditions. And there are cases where an AI model is so accurate that it creates unintended results.
Some years ago, Target’s marketing department explored how it could determine whether female customers were pregnant because there are certain periods in life — pregnancy foremost among them — when women are most likely to radically change their buying habits.
If Target could reach out to customers in that period it could, for instance, cultivate new behaviours, getting them to turn to Target for specific goods. Target had been collecting data on its customers via shopper codes, credit cards, and surveys. It then combined that data with demographic data and third-party data it purchased. Crunching all that data enabled Target, using artificial intelligence, to generate a “pregnancy prediction” score.
The marketing department started targeting high-scoring customers with coupons and marketing messages. Several news outlets reported that about a year after creating the pregnancy-prediction model, a man walked into a Target outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry. “My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” The manager apologised and then called a few days later to apologise again.
On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”
(https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/?sh=678189556668).
3. Wrong Outcomes / Hallucinations – AI algorithms may carry biases, offer fabricated or unsupported responses (known as hallucinations), and cannot tell right from wrong.
An unfortunate example of wrong outcomes is the Robodebt programme in Australia where unlawful and incorrect automated debt collection letters were sent to 470,000 social security recipients due to an incorrect algorithm. The programme ran from 2016 until a court determined in 2019 that it was illegal. It resulted in some of Australia’s poorest people being asked to pay off debts they did not owe, after receiving notices claiming they owed thousands of dollars.
More than half a million Australians were affected by the policy, resulting in suicides and considerable mental illness among many recipients. Many were forced into worse financial circumstances — taking out loans, selling their cars, or using savings to pay off a debt they did not owe but were told they had to pay off within weeks
(https://www.bbc.com/news/world-australia-66130105).
It is important that models are fully tested to ensure that inappropriate outcomes do not occur and that recommendations are sensitive to the individual’s context. An incorrect algorithm could produce errors in benefit statements and pension projections, causing negative outcomes for pensioners (and liability for pension providers and fiduciaries).
4. Lack of transparency and human touch – While AI offers efficiency and automation, some individuals may prefer human interaction and guidance when it comes to their retirement planning. AI cannot replace the empathy and experience that human financial advisors provide. Human judgement and intuition will remain essential in certain retirement planning situations that require a nuanced approach, such as dealing with unexpected life events or market volatility.
In a recent survey of 227 pension professionals, 74 per cent said they would not be happy taking financial advice from an AI robot. However, 72 per cent agreed that integrating AI systems into the pensions industry has the potential to deliver better outcomes for pension scheme members, and 91 per cent of those surveyed were already using AI in their pensions business.
5. Algorithmic Decision Biases – AI algorithms might inherit biases present in training data, leading to unfair or skewed decisions. Amazon’s AI-powered recruitment tool serves as a prominent case study highlighting the challenges and consequences of algorithmic biases in AI systems. Amazon developed an AI tool to evaluate job applicants’ resumes. The AI system was trained on historical resumes submitted to Amazon over a 10-year period.
Since the majority of these resumes were from male applicants due to the tech industry’s gender skew, the AI system learned to favour male candidates by associating certain terms, schools, or experiences more frequently found on male applicants’ resumes with successful candidates.
The AI tool consistently downgraded resumes containing terms associated with women, even where the qualifications were relevant and significant. For example, it penalised resumes that included the word “women’s,” as in “women’s chess club captain”. The biased algorithm led to a discriminatory outcome, potentially excluding qualified female candidates from consideration and raising ethical concerns and legal implications. After discovering the bias, Amazon decided to abandon the AI recruitment tool. The case underscored the crucial need to actively mitigate biases in AI systems, especially where decisions significantly affect individuals’ opportunities and rights.
There are other challenges, and the list will never be exhaustive. For instance:
• Identity Theft – AI can faithfully reproduce a person’s voice, writing style, photo or video. Combined with the growth of sophisticated security-breaching programs, this may lead to an increase in identity fraud or unauthorised access to retirement savings, which could threaten public confidence in long-term pension systems.
• Data privacy and the need to protect and safeguard members’ data – AI runs on data: we use AI tools to process data, and we need data to train those tools in the first place. This raises concerns about data privacy and security. Data protection laws are undergoing radical changes in many countries, including Jamaica with its recent promulgation of the Data Protection Act on November 30, 2023. Trustees must ensure that security controls are in place to protect plan data and that a solid framework is developed with respect to privacy policies and protocols.
Environmental Considerations
AI has the potential to positively impact the pension industry through environmentally friendly investments. By integrating environmental considerations into investment strategies with the help of AI, pension funds can contribute to long-term value creation. This aligns with the interests of beneficiaries who are increasingly concerned about the sustainability and ethical implications of their investments.
However, the rapid rise of AI has raised concerns about its negative environmental impact, particularly in respect of energy consumption. It is projected that by 2025 the IT industry could consume up to 20 per cent of the world’s electricity and contribute approximately 5.5 per cent of global carbon emissions.
Training AI models requires vast amounts of energy. For example, the training of ChatGPT reportedly resulted in 552 metric tons of carbon emissions, equivalent to driving a passenger vehicle for over 2 million kilometres (https://dig.watch/updates/ais-impact-on-environment). According to one study by the University of Massachusetts, training AI models to do natural language processing can produce the carbon dioxide equivalent of five times the lifetime emissions of a car, or the equivalent of 300 round-trip flights between San Francisco and New York.
(https://www.forbes.com/sites/glenngow/2020/08/21/environmental-sustainability-and-ai/?sh=3836d3207db3).
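For context, the driving comparison above is easy to check with rough arithmetic. The figure of about 0.25kg of CO2 per kilometre for an average petrol passenger car is an assumption, and real-world emissions vary by vehicle.

```python
# Back-of-the-envelope check of the driving comparison above. The per-kilometre
# emissions figure is an assumed average, not a measured value.

training_emissions_kg = 552 * 1000          # 552 metric tons expressed in kilograms
car_emissions_kg_per_km = 0.25              # assumed average passenger-vehicle emissions

equivalent_km = training_emissions_kg / car_emissions_kg_per_km
print(f"Roughly {equivalent_km:,.0f} km of driving")   # about 2.2 million km
```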
AI is also a significant user of physical resources, including gold and rare earth metals, the mining of which threatens to cause further environmental damage and even greater emissions. Trustees, fund managers and administrators should be mindful of the impact of AI as part of their overall environmental, social, and governance (ESG) strategy. As AI continues to advance, the pension sector and policymakers will need to strike a balance between the transformative capabilities of AI and its substantial carbon footprint. AI innovations can become both a blessing and a curse for humanity and the planet.
What Role Does the Regulator Play in the AI Revolution?
Policymakers and regulators have a role in ensuring the use of AI is consistent with promoting financial stability, protecting consumers, and promoting market integrity and competition. The reality, though, is that the rapid advancements in AI technology outpace regulatory frameworks, thereby complicating oversight and enforcement.
Some of the measures regulators could implement include:
1. Policy Development – Regulators can establish policies and guidelines to govern the use of AI and machine learning in the pension industry. These policies can ensure that pension industry stakeholders adhere to ethical and legal standards when implementing AI systems.
2. Transparency and Accountability – Regulators could promote transparency by requiring pension institutions to disclose how AI algorithms are used in decision-making processes. This transparency ensures that stakeholders understand how these technologies impact pension-related decisions. In Texas, some judges require attorneys appearing before them to file a certificate attesting they either did not use generative AI at all or that, if they did, they checked the results.
3. Continued Education and Awareness – Regulators should educate pension industry professionals about the ethical and responsible use of AI. Published ethical guidelines, such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, could be a good starting point for regulators seeking to ensure that AI-driven pension systems align with societal values and ethical norms.
Regulations must maintain a balance between safeguarding public interests and fostering the growth and development of those regulated industries. Navigating this regulatory balancing act requires acknowledgement of the varying risks associated with AI and devising strategies that align regulation with risk, without stifling innovation through overbearing regulatory intervention.
Responsible Stewardship of Trustworthy AI
In May 2019 the OECD adopted its Principles on AI, the first international standards agreed by governments for the responsible stewardship of trustworthy AI. These state that:
1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4. AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning, in line with the above principles.
We are all at the beginning of a journey to understand the true power of AI, its reach and its capabilities. AI has the potential to revolutionise pension management by improving risk assessment, personalising retirement planning, streamlining administrative processes, and enhancing fraud detection and prevention. However, AI adoption in pension systems also raises ethical, regulatory and societal concerns, such as data privacy, bias, and socioeconomic disparities.
Future-proofing pensions in the AI era requires all stakeholders in the pension industry to take a proactive approach to the AI revolution, and to play our part in ensuring that our pension schemes are sustainable, equitable, and secure for future generations.