Data Science and AI

Generative AI for Actuaries: Risks and opportunities

Claim your CPD points

In the second article of this series, we dive deeper into some of the commercial and ethical aspects of generative AI for actuaries. We explore the productivity gains LLMs can offer, concentrating on the most value-adding tasks, and the risks associated with relying on this technology.

In Part 1 of this article series, we introduced the exciting new developments and potential applications of generative AI, a type of artificial intelligence that can create new content, such as text, images, audio and video. One of the most advanced examples of generative AI is ChatGPT, a chatbot powered by a Large Language Model (LLM) that can generate realistic and coherent text responses to any text prompt. We explored how LLMs work, what benefits they bring, and what challenges they pose for actuaries and their work.

Applications for Actuaries

Communication

LLMs can help actuaries translate complex mathematical concepts and recommendations into language that is easily digestible for audiences such as the Board of a company - or the readers of Actuaries Digital.

Many sections of this article have been written with the assistance of LLMs. Some editing and word-smithing were still needed, but working in partnership with AI saved a lot of time in the initial writing stages. One of the earliest uses for LLMs was translation, and harnessed properly, they can help actuaries communicate in languages other than their native tongue.

Education

LLMs can also be a useful tool for actuaries in education. When studying for actuarial exams, LLMs can help to clarify aspects of subject materials, rephrasing the same matter in different ways to help students better understand certain topics in preparation for their exams. For example, let's say you are studying the Data Science Applications subject and would like to understand how neural networks can be used to solve actuarial problems. Here's what ChatGPT had to say on this:

[Image: ChatGPT's explanation of how neural networks can be used to solve actuarial problems]

Source: OpenAI ChatGPT, accessed March 2023.

In the sample output from ChatGPT above, while the response was verbose and repetitive in style, it was tailored to completing traditional actuarial tasks as requested. Our experience has been that ChatGPT will not always provide a perfect response to a question, so it is helpful for users to ask ChatGPT to refine its answers or provide more detail on certain parts of the initial response with further prompting.

Coding

Coding in languages such as SQL, Python and R has increasingly become a BAU task for many actuarial and data professionals, particularly entry-level analysts. As a result, a significant portion of working time is spent developing and reviewing coding scripts for purposes ranging from descriptive data analysis to more advanced modelling. LLMs can be used to speed up this process and act as a personal coding assistant for actuaries.

Based on English instructions, we were able to get ChatGPT to write working Python code that can perform a simple data science task: ingesting financial time series data and transforming it to model a response variable. It was also able to explain the code in detail, as shown in the ChatGPT response snippet below.

[Images: ChatGPT's generated Python code and its accompanying explanation]

Source: OpenAI ChatGPT, accessed March 2023.

An analyst can very easily use scripts like the one produced above, make minor tweaks where necessary, and focus more time on analysing the results and providing value-adding insights.
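To give a flavour of the kind of script involved, here is our own minimal sketch (not ChatGPT's actual output; the made-up price series and the choice of a simple AR(1) model on log returns are illustrative assumptions):

```python
import math

def log_returns(prices):
    """Convert a price series into log returns."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def fit_ar1(returns):
    """Fit r_t = a + b * r_(t-1) by ordinary least squares (closed form)."""
    x, y = returns[:-1], returns[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Illustrative monthly index values (made up for this sketch)
prices = [100.0, 102.0, 101.0, 104.0, 103.5, 106.0, 108.0, 107.0]
rets = log_returns(prices)
a, b = fit_ar1(rets)
print(f"Fitted AR(1): r_t = {a:.4f} + {b:.4f} * r_(t-1)")
```

Even for a toy script like this, the analyst still needs to sanity-check the transformation and the model choice - which is exactly where the time saved on boilerplate can be redirected.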

Text Summarisation and Generation

Actuaries are often required to convey technical concepts in simple terms. LLMs may be great at helping actuaries achieve this. To see this in action, we tried the following prompt on ChatGPT.

[Image: ChatGPT summarising a technical text in simple terms]

Source: OpenAI ChatGPT, accessed March 2023.

Apart from summarising text, ChatGPT's abilities to remember earlier parts of a conversation (in the second prompt, the user did not need to mention the Australian Privacy Principles again) and to accept follow-up corrections to its responses (by clicking the thumbs-down icon) are powerful features.
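For actuaries who want to embed summarisation into a repeatable workflow rather than the web interface, the same task can be scripted. The sketch below separates prompt construction (a pure, testable function) from the API call; the model name and `openai.ChatCompletion.create` call reflect the openai Python library as it stood in early 2023, and the helper names are our own:

```python
def build_summary_messages(document: str, word_limit: int = 100) -> list:
    """Build a chat prompt asking the model to summarise a document.

    Keeping prompt construction in a pure function makes it easy to
    test and to reuse across many documents.
    """
    return [
        {"role": "system",
         "content": "You are an assistant that summarises documents "
                    "in plain language for a non-technical audience."},
        {"role": "user",
         "content": f"Summarise the following in at most {word_limit} "
                    f"words:\n\n{document}"},
    ]

def summarise(document: str, word_limit: int = 100) -> str:
    """Call the OpenAI chat API. Requires the `openai` package and an
    API key configured in the OPENAI_API_KEY environment variable."""
    import openai  # imported here so the prompt builder stays dependency-free
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_summary_messages(document, word_limit),
    )
    return response["choices"][0]["message"]["content"]
```

Note that anything passed to `summarise` leaves your environment and is sent to a third party - a point we return to under data ethics and privacy below.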

Moreover, whilst it's widely expected that ChatGPT can perform low-level automation tasks such as Q&A and text summarisation, it can also be used to perform creative tasks such as content creation. To see this in action, we tried the following prompt on ChatGPT.

[Image: a promotional tweet generated by ChatGPT]

Source: OpenAI ChatGPT, accessed March 2023.

In this 'generative' tweet, ChatGPT seems to have considered Twitter's character limits, utilised emojis and hashtags that suited the profile of the tweet, and come up with a reasonably catchy slogan. Could this be used to streamline some marketing-related tasks for actuaries, such as writing job descriptions or creating a hiring slogan that is more engaging to qualified applicants on LinkedIn?

Some have pointed out how AI's ability to both generate text from a brief list of points and to quickly summarise large amounts of text can lead to some rather absurd situations… 

[Image: tweet by Sam Altman on AI generating and then summarising the same text]

Source: Twitter. Sam Altman, CEO of OpenAI.

On a separate note, The Australian Financial Review invites submissions to this blog on how professionals are using ChatGPT. There have been some interesting submissions already.

Risks

These LLMs are new. Whilst the demonstrations above make them appear to be ready-to-use technology, there may be (known and unknown) inherent risks associated with their use. Here are some of the known risks:

  • Hallucinations

Large language models can hallucinate. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. Some examples of hallucinations in LLM-generated outputs include factual inaccuracies, unsupported claims, nonsensical statements, and improbable scenarios. 

Consequently, actuaries (and others) using LLMs must have expertise in the areas they ask the LLMs to write about and remain hyper-vigilant for errors in the reasoning or conclusions drawn.

  • Financial Advice

One of the concerns around using LLMs is the risk of giving inappropriate financial advice to customers. To prevent this, companies can limit the use of LLMs in chatbots to responding to general inquiries or simple customer service issues rather than allowing them to provide financial advice. Companies can also ensure that a chatbot has been thoroughly trained and vetted for accuracy and that there are human experts available to review and verify any advice given by the chatbot.
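The control described above - restricting the chatbot to general enquiries and escalating anything resembling advice to a human expert - can be sketched as a simple routing layer. This is illustrative only: the trigger phrases and function names are our own assumptions, and a production system would need a properly trained classifier and compliance sign-off rather than keyword matching:

```python
# Phrases that suggest the customer is asking for financial advice.
# Illustrative only - a real system would use a trained classifier
# and compliance-approved escalation rules.
ADVICE_TRIGGERS = (
    "should i invest", "which fund", "how much cover",
    "financial advice", "recommend a product",
)

def route_query(query: str) -> str:
    """Route a customer query: general questions may go to the LLM,
    while anything resembling a request for financial advice is
    escalated to a human expert."""
    q = query.lower()
    if any(trigger in q for trigger in ADVICE_TRIGGERS):
        return "human_advisor"
    return "llm_chatbot"
```

The design point is that the guardrail sits outside the LLM: the model never sees queries it should not answer, rather than being trusted to decline them.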

  • Jailbreaks

Although ChatGPT has generally been trained to remain politically correct - unlike the disastrous earlier results of Microsoft Tay - it has been possible to use long conversations or unusual prompts to take the conversation in strange or disturbing directions. This may present brand and reputation risks for organisations looking to use LLMs in a customer-facing context.

  • Data Ethics and Privacy

Another key consideration is data ethics. There are concerns about the sensitivity of certain types of communication, particularly when it comes to topics like death and grief. For example, if an insurance company needs to draft a consolation letter to the family of someone who has passed away, using machine-written communication could be seen as insensitive or impersonal.

Furthermore, transparency around whether communication is happening with a bot or with a human is important for sound ethical conduct. If the customer were misled to believe they were speaking with a human when they were actually chatting with a bot, there is a higher chance of legal action against the company. To mitigate this, companies must be transparent about the use of ChatGPT and other LLM-powered solutions, ensure that they are collecting and using customer data ethically and in compliance with regulations, and that they have appropriate safeguards in place to protect customer privacy. [1]

To mitigate some of the risks and limitations as discussed previously, the following flowchart provides a useful thought process to follow when deciding whether it is safe to use ChatGPT for a particular application:

[Image: flowchart for deciding whether it is safe to use ChatGPT for a given task]

Source: Tiulkanov, A (2023), Twitter.
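Paraphrased in code, the flowchart's decision logic looks something like this (our own paraphrase of the questions, not a verbatim transcription of the chart):

```python
def safe_to_use_chatgpt(output_must_be_true: bool,
                        can_verify_accuracy: bool,
                        will_take_responsibility: bool) -> bool:
    """Decision logic paraphrasing the flowchart: ChatGPT is safe to
    use when the accuracy of the output doesn't matter, or when you
    have the expertise to verify the output AND are willing to take
    responsibility for any inaccuracies you miss."""
    if not output_must_be_true:
        return True  # e.g. brainstorming or creative drafts
    return can_verify_accuracy and will_take_responsibility
```

The striking implication is that for any task where accuracy matters, the human remains fully accountable - the LLM only changes where the time is spent, not who carries the risk.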

Concerns have been raised about submitting data, via ChatGPT, to its developer, OpenAI, but there are other LLMs that can be run securely on local hardware without data transmission to third parties. Whilst running state-of-the-art LLMs requires expensive hardware, high-quality models are likely to become increasingly accessible even for smaller businesses due to more recent innovations. 

Concluding thoughts

Generative AI models have been a recurring highlight in our Data Science Newsletter for members. In 2019, we featured the GPT-2 model, but recent advances in scale have led to Large Language Models (LLMs), such as ChatGPT and GPT-4, with increasingly impressive performance, particularly in following human instructions.

For actuaries, these models offer opportunities to improve the productivity of everyday workflows as illustrated by the examples above. In the next article, we will deep-dive into a case study of using ChatGPT for exploratory data analysis.

References

[1] For example, organisations currently have obligations related to the disclosure, collection and usage of personal information stemming from the Privacy Act and GDPR. Further, the Privacy Act is currently under review, so these obligations will continue to evolve. See https://www.ag.gov.au/rights-and-protections/publications/privacy-act-review-report for further detail.

About the authors
Jacky Poon
Jacky is the current Chair of the Young Data Analytics Working Group and the Head of Finance - nib Travel, the Travel Insurance division of nib Health Funds. He is an editor of the monthly Data Analytics Newsletter for Actuaries Institute members. He is also a member of the IFoA Machine Learning Reserving Working Party and has a keen interest in research on the use of data analytics and machine learning techniques to complement the traditional actuarial skillset in insurance.
Ean Chan headshot
Ean Chan, Senior Manager at EY
Ean is a Senior Manager within EY's Actuarial Services team, with experience in Life Insurance and Data Analytics, primarily concentrating on Health and Human Services clients. As a member of the Institute's Young Data Analytics Working Group and formerly the Life Insurance Data Analytics Working Group, Ean is dedicated to driving progress in the actuarial field by augmenting our expertise with the latest data science and machine learning methodologies.
Jin Cui
Jin is the Senior Manager, Data Analytics at TAL Australia. He is passionate about advancing data analytics in the life insurance industry. Jin is a Fellow of the Actuaries Institute of Australia and a proud member of the (young) Data Analytics Working Group.
Kriti Khullar
Kriti is a Risk Intelligence Insights Lead at IAG with experience in provisioning, stress testing, credit risk portfolio reporting, risk data, risk systems, risk infrastructure, risk applications, divisional and enterprise-wide risk reporting for executive and Board audiences. She is an Associate Actuary with a keen interest in applying data analytics and the intelligence tradecraft to hone the future of risk management.
Amanda Aitken
Amanda is a member of the Institute’s Actuarial Education team. She brings her passion for teaching and years of actuarial experience to this role, which is focussed on delivering high-quality Actuary and Fellowship Program subjects. Amanda is currently a member of the Data Science Practice Committee and the Data Science Education Faculty.
Estelle Liu
Estelle joined Aware Super (formerly First State Super) as the Actuarial Practice Lead in August 2020. Prior to that, Estelle was a consultant at Rice Warner and focused on retirement solutions during her time at Rice Warner and prior to that at Mine Super. Estelle is a Fellow of the Actuaries Institute (FIAA) and a Chartered Enterprise Risk Actuary (CERA). Estelle is currently the convenor of the Actuaries Institute's Superannuation Projection and Disclosure Sub-committee.
Henry Ma
Henry works in the Balance Sheet Risk Management team at Commonwealth Bank and is part of the Institute’s Young Data Analytics Working Group (YDAWG). He has experience in both traded and non-traded market risk, capital analysis and ALM topics. He is passionate about applying analytics to risk quantification and stress testing.