AI can be put to good use and to bad, and, like all systems and tools, there is a danger that it will be poorly implemented, misused or misapplied, and that its results will be misrepresented.
So, what are the implications for actuaries from a professional standpoint?
At the risk of upsetting readers, I suggest that we not get too caught up in definitions.
The subject we are talking about here covers Artificial Intelligence (AI), Large Language Models (LLMs), machine learning, generative AI and other variants or descriptors, including some that have yet to be coined. Where precise definitions are important, as they might be for some questions of compliance, the governing documents (e.g., legislation) will provide those definitions or a source for them. For this discussion, that level of detail is not required.
The starting point for all professionalism considerations is our Code of Conduct. The Code adopts a principles-based approach, built around six principles.
The principles-based approach encourages us to consider not just whether we can do something, but also whether we should.
Likewise, the emphasis moves from asking “Does this strictly apply in this specific case?” to reflecting on “How might I use this point/idea/example/discussion to assist my professional decision-making?”
The Professionalism Committee believes that, at present, the Code itself does not need to change to meet the challenges that come with the widespread use of AI; however, we recognise that members may benefit from enhanced guidance in this area. Accordingly, we have refreshed the Guidance document that supports the Code.
The updated Guidance document explicitly references LLMs and the like. There is a new sub-section within the “The Code – Application, Scope and Purpose” section that speaks specifically to contemporary considerations.
It makes clear that the Code applies to the design, use and oversight of tools and algorithms, including AI. It highlights the responsibility to keep abreast of further developments in this area, not just to use the current tools responsibly. There are also specific call-outs in the Competence and Care, Objectivity and Communication sections of the Guidance.
You can find a copy of the updated Guidance by navigating to the Professional Standards and Guidance page, or you can view it here.
The Professionalism Committee would also like to draw members’ attention to the guidance published by the Institute and Faculty of Actuaries (IFoA), and specifically to a document updated in November 2024 that provides useful content and case studies that might further assist our members.
On the IFoA website, you can find a copy of this document in the Standards and Guidance section, under non-mandatory guidance. The IFoA ‘Actuaries Code’ is principles-based, like our own, and its principles resonate with ours, so the IFoA guidance document is readily applicable.
Here are a couple of things to note and/or ponder:
Join our upcoming Insights Session, ‘Don't Outsource Your Ethics to AI’, on Thursday 26 June, where you will tackle a hypothetical case study with industry colleagues that explores how to maintain professional and ethical obligations while using AI tools in daily practice.
This practical workshop complements the guidance discussed in this article, giving you hands-on experience navigating AI-related professional dilemmas in a supportive learning environment. Participate in person at our Sydney office, or form discussion groups at your workplace and join virtually as a team. Register now.