
Is AI Making it Easier or Harder to be a Professional?



AI is everywhere and is (almost?) inescapably being woven into our daily lives. 

It can be put to good use and to bad, and like all systems and tools there is a danger that it will be poorly implemented, misused or misapplied, and that its results will be misrepresented.

So, what are the implications for actuaries from a professional standpoint?

First – let’s address terminology

At the risk of upsetting readers, I suggest that we not get too caught up on definitions. 

The subject we are talking about here covers Artificial Intelligence (AI), Large Language Models (LLMs), machine learning, generative AI and other variants or descriptors, including some that have yet to be coined. When precise definitions are important, as they might be for some questions of compliance with legislation, then the governing documents (e.g., legislation) will provide those definitions or a source for them. For this discussion, that detail is not required.

The starting point for all professionalism considerations is our Code of Conduct. The Code adopts a principles-based approach, with the six principles being:

  • Integrity
  • Compliance
  • Competence and Care
  • Objectivity
  • Speaking up
  • Communication

The principles-based approach encourages us to consider not just whether we can do something, but also whether we should.

Likewise, the emphasis moves from the question “Does this strictly apply in this specific case?” to reflecting on “How might I use this point/idea/example/discussion to assist me with my professional decision-making?”

The Professionalism Committee believes that, at present, the Code itself does not need change to meet the challenges that come with widespread use of AI; however, we have recognised that members may benefit from enhanced guidance in this area. Accordingly, we have refreshed the Guidance document that supports the Code. 

Update to support the Code

The updated Guidance document explicitly references LLMs and the like. There is a new sub-section within the “The Code – Application, Scope and Purpose” section that speaks specifically to contemporary considerations.

It makes clear that the Code applies to the design, use and oversight of tools and algorithms, including AI. It highlights the responsibility to keep abreast of further developments in this area, not just the responsible use of current tools. There are also specific call-outs in the Competence and Care, Objectivity and Communication sections of the Guidance.

You can find a copy of the updated Guidance by navigating to the Professional Standards and Guidance page, or you can view it here.

The Professionalism Committee would also like to draw members’ attention to the guidance published by the Institute and Faculty of Actuaries (IFoA), and specifically to a document updated in November 2024 that provides useful content and case studies that might further assist our members.  

On the IFoA website, you can find a copy of this document in the Standards and Guidance section, under non-mandatory guidance. The IFoA ‘Actuaries’ Code’ is principles-based, like our own, and its principles resonate with ours, so the IFoA guidance document is readily applicable.

In summary

Here are a few things to note and/or ponder:

  • The use of AI does not absolve you of responsibility for the outputs and advice.
  • The requirement to act with Competence and Care does not mean a member should not get involved in new and evolving areas; we are, after all, a profession that deals in risk. It is about being deliberate about what you do and don’t know, and systematically working through how to fill the gaps in knowledge and verify results.
  • The use of AI can have significant impacts on society, which can be beneficial or harmful. Understanding and calling out the potential for harmful outcomes is part of our professional responsibility.
  • One of the more challenging principles might be Compliance, as legal and regulatory activity in this area is intense and jurisdictional differences can be considerable.
  • With the advance of AI as a tool, the time may already have arrived when the Actuaries’ Code requirement to exercise appropriate “competence and care” means its use has become a professional responsibility in some specific areas of practice.

Ready to apply these AI professionalism principles? 

Join our upcoming Insights Session, Don't Outsource Your Ethics to AI, on Thursday 26 June to tackle a hypothetical case study with industry colleagues that explores how to maintain professional and ethical obligations while using AI tools in daily practice.

This practical workshop complements the guidance discussed in this article, giving you hands-on experience navigating AI-related professional dilemmas in a supportive learning environment. Participate in person at our Sydney office, or form discussion groups at your workplace and join virtually as a team. Register now.

About the authors
Julie Evans
Julie has enjoyed 35ish years of challenges as an actuary and is a member of the Institute's Professionalism Committee.
Stuart Rodger
Stuart Rodger has followed a business and actuarial career for over 40 years, in Australia, the UK, Singapore and Japan.