
Check Your AI: A Framework for Its Use in Actuarial Practice



As AI reshapes actuarial practice, how can we ensure ethics keep pace? 

In this Responsible AI Column, Fei outlines the Ethical AI Lifecycle – an applied extension of the Actuarial Control Cycle that weaves fairness, privacy, transparency and accountability into every step of AI use.

Recent controversies around AI-driven decision-making in insurance and finance have shown that even well-intentioned algorithms can lead to discriminatory outcomes, regulatory scrutiny and the loss of public trust. In the UK, there have been investigations into potential biases in the pricing algorithms used in personal lines insurance (such as the research by Citizens Advice, Discriminatory pricing: Exploring the ‘ethnicity penalty’ in the insurance market). Meanwhile, the EU’s AI Act signals a broader shift towards enforceable AI ethics standards. These developments underscore the growing need for actuaries to engage with AI’s ethical dimensions.

AI ethics is not merely about regulating technology; it is about embedding principles such as fairness, accountability and transparency into every stage of the actuarial modelling process. As professionals entrusted with managing uncertainty and risk, actuaries are well positioned – but also obligated – to lead in applying these ethical standards.

Major principles of AI ethics

Several core ethical principles should guide the design and deployment of AI systems in actuarial practice:

  • Fairness and non-discrimination are foundational: decision makers must balance economic efficiency with social equity, while actively mitigating any direct and indirect discrimination that is embedded in data or model design
  • Transparency and explainability are essential – especially in pricing contexts, where decisions must be interpretable by not only developers but also customers and regulators
  • Accountability raises the question of who is responsible when AI makes errors: the actuary, the developer or the firm?
  • Privacy and data ethics are increasingly salient, particularly as insurers use third-party sources, telematics or even social media
  • Contestability ensures that people affected by automated decisions have meaningful channels for appeal and recourse 
  • Stability and robustness ensure that AI models remain reliable and perform as expected under real-world conditions

These principles align broadly with international guidelines, such as the OECD’s Recommendation on AI and the European Commission’s Ethics Guidelines for Trustworthy AI.

While many of these principles aim to govern the system as a whole, the involvement of human experts – particularly actuaries – remains essential. It ensures that critical judgments, contextual nuances and ethical concerns are not delegated entirely to automated systems but are actively reviewed and shaped by professional discretion.

An ethical AI lifecycle for decision-making

Translating ethical principles into practice requires the identification of critical ethical checkpoints across the AI lifecycle. Here, I propose a structured lifecycle with six stages: problem definition, data collection, exploratory data analysis, modelling, evaluation and deployment (Figure 1). Each stage encompasses distinct ethical risks and responsibilities. The lifecycle draws inspiration from established frameworks, including the OECD’s Framework for the Classification of AI Systems and the European Commission’s Ethics Guidelines for Trustworthy AI.


Figure 1: An ethical AI lifecycle for decision-making.

This lifecycle can be embedded within the actuarial control cycle to support robust and accountable AI governance. Doing so enables actuaries to systematically incorporate ethical risk assessment into their core activities of risk management, regulatory compliance and performance monitoring. Specifically, stage 1 corresponds to defining the problem, stages 2–5 to designing the solution and stage 6 to monitoring the results – mirroring the structure of the actuarial control cycle. Here, I identify and elaborate on the important ethical checkpoints at each stage of this integrated framework.

In the problem definition phase, consider whether the objective aligns with the ethical AI principles of fairness, transparency and explainability, as well as with regulatory standards. Does the optimisation focus solely on firm profitability, or is this balanced with customer welfare and societal equity? Who are the stakeholders responsible for AI use, oversight and risk ownership? Have risk tolerances and success criteria been defined and documented clearly?

During data collection, ethical scrutiny turns to the sources and structure of data. Does the data invade privacy? Do certain proxies (such as postcode or occupation) encode sensitive attributes indirectly? Is there meaningful consent for use of the data? Does the data reflect historical biases? Is it of sufficient quality to support fair and robust modelling? Do data sourcing and use policies align with internal governance and legal frameworks? 
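
To make the proxy question concrete, the sketch below screens candidate rating factors for association with a protected attribute using Cramér’s V. It is a minimal illustration on synthetic data: the column names (postcode_region, occupation, ethnicity) and the 0.3 review threshold are assumptions for demonstration, not prescribed values.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V: strength of association between two categorical variables (0 to 1)."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

# Synthetic portfolio snapshot -- all column names are illustrative.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "postcode_region": rng.choice(["N", "S", "E", "W"], 1000),
    "occupation":      rng.choice(["A", "B", "C"], 1000),
    "ethnicity":       rng.choice(["g1", "g2", "g3"], 1000),  # protected attribute
})

# Flag candidate rating factors strongly associated with the protected attribute.
for factor in ["postcode_region", "occupation"]:
    v = cramers_v(df[factor], df["ethnicity"])
    flag = "  <- review as a potential proxy" if v > 0.3 else ""
    print(f"{factor}: Cramér's V vs ethnicity = {v:.2f}{flag}")
```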

In the exploratory data analysis stage, assess whether the input features could have discriminatory effects, either directly or through correlation with protected groups. Can these features be explained, justified and defended to both internal stakeholders (such as model validators and compliance teams) and external stakeholders (such as regulators and customers)? Are assumptions, transformations and selection criteria documented transparently? Is there a clear rationale for including or excluding features that may pose fairness or legal risks?
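
A complementary screen at this stage is a predictability test: if the modelling features, taken together, can predict the protected attribute well, they encode it indirectly even after it has been excluded. The sketch below is a minimal illustration on synthetic data; the feature names are hypothetical, and the choice of logistic regression is an assumption – any well-calibrated classifier could play the same role.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic feature matrix -- names are illustrative; the protected attribute
# itself is excluded from the features used for pricing.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "vehicle_age": rng.integers(0, 20, 2000),
    "annual_km":   rng.normal(12000, 3000, 2000),
    "region_code": rng.integers(1, 9, 2000),
})
protected = rng.integers(0, 2, 2000)  # binary protected attribute (illustrative)

# If the remaining features can predict the protected attribute, they jointly
# encode it and may act as an indirect proxy even after it is removed.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, protected,
                      cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC for predicting the protected attribute: {auc:.2f}")
# AUC near 0.5 -> little leakage; materially above 0.5 -> investigate the features.
```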

Modelling involves selecting the right model for the right purpose, balancing predictive accuracy with fairness, explainability and transparency. Can model outputs be assessed for disparities using fairness metrics across relevant social groups? Are we mitigating the risk of amplifying any historical biases that are encoded in the data? Are the models interpretable enough to support accountability, auditability and regulatory scrutiny? Has the model been validated independently, and does the documentation justify the key modelling choices clearly?
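
As one way to quantify such disparities, the following sketch computes two widely used group fairness measures – the demographic parity difference and the disparate impact ratio – from binary model decisions on synthetic data. The group labels are illustrative, and the 0.8 rule of thumb borrowed from US employment law is a conventional trigger for review, not a regulatory threshold for insurance.

```python
import numpy as np
import pandas as pd

# Synthetic decisions: 1 = offered the standard rate, 0 = referred or loaded.
rng = np.random.default_rng(0)
decisions = pd.DataFrame({
    "group":    rng.choice(["g1", "g2"], 5000),
    "approved": rng.binomial(1, 0.7, 5000),
})

rates = decisions.groupby("group")["approved"].mean()
dp_difference    = rates.max() - rates.min()   # demographic parity difference
disparate_impact = rates.min() / rates.max()   # least- vs most-favoured group

print(rates)
print(f"Demographic parity difference: {dp_difference:.3f}")
print(f"Disparate impact ratio:        {disparate_impact:.3f}")  # < 0.8 is a common review trigger
```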

In the evaluation stage, it is important to assess the model’s performance regarding not only accuracy, but also fairness, robustness and explainability across relevant subgroups. Do the results generalise well under different scenarios? Have trade-offs been communicated clearly to decision makers? Are there scenario tests and sensitivity analyses to assess model resilience?
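
A simple way to operationalise subgroup evaluation is to disaggregate error metrics by group, since an acceptable aggregate error can mask a group that is systematically over- or under-charged. The sketch below illustrates this on synthetic claim costs; the group labels, the gamma severity distribution and the stand-in predictions are assumptions made purely for demonstration.

```python
import numpy as np
import pandas as pd

# Synthetic evaluation set: actual vs predicted claim cost with subgroup labels.
rng = np.random.default_rng(0)
eval_df = pd.DataFrame({"group": rng.choice(["g1", "g2", "g3"], 3000)})
eval_df["actual"]    = rng.gamma(2.0, 500.0, 3000)
eval_df["predicted"] = eval_df["actual"] * rng.normal(1.0, 0.2, 3000)  # stand-in model output

# Disaggregate error by subgroup: a good aggregate metric can hide a group that
# is systematically over- or under-charged.
eval_df["abs_err"] = (eval_df["actual"] - eval_df["predicted"]).abs()
eval_df["err"]     = eval_df["predicted"] - eval_df["actual"]
summary = eval_df.groupby("group").agg(
    mae=("abs_err", "mean"),   # average pricing error per group
    bias=("err", "mean"),      # signed error: systematic over/under-charging
    n=("err", "size"),
)
print(summary.round(1))
```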

Finally, during deployment, continuous monitoring is essential. Are models audited regularly for fairness, transparency, explainability and accuracy? Can customers appeal or contest unfair pricing outcomes? Is there a structured feedback loop for ethical evaluation and ongoing revision? And, crucially, does deployment address the original problem statement effectively, and incorporate feedback into future iterations?
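
For the monitoring step, one common and lightweight check is the population stability index (PSI), which compares the live distribution of model scores with the distribution observed at validation time. The sketch below is a minimal implementation on synthetic scores; the ten-bin quantile scheme and the 0.25 alert threshold are conventional rules of thumb rather than regulatory requirements.

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live score distribution."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                 # catch out-of-range live scores
    ref_frac  = np.histogram(reference, edges)[0] / len(reference)
    live_frac = np.histogram(live, edges)[0] / len(live)
    ref_frac  = np.clip(ref_frac, 1e-6, None)             # avoid log(0)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

# Synthetic scores: model outputs at validation time vs in production.
rng = np.random.default_rng(0)
validation_scores = rng.normal(0.0, 1.0, 10_000)
production_scores = rng.normal(0.3, 1.1, 10_000)  # drifted distribution

print(f"PSI = {psi(validation_scores, production_scores):.3f}")
# Common rule of thumb: PSI > 0.25 warrants investigation and possible recalibration.
```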

While many actuaries are already familiar with model risk management frameworks such as validation procedures, performance monitoring and documentation standards, the ethical checkpoints proposed here extend these foundations by integrating broader concerns of fairness, transparency, contestability and social impact. These questions are not always explicit in existing model governance practices, particularly regarding discrimination risks, stakeholder involvement or societal equity. As such, the ethical AI lifecycle should be viewed as a complementary extension to existing practices, enriching them with ethical foresight and accountability.

This framework is intended as a practical illustration rather than an exhaustive checklist. Its checkpoints and guiding questions are meant to stimulate critical reflection and promote responsible practices, particularly in actuarial and risk-sensitive domains. Depending on the application context, additional questions may be warranted, and some elements may require modification or deeper specificity. As AI systems continue to evolve, so too must the ethical frameworks governing them – requiring ongoing adaptation, professional judgment and stakeholder engagement.

The future: from ethics to regulation and social trust

Looking ahead, actuaries must prepare for a fast-evolving regulatory landscape. From the EU’s AI Act to the Algorithmic Accountability Act proposed in the US, emerging frameworks are reshaping the way in which AI is governed. Compliance and effective risk management are only the starting points. The actuarial profession should lead, not follow, in building public trust in AI systems, and should advocate for responsible, transparent and equitable AI practices.

As AI use expands across actuarial domains, actuaries must move beyond technical competence and take ownership of ethical oversight. This means actively incorporating ethical checkpoints into modelling workflows, examining assumptions critically, engaging diverse stakeholders and helping to shape governance frameworks that reflect both actuarial rigour and social responsibility. In doing so, we can ensure that AI systems serve not only efficiency, but also fairness, accountability and the public interest.

This article was first published in The Actuary Magazine.

About the author
Dr Fei Huang
Dr Fei Huang is an Associate Professor in the School of Risk and Actuarial Studies at UNSW Business School. Her research focuses on responsible data-driven decision-making, including fair and non-discriminatory insurance pricing, interpretable machine learning, mortality modelling and customer relationship management. Specifically, she examines ways to make insurance equitable, affordable and sustainable in the contexts of AI and climate change. For more information, see her profile page: https://www.unsw.edu.au/staff/fei-huang.