Understanding Australia's AI6: A framework for AI governance

The National AI Centre has released Guidance for AI Adoption (AI6). Here's what the six essential practices mean for actuaries and why the Actuaries Institute supports them.

Earlier this year, the National AI Centre (NAIC) released the AI6, updated guidance consolidating previous frameworks into six core practices for responsible AI adoption. The Actuaries Institute supports the release, noting that the guidance enables leaders across sectors to develop a shared fluency in AI governance and strengthen collaboration.

The guidance comes in two formats: Foundations (10 pages) for organisations getting started, and Implementation Practices (53 pages) offering detailed guidance broadly aligned with the international AI management standard (ISO/IEC 42001:2023). This tiered approach recognises that organisations are at different stages of AI maturity.

The Data Science and AI Practice Committee (DSAIPC) views the AI6 as a useful set of practical guidelines for governing AI. The guidance is free to access and written for organisational leaders, not just technical specialists, which helps make AI governance knowledge accessible and enables productive conversations between actuaries, executives, data scientists, engineers, compliance teams and other stakeholders. The international alignment means organisations implementing these practices will be well-positioned for global operations.

For actuaries working with AI across traditional and emerging roles, from pricing and reserving to data science and AI governance, these six practices provide a practical roadmap for ensuring AI systems are governed well.

The six essential practices
1. Decide who is accountable

This practice requires organisations to establish clear governance structures with specific accountability for AI systems throughout their lifecycle.

Key elements include:

  • assign a senior leader as the overall AI governance owner
  • create an AI policy setting out how your organisation will use AI responsibly
  • train accountable people so they can make informed decisions about AI's risks and behaviours
  • make specific people accountable for every AI system
  • clarify supply chain responsibilities when multiple parties are involved.

In actuarial practice: Consider a life insurer implementing AI-driven underwriting. Who owns the algorithm when it's built by a third-party vendor, customised by the insurer's data science team and used by underwriters to make decisions? Clear accountability means defining who's responsible for initial validation, ongoing monitoring, model updates and ultimate decisions to override or retire the system. When the AI flags an applicant as high-risk based on patterns in historical data, who's accountable for ensuring that decision doesn't encode unreasonable biases?

This practice extends familiar actuarial model governance across the entire AI lifecycle. It's not enough to have a model owner at deployment. You need clear accountability from conception through decommissioning, with explicit documentation of who can intervene, who monitors performance and who owns the decision when the AI system's recommendations conflict with professional judgement or customer circumstances.
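
To illustrate what lifecycle-wide accountability could look like, here is a minimal Python sketch of an accountability map for the underwriting example above. The stages, roles and field names are illustrative assumptions, not something prescribed by the AI6 guidance.

```python
# Illustrative accountability map for one AI system across its lifecycle.
# The stages, roles and fields are examples only, not prescribed by the AI6.
accountability = {
    "system": "AI-driven underwriting",
    "governance_owner": "Chief Risk Officer",  # senior leader owning AI governance overall
    "stages": {
        "vendor_build":       {"accountable": "Vendor (per contract)", "can_intervene": False},
        "customisation":      {"accountable": "Head of Data Science",  "can_intervene": True},
        "validation":         {"accountable": "Appointed Actuary",     "can_intervene": True},
        "deployment":         {"accountable": "Chief Underwriter",     "can_intervene": True},
        "ongoing_monitoring": {"accountable": "Model Risk Manager",    "can_intervene": True},
        "decommissioning":    {"accountable": "Chief Underwriter",     "can_intervene": True},
    },
}

for stage, detail in accountability["stages"].items():
    print(f"{stage}: {detail['accountable']} (can intervene: {detail['can_intervene']})")
```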

2. Understand impacts and plan accordingly

This practice focuses on identifying and managing AI's effects on stakeholders. Organisations should:

  • carry out stakeholder impact assessments to identify who may be affected by AI systems and how
  • create contestability channels for people to report problems, challenge AI decisions, or question outcomes
  • engage stakeholders early and continue throughout the AI lifecycle
  • pay particular attention to vulnerable groups.

In actuarial practice: An insurer deploys an AI system to streamline claims processing, automatically approving straightforward claims and flagging complex ones for human review. The stakeholder impact assessment should identify not just efficiency gains, but potential harms. Could the system disadvantage claimants who don't fit typical patterns? Might it systematically flag claims from certain demographic groups for additional scrutiny? How will customers with disabilities or language barriers contest automated decisions?

Contestability might mean building interfaces where claimants can easily request human review, establishing clear timelines for responses and maintaining records of contested decisions to identify systemic issues. If claimants from vulnerable cohorts are disproportionately having their claims flagged, that's a pattern that should trigger review of the underlying AI system, not just individual case-by-case corrections. For actuaries, this practice embeds fairness considerations throughout the AI lifecycle, ensuring professional judgement accounts for impacts beyond pure statistical performance.
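
As a concrete illustration of using contested-decision records to spot systemic issues, the short Python sketch below compares flag rates across cohorts against the overall rate. The cohort labels, counts and the 1.5x threshold are hypothetical assumptions for illustration only.

```python
# Illustrative check on contestability records: are claims from some cohorts
# flagged for manual review disproportionately often? Cohort labels, counts
# and the 1.5x threshold are assumptions for this sketch.

flag_counts = {            # claims flagged for additional scrutiny
    "general": 420,
    "vulnerable_cohort": 180,
}
total_counts = {           # all claims lodged
    "general": 9_000,
    "vulnerable_cohort": 1_500,
}

overall_rate = sum(flag_counts.values()) / sum(total_counts.values())

for cohort in flag_counts:
    rate = flag_counts[cohort] / total_counts[cohort]
    ratio = rate / overall_rate
    if ratio > 1.5:
        print(f"{cohort}: flag rate {rate:.1%} is {ratio:.1f}x the overall rate "
              f"- review the underlying system, not just individual cases")
    else:
        print(f"{cohort}: flag rate {rate:.1%} (ratio {ratio:.1f}x)")
```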

3. Measure and manage risks

This practice requires AI-specific risk management that accounts for context-dependent behaviour.

Organisations should:

  • create risk screening processes to identify and flag AI systems posing unacceptable risk or requiring additional governance attention
  • conduct risk assessments for each specific use case
  • apply controls proportionate to risk levels
  • establish processes to investigate and learn from AI-related incidents.

In actuarial practice: An AI pricing model for motor insurance might be low-risk when used to generate indicative quotes, medium-risk when used to set actual premiums with human oversight, and high-risk if deployed to automatically decline applications it assesses as high-risk without human review. The same underlying model presents fundamentally different risks depending on the deployment context and the autonomy level.

Risk management must account for AI-specific characteristics. The model might perform well on average but badly on edge cases, could drift over time as driving patterns change, or might amplify historical biases encoded in claims data. Unlike traditional actuarial models, where all logic is explicit, AI models can behave unpredictably when encountering scenarios unlike their training data. An AI system that worked well in stable conditions might make poor decisions during a natural disaster or pandemic when normal patterns break down. Context-dependent, ongoing risk assessment is essential, not just point-in-time validation at deployment.
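
One way to make risk screening proportionate to deployment context is to tier use cases by autonomy level and customer impact. The Python sketch below illustrates the idea for the motor pricing example; the tiers, factors and controls are our own illustrative assumptions, not terms defined in the AI6 guidance.

```python
from dataclasses import dataclass

# Illustrative risk screening for AI use cases. The tiers, factors and
# controls below are hypothetical examples, not defined by the AI6 guidance.

@dataclass
class UseCase:
    name: str
    autonomy: str         # "advisory", "human_in_loop" or "fully_automated"
    customer_impact: str  # "low", "medium" or "high"

def risk_tier(use_case: UseCase) -> str:
    """Assign a risk tier from the deployment context, not the model alone."""
    if use_case.autonomy == "fully_automated" and use_case.customer_impact == "high":
        return "high"
    if use_case.autonomy == "advisory" and use_case.customer_impact == "low":
        return "low"
    return "medium"

CONTROLS = {
    "low": ["standard model validation", "annual review"],
    "medium": ["pre-deployment fairness testing", "quarterly drift monitoring"],
    "high": ["independent testing", "named accountable owner",
             "human review of every adverse decision"],
}

# Same underlying pricing model, different deployment contexts.
quotes = UseCase("motor pricing - indicative quotes", "advisory", "low")
declines = UseCase("motor pricing - automated declines", "fully_automated", "high")

for uc in (quotes, declines):
    tier = risk_tier(uc)
    print(f"{uc.name}: {tier} risk -> {CONTROLS[tier]}")
```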

4. Share essential information

This practice emphasises transparency about AI use and capabilities.

Organisations should:

  • create and maintain an AI register documenting all AI systems
  • disclose AI use, clearly communicating when AI makes or influences decisions, generates content that impacts people, or might be mistaken for human judgement
  • identify and communicate system capabilities and limitations
  • ensure transparency across the AI supply chain.

In actuarial practice: A customer receives a premium increase notification. They should understand whether an AI system influenced that decision, what factors the system considered and what they can do if they believe the decision is incorrect. This doesn't mean explaining every technical detail of the model, but it does mean clear communication about AI's presence and role.

The AI register becomes particularly important when actuaries use multiple AI systems across the business: one for pricing, another for fraud detection, a third for customer service chatbots. Documenting these systems (their purpose, capabilities, limitations, training data sources, and responsible owners) enables proper governance and supports regulatory compliance. When the regulator asks how many AI systems you have and what they do, you need a comprehensive answer. For third-party AI systems, transparency means knowing what you've bought: What data was it trained on? What are its known limitations? What happens when it encounters scenarios outside its training distribution?
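
For illustration, an AI register entry might capture fields along the following lines. This is a minimal sketch only; the field names are our own and the NAIC's register template may structure the information differently.

```python
# A minimal, illustrative AI register entry. Field names are assumptions;
# the NAIC register template may organise this differently.
ai_register = [
    {
        "system_name": "Motor pricing model",
        "purpose": "Set indicative premiums for new business quotes",
        "accountable_owner": "Chief Pricing Actuary",
        "vendor_or_in_house": "in-house",
        "training_data": "5 years of policy and claims history",
        "known_limitations": [
            "Not validated for commercial fleets",
            "Performance degrades for postcodes with sparse data",
        ],
        "autonomy_level": "human_in_loop",
        "last_reviewed": "2025-06-30",
    },
    # ... one entry per AI system in use across the business
]

# A register makes questions like "how many AI systems do we run,
# and who owns each one?" answerable on demand.
for entry in ai_register:
    print(entry["system_name"], "-", entry["accountable_owner"])
```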

5. Test and monitor

This practice requires rigorous testing before deployment and ongoing monitoring afterwards.

Organisations should:

  • if buying an AI system, ask for proof it's been properly tested
  • test before deployment
  • monitor systems after deployment, tracking key performance metrics relevant to identified risks
  • extend data governance and cybersecurity practices to AI systems
  • for high-risk systems, consider independent testing and auditing.

In actuarial practice: An insurer implements an AI system to predict claim costs for reserving purposes. Pre-deployment testing should validate performance against historical data, but also stress-test the system. How does it handle unusual claim patterns? What happens with sparse data in smaller portfolios? Does it maintain calibration across different claim types, policy durations and customer segments?

Post-deployment monitoring is critical because AI systems can drift. If claims patterns change (perhaps due to emerging risks like climate change impacts or new medical treatments), does the AI system adapt appropriately or does it continue assuming historical patterns hold? Regular monitoring should track not just overall accuracy but fairness metrics. Are prediction errors consistent across demographic groups, or is the system systematically over-reserving for some customer segments and under-reserving for others? Unlike traditional actuarial models that remain stable until explicitly updated, AI systems may change behaviour as they encounter new data, requiring ongoing vigilance from actuaries to ensure professional standards are maintained.
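
A simple monitoring check along these lines might compare reserving model error across customer segments and flag material divergence, as in the Python sketch below. The column names, data and the 10% tolerance are assumptions for the sketch, not recommended values.

```python
import pandas as pd

# Illustrative post-deployment monitoring check: compare reserving model
# error across customer segments and flag material divergence.

def segment_bias(df: pd.DataFrame, tolerance: float = 0.10) -> pd.DataFrame:
    """Average (predicted - actual) claim cost by segment, relative to actuals."""
    summary = (
        df.assign(error=df["predicted_cost"] - df["actual_cost"])
          .groupby("segment")
          .agg(mean_actual=("actual_cost", "mean"),
               mean_error=("error", "mean"))
    )
    summary["relative_bias"] = summary["mean_error"] / summary["mean_actual"]
    summary["flag"] = summary["relative_bias"].abs() > tolerance
    return summary

# Example: run monthly on the latest settled claims and escalate any flags.
claims = pd.DataFrame({
    "segment": ["A", "A", "B", "B"],
    "predicted_cost": [1300, 1100, 2500, 2300],
    "actual_cost": [1000, 950, 2600, 2450],
})
print(segment_bias(claims))  # segment A is over-reserved by >10% and is flagged
```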

6. Maintain human control

This practice ensures meaningful human oversight of AI systems.

Organisations should:

  • ensure meaningful human oversight matching the system's autonomy level and stakes involved
  • build in intervention points where humans can pause, override, roll back, or shut down AI systems if needed
  • provide training to anyone overseeing AI systems
  • maintain alternative pathways so critical functions can continue if AI systems malfunction.

In actuarial practice: An AI system recommends declining a business interruption claim based on policy wording interpretation. Even if the AI is highly accurate on average, the claim assessor needs the authority, information and training to override that recommendation when professional judgement suggests otherwise in a particular claim situation. Perhaps the case involves unique circumstances the AI hasn't encountered, or the customer's specific context matters in ways that aren't visible to the AI.

Human control means more than having an override button. It requires that humans have sufficient information and ability to exercise judgement effectively. If the AI provides a recommendation without explanation, humans become rubber stamps rather than providing meaningful oversight. Similarly, if humans aren't given enough time to review recommendations, meaningful review is unlikely to happen. Actuaries overseeing AI systems need training to understand the system's capabilities, limitations, failure modes and when to intervene. They need clear authority to override the AI without facing pressure to simply accept its recommendations. Critically, they need contingency plans: if the AI pricing system fails during peak renewal season, can underwriters still process business using alternative methods? Professional judgement cannot be outsourced entirely to automated systems.
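
A minimal sketch of what meaningful intervention points could look like in code is given below: the assessor's decision takes precedence, recommendations without reasons are routed to human review, overrides are logged, and an alternative pathway applies when the system is unavailable. The function and field names are hypothetical, not a prescribed design.

```python
# Illustrative human-in-the-loop decision flow for an AI claim recommendation.
# The structure and names are a sketch of the ideas above, not a prescribed design.

def decide_claim(ai_available, ai_recommendation, ai_reasons, assessor_decision):
    """Return the final decision, keeping the human assessor in control."""
    if not ai_available:
        # Alternative pathway: critical functions continue if the AI malfunctions.
        return assessor_decision or "refer_to_senior_assessor"
    if not ai_reasons:
        # A recommendation without reasons cannot be meaningfully reviewed.
        return "refer_to_human_review"
    if assessor_decision and assessor_decision != ai_recommendation:
        # The assessor's judgement stands; the override is logged for later analysis.
        log_override(ai_recommendation, assessor_decision, ai_reasons)
        return assessor_decision
    return assessor_decision or ai_recommendation

def log_override(ai_rec, assessor_rec, reasons):
    print(f"Override recorded: AI recommended {ai_rec}, assessor decided {assessor_rec}")

# Example: the AI recommends declining, but the assessor approves after review.
print(decide_claim(True, "decline", ["policy wording clause 4.2"], "approve"))
```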

Voluntary guidance and regulatory context

An important detail is that the AI6 guidance is voluntary best practice, not mandatory regulation. This distinction matters because when the previous voluntary standard, the Voluntary AI Safety Standard (VAISS), was released alongside proposed mandatory guardrails in September 2024, the strong linkage created the impression that all elements of the VAISS would effectively become mandated.

The Actuaries Institute was among 275 organisations that submitted feedback when consultation on the proposed mandatory guardrails closed in October 2024, emphasising actuaries' unique position to contribute expertise spanning advanced analytics, risk management and ethical considerations. We observed challenges with the VAISS in particular situations, noting that it was not written in a way that could be carried directly into a regulatory mandate and applied line by line. While the content of the new guidance is similar (and a published comparison shows the alignment), framing it as guidance is a step forward: organisations can choose to adopt the practices that make sense in their context and adapt or set aside those that do not.

So, while the AI6 guidance helps organisations prepare and demonstrates commitment to responsible AI, it shouldn't be conflated with legal requirements. However, we would expect that implementing these practices now would position organisations well for whatever mandatory framework might emerge.

What actuaries should do now

Implement the AI6 practices in a way that makes sense for you: Whether or not they become mandatory, they represent recommended best practice and provide solid foundations for responsible AI use. But apply your own judgement – if a particular point doesn't work for your organisation, don't adopt it just because it's on the list. And if you think of a good idea that's not included in the AI6, do it anyway – it's still a good idea.

Map to existing frameworks: Most organisations already have model risk management frameworks and broader risk and governance frameworks. In some cases, the governance and management of AI models will mirror these existing processes. By aligning with ISO/IEC 42001, the AI6 also provides a globally recognised AI framework to leverage. Map the AI6 practices to existing processes, identify gaps and address them.

Use the tools and templates: NAIC has provided practical resources including an AI policy template and AI register template to help organisations get started quickly.

Document things, especially in higher-stakes situations: Every AI6 practice emphasises documentation, which is essential for audit trails, regulatory compliance and demonstrating appropriate professional judgement.

Stay informed on regulatory developments: The AI space is developing quickly. Organisations implementing AI6 practices now will be well-prepared for whatever mandatory requirements might come.

Conclusion

The AI6 guidance represents the Australian government's clearest articulation yet of responsible AI governance. As the Actuaries Institute noted, good governance enables organisations to realise AI's benefits. For actuaries (professionals entrusted with managing uncertainty and risk), these practices provide a practical framework that complements existing model risk management while extending it to address AI-specific considerations around fairness, transparency and accountability.

The guidance gives us the framework. Now it's up to the profession to implement it thoughtfully, building the trust and capability needed to ensure AI systems serve not only efficiency, but also fairness, accountability and the public interest.

Join the conversation

Want to explore how to apply these practices in your organisation? Join Lauren Solomon, Special Advisor, Governance Practice at the National AI Centre, and Chris Dolman, Member of the Institute's Data Science and AI Practice Committee, for an interactive Insights Session on AI governance.

Whether you're starting out or scaling up, you'll gain actionable guidance to navigate AI adoption with confidence and responsibility.

About the authors
Ean Chan
Ean is a Senior Manager within EY's Actuarial Services team, with experience in Life Insurance, Data Analytics and AI, primarily concentrating on Health and Human Services clients. As chair of the Institute's Young Data Analytics Working Group and member of the Data Science and AI Practice Committee, Ean is dedicated to driving progress in the actuarial field by augmenting our expertise with the latest data science, AI and machine learning methodologies.
Chris Dolman
Chris Dolman is working to ensure that AI and other data-driven decisions operate responsibly, thoughtfully and ethically. He is an active member of the Institute’s General Insurance and Data Science Practice Committees, and was the 2022 Actuary of the Year.
