Many private health insurers (PHIs) now have access to Artificial Intelligence (AI) tools integrated into office workflows, with some also exploring applications such as call centre assistants. AI offers significant potential, and its value is most fully realised when supported by clear objectives and context.
This article explores how AI is being applied in private health insurance in practice. Outcomes here include financial performance, member value, and risk sustainability, and often require trade-offs across stakeholders and time horizons. We focus on three core questions: does AI improve outcomes, not just efficiency; how are decisions made; and where are they effectively controlled?
The following examples, drawn from different PHIs, suggest a pattern in how AI is being applied. AI appears most effective when supporting well-defined objectives and can lead to misaligned outcomes when those objectives are implicit or unclear.
AI is often introduced to improve speed, cost, and efficiency. These are valid goals but may not be sufficient: a process that did not add value before is unlikely to do so simply by being done faster. A more fundamental question is what outcomes we are trying to improve, and for whom. For PHIs, this spans multiple stakeholders and often requires trade-offs.
In one PHI, AI was used to explore whether the Financial Condition Report (FCR) could be reduced while still meeting prudential standards. Sections identified for removal included areas valued by the organisation, such as industry insights and emerging risk interpretation. Without these, the report’s value to stakeholders would likely be reduced. AI answered the question that was asked, focusing on compliance rather than usefulness.
An alternative approach is to first define what makes the report decision-useful, then use AI to streamline production while preserving those elements. Analysis is becoming easier. Influence and alignment remain valuable.
Once goals are clear, the focus shifts to design: how systems, models, and processes are structured, and how assumptions are embedded. Organisations don’t always behave as intended; they tend to behave in line with how they are designed.
AI can explore scenarios, test assumptions, and challenge decisions at scale. This speed and range can create challenges: high-certainty outputs or similar-looking scenarios can dull critical judgement. Nor do these design choices operate in isolation; they interact with how products are compared and understood in the market, shaping the behaviours those assumptions depend on.
At another PHI, AI is being explored to support premium submissions, based on parameters such as retention, product characteristics, and strategic positioning. These need to be defined within a broader context. Community rating limits pricing flexibility, and product design and member behaviour still influence outcomes.
Given a defined objective, AI may prioritise segments with stronger retention or profitability, for example, in closed or less competitive products. Whether this is appropriate depends on how objectives are defined. If assumptions are unclear, AI will produce an answer, but not necessarily one aligned to organisational intent. The risk is misaligned optimisation. An alternative is to explicitly define the objective function and trade-offs, then use AI to test and refine them.
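Making the objective function explicit can be as simple as writing it down as code before any optimisation is run. The sketch below is purely illustrative: the segments, weights, and scoring rule are invented assumptions, not any PHI's actual model. The point is that once weights are visible, changing them changes which segments an optimiser would favour, and that trade-off can be debated rather than discovered after the fact.

```python
# Hypothetical sketch: an explicit pricing objective with visible trade-offs.
# All weights, segments, and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    margin: float          # expected annual margin per member ($)
    retention: float       # expected one-year retention probability
    member_value: float    # proxy score for member value delivered (0-1)

def objective(seg: Segment, w_margin: float = 0.5,
              w_retention: float = 0.3, w_value: float = 0.2) -> float:
    """Weighted objective; the weights ARE the trade-off decision."""
    return (w_margin * seg.margin / 100     # scale margin to roughly 0-1
            + w_retention * seg.retention
            + w_value * seg.member_value)

segments = [
    Segment("closed_product", margin=80.0, retention=0.95, member_value=0.4),
    Segment("open_competitive", margin=40.0, retention=0.80, member_value=0.8),
]

# Profit-heavy default weights favour the closed product;
# value-heavy weights (e.g. 0.1/0.2/0.7) would flip the ranking.
ranked = sorted(segments, key=objective, reverse=True)
print([s.name for s in ranked])  # → ['closed_product', 'open_competitive']
```

With the default weights, the optimiser prioritises the closed, less competitive product; whether that is appropriate is now a question the organisation can answer explicitly rather than one the model answers implicitly.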
If a fund competes on value rather than price, this must extend beyond the model. AI comparison tools make price differences more visible and actionable, increasing price competition and focusing attention on what is easily compared. Organisations competing on value may need to work harder to ensure that value is visible and credible, especially where it is not captured well by comparison tools.
Without this, assumptions around retention or lifetime value may not be realised, creating pressure on margins and capital. Pricing decisions therefore rely not only on economic assumptions, but also on whether the value proposition is understood by customers and drives behaviour.
As AI becomes more embedded in private health insurance, the shift is not only in how work is performed, but in how decisions are made and who effectively controls them.
In many cases, decision-making is no longer confined within the organisation. Comparison tools, embedded journeys, and AI-assisted interactions increasingly shape how products are evaluated and selected. This creates a subtle but important shift: value moves toward whoever controls the decision point, rather than the entity providing the product.
This raises a different question: not only whether AI improves efficiency or outcomes, but whether it changes where decisions are made, how they are made, and how well they align with organisational intent.
These design choices influence whether AI improves decisions or merely reinforces existing processes. Efficiency is easy to measure, but many outcomes involve trade-offs across stakeholders, time horizons, and risk.
AI can generate answers quickly and confidently. Without critical review, this can lead to poor decisions, especially if efficiency gains are not matched with stronger judgement. The opportunity is not just to do the same work faster, but to use that time to improve decisions. There remains a need for clear, independent advice. AI enables faster decisions, so errors can scale more quickly.
In one PHI, predictive models identify members at risk of lapse, and interventions are tested through real-world experiments. These experiments (typically not pricing-related) not only test interventions but also help reveal where and how decisions are being shaped.
AI supports this by helping design experiments, identify data needs, and accelerate synthesis. However, it does not replace experimentation. Outcomes still depend on testing, interpretation, and judgement.
Efficiency and cost can be valid outcomes, particularly where reducing effort is the objective. However, they may not capture the full value delivered, especially where trade-offs exist. AI will optimise for whatever objective it is given; if that objective is unclear, efficiency may come at the expense of broader outcomes. AI itself may not be the advantage: advantage may come from how decisions are defined, how they are interpreted, and where they are effectively controlled.
AI can improve both efficiency and outcomes, but efficiency gains primarily create capacity. Whether that capacity improves outcomes depends on how it is used. If used only to reduce time or cost, outcomes may not change. If used to improve analysis, decisions, and alignment, outcomes can improve materially.
There is an opportunity to combine human judgement with AI capability to achieve better decisions than either alone. Used well, AI shifts the focus from producing outputs efficiently to making them more decision-useful and aligned to organisational objectives.
AI will optimise what it is asked to. Much of the advantage lies in choosing the right questions and aligning the organisation to deliver them. This includes being explicit about stakeholder outcomes, trade-offs, and how capacity created by AI is used, as well as recognising how internal models and comparison tools shape outcomes. Ultimately, the advantage will not come from AI itself, but from how decisions are designed, where they are made, and who controls them.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 (CC BY-NC-ND 4.0) licence.