Imagine driving through dense fog. Tail lights flicker ahead, but the road is unclear. Do you brake hard, slow down or swerve?
That moment — decision-making under uncertainty — is exactly what actuaries face daily. It was also the metaphor that opened the insightful session by UNSW PhD candidate Eric Tian Dong at the 2025 All Actuaries Summit. Dong is supervised by Professor Bernard Wong, Dr Patrick Laub, and Professor Benjamin Avanzi of the University of Melbourne.
Titled “Uncertainty Quantification in Neural Network Models for Actuarial Use”, the session invited us to look beyond averages and start asking: how confident are we in what we know — and what don’t we know?
Their message was clear. To keep pace with today’s evolving risks, we need to rethink how we measure what we don’t know. Neural networks provide the tools to make that shift.
Dong used the fog metaphor to distinguish between two fundamental types of uncertainty:

- Aleatoric uncertainty — the inherent randomness in outcomes, which more data cannot eliminate.
- Epistemic uncertainty — gaps in our knowledge or data, which can shrink as we learn more.
This duality is familiar to risk professionals. But what’s new is how cutting-edge machine learning tools like Bayesian Neural Networks (BNNs) and Distributional Refinement Networks (DRNs) can help actuaries and risk teams explicitly model both types of uncertainty.
DRNs refine traditional actuarial models (e.g., Generalised Linear Models) using neural networks to output full probability distributions — not just averages. BNNs treat model weights as random variables, quantifying what we don’t know (epistemic uncertainty).
Notably, these same modelling concepts power AI systems like ChatGPT. The difference? Actuaries can bring domain-informed structure, transparency, and risk awareness to ensure these models serve regulated, high-stakes environments.
Dong’s session emphasised the need to model epistemic uncertainty, which arises from knowledge gaps, not randomness. This is especially useful in areas like cyber insurance, where data is sparse and evolving rapidly.
Actionable approach: In risk models, explicitly flag low-confidence areas. Use ensemble models or BNNs to visualise where your model is uncertain — guiding expert reviews, stress testing or the application of conservative assumptions.
Example: In cyber pricing, BNNs can highlight high epistemic uncertainty around emerging threats. This directly informs reinsurance structures or policy exclusions — turning model gaps into actionable risk controls.
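The ensemble idea above can be sketched in a few lines. This is a deliberately minimal illustration, not the session’s method: the synthetic “cyber loss” data and the simple linear base learners are hypothetical, but the mechanic — reading epistemic uncertainty from the disagreement of models fitted to resampled data — is the same one deep ensembles use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: observations only exist for x in [0, 2],
# mimicking a sparse-data region (e.g. an emerging cyber threat).
x = rng.uniform(0, 2, size=200)
y = 1.0 + 0.5 * x + rng.normal(0, 0.2, size=200)

def fit_linear(xs, ys):
    """Least-squares fit of y = a + b*x."""
    A = np.column_stack([np.ones_like(xs), xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Ensemble in miniature: fit many models on bootstrap resamples.
ensemble = [fit_linear(x[idx], y[idx])
            for idx in (rng.integers(0, len(x), size=len(x))
                        for _ in range(50))]

def predict_with_uncertainty(x_new):
    preds = np.array([a + b * x_new for a, b in ensemble])
    return preds.mean(), preds.std()  # spread ~ epistemic uncertainty

m_in, s_in = predict_with_uncertainty(1.0)    # inside the data range
m_out, s_out = predict_with_uncertainty(5.0)  # extrapolation region
print(f"x=1.0: mean={m_in:.2f}, epistemic sd={s_in:.3f}")
print(f"x=5.0: mean={m_out:.2f}, epistemic sd={s_out:.3f}")
```

The ensemble members agree closely where data is plentiful and disagree sharply where it is not — exactly the “low-confidence flag” the actionable approach calls for.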
In risk management, knowing the average loss isn’t enough. DRNs provide a way to begin with a traditional actuarial model and refine it using neural networks to model the full conditional distribution. The result? Not just expected values, but full distributions capturing fat tails, skewness and volatility shifts.
Actionable approach: Wherever feasible, replace single-number forecasts with probabilistic outputs — especially in areas subject to volatility or regulatory capital requirements.
Example: In motor-insurance severity modelling, you might begin with a Gamma-GLM that predicts claim cost based on driver age, vehicle type, and region. A DRN can then refine that baseline — flexibly shifting skewness and tail thickness across all quantiles — to produce full conditional loss distributions. These richer forecasts feed directly into capital models and help design more efficient reinsurance layers.
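The refinement mechanic can be illustrated without training anything. In the sketch below, all numbers are hypothetical: the Gamma parameters stand in for a fitted GLM baseline, and the fixed tail adjustment stands in for the log-adjustments a DRN’s neural network would learn from policy covariates. The point is only to show how a baseline density is discretised, reweighted, and renormalised into a refined distribution.

```python
import numpy as np
from math import gamma as gamma_fn

# Hypothetical baseline: a Gamma GLM's fitted severity distribution
# for one policy (shape/scale are illustrative, not fitted values).
shape, scale = 2.0, 1000.0

def gamma_pdf(y, k, theta):
    return y ** (k - 1) * np.exp(-y / theta) / (gamma_fn(k) * theta ** k)

# Discretise the loss axis, as a DRN does when refining a baseline.
edges = np.linspace(1.0, 10000.0, 201)
mids = 0.5 * (edges[:-1] + edges[1:])
baseline = gamma_pdf(mids, shape, scale) * np.diff(edges)
baseline /= baseline.sum()

# In a real DRN these log-adjustments come from a neural network fed
# with covariates; here a fixed, hypothetical thickening of the right
# tail shows the mechanics.
log_adjust = np.where(mids > 4000.0, 1.0, 0.0)
refined = baseline * np.exp(log_adjust)
refined /= refined.sum()  # renormalise to a proper distribution

tail_base = baseline[mids > 4000.0].sum()
tail_refined = refined[mids > 4000.0].sum()
print(f"P(loss > 4000): baseline {tail_base:.3f} -> refined {tail_refined:.3f}")
```

Because the refined object is a full distribution rather than a mean, tail probabilities like the one printed above can flow straight into capital models and reinsurance layer design.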
The actuarial control cycle requires continuous model monitoring and validation. Dong’s session reminded us that modern AI models can self-diagnose: techniques like Monte Carlo dropout or deep ensembles quantify model confidence, linking directly to model risk.
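Monte Carlo dropout amounts to leaving dropout switched on at prediction time and reading uncertainty from the spread of repeated stochastic forward passes. A minimal sketch, with a tiny network whose weights are random stand-ins for trained parameters (everything here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "trained" weights of a tiny two-layer network;
# in practice these come from training, here they are random stand-ins.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def forward(x, drop_rate=0.5, mc_dropout=True):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    if mc_dropout:  # MC dropout: keep dropout ON at prediction time
        mask = rng.random(h.shape) > drop_rate
        h = h * mask / (1.0 - drop_rate)  # inverted-dropout scaling
    return h @ W2

x_new = np.array([[0.7]])
# Many stochastic forward passes; their spread approximates the
# model's confidence (epistemic uncertainty) at this input.
samples = np.array([forward(x_new)[0, 0] for _ in range(200)])
print(f"prediction mean={samples.mean():.3f}, sd={samples.std():.3f}")
```

The standard deviation across passes is the self-diagnosed confidence measure that can be carried into validation reports as a model-risk flag.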
Actionable approach: Incorporate model uncertainty as a risk flag in validation reports and internal capital assessments. Highlight areas of low confidence to support board-level discussions on risk posture and assumptions.
Example: In capital stress testing, if models reveal low confidence in pandemic recovery projections, that uncertainty should inform solvency buffers or adjustments to risk appetite statements.
What’s powerful about these tools is that they bridge traditional actuarial silos: whatever your practice area, tools that quantify both randomness (aleatoric) and knowledge gaps (epistemic) help transform “best estimates” into robust, defendable decisions.
As a member of the Risk Management Practice Committee, I see Dong’s session not just as an academic showcase, but as a call to action for risk actuaries.
In a world of climate volatility, AI disruption, and interconnected financial systems, actuaries are evolving beyond quantifying known risks. Our role now includes navigating uncertainty gaps — recognising what we don’t know and advising stakeholders how to act despite incomplete information.
Like AI systems that thrive by quantifying uncertainty, actuaries have always turned ambiguity into insight. Now, by merging our domain expertise with modern tools, we can equip the profession to meet tomorrow’s challenges.