Traditional disaster and climate risk models rely on logic trees and deterministic scenarios that struggle to capture cascading failures, feedback loops and deep uncertainty. As climate change intensifies compound events and infrastructure systems grow more interconnected, these limitations have real consequences for insurers, governments, and engineers. We argue for a paradigm shift: combining Bayesian inference with causal hypergraphs to create risk models that learn from data, represent multi-causal processes and make the transition from localised to systemic loss explicit and quantifiable.
When a hurricane makes landfall, the damage rarely follows a single, predictable chain of events. Floodwaters surge through streets while high winds tear at rooftops. Power substations fail, cutting electricity to pumping stations. Those stations can no longer drain floodwater. Debris from collapsed buildings blocks drainage channels, making flooding worse. Each of these failures feeds back into the others, creating a cascade that no single hazard scenario could have anticipated.
Yet the risk models that insurers, governments, and engineers rely on were largely designed for a simpler world. Most still use logic trees, branching diagrams that assign probabilities to a handful of discrete scenarios and calculate expected losses by summing across them. These tools have served the industry well for decades, but they were built to handle uncertainty in isolated hazards, not the interconnected, compounding failures that define modern catastrophic events.
A logic tree is essentially a decision diagram. For a flood risk assessment, it might divide into three scenarios (one-metre, two-metre, and three-metre floods), each assigned a probability by expert judgment (Figure 1). Each scenario feeds into a fixed depth-damage curve that translates water depth into dollar losses. The expected loss is the weighted average across branches.
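To make the mechanics concrete, here is a minimal sketch of that calculation in Python. The scenario depths, branch weights and the linear depth-damage curve are illustrative assumptions, not values taken from Figure 1.

```python
# A minimal sketch of the logic-tree expected-loss calculation.
# Depths, weights, the damage curve and the asset value are all
# invented for illustration.

scenarios = [
    {"depth_m": 1.0, "weight": 0.60},
    {"depth_m": 2.0, "weight": 0.30},
    {"depth_m": 3.0, "weight": 0.10},
]

def depth_damage(depth_m: float) -> float:
    """Fixed depth-damage curve: fraction of asset value lost at a given depth."""
    return min(1.0, 0.25 * depth_m)  # hypothetical linear curve, capped at total loss

ASSET_VALUE = 1_000_000  # dollars, illustrative

# Expected loss is the weighted average across branches.
expected_loss = sum(
    s["weight"] * depth_damage(s["depth_m"]) * ASSET_VALUE for s in scenarios
)
print(f"Expected loss: ${expected_loss:,.0f}")  # -> $375,000
```

Note that everything here is static: the weights and the curve are fixed at assessment time and never revised, which is precisely where the trouble begins.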
This approach has three critical limitations:
- The branch weights are fixed expert judgments; once assigned, they never update as new evidence arrives.
- A single intensity measure (flood depth) stands in for a multi-dimensional hazard, ignoring velocity, duration and debris loading.
- The branches are discrete, independent scenarios, so cascading failures and interactions between hazards cannot be represented.
Figure 1: Classical logic tree approach to flood risk assessment. Uncertainty is represented through discrete scenarios with expert-assigned branch weights (w), using a single intensity measure (flood depth).
Bayesian statistics treats probability not as a fixed property of the world, but as a measure of belief that updates as new evidence arrives (Figure 2). Applied to risk modelling, damage estimates, hazard parameters and vulnerability relationships all become living quantities that improve with each new observation. Expert judgment becomes a prior belief, which can be tested against data, refined and documented.
The result is a model that blends practitioner intuition with empirical rigour. This integration addresses a long-standing tension in risk assessment: the disconnect between formal methods and the tacit knowledge that experienced practitioners rely upon (Figure 3).
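A minimal sketch of this updating loop, assuming a conjugate Beta-Binomial model for a single fragility parameter; all numbers are invented for illustration.

```python
# Bayesian updating of one fragility parameter under an assumed
# conjugate Beta-Binomial model. All values are illustrative.

# Expert judgment as a prior: "roughly 30% of buildings flooded to 1 m
# suffer major damage", held with moderate confidence.
alpha_prior, beta_prior = 3.0, 7.0  # Beta(3, 7) has mean 0.30

# New evidence from post-event inspections: 42 of 100 inspected
# buildings flooded to ~1 m showed major damage.
damaged, inspected = 42, 100

# Conjugate update: the posterior is again a Beta distribution.
alpha_post = alpha_prior + damaged
beta_post = beta_prior + (inspected - damaged)

prior_mean = alpha_prior / (alpha_prior + beta_prior)
post_mean = alpha_post / (alpha_post + beta_post)
print(f"Prior mean damage probability:     {prior_mean:.3f}")  # 0.300
print(f"Posterior mean damage probability: {post_mean:.3f}")   # 0.409
```

The expert's prior is not discarded; it is weighed against the evidence, and the posterior records exactly how far the data moved it.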
Figure 2: Bayesian network for flood risk. Hazard variables (D: depth, V: velocity, T: duration) propagate through exposure and fragility states to loss. Parameters update via Bayesian inference as evidence accumulates.
Figure 3: Three views of science, practice, and mathematical formalism in risk assessment. Adapted from Taleb (2020).
Even Bayesian networks have a structural limitation: their edges connect variables in pairs. But many real-world processes involve multiple variables acting jointly in ways that cannot be decomposed into pairwise links. When a flood combines high velocity, extended duration and debris loading to simultaneously cause foundation scour and structural degradation, that is a fundamentally multi-causal process. This is where causal hypergraphs enter the picture.
A hypergraph, a structure drawn from Wolfram's work on computational physics (Wolfram, 2020), allows a single hyperedge to link many variables at once. Crucially, probabilities emerge from the structure of causal connections: the more pathways leading to an outcome, the more likely that outcome becomes (Figure 4).
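The sketch below shows one way to encode the idea, with hypothetical node names echoing the flood example above. Each hyperedge maps a set of joint causes to an effect, and the likelihood of an outcome grows with the number of causal routes converging on it.

```python
# A toy causal hypergraph: unlike a Bayesian network's pairwise edges,
# each hyperedge links a *set* of joint causes to an effect.
# Node names and structure are illustrative assumptions.

from functools import lru_cache
from math import prod

# Hyperedges: (frozenset of joint causes, effect).
HYPEREDGES = [
    (frozenset({"flood"}), "high_velocity"),
    (frozenset({"flood"}), "long_duration"),
    (frozenset({"flood"}), "debris_loading"),
    # A genuinely multi-causal link: three hazard attributes acting jointly.
    (frozenset({"high_velocity", "long_duration", "debris_loading"}), "foundation_scour"),
    (frozenset({"high_velocity", "debris_loading"}), "structural_degradation"),
    (frozenset({"foundation_scour"}), "systemic_loss"),
    (frozenset({"structural_degradation"}), "systemic_loss"),
]

@lru_cache(maxsize=None)
def route_count(node: str) -> int:
    """Count the distinct causal routes activating `node` from the flood.

    A joint-cause hyperedge needs *all* of its causes, so it contributes
    the product of its causes' own route counts.
    """
    if node == "flood":  # the primary hazard
        return 1
    return sum(
        prod(route_count(c) for c in causes)
        for causes, effect in HYPEREDGES
        if effect == node
    )

# Path density: two distinct causal routes converge on systemic loss.
print(route_count("systemic_loss"))  # -> 2
```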
Figure 4: Causal hypergraph concept. A primary hazard triggers multiple effects that converge through cascade nodes to systemic loss. Feedback loops (dashed) and self-reinforcing mechanisms (dotted) create additional causal paths, amplifying the probability of severe outcomes. Unlike logic trees, probability emerges from path density rather than expert-assigned weights.
Figure 5: Causal hypergraph showing cascading dam failure with feedback loops converging to systemic loss (Λ).
Consider a large hydropower dam in a seismically active region (Figure 5). A classical risk model might assess earthquake and flood hazards independently with expert-assigned branch weights. A Bayesian approach improves on this by encoding dependencies and learning from observations. A causal hypergraph reveals what both approaches miss: how an earthquake simultaneously triggers dam cracking, reservoir slope failure, and spillway damage; how the resulting landslide generates an impulse wave; how overtopping flow exploits seismic cracks through self-reinforcing piping; and how dam breach propagates through the power grid, creating feedback between infrastructure failure and loss of control. The probability of catastrophic failure emerges from the density of causal paths converging through these feedback loops.
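As a rough illustration of how such a model can be exercised, the Monte Carlo sketch below propagates an earthquake through a toy version of the Figure 5 cascade. Every node name and conditional probability is an assumption chosen for illustration, not a value from the article or the figure.

```python
# Toy Monte Carlo over a dam-cascade hypergraph. Each hyperedge may fire
# once all of its joint causes are active; iterating to a fixed point
# lets feedback between breach and grid failure open extra causal paths.

import random

# (joint causes, effect, conditional probability of firing) - all assumed.
EDGES = [
    ({"earthquake"}, "dam_cracking", 0.30),
    ({"earthquake"}, "slope_failure", 0.20),
    ({"earthquake"}, "spillway_damage", 0.15),
    ({"slope_failure"}, "impulse_wave", 0.70),
    # Multi-causal: overtopping flow exploits seismic cracks (piping).
    ({"impulse_wave", "dam_cracking"}, "piping", 0.60),
    ({"piping"}, "dam_breach", 0.50),
    ({"spillway_damage"}, "grid_failure", 0.40),
    ({"dam_breach"}, "grid_failure", 0.90),       # breach propagates to the grid
    ({"grid_failure"}, "loss_of_control", 0.80),  # feedback: no power, no control
    ({"loss_of_control", "piping"}, "dam_breach", 0.70),  # second route to breach
    ({"dam_breach"}, "systemic_loss", 0.95),
]

def p_systemic_loss(trials: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        active, fired = {"earthquake"}, set()
        changed = True
        while changed:  # fixed point: feedback enables edges in later passes
            changed = False
            for i, (causes, effect, p) in enumerate(EDGES):
                if i not in fired and causes <= active:
                    fired.add(i)  # each edge rolls at most once per trial
                    if rng.random() < p and effect not in active:
                        active.add(effect)
                        changed = True
        hits += "systemic_loss" in active
    return hits / trials

print(f"P(systemic loss | earthquake) ~ {p_systemic_loss():.4f}")
```

Deleting the feedback edges and rerunning the simulation shows how much of the failure probability the loops themselves contribute, which is exactly the quantity a logic tree cannot expose.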
Recent research on U.S. flood insurance data has revealed that a small number of large, clustered events account for over half of all historic insurance payouts (Nayak et al., 2025). Without these "hyperclusters," the National Flood Insurance Program would actually be solvent under current premiums. This finding underscores a critical gap: current risk models struggle to distinguish between dispersed, manageable losses and the concentrated, correlated events that threaten financial stability. As climate change intensifies compound events, we need models that can represent cascading failures, learn from new data, and make the transition from localised to systemic loss explicit and quantifiable.
We are not arguing that logic trees should be abandoned. For well-bounded problems where a single intensity measure dominates and components fail independently, they remain adequate. But when cascading effects, multi-hazard interactions, and long-range dependencies enter the picture, the simple model becomes a brittle approximation requiring ad hoc patches. The tools to make this shift are increasingly available: physics-informed neural networks, hypergraph neural networks, and Bayesian inference.
The challenge is less about technical feasibility and more about whether the risk modelling community is ready to adopt them.
This article is adapted from Velasco-Reyes, Erick and Pui, Alexander, "Rethinking Uncertainty: Why Disaster and Climate Risk Models Must Move Beyond Logic Trees" (January 21, 2026). Available at SSRN: https://ssrn.com/abstract=6142049 or http://dx.doi.org/10.2139/ssrn.6142049
References
Karniadakis, G. E., et al. (2023). Physics-informed machine learning. Nature Reviews Physics, 5, 6–23.
Li, X., et al. (2023). DisasterNet: A causal Bayesian network approach for modelling cascading failures. Reliability Engineering & System Safety, 229, 108871.
Nayak, A., et al. (2025). Catastrophic "hyperclustering" and recurrent losses. npj Natural Hazards, 2, 83.
Salvaña, M. L., et al. (2025). A multi-hazard Bayesian hierarchical model for urban damage prediction. Natural Hazards, in press.
Sarhadi, A., et al. (2016). Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula. Water Resources Research, 52(11).
Taleb, N. N. (2020). Statistical Consequences of Fat Tails. STEM Academic Press.
Wolfram, S. (2020). Finally we may have a path to the fundamental theory of physics. Stephen Wolfram Writings.