
Upgrading disaster risk models: Beyond logic trees


Traditional disaster and climate risk models rely on logic trees and deterministic scenarios that struggle to capture cascading failures, feedback loops and deep uncertainty. As climate change intensifies compound events and infrastructure systems grow more interconnected, these limitations have real consequences for insurers, governments, and engineers. We argue for a paradigm shift: combining Bayesian inference with causal hypergraphs to create risk models that learn from data, represent multi-causal processes and make the transition from localised to systemic loss explicit and quantifiable.

When a hurricane makes landfall, the damage rarely follows a single, predictable chain of events. Floodwaters surge through streets while high winds tear at rooftops. Power substations fail, cutting electricity to pumping stations. Those stations can no longer drain floodwater. Debris from collapsed buildings blocks drainage channels, making flooding worse. Each of these failures feeds back into the others, creating a cascade that no single hazard scenario could have anticipated.

Yet the risk models that insurers, governments, and engineers rely on were largely designed for a simpler world. Most still use logic trees, branching diagrams that assign probabilities to a handful of discrete scenarios and calculate expected losses by summing across them. These tools have served the industry well for decades, but they were built to handle uncertainty in isolated hazards, not the interconnected, compounding failures that define modern catastrophic events.

Why logic trees are limited

A logic tree is essentially a decision diagram. For a flood risk assessment, it might divide into three scenarios (one-metre, two-metre, and three-metre floods), each assigned a probability by expert judgment (Figure 1). Each scenario feeds into a fixed depth-damage curve that translates water depth into dollar losses. The expected loss is the weighted average across branches.
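The arithmetic behind this approach is simple enough to sketch in a few lines. The branch weights and losses below are the illustrative values from Figure 1, not calibrated figures:

```python
# Logic-tree expected loss: weighted average over discrete scenarios.
# Values are the illustrative ones from Figure 1.
scenarios = {
    "1 m flood": (0.5, 2_000_000),   # (branch weight, loss from fixed depth-damage curve)
    "2 m flood": (0.3, 8_000_000),
    "3 m flood": (0.2, 15_000_000),
}

expected_loss = sum(w * loss for w, loss in scenarios.values())
print(f"Expected loss: ${expected_loss:,.0f}")  # Expected loss: $6,400,000
```

Everything the model can say about the flood is compressed into three depth values and three expert-assigned weights, which is precisely the limitation discussed next.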

This approach has three critical limitations:

  1. It reduces complex hazards to a single metric. A flood is not just water depth: velocity, duration, debris load, and contamination all determine actual damage.
  2. Logic trees assume branches are independent, yet extreme events are full of dependencies.
  3. Logic trees do not learn. Branch weights are set by experts at the outset and rarely updated when events reveal that certain failure pathways were more likely than assumed.

A flowchart titled "Panel A: Classical Logic Tree" showing a flood hazard risk assessment. Three branches extend from a central "Flood Hazard" node, representing flood depths of one metre (probability weight 0.5), two metres (weight 0.3), and three metres (weight 0.2). All three branches feed into a single fixed depth-damage curve, producing estimated losses of $2 million, $8 million, and $15 million respectively. A footer notes: three scenarios, single intensity measure, no feedback.

Figure 1: Classical logic tree approach to flood risk assessment. Uncertainty is represented through discrete scenarios with expert-assigned branch weights (w), using a single intensity measure (flood depth).

What Bayesian models offer

Bayesian statistics treat probability not as a fixed property of the world, but as a measure of belief that updates as new evidence arrives (Figure 2). Applied to risk modelling, damage estimates, hazard parameters and vulnerability relationships all become living quantities that improve with each new observation. Expert judgment becomes a prior belief, which can be tested against data, refined and documented.
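As a minimal sketch of this updating step, consider a fragility parameter theta, the probability that a building is severely damaged at a given flood depth, with a conjugate Beta prior. The prior parameters and survey counts below are invented for illustration:

```python
# Bayesian updating of a fragility parameter (Beta-Binomial conjugacy).
# Expert judgment sets the prior; post-event survey data update it.
# All numbers are illustrative assumptions, not calibrated values.
prior_alpha, prior_beta = 2, 8           # expert prior: mean = 2/10 = 0.20

damaged, surveyed = 16, 40               # hypothetical post-event survey

# Conjugate update: add successes and failures to the prior counts.
post_alpha = prior_alpha + damaged
post_beta = prior_beta + (surveyed - damaged)

prior_mean = prior_alpha / (prior_alpha + prior_beta)
post_mean = post_alpha / (post_alpha + post_beta)
print(f"prior mean {prior_mean:.2f} -> posterior mean {post_mean:.2f}")
# prior mean 0.20 -> posterior mean 0.36
```

The expert's judgment is not discarded; it is weighed against the evidence, and the posterior documents exactly how far the data moved it.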

The result is a model that blends practitioner intuition with empirical rigour. This integration addresses a long-standing tension in risk assessment: the disconnect between formal methods and the tacit knowledge that experienced practitioners rely upon (Figure 3).

A flowchart titled "Panel B: Bayesian Network" showing a hierarchical probabilistic model with learning. Three hazard variables — depth (D), velocity (V), and duration (T) — connect via arrows to three exposure nodes (E1, E2, E3), which in turn connect to two fragility nodes (F1, F2), converging on a single loss node (L). A separate inset box illustrates the Bayesian updating process: prior belief (theta) updated by data to produce a posterior estimate (theta prime).

Figure 2: Bayesian network for flood risk. Hazard variables (D: depth, V: velocity, T: duration) propagate through exposure and fragility states to loss. Parameters update via Bayesian inference as evidence accumulates.

Three Venn diagrams illustrating relationships between science, practice, and mathematics in risk assessment. Diagram A, labelled "The Disconnect," shows three separate, non-overlapping circles. Diagram B, labelled "Naive Empiricism," shows science and mathematics overlapping, with practice only partially connected. Diagram C, labelled "Rigorous Integration," shows all three circles overlapping equally, with a shared central region highlighted in green. A legend identifies science as empirical observation, practice as heuristics and tacit knowledge, and mathematics as formal methods and probability.

Figure 3: Three views of science, practice, and mathematical formalism in risk assessment. Adapted from Taleb (2020).

Beyond pairwise relationships: Causal hypergraphs

Even Bayesian networks have a structural limitation: they connect variables in pairs. But many real-world processes involve multiple variables acting jointly in ways that cannot be decomposed into pairwise links. When a flood combines high velocity, extended duration and debris loading to simultaneously cause foundation scour and structural degradation, that is a fundamentally multi-causal process. This is where causal hypergraphs enter the picture.

Drawing on Wolfram's work in computational physics, we adopt hypergraphs, in which a single hyperedge can link many variables at once. Crucially, probabilities emerge from the structure of causal connections: the more pathways leading to an outcome, the more likely that outcome becomes (Figure 4).
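One way to make "probability from path density" concrete is to count causal path combinations in a toy, acyclic simplification of Figure 4 (feedback loops are omitted so the count stays finite). The structure and counting rule below are illustrative assumptions, not the authors' formal method:

```python
# Toy causal hypergraph: each hyperedge joins a SET of causes jointly
# to one effect. We count distinct causal path combinations reaching
# each node; more converging paths -> more likely outcome.
from functools import lru_cache
from math import prod

HYPEREDGES = [
    (("hazard",), "effect_A"),
    (("hazard",), "effect_B"),
    (("hazard",), "effect_C"),
    (("effect_A", "effect_B"), "cascade"),  # multi-causal: A AND B jointly
    (("effect_C",), "amplified"),
    (("cascade",), "amplified"),            # cascade feeds amplification
    (("cascade",), "systemic_loss"),
    (("amplified",), "systemic_loss"),
]

@lru_cache(maxsize=None)
def path_count(node: str) -> int:
    """Distinct causal path combinations from 'hazard' to node."""
    if node == "hazard":
        return 1
    incoming = [causes for causes, effect in HYPEREDGES if effect == node]
    # A multi-cause hyperedge needs every cause: multiply their counts.
    return sum(prod(path_count(c) for c in causes) for causes in incoming)

print(path_count("systemic_loss"))  # 3 paths converge on systemic loss
```

Adding an edge anywhere upstream raises the count at systemic loss, which is the sense in which severity "emerges" from structure rather than from expert-assigned weights.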

A flowchart titled "Causal Hypergraph Concept" illustrating how feedback loops create emergent path density. A primary hazard node branches into three effects (A, B, and C). Effects A and B converge on a cascade node; Effect C connects to an amplified state node. Feedback arrows (dashed gold) run between the cascade node and amplified state, while a self-reinforcing loop (dotted red) curves back to the cascade node. Both nodes feed down to a systemic loss node. A legend distinguishes causal, feedback, amplification, and self-reinforcing connections. A footer notes: seven nodes, 12 edges, two feedback loops.

Figure 4: Causal hypergraph concept. A primary hazard triggers multiple effects that converge through cascade nodes to systemic loss. Feedback loops (dashed) and self-reinforcing mechanisms (dotted) create additional causal paths, amplifying the probability of severe outcomes. Unlike logic trees, probability emerges from path density rather than expert-assigned weights.

A complex flowchart titled "Panel C: Causal Hypergraph" showing cascading dam failure with multi-causal processes and emergent probability. An earthquake node at the top branches into four immediate effects: crack, landslide, spillway damage, and slope failure. These trigger a chain of interconnected events including wave generation, reservoir disturbance, overtopping, piping, foundation damage, and dam breach. Downstream effects include flooding, downstream dam impact, community impact, contamination, recovery delay, control loss, and blind operations. Gold dashed feedback arrows and red dashed self-reinforcing loops connect multiple nodes. All pathways converge on a systemic loss node (lambda) at the bottom. A legend distinguishes causal, feedback, and self-reinforcing connections.

Figure 5: Causal hypergraph showing cascading dam failure with feedback loops converging to systemic loss (Λ).

A practical example

Consider a large hydropower dam in a seismically active region (Figure 5). A classical risk model might assess earthquake and flood hazards independently with expert-assigned branch weights. A Bayesian approach improves on this by encoding dependencies and learning from observations. A causal hypergraph reveals what both approaches miss: how an earthquake simultaneously triggers dam cracking, reservoir slope failure, and spillway damage; how the resulting landslide generates an impulse wave; how overtopping flow exploits seismic cracks through self-reinforcing piping; and how dam breach propagates through the power grid, creating feedback between infrastructure failure and loss of control. The probability of catastrophic failure emerges from the density of causal paths converging through these feedback loops.
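A back-of-envelope comparison shows why the common-cause structure matters here. The probabilities below are invented purely for illustration, not calibrated to any real dam:

```python
# Illustrative only: invented probabilities, not calibrated values.
p_crack = 0.05           # P(seismic dam cracking) over the period
p_slope = 0.05           # P(reservoir slope failure) over the period

# Classical treatment: the two hazards assessed independently.
p_both_independent = p_crack * p_slope                # 0.0025

# Earthquake as a shared cause: slope failure is far more likely
# once cracking has occurred (assumed conditional probability).
p_slope_given_crack = 0.6
p_both_common_cause = p_crack * p_slope_given_crack   # 0.03

print(f"independent: {p_both_independent:.4f}, "
      f"common cause: {p_both_common_cause:.4f}")     # 12x higher
```

The twelvefold gap comes entirely from the dependence structure; the marginal probabilities are identical in both calculations.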

Why this matters now

Recent research on U.S. flood insurance data has revealed that a small number of large, clustered events account for over half of all historic insurance payouts. Without these "hyperclusters," the National Flood Insurance Program would actually be solvent under current premiums. This finding underscores a critical gap: current risk models struggle to distinguish between dispersed, manageable losses and the concentrated, correlated events that threaten financial stability. As climate change intensifies compound events, we need models that can represent cascading failures, learn from new data, and make the transition from localised to systemic loss explicit and quantifiable.

Not an 'all-or-nothing' proposition

We are not arguing that logic trees should be abandoned. For well-bounded problems where a single intensity measure dominates and components fail independently, they remain adequate. But when cascading effects, multi-hazard interactions, and long-range dependencies enter the picture, the simple model becomes a brittle approximation requiring ad hoc patches. The tools to make this shift are increasingly available: physics-informed neural networks, hypergraph neural networks, and Bayesian inference.

The challenge is less about technical feasibility and more about whether the risk modelling community is ready to adopt them.

This article is adapted from "Velasco-Reyes, Erick and Pui, Alexander, Rethinking Uncertainty: Why Disaster and Climate Risk Models Must Move Beyond Logic Trees (January 21, 2026)". Available at SSRN: https://ssrn.com/abstract=6142049 or http://dx.doi.org/10.2139/ssrn.6142049

References

Karniadakis, G. E., et al. (2023). Physics-informed machine learning. Nature Reviews Physics, 5, 6–23.

Li, X., et al. (2023). DisasterNet: A causal Bayesian network approach for modelling cascading failures. Reliability Engineering & System Safety, 229, 108871.

Nayak, A., et al. (2025). Catastrophic "hyperclustering" and recurrent losses. npj Natural Hazards, 2, 83.

Salvaña, M. L., et al. (2025). A multi-hazard Bayesian hierarchical model for urban damage prediction. Natural Hazards, in press.

Sarhadi, A., et al. (2016). Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula. Water Resources Research, 52(11).

Taleb, N. N. (2020). Statistical Consequences of Fat Tails. STEM Academic Press.

Wolfram, S. (2020). Finally we may have a path to the fundamental theory of physics. Stephen Wolfram Writings. 

About the authors
Erick R. Velasco-Reyes
Erick R. Velasco-Reyes, Ph.D. is a Postdoctoral Scholar at Oregon State University’s College of Engineering. His research centers on coastal hazard modeling, risk and resilience assessment, and the cascading impacts of extreme events such as tsunamis, hurricanes, and floods. He specializes in hydrodynamic, infrastructure-aware, and sediment transport simulations, as well as probabilistic risk modeling, with applications that connect science, policy, and insurance.
Alexander Pui, Senior Vice President, Climate Advisory at Marsh
Alex is currently Senior Vice President Climate Advisory at Marsh based in Tokyo. He is also Adjunct Fellow at the Climate Change Research Center (CCRC) at the University of New South Wales (UNSW), and Visiting Scholar to Kyushu University. Alex has significant international experience and has held senior roles across the banking and (re)insurance sector, including Head of Group Climate Analytics at the Commonwealth Bank of Australia (Sydney), and Head of Nat Cat and Sustainability (APAC) at Swiss Re (Singapore, Tokyo). He was awarded Risk Leader of the Year (2022) by the Risk Management Institute of Australia (RMIA) and is a recognised thought leader within the financial climate risk space. He is also a frequent contributor to Actuaries Digital and The Japan Times.
