
Guillermo Franco, Head of Catastrophe Risk Research - EMEA.
A degree of uncertainty in catastrophe model results is to be expected. It is not uncommon, however, for models to produce results that differ by a factor of several. To assess how much of this uncertainty is epistemic, that is, due to our incomplete knowledge of the physical phenomena involved, the uncertainty must first be quantified.
It is sometimes suggested that this quantification can be achieved simply through sensitivity testing: altering the model parameters and tracking the changes in results. Before any such test is meaningful, however, the valid and scientifically plausible range of each parameter, and the relative likelihood of values within that range, must be established.
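The mechanics of such a sensitivity test can be sketched in a few lines. The example below is a deliberately toy illustration, not any vendor's model: the vulnerability function, the shape parameter `v_half`, the hazard sample and the exposure figure are all invented for demonstration.

```python
import random

# Hypothetical toy vulnerability function mapping peak gust wind speed (m/s)
# to a mean damage ratio, with one shape parameter `v_half` (the speed at
# which half the building value is lost). All names and values are invented.
def damage_ratio(wind_speed, v_half):
    return wind_speed**2 / (wind_speed**2 + v_half**2)

def portfolio_loss(wind_speeds, exposure, v_half):
    """Expected loss over a set of simulated wind-speed observations."""
    mean_dr = sum(damage_ratio(w, v_half) for w in wind_speeds) / len(wind_speeds)
    return mean_dr * exposure

random.seed(0)
wind_speeds = [random.gauss(35.0, 8.0) for _ in range(10_000)]  # synthetic hazard sample
exposure = 100e6  # total insured value, arbitrary

# Sensitivity test: sweep v_half over a (here, assumed) plausible range
# and track how the expected loss responds.
losses = {v: portfolio_loss(wind_speeds, exposure, v) for v in (40.0, 50.0, 60.0)}
for v, loss in sorted(losses.items()):
    print(f"v_half={v:5.1f} m/s -> expected loss {loss/1e6:6.1f}M")
```

The sweep itself is trivial; the substantive, and much harder, question is whether 40 to 60 m/s is in fact the scientifically defensible range for the parameter, which is precisely the research effort discussed above.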
This requires a significant research effort. Consider, for instance, the vulnerability assumptions for wind perils. A survey of the research (1) suggests that the overall body of knowledge available to characterize the damageability of buildings under wind loads is relatively slim. Yet wind models exist for dozens of countries, and each makes assumptions about the behavior of hundreds of building classes representing varied combinations of structural features. A great amount of uncertainty can therefore be expected. Whether each model assumption is reasonable, however, must be assessed against existing unbiased estimates, data and theories.
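One simple way to anchor an assumption on data is to compare a modeled damage curve against observed damage ratios and summarize the discrepancy. The sketch below is hypothetical: both the curve and the "observed" points are synthetic stand-ins for the kind of claims or post-event survey data such a comparison would actually use.

```python
# Hypothetical modeled damage curve (same toy functional form as before);
# v_half=50.0 is an assumed, illustrative calibration.
def modeled_damage(wind_speed, v_half=50.0):
    return wind_speed**2 / (wind_speed**2 + v_half**2)

# Synthetic "observed" pairs of (wind speed in m/s, damage ratio),
# standing in for empirical loss data. Entirely invented.
observations = [(20, 0.10), (30, 0.22), (40, 0.35), (50, 0.55), (60, 0.62)]

# Mean absolute error between model and observations: a minimal
# discrepancy metric for judging whether the assumption is tenable.
mae = sum(abs(modeled_damage(w) - d) for w, d in observations) / len(observations)
print(f"mean absolute error: {mae:.3f}")
```

In practice the comparison would involve uncertainty bands around both the model and the data rather than a single error number, but the principle is the same: the assumption is judged against something external to the model.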
Carrying out this type of comparison is challenging for two reasons: first, not all models expose their assumptions transparently, and second, datasets against which to compare them are difficult to find. Model vendors suggest that their new platforms will increase and encourage transparency, which should progressively address the first challenge. The second has so far been left largely to the market to handle; at Guy Carpenter, the Scientific Appraisal component of the Model Suitability Analysis (MSA)SM initiative is focused on this objective.
As new models enter the (re)insurance market through traditional or new providers, a critical evaluation of their embedded assumptions becomes ever more important. To justify a model choice or a recalibration, it will be increasingly necessary to rely on quantitative scientific comparisons.
Note:
1. “A Critical Comparison of Windstorm Vulnerability Models with Application to Extra-Tropical Cyclones in Northern Europe” by M. Lopeman, G. Deodatis, and G. Franco, presented at the 11th International Conference on Structural Safety and Reliability (ICOSSAR), New York, June 2013.