Statistical Limitations of Catastrophe Models
CAS Limited Attendance Seminar
New York, NY
18 September 2006
Timothy Aman, FCAS MAAA
Managing Director, Guy Carpenter Miami
Introduction
• Given the limited Atlantic hurricane sample size, speakers discuss the limitations of predictive modeling from three perspectives:
  – A frequentist (broker) approach using bootstrapping techniques
  – A Bayesian (modeler) approach incorporating new events into a prior assumption framework
  – A practical (insurer) approach reconciling the politics of actual claims experience with model-based expectations
Introduction
• When cat models first came out, loss estimates at various return periods AND upper confidence bounds around those loss estimates were regularly shown as output
• Over the course of time, fewer and fewer output summaries have focused on confidence bounds and uncertainty
• This panel attempts to remind us of the magnitude of that uncertainty, from various perspectives
Outline
• Definitions
• A frequentist approach
• An update
• Statistical limitations of cat models
Definitions
• Frequentist: One who believes that the probability of an event should be defined as the limit of its relative frequency in a large number of trials
  – Probabilities can be assigned only to events
  – Need a well-defined random experiment and sample space
• Bayesian: Probability can be defined as the degree to which a person believes a proposition
  – Probabilities can be applied to statements
  – Need a prior opinion (ideally, based on relevant knowledge)
Definitions
• A bootstrap sample is obtained by randomly sampling n times, with replacement, from the original data points [Efron]
• Bootstrap methods are computer-intensive methods of statistical analysis that use simulation to calculate standard errors, confidence intervals, and significance tests [Davison and Hinkley]
Definitions
• In statistics, bootstrapping is a method for estimating the sampling distribution of an estimator by resampling with replacement from the original sample
  – Most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter
• The bootstrap technique assumes that the observed dataset is a representative subset of potential outcomes from some underlying distribution
  – Random subsamples from the observed dataset are themselves representative subsets of potential outcomes
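A minimal sketch of the percentile-bootstrap idea in Python (illustrative only; the estimator, resample count, and confidence level are assumptions, not from the slides):

```python
import numpy as np

def bootstrap_ci(data, estimator=np.mean, n_boot=10_000, level=0.90, seed=0):
    """Percentile bootstrap confidence interval for an estimator.

    Resamples the observed data with replacement, recomputes the
    estimator on each resample, and reads the interval off the
    empirical distribution of those estimates.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    # Each bootstrap sample: n draws, with replacement, from the original data
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    estimates = estimator(data[idx], axis=1)
    alpha = (1.0 - level) / 2.0
    return np.percentile(estimates, [100 * alpha, 100 * (1 - alpha)])

# Example: 90% CI for the mean of a small, made-up loss sample
losses = [1.2, 0.4, 7.9, 2.3, 0.8, 15.0, 3.1, 0.2]
print(bootstrap_ci(losses))
```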
A frequentist approach
• David Miller: "Uncertainty in Hurricane Risk Modeling and Implications for Securitization" (Guy Carpenter, 1998)
  – CAS Forum 1999, Securitization of Risk
• David Miller's "thought experiment"
  – Create multiple catastrophe simulation models, each based on a simulated historical event set
A frequentist approach
• Miller's approach
  – Frequency is the historical number of hurricanes over the time period
    ▪ Assume Poisson distributed
  – Conditional severity is based on the bootstrap technique
    ▪ Assume a stationary climate
    ▪ Each bootstrap replication represents an equivalent realization of the historical record, and consists of a random draw, with replacement, of N hurricanes from the observed record
    ▪ Confidence intervals can then be determined from the bootstrap replications
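A rough sketch of one such bootstrap replication under Miller's stated assumptions (Python; the record length, rate, and loss figures below are placeholders, not Miller's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed record: N hurricane losses over T years of history
observed_losses = np.array([0.3, 1.1, 0.2, 5.4, 0.8, 2.7, 0.1, 9.6])
T = 100                          # placeholder record length, years
lam = len(observed_losses) / T   # Poisson annual frequency from the record

def bootstrap_replication(rng):
    """One 'equivalent realization' of the historical record: a random
    draw, with replacement, of N hurricanes from the observed record."""
    return rng.choice(observed_losses, size=len(observed_losses), replace=True)

# Each replication stands in for an alternative history from which a
# catastrophe model could have been calibrated
replications = [bootstrap_replication(rng) for _ in range(1_000)]
```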
A frequentist approach
• Miller's approach
  – Essentially, each bootstrap replication represents a new catastrophe simulation model, created as if the replicated event set, rather than the actual one, had been the observed historical event set
  – "Blended" approach
    ▪ Severity distribution is calculated using a given catastrophe model
    ▪ This severity distribution is fit to a parametric model (Beta distribution)
    ▪ A new parametric severity distribution is fit for each bootstrap replication
    ▪ Use the fitted parametric distribution for severity
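A minimal sketch of the Beta-fit step, assuming severities are first normalized to [0, 1] (e.g., as a fraction of insured value; the normalization and sample data are assumptions, not Miller's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical model severities, normalized to [0, 1]
severities = rng.beta(0.5, 8.0, size=200)

def fit_beta_severity(sample):
    """Fit Beta(a, b) on [0, 1] to one bootstrap replication of severities."""
    a, b, _, _ = stats.beta.fit(sample, floc=0, fscale=1)  # loc/scale held fixed
    return a, b

# Refit the parametric severity distribution for each bootstrap replication
params = [
    fit_beta_severity(rng.choice(severities, size=len(severities), replace=True))
    for _ in range(100)
]
```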
A frequentist approach
• Miller's conclusions for hurricane loss 90% confidence intervals for three US nationwide portfolios (personal, commercial, and specialty):
  – Low return periods (<10 years)
    ▪ Lower bound is 0
    ▪ Upper bound diverges (as a multiple of the mean)
  – Remote return periods (>80 years)
    ▪ Lower bound is 0.5 times the mean estimate
    ▪ Upper bound is 2.5 times the mean estimate
A frequentist approach
[Chart: relative confidence interval bounds, L(.95)/L̂ and L(.05)/L̂, plotted against return period (years)]
An update
• With the addition of more years of hurricane data, how have relative confidence intervals changed?
An update
• Suppose we want to estimate the "100-year loss" to a portfolio
• Suppose we have a reliable sample of 100 years of data
  – We might have seen a 100-year loss in the sample (63% of samples, assuming Poisson frequency)
  – We might not (37% of samples)
• Now suppose we have a reliable sample of 110 years of data
  – The above probabilities are revised to 67% and 33%
• …and so on…
• With a sample of 300 years, the probabilities are 95% and 5%
• With a sample of 450 years, the probabilities are 99% and 1%
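These figures follow directly from the Poisson assumption: with an expected one "100-year" event per 100 years, the chance of seeing at least one in n years is 1 − e^(−n/100). A quick check:

```python
import math

# P(at least one "100-year" event in n years), Poisson with rate 1/100 per year
for n in (100, 110, 300, 450):
    p = 1 - math.exp(-n / 100)
    print(f"{n} years: {p:.0%} of samples see one, {1 - p:.0%} do not")
# -> 63%/37%, 67%/33%, 95%/5%, 99%/1%
```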
An update
• Bootstrap from cat model output
  – Simulate datasets using cat model event sets
  – "Direct" approach
    ▪ Eliminates the need to specify, fit, and re-fit conditional severity distributions
  – Determine relative confidence intervals at various return periods
An update
• For a given return period n…
• Mean
  – Generate samples of n years each
  – Identify the largest annual loss in each sample
  – Take the average of that largest observation over all samples
• Confidence intervals
  – Capture, through repeated sampling, the distribution of the sample maxima underlying the above mean
  – Take the 5th and 95th percentiles of the maximum value across all samples
  – This yields a 90% confidence interval around the mean estimate
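A minimal sketch of this "direct" procedure, assuming we can draw simulated annual losses from a cat model's event set (the annual-loss sampler below is a placeholder for real model output):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_annual_loss(rng, lam=0.5):
    """Placeholder for one simulated year from a cat model event set:
    Poisson event count with lognormal severities (assumed choices)."""
    k = rng.poisson(lam)
    return rng.lognormal(mean=0.0, sigma=2.0, size=k).sum()

def return_period_estimate(n, n_samples=5_000):
    """Mean and 90% CI of the largest annual loss in an n-year sample."""
    maxima = np.empty(n_samples)
    for i in range(n_samples):
        maxima[i] = max(simulate_annual_loss(rng) for _ in range(n))
    lo, hi = np.percentile(maxima, [5, 95])
    return maxima.mean(), lo, hi

mean, lo, hi = return_period_estimate(100)
print(f"100-year loss estimate {mean:.1f}, 90% CI [{lo:.1f}, {hi:.1f}]")
```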
An update
[Charts: updated relative confidence intervals at various return periods]
An update
• Now a look at the 250-year level…
[Charts: relative confidence intervals at the 250-year level]
Statistical Limitations of Cat Models
• John Major: "Uncertainty in Catastrophe Models: Part I: What is it and where does it come from?" and "Part II: How bad is it?" (Guy Carpenter, 1999)
Statistical Limitations of Cat Models
• Sources of uncertainty in catastrophe modeling:
  1. Limited data sample
     ▪ For example, estimating 250-year EQ losses with only 100 years of detailed data
  2. Model specification error
     ▪ For example, Poisson frequency (iid assumption)
  3. Nonsampling error
     ▪ Identification of all relevant factors
     ▪ For example, global climate change
  4. Approximation error
     ▪ For example, limited simulations and discrete event sets
Statistical Limitations of Cat Models
• Cat models are collections of event scenarios
  – Discrete approximations, with probabilities attached to each scenario
  – Not exhaustive
  – Limited perils
  – Calibrated using historical experience
    ▪ Recalibrated as required, based on research and actual event experience
Worldwide Property Catastrophe Insured Losses
[Chart: annual insured catastrophe losses (USD millions), USA vs. Non-US, 1985–2005*. * Preliminary estimate. Source: Swiss Re Sigma]
Statistical Limitations of Cat Models
• Uncertainty factors due to limited sample size are substantial
• Data quality can add significantly to uncertainty
• Are we capturing all material factors?
• Scientific input can be used to reduce uncertainty
  – Hazard sciences (meteorology, seismology, vulcanology)
  – Engineering studies
Statistical Limitations of Cat Models
• Factors potentially influencing relative confidence interval widths:
  – Larger data sample / destabilizing recent experience
  – Improvements in science / weakening of the stationary climate assumption
  – Improvements in technology
  – Differences in modeled portfolios
  – Negative Binomial frequency (see the sketch after this list)
  – Increased awareness of factors contributing to uncertainty
• Further exploration of the general factors influencing relative confidence interval widths is material for another presentation
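The Negative Binomial point lends itself to a quick illustration: with the same mean annual frequency, an overdispersed Negative Binomial widens simulated intervals relative to Poisson. A hedged sketch, with every parameter value assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, years, trials = 0.5, 100, 5_000   # assumed annual rate, horizon, resamples

# Negative Binomial with the same mean as Poisson(lam): mean = r(1-p)/p = lam
r = 2.0
p = r / (r + lam)
counts_pois = rng.poisson(lam, size=(trials, years))
counts_nb = rng.negative_binomial(r, p, size=(trials, years))

for name, counts in [("Poisson", counts_pois), ("NegBinomial", counts_nb)]:
    totals = counts.sum(axis=1)        # event count per simulated 100-year sample
    lo, hi = np.percentile(totals, [5, 95])
    print(f"{name}: mean {totals.mean():.1f}, 90% interval [{lo:.0f}, {hi:.0f}]")
```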
Statistical Limitations of Cat Models
• Relative widths of individual company confidence intervals will depend on specifics:
  – Geographical scope
    ▪ e.g., US hurricane, Peru earthquake, UK flood
  – Insured portfolio
    ▪ e.g., dwellings, petrochemical facilities, hotels
  – Financial variables
    ▪ e.g., excess policies, EQ sublimits, business interruption
• Further exploration of the portfolio-specific factors influencing relative confidence interval widths is material for another presentation
Statistical Limitations of Cat Models
• "Don't believe the cat model point estimates too much, but don't believe them too little."