Three Components of a Premium
The simple pricing approach outlined in this module is the Return-on-Risk methodology. The sections
in the first part of the module describe the three components of a premium for this methodology:
• Expected Loss: The Expected Loss is defined as the arithmetic average payout of the
contract from the historical cleaned dataset for the station. This component will require an
contract from the historical cleaned dataset for the station. This component will require an
understanding of statistical concepts, such as the uncertainty in the mean, standard error in
the mean, and the Central Limit Theorem. The section will describe both the process used to
calculate Expected Loss and the process used to adjust for uncertainty to arrive at an Adjusted
Expected Loss. The section will also discuss the impact that the quality of underlying data
used to calculate expected loss will have on pricing.
• Probable Maximum Loss (PML): This is the maximum payout that is likely to occur. This
section explains three approaches to estimating the PML, based on the Historical Burn
Analysis, the Historical Distribution Analysis, and the Monte Carlo Simulation. The concept
of PML and the challenges in producing a PML estimate are also discussed.
• Administration and Business Expenses: The third component required for the
calculation of the premium of an insurance contract is the administrative and business
expenses incurred by the insurer to provide the weather insurance contracts. The calculation
of these expenses is described in this section.
The next section outlines the Return-on-Risk pricing approach itself, which depends on these three
components.
The methodology outlined in the first part of this module is for stand-alone (not portfolio) contract
pricing. It is recommended that insurers use a stand-alone approach until their portfolio stabilizes and
they develop a greater understanding of, and intuition about, their overall risk and business. The
second part of the module outlines a portfolio pricing approach.
The Return-on-Risk Approach to Pricing
Pricing must account both for events that occur on average and for events that have more severe
impacts. The potential losses from average events are represented in pricing by the expected loss
and the adjusted expected loss (EL and AEL). The Return-on-Risk (RoR) pricing approach
accommodates those risks whose occurrences are not captured simply by looking at the AEL.
Risk, in this case, is defined in terms of payouts in excess of the adjusted expected loss. The
probable maximum loss (PML; 1-in-100) is often used in pricing to represent this risk. PML values
must be established from the historical data estimates. The premium calculations for index-based
weather insurance should always be performed on cleaned and, where appropriate, detrended data
(as discussed in previous modules).
This RoR approach to pricing is proposed in several publications, such as World Bank (2005), ISMEA
(2006), and Henderson et al. (2002). It is recommended over other methodologies because it
considers both the expected loss and the tail risk, that is, the potential extreme negative deviations in
indemnities, as well as the associated capital charge. Other methodologies include those that use the
standard deviation, or multiples of the expected loss, for the risk loading.
The simple recommended premium calculation for a retail farmer weather insurance contract is
defined as follows:
Premium = AEL + α * ( PML (1-in-100)−AEL ) + Administrative & Business Expenses, where:
• AEL is the Adjusted Expected Loss: the expected loss adjusted by a data uncertainty factor
• PML(1-in-100) is the 1-in-100 year Probable Maximum Loss of the contract (i.e., the maximum
payout that is likely to occur once in 100 years)*
• α is the target Return-on-Risk (RoR), or Return-on-PML, assuming the insurer is required to reserve
capital against its portfolio at the PML(1-in-100) level
The target Return-on-Risk α is chosen by the risk-taker given its business imperatives and ambitions;
in practice, α can range from 5 percent to 20 percent. These values often also depend on the payout
size and frequency of a given transaction and on how that interacts with a risk-taker's portfolio and
risk appetite. Other risk metrics, such as the PML (1-in-250), could also be used, and the
methodology can easily be adapted if such a benchmark is chosen.
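To make the formula concrete, here is a minimal Python sketch of the premium calculation; all input
values are illustrative assumptions, not figures from this module.

```python
# Minimal sketch of the Return-on-Risk premium formula above.
# All numerical inputs are illustrative assumptions.

def ror_premium(ael: float, pml_100: float, alpha: float, expenses: float) -> float:
    """Premium = AEL + alpha * (PML(1-in-100) - AEL) + admin & business expenses."""
    return ael + alpha * (pml_100 - ael) + expenses

# Example: AEL = 6.0 and PML(1-in-100) = 40.0 (per 100 units of sum insured),
# a 15 percent target Return-on-Risk, and 2.0 of expenses.
print(ror_premium(ael=6.0, pml_100=40.0, alpha=0.15, expenses=2.0))  # 13.1
```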
* This is the same as the contract Value-at-Risk (VaR) at the 99 percent confidence level (i.e., the
loss that will be exceeded with a probability of one percent or less).
Component A: Expected Loss
Expected loss is defined as the arithmetic average payout of the contract from the historical cleaned
dataset for the station.
Uncertainty in the Mean
Normally, at least 20 years (preferably 30 years or more) of continuous daily data, with less than 5
percent of data missing, is the accepted minimum in the international weather market. Data not
satisfying these criteria will be subject to higher premium rates or, in some cases, will not be
accepted by the market. Reinsurers will take short dataset lengths and missing data into
consideration when pricing reinsurance treaties, adjusting prices upwards according to their risk
appetite as well as their business and underwriting practices.
As with the expected loss component, the uncertainty adjustment for the data should also be
calculated from the historical data.
To reflect the uncertainty associated with having only a limited number of historical years, or gaps in
the data from which one calculates the expected loss, the expected loss can, and in certain cases
should, be adjusted by a data uncertainty factor. The limited sample size introduces a sampling error
into the estimate of the expected loss and, hence, uncertainty into the pricing calculation. Note that
there is no standard way of pricing the uncertainty associated with the quality and length of the
underlying weather data.
Uncertainty in simulated data is discussed later in the section. This uncertainty in simulated data will
also depend on the uncertainty associated with calibrating a simulation model to a limited sample size
of historical data, as well as potential model error.
Standard Error in the Mean
There are countless ways one could quantify and incorporate data uncertainty into a ratemaking
methodology. In order to develop an Excel-based pricing tool, the following simple spreadsheet
approach for capturing data uncertainty can be used.
Efforts to incorporate data uncertainty, arising from the length and quality of the historical weather
data, into a ratemaking methodology should be considered by insurers as they start and develop their
business. The recommended approach is taken and adapted from Jewson et al. (2005). This process
requires the insurer to differentiate between stations with good and poor quality data, especially if
they have not considered this issue before.
In the case where no detrending has been applied to the underlying index, a sample-based estimate
of the expected loss will follow an approximately normal distribution irrespective of the underlying
distribution of payouts. This normal distribution will have:
• A mean equal to the actual but unknown population mean
• A standard deviation of s/sqrt(N), also known as the standard error, where s is the population
standard deviation and N is the sample size (e.g., for a historical record of 30 years of payouts,
N = 30)
Applying this equation tells us that using 25 years of historical data gives a standard error on the
expected loss equal to one fifth of the standard deviation of the index, and so on. To evaluate
s/sqrt(N), we use the estimate of the standard deviation from the data.
This formula no longer applies where detrending has been used because the number of degrees of
freedom has changed. However, for simplicity, we will assume that the uncertainty in the detrended
case can also be estimated approximately by the s/sqrt(N) rule.
Note: Applying the s/sqrt(N) rule to detrended data will tend to underestimate the uncertainty a little,
but the differences are not large, and the method is simpler than more rigorous alternatives for
quantifying the uncertainty, such as the Monte Carlo method.
The Central Limit Theorem
The Central Limit Theorem states that the sample mean is approximately normally distributed, with a
standard deviation equal to the standard error of the mean. This leads to the following confidence
intervals for the mean:
90 percent confidence level = Mean +/− 1.64 * s/sqrt(N)
95 percent confidence level = Mean +/− 1.96 * s/sqrt(N)
99 percent confidence level = Mean +/− 2.58 * s/sqrt(N)
In the above equations, the multipliers 1.64, 1.96, and 2.58 are taken from the inverse of the standard
normal cumulative distribution for each confidence level. [1]
In order to calculate the expected loss for pricing a weather insurance contract, we are only
interested in the upper uncertainty bound: the bound such that, with X% confidence, the expected
loss is less than or equal to a specific number.
For example, at the 90 percent confidence level, we can say that the expected loss is less than or
equal to [2]:
Expected Loss + 1.28 * s/sqrt(N)
Therefore, a possible Data Uncertainty Factor is defined as follows:
Data Uncertainty Factor = F(β) * s/sqrt(N) (Eq 1)
where F(β) is the inverse of the standard normal cumulative distribution [3] for a given probability β
and, therefore:
Adjusted Expected Loss = Expected Loss + F(β) * s/sqrt(N), where:
• β, the required confidence level, is chosen by the insurer at their discretion
• Since F(0.5) = 0 gives no adjustment, β should be at least 50 percent; the insurer can adjust this
level up or down to reflect the risk-taker's risk preferences
[1] In Excel, NORMSINV(0.95) = 1.64, NORMSINV(0.975) = 1.96, and NORMSINV(0.995) = 2.58.
[2] The equivalent of which is Expected Loss + NORMSINV(0.9) * s/sqrt(N) in Excel.
[3] F(β) = NORMSINV(β) in Excel.
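As a spreadsheet-free illustration, the following Python sketch implements Eq 1, using
scipy.stats.norm.ppf as the counterpart of Excel's NORMSINV; the 30-year payout series is invented
for the example.

```python
# Sketch of the Data Uncertainty Factor and Adjusted Expected Loss (Eq 1).
# norm.ppf is the inverse standard normal CDF (Excel's NORMSINV).
import numpy as np
from scipy.stats import norm

def adjusted_expected_loss(payouts: np.ndarray, beta: float = 0.9) -> float:
    """AEL = mean(payouts) + F(beta) * s / sqrt(N)."""
    n = len(payouts)
    s = payouts.std(ddof=1)                       # sample standard deviation
    uncertainty_factor = norm.ppf(beta) * s / np.sqrt(n)
    return payouts.mean() + uncertainty_factor

# Illustrative 30-year payout history: mostly zero payouts, a few large ones.
payouts = np.array([0.0] * 24 + [10.0, 15.0, 25.0, 40.0, 60.0, 80.0])
print(adjusted_expected_loss(payouts, beta=0.9))  # EL plus a 1.28 * s/sqrt(30) loading
```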
Data to Calculate Expected Loss
It is recommended that, initially, the expected loss calculation and the data uncertainty adjustment be
derived from cleaned data rather than simulated data, unless the insurer is certain of the quality of
the simulated data and of its ability to capture the statistical and temporal properties of the
meteorological variables (see later in this section).
It is important to remember that the simulated data or parameters derived from a Historical
Distribution Analysis, where a probability function is fitted to the historical index values, will be
calibrated to the cleaned data. Therefore, the uncertainties in the simulated data will be related to the
uncertainties associated with the number of historical years and quality of the underlying data, as well
as the potential simulation model error.
Cleaned data is the only touchstone that all stakeholders, including farmers, insurers, and reinsurers,
have. It should, therefore, be the basis for expected loss calculations. The simulation models
or distributions calibrated to this data should then agree with its characteristics, particularly in the
mean.
When there is a trend in the historical payouts, or in the underlying index on which the payouts are
based, the underlying index must be detrended before the expected loss calculation is performed.
The analyst should always look for trends, even if the underlying daily weather data has already been
detrended.
Quality of Underlying Data
The Adjusted Expected Loss calculation only considers the length of the historical record. It does not
take the quality of the underlying data into account. However, there are times when there are missing
data in the weather stations’ records. This will need to be incorporated into the pricing of the weather
insurance contracts.
As there is increased uncertainty in data received from a station with missing records, the resulting
insurance contract will be priced higher to reflect the higher level of uncertainty regarding the
underlying risk. Thus, a contract based on data from a weather station with more missing values will
be more expensive than a contract based on data from a weather station with fewer missing values.
The data uncertainty adjustment is made by multiplying the sample size N by (1 − j), where j is the
fraction of missing raw data in the underlying data used to calculate the N payout values. The
effective sample size therefore decreases as the percentage of missing values in the raw data
increases. While not strictly statistically correct, this is a simple way of incorporating the "data quality
risk" into the existing equation, and it is a little less ad hoc than other methods. Intuitively, the less
data that is available, the smaller the sample size from which one can estimate the expected loss
(e.g., if a whole year of data is missing, N simply reduces by 1).
Adjusted Expected Loss
A suggested calculation for the Adjusted Expected Loss is:
AEL = Expected Loss + F(β) * s/sqrt(N * (1 − j)) (Eq 2), where:
• β, the required confidence level, is chosen by the insurer, and
• j is the fraction of missing data in the raw historical dataset
If the missing data has been filled in, and the cleaning procedure is verified and robust, the
percentage of missing data in the underlying historical cleaned dataset can be used instead. This will
also depend on the insurer's risk preferences.
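A sketch of Eq 2, extending the Eq 1 example by deflating the sample size with the assumed
missing-data fraction j:

```python
# Sketch of Eq 2: the sample size N is deflated by the missing-data fraction j
# before computing the standard error, increasing the uncertainty loading.
import numpy as np
from scipy.stats import norm

def adjusted_expected_loss_eq2(payouts, beta=0.9, j=0.0):
    """AEL = mean + F(beta) * s / sqrt(N * (1 - j))."""
    n_eff = len(payouts) * (1.0 - j)              # effective sample size
    s = np.std(payouts, ddof=1)
    return np.mean(payouts) + norm.ppf(beta) * s / np.sqrt(n_eff)

payouts = [0.0] * 24 + [10.0, 15.0, 25.0, 40.0, 60.0, 80.0]
print(adjusted_expected_loss_eq2(payouts, beta=0.9, j=0.0))   # clean data
print(adjusted_expected_loss_eq2(payouts, beta=0.9, j=0.10))  # 10% missing: higher AEL
```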
This data quality adjustment factor is proposed so that risk-takers are aware of the data quality
issues that reinsurers will consider. It is recommended that an insurer experiment with several
methods before settling on one that they are comfortable with.
For example, the insurer could apply the Adjusted Expected Loss methodology only to stations that
fail a pre-defined "good-quality data" benchmark, such as having at least 30 years of historical data
with less than 5 percent missing. At a minimum, the insurer should have experimented with other
methodologies for differentiating, in terms of pricing, between good data (a long historical record with
few missing points) and poor data (a short historical record with many missing points).
Component B: Probable Maximum Loss (PML)
Regardless of the pricing methodology adopted, both expected loss and the risk of the most extreme
payout must be factored into pricing. In some years, payouts in excess of the expected loss can
occur, and the risk-taker must be compensated for this uncertainty. Therefore, internal provisions
must be made to honor these potentially large payouts. For example, regulators and rating agencies
use such tail-risk measures to determine the capital that a bank, an insurer, a reinsurer, or a
corporation is required to hold in order to reflect the risks that it is bearing. Similarly, an insurer must
also reserve capital against its portfolio.
It is assumed in this section that the benchmark reserve level is the PML (1-in-100). This can be
easily adjusted if necessary. The key advantage of using a RoR approach is that it directly refers to
the loss side of the payout distribution, which is the potential financial loss to the insurer. Therefore, it
directly corresponds to a capital charge required to underwrite the risk at a target level for the
business.
A PML calculation aims to determine the loss that will not be exceeded at a specified return
frequency (often set at 1-in-100) over a given time horizon. In the case of weather insurance, this
time horizon is the life of the contract, and the PML (1-in-100) is the maximum payout that is
expected to occur once in 100 contract lifetimes.
Advantage of PML (1-in-100)
A PML set at the 1-in-100 return frequency is referred to as PML (1-in-100). The advantage of setting
a PML (1-in-100) is that it is computed from the loss side of the payout distribution. In this way, the
loss is defined with respect to the expected payout. Therefore, PML captures the potential financial
loss to the seller.
Using the Return-on-PML method, from here on referred to as the Return-on-Risk (RoR) method, is
more appropriate for pricing structures that protect against low-frequency but high-severity risks,
which have highly asymmetric payout distributions, such as weather insurance for farmers.
Disadvantage of PML (1-in-100)
The disadvantage of setting a PML at the 1-in-100 return frequency, or PML (1-in-100), is that it is a
difficult parameter to estimate, especially at high strike levels set far away from the mean.
PML (1-in-100) is usually established through a Historical Distribution Analysis or Monte Carlo
simulation. Nevertheless, the worst case recorded historically can often be used as a cross-check for
the PML.
Note that the PML (1-in-100) analysis is not straightforward to implement in Excel; specific software,
such as @Risk, is required. Knowledge of a programming language, such as VBA, R, or C, in order to
write routines to fit distributions or simulate data, is extremely helpful.
The concepts of Probable Maximum Loss (PML) and the similar concept of Value-at-Risk (VaR) are
terms that have become widely used by insurers, corporate treasurers, and financial institutions to
summarize the total risk of portfolios. Central bank regulators, for example, use VaR in determining
the capital that a bank is required to hold in relation to the market risks that it is bearing.
Estimating the PML
Historical Burn Analysis
From our previous modules, we know that HBA is considered the simplest method of weather
contract pricing. HBA involves taking historical values of the index, from cleaned and possibly
detrended data, and applying the contract in question to them. While HBA is a simple analysis to
perform, it gives a limited view of possible index outcomes, may not capture the possible extremes,
and can be overly influenced by individual years in the historical dataset. Estimating parameters such
as the PML can, therefore, be very difficult. The largest historical value is always a good reality check
when considering the possible variability of payouts. Additionally, the confidence that can be
attached to averages and standard deviations calculated from historical data is limited by the number
of years of data available; however, this limitation can be incorporated into the adjusted expected
loss calculation.
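As a concrete illustration of a burn analysis, the sketch below applies a hypothetical linear
deficit-rainfall payout function to an invented index history; the trigger, exit, and limit values are
assumptions, not values from this module.

```python
# Historical Burn Analysis sketch: apply the contract's payout function to the
# historical index values and read off burn statistics, including the worst year.
import numpy as np

def payout(index_value, trigger, exit_, limit):
    """Hypothetical linear payout: 0 above trigger, full limit at/below exit."""
    if index_value >= trigger:
        return 0.0
    if index_value <= exit_:
        return limit
    return limit * (trigger - index_value) / (trigger - exit_)

# Invented cleaned historical index values (e.g., seasonal rainfall in mm).
index_history = np.array([310, 280, 420, 150, 95, 260, 330, 200, 380, 175])
burn = np.array([payout(x, trigger=250, exit_=100, limit=100) for x in index_history])
print("Expected loss:", burn.mean())           # average historical payout
print("Worst historical payout:", burn.max())  # reality check for any PML estimate
```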
Therefore, if an insurer is keen to build a weather business, it is recommended that the insurer invest
in the appropriate tools and software in order to be able to estimate variables such as PML in a more
robust manner. This would result in a more detailed analysis than simply looking at the worst-case
recorded historically from a Historical Burn Analysis (HBA). Some of the methods are outlined below.
Historical Distribution Analysis
Two ways to estimate the PML from a limited number of years of data are to:
• Fit a parametric or non-parametric probability distribution to the historical index values
• Fit a parametric or non-parametric probability distribution to the contract payout values
The contract payout statistics can then be calculated from the properties of this distribution.
When using the Historical Distribution Analysis approach, care should be taken with the assumptions
about the distribution of the payouts or the underlying index. In particular, information about payouts
that do not happen often, but cover more extreme risk, needs to be handled with care.
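A minimal sketch of the first variant (fitting a parametric distribution to the historical index), assuming
a gamma distribution and reusing the hypothetical payout function from the burn-analysis sketch
above; the distribution choice and all values are illustrative.

```python
# Historical Distribution Analysis sketch: fit a parametric distribution to the
# historical index and take the PML from the payout at an extreme quantile.
import numpy as np
from scipy.stats import gamma

def payout(index_value, trigger=250.0, exit_=100.0, limit=100.0):
    """Same hypothetical linear deficit-rainfall payout as in the HBA sketch."""
    return float(np.clip((trigger - index_value) / (trigger - exit_), 0.0, 1.0) * limit)

index_history = np.array([310, 280, 420, 150, 95, 260, 330, 200, 380, 175])
shape, loc, scale = gamma.fit(index_history, floc=0)  # fit the index distribution

# For a deficit-rainfall contract, low index values drive payouts, so the
# 1-in-100 adverse index level is the 1st percentile of the fitted distribution.
index_1_in_100 = gamma.ppf(0.01, shape, loc=loc, scale=scale)
print("PML(1-in-100) estimate:", payout(index_1_in_100))
```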
Monte Carlo Simulation
An alternative for estimating the true distribution of potential contract payouts is through a simulation.
More information about the potential true distribution of payouts can be observed by running
simulations as compared to simply considering the historical payout values. This is because a limited
payout history can mask greater underlying variability of a contract.
The simplest way to perform a simulation for three-phase contracts, for example, is through a
dekadal (10-day period) rainfall Monte Carlo simulation. This simulation fits a distribution to the
historical cumulative rainfall of each dekad within the contract. In addition, a correlation matrix is
established between the cumulative rainfall totals recorded in each dekad. Using this correlation
matrix, a Monte Carlo simulation can be performed that preserves both the correlation structure and
the individual dekadal distributions. The contract design webtool at the end of the course has a
rainfall simulator that allows simulation of dekadal rainfall in this way.
Each simulation will produce one sample year of possible cumulative rainfall totals for the dekads
within the contract. From these simulations, contract payouts can be calculated for a particular
simulation year. Running many of these simulations will generate a distribution of possible contract
payouts from which the pertinent contract statistics can be estimated. This approach can also be
used for contracts with a dynamic start date; however, more dekads within the rainfall season must
be simulated to accurately capture the moving start date. During these simulations, the mean and
standard deviation of the simulated rainfall need to be checked for consistency with the historical
data so that the simulated data can be used with confidence.
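The module does not prescribe a specific simulation algorithm; one common way to preserve both
the dekadal marginals and the correlation matrix is a Gaussian copula, sketched below with invented
stand-in data and assumed gamma marginals (numpy and scipy assumed available).

```python
# Dekadal rainfall Monte Carlo sketch (Gaussian copula): fit a marginal
# distribution per dekad, estimate the dekad-to-dekad correlation matrix,
# sample correlated normals, and map them back through the fitted marginals.
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(42)

# hist[:, d] = historical cumulative rainfall for dekad d. Stand-in data with
# an illustrative shape of 30 years x 9 dekads (three 3-dekad phases).
hist = rng.gamma(shape=2.0, scale=20.0, size=(30, 9))

marginals = [gamma.fit(hist[:, d], floc=0) for d in range(hist.shape[1])]
corr = np.corrcoef(hist, rowvar=False)         # dekadal correlation matrix

def simulate_years(n_years):
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n_years)
    u = norm.cdf(z)                            # correlated uniform variates
    return np.column_stack([
        gamma.ppf(u[:, d], a, loc=l, scale=s)
        for d, (a, l, s) in enumerate(marginals)
    ])                                         # simulated dekadal rainfall totals

sims = simulate_years(10_000)
# Consistency check recommended in the text: compare simulated and historical
# means (and, similarly, standard deviations) per dekad before trusting the model.
print(np.allclose(sims.mean(axis=0), hist.mean(axis=0), rtol=0.1))
```

Contract payouts can then be computed on each simulated year, exactly as in the burn-analysis
sketch, to build the payout distribution from which the PML is read.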
For an even more robust analysis, it is possible to run a Monte Carlo simulation at the daily level.
However, running a Monte Carlo simulation at the meteorological variable level can be the most
complicated approach, because it requires simulating thousands of years of daily rainfall or
temperature data at each station while respecting the daily correlations and seasonal cycles.
All the underwritten index values can then be calculated from these data. Subsequently, the weather
contracts could be applied to each simulated index value to create thousands of simulated payouts
from which the expected and variable payout statistics of the contracts can also be calculated.
Building daily simulation models that correctly capture the statistics of the underlying data is very
challenging. It is recommended that the approaches outlined above should be used to estimate PML,
and careful thought should be given to embarking on a daily simulation and modeling project.
Pricing Using the PML
The PML (1-in-100) should be estimated using one of the methods outlined previously. If the
maximum payout has been reached in the historical cleaned data, the discussion of the various PML
estimation methodologies is moot, and the limit of the contract should be used as the PML estimate.
After applying several approaches, the question remains: what value should be used for the
PML (1-in-100) estimate?
There is no single correct answer, but an estimate can be determined by using intuition. Putting aside
uncertainty issues regarding the quality and length of the underlying data, and provided one is
confident in the approaches used, a PML estimate can be determined by taking the largest of the
estimates. This number should be at least equal to the maximum payout in the historical record, as
seen in the HBA. The insurer will have to settle on an estimate depending on their risk preferences
and overall portfolio.
Indeed, discussions of catastrophic risk loading can be made simpler when, for each contract, the
insurer chooses to only consider the sum insured as the maximum historical loss to include in the risk
margin formula. Although this method would be very simple to implement, it could make some
contracts more expensive for farmers.
If the estimated PML number is less than the historical maximum payout, after detrending, then it is
recommended that the historical maximum payout is used instead:
PML (1-in-100) = max (Estimated PML (1-in-100), Maximum Historical Payout) (Eq 3)
This cross-check against the maximum historical payout is recommended, even though statistically it
could be a PML overestimate, particularly if a simulation methodology is used. Although simulated
data can capture the average well, in some cases it tends to underestimate the variability, and hence
the risk, of the historical data record: the simulated expected loss and payout frequency come out
lower than their historical counterparts.
This underestimation of risk means that prices derived from the simulated data will be lower than the
pricing derived from the historical cleaned data. Contract designs that require daily-level simulations
are particularly prone to this problem as simulating daily meteorological data correctly, particularly
rainfall, is challenging. Hence, it should not be surprising that there may be some discrepancies
between the simulated and raw data.
As reinsurers may run simulations, which may be very different from the insurer’s simulations, it is
recommended that the historical cleaned data is used for the expected loss calculation with an
uncertainty adjustment, as described in the previous section. Therefore, unless the insurer is very
sure of their simulation or historical data analysis methodology, using the historical cleaned data is
the preferred approach.
However, better estimation of the tails of the payout distribution, and of the PML for a given return
period, is strongly recommended. This cannot be done accurately by running a Historical Burn
Analysis alone on a limited number of years (unless the maximum payout has been reached
historically). Therefore, as simulations or an HDA provide better estimates of the tails of the payout
distribution, they should be used for the PML (1-in-100) calculation. This, of course, still necessitates
a check against the maximum payout in the historical cleaned data.
Note: As with the expected loss, there are uncertainties associated with the PML estimate when
estimating the tails of the payout distribution, irrespective of the method used to determine its value.
Instead of adding an uncertainty adjustment for this, a 1-in-250 return frequency PML (i.e.,
VaR(99.6)) could be considered in place of the PML (1-in-100), for example. This will reduce
vulnerability to model and assumption risk when estimating the tails of the payout distribution.
Component C: Administration & Business Expenses
The Technical Premium (TP) is defined as follows:
TP = AEL + α * (PML(1-in-100) − AEL) (Eq 4), where:
AEL is the Adjusted Expected Loss and PML(1-in-100) is the maximum likely payout in 100 contract
lifetimes.
The administrative and business expenses must be included to arrive at the final gross premium for a
contract. These expenses are often expressed as percentages of the technical premium.
Administrative and business expenses are determined by the insurer and reinsurer; they are not fixed
or pre-determined, but are set based on the costs incurred by the insurer and reinsurer in doing the
business.
To arrive at the final premium, the technical premium must be grossed up by multiplying by the factor
(1 + TE), where TE is the total administrative and business expenses reflecting the insurer's fixed
costs. The final premium is defined as:
Premium = TP * (1 + TE) (Eq 5)
The complete calculation for the final gross premium per contract, P, is:
P = (1 + TE) * (AEL + α * (PML(1-in-100) − AEL)) (Eq 6), where:
AEL = Expected Loss + F(β) * s/sqrt(N * (1 − j))
The Expected Loss is calculated using a Historical Burn Analysis on cleaned and, where appropriate,
detrended data for all the historical years available.
PML (1-in-100) = max (Estimated PML (1-in-100), Maximum Historical Payout)
The Maximum Historical Payout is determined by a Historical Burn Analysis.
If the insurer wants to take the timing of cash flows into account, the premium can be discounted with
respect to the time when the premium is collected:
Discounted Premium = exp[r * (t − T)] * TP * (1 + TE), where:
r is the interest rate, t is the time at which the premium is collected, and T is the contract maturity
date and the date of a potential payout.
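Pulling the pieces together, here is a sketch of Eqs 3 through 6 plus the optional discounting; all
numerical inputs are illustrative assumptions.

```python
# End-to-end premium sketch: Eq 3 cross-check, Eq 4 technical premium,
# Eqs 5-6 expense gross-up, and the optional cash-flow discounting.
import math

def gross_premium(ael, pml_100, max_hist_payout, alpha, te):
    pml = max(pml_100, max_hist_payout)   # Eq 3: cross-check against history
    tp = ael + alpha * (pml - ael)        # Eq 4: technical premium
    return tp * (1.0 + te)                # Eqs 5-6: gross up by expenses

def discounted_premium(premium, r, t, T):
    """Discount from maturity/payout date T back to premium collection time t."""
    return math.exp(r * (t - T)) * premium

p = gross_premium(ael=7.5, pml_100=85.0, max_hist_payout=100.0, alpha=0.15, te=0.25)
print(p)                                  # final gross premium
print(discounted_premium(p, r=0.08, t=0.0, T=0.75))
```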
Areas for Improvement in Premium Calculation
There are a number of improvements that should be considered by insurers interested in developing
the weather index insurance business:
• Improving the PML and expected loss estimates needed for the premium calculation. While robust
PML estimates are often limited by the amount and quality of the underlying data, the more accurate
the PML and expected loss estimates, the more appropriate the premium will be for the product.
• Reflecting data uncertainty risk in the pricing and implementing a data uncertainty adjustment. At
the very least, insurers should be aware of these issues and of the limitations and potential pitfalls of
using a limited data history for ratemaking. The adjustment proposed in this module is simple to
implement in spreadsheets; however, insurers should experiment with several methods to find a data
uncertainty adjustment that they are comfortable with.
Alternative and additional methods that are strongly recommended and used in the market (available
in weather derivative pricing software, such as Climetrix and Speedwell) include observing the
sensitivity of the contract statistics of expected loss and PML to:
• Contract dates, i.e., changing the start date by a few days (e.g., +/−1 day, +/−2 days, ... +/−10
days) on either side of the fixed start date to see how the pricing parameters change
• Triggers and other contract parameters, i.e., adjusting the triggers up and down by small
increments to see whether new payouts occur with small trigger changes, which can change the
pricing parameters
• Trend sensitivity, i.e., looking at how different detrending methodologies impact the historical
payouts and, therefore, the pricing parameters
• Missing data in-filling methodologies, i.e., looking at how different cleaning or in-filling
methodologies impact the price, as above
These steps can help to extract more information from the historical dataset, reduce the potential
sampling error in the expected loss and other payout statistics, and minimize the risk of missing
critical information about the payout potential of a contract by looking at, and stressing, the historical
payout series in more than one way. A sketch of such a sensitivity sweep is shown below.
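The following is a minimal sketch of the start-date sweep from the list above, using invented daily
rainfall and a hypothetical cumulative-deficit contract; only the sweep mechanics are the point here.

```python
# Sensitivity sweep sketch: recompute the burn expected loss while shifting the
# contract start date. Trigger/exit/limit sweeps follow the same pattern.
import numpy as np

rng = np.random.default_rng(0)
daily = rng.gamma(0.4, 8.0, size=(30, 365))   # stand-in daily rainfall, 30 years

def burn_expected_loss(start, length, trigger, exit_, limit):
    index = daily[:, start:start + length].sum(axis=1)       # seasonal index
    pay = np.clip((trigger - index) / (trigger - exit_), 0, 1) * limit
    return pay.mean()

base = burn_expected_loss(start=150, length=90, trigger=250, exit_=100, limit=100)
for shift in range(-10, 11, 2):               # +/- 10 day start-date sweep
    el = burn_expected_loss(150 + shift, 90, 250, 100, 100)
    print(f"start shift {shift:+3d} days: EL = {el:6.2f} (base {base:6.2f})")
```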
Note on Accuracy of Methods
When employing any data analysis techniques, it should always be remembered that the results are
only as good as the model and data used. Model outputs are subject to model error as well as the
quality of the data available to calibrate the models and run them. At the end of the day there is a limit
to the information that one can confidently extract from poor or short historical weather data records.
Consider the example of fitting a distribution to historical payouts using a Historical Distribution
Analysis. The uncertainty in the results will be driven by the underlying uncertainty of using only a
limited number of values on which to fit a distribution. The uncertainty level in the estimates, such as
the standard error in the mean and variance, is not reduced.
Fitting a daily simulation model to meteorological data uses much more information to calibrate the
model, and an argument can be made that this can better represent the index distribution and its
extremes. For example, when modeling a monthly contract, roughly 30 daily data points per year are
available rather than a single index value per year. However, the required models are much more
complex, and there is a greater potential risk of model error.
The simulation and Historical Distribution Analysis approaches can reduce the uncertainties of
relying on a Historical Burn Analysis alone only to a limited extent. The uncertainty analysis defined in
the pricing methodology presented in this module can be applied even if methods other than the
Historical Burn Analysis are used. This uncertainty is a fundamental characteristic of weather, and of
weather data, and should be borne in mind throughout the pricing process. Although the Historical
Burn Analysis is simple, its advantage lies in making the fewest assumptions. Hence, it should always
be the starting point and touchstone for all pricing analysis.
Jewson et al. (2005) have tried to address the issue of the potential accuracy of daily modeling over
Historical Burn Analysis, or over a Historical Distribution Analysis. However, their results depend on
the underlying model accuracy and the quality of the underlying historical data. Jewson recommends
that unless a daily model works very well and all the relevant statistics have been thoroughly
checked, a sensible approach is to use a combination of methods to estimate parameters such as the
expected loss and the PML, as recommended above.
* Jewson, S., A. Brix, and C. Ziehmann. Weather Derivative Valuation: The Meteorological, Statistical,
Financial and Mathematical Foundations. Cambridge: Cambridge University Press, 2005.