What's Reasonable About a Range?
Casualty Loss Reserve Seminar
September 11-12, 2006
Roger M. Hayne, FCAS, MAAA

How Did We Get Here?
Traditional actuarial methods are deterministic
At their best they say that if certain specific things happen, then the ultimate payments (incurred losses, reserves) will be such and such
With no underlying models, there is no statement of how likely those "certain specific things" are to happen, nor of how likely the final payments are to be "close" to "such and such"

Some Estimates Are Better than Others
Actuaries have long known that estimates for certain lines are "better" than for others, but how?
Traditionally we apply many "methods" (i.e., many "certain specific things") and review the resulting "such and such" outcomes
If the "such and such" bunch together, the actuary "feels good" about the final estimates
If the "such and such" are spread out considerably, the actuary "feels uncomfortable"

Enter Reasonable Estimates
How can one tell a "good" "such and such" from a not-so-good one?
Without underlying models we cannot tell which is "more likely"
We do the next best thing and ask whether the "certain specific things" are "reasonable" or not
If they are "reasonable" we say the "estimate" (whatever that is) is "reasonable"

Range of Reasonable Estimates
Various "certain specific things" seem "reasonable" to a "reasonable person" and others do not
A "range of reasonable estimates" is a collection of "such and such" that follow from "reasonable" "certain specific things"
As with beauty, "reasonable" is in the eye of the beholder
Honest "reasonable" people can disagree
This can be fodder for disputes

Enter the Accountants
Problem:
– There can be many "reasonable" estimates
– Methods do not quantify uncertainty
– Accounting statements require single numbers that are treated as "fact"
Solution: book a "reasonable estimate" of the unpaid claim costs
Implication: book the number that will happen
But no single number is very likely

Let's Put Some Numbers Around This
Begin by assuming the goal is to estimate the amount that will ultimately happen
In addition, assume you know all possible outcomes (Z) and the probabilities associated with those outcomes, i.e., you know the "distribution of outcomes" with certainty
What would be "reasonable"?
– Any X with 1/3 ≤ Pr(Z ≤ X) ≤ 2/3
– Between X and Y, where X and Y minimize Y - X subject to Pr(X ≤ Z ≤ Y) = 1/3
– Between E(Z) - T and E(Z) + T, where T is selected such that Pr(E(Z) - T ≤ Z ≤ E(Z) + T) = 1/3
– Something else?

The Infamous Die
At the risk of provoking Bob's ire, suppose the ultimate payment is determined by the roll of a fair die (rolled and hidden when the policy is written)
Here we know the distribution: each number 1 through 6 is equally likely
The distribution is symmetric, so the "reasonable range" of 2.5 to 4.5 satisfies all three conditions, though it is not necessarily unique
It comes down to what "feels right"
The focus is the outcome, not the estimate
Even with perfect knowledge, the outcome is uncertain

Narrowing the Gap
Instead of focusing on the (impossible to forecast) outcome, why not focus on some summary statistic of the distribution?
In our ideal world, statistics of the variable have known values
For example, even though the outcome is unknown, its mean or "expected value" is known
If we define the mean as a reasonable estimate, then the "range of reasonable estimates" is a single point

What Is It Going To Be?
We have a fundamental choice to make:
– Do we pick the reserve booked as a point on the distribution ("what will happen")?
– Do we pick the reserve booked as some statistic or descriptor of the distribution?
If a descriptor, which one?
– Mean
– Mode
– Median
– Other?
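To make the die example concrete, here is a minimal Python sketch (not part of the original presentation) that checks the candidate range of 2.5 to 4.5 against the three definitions of "reasonable" listed above. The helper name prob and the use of exact fractions are illustration choices only, and the minimum-width requirement in the second definition is not verified, only the 1/3 probability it asks for.

    from fractions import Fraction

    # Fair die: outcomes 1..6, each with probability 1/6 (the "infamous die" above).
    outcomes = range(1, 7)
    p = {k: Fraction(1, 6) for k in outcomes}

    def prob(lo, hi):
        """Pr(lo <= Z <= hi) for the die outcome Z."""
        return sum(p[k] for k in outcomes if lo <= k <= hi)

    mean = sum(k * p[k] for k in outcomes)           # 7/2, i.e. 3.5

    lo, hi = Fraction(5, 2), Fraction(9, 2)          # the candidate range 2.5 to 4.5

    # Definition 1: each endpoint X satisfies 1/3 <= Pr(Z <= X) <= 2/3.
    cond1 = all(Fraction(1, 3) <= prob(1, x) <= Fraction(2, 3) for x in (lo, hi))

    # Definition 2 (probability part only): the interval captures probability 1/3.
    cond2 = prob(lo, hi) == Fraction(1, 3)

    # Definition 3: symmetric about E(Z) with Pr(E(Z)-T <= Z <= E(Z)+T) = 1/3, here T = 1.
    T = Fraction(1)
    cond3 = (lo, hi) == (mean - T, mean + T) and prob(mean - T, mean + T) == Fraction(1, 3)

    print(mean, cond1, cond2, cond3)                 # prints: 7/2 True True True

Note that 2.5 to 4.5 is far from unique: any interval covering exactly two faces also captures probability 1/3, which is the slide's point that even a "reasonable range" of outcomes comes down to judgment.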
My Favorite Descriptor
What is the "best" amount to book (Z)?
For each potential outcome X, calculate the decrease in the value of the company if it books Z but actual losses turn out to be X; call this amount g(X, Z)
A rational amount to book as reserves is the value of Z for which the expected value of g(X, Z), taken over all values of X, is as small as possible (the "least pain")
It turns out the mean solves this problem if g(X, Z) = (X - Z)²
Is the mean really "rational" or even "reasonable"?

Which Is "Better"?
Actuaries like to think about the average:
– It all evens out in the end (pluses and minuses cancel over the long term)
– It may even be possible to predict the mean (or at least get "close")
– It may make life "easier"
– But it is usually not "verifiable" by actual events
Our publics, however, often think in terms of "what will happen"
We are measured by how close our estimates are to what "actually happens"

Don't Confuse Me With Facts...
In reality we seldom know the distribution of outcomes
We must estimate the distribution
We can think of a distribution of distributions
If we pick a descriptor, we can calculate its value for each of those distributions
We thus have a distribution of the descriptors
We can have "reasonable ranges" of the descriptors, just as we had "ranges of reasonable" outcomes before

Very Simple Example
Losses have a lognormal distribution with parameters m (unknown) and σ² (known), respectively the mean and variance of the related normal
The parameter m itself has a normal distribution with mean μ and variance τ²
Here we have a distribution of distributions, one for each value of the parameter m

Distribution of Outcomes & Means
The expected value is lognormal
– Parameters μ + σ²/2 and τ²
– c.v.² of the expected value is exp(τ²) - 1
"What will happen" is lognormal
– Parameters μ and σ² + τ²
– c.v.² is exp(σ² + τ²) - 1
c.v. = standard deviation/mean, a measure of relative dispersion
Note that the expected value is much more certain (smaller c.v.) than "what will happen"

Ranges
Take the "reasonable range" to be the middle third of the distribution, i.e., the X with 1/3 ≤ Pr(Z ≤ X) ≤ 2/3
Let α satisfy Pr(Z ≤ α) = 2/3 for Z ~ N(0, 1)
The reasonable range of outcomes runs from exp(μ - α√(σ² + τ²)) to exp(μ + α√(σ² + τ²))
The reasonable range of means runs from exp(μ + σ²/2 - ατ) to exp(μ + σ²/2 + ατ)
As expected, the reasonable range of outcomes is wider than the reasonable range of expected values
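The following short Python sketch is not from the original slides; it just plays out the lognormal example numerically. The parameter values assigned to mu, sigma2, and tau2 are invented purely for illustration.

    from math import exp, sqrt
    from statistics import NormalDist

    # Illustrative (made-up) parameters: log-scale location and the two variances.
    mu, sigma2, tau2 = 10.0, 0.40, 0.10

    # alpha is the standard normal 2/3 quantile: Pr(Z <= alpha) = 2/3.
    alpha = NormalDist().inv_cdf(2.0 / 3.0)

    # "What will happen": the outcome is lognormal with parameters mu and sigma2 + tau2.
    cv_outcome = sqrt(exp(sigma2 + tau2) - 1.0)
    outcome_lo = exp(mu - alpha * sqrt(sigma2 + tau2))
    outcome_hi = exp(mu + alpha * sqrt(sigma2 + tau2))

    # The conditional mean exp(m + sigma2/2) is lognormal with parameters mu + sigma2/2 and tau2.
    cv_mean = sqrt(exp(tau2) - 1.0)
    mean_lo = exp(mu + sigma2 / 2.0 - alpha * sqrt(tau2))
    mean_hi = exp(mu + sigma2 / 2.0 + alpha * sqrt(tau2))

    print(f"c.v. of outcomes {cv_outcome:.2f} vs c.v. of the mean {cv_mean:.2f}")
    print(f"middle-third range of outcomes: {outcome_lo:,.0f} to {outcome_hi:,.0f}")
    print(f"middle-third range of means:    {mean_lo:,.0f} to {mean_hi:,.0f}")

With these made-up inputs the range of outcomes comes out roughly twice as wide as the range of means, echoing the comparison of the c.v. values above.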
Evolution of Estimates
Think of the distribution of parameters as describing different possible "states of the world" with assumed likelihoods
Base these estimates on:
– Prior analysis of data
– Judgment
– Other?
We can now talk about reasonable ranges of outcomes or of any descriptor (mean, median, mode, least pain, ...)

Evolution of Estimates (Cont.)
Now observe the "real" losses for the next year
For each parameter value, calculate the likelihood of observing those losses given that parameter value
Re-weight the prior distribution, giving more weight to the parameter values with a higher likelihood of producing the observed losses
This gives revised descriptors and ranges

An Evolutionary (Bayesian) Model
Again take a very simple example, using the die example
For simplicity, assume we book the mean
This time there are three different dice that could have been thrown, and we do not know which one it is
Currently no information favors one die over the others
The dice have the following chances for each outcome:

Outcome   Die 1   Die 2   Die 3
1         1/6     1/21    6/21
2         1/6     2/21    5/21
3         1/6     3/21    4/21
4         1/6     4/21    3/21
5         1/6     5/21    2/21
6         1/6     6/21    1/21

Evolutionary Approach
"What will happen" is the same as with the first die: equal chances of 1 through 6
The expected value has equally likely chances of being 2.67, 3.50, or 4.33
If you set your reserve at the "average," both views give the same average, 3.5; the true mean is within 0.83 of this amount with 100% confidence
There is a 1/3 chance the outcome will be 2.5 away from this pick
We now "observe" a 2 – what do we do?

How Likely Is It?
The likelihood of observing a 2:
– Die 1: 1/6
– Die 2: 2/21
– Die 3: 5/21
Given our distributions, having observed a 2 it now seems more likely that the true state of the world is Die 3 than either of the others
Use Bayes' theorem to estimate the posterior likelihoods:
Posterior(model | data) ∝ likelihood(data | model) × prior(model)

Evolutionary Approach
The revised prior is now:
– Die 1: 0.33
– Die 2: 0.19
– Die 3: 0.48
The revised (posterior predictive) distribution of the outcome is now:

Outcome       1      2      3      4      5      6
Probability   0.20   0.19   0.17   0.16   0.15   0.13

The overall mean is 3.3
The expected value still takes on the values 2.67, 3.50, and 4.33, but now with probabilities 0.48, 0.33, and 0.19 respectively (our "range")

Next Iteration
The second observation is a 1
The revised prior (based on observing a 2 and then a 1) is now:
– Die 1: 0.28
– Die 2: 0.05
– Die 3: 0.67
The revised distribution of the outcome is now:

Outcome       1      2      3      4      5      6
Probability   0.25   0.21   0.18   0.15   0.12   0.09

Now the mean is 3.0
The expected value can be 2.67, 3.50, or 4.33 with probability 0.67, 0.28, and 0.05 respectively

Some "Take-Aways"
Always be clear on what you are describing:
– The entire distribution of future outcomes ("what will happen")?
– Some statistic representing that distribution?
Our publics probably think in terms of the former; we think in terms of the latter
The reality is that the distribution of future outcomes ("what will happen") is quite dispersed – maybe too wide for "comfort"
Disclosure, disclosure, disclosure
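As a closing illustration, here is a minimal Python sketch, not part of the original presentation, that reproduces the Bayesian dice updates above: equal starting weights on the three dice, an update after observing a 2, and a second update after observing a 1.

    from fractions import Fraction

    # The three candidate dice: probability of each face 1..6.
    dice = {
        "Die 1": [Fraction(1, 6)] * 6,                        # fair
        "Die 2": [Fraction(k, 21) for k in range(1, 7)],      # weighted toward high faces
        "Die 3": [Fraction(7 - k, 21) for k in range(1, 7)],  # weighted toward low faces
    }

    # No information favors any die, so start with equal prior weights of 1/3.
    weights = {name: Fraction(1, 3) for name in dice}

    def update(weights, face):
        """Bayes' theorem: posterior weight proportional to likelihood of the face times prior weight."""
        posterior = {name: w * dice[name][face - 1] for name, w in weights.items()}
        total = sum(posterior.values())
        return {name: w / total for name, w in posterior.items()}

    def mixture_mean(weights):
        """Mean of the outcome under the weighted mixture of the three dice."""
        return sum(w * sum(face * p for face, p in enumerate(dice[name], start=1))
                   for name, w in weights.items())

    for observed in (2, 1):                 # the two observations used in the slides
        weights = update(weights, observed)
        shown = ", ".join(f"{name}: {float(w):.2f}" for name, w in weights.items())
        print(f"after observing {observed}: {shown}; mean of outcome = {float(mixture_mean(weights)):.2f}")

    # Prints roughly 0.33/0.19/0.48 with mean 3.26 after the 2, then 0.28/0.05/0.68 with
    # mean 2.97 after the 1; the slides round these means to 3.3 and 3.0.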