IAA Monograph:
Risk Adjustments Under IFRS
Exposure Draft
October 2016
Table of Contents

Overview
Chapter 1 – Introduction
Possible Appendix/Side panel/Sidebar/Footnote(s) re: references highlighting Solvency II technical provisions which are similar to the principles under IFRS X
Chapter 2 – Principles Underlying Risk Adjustments
Chapter 3 – Risk Adjustment Techniques
Chapter 4 – Techniques and Considerations in Quantitative Modelling
Chapter 5 – Qualitative Assessments and Other Factors to Consider
Chapter 6 – Effect of Risk Mitigation Techniques
Chapter 7 – Validation of Risk Adjustments
Chapter 8 – Remeasurement of the Risk Adjustment
Chapter 9 – Disclosure and Communication
Chapter 10 – Case Studies
Chapter 11 – Bibliography
Overview
Practical challenges arise from the application of the International Financial Reporting Standards (IFRS)
requirements on the risk adjustment within a current fulfilment value approach to financial reporting. This
monograph aims to provide a range of practices and techniques that are currently in use or could potentially
be applied after appropriate consideration.
Chapter 1 introduces the objective of risk adjustments and requirements of IFRS X Insurance Contracts, in
comparison to risk adjustment concepts in other frameworks.
Chapter 2 considers the elements that form the underlying framework for risk adjustment measurement
developed for IFRS, and discusses the general techniques that an entity may select to calculate a liability
for risk adjustment in compliance with the IFRS framework.
The assessment of the more commonly found techniques under the IFRS framework is the theme of chapter
3. In particular, this monograph considers the advantages of quantile, cost-of-capital, and other techniques
to estimate the risk adjustment under IFRS X Insurance Contracts.
Probability distributions that underpin the choice of the risk adjustment technique are the subject of chapter
4. The chapter also discusses the inherent limitations that modelling probability distributions would have on
the assessment of the uncertainty that the risk adjustment is designed to represent in the entity’s financial
statements. Chapter 4 also covers statistical techniques that an entity could apply to support its calculations
of risk adjustment liabilities. It also discusses the merits of commonly-used statistical methods such as
stochastic methods and option pricing, copulas, and probability distribution transforms, such as the Wang
Transform.
Chapter 5 focuses on the qualitative considerations an entity would reflect to ensure its approach to risk
adjustment measurement is consistent with the other components of the current fulfilment value under IFRS
X Insurance Contracts.
Portfolio characteristics, including the pooling of risk and other factors, such as the various risk mitigation
techniques that insurers may have in place, are discussed in chapter 6.
Chapter 7 discusses aspects to be considered in the validation of those risk adjustments that an entity has
selected.
The impact of the passage of time on the risk adjustment value—including practical considerations on how
to remeasure risk adjustment liabilities in the context of open portfolios and in light of new information
emerging from experience—is covered in chapter 8.
The disclosure requirements for risk adjustment liabilities under IFRS X Insurance Contracts are discussed
in chapter 9.
In chapter 10, several case studies are presented to give an overview of real-world applications of risk
adjustment methods for a cross-section of property/casualty, life, health, and annuity insurance contracts.
Chapter 1 – Introduction
Abstract
The Introduction provides the context for the following technical sections and discussions of practical
application of risk adjustments.
The first section explains the main purposes and applications for adjusting expected values to reflect the
risks associated with such values. It aims to provide the reader with a clear understanding of the needs of
users of financial statements with respect to financial values that are not certain but are subject to risk and
uncertainty. The International Accounting Standards Board (IASB) standard on insurance contracts
provides the specific requirements regarding risk adjustments that will be addressed.
In this chapter the reader will find a summary of the key requirements in the IFRS standards and more detailed explanations of the rationale and considerations that underlie the IFRS requirements related to risk
adjustment. This chapter should help the reader understand the key issues and considerations that may
impact possible interpretations of the risk adjustments as required by the standard.
The third section compares and contrasts the use and application of risk metrics used for operational,
capital, and solvency management to the risk adjustment under IFRS requirements.
Section 1.1 Objectives of risk adjustments for financial reporting
1.1.1 Purposes and applications for adjusting expected values to reflect the risks
associated with expected values
Risk and uncertainty are inherent features of nearly all human endeavours. Insurers have been in the
business of managing and pooling business and personal risks for centuries. Taking on someone else's risk, or transferring the financial consequences of risky activities through mechanisms such as insurance, has a unique feature: the proceeds that compensate the party accepting the transfer of risk are collected prior to the potential disbursement that the occurrence of the risk-related event would trigger. This results
in the inversion of the usual business cash cycle, in which entities incur costs to produce goods and services
prior to collecting the proceeds from their sale. The consequence of this feature on financial reporting of a
business that sells insurance is that the estimates of the likely disbursements or outflows are of fundamental
importance in order to communicate the insurer’s performance and financial position at any given time to
its stakeholders.
From the point of sale and throughout the economic life of an insurance contract, the success of the entity 1
that issues the insurance contract as a business is dependent on its ability to estimate the net expected
outflows that the portfolios of insurance contracts it has assembled will generate. The primary purpose of
this information is for the entity to obtain from its policyholders the commensurate amount of
resources/inflows that would be sufficient to fund the expected outflows and to reward the insurer for its
ability to effectively relieve them from the risks they have transferred to it.
The fundamental statistical law of large numbers applies to many risks covered by insurance contracts. For that reason, a common goal of the insurance business is to achieve a sufficiently large pool of risks—as represented by portfolios of insurance contracts issued—to benefit from the fact that the average of the outcomes from a large number of similar insured risks should be distributed around the mean, and will become closer to the mean as more similar insured risks are added to the same portfolio.

1 IFRS X Insurance Contracts, which is planned for publication in early 2017, applies to any issuer of insurance contracts (with certain specified exclusions) and does not apply to a defined entity. In practice, the issuers of insurance contracts will normally be insurance companies, sometimes referred to as insurers. The monograph will use the term "entity", which is how IFRS X Insurance Contracts refers to the organization that is reporting under IFRS.
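As a purely illustrative sketch of this pooling effect (not part of the monograph), the following Python simulation uses hypothetical claim frequencies and severities to show how the average cost per policy concentrates around its expected value as the number of similar, independent insured risks in a portfolio grows:

import numpy as np

# Hypothetical portfolio of similar, independent risks: each policy produces a
# claim with 5% probability, with an exponentially distributed claim amount.
rng = np.random.default_rng(seed=42)
claim_probability = 0.05        # assumed claim frequency per policy
mean_claim_size = 10_000.0      # assumed average claim amount
expected_cost_per_policy = claim_probability * mean_claim_size  # 500.0

for n_policies in (100, 1_000, 10_000, 100_000):
    occurred = rng.random(n_policies) < claim_probability
    severities = rng.exponential(mean_claim_size, size=n_policies)
    average_cost = (occurred * severities).sum() / n_policies
    deviation = average_cost / expected_cost_per_policy - 1.0
    print(f"{n_policies:>7} policies: average cost per policy {average_cost:8.2f} "
          f"({deviation:+.1%} from the expected {expected_cost_per_policy:.2f})")

Under these assumptions, the relative deviation of the portfolio average from the expected value shrinks as the pool grows, which is the benefit of pooling described above.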
If the actual cash flows turned out to be exactly equal to the expected value of the cash flows, the pooling of those risks would produce no profit or loss to reward the insurer's activity.
In addition, the convergence of cumulative actual outcomes around the expected value would not necessarily be achieved in any particular period, in which positive or negative deviations from the expected value can be experienced.
Finally, catastrophic, extreme, or extraordinary events may occur that cannot be easily captured in the
expected value.
All these reasons suggest that faithful reporting of the financial information surrounding an expected value of insured risks requires an additional element to accompany the expected value. The risk adjustment (or risk
margin) fulfils this role. The International Actuarial Association (IAA) research paper Measurement of
Liabilities for Insurance Contracts: Current Estimates and Risk Margins notes:
The objective of the risk margin can be viewed from different perspectives. It can be seen (1) as
the reward for risk bearing, measured in terms of the inherent uncertainty in the estimation of
insurance liabilities and in the future financial return from the contract or (2) in a solvency context
as the amount to cover adverse deviation that can be expected in normal circumstances, with
capital to cover adverse deviation in more unusual circumstances.
This monograph considers the first of these objectives. In particular, it focuses on how the IASB has
incorporated in IFRS X Insurance Contracts the requirements and guidance surrounding the financial
reporting of the compensation for risk bearing that an insurer demands in order to issue its insurance contracts,
otherwise known as the risk adjustment.
Risk and uncertainty are used interchangeably in this monograph. Section 6.6 discusses how these terms might be used differently in some cases.
1.1.2 Needs of users of financial statements with respect to uncertain financial values,
subject to risks of outcomes that can differ significantly from the statement values
The IASB’s conceptual framework2 explains:
The objective of general purpose financial reporting is to provide financial information about the
reporting entity that is useful to existing and potential investors, lenders and other creditors in
making decisions about providing resources to the entity. Those decisions involve buying, selling
or holding equity and debt instruments, and providing or settling loans and other forms of credit.
This definition clarifies that, when developing an IFRS, the IASB considered the users of insurers' financial reporting products, such as their financial statements, to include an insurer's capital holders and creditors.
In that context, it is important to remember that insurance policyholders are also important users of an
insurer’s financial statements, in that their collective rights under issued insurance contracts would normally
represent the biggest creditor group of an insurer.
The conceptual framework also discusses the fundamental components of the financial reporting system
that the IASB has produced by issuing IFRSs. The definition of resources and claims as the two economic
inputs in any business activity has replaced the previous notions of assets and liabilities and allows the
general discussion on how to define financial reporting standards to be anchored on basic economic theory.
2 Conceptual Framework for Financial Reporting 2010, issued by the IASB in 2010.
Of particular interest to the matter covered in this monograph is the discussion within the conceptual
framework of users’ need to understand the changes in economic resources and claims and the degree of
variability that reported results may have as a result (emphasis added):
Information about a reporting entity’s financial performance helps users to understand the return
that the entity has produced on its economic resources. Information about the return the entity has
produced provides an indication of how well management has discharged its responsibilities to
make efficient and effective use of the reporting entity’s resources. Information about the variability
and components of that return is also important, especially in assessing the uncertainty of future
cash flows. Information about a reporting entity’s past financial performance and how its
management discharged its responsibilities is usually helpful in predicting the entity’s future returns
on its economic resources.
When applied to an entity in the insurance business responsible for paying insurance claims, this statement illustrates well users' need to understand the degree of variability inherent in the expected values calculated to measure the obligations an insurer has towards its policyholders. Indeed, the
financial reporting practices applied to insurance contracts to date have become more and more sensitive
to this need. Several instances can be observed of the efforts undertaken to explain the nature and sources
of variability that surround uncertain financial values to the users of financial information.
For example, attempts within the European life insurance industry to codify embedded value techniques
have focused heavily on the explicit allowance for risk. The European Insurance CFO Forum Market
Consistent Embedded Value Principles issued in 2008 adopted the requirement of an explicit disclosure of the provision for residual non-hedgeable risk. In the basis for conclusions, this requirement is explained
as:
Additional allowance should therefore be made for non-hedgeable financial risks and non financial
risks . . . Non-hedgeable financial risks include illiquid or non existent markets where the financial
assumptions used are not based on sufficiently credible data. Non financial risks include, mortality,
longevity, morbidity, persistency, expense and operational risks.
Another codified example of an explicit measure of the uncertainty that surrounds the estimate of insurance
cash flows can be found in the Australian Accounting Standards Board standard 1023 General Insurance
Contracts, where it is stated that “[t]he outstanding claims liability includes, in addition to the central estimate
of the present value of the expected future payments, a risk margin that relates to the inherent uncertainty
in the central estimate of the present value of the expected future payments.”
These developments appear to reflect an effort to evolve financial reporting for insurers such that the
financial results of a period are reported on a basis that offers a view that is more closely aligned with the
economic substance of the risk-taking and risk management activities that drive profits, in contrast with
prior practices that were extensively affected by the influence of solvency conservatism.
This evolution in financial reporting practices permeates the whole of the IASB’s work to codify a framework
for IFRS where the concept of prudence is abandoned in favour of neutrality.
Section 1.2 Requirements of IFRS X Insurance Contracts (specific
requirements regarding risk adjustments)
1.2.1 Summary of the IASB’s deliberations and conclusions regarding risk adjustments
The measurement of insurance contracts under IFRS X Insurance Contracts aims to provide a faithful
representation of the entity’s view on the fulfilment of the combined rights and obligations arising from the
portfolios of insurance contracts at the reporting date.
The measurement basis underlying this standard is referred to as “current fulfilment value” and is based on
the requirements in IFRS X Insurance Contracts. In particular, it requires the measurement to be current
rather than based on outdated information, and it focuses on the entity’s own assessment rather than that
of the market. It requires representing the degree of fulfilment of the contractual rights and obligations to
account for the profit or loss that emerges from in-force contracts over time.
The requirements in IFRS X Insurance Contracts include the explicit reporting of a liability for risk
adjustment that is added to the expected cash flows in determining the current fulfilment value of an
insurance contract.
The other components of the current fulfilment value are the unbiased estimate of the probability-weighted
current estimate of future cash flows, a current discount rate to reflect the time value of money and, under certain circumstances, a contractual service margin (CSM) liability. IFRS X Insurance Contracts summarises
these requirements and explains the measurement context in which the risk adjustment operates 3:
An entity shall measure an insurance contract initially at the sum of:
a) the fulfilment cash flows (an explicit, unbiased and probability-weighted estimate, i.e.,
expected value) of both the expected present value of the future cash outflows less the
expected present value of the future cash inflows that will arise as the entity fulfils the
insurance contract, adjusted to capture the compensation that the entity requires for
bearing the uncertainty about the amount and timing of those future cash flows; plus (less)
b) any cash flows that are paid (received) before the insurance contract is initially recognised
(excluding any amounts relating to premiums that are outside the boundary of the
insurance contract); plus
c) any contractual service margin (representing the unearned profit in an insurance contract).
A contractual service margin arises when the amount in (a) is less than zero (i.e., when the
expected present value of the future cash outflows plus the risk adjustment is less than the
expected present value of the future cash inflows).
The risk adjustment therefore acts as the adjustment that captures the compensation the entity requires for bearing the uncertainty about the amount and timing of the fulfilment cash flows that the contract generates.
This adjustment is a function of the uncertainty that surrounds the net cash flows. The measure of this uncertainty in the current fulfilment value is a function of the insurer's own current view of the amount that would make it (i.e., the specific reporting entity) indifferent between holding such uncertain contractual obligations and holding obligations of the same expected amount with no underlying uncertainty. IFRS X Insurance Contracts
captures this concept in this requirement: “the objective of risk adjustment should be the compensation the
insurer requires for bearing the uncertainty inherent in the cash flows that arise as the insurer fulfils the
insurance contract”.
The IFRS application guidance exemplifies this concept as follows:
[. . .] the risk adjustment measures the compensation that the insurer would require to make it
indifferent between (1) fulfilling an insurance contract liability which would have a range of possible
outcomes or (2) fulfilling a fixed liability that has the same expected present value of cash flows as
the insurance contract. For example, the risk adjustment would measure the compensation that the
insurer would require to make it indifferent between (1) fulfilling a liability that has a 50% probability
of being 90 and a 50% probability of being 110 or (2) fulfilling a liability of 100.
In other words, the current fulfilment value comprises the present value of the expected fulfilment cash flows, representing the statistical mean of the probability-weighted cash flows under the different scenarios for the risks to which an insurance contract is exposed. The risk adjustment should be an explicitly reported amount that, added to the present value of the fulfilment cash flows, would make the insurer indifferent between carrying on its balance sheet this present value of fulfilment cash flows (expected value plus risk adjustment) and carrying the same value without exposure to the underlying uncertainty in the fulfilment cash flows.

3 IFRS X Insurance Contracts (planned for publication in 2017).
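To make the quoted measurement concrete, the following minimal numerical sketch (with hypothetical figures, not a prescribed IFRS calculation) walks through the initial measurement: the fulfilment cash flows in item (a), and the contractual service margin in item (c) that arises when (a) is less than zero:

# All amounts are assumed present values for a newly recognised group of contracts.
pv_expected_inflows = 1_000.0    # expected present value of future premiums (assumed)
pv_expected_outflows = 850.0     # expected present value of claims and expenses (assumed)
risk_adjustment = 60.0           # compensation required for bearing uncertainty (assumed)

# (a) Fulfilment cash flows: expected PV of outflows less inflows, adjusted for risk.
fulfilment_cash_flows = pv_expected_outflows - pv_expected_inflows + risk_adjustment   # -90.0

# (b) Pre-recognition cash flows are taken as nil in this sketch.
# (c) A contractual service margin arises when (a) is less than zero, deferring the
# unearned profit so that no gain is recognised at initial recognition.
contractual_service_margin = max(0.0, -fulfilment_cash_flows)                          # 90.0

initial_liability = fulfilment_cash_flows + contractual_service_margin                 # 0.0
print(fulfilment_cash_flows, contractual_service_margin, initial_liability)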
1.2.2 Summary of the key requirements and more detailed explanations of the rationale
and considerations that underlie the IFRS requirements related to risk adjustment
Central to the risk adjustment calculation is the entity’s appetite for the risks associated with the insurance
contracts in force at the balance sheet date. As noted above, IFRS X Insurance Contracts requires each insurer to identify its specific financial indifference point between accepting the exposure from the in-force risks and bearing no risk.
The factors that would affect this calculation are not limited under IFRS X Insurance Contracts and will differ
depending on the specific nature of the entity’s business. For this reason the computation of the risk
adjustment should reflect risk appetite considerations at the reporting entity level.
One factor that will be present for all entities calculating the risk adjustment is the effect from pooling similar
risks. Drawing on the law of large numbers, assembling portfolios of insurance contracts with similar risks
usually results in the expected fulfilment cash flows being closer to the accumulated actual cash flows paid
and received from the portfolio of risks. This factor is implicit in any portfolio of insurance contracts and
delivers its benefit irrespective of all other factors present in the entity.
IFRS X Insurance Contracts identifies five basic qualitative principles for consideration when estimating the
liability for risk adjustment as follows 4:
[. . .] the risk adjustment shall have the following characteristics
(a) risks with low frequency and high severity will result in higher risk adjustments than risks
with high frequency and low severity;
(b) for similar risks, contracts with a longer duration will result in higher risk adjustments than
those of a shorter duration;
(c) risks with a wide probability distribution will result in higher risk adjustments than those
risks with a narrower distribution;
(d) the less that is known about the current estimate and its trend, the higher the risk
adjustment shall be; and
(e) to the extent that emerging experience reduces uncertainty, risk adjustments will decrease
and vice versa.
These are considerations, additional to the pooling of similar risks, that would affect the insurer's assessment of the risk adjustment.
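The following short sketch (not from the monograph) illustrates characteristic (c) under one simple technique: if a quantile (confidence level) measure were used, a wider probability distribution of outcomes produces a larger risk adjustment for the same expected value. The normal distributions and the 75% level are assumptions chosen only for illustration:

from statistics import NormalDist

confidence_level = 0.75                      # assumed target confidence level
z = NormalDist().inv_cdf(confidence_level)   # ~0.674 standard deviations above the mean

for label, std_dev in (("narrow distribution", 50.0), ("wide distribution", 200.0)):
    # Same expected value of 1,000 in both cases; only the spread of outcomes differs.
    risk_adjustment = z * std_dev            # quantile of outcomes less the mean
    print(f"{label}: risk adjustment {risk_adjustment:.1f}")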
Risks arising from a portfolio of insurance contracts that an entity holds could be negatively correlated with
each other, thus offering a reason for the entity to add a smaller amount to the present value of fulfilment
cash flows in order to make it indifferent to the cash flows without uncertainty. For example, portfolios that
expose the entity to the risk of paying cash flows on the deaths of its policyholders are negatively correlated
with portfolios where the obligation is to pay cash flows to policyholders for as long as they survive beyond
a particular date.
IFRS X Insurance Contracts allows the effect of negative correlation among different portfolios of insurance contracts that belong to the same reporting entity to be reflected, provided the entity takes that effect into account in determining the compensation it requires to bear the uncertainty contributed by those portfolios.
4 IASB exposure draft of IFRS X Insurance Contracts, issued in 2013.
Furthermore, insurers may enter into risk-mitigating activities that affect the compensation an insurer
requires. For example, an entity could enter into reinsurance contracts in order to transfer portions of the
uncertainty from the in-force insurance contracts to reinsurers. In this instance, the impact of reinsurance
contracts purchased by the entity is subject to specific requirements under IFRS X Insurance Contracts.
These requirements mandate that the entity measures the risk adjustment for the insurance contracts it has issued without the benefit of the reinsurance protection purchased. Instead, the risk mitigation achieved by purchasing reinsurance is reported as an explicit component of the reinsurance contract's current fulfilment value, with the value of the reinsurance assets reported separately in the entity's balance sheet.
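As a simple numerical sketch of this separate presentation (hypothetical figures, not prescribed amounts), the risk adjustment on the issued contracts is measured gross of reinsurance, and the risk reduction obtained from the reinsurance held appears in the measurement of the reinsurance asset:

gross_risk_adjustment = 120.0   # risk adjustment for the issued contracts, ignoring reinsurance (assumed)
net_risk_adjustment = 70.0      # risk adjustment the entity would require after the risk ceded (assumed)

# Risk adjustment recognised within the fulfilment value of the reinsurance
# contracts held, representing the risk transferred to the reinsurer.
reinsurance_risk_adjustment = gross_risk_adjustment - net_risk_adjustment   # 50.0

print(f"Risk adjustment on insurance contracts issued (gross): {gross_risk_adjustment:.0f}")
print(f"Risk adjustment within the reinsurance asset:          {reinsurance_risk_adjustment:.0f}")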
Section 1.3 Risk margins or risk adjustments in other contexts (e.g., pricing, economic capital, Solvency II)
1.3.1 Risk metrics used for operational, capital and solvency management as compared
to the risk adjustment under IFRS
The valuation of insurance contract liabilities under IFRS X Insurance Contracts is based on principles and
measurement objectives that are similar to, but differ in several aspects from, other insurance contract
valuation frameworks5.
The IFRS principles for risk adjustment are comparable in concept to those used in other types of actuarial
valuations, and there are common terms underlying the IFRS framework that are used in other insurance
contract frameworks.
However, there are also several significant differences for IFRS risk adjustments that will need to be
considered in applying the principles in practice. One important difference is that the IFRS principles for the
valuation of insurance contract liabilities are not based on the solvency requirements of an insurer. Capital
requirements, capital adequacy assessments, risk-based capital requirements and other solvency tests
applied to an insurer’s capital or surplus are not directly relevant6 to the risk adjustment under IFRS X
Insurance Contracts.
Another difference is that under IFRS X Insurance Contracts the valuation of insurance contract liabilities
is not directly based on the management of the insurance asset-liability cash flows nor on the market yields
or price of the entity’s invested assets. Rather, insurance liabilities are valued by applying the applicable
discount rates (yield curve) to the estimated fulfilment cash flows.
The valuation of insurance liabilities under IFRS may be compared to a market-consistent valuation
approach to assist in understanding the IFRS framework for risk adjustments. Under a market-consistent
valuation, the cost of capital is based on an amount of capital chosen by the entity. It could be based on
the entity’s own capital requirements or risk appetite. It could also be defined based on capital adequacy
requirements (of the applicable supervisory jurisdictions or the markets in which the entity operates) or on
the actual capital held. The amount of capital also depends on the time horizon for which the capital
amounts are estimated to be held. The cost-of-capital rate under a market-consistent valuation can be
thought of as a measure of the excess return over the risk-free rate that investors expect to be compensated
for investing in the entity.
Cost-of-capital is recognised as a valid technique for estimating risk adjustments under IFRS X Insurance
Contracts. However, there are no rules or detailed guidance provided regarding the choice or criteria for
the amount of capital or the cost-of-capital rate. The appropriate time horizon for the capital amount for
IFRS risk adjustments is the lifetime of the fulfilment cash flows. The guidance under IFRS X Insurance Contracts provides a principles-based measurement objective for the risk adjustment as the basis for determining the elements and parameters to be used for the cost-of-capital technique.

5 Risk adjustments are also incorporated into the IAIS Insurance Core Principles (ICPs) as Margin over Current Estimates.

6 Under the cost-of-capital approach, solvency requirements may be a starting point for an entity in allocating or assigning capital to associated cash flows.
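A minimal sketch of the cost-of-capital technique described above is shown below. The projected capital amounts, the 6% cost-of-capital rate, and the flat 2% discount rate are hypothetical inputs chosen by the entity; IFRS X Insurance Contracts does not prescribe them:

# Capital assumed to be held at each future year-end over the lifetime of the
# fulfilment cash flows (hypothetical run-off pattern).
projected_capital = [500.0, 400.0, 280.0, 150.0, 60.0]
cost_of_capital_rate = 0.06     # assumed required excess return over the risk-free rate
discount_rate = 0.02            # assumed flat risk-free discount rate

# Risk adjustment as the discounted cost of holding the projected capital.
risk_adjustment = sum(
    cost_of_capital_rate * capital / (1.0 + discount_rate) ** (year + 1)
    for year, capital in enumerate(projected_capital)
)
print(f"Cost-of-capital risk adjustment: {risk_adjustment:.1f}")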
In other words, the capital requirements imposed by insurance supervisors or other insurance regulatory
frameworks serve a different purpose than that called for under IFRS reporting. Similarly, the intended
measure of the risk adjustments under IFRS X Insurance Contracts may not necessarily align with external
market demands for certain levels of capital, solvency protection or market returns on capital. In fact, the
reporting entity should consider its own compensation requirements for bearing risk and uncertainty,
focused on the fulfilment cash flows associated with its insurance liabilities as of the reporting date. In
particular, this does not include consideration of the compensation for bearing investment risk or the risk
that the company will not be able to obtain new business or to retain business without renewal options.
The principles under IFRS X Insurance Contracts are applied to the entity’s particular point of view with
respect to its desire for compensation for bearing uncertainty. This concept of compensation under IFRS
may not align with regulatory solvency metrics or other notions of risk-adjusted value. However, similar concepts of risk versus return (compensation for bearing uncertainty) are appropriate for an entity to consider when calibrating its specific compensation model.
1.3.2 Key differences in measuring risk, estimating risk values, and reporting of risk
adjustments under IFRS
The approach to the valuation of the insurance liabilities under IFRS X Insurance Contracts is also different
from what might be used for market-consistent, fair value, transfer valuation, settlement value, market
model valuation, or valuations based on specific entity costs. Moreover, the concept of risk adjustment
under IFRS is not tied to the market’s valuation of risk, but rather the specific entity’s valuation of risk. For
example, an entity’s insurance pricing practices reflect its risk preferences as a result of its own risk appetite,
and are therefore potentially relevant when evaluating the risk adjustment under IFRS. In addition, an
entity’s consideration in pricing its insurance products may reflect its risk preferences in terms of its desired
competitive position in the marketplace. These types of entity-specific and market related inputs are treated
differently in the determination of the risk adjustment under IFRS compared to other valuation frameworks.
Therefore, risk adjustments under IFRS X Insurance Contracts are intended to reflect the risk preferences
of the specific entity. Consequently, risk adjustment comparisons between insurers with similar insurance liability risks will not be purely comparisons of risk measurement; rather, such comparisons will reflect the combined effect of the estimated risk in the cash flows and the value that the entity assigns to such risks based on its own risk preferences. Similar entities could have very different risk preferences or different
assessments about the measurement of the risk and uncertainty associated with their specific insurance
contract fulfilment cash flows.
The requirements under IFRS X Insurance Contracts include the explicit reporting of a liability for risk
adjustment that is added to the expected cash flows in determining the current fulfilment value of an
insurance contract. IFRS X Insurance Contracts also requires the disclosure of an equivalent confidence
level associated with the entity’s reported risk adjustment as a means of benchmarking the entity’s
performance against that of other entities.
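The following sketch (an illustration only, with an assumed lognormal liability model and hypothetical figures) shows how such an equivalent confidence level can be derived: it is the percentile of the distribution of outcomes to which the expected value plus the reported risk adjustment corresponds:

import numpy as np

rng = np.random.default_rng(seed=1)
# Hypothetical distribution of the present value of the fulfilment cash flows.
simulated_liability = rng.lognormal(mean=np.log(1_000.0), sigma=0.15, size=100_000)

expected_value = simulated_liability.mean()
reported_risk_adjustment = 80.0   # the entity's reported risk adjustment (assumed)

# Equivalent confidence level: proportion of simulated outcomes covered by the
# expected value plus the risk adjustment.
confidence_level = (simulated_liability <= expected_value + reported_risk_adjustment).mean()
print(f"Equivalent confidence level: {confidence_level:.1%}")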
There are existing financial and regulatory reporting systems that require risk adjustments or risk margins
to be calculated, such as Solvency II, the Swiss Solvency Test, and Australian Financial and Regulatory
Reporting as defined in AASB Standard 1023.
For example, the Swiss test defines the risk margin of an insurance portfolio as the hypothetical cost of
regulatory capital necessary to run off all the insurance liabilities following financial distress of the company.
The focus of the risk margin is on policyholder protection in a solvency context.
Under AASB Standard 1023, the risk margin is to allow for the inherent uncertainty in the central estimate
of the present value of the expected future payments for insurance claims. It is determined on a basis that
reflects the insurer’s business, with considerations given to robustness of the valuation models, reliability
of the available data, past experience of the insurer and industry, the characteristics of the written business,
etc. The bibliography provides additional materials relevant to these reporting systems.
Possible Appendix/Side panel/Sidebar/Footnote(s) re: references
highlighting Solvency II technical provisions which are similar to the
principles under IFRS X
The Solvency II Directive (Article 77):
Calculation of technical provisions
1. The value of technical provisions shall be equal to the sum of a best estimate and a risk
margin [. . .]
2. The best estimate shall correspond to the probability-weighted average of future cash-flows
[. . .]
The best estimate shall be calculated gross, without deduction of amounts recoverable
from reinsurance contracts [. . .]
The implementation details set out general rules for the calculation of these amounts, including:
• The amount recoverable shall be calculated consistently with the boundaries of the underlying insurance or the reinsurance contracts to which they relate.
• The cash flows used in the calculation shall only include payments in relation to compensation of insurance events and unsettled insurance claims.
Solvency II establishes that the calculation of the risk margin shall be based on the following assumptions:
a) the whole portfolio of insurance and reinsurance obligations of the insurance or reinsurance
undertaking that calculates the risk margin (the original undertaking) is taken over by another
insurance or reinsurance undertaking (the reference undertaking);
c) the transfer of insurance and reinsurance obligations includes any reinsurance contracts [. . .]
relating to these obligations;
g) after the transfer, the reference undertaking has assets which amount to the sum of its Solvency
Capital Requirement [SCR] and of the technical provisions net of the amounts recoverable from
reinsurance contracts [. . .]
i) the Solvency Capital Requirement of the reference undertaking captures the following risks:
(i) underwriting risk with respect to the transferred business,
(ii) where it is material, the market risk referred to in point (h), other than interest rate risk,
(iii) credit risk with respect to reinsurance contracts [. . .],
(iv) operational risk[.]
In a transfer of insurance obligations, a transfer of assets that cover those obligations will typically also take
place. Consequently, there might be market risk linked to those assets.
In this context, it can be assumed that the reference undertaking will de-risk these assets to reduce the
SCR related to market risk. For example, the reference undertaking can sell investments in equity or
property to avoid the corresponding risks, or it can sell corporate bonds and buy government bonds instead
to reduce credit spread risk.
However, for particular kinds of insurance obligations not all market risks can be avoided. For example, if
the insurance obligations have a very long duration, it may not be possible to match the cash flows
completely. The mismatch may give rise to a significant interest rate risk.
The logic behind point (i) is that the reference undertaking is subject to underwriting risk corresponding to
the transferred insurance and reinsurance obligations, and these risks exist throughout the obligations’
lifetime.
Chapter 2 – Principles Underlying Risk Adjustments
Abstract
This chapter provides background and concepts underlying a principles-based framework for the estimation
and evaluation of risk adjustments for financial reporting under IFRS X Insurance Contracts.
The first part discusses principles and concepts of financial value related to the adjustment of economic
value measurement for risk considerations. It also discusses risk adjustment as a core part of valuation
principles of insurance liabilities, and how this is related to the valuation of insurance liabilities under IFRS
X Insurance Contracts.
The second part discusses the types of risk and characteristics of risk and uncertainty that risk adjustments
are intended to reflect, including the concept of diversification, and the ability to produce meaningful
quantification that reflects the entity’s risk preferences.
The third part discusses the criteria for assessing the appropriateness of risk adjustment methods under
IFRS X Insurance Contracts.
Section 2.1 Valuation principles
As background to understanding the valuation of risk for financial reporting, the following discussion compares financial value concepts with risk-adjusted valuation.
2.1.1 Concepts of financial value
In general, financial values are proxy (substitute or equivalent) measures of economic value, which could
be based on transactions between willing parties that are observed in an open market, or measures that
are not market-observable but reflect benefits specific to certain entities that have the ability to realise the
benefits. Examples of financial values are cost, value in use, or trading (transaction or market) value, related
to the control or ownership of an economic good or service. These financial values have common economic
elements but vary in terms of whom the concept applies to and how the value is observed or derived, as
described below.
Cost
This is the monetary amount exchanged to buy or sell an economic good or service. Cost is usually based
on the actual price paid (the monetary amount provided by the buyer to the seller) for an economic good or
service at the point of exchange. In the insurance world, the economic good or service can refer to the
insurance protection or benefits offered by an insurance product. This cost is an indication of the value to
the buyer. However, the cost will depend on the parties making the exchange. The value to the buyer could
be greater than the cost paid by the buyer. The value to the seller could also be less than the cost received
by the seller. Of course, buyers and sellers do not always equate their cost to the value of the good or
service after the transaction. Cost can also be used to mean the average price paid in a number of similar
transactions in a liquid market during a period of time.
Trading, transaction, or market value
In general, this is the value that can be realised by means of a transaction between willing parties, i.e.,
buyer and seller. This value can be observed in a market where similar trades are commonly made, usually
an exchange of money for an economic good or service. This type of value is most applicable where there
are a large number of buy/sell transactions of similar items (economic goods/services). The value is
represented by the average of the prices that buyers and sellers are willing to exchange, i.e., money for the
economic goods or services. While there may not be a single price that dominates the majority of the similar
transactions, the trading value is sometimes represented by a closing price, or an average price, within a
specified period. For financial markets, such prices are typically represented by the closing price for the last
transactions of the day (or week/month) as the market exchange for such transactions closes.
The concept of a trading or market value is difficult to apply to the valuation of the liability for the future uncertain cash flows associated with the fulfilment of insurance contracts.
The main difficulties arise from the lack of frequent buy/sell transactions for similar insurance liabilities or
insurance assets. Without an active market or an organised exchange with a significant volume of similar
transactions, it is not possible to observe prices, which is the basis of a trading, or market, value.
However, the observation of the prices of common insurance contracts (e.g., premium rates) that are bought
and sold regularly may be of some relevance. The market for such transactions involves insurance
companies, who sell new or renew insurance contracts to individuals or businesses who desire insurance
protection or insurance benefits related to specified events typically covered by insurance contracts.
The price paid for insurance contracts is normally only recorded by the company whose insurance price
(premium) is accepted by the policyholder. The prices for other insurance sales by other companies are not
typically collected in a way that might allow for the determination of a market price among different sellers.
However, there may be other sources of individual prices or aggregated pricing data that may provide some
indications of the relationship between the prices paid for insurance contracts (premiums) and the expected
costs associated with providing the benefits and services covered by the insurance contracts. This
relationship could be an indication of the expected profitability of a portfolio of insurance contracts sold in
the market over a specified period7. Therefore, the difference between the premium paid by policyholders
for a portfolio of insurance contracts and the expected cost of the services and benefits to be provided at a
point in time can be an indicator of the expected profits of the portfolio.
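As a tiny numerical illustration (hypothetical amounts), the gap between the premiums collected for a portfolio and the expected cost of the benefits and services to be provided is the kind of profitability indicator described above, and is closely related to the loss ratio referred to in footnote 7:

premiums_collected = 1_000.0                  # assumed premiums for a portfolio over a period
expected_benefit_and_service_cost = 780.0     # assumed expected cost of benefits and services

expected_profit = premiums_collected - expected_benefit_and_service_cost   # 220.0
loss_ratio = expected_benefit_and_service_cost / premiums_collected        # 0.78
print(f"Expected profit indicator: {expected_profit:.0f}; loss ratio: {loss_ratio:.0%}")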
Value-in-use
This is a value that is specific to the economic benefits that can be derived from owning or controlling an
economic good (having the ability or the right to use an economic good or service). It is specific to the entity
that is able to realise the expected economic benefits since another party might not be able to realise the
same economic benefits simply by owning or controlling the economic good or service.
For example, consider insurance, such as fire insurance, where the probability of a policyholder or
beneficiary actually receiving any insurance benefits is small. The purchaser of the insurance is able to reduce or eliminate the potential need for financial resources (including contingent resources) that would arise if the property were not insured and then was damaged or destroyed by fire. The risk of a decline in the value of the property due to damage from a fire is mitigated by the purchase of insurance. Consequently, the value-in-use of the property is increased since the value of the property is protected by the insurance that will pay
for the repair/replacement of the property in the event of fire. The cost of the insurance is usually very low
compared to the potential loss of value if the property were damaged or destroyed, and the probability of a
fire may not be a major factor relative to the value of the property when it is protected by insurance.
Settlement value
This is a value specific to a set of obligations between parties involved, and represents the ability to
negotiate or determine a monetary amount that can be agreed upon by the parties to be paid in return for
settlement of the obligations, and thereby be released from those obligations. For example, an insurance
claim for damaged property may involve reimbursement of the cost of repairs, replacement, rebuilding, and
rental of temporary property. A claim settlement amount may be agreed upon so that the insurance company can pay a single amount and then is relieved of paying a number of bills for property repairs, replacement costs, or rebuilding costs.

7 For example, industry or competitor loss ratio data (losses divided by premiums) for short duration insurance business.
Another example would be the settlement value in the case of transferring benefits or obligations under a
long-duration insurance contract, such as life insurance. The settlement value of a life insurance contract
would need to consider the expected net cash flows during the remainder of the coverage period, as well
as the existing cash value under the contract.
Current fulfilment value
In comparison, IFRS X Insurance Contracts requires a current fulfilment value measurement at the end of
each reporting period, based on fulfilment cash flows arising from insurance contracts (i.e., an entity would
fulfil its obligation by delivering those cash flows). The use of a current fulfilment value measurement model
for the insurance contract liability provides transparent reporting of changes in the insurance contract
liability and complete information about changes in estimates, as well as a measure of the economic value
of options and guarantees embedded in insurance contracts.
2.1.2 Value adjustment for risk
The measurement objective for the valuation of liabilities for insurance contracts under IFRS X Insurance
Contracts includes a risk adjustment to appropriately reflect the effect of risk and uncertainty on the
economic value of the insurance contract liabilities.
The economic value is a measure that reflects the risk preferences and risk tolerances of people and
businesses in the value of an economic good or service with respect to the risk and uncertainty that is
recognised to exist. In some situations, common risk preferences can be observed in the marketplace with
respect to the value of goods and services whose costs or benefits are uncertain. The risk adjustment under
IFRS X Insurance Contracts represents the economic value of risk and uncertainty that is unique to the
reporting entity based on its risk preferences. As to specifically whose risk preferences are measured for
the purpose of the risk adjustment, and who represents the entity when determining an amount of
compensation for bearing the risks, the IFRS X Insurance Contracts guidance indicates that the risk
adjustment is from the perspective of the shareholders who bear the risks of the reporting entity. However, in order to carry out the calculation, management is responsible for interpreting the shareholders' perspective and risk preferences and applying them to the management of the insurance business.
Probability models, sometimes described as stochastic or statistical models, are useful mathematical
formulas used to represent the uncertain outcomes associated with the insurance liabilities. The term
“random variable” is used in such models to denote the unknown variable of interest. Such a random
variable is believed to follow a pattern of occurrence based on observations, theories about the stochastic
nature of the variable, or by assumption. The pattern of occurrence is stated in terms of the probability of
various outcomes where such outcomes will be the value of the random variable, e.g., an insurance claim,
or in some cases the occurrence (or non-occurrence) of an event such as the timing of an accident or time
of death. The pattern of occurrence is described mathematically by a probability function.
In the application of probability models to insurance liabilities, data from observations—such as from past
events—is analysed in one or more ways to develop or calibrate a probability model that is believed to
approximately represent the pattern of occurrence, i.e., the probability of different outcomes. The probability
model will attempt to appropriately represent the outcomes that are significant to the risk preferences being
applied. Some extreme outcomes may be thought to have very low probabilities. In such cases, it would be
helpful to have an assessment of the probability-weighted impact of such extreme outcomes to indicate if it
is potentially material to the risk preferences.
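The following sketch (illustrative only; the lognormal model and the data are assumptions) shows the kind of calibration and extreme-outcome assessment described above: a simple probability model is fitted to observed claims, and the probability-weighted contribution of outcomes beyond the 99th percentile is examined:

import numpy as np

observed_claims = np.array([1_200.0, 3_400.0, 800.0, 15_000.0, 2_100.0,
                            5_600.0, 950.0, 7_300.0, 1_800.0, 2_700.0])

# Calibrate a lognormal model by matching the mean and standard deviation of
# the logged observations.
log_claims = np.log(observed_claims)
mu, sigma = log_claims.mean(), log_claims.std(ddof=1)

rng = np.random.default_rng(seed=7)
simulated = rng.lognormal(mu, sigma, size=200_000)

threshold = np.quantile(simulated, 0.99)    # "extreme" outcomes beyond the 99th percentile
extreme_share = simulated[simulated > threshold].sum() / simulated.sum()
print(f"Calibrated mu={mu:.2f}, sigma={sigma:.2f}; "
      f"share of expected cost from outcomes beyond the 99th percentile: {extreme_share:.1%}")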
The objective of a risk adjustment is to provide a quantitative assessment of risk based on the entity’s risk
preferences and the financial effect that the risk assessment has on the value being measured. In meeting
this objective, there are elements that can be considered in developing the risk adjustment:
1. The risk preferences of the entity, which represents management’s views about the compensation
for bearing risk and uncertainty that is appropriate for the entity and how the value of such
compensation is reflected in the entity’s reported financial statement values;
2. The ability to identify, understand, analyse, and quantify the key drivers of risk and create one or
more models that can be used as an approximate measure of risk;
3. The level of risk and probability analyses, the complexity of the probability or stochastic models,
and the interactions or interdependencies of the risk drivers; and
4. The ability to explain and quantify the risk adjustments in the context of the financial statements
where the risk adjustments are reported.
2.1.3 Valuation principles for insurance liabilities – the framework under IFRS X Insurance
Contracts
The valuation of insurance liabilities is fundamental to the work of many actuaries. The application of the
underlying principles for insurance liabilities is based on a thorough understanding of the insurance
contract obligations of an insurer, and the drivers of the cash flows associated with fulfilling those
obligations.
The actuary typically evaluates those fulfilment cash flows using various mathematical approaches,
statistical methods, or models representing the risk associated with uncertain cash flows. The evaluation of
the cash flows will take into account the relevant cash inflows and cash outflows associated with the
insurance contract. Such cash flows can include both the cash flow payments by the insurer as well as the
cash flows received by the insurer. For example, insurance can provide for the payment of benefits, the
payment of losses (claims by third parties 8), the investment of funds, the indemnification of costs, and the
delivery of services. The insurance contract will specify the insurer’s obligations to the policyholder, the
beneficiaries, or third parties. It will also specify the insurer’s rights to receive cash flows from the
policyholder or owner of the insurance contract. These are referred to under IFRS X Insurance Contracts
as the insurance contract fulfilment cash flows 9. It is important to note that these are limited to the cash
flows specified by the contract. For example, under IFRS X Insurance Contracts, fulfilment cash flows do
not include the general overhead expenses of the insurer that might be allocated to a contract or group of
contracts. Similarly, the investment cash flows from the invested assets supporting the insurance liabilities
would not be fulfilment cash flows unless the investment cash flows were directly linked as specific
obligations to the policyholder per the insurance contract.
The insurance contract cash flows under IFRS X Insurance Contracts consist of several elements and are
typically subject to variability, risk, and uncertainty (these attributes are referred to interchangeably in this
monograph). Under IFRS X Insurance Contracts, the valuation of insurance contract liabilities for future
uncertain contract fulfilment cash flows is divided into three building blocks.
Building block one
The first block is the estimation of the expected value of the uncertain future cash flows associated with
insurance contracts for which a value is being assigned.
The concept of expected value under IFRS X Insurance Contracts is an unbiased estimate of the statistical
mean. This is also described as the probability-weighted expected value of the cash flows. The importance
of this estimate being unbiased is to avoid the inclusion of any implicit or explicit provision, adjustment, or
margin for risk or uncertainty. That is, the level of uncertainty in the estimate should not be considered in
an unbiased estimate of the mean. The underlying probabilities in a probability-weighted expected value
should be free from bias or considerations that might be described as prudent, conservative, or risk-sensitive.

8 Third parties refers to someone other than the insurer, the policyholder, or the beneficiaries of the policyholders. For example, losses paid for liability claims.

9 The guidance in IFRS X Insurance Contracts provides a more complete description of the fulfilment cash flows.
Under IFRS X Insurance Contracts, the unbiased estimate of the statistical mean is meant to exclude any
level of prudence or other implicit reflection of conservatism, especially where the estimate is subject to
significant risk and uncertainty. The unbiased estimate does take into account the current knowledge about
possible outcomes and the realistic (unbiased) probabilities of such outcomes, to the extent that there are
relevant data or analyses, or there is knowledge, supporting the unbiased estimate of the mean. For
example, where a best estimate is based on historical averages, but it also considers the variability of
historical averages or the uncertainty in using such averages as a basis for the estimate, there may be
some implicit “bias” to ensure that the estimate is a reasonable one. Under IFRS X Insurance Contracts,
such implicit (or explicit) considerations that otherwise may be acceptable as reasonable and appropriate
would not meet the measurement objective under IFRS for building block one. The reason for these
limitations on an unbiased estimate is to clearly separate out the considerations for risk and uncertainty and
include such considerations in the risk adjustment.
The principles of building block one under IFRS X Insurance Contracts for an unbiased, probability-weighted current estimate of the expected value (mean) are consistent with the guidance for the Best
Estimate under Solvency II. However, actuarial estimates of insurance liabilities under other frameworks
may not meet the unbiased criteria, either statistically or in concept. In such cases, the actuarial estimates
will need to be revised or modified to meet the requirements of IFRS X Insurance Contracts – an unbiased
estimate that represents the probability-weighted expected value over the range of possible outcomes.
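A minimal sketch of building block one is shown below: an unbiased, probability-weighted expected value of scenario outcomes, with no prudence or margin added (the scenarios and probabilities are hypothetical). Any loading for risk or uncertainty belongs in the risk adjustment of building block three, not here:

# Hypothetical scenarios for the present value of fulfilment cash outflows.
scenarios = [
    (0.50, 800.0),     # (probability, outcome)
    (0.30, 1_000.0),
    (0.15, 1_400.0),
    (0.05, 2_500.0),
]
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9   # probabilities sum to one

expected_value = sum(p * outcome for p, outcome in scenarios)   # 1,035.0
print(f"Unbiased probability-weighted mean: {expected_value:.0f}")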
Building block two
Building block two is an adjustment to the expected value of cash flows for the timing of the cash flows. The
timing of cash flows is recognised to have an impact on their economic value. The guidance in IFRS X
Insurance Contracts requires an adjustment for the time value of money10. This block is needed to reflect the time preference in the value of cash flows, under which a cash flow that occurs further in the future has a lower value at the present time, i.e., a lower present value 11.
The selection of appropriate discount rates should be based on the following guidance in IFRS X Insurance Contracts:
• Current observable market prices for financial instruments whose characteristics are similar to the insurance liability cash flows in terms of the timing, currency, and liquidity,
• An appropriate yield curve that reflects current market returns for the actual portfolio of assets, or current market returns for a reference portfolio that is selected based on observable prices for financial instruments with cash flow characteristics similar to the fulfilment cash flows,
• Adjustments to the portfolio of financial instruments cash flows (and the corresponding adjustment to the yield curve) to reflect matching the timing of the fulfilment cash flows (duration),
• Adjustments to the portfolio of financial instruments cash flows (and the corresponding adjustment to the yield curve) to remove the impact of the cash flows risks reflected in the market price of the financial instruments (credit risk, other investment risks) that are not relevant to the fulfilment cash flows, and
• Where insurance liabilities are subject to future inflation and the estimates of fulfilment cash flows reflect an estimate of the future inflation, the discount rates should also reflect the same future economic conditions associated with the inflation rates reflected in the fulfilment cash flows.
10 The paper The Principles Underlying Actuarial Science provides further explanation of the economic rationale for the time value of money.

11 Present value is also sometimes referred to as the discounted value, discounted present value, or net present value.
IFRS X Insurance Contracts provides the following guidance on the last item discussed above:
Estimates of cash flows and discount rates shall be consistent to avoid double-counting or
omissions. For example:
(a) To the extent that the amount, timing or uncertainty of the cash flows that arise from an
insurance contract depends wholly or partly on the returns from underlying items, the
characteristics of the liability include that dependence and the discount rate used to
measure those cash flows shall therefore reflect that dependence.
(b) Nominal cash flows (i.e., those that include the effect of inflation) shall be discounted at
rates that include the effect of inflation.
(c) Real cash flows (i.e., those that exclude the effect of inflation) shall be discounted at rates
that exclude the effect of inflation.
The IAA’s educational monograph Discount Rates in Financial Reporting – A Practical Guide provides
extensive discussion and additional considerations related to the determination and use of discount rates
for building block two.
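As a simple numerical sketch of building block two (all figures below are hypothetical and are not drawn from the standard or from this monograph), the present value of a set of expected cash flows can be obtained by discounting each cash flow at the spot rate appropriate to its timing, keeping the consistency requirements above in mind (e.g., nominal cash flows discounted at nominal rates):

```python
# Illustrative only: discounting expected annual cash flows with a hypothetical
# spot yield curve. The cash flows and rates are invented for illustration.

expected_cash_flows = [120.0, 110.0, 95.0, 80.0]   # expected outflows at t = 1..4
spot_rates = [0.020, 0.022, 0.025, 0.027]          # hypothetical annual spot rates

present_value = sum(
    cf / (1 + r) ** t
    for t, (cf, r) in enumerate(zip(expected_cash_flows, spot_rates), start=1)
)

print(f"Present value of expected cash flows: {present_value:.2f}")
```

In practice, the yield curve itself would be derived following the guidance above, including the use of a reference portfolio, duration matching, and the removal of risks that are not relevant to the fulfilment cash flows.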
Building block three
The main purpose of building block three is to capture considerations related to uncertainty in the estimate, the variability of possible outcomes, and the risk of misestimation. This block contains two elements,
and this monograph addresses the first: an adjustment for the risk associated with the uncertain, variable
cash flows. The second element is a CSM, an additional requirement under IFRS X Insurance Contracts
that is not addressed in this monograph. However, there is a relationship between the risk adjustment and
the CSM for some types of insurance contracts and their related insurance liabilities. The CSM is reduced
by the risk adjustment estimate at the inception of the insurance contract. The estimated risk adjustment
for future benefits, losses, and services can change from period to period. Consequently, the updating of
the CSM for each period will depend on the estimated risk adjustment at the end of each period.
An important principle underlying the risk adjustment required under IFRS X Insurance Contracts is to
recognise that the level of risk and uncertainty associated with future cash flows affects the valuation of the
cash flows before they occur.
The risk adjustment may depend on whether the cash flows are payments or receipts. The definition of the risk adjustment under IFRS X Insurance Contracts, and the related guidance contained in the standard, make it a measurement specific to each entity. Risk aversion, risk preferences, risk appetite, and risk tolerance are some of the important considerations in determining the risk adjustment. Each insurer, however,
decides on the measurement methods or models that can consistently be applied to the future cash flows
associated with its insurance liabilities. As will be discussed further in this monograph, there are additional
considerations, such as the effect of aggregation, which can affect the determination of the risk adjustment.
Those liabilities where an insurer has obligations to make payments would have a risk adjustment that
increases the value of the insurer’s liabilities. The greater the risk and uncertainty there is in the net cash
outflows, the larger the risk adjustment. For insurer cash flows which are receipts of uncertain cash inflows,
the result of the uncertainty in those inflows would be a positive increment to the risk adjustment,
notwithstanding that the cash flows are inflows rather than outflows. Thus, the mean of those cash inflows is effectively reduced by a positive increment to the risk adjustment. The more risk and uncertainty associated with the cash inflows, the larger the increment to the risk adjustment, which offsets a portion of the mean present value of the insurance contract cash flow receipts.
The cash flows associated with insurance contracts can include both payments and receipts. In other words,
the risk adjustment associated with the insurance contract fulfilment cash flows would take into account the
collection of cash inflows (receipts) and cash outflows (payments). These cash flows could be considered
collectively in estimating the risk adjustment, rather than separating the risk adjustment from the cash
inflows from the risk adjustment from the cash outflows, or their components. In many situations, the risk
and uncertainty associated with future cash outflows from insurance contracts are expected to be more
significant than the risk and uncertainty associated with future cash inflows from insurance contracts.
In such situations, the risk adjustment for the combined cash flows would always be a positive liability. Moreover, the intent of the risk adjustment under IFRS X Insurance Contracts is only to adjust
the valuation of insurance liabilities where the risk adjustment is a positive liability. Consequently, the risk
and uncertainty in the cash inflows and cash outflows would be considered together for the risk adjustment,
but the value of the risk adjustment should always be a positive liability. Otherwise, the risk adjustment
would be zero.
The guidance in IFRS X Insurance Contracts that does not permit a negative liability risk adjustment for the
combined cash inflows and cash outflows pertains to the level of aggregation selected by the entity, based
on the level of aggregation that impacts the entity’s views on the compensation it requires for bearing the
risk and uncertainty in the cash flows. Consequently, the risk adjustments computed at more granular levels
may not be additive with respect to a selected level of aggregation. When the risk adjustments are
computed for component cash flows, the sum of such component risk adjustments may be greater than the
risk adjustment for the aggregated cash flows. For example, the confidence level approach would not
produce additive risk adjustments, in general. However, some risk adjustment techniques may produce
additive risk adjustments, such as the Wang transform approach, depending on how the techniques are
applied.
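The following sketch illustrates this non-additivity for a confidence level (quantile) technique using simulated figures; the distributions, correlation, and 90 per cent level are hypothetical choices made only for illustration and are not prescribed by IFRS X Insurance Contracts:

```python
import numpy as np

# Illustrative only: two hypothetical groups of uncertain net outflows, simulated as
# correlated lognormal amounts. The risk adjustment here is taken as the 90th
# percentile of the outflows less their mean (a confidence level / quantile technique),
# floored at zero in line with the discussion above.
rng = np.random.default_rng(seed=1)
n_scenarios = 200_000
correlation = 0.3
z = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, correlation], [correlation, 1.0]],
                            size=n_scenarios)

outflows_a = np.exp(4.0 + 0.5 * z[:, 0])   # hypothetical outflow distribution, group A
outflows_b = np.exp(4.2 + 0.5 * z[:, 1])   # hypothetical outflow distribution, group B

def risk_adjustment(outflows, confidence_level=0.90):
    """Quantile-based risk adjustment: percentile less mean, floored at zero."""
    return max(np.quantile(outflows, confidence_level) - outflows.mean(), 0.0)

ra_a = risk_adjustment(outflows_a)
ra_b = risk_adjustment(outflows_b)
ra_combined = risk_adjustment(outflows_a + outflows_b)

print(f"Sum of standalone risk adjustments:   {ra_a + ra_b:8.1f}")
print(f"Risk adjustment on combined outflows: {ra_combined:8.1f}")
```

With less than perfect correlation between the two sets of cash flows, the risk adjustment computed on the combined cash flows is typically smaller than the sum of the standalone risk adjustments, consistent with the discussion above.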
The cash inflows may be linked or correlated to the uncertainty associated with the cash outflows.
Therefore, the measurement of the insurance liabilities for financial statement purposes will include risk
adjustments that reflect such linkages or correlations between cash inflows and cash outflows associated
with an insurance contract or within a portfolio of insurance contracts at the selected level of aggregation
for the risk adjustment. As noted above, the risk adjustment under IFRS X Insurance Contracts is usually a
liability valuation adjustment and is defined in a way where it will only be an adjustment that increases the
valuation of the insurance liabilities to be greater than the mean present value of the cash flows. However,
we note that when reinsurance is purchased, there is a risk adjustment reported as an asset in the balance
sheet.
The valuation of insurance liabilities aims to arrive at the estimate of the cost to an insurer associated with
producing the (uncertain) cash flows needed to fulfil the insurer’s obligations under the applicable insurance
contracts, i.e., the payments promised to policyholders and beneficiaries, or losses obligated to be paid
(e.g., to third parties) by the insurer under its contracts, over the expected future lifetime 12 of the contract
cash flows. The insurance liabilities’ valuation then can be derived from the mean present value of the cash
flows associated with fulfilling the insurer’s obligations.
In general, the insurer fulfils its cash flow obligations by collecting premiums or other considerations from
the policyholder, and investing them in a sufficient amount of financial assets. The fulfilment cash flows are
paid out from the assets, which, in future periods, include cash flows received from the policyholder,
investment returns on the financial assets supporting the fulfilment cash flows, and the sale and reinvestment of assets.
Under IFRS X Insurance Contracts, the fulfilment cash flows are more specifically defined and include the
cash flows between the insurer and the policyholder related to:
• The claims, benefits, and services that the policyholder is entitled to under the insurance contract;
• Other expenses associated with fulfilling the contract, such as claim adjustment costs and policyholder services; and
• Premiums, policy fees, and other amounts that the insurer receives from the policyholder as per the terms of the contract.
12 The lifetime of the insurance contract extends until the insurer has fulfilled its obligations under the terms and conditions of the contract, or until such obligations are no longer required to be fulfilled.
However, since the cash flows for these claims, benefits, services and expenses are typically uncertain, an
additional consideration (the risk adjustment) is recognised as compensation to the insurer for the
uncertainty associated with the amount and timing of the fulfilment cash flows.
It is important to note that despite the uncertainty associated with such future cash flows from invested
assets, those cash flows from investment returns on the financial assets supporting the fulfilment cash
flows, and from the sale and reinvestment of assets, are excluded from the consideration in estimating the
risk adjustment unless the cash flows from the contract depend on the values of those financial assets.
Section 2.2 Types and characteristics of risks and uncertainty
A fundamental component of insurance is the uncertain and contingent nature of the fulfilment cash flows
underlying the insurance business. The risk adjustment is a quantification that reflects the value associated
with the reporting entity’s (insurer’s) preference for certain and known cash flows versus uncertain timing
of the cash flows, and possibly unknown amounts of the cash flows. The size of the risk adjustment will be
influenced by the nature and the degree of uncertainty involved in those cash flows. While the determination
of an adjustment may involve complex statistical techniques, probability models, or actuarial estimates, the
adjustment’s practical purpose is to help users of financial statements better understand the impact of
uncertainty in the valuation of the insurance liabilities. Furthermore, an adjustment enables a user to better
compare the amount of risk and uncertainty between two insurance contracts with the same expected cash
flows but different probability distributions of those fulfilment cash flows.
The IASB has explained that in estimating the risk adjustment, the entity considers favourable and
unfavourable outcomes in a way that reflects its degree of risk aversion. Hence, the adjustment would be
the amount that makes the insurer indifferent between fulfilling an insurance contract liability with a range
of possible outcomes versus fulfilling one that will generate fixed cash flows with the same expected value
of cash flows.
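One stylised way to make this indifference concrete, offered here only as an illustration and not as a method prescribed by IFRS X Insurance Contracts, is a certainty equivalent calculated under an assumed utility function; the exponential (constant absolute risk aversion) form below, the simulated loss distribution, and the risk aversion parameter are all hypothetical:

```python
import numpy as np

# Illustrative only: a certainty equivalent under an assumed exponential utility
# (constant absolute risk aversion). Neither the utility form nor the parameter
# values are prescribed by IFRS X; they simply illustrate the indifference idea.
rng = np.random.default_rng(seed=7)
losses = rng.gamma(shape=2.0, scale=50.0, size=200_000)   # hypothetical uncertain outflows

def certainty_equivalent(x, risk_aversion):
    """Fixed amount the entity would regard as equivalent to bearing the uncertain
    losses x, under constant absolute risk aversion."""
    return np.log(np.mean(np.exp(risk_aversion * x))) / risk_aversion

expected_value = losses.mean()
ce = certainty_equivalent(losses, risk_aversion=0.005)
risk_adjustment = ce - expected_value   # positive for a risk-averse entity

print(f"Expected value of losses: {expected_value:7.2f}")
print(f"Certainty equivalent:     {ce:7.2f}")
print(f"Implied risk adjustment:  {risk_adjustment:7.2f}")
```

Under such a stylised formulation, the entity would be indifferent between the uncertain losses and a fixed obligation equal to the expected value plus this adjustment; a more risk-averse entity (a larger risk aversion parameter) would produce a larger adjustment.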
2.2.1 Risk categories
This definition is consistent with the concepts espoused by Sam Gutterman in his 1999 paper The Valuation
of Future Cash Flows, in which he states that the most appropriate meaning of risk is “the estimated
probability that a given set of objectives will not be achieved” and he says that “risk has sometimes been
measured by the degree of volatility of the value or price of a set of cash flows”. Furthermore, the IASB has
stated that the risk adjustment should reflect “all risks associated with the contract”.
This would imply that three general categories of risk commonly identified in actuarial analysis of cash flows
need to be considered.
• Model risk—Sometimes referred to as model specification risk, this refers to the possibility that an
actuary may select a model that is not a reliable representation of the cash flow probabilities. For
example, an actuary may observe a sample of realised values of a random variable, but may utilise
the wrong model to estimate the expected value or the probabilities of the future cash flows, or the
selected model variables may not sufficiently represent the cash flow probabilities. This can occur
because natural phenomena and human behaviour, as observed by actuaries, may not follow
commonly understood probability models.
• Parameter risk—This is also sometimes referred to as estimation risk or parameter estimation risk
and refers to the possibility of misestimating the parameters of the model used to estimate the cash
flows. For example, the mean and variance of the frequency and/or severity distribution used to
model future cash flows will not be known precisely.
• Process risk—This is also sometimes referred to as variability risk, and refers to the natural random
variations that will inevitably occur in future cash flows even when the model and underlying
parameters used by the actuary are accurate representations of the random elements.
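For the parameter and process risks described above, the law of total variance offers a standard way (shown here as general background rather than as guidance from the standard) to separate total variability by conditioning on an uncertain parameter vector $\theta$:

\[
\operatorname{Var}(X) = \underbrace{\mathrm{E}_{\theta}\!\left[\operatorname{Var}(X \mid \theta)\right]}_{\text{process risk}} + \underbrace{\operatorname{Var}_{\theta}\!\left(\mathrm{E}[X \mid \theta]\right)}_{\text{parameter risk}} .
\]

Model risk sits outside this identity, since it concerns the possibility that the assumed distributional form for $X$ given $\theta$ is itself not a reliable representation of the cash flows.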
There are differences in the nature of insurance liabilities for many types of life insurance and annuity
insurance contracts as compared to insurance liabilities for non-life insurance, such as property/casualty
and short-term health insurance contracts. These differences can be seen in the risks likely to affect the
fulfilment cash flows and therefore the risk adjustment. Below is a brief list of selected risks that are often
considered in risk adjustments for life insurance and annuity liabilities. It is not meant to be exhaustive, but
rather to illustrate the concepts and introduce the nature and definitions of key risks, and the sources of
uncertainty.
• Mortality (longevity) risk—For life insurance and annuity products, future mortality (or longevity)
rates are key assumptions representing expected decrements from an insured population. There
may be systematic or non-diversifiable risk of misestimation of the mean value of the mortality
decrements. The risk could arise from misestimating the current baseline mortality, for example because of a misunderstanding of the mix of the insured population or of potential anti-selection, or from misestimating future mortality improvement, for example because the trend in baseline mortality resulting from medical advances or other demographic shifts is not fully understood or not adequately reflected in the cash flows. Also, an extreme shift in demographics or a pandemic event could have an unanticipated impact on the estimated assumptions.
• Policyholder behaviour risks, including lapse risks—For life insurance and annuity products with embedded options, a policyholder's propensity to exercise the options available can often be influenced by external factors. For example, the decision on whether to surrender the policy,
or exercise certain living benefit riders on variable annuities, may be influenced by equity market
performance relative to the underlying guarantee provided in the contract. The policyholder
behaviour risks arise from the possibility that corresponding assumptions will be different from the
baseline assumptions and will adversely impact the value of guarantees. Such risks increase the
uncertainty about future insurance contract cash flows. Even for the baseline assumptions, such
as base lapse rates, the risk of misestimation could also exist due to demographic shifts or events
in the marketplace that affect the competitiveness of the in-force policies versus similar products.
• Financial market risks—These relate to the volatility of the market value of traded securities
involving exposure to movements in financial variables, such as stock prices, bond prices, bond
yields, equity returns, interest rates, market volatilities, or exchange rates. Such risks arise from
the exposure to unanticipated movements in financial variables or to movements in the actual or
implied volatility of asset prices and traded options.
• Long-duration trend risks—The valuation of life insurance and annuity liabilities often requires projections of 30–50 years depending on the specific insurance product design. The long duration of such insurance contracts creates unique challenges in the valuation of liabilities, which can require unique or customised approaches to risk adjustments, such as the projection of maintenance expenses, which can be influenced by inflation, productivity, and reductions over time in the number of units over which fixed expenses can be spread.
Non-life insurance risks have unique considerations. Those risks stem not only from the manner in which products are designed and sold but also from the manner in which claims are reported and settled. Non-life insurance risks can be broadly categorised as short-tailed risks (e.g., accident and health, property, or motor) and long-tailed risks (e.g., long-term care, product liability, employers' liability and workers' compensation, general liability, or errors and omissions). While the period of the contract is usually one year (insuring events that occur or claims reported during a year) for both short- and long-tailed risks, the uncertainty associated with the expected cash flows is less for short-tailed than for long-tailed risks, as claim payments or guaranteed benefits provided by the insurance contract may span many years beyond the contract term.
Below is an illustrative list of selected risks that are often considered in risk adjustments for non-life
insurance liabilities:
• Frequency risk—This refers to uncertainty in the number of claims that will produce the cash
outflows. It relates to what are typically called IBNR claims, which have been incurred but not
reported by the financial statement date. For some lines, such as general liability, claims may be
reported many years after the insured event occurs.
• Severity risk—This refers to the cost of individual claims. For some lines, this may be limited by
policy limits or statutory13 limits. In many cases, costs to defend claims are in addition to the stated
policy limits and therefore contribute to the uncertainty. In some long-tailed lines of non-life
insurance, claims might not be settled for 25–50 years beyond a financial statement date and so
the risk of future inflation impacting claim liability cash flows can be significant.
• Legal environment risks—Court decisions and interpretations of policy language can create
uncertainty in predicting future cash flows. For example, future laws or legal decisions may result
in significant adjustments to past observed loss experience in order to be relevant in estimating the
future cash flows associated with insurance claim liabilities. However, under IFRS X Insurance
Contracts, the expected cash flows and the risk adjustment are based on current laws and
circumstances, such that risks from unknown future changes in the legal environment are not
considered relevant in estimating the risk adjustment14.
• Unknown risks—New types of claims or new causes of losses can arise on current and previous
policies. Asbestos, pollution, and construction defects are examples of areas where such claims
are made by many policyholders. Such claims, and others like them, can create significant risk in
predicting fulfilment cash flows in terms of the number and cost of claims. Under IFRS X Insurance
Contracts, unknown future types of claims or causes of losses are not considered in estimating the
risk adjustment.
Finally, the nature of health insurance poses unique risks. Further, the nature and severity of risks absorbed
by health insurers depend greatly on the structure of the healthcare system and type of health insurance
contracts offered within each jurisdiction. In general, health insurance products can be classified in two
product categories: long duration, such as disability insurance and long-term care insurance, and short
duration, such as medical insurance and prescription drug/medicine coverage. However, exceptions do
exist, such as certain contracts offered in Germany and Japan. The risk profile of the long-duration products
is similar to that of life and annuity products (with, in some cases, morbidity risks substituting for mortality
risks). The following list is a brief summary of risks common to providers of short-duration health insurance
and applicable to many forms of long-duration health insurance:
• Morbidity risks—This refers to the risk that actuarial predictions of claim frequency and severity will
be significantly different than past experience or expectations. Many factors will influence the level
of claims activity in a given insurance period. The most commonly identified include age, gender,
policy characteristics, and geography. In addition, in many healthcare systems the rate at which
claim levels increase—commonly referred to as the healthcare cost trend—as well as the potential
claim continuance period, has a significant impact on the accuracy of predictions and involves
considerable actuarial judgment.
• Regulatory environment risks—This refers to the risk that regulations imposed on health insurers
may infringe on the insurer’s ability to effectively operate in the marketplace. Because of the high
perceived value and cost of healthcare services, governments often exert considerable influence
on the regulation of health insurance. Many insurers operate cognisant of the fact that regulatory
bodies could impact important cost factors in the short term (e.g., limitations on the pricing of
renewals on such insurance contracts, or insurers' mandatory participation in a government-mandated pool of high-risk insureds) and permanently change the cost structure in the long term
(e.g., health reform in the U.S.).
• Service provider risks—This refers to the risk that providers, with whom the insurer has contracted
to provide services, are unable or unwilling to fulfil the terms of their contract. For example, in the
U.S., insurers may negotiate discounted fees for services with physicians or hospitals or other
13 Statutory limits refer to limits specified by law, i.e., in a statute.
14 The effect of past changes in laws or past legal decisions may not be easily estimated for many years. The uncertainty from such changes is considered relevant in estimating risk adjustments under IFRS X Insurance Contracts.
providers. If the provider fails to fulfil its contract, there could be significant implications on the
insured cost of services, changing the estimated cash flows needed to fulfil the insurance contract.
2.2.2 Diversification of risk
Insurance companies may issue products containing naturally-offsetting risks. For example, mortality risk
for life insurance products may be at least partially offset by mortality risk for pay-out annuity products. In
that case, all else being equal, higher levels of mortality would typically:
• Increase the insurer's cash outflow for life insurance products; and
• Decrease the insurer's cash outflows for annuity products in the pay-out phase.
This is just one example of the concept of diversification benefits, when cash flows for different products
are less than perfectly correlated. To the extent that insurance companies have diverse products with cash
flows that can be sensitive in varying degrees to certain risks, there may be a potential to realise the benefit
of diversification in the aggregation of risks. For non-life insurance, naturally offsetting product lines may
not exist but diversification benefits are available by aggregating types of commercial and personal
coverage or through geographic dispersion of risk.
The level at which diversification benefits are recognised is a key aspect of a risk adjustment. Diversification
may also be referred to as the result of the level of risk aggregation. The risk adjustment might be set at
the enterprise level incorporating all diversification benefits in the organisation aggregated across all of its
product lines. The use of this level of aggregation would likely produce the highest level of diversification of
risk (and therefore the smallest risk adjustment) for the organisation given its products and risk appetite.
Alternatively, risk adjustments could be set at some product level in the organisation, where the only
diversification benefits would be those achievable at that product level. Consequently, the risk adjustment
can be impacted by the level of aggregation used to estimate the risk adjustment for an individual entity, again illustrating that the level at which diversification benefits are recognised is a key aspect of a risk adjustment.
Under IFRS X Insurance Contracts, an entity is permitted to set the risk adjustment at a level of aggregation consistent with the measurement objective, in other words, at a level of aggregation that represents the compensation the insurer requires for bearing the uncertainty regarding the amount and
timing of the fulfilment cash flows. In determining the level of compensation for uncertainty for the specific
entity, the insurer would also consider the amount that makes the insurer indifferent between fulfilling the
liabilities from aggregated insurance contracts with a range of possible outcomes versus fulfilling the
liabilities from aggregated insurance contracts with fixed cash flows with the same expected value of the
cash flows, adjusted for the time value of money (i.e., present value).
This IFRS principle of risk compensation for a specific entity recognises that each reporting entity can have
different risk preferences, risk aversion, risk appetite, and risk tolerance 15. Consequently, the risk
adjustment reflects the measurement of risk as well as the value that the entity places on different levels
and characteristics of cash flow risks.
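As a stylised illustration of the aggregation effect (using standard deviation as the risk measure purely for exposition; IFRS X does not prescribe any particular measure), the combined uncertainty of two risks with standard deviations $\sigma_1$ and $\sigma_2$ and correlation $\rho$ is

\[
\sigma_{\text{combined}} = \sqrt{\sigma_1^{2} + \sigma_2^{2} + 2\rho\,\sigma_1\sigma_2} \;\le\; \sigma_1 + \sigma_2 ,
\]

with equality only when $\rho = 1$. Under such a measure, a risk adjustment set at a level of aggregation that combines less than perfectly correlated risks would therefore be smaller than the sum of risk adjustments set separately for each risk.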
2.2.3 Risk adjustment and its composition
Risk adjustment and probability
In principle, a risk adjustment is an adjustment to an expected value in the determination of the liability for
an insurance contract. The purpose of a risk adjustment is different from that of the underlying expected value. The expected value represents an estimate of the probability-weighted average of possible outcomes (in
15 The concepts of risk preferences, risk aversion, risk appetite, and risk tolerance are not specific measurements. Rather, they are qualitative or descriptive concepts. Consequently, estimating the risk adjustment requires one or more quantitative models or methods to apply these concepts to the characteristics of risk and uncertainty associated with the entity's cash flows.
probability theory, referred to as the values of a random variable). The expected value, also referred to as
the mean value, gives weight to different possible outcomes in proportion to the probability of their
occurrence. Risk adjustment is a measurement that recognises the economic preferences of possible
outcomes, particularly where significant unfavourable outcomes are possible. The economic preferences
can be thought of as an adjustment to the probability weights associated with the different possible
outcomes. The risk adjustment is the difference between the risk-adjusted value and the expected value.
For most financial reporting applications, the risk-adjusted value of a liability would be greater than the
expected value of the liability. In other words, risk adjustments will increase the liability valuation when
applied to the expected value of insurance contract liabilities.
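Expressed in symbols (as a summary of the relationship described above rather than a formula from the standard), if $\mathrm{E}[\mathrm{PV}(\mathrm{CF})]$ denotes the probability-weighted expected present value of the fulfilment cash flows and $V_{\text{risk-adjusted}}$ the corresponding risk-adjusted value, then

\[
\text{Risk adjustment} = V_{\text{risk-adjusted}} - \mathrm{E}\!\left[\mathrm{PV}(\mathrm{CF})\right],
\]

which, for insurance contract liabilities, is typically positive and therefore increases the liability valuation above the expected value.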
A probability distribution can be used to estimate the expected value and to help quantify other measures
of risk. The attributes of a probability distribution as a structure for measuring risk include:
• A mathematical representation of risk, based on the concept of a random variable from probability theory and statistical inference.
• A model of risk—in terms of probability—derived, tested, or calibrated with representative data.
• Risk characteristics, which can be reflected in a probability model as an approximate representation of risk. While such models may not be a complete representation of all significant risks involved, they are useful in presenting a foundation and framework for assessing and measuring risk.
• Recognition that there are often limitations on the data available to derive, validate, or estimate parameters of a probability distribution. In addition, the use of assumptions, surrogate data, or other approaches can provide alternatives that might be used to develop probability distributions to model risk.
Probability models can be subject to limitations, including:
• Availability of representative data;
• Credibility and relevance of available data;
• Availability of representative probability models;
• Availability of representative assumptions;
• Availability of representative scenarios, as might be applied in a model that uses scenarios;
• Ability to test or validate models and their parameters;
• Stability of data, models, assumptions, and scenarios;
• The fact that the historical period over which relevant data are available may not be representative of the expected future, due to changes in circumstances, intervening changes, or trends.
The development of appropriate risk adjustments will involve an assessment of such limitations and
consideration of selection criteria for risk adjustment factors that reflect them.
Another important consideration would be an evaluation of data attributes, such as:
• Quality (accuracy or completeness);
• Use (quantitative);
• Number of data points;
• Measurement of data—total volume, e.g., total amount of losses;
• How representative it is of the future possible outcomes;
• What other data might be useful; and
• Whether there are other models of similar risk processes.
Composition of risk adjustments
The risk adjustment is a value adjustment based on an estimate of compensation for bearing risk. It is
compensation that is not expected to actually be explicitly received (or paid), even though it is stated in
terms of compensation for bearing risk (and uncertainty) and often considered in determining a price.
Receipts or disbursements of future monetary cash flows associated with existing insurance contracts are
commonly estimated16 in actuarial work. When those cash flows are subject to risk and uncertainty, usually
from several potential sources, a theoretical insurance or reinsurance transaction can be a benchmark
where there would be the transfer of a significant portion of the risks and uncertainty in the cash flows
associated with specific events, such as death, injury, accidents, fire, and theft. The risk-adjusted price of
such a transaction might be approximated at the level at which one is indifferent between retaining the
uncertain cash flows and paying the price of full and final transfer of the cash flow risks.
The risk adjustment is a financial value that is a point estimate, not a range, stated in the monetary terms
(currency units) similar in nature to the other monetary values in the financial statement of the entity, as
required under IFRS X. It is not an estimate of income or expense, nor is it an amount that will be realised
as an actual cash flow. It is an adjustment to the value of liabilities (or assets) associated with the fulfilment
of the entity’s rights and obligations under its insurance contracts. The risk adjustment is a reflection of the
risks and uncertainties in the cash flows, as well as the risk preferences of the specific entity. The
management of the entity is responsible for making representations that underlie the reported values in the
entity’s financial statements. Consequently, the risk adjustment estimate is based on management’s views
about the compensation for bearing risk and uncertainty that is appropriate for the entity and how the value
of such compensation should be reflected in the reported financial statement values of the entity.
Key elements form the basis for risk adjustment and may be helpful in understanding approaches to
developing estimates for adjustments and, if needed, to decomposing or aggregating adjustments. These
elements are:
• Risk assessment—The determination of risk compensation begins with an assessment of the risk
and uncertainty involved. This assessment will require a process to identify the risks impacting the
insurance contract fulfilment cash flows. The identification of the key drivers or sources of such
risks will be important. Following identification, those drivers and sources can be further analysed
to consider their impact on the risk adjustments.
The purpose of the risk assessment process is to identify the key drivers and sources of risk
associated with the insurance contract fulfilment cash flows. Some of the drivers of risk and
uncertainty may be related to unavailable information, missing data, insufficient knowledge of future
circumstances, incomplete examples, limited credibility or a lack of experience. These types of
“unknowns” are important and relevant to identifying and assessing the risk and uncertainty. The
assessment of such sources of risk and uncertainty may be difficult to express in terms of probability
distributions because there is little, if any, objective evidence, such as historical data, from which
to develop probability models and to derive the associated parameters. Nonetheless, such drivers
and sources of risk and uncertainty will need to be evaluated in the risk assessment. The risk
assessment process should identify information about such risks and uncertainties from internal
and external sources or via methods to compare and contrast the unknowns in a specific case to
similar cases for which more data and experience about subsequent outcomes is available.
Surrogate risk models, which include judgments about additional risk variables or parameters, may
also be useful in assessing the uncertainties from such unknowns.
It also can be important to formulate the components of the risk. For example, for mortality the risk
can include misestimation of the implications of the population’s demographics on mortality results,
the effect of anti-selection due to policyholder behaviour, future mortality improvement, and
16 The cash flow estimates may be an expected value, most likely value or other actuarial measurement, based on selected assumptions, professional judgments, or a point estimate within a range of estimates.
discontinuities in mortality experience due either to favourable effects, such as a medical
breakthrough, or unfavourable effects, such as pandemics or climate change.
• Risk drivers—The next step is to determine how each risk driver or source of risk (or groupings of
drivers or sources) might impact the contract cash flows. Some drivers or sources of risk and
uncertainty may simply be scenarios that can be considered in terms of the impact of the scenario
on costs or revenues. Other scenarios may indicate a ranking of risk and uncertainty characteristics
that would impact the risk adjustments. There may be situations where the risk drivers are
incorporated more directly in the risk models to be used for the estimation of risk adjustments. For
example, the lack of data or other knowledge about the future cash flow risks can be a significant
risk driver in many situations. In addition, it could be difficult to incorporate such a risk driver directly
into a probability risk model that might be used. Nonetheless, this risk driver could be a
consideration for a larger risk adjustment as compared to other situations where the cash flow risks
can be analysed and incorporated into a risk model based on a quantitative analysis of past history.
There are situations where there is significant uncertainty in the estimate of the unbiased expected
value of the cash flows. The selection of a risk measurement model and the selection of the
parameters for such risk measurement models might also consider the sources of risk. If there is
significant risk that the selected model is not a good representation of the risk of the amount or
timing of the cash flow, or that the selected parameters for the model may not accurately portray
the risks being considered, then those selections might impact the risk measurement used to
estimate a risk adjustment. Where there is more (or less) evidence and confidence associated with
the models and their parameters, then the principles under IFRS X Insurance Contracts suggest
that the risk adjustments would reflect the effect of such risk drivers. Less evidence and lower
confidence would result in larger risk adjustments, and more evidence and higher confidence in
lower risk adjustments. For example, one means to reflect this type of risk driver might be to select
a higher confidence level where there is lower confidence in the risk models or in the model
parameters selected to measure the risk in the cash flows. However, there can be situations where
selecting a higher confidence level would not produce meaningful results.
• Risk models—These are mathematical representations of the risks and uncertainties inherent in
the contract fulfilment cash flows and in the estimates of the unbiased expected value of those cash
flows. Such models, or methods, are intended to represent the possible/probable actual outcomes
as a means to develop risk metrics, e.g., probabilities, or other quantitative measures related to
probability, such as standard deviation or other risk statistics.
By their nature, risk models are approximations and are more difficult to validate when there is less
known information about the risks or where evidence of the uncertainties is not directly
observable. For example, a scenario approach might be used to illustrate the impact of risks and
uncertainties. However, such an approach usually requires a selection of probability assumptions
for each scenario. A more advanced approach might use a scenario generator model that implicitly
includes the underlying probability assumptions associated with different scenarios. Consequently,
the selection of risk models can provide a means to consistently compute risk probabilities and
other risk metrics, but such models may not adequately represent the uncertainties about whether
the selected risk models, or the model parameters, are realistic representations of the cash flow
risks. In some cases the criteria for the risk adjustment, such as the percentile used for the
confidence level, can be selected by considering the uncertainties that might not be reflected in the
selected risk model, or its parameters. For example, where there are greater underlying
uncertainties, such as where model validation is not possible due to a lack of data or changed
circumstances, the selection of a higher confidence level might be considered for computing the
risk adjustments. This approach, and its limitations, has been discussed above.
• Risk aggregation—For the estimation of risk adjustments, it is recognised that the aggregation of
risks can impact the total amount of the risk adjustments. This is sometimes referred to as the
diversification benefit that is related to aggregating dissimilar risks that could offset each other,
or the effect of aggregating similar risks due to the statistical law of large numbers. Where the
aggregation or summation of uncertain outcomes encompasses larger volumes of business, there
is normally a reduction in the measure of total uncertainty. Correlation between risk variables can
impact the degree to which such reduction in uncertainty can be expected. Where there is very high
correlation among the outcomes that comprise the aggregate total under consideration, the
reduction in uncertainty would tend to be small. The cash flow risks under insurance contracts may
have some heavily correlated risk drivers, such as inflation risks where contract cash flows are
subject to inflation. However, many risks assumed under various types of insurance contracts do
not exhibit such high correlations among the cash flows from many other types of contracts. Thus,
measuring risk in a way that recognises the aggregation of risk and uncertainty will better reflect
the economics of risk and uncertainty as they affect the valuation of insurance liabilities by means
of the risk adjustments.
The level of aggregation is a significant factor to consider when using risk measures to estimate
risk adjustments. The objective of risk adjustments is to reflect the compensation that the entity
requires for bearing risk. Consequently, the level of aggregation would also reflect the level at which
the entity considers the risks and uncertainties inherent in its insurance liabilities and, therefore,
the compensation that it believes is relevant. The analysis of risk and uncertainty may be performed
at a more detailed level and would include the elements mentioned above: risk assessment, risk
drivers, and risk models. However, such detailed analyses could be combined at the appropriate
level of aggregation to achieve the recognition of the diversification of risk and uncertainty where it
is possible and reflected in the entity’s consideration of its compensation for bearing risk.
The tools and techniques for computing risk measures at the selected level of aggregation are
discussed in chapters 3 to 4.
• Risk preferences—The basic concept of risk preferences in the context of the valuation of risk and
uncertainty is that an entity will consider the likelihood and the financial impact of different outcomes
when making financial decisions. An entity may naturally prefer favourable outcomes to
unfavourable outcomes, but the level of risk or uncertainty associated with each of the possible
outcomes can also be important factors. As noted previously, the concept of risk preferences and
related concepts of risk aversion, risk appetite, and risk tolerance are not necessarily expressed
quantitatively. Rather, these concepts are generally more qualitative and descriptive in nature.
Although metrics can be designed to attempt to quantify these concepts, such measurements are
not intrinsic to the concepts involved. The IFRS principles recognize that each reporting entity will
have its own risk preferences. Therefore, it is important to understand how to evaluate and reflect
those preferences in order to appropriately quantify risk adjustments.
For example, when the outcomes of a financial decision are uncertain, the possible outcomes
resulting from it can usually be broken down into scenarios or ranges of outcomes. There can be
uncertainty with respect to the financial result associated with each scenario and uncertainty with
respect to the estimate of the probability that the outcome will occur. The characteristics of these
uncertainties (e.g., what are the risk drivers associated with the uncertainties and how significant
are the unknowns) will be key factors in an entity’s view of its risk preferences for the specific
financial decision.
Risk aversion can be described as the preference to avoid or mitigate the impact of unfavourable
outcomes as compared to favourable outcomes. The more unfavourable the outcome, the greater
the preference to avoid or mitigate it. When faced with a choice between a favourable outcome and an unfavourable outcome of equal amount and equal likelihood, risk aversion will result in decisions that give more weight to avoiding the unfavourable outcome than to achieving the favourable outcome.
Risk appetite can be described as the decision-making preferences for taking risk in order to
achieve a return. When the amounts associated with unfavourable outcomes are limited or capped
and there is opportunity for rewards or returns on investment associated with favourable outcomes,
the risk appetite is a representation of the entity’s risk-return or risk-reward preferences.
Risk tolerance can be described in terms of the maximum amount that the entity is willing to lose
or the level of control needed to eliminate risk or mitigate risk to an acceptable level. The entity is
unwilling to accept risk in terms of the maximum (or minimum) amount that would put it in financial
jeopardy or in terms of the level of control needed to avoid jeopardy. This description relates the
level of risk that an entity will accept to the need to take specific action to avoid, reduce, or control
the level of risk. The term “risk” in this description of risk tolerance considers both the measure of
unfavourable results, usually a financial amount, and the probabilistic measure of risk, usually the
chance of failure (e.g., falling outside the acceptable limits).
The above risk concepts, and other concepts used to describe risk preferences in alternative terms,
can be useful in understanding different aspects of risk preferences.
• Risk compensation—An insurer that assumes risk and uncertainty in the cash flows requires
compensation for the expected cash flows and its acquisition and operating expenses. In addition,
it normally requires compensation for the risk it has accepted under its insurance contracts for the
possibility that the actual cash flows from those contracts could be different than the expected cash
flows. This risk reflects the full range of possible cash flows. However, the compensation recognises
that the risk of adverse outcomes (more unfavourable than expected cash flows) can be offset by
the possible favourable outcomes (more favourable than expected cash flows). Also, the
compensation will reflect the likelihood that the outcomes (i.e., actual cash flows) could be adverse
or favourable, and the degree or amount of potential variances from the expected values. The risk preferences of an entity reflect the financial impact of the full range of possible outcomes when the entity considers its compensation for bearing the risk of such outcomes.
Key terms used in the definition and estimation of risk adjustment are:
• Compensation required by the insurer—This is an entity-specific concept such that different entities
might have different risk preferences. The compensation reflects the contract fulfilment cash flows
risks as they apply to the contract fulfilment cash flows for the specific entity. This concept of
compensation for bearing risk is not a specifically calculated valuation measure such as fair value,
market value, market consistent value, embedded value, settlement value, capital adequacy,
solvency risk margin, technical provisions, or profit expectations. However, common principles
underlie these concepts of value and provisions for risk that are potentially useful to an entity in
selecting how it measures various cash flow risks and in helping to arrive at a calibration of
compensation for bearing risk that can be used by the specific entity.
• Compensation for bearing risk—Here bearing risk refers to the risk and uncertainty concerning
actual outcomes of ultimate cash flows related to insurance contracts versus the unbiased estimate
of expected value, considering those outcomes that are relevant to the unbiased estimate of
expected value.
o Compensation—an amount reflecting a risk adjustment that when added to the unbiased estimate of expected value would be such that the insurer is indifferent between:
(a) Retaining the uncertain cash flows associated with the fulfilment of its insurance contracts;
and
(b) Having fixed and certain cash flows, i.e., fixed amounts with certainty of timing of the cash
flow amounts, or a single fixed amount.
o Insurer's view of the risk and uncertainty associated with the future cash flows needed to fulfil all in-force policies and all unpaid obligations on expired policies:
(a) Sources of risk and uncertainty and drivers of variability of actual versus expected cash
flows; how accurate are the estimates of the future cash flows and of the unbiased
expected values; what basis is there for the estimates—the credibility and relevance of the
available data, the degree of confidence in the specific risk models, the unbiased expected
value, and the assumptions needed for the estimates and the models;
(b) Assessment of the impact of risk and uncertainty on the future cash flows (e.g., evaluation
of different scenarios);
(c) The considerations that will determine the minimum profit (compensation for bearing risk)
at which the company would be indifferent to retaining, assuming, or transferring the
portfolio of uncertain fulfilment cash flows; and
(d) The selection of risk-pricing models (or functions) that assign the compensation for risk that
is based on an assessment of the risk characteristics and risk metrics.
The compensation required for bearing risk is a core element in the pricing of insurance contracts by an
insurer at the time when such insurance contracts are offered for sale. Hence, such contract pricing at
inception reflects not only the expected costs of fulfilling the obligations under the insurance contract, but
also the compensation required for bearing the risk that such contracts will have actual fulfilment costs
greater than the expected costs.
Consider the situation where there is an explicit loading in the pricing of insurance contracts in order to
compensate for the risk of higher than expected costs. In such situations, the explicit loading could be an
appropriate risk adjustment for those contracts. Consider another situation where the compensation for
bearing risk is incorporated in the pricing through conservative assumptions used to estimate the claim
costs used for pricing (e.g., reflecting the uncertainty in the mean estimates for claim costs such as mortality,
morbidity, claim frequency or claim severity). In such situations, alternative unbiased cost assumptions
could be used to derive the implicit risk adjustment for those contracts. However, not all loadings for risk
and uncertainty in the pricing of insurance contracts are solely related to the contract cash flow risks, such
as an overall profit loading that is a blend of risk and market considerations. For example, an insurer may
expect to achieve higher profits in its pricing where it has a competitive cost advantage; where markets for
a particular type of insurance, type of policyholder, or type of exposure to loss are less competitive; or
where insurance buyers are willing to pay more for the quality or convenience of the insurance-related
services it provides. Consequently, one approach to calibrating the compensation for bearing risk would be
based on a relationship to the minimum acceptable profitability for new and renewal contracts in a stable
competitive market. In many situations, one might expect the targeted profit level to be greater than the
compensation for bearing risk. Such compensation under IFRS X Insurance Contracts is described in terms
of the level of compensation that would result in an insurer being indifferent between having uncertain cash
flows versus having certain cash flows (or a single present value of the uncertain future cash flows).
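As a stylised numerical sketch of the second situation described above, where conservatism is embedded in the pricing assumptions, the implicit risk adjustment can be approximated as the difference between the present value of expected claims on the conservative basis and on the unbiased basis; all figures below are hypothetical:

```python
# Illustrative only: deriving an implicit risk adjustment as the difference between
# a conservative pricing basis and an unbiased basis. All figures are hypothetical.

def present_value(cash_flows, rate):
    """Discount annual cash flows (paid at the end of years 1, 2, ...) at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

policies = 10_000
average_claim_severity = 1_500.0
payment_pattern = [0.6, 0.3, 0.1]      # hypothetical share of claims paid in years 1-3
discount_rate = 0.03

unbiased_frequency = 0.050             # unbiased expected claim frequency
conservative_frequency = 0.055         # frequency loaded for uncertainty in pricing

def expected_claim_cash_flows(frequency):
    total_claims = policies * frequency * average_claim_severity
    return [total_claims * share for share in payment_pattern]

pv_unbiased = present_value(expected_claim_cash_flows(unbiased_frequency), discount_rate)
pv_conservative = present_value(expected_claim_cash_flows(conservative_frequency), discount_rate)

implicit_risk_adjustment = pv_conservative - pv_unbiased
print(f"PV on unbiased basis:      {pv_unbiased:12,.0f}")
print(f"PV on conservative basis:  {pv_conservative:12,.0f}")
print(f"Implicit risk adjustment:  {implicit_risk_adjustment:12,.0f}")
```

In practice, the conservative and unbiased bases could differ in several assumptions at once (severity, trend, expenses), and the same differencing approach would apply.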
Section 2.3 Criteria to assess the appropriateness of risk adjustment techniques
As described in section 1.2.2, the IFRS X Insurance Contracts application guidance identifies five
characteristics that a risk adjustment should possess to the extent possible.
The IFRS X Insurance Contracts application guidance also indicates that when determining the most
appropriate risk adjustment technique the insurer must apply judgment and consider the following:
1. The technique must be implementable at a reasonable cost, and in a reasonable time; and
2. It provides concise and informative disclosure so that users of financial statements can benchmark
the entity’s performance against the performance of other entities.
Additionally, based on suggestions contained in a 2009 monograph from the IAA 17, some desirable
characteristics for risk adjustment techniques are:
1. They have a consistent methodology that can be applied for the entire lifetime of the contract and
its associated fulfilment cash flows and is not expected to be significantly changed each year;
17 Measurement of Liabilities for Insurance Contracts: Current Estimates and Risk Adjustments.
2. They involve assumptions consistent with those used in the determination of the corresponding
current estimates;
3. They involve a method that is applied in a manner consistent with sound insurance pricing principles
and practices;
4. They can be applied to insurance products (lines of business) based on risk differences among
them;
5. Calculations are easy, recognising that there can be fulfilment cash flow risks with complexity,
interdependence, and sensitivity to technical assumptions and those situations may indicate the
need for complex models, simplifying assumptions, or suitable approximations;
6. It can be consistently determined between reporting periods for an entity such that the risk
adjustment varies from period to period only to the extent that there are real changes in risk;
7. It can be determined that different levels of risk adjustment between two entities with similar risks
in their fulfilment cash flows are reflective of differences in their risk evaluation, risk preferences, or
their compensation required for bearing risk;
8. They facilitate disclosure of information useful to stakeholders;
9. They provide information that is useful to users of financial statements;
10. They reflect the different objectives for risk measurement applicable to IFRS X Insurance Contracts
versus other objectives, such as regulatory solvency; and
11. They are consistent with the principles and measurement objectives under IFRS X Insurance
Contracts.
In that same monograph, the IAA also suggested that it would be useful to consider the extent to which a technique is “market-consistent” in theory, i.e., whether the risk adjustment is based on assumptions and
approaches a market participant would use and whether adjustments are sensitive to changes in the market
to the extent observable. This last consideration appears very similar to the measurement objective that
the risk adjustment shall be the compensation the insurer requires for bearing risk.
Finally, the IAIS18 indicates that a provision over the current estimate should be determined in a manner
that promotes transparency and comparability between insurers and markets in an objective manner. Under
IFRS X Insurance Contracts, transparency and comparability are considered in a more limited way.
Transparency is provided by the reporting of risk adjustments separately as a component of the insurance
liabilities, but only in the aggregate for the reporting entity. Comparability is provided via a required
disclosure under IFRS X Insurance Contracts of the equivalent aggregate confidence level associated with
the total risk adjustments for the entity.
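The sketch below illustrates one way the equivalent confidence level might be derived for that disclosure, assuming the entity can approximate the distribution of the present value of its aggregate fulfilment cash flows (here by simulation with hypothetical figures); the approach and parameters are illustrative only:

```python
import numpy as np

# Illustrative only: finding the confidence level equivalent to a given aggregate
# risk adjustment, assuming a simulated distribution of the present value of
# fulfilment cash flows is available. All figures are hypothetical.
rng = np.random.default_rng(seed=3)
pv_outcomes = rng.lognormal(mean=6.0, sigma=0.25, size=500_000)   # simulated PV of net outflows

aggregate_risk_adjustment = 60.0                                  # entity's reported risk adjustment
risk_adjusted_liability = pv_outcomes.mean() + aggregate_risk_adjustment

# Equivalent confidence level: proportion of simulated outcomes at or below the
# risk-adjusted liability.
equivalent_confidence_level = np.mean(pv_outcomes <= risk_adjusted_liability)
print(f"Equivalent confidence level: {equivalent_confidence_level:.1%}")
```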
Section 2.4 Special case – the use of replicating portfolio
A special case exists where the replicating portfolio approach can be used instead of the three building
blocks approach. Under this approach, fulfilment cash flow risks are already reflected in the cash flows’
valuation. Consequently, risk adjustments simply do not apply to the valuation of those replicated cash
flows.
IFRS X Insurance Contracts permits the use of replicating portfolios for the reporting of certain portions of
insurance liabilities. To understand the concept as applied under IFRS X Insurance Contracts, the following
section discusses it in the context of market-consistent valuations that are similar to insurance liability
18 ICP 14, IAIS.
valuation under IFRS X Insurance Contracts but do not have exactly the same valuation requirements as
IFRS X Insurance Contracts.
A replicating portfolio valuation under IFRS X Insurance Contracts is not subject to the building block
approach and the market value of the replicating portfolio determines the value of the replicated liability
cash flows, without the need to estimate the unbiased expected value of those cash flows, the applicable
discount rates or the risk adjustment. The remaining non-replicable insurance liability cash flows would be
valued using the building block approach (except for the premium allocation approach as defined under
IFRS X), including a separate risk adjustment.
Market-consistent valuation
This is based on a framework defined by:
• Replication of the insurance liability cash flows with financial instruments;
• Use of current information from the markets; and
• Consistency with how markets arrive at prices and valuation.
The basis of a market-consistent valuation can be viewed as a cash flow production cost, i.e., there are
financial instruments that would produce expected cash flows that are a reasonably close match, if not a
perfect match, to the expected fulfilment cash flows associated with an entity’s insurance liabilities. The
value of insurance liabilities is determined by the amount of financial assets that need to be held to produce
the corresponding liability cash flows.
To value insurance liabilities in a market-consistent manner, one would determine the cost of reproducing
the insurance liability cash flows using the market prices of financial instruments.
Transfer valuation
This is typically used for the valuation of assets, particularly those invested in financial instruments, where
they are traded in deep and liquid markets and have market prices that emerge from the sale and purchase
(i.e., transfer) of assets between many counterparties. In a transfer valuation, the production cost of a
hypothetical third party taking over the insurance liabilities would be the basis for the valuation.
For insurance liabilities, however, there is an absence of deep and liquid markets. Consequently, any
transfer value would depend on the particular counterparty taking on the insurance liability. Unfortunately,
for many types of insurance liabilities, a transfer valuation cannot be based on market pricing of traded
insurance liability obligations, due to this lack of a deep and liquid market.
Entity-specific cost
This aims to determine the cost of producing the insurance liability cash flows assuming that the entity
retains the liabilities. Entity-specific costs do not require assumptions about hypothetical third parties that
might take over the liabilities. They do, however, require that the entity clearly separate parameters given by
the market from those that are entity specific.
The following discussion addresses a market-consistent valuation that is entity specific, which is similar to
the basis underlying IFRS X Insurance Contracts.
Insurance liability cash flows are decomposed into the following components to determine the amount of
financial assets needed:
• A component that can be perfectly replicated (produced) by a set of financial instruments under all possible future scenarios; and
• A component that cannot be replicated, i.e., the remainder.
Under a perfect replication, the cash flows are exactly replicated under all possible future scenarios and
over the entire lifetime of the insurance contract cash flows.
The set of financial instruments that perfectly replicates the first component is called the replicating portfolio.
Such a replicating portfolio would have a market value. However, that market value would include some
portion of value that depends on the credit default risk of the referenced set of financial instruments: the
insurance liability cash flows may not be produced if a future credit event involving the specific issuer(s) of
the financial instruments changes the instruments’ cash flows.
The remaining component of the fulfilment cash flows would reflect only those component cash flows that
cannot be replicated. Such non-replicated cash flows can be valued similar to other insurance cash flows
that do not have significant cash flow components that can be replicated.
The cost of the financial assets needed to produce the replicated liability cash flows is the market value (or
fair value) of the replicating portfolio. Note that in the replication approach the entity does not need to
actually hold these financial assets; the insurer’s actual asset portfolio can consist of different financial
instruments. The replicating portfolio provides a reference value determined for the purpose of valuation only.
Discount rates and risk adjustments for replicating portfolio valuations are not based on a yield curve for
discounting the insurance liability cash flows. Rather, the set of acceptable financial instruments determines
the yield curve applicable to those cash flows. In a replicating portfolio valuation, the yield curve used for
discounting and the implicit risk adjustment are determined based on the specific selected set of financial
instruments used for replication.
For example, if the set of acceptable financial instruments were to be chosen to consist of credit-risk-free
government bonds only, this would imply a risk-free discount rate. In this example, the insurance liability
cash flows are decomposed into those component cash flows for which the uncertain fulfilment cash flows
can be replicated by the uncertain cash flows in the replicating set of financial instruments. The market
value of the replicating portfolio, i.e., the market value of credit-risk-free government19 bonds, would
represent:
• The discounted present value of the deterministic expected value estimate of the liability component cash flows using the risk-free government rate; and
• The implicit risk adjustment.
The above was provided as a hypothetical example. We recognize that in reality it is impractical to utilize
government bonds to replicate most types of insurance contract cash flows and the associated
uncertainties. The replicating portfolio often needs to include credit-risky instruments and sometimes even
derivatives. For example, in the U.S., a variable annuity product with guaranteed minimum accumulation
benefit rider can be replicated with an equity put option, if the term and the guaranteed amount of the rider
benefit are fixed.
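As a purely illustrative sketch of this point (not a prescribed method; the account value, guarantee, term, rate, and volatility are assumed figures), the market value of such a fixed-term, fixed-amount guarantee could be approximated by the price of a European put option on the underlying account value:

from math import exp, log, sqrt
from statistics import NormalDist

def european_put(spot, strike, rate, vol, term):
    """Black-Scholes price of a European put, used here as a stylized proxy for a
    fixed-term, fixed-amount guaranteed minimum accumulation benefit."""
    n = NormalDist().cdf
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * term) / (vol * sqrt(term))
    d2 = d1 - vol * sqrt(term)
    return strike * exp(-rate * term) * n(-d2) - spot * n(-d1)

# Assumed inputs: account value 100, guaranteed amount 100, 10-year term,
# 2% risk-free rate, 20% equity volatility
print(round(european_put(100.0, 100.0, 0.02, 0.20, 10.0), 2))   # about 14.6

In practice the rider cash flows would also depend on fees, decrements, and policyholder behaviour, so a single option is only a stylized starting point for the replication.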
As an alternative example, assume that credit-risky corporate bonds would also be included in the financial
instruments for which the cash flows are acceptable for replication. In this case, the implied discount rate
would be higher and include a component of the spread of the corporate bonds over risk-free government
bonds to be used in the replication. The market-based valuation would then—in contrast to the case of
replication with credit-risk-free government bonds—also contain a component for the credit risk that would
be introduced. Note that, in this case, the discount rate cannot be defined as a fixed yield curve, as it
depends for each insurance liability cash flow on the specific replicating portfolio, which might consist of
government bonds and corporate bonds with different spreads.
The replicating portfolio valuation, therefore, depends on the credit risk embedded in the portfolio. The use
of credit-risk-free financial instruments does not recognize any ability or need to diversify the credit risk in
the market value. Alternatively, the use of other financial instruments with some level of credit risk may
presume that the market value is relevant for the replicating portfolio and that the credit risk is readily
diversified away.
19 The credit risk of government bonds is not always risk free. Government bonds may be issued by entities that are recognized to have less than the highest level of sovereign credit available, or may be issued in a currency subject to currency value risk that is not the lowest available.
To ensure that a market-based replicating portfolio valuation is consistent, it is therefore essential that the
replicating portfolio, the embedded risk adjustment, and the implicit discount rate are all based on the same
underlying assumptions, i.e., on consistent assumptions regarding the criteria for the selection of a set of
financial instruments acceptable for replication.
Reliability of the replicating portfolio valuation of insurance liabilities is essential. It is therefore important
that only financial instruments traded in a deep and liquid market are used for the replication. Financial
instruments traded in a less deep and liquid financial market have less-reliable market prices. A replicating
portfolio with less-reliable market prices would reflect the valuation uncertainty and would result in an
inappropriate (too low) valuation of the corresponding liabilities.
A replicating portfolio valuation may be sensitive to the choice of acceptable financial instruments, since
there may be additional risks included in the replicating financial instruments. For example, market valuation
uncertainty stemming from a lack of reliable market prices due to the illiquidity of acceptable financial
instruments would likely be embedded in current market prices—lower current market prices than if no such
uncertainty existed. This example would be inconsistent with the premise underlying the use of a replicating
portfolio to value the cash flows because any reduction in market prices due to uncertainty in the financial
instrument cash flows would not be appropriate for a valuation of insurance liabilities for which uncertainty
considerations would increase the liability valuation, rather than reduce it. Of course, measuring the
uncertainty in market prices caused by illiquidity is likely to be less relevant than measuring the uncertainty
in the insurance liability cash flows themselves.
The use of replicating portfolio values in determining the value of the insurance liabilities will need to take
into account the following (a simple illustrative combination of these components is sketched after this list):
• The market value (or fair value) of the replicating portfolio;
• The unbiased mean value determined for the component cash flows that cannot be replicated;
• The risk adjustment for the component cash flows that cannot be replicated; and
• The contractual service margin (CSM) liability as required based on the principle under IFRS X Insurance Contracts of no accounting gain on initial recognition.
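As a minimal numerical illustration of how these components might be combined at initial recognition (the figures and the simple additive structure are assumptions for illustration only, not a prescribed formula):

# Hypothetical component values (CU) for a portfolio at initial recognition
replicating_portfolio_value = 800.0    # replicated component, at market (fair) value
non_replicable_expected = 150.0        # unbiased mean of the non-replicable cash flows
risk_adjustment = 20.0                 # risk adjustment on the non-replicable component
premium_received = 1000.0              # assumed consideration received

fulfilment_value = replicating_portfolio_value + non_replicable_expected + risk_adjustment
# CSM set so that no accounting gain arises on initial recognition
csm = max(premium_received - fulfilment_value, 0.0)
total_insurance_liability = fulfilment_value + csm
print(fulfilment_value, csm, total_insurance_liability)   # 970.0 30.0 1000.0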
Dynamic versus static portfolio replication are concepts that arise more naturally in the context of a market-consistent valuation than in an IFRS valuation. Dynamic replication requires the use of current information at
all times while static replication assumes a hold-to-maturity view where assets are not sold over the lifetime
of the insurance contract cash flows. In a dynamic replication, the set of replicating instruments is changed
over time and optimized, based on the most recent information. The yearly application of dynamic
replication is, for example, an (implicit) assumption under Solvency II or the Swiss Solvency Test.
Under IFRS X Insurance Contracts, the valuation is required to be current, that is, as at each reporting date
of the financial statements, e.g., quarterly, semi-annually, or annually. Consequently, static portfolio
replication would appear to be inconsistent with this IFRS principle of current valuation, and dynamic
replication would appear to be appropriate.
Replicating portfolio valuation under IFRS X Insurance Contracts for fulfilment cash flows
As discussed above, the replicating portfolio value of a set of cash flows is considered a market value 20 of
those cash flows. This replicated value concept is included under IFRS X Insurance Contracts. However,
when some component cash flows of such insurance liabilities cannot be replicated, only a portion of the
value of the liabilities can be valued using this approach. This market-based value of insurance liabilities
requires a specification of the universe of financial instruments that are deemed acceptable for replication.
In order to arrive at a reliable market-based value for the replicating portfolio, the financial instruments need
to have reliable market prices. This implies that the financial instruments used for replication must be traded
in a deep, liquid, and transparent market. Otherwise, financial instruments would not have reliable market
prices and consequently the market-based valuation using the replicating portfolio becomes unreliable.
The replicating portfolio approach to determining the value of the insurance liabilities is specific to the entity
only with respect to its fulfilment cash flows; it does not depend on the entity’s risk preferences or on the
compensation the entity requires for bearing the cash flow risks. It depends only on the choice of acceptable
financial instruments for the replicable component of the insurance liability cash flows.
The cash flows emanating from insurance liabilities to be valued are dependent on the nature of the risk
insured, the provisions of the insurance contracts that determine the cash flows, and the contingent events
that impact those cash flows. Where some of the contract cash flows can be replicated by a set of financial
instruments, they can be valued using the market value of the replicating portfolio. The risk characteristics
and probability distribution of the replicated fulfilment cash flows, including the amount, timing, and
uncertainty, need to be perfectly matched under all scenarios. It may be possible to replicate the expected
value of the cash flows in their amount and timing. However, the uncertainty of the cash flows could be
significantly different between the replicating assets and the insurance liabilities, and therefore such
differences in uncertainty of the cash flows would not allow the use of a replicating portfolio under IFRS X
Insurance Contracts.
In applying a replicating portfolio approach, the market value of the replicating portfolio may or may not
reflect some level of credit risk from the selected set of financial instruments, depending on whether the
market value considers the credit risk to be readily diversified away. Consequently, the selection of a
replicating portfolio of a set of financial instruments would need to consider whether there is a significant
element of non-diversified credit risk embedded in the market value of the financial instruments.
[Possible Appendix/Sidepanel/Sidebar/Footnote(s) re: additional comments
regarding use of best estimate under IFRS X]
The term “best estimate” is frequently used to indicate an estimate that reflects multiple inputs, from which
the “best” is selected. Another term, actuarial central estimate, depicts an estimate that represents the
mean. Actuaries also use the concept of a reasonable estimate. Various definitions and discussions of
these concepts, and the concept of unbiased, are explained below.
In statistical theory, best estimate is used to describe an unbiased estimate (statistically unbiased, based
on expected value) that has minimum variance (among other unbiased estimates). However, this term has
not generally been used in actuarial work in a way that is identical to the meaning from statistical theory.
Consequently, there can be some differences in how this term is interpreted and applied in practice.
In a 1998 paper,21 Kathleen Blum and David Otto note:
The concept of “best estimate” by itself, however, is ambiguous and does not have any particular
tie to the scientific foundation of the reserving process. “Best” is a loaded word which leads to the
question: “best by what standard?” If two potential users have different ideas of the purpose of a
reserve, then their understandings of “best” might also differ. Thus, while the concept of “best
estimate” may elicit an intuitive understanding, this understanding will not necessarily be the same
from one person to the next. Something more is needed.
20 IFRS X Insurance Contracts refers to this value as a fair value, which is defined more broadly in other IFRS guidance than a value simply taken from market prices based on a closing price or average price of the financial instruments.
21 "Best Estimate Loss Reserving: An Actuarial Perspective".
In practice, some actuarial guidance or common practice may allow or suggest that a best estimate can
include an implicit level of prudence in the selection of the estimate using various methods, models,
assumptions, parameters, factors, or a range of estimates. Under IFRS X Insurance Contracts such
guidance would be contrary to the standard’s intent. For example, the American Academy of Actuaries
(AAA) developed definitions22 to distinguish whether a best estimate excludes or includes a “level of
conservatism” that may be appropriate:
Best Estimate: The actuary's expectation of future experience for a risk factor given all available,
relevant experience and information pertaining to the assumption being estimated and set in such
a manner that there is an equal likelihood of the actual value being greater than or less than the
expected value.
Prudent Best Estimate: Any valuation assumption used for projections that is developed by applying
a margin for uncertainty (reflecting estimation error and adverse deviation) to the Best Estimate
assumption. The resulting assumption should be consistent with the stated principles of a
Principles-Based Approach, be based on any relevant and credible experience that is available,
and should be set to produce, in concert with other Prudent Best Estimate Assumptions, an overall
value that is consistent with the stated level of conservatism for the calculation under consideration.
Equal likelihood in the description above is not intended to mean the 50% probability point, i.e., the median
of the probability distribution. Rather, an estimate of the mean is intended, in which the estimate reflects a
probability-weighted expected value and the probability weights are unbiased.
One definition of best estimate is given under Solvency II23 as “the best estimate shall be equal to the
probability-weighted average of future cash-flows, taking account of the time value of money (expected
present value of future cash-flows), using the relevant risk-free interest rate term structure.”
This definition is analogous to the combination of building block one and building block two under IFRS X.
There are general provisions under Solvency II, including “Technical provisions shall be calculated in a
prudent, reliable and objective manner.” In this case, the reference to a prudent manner would not seem to
suggest that estimates underlying the provisions would include a margin for prudence. Additional Solvency
II guidance24 for the calculation of the technical provisions includes the directions: “The calculation of the
best estimate shall be based upon up-to-date and credible information and realistic assumptions and be
performed using adequate, applicable and relevant actuarial and statistical methods.”
The Solvency II definition and guidance lack any reference to the best estimate being unbiased. However,
the European regulator, EIOPA (formerly CEIOPS), has provided the following advice25 on the calculation
of the best estimate under Solvency II:
This in effect acknowledges that the best estimate shall allow for uncertainty in the future cash-flows used for the calculation. In the context of this advice, allowance for uncertainty refers to the
consideration of the variability of the cash flows necessary to ensure that the best estimate
represents the mean of the cash flows. Allowance for uncertainty does not suggest that additional
margins should be included within the best estimate.
The expected value is the average of the outcomes of all possible scenarios, weighted according
to their respective probabilities. Although, in principle, all possible scenarios are considered, it may
not be necessary, or even possible, to explicitly incorporate all possible scenarios in the valuation
of the liability, nor to develop explicit probability distributions in all cases, depending on the type of
risks involved and the materiality of the expected financial effect of the scenarios under
consideration.
22 Principles-Based Approach Definitions from the American Academy of Actuaries’ Consistency: Principles, Summary, Definitions & Report Format Work Group.
23 Article 76 and Article 77(2), Solvency II Directive proposal adopted by the ECOFIN Council on December 2, 2008.
24 Ibid. Article 77(2).
25 CEIOPS Consultation Paper No. 26.
Although the concept of an unbiased estimate is not explicitly addressed in the Solvency II guidance or in
the advisory language above, it appears that the intent is the same as under IFRS X Insurance Contracts.
In a 1998 paper,26 Richard Stein and Michael Stein suggest certain criteria for a best estimate value, one of
which is that it be unbiased:
an unbiased, accurate, and financially meaningful estimate which is generated by a scientific,
actuarial model which employs reasonable assumptions and in which the appropriate procedures
have been applied dispassionately.
These authors point out that others have investigated certain tendencies of specific actuarial methods used
in non-life insurance to result in biased estimates:
It is worth noting that Cheng-sheng Peter Wu’s paper indicates that the use of link ratio averages
which exclude the highest and lowest historical values results in a downward bias in best reserve
estimates. Also, Daniel Gogol points out that both the simple average development factor method
and the weighted average development factor method are both biased upwards.
The term actuarial central estimate is defined by the Actuarial Standards Board (ASB)27 of the U.S. as: “An
estimate that represents an expected value over the range of reasonably possible outcomes.” The expected
value concept is included in this guidance with the following explanation:
. . . the actuarial central estimate represents an expected value over the range of reasonably
possible outcomes. Such range of reasonably possible outcomes may not include all conceivable
outcomes, as, for example, it would not include conceivable extreme events where the contribution
of such events to an expected value is not reliably estimable. An actuarial central estimate may or
may not be the result of the use of a probability distribution or a statistical analysis. This description
is intended to clarify the concept rather than assign a precise statistical measure, as commonly
used actuarial methods typically do not result in a statistical mean.
The terms “best estimate” and “actuarial estimate” are not sufficient identification of the intended
measure, as they describe the source or the quality of the estimate but not the objective of the
estimate.
This actuarial standard includes the following guidance regarding the avoidance of bias in the estimate,
noting that bias can occur whether the estimate is intended to be the expected value or some other
measure.
The actuary should consider the reasonableness of the assumptions underlying each method or
model used. Assumptions generally involve significant professional judgment as to the
appropriateness of the methods and models used and the parameters underlying the application
of such methods and models. Assumptions may be implicit or explicit and may involve interpreting
past data or projecting future trends. The actuary should use assumptions that, in the actuary’s
professional judgment, have no known significant bias to underestimation or overestimation of the
identified intended measure and are not internally inconsistent. Note that bias with regard to an
expected value estimate would not necessarily be bias with regard to a measure intended to be
higher or lower than an expected value estimate.
26 “Sources of Bias and Inaccuracy in the Development of a Best Estimate”.
27 Actuarial Standard of Practice No. 43, Property/Casualty Unpaid Claim Estimates. Updated for deviation language effective May 1, 2011. June 2007.
The concept of a “reasonable estimate” is sometimes used in actuarial practice and typically applies to an
actuarial opinion that a liability estimate is reasonable. It relates to a range of estimates based on
appropriate actuarial methods or alternative assumptions judged to be reasonable. The ASB28 provides
guidance that is fairly broad with respect to how reasonable estimates are derived.
The actuary should consider a reserve to be reasonable if it is within a range of estimates that could
be produced by an unpaid claim estimate analysis that is, in the actuary’s professional judgment,
consistent with both ASOP No. 43, Property/Casualty Unpaid Claim Estimates, and the identified
stated basis of reserve presentation.
This actuarial standard recognises that a “reasonable estimate” may include or exclude a provision for an
explicit risk margin for uncertainty in the estimate.
The actuary should identify the stated basis of reserve presentation, which is a description of the
nature of the reserves, usually found in the financial statement and the associated footnotes and
disclosures. The stated basis often depends upon regulatory or accounting requirements. It
includes, as appropriate, the following: [. . .]
b. whether the reserves are stated to include an explicit risk margin and, if so, the stated basis
for the explicit risk margin (for example, stated percentile of distribution, or stated
percentage load above expected).
A paper by Mark Shapland29 explores the possible use of statistical concepts and principles in assessing
the reasonableness of actuarial estimates. For estimates of insurance liabilities under IFRS X Insurance
Contracts, the concept of reasonable estimates could be applied in evaluating the building block estimates.
28 Actuarial Standard of Practice No. 36, Statements of Actuarial Opinion Regarding Property/Casualty Loss and Loss Adjustment Expense Reserves. December 2010.
29 "Loss Reserve Estimates: A Statistical Approach for Determining ‘Reasonableness’".
Chapter 3 – Risk Adjustment Techniques
Abstract
This chapter discusses alternative techniques for calculating risk adjustments.
The first section covers the two quantile techniques mentioned in the IFRS X Insurance Contracts for
estimating the risk adjustment.
The second discusses the cost-of-capital technique for risk adjustment and the significant assumptions
needed.
The third discusses an alternative to the three typical techniques, one that reflects the risk preferences of
the entity through a mathematical model and that has significant advantages over the other techniques.
The fourth considers advantages, disadvantages, and considerations regarding the appropriateness of the
three techniques illustrated in IFRS X Insurance Contracts and some criteria to assist in selecting
appropriate techniques for estimating IFRS X Insurance Contracts risk adjustments.
The final section discusses the use of replicating portfolios and the implications for risk adjustments.
Section 3.1 Techniques for risk adjustments
In this section, three techniques for estimating risk adjustments for insurance liabilities are described.
IFRS X Insurance Contracts does not provide specific examples of techniques; it does not mandate a
particular technique to determine risk adjustments, nor does it restrict the entity to a particular technique.
Its provisions do require that, if an entity uses a technique other than the confidence level, the entity
convert the result of the chosen technique into an equivalent confidence level.
3.1.1 Quantile techniques
The two quantile techniques described below are commonly used to reflect differences in risk based on
knowledge and analyses that describe the uncertainty of outcomes by means of a probability distribution.
• Confidence level (percentile or value at risk)—In this method, the risk adjustment is calculated as the amount that must be added to the expected value of the insurance liabilities, such that the probability that the actual outcome will be less than the liability (including the risk adjustment) is equal to a targeted probability (i.e., confidence level). The risk adjustment is the difference between the result at the selected percentile of the probability distribution and the probability-weighted expected value.
• Conditional tail expectation, or CTE (tail value at risk)—This is a modification of the confidence level technique. The risk adjustment is calculated from the conditional mean of the cash flows over all points of the probability distribution in excess of a chosen confidence level. The risk adjustment is the difference between that conditional mean, i.e., the probability-weighted expected value of the cash flows only for those points of the distribution beyond the selected percentile, and the probability-weighted expected value over the whole distribution.
(A calculation sketch for both techniques follows this list.)
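As an illustration only (not a prescribed method), the following minimal Python sketch shows how both quantile risk adjustments can be computed from simulated present values of fulfilment cash flows; the simulated lognormal distribution, random seed, and the 75% level are assumptions chosen to be consistent with the illustration in section 3.1.2.

import numpy as np

def quantile_risk_adjustments(outcomes, level=0.75):
    """Confidence level (VaR) and CTE risk adjustments from simulated present
    values of fulfilment cash flows, at the chosen percentile level."""
    outcomes = np.asarray(outcomes, dtype=float)
    mean = outcomes.mean()                         # probability-weighted expected value
    var_value = np.quantile(outcomes, level)       # value at the chosen confidence level
    tail = outcomes[outcomes > var_value]          # outcomes beyond the chosen percentile
    cte_value = tail.mean() if tail.size else var_value
    return var_value - mean, cte_value - mean      # (confidence level RA, CTE RA)

# Simulated right-skewed distribution with mean of about 100 CU and a 10%
# coefficient of variation (assumed parameters)
rng = np.random.default_rng(seed=1)
simulated = rng.lognormal(mean=4.6002, sigma=0.0998, size=100_000)
ra_var, ra_cte = quantile_risk_adjustments(simulated, level=0.75)
print(round(ra_var, 1), round(ra_cte, 1))          # approximately 6.4 and 13.1

Subject to simulation error, these values correspond to the 75th-percentile confidence level and CTE risk adjustments shown in the illustration in section 3.1.2.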
Note that the use of the above quantile techniques in determining compensation for risk is based on the
notion that the compensation is proportional to some quantum measurement of risk. In the actuarial pricing
of insurance contracts, typically in the property/casualty area, theoretical and practical premium principles
have been developed that include a provision for risk, sometimes referred to as the component of premium
for contingencies. Virginia Young30 discusses these premium principles in depth. The noteworthy ones
include those based on a quantum of risk, such as standard deviation or variance, which is used to compute
the risk loading in the premium. Others are based on theories about the aggregate risk in a given insurance
market and utility theory functions that model the acceptable compensation (premium) for risk. Young
provides properties for premium principles to compare the principles’ characteristics. Of the various
principles, the quantile methods described above are mentioned as risk measures for actuarial applications
by Mary Hardy31. In certain special cases, such as a normal probability distribution, these quantile
techniques are mathematically equivalent to applying a fixed multiple of the standard deviation (or,
equivalently, a function of the variance) of the probability distribution.
The IFRS X Insurance Contracts measurement objectives for risk adjustments are very similar to the
objectives underlying the component of the various premium principles describing the price (compensation)
for uncertainty in outcomes. However, these quantile techniques are more typically used as measures of
risk, rather than as the compensation required by an entity for bearing uncertainty. Such quantile measures
of risk are typically applied by actuaries in determining maximum risk limits in terms of the probability of
exceedance (ruin theory) or in the determination of capital adequacy (solvency risk).
The properties of these quantile risk measures present issues to be considered when they are used for
determining risk adjustments.
For example, if the confidence level amount for a particular set of future uncertain cash flows is 100 currency
units (CU) (in excess of the expected present value of the cash flows) at a 90% confidence level, an entity
may require only 60 CU as compensation for bearing that level of uncertainty. In other words, in this case
the confidence level measure of risk is 100 CU, but the entity only requires 60 CU to compensate for that
level of uncertainty.
Alternatively, consider a second entity that sets its price for risk—the compensation for bearing the
uncertainty—to be equal to 100% of the selected confidence level. In the example, the second entity would
set its compensation at 100 CU, equal to the confidence level amount at a 90% confidence level. These
simple examples illustrate that the measure of risk, the confidence level amount at the selected 90th
percentile, may not be the only parameter relevant to an entity when determining the risk adjustment. The
entity’s compensation (price for risk) may be some function of one or more risk measures, such as the
confidence level, or it may reflect a more complex set of risk preferences for different confidence levels or
CTE values.
Discussion of the use of quantile risk adjustment techniques
The confidence level technique is relatively easy to communicate to users as it is simply the difference of
two values (the estimate of the liability at the chosen confidence level and the mean estimate of the liability).
However, this technique may not provide such a clear or intuitive proxy that satisfies the IFRS X Insurance
Contracts principle of representing the compensation the insurer requires for bearing the uncertainty in the
cash flows that arise as it fulfils the contract.
As described in the IFRS X Insurance Contracts, this method’s usefulness declines as the probability
distribution becomes more skewed, because the method ignores all results higher than the chosen
percentile. In the case of a significantly skewed probability distribution, the technique would require a very
high percentile to capture the possibility of extreme events. As an extreme example, if one out of 100
observations has a value of 1,000 CU while the other 99 observations all have a value of 1 CU, the risk
adjustment corresponding to the 90th percentile would be a negative number (the 90th percentile is 1, which
is smaller than the average). As another example, from the table shown in section 3.1.2, if the 75th percentile
were selected, the risk adjustment would be 6.4 CU, but such an adjustment would not reflect the effect of
the probability of outcomes greater than 106.4 CU. Consider an entity that estimates its risk adjustment
using the 75th percentile. Where two different distributions have the same value at the 75th percentile, the
values from those distributions at higher percentiles could be vastly different. By utilizing this technique for
estimating risk adjustments, the selection of the results at the 75th percentile for all probability distributions
would not produce risk adjustments that reflect the differences in the relative riskiness of these different
probability distributions.
30 "Premium principles". Encyclopedia of Actuarial Science.
31 An Introduction to Risk Measures for Actuarial Applications.
The CTE technique allows for a better reflection of the possibility of extreme values with small probabilities
because it takes into account the expected value of all outcomes beyond the chosen threshold. It reflects
more of the probability distribution in determining the risk adjustment, including results in the tail of the
probability distribution. Thus, in the example given in the prior paragraph, if the 75th percentile were the
chosen threshold, this method would reflect the difference in the risk measures for the two distributions.
Therefore, the CTE technique provides a better risk measure that accounts for the skewness in the tail of
the probability distribution, which could be an important factor for an entity in determining the compensation
it requires for bearing risk, as almost all entities want to be compensated a greater amount for taking on tail
risk. The CTE technique is an enhanced technique for measuring risk in the case of skewed probability
distributions compared to the confidence level technique. A disadvantage of the CTE technique is that it
may not accurately reflect the entity’s risk preferences, which are an important factor in the measurement
objective for the risk adjustment. With the CTE method, only the right-hand tail of the distribution, i.e., the
more extreme unfavourable outcomes, is included in the risk measure. Consequently, the risk adjustment
using CTE is very sensitive to estimates of very low probabilities. As with the
confidence level technique, no weight is given to other possible outcomes that span a wide range of values
and the majority of the probability distribution. The CTE technique ignores the uncertainty associated with
favourable outcomes and with other outcomes that are more favourable than the selected CTE probability
level.
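To make this contrast concrete, the following illustrative sketch (all parameters are hypothetical assumptions) constructs two outcome distributions with approximately the same 75th percentile but very different right tails; a risk adjustment set purely at the 75th percentile would not distinguish them, whereas the 99th percentiles and the CTE beyond the 75th percentile differ substantially:

import numpy as np

rng = np.random.default_rng(seed=7)
n = 500_000
z75 = 0.6745                                  # standard normal 75th percentile
q75_target = 106.4                            # common 75th percentile (assumed)
sigma_a, sigma_b = 0.10, 0.40                 # thin-tailed versus heavy-tailed (assumed)
mu_a = np.log(q75_target) - z75 * sigma_a     # lognormal log-scale locations calibrated
mu_b = np.log(q75_target) - z75 * sigma_b     # so both share the same 75th percentile
dist_a = rng.lognormal(mu_a, sigma_a, n)
dist_b = rng.lognormal(mu_b, sigma_b, n)

for name, d in [("A (thin tail)", dist_a), ("B (heavy tail)", dist_b)]:
    q75, q99 = np.quantile(d, [0.75, 0.99])
    cte75 = d[d > q75].mean()                 # conditional tail expectation beyond 75%
    print(name, round(q75, 1), round(q99, 1), round(cte75, 1))
# Both 75th percentiles are about 106.4, while the 99th percentiles and the CTE at 75%
# differ markedly, reflecting the heavier right tail of distribution B.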
Another consideration in using the CTE technique is that the probability of an extreme event may be very
small with very few, if any, relevant observations of such extreme events. Thus, although the measurement
of the likelihood of such events may lack precision, it could still be relevant and a faithful representation of
risk and the estimate of the risk adjustments. While the probability-weighted contribution of the rare, but
extreme, events may not result in a material impact to the overall expected value, the risk preference
assigned to such events may have a significant impact on the amount of the risk adjustment. That is, the
compensation for bearing the uncertainty associated with extreme events could be significant, despite the
small probability of such events and the difficulty in measuring with any degree of accuracy the small
probabilities involved. In such a situation, the entity might consider a higher level of confidence or another
measure of risk, because the estimation of low probabilities may depend much more on assumptions and
less on evidence from observations of actual occurrences. However, the selection of a “higher” confidence
level can be an issue when extreme events are a material consideration. The issue is the difficulty in
selecting what the confidence level should be and the difficulty in accurately estimating the values at high
confidence levels.
In summary, using a confidence level or CTE technique as the basis for a risk adjustment can be
problematic where there is a significant element of extreme event risk. By their nature, extreme events are
low-probability occurrences, e.g., less than a 1% chance of happening, and therefore the value at the 99%
confidence level may not reflect them at all, so that the resulting risk adjustment could be close to zero (or
the 99% value could even be substantially less than the probability-weighted expected value).
Similarly, estimates of CTE may not be realistic since CTE values are based on probability estimates of
even more extreme outcomes with probabilities that are very close to zero. Therefore, a CTE value may be
dependent on technical assumptions and models that could be difficult, or impractical, to validate or
substantiate.
Note that the confidence level (value at risk) measure is not a coherent risk measure. A coherent risk
measure f(x) satisfies the following four mathematical properties:
1. Monotonicity: if x <= y, then f(x) <= f(y)
2. Sub-additivity: f(x+y) <= f(x) + f(y)
3. Positive homogeneity: f(ax) = a*f(x) for a > 0
4. Translation invariance: f(x+a) = f(x) + a for a constant a
The confidence level measure does not satisfy the sub-additivity property, which, as a consequence, means
it may fail to recognize the benefit of diversification when aggregating risks. A simple example is shown
below to illustrate this issue:
Scenario                                  X1    X2    X1+X2
1                                          0     0      0
2                                          0     0      0
3                                          0     0      0
4                                          0     0      0
5                                          0     0      0
6                                          0     0      0
7                                          0     0      0
8                                          0     0      0
9                                          0     1      1
10                                         1     0      1
Confidence level (value at risk @ 85%)     0     0      1
As shown above, the sum of the two risks (X1, X2) has a value at risk of 1 at the 85th percentile, while the
confidence level (value at risk) for each risk has a value of 0. Clearly, no diversification of risks has been
achieved by adopting this risk measure.
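The failure of sub-additivity can be verified directly from the table; the following is a minimal sketch (the ten scenarios are assumed to be equally likely, as above):

import numpy as np

# Ten equally likely scenarios, as in the table above
x1 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1], dtype=float)
x2 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0], dtype=float)

def var(outcomes, level=0.85):
    """Empirical value at risk: the smallest outcome whose cumulative probability
    is at least the chosen confidence level (equally weighted scenarios)."""
    s = np.sort(outcomes)
    k = int(np.ceil(level * len(s))) - 1      # index of the quantile scenario
    return s[k]

print(var(x1), var(x2), var(x1 + x2))         # 0.0 0.0 1.0
# var(x1) + var(x2) = 0 is less than var(x1 + x2) = 1, so sub-additivity fails and no
# diversification benefit is recognized by this measure.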
In comparison, CTE (tail value at risk) is a coherent measure that satisfies the four properties above. CTE
sometimes is a more desired measure of risk, not only due to its superior mathematical properties, but also
because it offers insights into the shape of the tail of the distribution. However, value at risk, while only a
point estimate that does not measure the severity of the tail, may, depending on the situation, prove to be
a valuable measure as well, when:
1. The tail of the distribution is unknown or the tail of the empirical data is unreliable;
2. Good models for the tail risk are not available; and
3. From a consistency perspective, the value at risk measure may be more desired as it presents a
confidence level and is not driven by the shape of the tail, which could be volatile and unpredictable.
3.1.2 Cost-of-capital technique
This technique for risk adjustments is based on the concept that an entity will determine its risk preference
based on the entity’s selection of a capital amount appropriate for the risks that are relevant to IFRS X
Insurance Contracts measurement objectives.
The amount of capital used for this technique also depends on the time horizon for the capital amounts to
be held. In many cases, the future fulfilment cash flows will extend for several if not many years into the
future. An extended time horizon is needed to estimate the capital amounts held over the lifetime of the
insurance contract cash flows. This technique typically is described as selecting future capital amounts
based on the determination of a probability distribution for future cash flows related to the insurance liability.
Such capital amounts are not defined based on regulatory capital adequacy requirements nor on the entity’s
actual capital, because the IFRS X Insurance Contracts measurement objectives are stated in terms of the
entity’s requirements, rather than any external requirements. In practice, capital amounts might be
determined based on stress tests, stochastic models or factor-based models by the entity. A confidence
level from the estimated probability distribution of the fulfilment cash flows is selected that corresponds to
the amount of capital based on the entity’s criteria for the compensation for bearing risk.
The IFRS X Insurance Contracts guidance suggests that this confidence level for determining the capital
amount be set at a high degree of certainty that losses will not exceed that amount. However, the selected
measure(s) of risk, the other uncertainty considerations, and the level of compensation required by the
entity for bearing risk and uncertainty, are the relevant inputs to the estimation of the risk adjustment. The
difference between the amount from the probability distribution associated with the selected confidence
level and the probability-weighted expected value represents the amount of capital that the insurer would
use in computing its cost of capital. That amount of capital is then multiplied by the entity’s selected cost-of-capital rate. The probability distribution of the fulfilment cash flows, the amount of capital, and the cost-of-capital rate at future points in time are projected for the entire period until the fulfilment cash flows are
projected to be completed. The risk adjustment is computed as the present value of the future cost of the
capital associated with the entity’s relevant fulfilment cash flows.
The cost-of-capital technique is further discussed in this chapter.
Illustration of risk adjustment calculations for three techniques
In order to illustrate the techniques described above, a probability distribution was selected from the
Casualty Actuarial Society’s December 2011 webinar on risk adjustments.
Suppose the probability distribution of the insurance liability cash flows is described as lognormal with mean
of 100 CU, standard deviation of 10 and coefficient of variation of 10%.
The various confidence level values (present value of the fulfilment cash flows) for each percentile and the
CTE would be as follows, where the confidence level corresponds to the value with probability less than
the given percentile and CTE corresponds to the conditional average of all values in the distribution beyond
the given percentile:
Percentile   Confidence Level (CU)   CTE (CU)
    60               102.1            109.7
    65               103.4            110.7
    70               104.8            111.8
    75               106.4            113.1
    80               108.2            114.5
    85               110.3            116.3
    90               113.1            118.6
    95               117.2            122.4
    99               125.5            128.1
  99.5               129.0            130.2
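The table values can be approximately reproduced from the stated lognormal assumptions; the following is an illustrative sketch using the closed-form lognormal quantile and conditional tail expectation (scipy is assumed to be available). Small differences, particularly in the extreme tail, may reflect rounding or simulation in the original source.

import numpy as np
from scipy.stats import norm

mean_cu, cv = 100.0, 0.10                         # mean 100 CU, 10% coefficient of variation
sigma = np.sqrt(np.log(1.0 + cv**2))              # lognormal log-scale standard deviation
mu = np.log(mean_cu) - 0.5 * sigma**2             # lognormal log-scale location

for p in [0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 0.99, 0.995]:
    z = norm.ppf(p)
    conf_level = np.exp(mu + sigma * z)                                  # percentile value
    cte = np.exp(mu + 0.5 * sigma**2) * norm.cdf(sigma - z) / (1.0 - p)  # tail expectation
    print(f"{p:6.1%}  {conf_level:6.1f}  {cte:6.1f}")
# At the 75th percentile this gives approximately 106.4 and 113.1 CU, as in the table.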
This example assumes that the entity’s compensation for bearing risk is equal to 100% of these quantiles.
The confidence level risk adjustment would be 6.4 CU if the percentile threshold were chosen as 75%
(106.4 CU less 100 CU). The CTE risk adjustment would be 13.1 CU if the percentile threshold were chosen
as 75% (the conditional average of all values beyond 75% is 113.1 CU less 100 CU). The cost-of-capital
technique is more complex. The insurance liabilities are run off using a cash flow pattern, estimating the
remaining cash flows every year to determine the average capital amount for the year, applying the cost-of-capital rate to the applicable capital amounts, and then adjusting the cost of capital to present value
using the applicable discount rate. The cost-of-capital risk adjustment is the sum of the present values of
the cost of capital by year over all future years until all of the fulfilment cash flows have been completed.
For example, suppose a confidence level were chosen for the capital amount each year at the 99.5%
confidence level; then the risk adjustment would be computed as shown in the following table (assuming
8% cost-of-capital rate, 2% risk-free discount rate). The risk adjustment is 6.2 CU, which is the sum of the
rightmost column below.
Year   Undiscounted Expected    99.5% Confidence   Capital       Cost of Capital        Present Value Cost
       Value of Future          Level (CU)         Amount (CU)   (8% Cost-of-Capital    of Capital (CU)
       Fulfilment Cash                                           Rate) (CU)
       Flows (CU)
  1            100                    129               29               2.3                   2.3
  2             65                     84               19               1.5                   1.4
  3             42                     54               12               1.0                   0.9
  4             27                     35                8               0.6                   0.6
  5             18                     23                5               0.4                   0.4
  6             12                     15                3               0.3                   0.2
  7              8                     10                2               0.2                   0.2
  8              5                      6                1               0.1                   0.1
  9              2                      3                1               0.1                   0.1
 10              0                      0                0               0                     0
Total                                                                                          6.2
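The calculation in the table can be reproduced compactly; the following illustrative sketch takes the expected remaining cash flows and the 99.5% confidence level amounts as given inputs (the small difference from the 6.2 CU total reflects the rounding of the yearly figures displayed in the table):

import numpy as np

# Inputs read from the table above (rounded, illustrative values)
expected_remaining = np.array([100, 65, 42, 27, 18, 12, 8, 5, 2, 0], dtype=float)
confidence_99_5 = np.array([129, 84, 54, 35, 23, 15, 10, 6, 3, 0], dtype=float)
coc_rate, discount_rate = 0.08, 0.02

capital = confidence_99_5 - expected_remaining            # capital amount held each year
cost_of_capital = coc_rate * capital                      # annual cost of holding that capital
years = np.arange(1, len(capital) + 1)
pv_cost = cost_of_capital / (1.0 + discount_rate) ** years
print(round(pv_cost.sum(), 1))                            # about 6.1 CU from these rounded inputs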
Discussion of the use of the cost-of-capital technique
The concept of the cost of capital focuses on the amount of capital an insurer must hold for bearing a risk
with unknown but estimable consequences. The cost of that amount of capital is a measurement technique
based on a return on the shareholders’ capital to compensate for the risk to that capital. For this concept to
properly reflect the full extent of possible risk, it should address the risk to capital that is high enough to
allow for extremely unfavourable but uncertain events; otherwise this technique would suffer from the same
potential weakness as described for the confidence level technique in not reflecting the distribution’s full
tail. The cost-of-capital concept attempts to take into account the extreme risk in the tail of the probability
distribution by using a capital amount large enough to reflect almost the entire distribution.
Unlike the quantile techniques, it reflects how the risk relates to changes in the amount of insurance
liabilities over the time that the future cash flows occur (the liability amount and its probability distribution)
and in that respect is potentially a more relevant measure of the risk for the risk adjustment. However, it
also requires estimates of the future capital amounts over time associated with the remaining future
insurance liability cash flows. It is grounded in the concepts underlying market consistent valuation
principles described in section 2.1. The cost of capital typically is concerned with all risks to the
shareholders’ capital. Nevertheless, this technique needs to be relevant to the risk in the fulfilment cash
flows and not to other risks irrelevant to the quantification of the insurance liability cash flows (such as
asset/liability mismatching, uncertainty regarding future business, and several other risks that can threaten
an entity’s solvency). It is more complex than the quantile techniques and requires additional variables,
assumptions, and sensitivities.
The compensation for uncertainty in producing the fulfilment cash flows can be estimated by relating such
compensation to the risk capital. The concept of the compensation for bearing risk is similar to the concept
of setting a price component for a policy, block, or portfolio of similar policies that varies directly with various
risk measures. Under this approach, the amount of capital that is at risk then becomes the basis for
measuring the compensation in terms of the return to shareholders as the cost associated with the risk of
uncertain, unfavourable outcomes.
A given amount of capital is considered necessary to maintain an acceptable level of financial resources to
secure the entity against the risk of extraordinary financial stress due to uncertain cash flows that
significantly exceed those expected over the lifetime of the insurance contract cash flows. The security
provided by capital requirements is for the protection of policyholders who expect their insurance contracts
to be fulfilled, and also for the entity’s owners to protect their investment in it. Insurance supervisory authorities
typically use capital requirements as a means to regulate insurer solvency.
Capital providers would like to minimize the capital requirements, control unfavourable risk, and maximize
profits (return on capital). However, the concept of capital investment for an insurer is not like that of other non-financial enterprises, where the majority of capital investment is needed to fund the entity’s operations, such
as equipment, inventory, raw materials, and facilities directly related to an enterprise’s ability to produce
goods and services and generate profits. The primary need for capital in an insurance business is to have
sufficient financial resources consisting of unencumbered financial assets that can be used to keep the
insurer in business when unexpected events occur. The capital provides the main financial resource that
can be readily used to pay the amounts required under its insurance contracts in excess of its liabilities
(provisions) for such payments, if and when unfavourable outcomes occur.
The risk economics of an insurance business are to achieve a scale large enough that the pooling and
diversification of risk from its insurance contracts’ cash flows is sufficient to reduce the risk of a significant
loss of capital. Although the risk of loss of capital is seldom eliminated simply because of the volume of
business written, some amount of capital is needed for an insurer to function as an ongoing business. The
risk of loss of capital can be a relevant factor in evaluating the compensation for bearing risk. The level of
compensation required for a selected aggregation of cash flows (e.g., policy, block of business, portfolio)
could be measured in a way that considers the impact that the aggregation of cash flows might have on the
risk of loss of capital.
Insurance is a regulated business. There are usually requirements for an insurer to operate that involve
proof of financial resources, commonly in the form of capital (surplus). Many jurisdictions impose
requirements regarding a minimum amount of capital for an insurer to issue insurance contracts. The
amount of capital required can vary based on a number of factors, including the volume of insurance sold
by the insurer and the total size of the insurer’s assets and liabilities. Solvency requirements can also
include capital adequacy analyses, risk-based capital standards, solvency modelling and other metrics,
tests, or technical provisions. As the size of an insurer increases, the acceptable amount of capital will also
increase, but such increases will generally be less than in proportion to various measures of the insurer’s
size. It is noted that the capital used in the cost-of-capital approach is more akin to an economic capital
concept, with the regulatory capital for the relevant risks as the minimum floor.
The purpose of capital is generally related to protecting the policyholders who have bought insurance for
the financial protection needed or desired when unexpected events occur, where such events might
produce a significant unfavourable outcome to the policyholder, or when cash value or investment return
benefits are payable to policyholders.
This purpose of capital encompasses all risks faced by an insurer, including general business risk,
investment risk, catastrophe risk, business concentration risk, operational risk, and valuation risk, as well
as the adequacy risks of its liabilities or provisions for insurance liabilities, important indicators of solvency
risk. Therefore, several capital risks mentioned above are related to the insurer’s overall solvency, but are
not directly limited to only the risk and uncertainty in the future insurance contract fulfilment cash flows that
comprise an insurer’s insurance contract liabilities. For example, capital adequacy considerations would
include elements related to an insurer’s growth, future new business, new products, structure of its
investment portfolio, future renewal policies, geographic boundaries, legal structure, and expansion plans.
Several aspects of the capital adequacy considerations mentioned above go well beyond the boundaries
of an insurer’s in-force insurance contracts or insurance contract liabilities as of a particular financial
statement date. In addition, some of these considerations may be a blend of risks related to insurance
liabilities and other types of business risks for which capital is needed. For risk adjustments based on the
stated principles in IFRS X Insurance Contracts, the use of a cost-of-capital method would be based on
selecting an amount of capital limited to addressing the measurement objectives of the risk adjustment
under IFRS X Insurance Contracts. The measurement objective for risk adjustments is defined as limited
to the uncertainty in cash flows associated with fulfilling the insurance contracts.
Consequently, when using the cost of capital as a risk adjustment technique, it will be important to consider
how the capital amount used in the risk adjustment calculations is determined. The measurement objectives
under IFRS X Insurance Contracts define the risks appropriate for the risk adjustment for insurance
liabilities. The risk adjustment principles in IFRS X Insurance Contracts are stated in terms of risks and
uncertainties for insurance liability cash flows only. Therefore, the capital used in cost-of-capital techniques
would exclude those risks not primarily related to the cash flows underlying the insurance contract liabilities
as of a certain date. This issue of differences in the objective of capital, and therefore the amount of capital,
can be addressed by establishing a method to allocate capital between risk associated with the liability cash
flows and the other risks of the insurer for which capital is needed. In some situations, the capital needed
for insurance liability cash flows may be impacted or correlated to the risk and uncertainty that is not directly
attributable to the insurance liabilities. The allocation of a capital amount to the amount appropriate for risk
adjustments is independent of the capital considerations associated with the wider range of risks and
uncertainties unrelated to the insurance liability cash flows, e.g., to asset-related risks.
Additional comments on the cost-of-capital technique for risk adjustments are:
• There are a variety of methods for computing capital requirements, including those needed to satisfy the solvency oversight required by local insurance laws, those required by regulatory provisions of local insurance supervisory authorities, economic capital, or those considered by the insurance market as a prerequisite for insurance policyholders’ choice of insurer.
• An entity’s risk preferences include some level of risk aversion that may be expressed in terms of an amount of capital in order to provide a desired level of security specific to the risk associated with defined, but uncertain, unfavourable insurance liability fulfilment cash flows.
• The amount of capital used to estimate the cost of capital will depend on the level of security desired, an assessment of the probabilities that unfavourable cash flow outcomes will consume some or all of the capital, and the entity’s level of risk aversion regarding the uncertain, unfavourable outcomes.
• By selecting an amount of capital, the entity provides a basis for assigning a cost to a level of financial resources that the insurer has committed to securing the uncertain cash flows.
• The cost associated with maintaining this amount of capital could be used as the basis for an estimate of the compensation that the insurer requires to meet the measurement objective for the risk adjustment. However, since the risk in the fulfilment cash flows is already taken into account in the selected capital amount, any risk adjustment in the cost-of-capital rate should be avoided. Otherwise, the risk adjustment based on a risk-loaded cost-of-capital rate would be higher than the appropriate compensation for bearing the uncertainty in the fulfilment cash flows.
• This cost would be measured by applying a rate of return on the capital amount for the period of time the capital is needed to compensate for the uncertain cash flows of each period until the business has run off.
• A cost-of-capital approach can be thought of as the mean present value cost of having to hold capital in relation to the non-replicable risk over the lifetime of the insurance contract cash flows. This expected cost will depend on the entity’s uncertain fulfilment cash flows that cannot be hedged using the set of financial instruments acceptable for replication. The non-hedgeable risk depends not only on the risks of the insurance contract cash flows but also on the availability of financial instruments acceptable for replication.
Cost-of-capital rate
A simple cost-of-capital method is described as applying a single rate of return on capital to a single capital
amount for a period of time. The method is based on the following key variables:
1. The capital amounts appropriate for the risk and uncertainty for various periods of cash flows;
2. The period applicable to the capital amount;
3. The rate of return on the capital amount; and
4. The probability distribution of the uncertain cash flows, i.e., their amount and timing.
There can be interdependencies among these four variables. For example, if the capital amount is
established at a high level of confidence (i.e., a low risk that cash flows will exceed a given level of capital),
a lower rate of return on that capital might be more appropriate compared to a capital amount established
at a lower level of confidence or for a wider range of risks. For some types of cash flows the uncertainty in
them, and therefore the capital amount, might be larger for shorter periods and then decline as the period
of time lengthens and the uncertainty in the remaining cash flows lessens.
For other types of cash flows the uncertainty may increase with the period of time, such as for typical life
insurance, where the contingent event (i.e., death) could happen at any time during the coverage period,
indicating higher capital amounts relative to the remaining cash flows. Given the potentially long periods of
time for holding capital, it may be appropriate to use a yield curve to provide different rates of return on
capital depending on the length of the period applicable to the duration of assigned capital amounts.
The appropriate rate of return on capital may depend on the impact of the risk and uncertainty associated
with valuation estimates, and the purpose of the cost of capital. For example, solvency regulation of
insurance companies has developed criteria for the reporting of risk provisions (risk margins) to satisfy the
needs of financial solvency supervision. There are different reporting requirements involving capital
adequacy that use the cost-of-capital method, for example Solvency II in the European Union, the Swiss
Solvency Test, and requirements in other jurisdictions where provisions for adverse deviations or other
similar technical provisions (risk margins) are used.
For risk adjustments under IFRS X Insurance Contracts, the entity’s cost-of-capital rate would be chosen
to meet the specific measurement objectives, reflecting a rate of return consistent with the entity being
indifferent between fulfilling an insurance contract liability with a range of possible outcomes versus fulfilling
a liability that will generate fixed cash flows with the same expected value of cash flows as the insurance
contract. Like other valuation assumptions, the cost-of-capital rate reflects the entity's own preferences and experience, and will need to be reviewed and updated regularly. It is influenced by changes in the entity's risk profile, external market conditions, and other factors. The cost-of-capital rate is one of the
key quantitative parameters characteristic of the cost-of-capital method. As described earlier, these
characteristics are the cost-of-capital rate, the capital amounts at various points in time, the period of time
associated with retaining the capital amount, and the probability distribution associated with events that
present a risk to capital (both favourable and unfavourable).
Further discussion of the cost-of-capital technique
The cost-of-capital technique includes additional elements that are not used in the quantile techniques. In
particular, the measure of risk is expressed in terms of a capital amount that is based on the probability
distribution of the fulfilment cash flows. This technique expresses the compensation component in terms of
a cost that is proportionate to the measure of risk. The duration of both the fulfilment cash flows and the
uncertainty are reflected in this risk adjustment computation. While the quantile techniques also measure
risk based on the probability distribution, the risk adjustment is derived directly from the probability
distribution without any reference to the cost, duration, or other basis of providing compensation. In this respect, the cost-of-capital technique addresses shortcomings of the quantile techniques as a basis for risk adjustments.
One consideration in the selection of the cost-of-capital rate would be whether the cash flows are impacted
by future inflation. To the extent that the cost-of-capital rate is influenced by expectations about future
economic inflation, and if the future fulfilment cash flows are not also influenced by future inflation
expectations to a similar extent, then the capital amounts would not reflect the same inflation expectations.
This raises theoretical and practical issues as to how to adjust the cost-of-capital rate to remove the influence
of inflation expectations in computing the cost of capital. Also, in using the cost-of-capital technique for
computing risk adjustments, the computation of the cost of capital on a present value basis may involve the
use of a discount rate that is also influenced by future inflation expectations. If the underlying cash flows
and the associated capital amounts do not reflect future inflation expectations to the same extent as
reflected in the discount rate, similar issues will need to be addressed in how to adjust the discount rate to
remove the influence of inflation expectations.
Cost-of-capital comparison to market-consistent valuation
The valuation of insurance liabilities under IFRS X Insurance Contracts may be compared to a market-consistent valuation approach, and some common elements and differences may assist in understanding
the principles for risk adjustments under IFRS X Insurance Contracts. As discussed in section 1.3.1, under
this approach the amount of capital is a choice, which could be based on the insurer’s own capital
requirements or risk appetite, or defined based on regulatory capital adequacy requirements. It also
depends on the time horizon for which the capital amounts are estimated to be held. For example, in
Europe, a common time horizon for the capital amount is one year for market-consistent valuation, which means that the capital amount is intended to buffer risks emerging over a one-year time horizon. A one-year
horizon could be expanded by using a sequence of one-year capital amounts held over the lifetime of the
insurance contract cash flows. For long term products such as whole life insurance, it may be more
appropriate to consider a longer time horizon for the determination of the capital amounts.
The cost-of-capital rate under a market-consistent valuation can be thought of as a measure of the excess
return over the risk-free rate at which investors expect to be compensated for investing in the insurer.
Investors do not expect to be compensated for hedgeable or replicable risks, as they can hedge such risks.
Consequently, the market-consistent cost-of-capital rate would be a function of the frictional costs to the
investor, such as:
•	Cost of double taxation—The cost due to the fact that investment returns are taxed both at the level of insurance companies and investors.
•	Agency costs—The costs due to the misalignment of interests between investors and the insurers' management.
•	Financial distress costs—The costs associated with financial distress, for example, the loss of reputation or the cost of having to raise capital.
The cost-of-capital rate used by certain solvency regulatory frameworks is based on a market-consistent
valuation approach. These frameworks have specified certain parameters for the cost of capital; for example, both the Swiss Solvency Test and Solvency II specify a time horizon of one year and a cost-of-capital rate of 6%. The 6% rate was initially selected by the Swiss regulator based on estimates of the frictional cost of capital for BBB-rated insurers.
Under IFRS X Insurance Contracts, the cost-of-capital technique is referenced as a technique to estimate
risk adjustments. However, there are no specifications regarding the choice or criteria for the amount of
capital or the cost-of-capital rate. The time horizon for capital is the lifetime of the fulfilment cash flows. The
guidance in IFRS X Insurance Contracts provides a principles-based measurement objective for the risk
adjustment as the basis for determining the elements and parameters to be used for the cost-of-capital
technique.
3.1.3 Other risk adjustment techniques
In considering the three techniques discussed above (two quantile techniques and the cost-of-capital
technique) in light of the IFRS X Insurance Contracts criteria and other desirable characteristics of risk
adjustments, it is apparent that all three techniques could be acceptable under certain circumstances.
It is also apparent that no one technique will meet all of the criteria in all situations. For example, the
confidence level technique may be appropriate for a liability with a non-skewed distribution, and has the
advantage of being relatively simple to compute and explain. The cost-of-capital method could be more
appropriate for more complex or long-duration risks, but is likely to have challenges in meeting other criteria,
such as simplicity.
As noted, IFRS X Insurance Contracts does not limit the techniques for estimating risk adjustment. The
selection of techniques is made considering the principles and measurement objectives under IFRS X
Insurance Contracts. The discussion in this chapter is intended to assist the reader in understanding the
techniques and addressing key application issues for reporting under IFRS X Insurance Contracts.
As discussed in the 2009 IAA publication Measurement of Liabilities for Insurance Contracts, a risk margin
may also take the form of an explicit margin added to individual assumptions, or an adjustment made to the
discount rates. If explicit assumption margins are used for calculating the risk adjustment under IFRS X
Insurance Contracts, it could be cumbersome as each individual key assumption would need to be
evaluated for the purpose of developing an appropriate underlying margin. If simplicity is desired and a
universal percentage margin (such as 5% or 10%) is added to all relevant expectation assumptions, it would
increase the ease of implementation, but the trade-off is the loss of consistency when translating the risk
adjustment under IFRS X Insurance Contracts to a confidence level for disclosure purposes as required by
the IASB. Environmental changes over time (e.g., interest, inflation, equity markets, and legal changes) are
likely to cause the translated confidence level to vary greatly for the same 5% or 10% percentage margin.
If an adjustment to the discount rates were made for the purpose of calculating the risk adjustment
(discount-related risk margins, as referenced in the 2009 IAA publication), it would be similar to the asset
valuation. The shortcoming of making an adjustment to discount rates is that the discount rate generally
has little relevance to the insurance cash flow risks, while the insurance cash flow risks are often the most
important consideration in developing the risk adjustment. Also, there is no accepted method for
determining the discount rate risk adjustment for the purpose of calculating risk adjustments.
Below we will introduce another technique that connects insurance product pricing practices to the
development of the risk adjustment. The actuarial principles related to the pricing of aggregate insurance
risk, sometimes referred to as premium principles, are comparable to the measurement objectives for the
compensation for bearing uncertainty under IFRS X Insurance Contracts. For example, the cost-of-capital
technique is very similar to an analogous premium principle based on the return on capital. Other premium
principles have been shown to align well with market-based asset pricing models. The use of one or more
such principles may provide an alternative technique for computing risk adjustments. Some of them are
more focused on market pricing (what the market would require for bearing uncertainty), so their application
for risk adjustments would need to be calibrated to the entity-specific risk preferences rather than to market
indicators of such compensation levels. The application of actuarial premium principles, particularly the
component that provides compensation as a function of a measure of risk, may provide a more robust
technique for determining risk adjustments than techniques based on quantiles or other measures of risk.
One method of expressing risk preferences is using a risk preference model to adjust the probability
distribution. Such a model assigns lower preference-adjusted probability values to more favourable
outcomes, i.e., outcomes that have lower cash flow liabilities than the mean. For unfavourable (adverse)
outcomes, i.e., outcomes that have higher cash flow liabilities than the mean, higher preference-adjusted
probability values would be assigned. This class of risk preference models is referred to as proportional
hazard transforms. One such premium principle is the Wang Transform. Robert Miccolis and David
Heppen32 show how the Wang Transform can be applied to develop risk adjustments under IFRS X
Insurance Contracts by calibrating one transform parameter—price of risk—to expected profitability levels
for particular lines of insurance in the U.S. This approach can be adapted to other aggregations of uncertain
cash flows and their probability distributions, and calibrated to a specific entity’s profitability level for which
the entity is indifferent between fulfilling an insurance contract liability with a range of possible outcomes
versus fulfilling a liability that will generate fixed cash flows with the same expected value of cash flows.
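As an illustration only, the following Python sketch applies a Wang Transform with an assumed price-of-risk parameter to a hypothetical lognormal liability distribution and reads off the risk adjustment as the excess of the transformed mean over the best estimate. The distribution, the parameter value, and the integration grid are illustrative assumptions rather than calibrated values.

```python
import numpy as np
from scipy.stats import norm, lognorm

def wang_transform_mean(dist, lam, grid):
    """Mean of a non-negative loss distribution under the Wang Transform,
    F*(x) = Phi(Phi^-1(F(x)) - lambda), which loads adverse (right-tail) outcomes."""
    F = np.clip(dist.cdf(grid), 1e-12, 1 - 1e-12)
    F_star = norm.cdf(norm.ppf(F) - lam)
    survival = 1.0 - F_star
    # E*[X] ~= grid minimum + integral of the transformed survival function (trapezoidal rule)
    return grid[0] + float(np.sum(0.5 * (survival[1:] + survival[:-1]) * np.diff(grid)))

liability = lognorm(s=0.4, scale=1000.0)                    # hypothetical liability distribution
grid = np.linspace(0.0, liability.ppf(0.9999), 20001)
best_estimate = liability.mean()
risk_loaded = wang_transform_mean(liability, lam=0.25, grid=grid)
print(round(best_estimate, 1), round(risk_loaded, 1), round(risk_loaded - best_estimate, 1))
```

Once calibrated, the same transform parameter can be applied to the distributions of other portfolios of the same entity, which is the consistency advantage discussed below.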
The advantage of using a premium pricing principle, such as the Wang Transform, is that the transform
parameter can be used consistently as the key variable for the price for risk and applied to various
probability distributions of fulfilment cash flows for the same entity. This also permits modification to risk
adjustments that are based on changes in the probability distribution, perhaps for different portions of the
distribution, such as when evaluating the impact of different reinsurance structures or other contingent
changes in the fulfilment cash flows.
A challenge in applying this approach is the calibration of the transform parameter to the entity-specific risk
preferences. This is similar to determining the appropriate capital amounts that are relevant to the fulfilment
cash flows in the application of the cost-of-capital approach. From this perspective, the confidence level or CTE techniques are easier to apply. However, as mentioned earlier, the Wang Transform approach could provide a more consistent view of risks by linking the risk adjustment to pricing.
Examples of the application of the Wang Transform in risk adjustment estimation can be found in chapter 10.
3.1.4 Considerations for qualitative risk characteristics
The principles regarding risk adjustments under IFRS X Insurance Contracts suggest that risk adjustments
in certain situations, such as those involving extreme or remote events, would result in higher risk
adjustments due to the very limited knowledge about the current estimate, the low frequency and high
severity of the risk, and the skewed probability distribution for extreme events. For example, the
characteristic of a wide probability distribution might simply be an assessment that there is a significant potential for extremely-high-severity events, rather than a specific mathematical or statistical measure, statistic, or parameter. Such considerations are of particular concern when the
probability model of the fulfilment cash flows cannot be adequately modified to reflect these characteristics
of the uncertainty in the cash flows in a way that can be validated or substantiated.
As mentioned in chapter 2, probability models are subject to several limitations. Some are related to data,
some to assumptions, and some to the models. The measurement objective for risk adjustments under
IFRS X Insurance Contracts does not distinguish between uncertainties that can be modelled versus those
that cannot. The risk adjustment techniques discussed in this monograph are those for which some
technical approach can be applied, supported by knowledge and experience of the risk and uncertainty
involved. Consequently, their application will require adjustment for those uncertainties not reflected in the
models.
One of the qualitative risk characteristics identified in the application guidance in IFRS X Insurance
Contracts is that the less that is known about the current estimate and its trend, the higher the risk
adjustment. The current estimate in this context would be the expected present value of the cash flows. A
corollary to this characteristic would be that the less that is known about the uncertainties of the cash flows
(e.g., type of probability distribution, moments, or parameters), the larger the risk adjustment. Consequently,
there can be situations where significant uncertainties in the cash flows are driven by the degree of
knowledge and relevance about the measures of risk. Chapter 2 mentions risks that highlight these issues.
32 A Practical Approach to Risk Margins in the Measurement of Insurance Liabilities for Property and Casualty (General Insurance) under Developing International Financial Reporting Standards.
However, the risk measurement techniques described in this chapter depend on having a probability distribution, at least at the right-hand tail of the distribution where the most unfavourable outcomes have small associated probabilities.
In order to incorporate the qualitative risk characteristics into the risk adjustment, alternatives include:
•	Selecting a target level of compensation for bearing uncertainty from the modelled risk that results in a larger risk adjustment depending on the entity's judgment about the significance that is not represented in the modelled uncertainty, e.g., selecting a higher confidence level;
•	Creating a compound probability model to reflect the qualitative uncertainty, perhaps for the first or second moments (mean and variance) of the underlying probability distribution of the cash flows, based on judgments about the qualitative risk characteristics;
•	Creating scenarios and assigning weights based on judgment (subjective probabilities) to possible "states of the world";
•	Subjective rank-ordering of the qualitative risk characteristics based on judgment, combined with a calibration of entity-specific compensation that is based on eliminating only the modelled uncertainty;
•	Developing rankings, scales, or quantitative benchmarks of expected profit measures (e.g., rate of return) based on how the entity selects the minimum level of compensation it requires to bear the uncertainty for new markets, products, or policies (blocks of business) where there are significant qualitative risk characteristics; and
•	Using internal or external data to create examples or scenarios of similar qualitative risk characteristics to test the entity's risk preferences and risk appetite for such qualitative risk characteristics, and to compare levels of additional compensation that an entity would require to be indifferent between situations that require bearing the uncertainty from such qualitative risk characteristics versus situations that do not.
Some of the above suggestions involve creating or expanding a model in order to incorporate these more
qualitative risk characteristics into the risk adjustments by means of modifying the risk adjustment
technique, adding one or more elements to the technique, using an alternative technique or adjusting some
of the inputs to the technique.
In applying these suggestions, it ought to be recognised that creating quantitative measures or models for
qualitative risk characteristics requires judgment based on such factors as experience, some type of
validation where possible, and comparisons to similar situations. Since the risk adjustments are specific to
the entity and the compensation it requires, such qualitative risk characteristics are likely to be a factor in
its pricing and compensation decisions involving qualitative uncertainty characteristics in various risk
transfer decisions. These suggestions provide alternatives on how to develop quantitative risk adjustments
appropriate when there are significant qualitative risk characteristics that are not reflected in the risk
measures or probability models.
Another noteworthy issue is how to combine the considerations for such qualitative risk characteristics
when the level of aggregation for the risk adjustment is such that the significance of these risk
characteristics tends to dissipate as the level of aggregation increases. Given the overall measurement
objective for risk adjustments under IFRS X Insurance Contracts, the reporting entity’s selection of the level
of aggregation for risk adjustments will influence whether such qualitative risk characteristics are significant
enough to require separate treatment in developing the risk adjustments, and the extent of such
consideration.
Chapter 4 – Techniques and
Considerations in Quantitative
Modelling
Abstract
This chapter introduces techniques and considerations in quantitative modelling, in order to quantify the risk adjustment for financial reporting purposes. To develop a quantitative statistical model, a probability distribution is often required. The advantage of having a probability distribution is that it is more
straightforward to derive a quantitatively-based risk adjustment based on the generated statistical functions.
The chapter’s first section introduces a summary of techniques used in modelling the fulfilment cash flows.
Section 4.2 then discusses the selection of data, and the judgement applied in selecting it, for the purpose of quantitative modelling such as fitting probability distributions. In practice, it is common that available data are insufficient to allow an entity to construct a useful probability distribution from the data alone. In such cases, statistical approaches may not be suitable without significant reliance on qualitative assessments and judgements to supplement, modify, or substitute for the empirical data. This chapter
focuses on the application of statistical approaches useful for quantifying the risk adjustment based on a
selected probability distribution function for the insurance liability cash flows. Chapter 5 then discusses
qualitative assessments and other factors for consideration in developing risk adjustment.
Sections 4.3 and 4.4 introduce probability distributions that are typically used to fit insurance liability cash
flows, and techniques used in distribution fitting. Depending on the nature of the liability cash flows, the
data available, and the distributions selected for fitting to empirical data, different methods can be employed
for selecting appropriate distributions and quantifying the parameters of such distributions. These methods
include the method of moments, maximum likelihood, and the Bayesian method. Section 4.4 also discusses
how to evaluate the goodness of fit and other considerations for selecting a probability distribution and the
relevant parameters based on the results of the fit.
Building on the foundation of derived distributions, section 4.5 explores statistical tools that have been
shown to be effective and can be applied to liability cash flow modelling for the purpose of quantifying risk
adjustments. When dealing with one particular risk, a variety of modelling techniques can be applied to model the fulfilment cash flows, such as deterministic projections, stochastic Monte Carlo simulation, and bootstrapping. In quantifying
the risk adjustment associated with the fulfilment cash flows, as introduced in chapter 3, risk measures such
as confidence level or CTE, cost-of-capital, or Wang Transform may be considered. In order to estimate
risk adjustments, simplified approaches such as utilizing proxy models or stress tests could also be
considered rather than building out a full-blown fulfilment cash flow model.
When multiple risks are being considered, risk diversification can be significant when estimating risk
adjustments at a level of aggregation for such risks. One important characteristic of a coherent measure of
risk is that the aggregate amount should be less than or equal to the sum of the amounts for the measured
risks. It may be possible to model multiple risks altogether by using a Monte Carlo simulation approach to
quantify an aggregate risk adjustment and thereby implicitly capture the diversification benefit. Other
methodologies, such as the variance-covariance approach or a copula model, may also be useful to
aggregate the risks that have been analysed separately for each risk. Section 4.6 discusses risk
dependencies and copula models.
Section 4.1 Summary of techniques
Application of the techniques introduced in chapter 3, in order to estimate a risk adjustment, requires the
modelling of the fulfilment cash flows. In this chapter, statistical quantitative techniques are introduced that
can be used to generate the fulfilment cash flows for the purpose of estimating risk adjustments.
Insurance contracts can present a wide variety of risks, which require a correspondingly wide variety of
modelling approaches. For most traditional insurance products, when assumptions are set, deterministic
cash flows can usually be derived. In this case, the distribution of cash flows may need to be estimated in
order to apply a quantile type of measure for the risk adjustment. For example, if the underlying fulfilment
cash flows are assumed to follow a normal distribution, at the 99th percentile the risk adjustment can be
estimated as 2.33 times the standard deviation. The advantage of having a probability distribution is that it
is more straightforward to derive a quantitatively-based risk adjustment based on the generated statistical
functions.
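For example, a minimal Python sketch of the quantile-style calculations just described is shown below, using an assumed normal distribution with illustrative parameters (mean 1,000 CU, standard deviation 60 CU); the CTE variant works from a simulated set of outcomes.

```python
import numpy as np
from scipy.stats import norm

def quantile_risk_adjustment(mean, sd, confidence):
    """Risk adjustment as the excess of a percentile over the mean under a
    normal assumption: RA = z * sd, about 2.33 * sd at the 99% level."""
    return norm.ppf(confidence) * sd

def cte_risk_adjustment(sample, confidence):
    """Risk adjustment from simulated cash flows using CTE: the average of the
    outcomes beyond the chosen percentile, less the mean of all outcomes."""
    sample = np.sort(np.asarray(sample, dtype=float))
    cutoff = int(np.ceil(confidence * len(sample)))
    return sample[cutoff:].mean() - sample.mean()

# Hypothetical fulfilment cash flow distribution: mean 1,000 CU, sd 60 CU
print(round(quantile_risk_adjustment(1000.0, 60.0, 0.99), 1))      # about 139.6

rng = np.random.default_rng(seed=1)
simulated = rng.normal(1000.0, 60.0, size=100_000)
print(round(cte_risk_adjustment(simulated, 0.95), 1))
```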
When stochastic approaches are used in modelling the fulfilment cash flows, it is sometimes more
convenient to apply the various risk adjustment techniques. Stochastic approaches generate a distribution
of cash flows, and the expected value and standard deviation can be readily observed from the generated
sets of cash flows. In the absence of closed-form solutions, stochastic modelling provides a way to quantify
risks.
There are different types of stochastic approaches:
•	Monte Carlo simulation—This simulates repeatedly a random process for risk variables of interest covering a wide range of possible situations. To generate simulated Monte Carlo observations, one would need the underlying distribution of the basic variables that drive the process. For example, to simulate mortality and lapse decrements for a certain year, if binomial distributions are assumed, first, a random number is generated and compared with the mortality rate Q(d) to determine whether the policy is terminated by death. If the policy is determined not to be terminated by death this year, another random number is generated and compared to the lapse rate Q(w), to determine whether the policy is terminated by lapse (a simple sketch of this decrement simulation follows this list). For more complex distributions, the inverse transform method is used to simulate random variables. This method involves generating a random number from 0 to 1 and finding the corresponding value in the cumulative distribution of the simulated random variable. In general, thousands of simulations are typically generated under the Monte Carlo method in order to reduce sampling variability.
•	Bootstrapping—Bootstrapping is a resampling technique where historical observations are used to create stochastic scenarios. Rather than relying on a hypothetical distribution, this technique treats historical information as potential future observations. It is typically used in generating scenarios for capital market-related variables such as equity index prices and interest rates. For insurance transactions, this approach can be applied as well; however, some form of normalization needs to be applied to remove factors such as seasonality or to adjust for exposure. This technique has some merit because it may more closely resemble what happens in reality, and include fat tails or any other departure from theoretical distributions. However, it may be a poor approximation for small samples and it relies heavily on the assumption that each sampled variable is independent of the others.
•	Other approaches are available that may incorporate acceleration techniques to reduce the number of simulated scenarios without loss of accuracy. For example, the scenarios may be hand-picked with probabilities assigned, or chosen based on a stratified sampling technique, or an antithetic variable technique may be applied that changes the sign of all random samples to double the number of scenarios without doubling the run time.
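The following is a minimal Python sketch of the decrement simulation described in the Monte Carlo bullet above; the mortality rate Q(d) and lapse rate Q(w) are illustrative assumptions, and a real model would vary them by age, duration and other rating factors.

```python
import numpy as np

def simulate_decrements(n_policies, n_years, q_d, q_w, seed=0):
    """Monte Carlo simulation of death and lapse decrements, one policy at a time.

    q_d, q_w : assumed annual mortality and lapse rates (illustrative values only)
    Returns the simulated number of policies remaining in force at the end of each year.
    """
    rng = np.random.default_rng(seed)
    in_force = np.zeros(n_years, dtype=int)
    for _ in range(n_policies):
        for year in range(n_years):
            if rng.uniform() < q_d:        # compare a random draw with Q(d)
                break                      # policy terminates by death
            if rng.uniform() < q_w:        # otherwise compare a second draw with Q(w)
                break                      # policy terminates by lapse
            in_force[year] += 1            # policy survives the year
    return in_force

print(simulate_decrements(n_policies=10_000, n_years=5, q_d=0.01, q_w=0.05))
```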
In addition, when quantifying the risk adjustment for cash flows subject to multiple risks, it is necessary to
apply statistical techniques to capture the diversification benefit or expected correlation between the
variables. If all the risks are modelled together through a Monte Carlo simulation, the risk adjustment,
quantified based on the simulated observations, generally captures the diversification benefit. In other
cases, such as a deterministic calculation, it may not be realistic to estimate the aggregate risk adjustment
through a closed-form solution, reflective of multiple risk distributions. It is therefore important to estimate
the correlations between risks.
Common techniques applied to capture the diversification benefit include:
•	Fixed diversification, which reduces the total risk adjustment by a fixed percentage;
•	Variance-covariance approach, which applies a variance-covariance matrix to derive the aggregate risk adjustment; and
•	Copula approach, which utilizes a copula model to capture the diversification benefit.
Details regarding the aggregation techniques can be found in section 5.3.
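As a simple illustration of the second technique, the following Python sketch aggregates standalone risk adjustments with an assumed correlation matrix; the standalone amounts and the correlations are illustrative only.

```python
import numpy as np

def aggregate_risk_adjustment(standalone, correlation):
    """Variance-covariance aggregation of standalone risk adjustments:
    RA_agg = sqrt(v' R v), which does not exceed the simple sum when the
    correlations are at most 1 (consistent with subadditivity)."""
    v = np.asarray(standalone, dtype=float)
    R = np.asarray(correlation, dtype=float)
    return float(np.sqrt(v @ R @ v))

# Hypothetical standalone risk adjustments for, say, mortality, lapse and expense risk
standalone = [120.0, 80.0, 40.0]
correlation = [[1.00, 0.25, 0.25],
               [0.25, 1.00, 0.50],
               [0.25, 0.50, 1.00]]
print(round(aggregate_risk_adjustment(standalone, correlation), 1))   # about 181, below the sum of 240
```

The shortfall of the aggregate amount relative to the simple sum of the standalone amounts is the diversification benefit captured by the correlation assumptions.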
Section 4.2 Data and judgement
Statistical modelling tools rely on observing patterns in empirical data and using those patterns to make inferences about future outcomes. In the insurance world, insurance liability cash
flows generally take the form of a time series, i.e., individual data records of historical values, or aggregate
compilations of such data records, organized to include key dates and other relevant data. For example,
the timing of occurrence of a death or automobile accident could be a random variable that follows a certain
probability distribution. The magnitude, or severity, of liability claims could be another random variable. In
selecting the empirical data for historical data records for distribution fitting, the following considerations are
commonly applied:
1. What types of internal or external related data are available that can be used to study the trend
(such as mortality improvement), and estimate the size and timing of claims?
Generally, an entity’s own experience is utilized to develop studies and estimate distributions if
deemed sufficiently credible. Where an entity’s own data are not credible or do not exist for a new
line of business or a new product, relevant experience from the industry is generally used as a
reference point or basis to develop estimates. In cases where data points are not sufficiently credible (or where the quality and relevance of the data are of concern), judgment needs to be applied to assign proper weights to past experience and current assumptions.
2. Do the data represent a “single” distribution or a “compound” distribution?
For example, the annual insurance claim for a property/casualty line would typically be a
“compound” distribution, which includes both the frequency of the claim occurrence during a year
and also the severity of the claims. In this case, it may be desirable to separate the annual claim data into two streams of data, one for frequency and one for severity, and to fit them to two "single" distributions, because it is generally more challenging to fit a "compound" distribution to data without losing accuracy and quality of fit. In addition, the two separate distributions can be easily aggregated to study the annual claims (a simulation sketch follows this list), and they also provide more flexibility in modelling aggregate losses under different sensitivity scenarios (such as increased frequencies or revised severity under different claim deductible levels). However, some may also find a good distribution fit to the "aggregate" data, such as the liability cash flows or annual aggregate loss data. Practical considerations also apply if granular data collection proves to be challenging, or if modelling multiple "single" distributions leads to longer run times.
3. Is there any seasonality, cyclical behaviour, policyholder-specific behaviour, or inter-relationship
with other factors in the data or external variables?
For example, are the data from the historical insurance claims affected by inflation from year to
year? Does the magnitude of historical claims tie to the size of the premiums? Do the data clearly
present some seasonality where there are certain periods of the year that exhibit a pattern of
significantly higher claims? In such cases, the historical data may need to be normalized, adjusted
or processed to remove the seasonality or reflect the impact of inflation in order to achieve a more
appropriate fit. In addition, the data may reflect systematic policyholder behaviour or policyholder
behaviour dependent on other factors. For example, in life insurance, for certain products that credit
interest or provide guarantees to policyholders the liability cash flows may reflect systematically
less or more lapse rates, which is driven by the capital market environment at the time, such as
interest rate movement and equity market performance. Unique dependencies or drivers such as
these may need to be separately modelled instead of fitting empirical data directly into a statistical
distribution.
4. Are the data truncated or censored?
Deductibles, retentions, or limits are used in many lines of property/casualty insurance. Data
collected may be incomplete in that nothing is available for losses below some fixed currency
amount (left-truncated), or the exact loss amount is unavailable for losses above a limit (right-censored). In these cases, one needs to make appropriate adjustments when fitting distributions with such data.
5. What are the timing, length, and nature of the data?
Are monthly or yearly data being studied? How many years of data are available, and are they sufficient for probability distribution fitting? Is the occurrence of the collected data dependent on timing, which would indicate that multiple random variables are involved (e.g., timing and severity)? What is the source of the data: has external data from the industry been used, and would it require any adjustment to fit the specific entity's profile? Depending on the answers to these questions, judgment may need to be applied to adjust the data set.
6. Are there any other relevant observations in the collected data?
Are there any outliers that might be data errors or other abnormalities that should be removed or
further assessed? Are there large observations or sudden changes apparently separating different
regimes of behaviour? Do the observations have a continuous distribution, or do certain values
(i.e., exactly 0 or exactly 1) occur often enough to call continuity into question? Is there a trend
affecting the values in the data: increasing or decreasing, linear or exponential? Is an increase
more likely followed by another increase, by a decrease, or does the previous movement have no
predictive power on the direction of future changes?
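The frequency–severity separation discussed in item 2 above can be illustrated with the following Python sketch, which simulates an annual aggregate loss from an assumed Poisson frequency and lognormal severity; all parameter values are hypothetical.

```python
import numpy as np

def simulate_aggregate_losses(n_sims, freq_mean, sev_mu, sev_sigma, seed=0):
    """Compound distribution by simulation: the annual claim count is drawn from a
    Poisson frequency distribution and the individual claim sizes from a lognormal
    severity distribution; the two are combined into an annual aggregate loss."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq_mean, size=n_sims)
    totals = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])
    return totals

annual_losses = simulate_aggregate_losses(50_000, freq_mean=3.0, sev_mu=8.0, sev_sigma=1.2)
print(round(annual_losses.mean(), 0), round(np.quantile(annual_losses, 0.99), 0))
```

Sensitivity scenarios, such as increased frequency or revised severity under different deductible levels, can then be run by changing one of the two fitted "single" distributions without refitting the other.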
In general, when working with empirical insurance data from historical periods, exploratory data analysis
can be quite important to making appropriate adjustments to the data as part of the process of selecting
and fitting distributions. In some cases, plotting data points on a chart or graph can be helpful in assessing
whether it is more appropriate to aggregate the data to fit a “compound” distribution or not. Also, it can be
useful to calculate some statistics in order to form a judgement about possible probability distributions. This
judgement can incorporate qualitative information and considerations that would be difficult to find in the
empirical data, such as an understanding of the processes that generated the data, awareness of external events that could have caused changes in data behaviour, and knowledge of approaches that have proved useful in similar situations. Furthermore, this judgment can also reflect data quality, the relevance of data
about past events compared to potential future events, and the overall credibility of the data.
Section 4.3 Statistical distributions
Once the data set for distribution fitting is determined, one or multiple probability distributions can be
selected to fit it. The criteria for selecting distributions are discussed below for different classes of distributions
and their respective key properties. In some situations, empirical distributions can be utilized for
assumptions such as mortality, lapse, and morbidity, rather than probability distribution functions. For
example, mortality tables and lapse tables may be based on assumptions about mean mortality and lapse
rates that vary by attained age or policy duration, and those rates can be developed from empirical
distributions based on historical experience. However, there may be other practices that utilize predictive
models to estimate life insurance risk assumptions, such as mortality and lapse rates, whereby data are
analysed using various statistical functions. This section introduces the statistical distributions that have
been used for property/casualty, or general insurance. Empirical distributions are discussed in section 4.4.1.
4.3.1 Criteria for probability distribution families
Familiar probability distributions, such as the Normal, Poisson or Gamma, have many convenient
mathematical properties and are widely used in statistics for that reason. However, there is no fundamental
physical or economic reason why data should conform to a simple mathematical form of a probability
distribution. Indeed, the mathematical simplicity that arises from a small number of parameters is typically
too restrictive in many applications when tested with empirical data. Modellers typically need to look for
larger, more flexible, distribution families in order to provide a more robust representation of observed data
indicative of past behaviour or predictive of future behaviour.
In insurance work, it is often necessary to calibrate a distribution to positive data sets, such as those arising
from claim amounts. Hogg and Klugman’s Loss Distributions book provides a comprehensive treatment of
such distributions.
More general applications may require distributions that can take arbitrarily large positive or negative
values. Indeed, where distributions are used to represent very large positive values, it is common to
transform the variables by taking logarithms. The next section explores three specific families
of distributions that have been found useful for capital and risk margin calculations.
Knowledge about a wide variety of distribution families is very useful when selecting an appropriate class
of probability models for a particular application. But there is another reason for investigating many
distributions: to test the robustness of statistical techniques since the risk from model mis-specification can
be significant. For example, one might investigate whether fitting a statistical model from class A indicates incorrect commercial decisions if the data were in fact generated from a different model class B. In order to provide a full picture of when model A works, and what might cause it to produce inappropriate results, it is advisable to test against some alternative models B.
In general terms, the requirements for a suitable distribution family can be distinguished between location,
scale, and shape parameters. A change in location means shifting a distribution to the right or the left. A
change in scale means shrinking or expanding a distribution. In each of these cases, the fundamental shape
of the distribution is unchanged. This implies, for example, that all normal distributions can be considered
to have the same shape, as any two normal distributions can be related by changes in location and scale.
Furthermore, in addition to the parameters that define the location, scale, and shape, a general
understanding of the shape itself will sometimes help inform a suitable selection.
For example, for a probability density plot as shown below, it is likely that a normal distribution would be
selected to fit the data, with a location of around 1,000 CU, considering it is bell-shaped and no obvious
skewness is observed. If skewness is observed, a lognormal distribution may be considered by plotting the
logarithm of the data values and observing if the empirical distribution of the logarithmic values appears to
be a normal distribution.
[Figure: "Probability Distribution of Insurance Liability" – a bell-shaped probability density plot over values of approximately 872 to 1,105 CU, centred near 1,000 CU]
It is usually necessary to consider a variety of different distribution shapes. In particular, many distributions encountered in practice have fatter tails than the normal distribution. Observed distributions for insurance data tend to have too many extreme events relative to a normal distribution, even if the usual measures of scale (such as standard deviation) are the same. This implies that a shape parameter associated with tail fatness is required. In such cases, therefore, a suitable parametric family is one whose thinnest-tailed member is the normal distribution.
It may also be necessary to consider asymmetric distributions skewed to the left or to the right. This
generates the need for a further shape parameter to cover asymmetry, in addition to the tail fatness
parameter. Thus, families of distributions used in practice are more likely to have four parameters: one for
location, one for scale, one for tail fatness, and one for asymmetry.
4.3.2 Families of distributions
This section introduces different families of distributions that can be considered in fitting insurance data for
property/casualty or general insurance.
In the modelling of insurance losses, the exponential family of distributions is most commonly used. The
exponential family includes many of the most common distributions, including the Normal, Lognormal,
Exponential, Pareto, Geometric, Gamma, Binomial, Negative Binomial, Weibull, Chi-squared, Beta,
Bernoulli, and Poisson. These common distributions include continuous distributions and discrete
distributions that are useful when modelling probability distributions for claim counts. For details of some
exponential distributions, refer to appendix F of the IAA’s education monograph Stochastic Modeling –
Theory and reality from an actuarial perspective.
For example, in practice, Geometric, Poisson, and Negative Binomial have been used in modelling loss
frequency (claim count). Exponential, Weibull, and Gamma have been used in modelling the severity of
loss (size of loss), whereas Gamma is also used in the modelling of aggregate loss.
The table below provides the forms of the probability density functions of a number of distributions from the
exponential family.
Figure 4.1

Distribution: Probability density (or mass) function
Bernoulli: $p^x (1-p)^{1-x}$, $x \in \{0, 1\}$
Binomial: $\binom{n}{x} p^x (1-p)^{n-x}$, $x = 0, 1, \dots, n$
Poisson: $e^{-\lambda} \lambda^x / x!$, $x = 0, 1, 2, \dots$
Geometric: $p (1-p)^x$, $x = 0, 1, 2, \dots$
Negative Binomial: $\binom{x+r-1}{x} p^r (1-p)^x$, $x = 0, 1, 2, \dots$
Exponential: $\lambda e^{-\lambda x}$, $x > 0$
Gamma: $\beta^{\alpha} x^{\alpha-1} e^{-\beta x} / \Gamma(\alpha)$, $x > 0$
Normal: $\dfrac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right)$, i.e., $X \sim N(\mu, \sigma^2)$
Lognormal: $X = e^{\mu + \sigma Z}$, where $Z$ is a standard normal variable
Where symmetric distributions are considered in distribution fitting, useful distribution families to consider
are the Student T, symmetric Exponential Generalized Beta of the second kind (EGB2), and symmetric
Johnson unbounded. Each of these families has one parameter, describing tail fatness. All of these families
include the Normal distribution as a limiting case.
The table below shows a number of representative distributions from these three families. These can all be
modified by shifting and scaling, which are omitted here.
Figure 4.2

Distribution: Probability density function
Cauchy (Student T1): $\dfrac{1}{\pi (1+x^2)}$
Student T2, scaled by $\sqrt{2}$: $\dfrac{1}{2 (1+x^2)^{3/2}}$
Pearson Type VII (scaled version of Student T with $v$ degrees of freedom): $\dfrac{\Gamma\!\left(\frac{v+1}{2}\right)}{\sqrt{\pi}\,\Gamma\!\left(\frac{v}{2}\right)} \, (1+x^2)^{-\frac{v+1}{2}}$
Logistic: $\dfrac{e^x}{(1+e^x)^2}$
Hyperbolic secant: $\dfrac{e^{x/2}}{\pi (1+e^x)}$
Laplace: $\dfrac{1}{\sqrt{2}} \, e^{-\sqrt{2}\,|x|}$
Symmetric EGB2 ($\alpha=\tfrac{1}{2}$: hyperbolic secant; $\alpha=1$: logistic; $\alpha=0$: Laplace; normal limit as $\alpha$ tends to infinity): $\dfrac{\Gamma(2\alpha)}{\Gamma(\alpha)^2} \, \dfrac{e^{\alpha x}}{(1+e^x)^{2\alpha}}$, where $\Gamma(z)$ is Euler's gamma function
Symmetric Johnson unbounded (normal limit as $\delta$ tends to 0): $\dfrac{\delta}{\sqrt{2\pi (1+x^2)}} \, \exp\!\left\{ -\dfrac{\delta^2}{2} \left[ \ln\!\left( x + \sqrt{1+x^2} \right) \right]^2 \right\}$
The three symmetric families can be further generalized to reflect asymmetry by including extra parameters
in the distribution functions. Below are representative examples of the asymmetric families:

Figure 4.3

Distribution: Probability density function
Pearson IV ($v=1$): $\dfrac{\gamma \, e^{\gamma \tan^{-1}(x)}}{\left( e^{\pi\gamma/2} - e^{-\pi\gamma/2} \right) (1+x^2)}$
Pearson IV ($v=2$): $\dfrac{(1+\gamma^2) \, e^{\gamma \tan^{-1}(x)}}{\left( e^{\pi\gamma/2} + e^{-\pi\gamma/2} \right) (1+x^2)^{3/2}}$
Pearson IV (general $v$): $\dfrac{2^{\,v-1} \left| \Gamma\!\left( \frac{v+1+i\gamma}{2} \right) \right|^2}{\pi \, \Gamma(v)} \, \dfrac{e^{\gamma \tan^{-1}(x)}}{(1+x^2)^{(v+1)/2}}$
EGB2($\alpha$, 1): $\dfrac{\alpha \, e^{\alpha x}}{(1+e^x)^{\alpha+1}}$
EGB2(1, $\beta$): $\dfrac{\beta \, e^{-\beta x}}{(1+e^{-x})^{\beta+1}}$
EGB2($\alpha$, $\beta$): $\dfrac{\Gamma(\alpha+\beta) \, e^{\alpha x}}{\Gamma(\alpha)\,\Gamma(\beta)\,(1+e^x)^{\alpha+\beta}}$
It is also important to consider another family of distributions under the scope of this monograph. For the
purpose of estimating the risk adjustment, the right tail of the liability or loss distribution can be particularly
important. Consequently, it may not be necessary to consider the entire distribution. The normal distribution
is the important limiting distribution for sample sums or averages, as is made explicit in the central limit
theorem. Another family of distributions is important in the study of the limiting behaviour of sample extrema.
This is the family of extreme value distributions. In general, when considering the maximum observations
from each of a sample of independent and identically distributed random variables, as the size of sample
increases, the distribution of the maximum observation, H(x), converges to the generalized extreme value
distribution (GEV).
The GEV distribution can be defined as follows:

$$H(x) = \Pr(X \le x) = \begin{cases} \exp\!\left\{ -\left( 1 + \gamma\,\dfrac{x-\alpha}{\beta} \right)^{-1/\gamma} \right\} & \text{if } \gamma \ne 0; \\[2ex] \exp\!\left\{ -e^{-(x-\alpha)/\beta} \right\} & \text{if } \gamma = 0, \end{cases}$$
where γ defines the shape of the tail, α defines the location, and β defines the scale.
In statistical theory, another value of interest is a high threshold, beyond which exceedances of the observations are examined. As this threshold value increases, the conditional distribution of the excess over the threshold approaches the generalized Pareto distribution, which is defined as follows:

$$\Pr(X - \mu \le x \mid X > \mu) = \begin{cases} 1 - \left( 1 + \gamma\,\dfrac{x}{\beta} \right)^{-1/\gamma} & \text{if } \gamma \ne 0; \\[2ex] 1 - e^{-x/\beta} & \text{if } \gamma = 0. \end{cases}$$
This limiting property lends itself readily to the modelling of insurance tail (high-value) losses.
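A minimal Python sketch of such a peaks-over-threshold fit is shown below: exceedances over a high threshold are fitted with a generalized Pareto distribution using scipy. The simulated "claims" data, the threshold choice, and the tail probability evaluated at the end are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto, lognorm

# Simulated claim data standing in for observed large losses (illustrative only)
rng = np.random.default_rng(seed=42)
claims = lognorm(s=1.0, scale=10_000.0).rvs(size=5_000, random_state=rng)

threshold = np.quantile(claims, 0.95)                 # pick a high threshold
exceedances = claims[claims > threshold] - threshold

# Fit the generalized Pareto to the exceedances, with the location fixed at zero
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

# Tail estimate: probability of a loss above some large amount x
x = 200_000.0
p_exceed_threshold = (claims > threshold).mean()
p_tail = p_exceed_threshold * genpareto.sf(x - threshold, shape, loc=0.0, scale=scale)
print(round(shape, 3), round(scale, 1), p_tail)
```

The choice of threshold involves a trade-off between bias (threshold too low) and sampling error (too few exceedances), and would normally be tested over several candidate thresholds.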
4.3.3 Moments and cumulants
Moments are quantitative measures that provide useful information about a probability distribution.
Understanding moments can be helpful in determining the appropriate statistical distribution that fits the
experience data. Any distribution can be characterized by a number of these moment measures, such as
the mean, the variance, the skewness, and the kurtosis. Formally, the n-th moment of a continuous function
f(x) about a value c is defined as:
$$m_n = \int_{-\infty}^{\infty} (x - c)^n f(x)\, dx$$
When c=0, this measure is called the "raw moment", which can also be expressed as $E(X^n)$. When c is the
mean, it is called the central moment.
The first raw moment, or first moment about zero, is referred to as the distribution’s mean, or expectation.
In higher orders, the central moments (moments about the mean) generally provide more information about
the distribution’s shape. Variance is the second central moment, which represents the width or dispersion
of the distribution. The third central moment, when normalized, is called skewness, which measures the
lopsidedness of the distribution. Any symmetric distribution will have a skewness of zero. A distribution that
has a fat tail on the right has a positive skewness. Kurtosis relates to the fourth central moment. Kurtosis (more precisely, excess kurtosis) is defined as the normalized fourth central moment minus 3, where 3 is the normalized fourth central moment of the normal distribution (note that some textbooks do not subtract three). Kurtosis represents the likelihood of extreme values; if a distribution has long tails, its kurtosis is positive.
The moments are useful tools to summarize key properties of a distribution. They can also be estimated
readily from data, allowing comparison between properties of the data and corresponding properties for
fitted distributions.
The moments satisfy some important identities under scaling. For an arbitrary constant λ, it can be seen that

$$m_n(\lambda X) = E(\lambda^n X^n) = \lambda^n E(X^n) = \lambda^n m_n(X).$$

Additionally, if X and Y are independent (in the statistical sense), then

$$m_n(XY) = m_n(X)\, m_n(Y).$$

The behaviour under shifting is a little more complex; it can be shown using the binomial theorem that

$$m_n(X + c) = \sum_{r=0}^{n} \frac{n!}{r!\,(n-r)!}\, c^{\,n-r}\, m_r(X).$$
For many purposes, it is more convenient to deal with Thiele’s cumulant construction, instead of moments.
Cumulants are defined in terms of moments, and, conversely, moments are easily reconstructed from
cumulants. Thus, if two distributions share the same moments (up to a given order) then they also share
cumulants to that order.
Cumulants are defined as follows:
$$k_1 = m_1$$
$$k_2 = m_2 - m_1 k_1$$
$$k_3 = m_3 - m_2 k_1 - 2 m_1 k_2$$
$$k_4 = m_4 - m_3 k_1 - 3 m_2 k_2 - 3 m_1 k_3$$
$$k_{n+1} = m_{n+1} - \sum_{j=0}^{n-1} \binom{n}{j}\, m_{n-j}\, k_{j+1}$$

Similar to moments, the cumulants satisfy the scaling property $k_n(\lambda X) = \lambda^n k_n(X)$. While moments are easy to compute for products of independent variables, there is a corresponding cumulant theorem for sums of independent random variables: if X and Y are independent, then

$$k_n(X + Y) = k_n(X) + k_n(Y).$$
As a corollary, the first cumulant of a distribution is the mean; the second is the variance, and for a normal
distribution all subsequent cumulants are zero. For symmetric distributions, the third and higher odd
cumulants are all zero if they exist.
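The following Python sketch (illustrative only) computes the first four sample raw moments, converts them to cumulants using the recursion above, and derives the skewness and excess-kurtosis statistics used later for moment fitting; the simulated lognormal data are a stand-in for experience data.

```python
import numpy as np

def sample_cumulants(x):
    """First four sample cumulants, plus the scaled shape statistics used for
    moment fitting: skewness = k3 / k2^(3/2), excess kurtosis = k4 / k2^2."""
    x = np.asarray(x, dtype=float)
    m = [np.mean(x ** n) for n in range(1, 5)]          # raw moments m1..m4
    k1 = m[0]
    k2 = m[1] - m[0] * k1
    k3 = m[2] - m[1] * k1 - 2 * m[0] * k2
    k4 = m[3] - m[2] * k1 - 3 * m[1] * k2 - 3 * m[0] * k3
    return k1, k2, k3, k4, k3 / k2 ** 1.5, k4 / k2 ** 2

rng = np.random.default_rng(seed=3)
data = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
print([round(v, 4) for v in sample_cumulants(data)])
```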
Section 4.4 Distribution fitting
4.4.1 Empirical distribution
When constructing a probability model, if the data are of sufficient quality and quantity to adequately
represent the population, an empirical distribution can be developed. For instance, in actuarial practice for
life insurance, mortality rates and policy surrender rates are often modelled based on empirical distributions.
For illustration, the following chart demonstrates the probability of death at each attained age based on a
set of raw experience data, and also a probability curve after fitting. Insurers often smooth (graduate) the mortality rates by fitting the raw probability curve with a polynomial function or by applying a graduation technique. The purpose is to preserve the normally strictly increasing pattern of mortality rates by age while balancing goodness of fit and smoothness. Graduation techniques commonly used by actuaries include Whittaker-Henderson and P-spline graduation, which are explained thoroughly in the actuarial literature.
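As an illustration of one of the graduation techniques mentioned, the following Python sketch applies a basic Whittaker-Henderson graduation to noisy crude rates; the crude rates, the weights, the smoothing parameter h, and the difference order z are illustrative assumptions, and production graduations would typically weight by exposure and test several values of h.

```python
import numpy as np

def whittaker_henderson(rates, weights=None, h=10.0, z=3):
    """Whittaker-Henderson graduation of crude rates: minimises a weighted fit
    term plus h times the sum of squared z-th differences (a smoothness penalty),
    by solving (W + h * D'D) v = W q for the graduated rates v."""
    q = np.asarray(rates, dtype=float)
    n = len(q)
    W = np.diag(np.ones(n) if weights is None else np.asarray(weights, dtype=float))
    D = np.diff(np.eye(n), n=z, axis=0)        # z-th order difference operator
    return np.linalg.solve(W + h * D.T @ D, W @ q)

# Crude mortality rates by age with some sampling noise (illustrative only)
ages = np.arange(40, 91)
true_rates = 0.0008 * np.exp(0.09 * (ages - 40))
rng = np.random.default_rng(seed=7)
crude = true_rates * rng.normal(1.0, 0.15, size=len(ages))
graduated = whittaker_henderson(crude, h=50.0, z=3)
print(np.round(graduated[:5], 5))
```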
[Figure: "Probability of death by age" – raw experience data and the fitted (graduated) mortality curve for attained ages 40 to 90, with probabilities ranging from about 0 to 0.2]
There are often industry mortality tables available that are used as a reference table. Sometimes, a review
of the empirical distribution of the mortality rates may suggest the application of a multiplier approach by
applying an average multiplier determined based on a comparison of the experience data to the experience
suggested by an industry table. In some cases, the empirical distribution follows a similar shape to the
industry table. In others an empirical distribution might be used for ages where credible data exists,
extended by using a multiplier approach for younger and older ages where only limited data exist.
An empirical distribution is a non-parametric approach, which mostly relies on the experience data and
does not make any parameter assumptions. It is generally only used when the data are of sufficient quantity
and quality, with a disadvantage of being limited by the depth and comparability of the historical data.
4.4.2 Method of moments
The method of moments involves finding an analytical distribution whose moments are equal, or as close
as possible, to the moments of a data sample. The number of moments fitted equals the number of
parameters in the fitted distribution. The four-parameter families permit fitting to the first four moments.
Equivalently, these can be fitted to the first four cumulants. Practical calculations usually proceed by the
following steps:
(a) Estimate the sample moments of the data. At its simplest, the k-th moment is estimated as the sample mean of $x_i^k$ over all observations $x_i$.
(b) Convert the sample moments to sample cumulants. Scale the sample cumulants to obtain two shape parameters: the sample skewness $= k_3 / k_2^{3/2}$ and the sample kurtosis $= k_4 / k_2^{2}$.
(c) Plot the sample skewness and kurtosis on a chart of the theoretical skewness and kurtosis for the parametric family under consideration. Typically, for each value of skewness there is a theoretical minimum kurtosis that can be achieved within a given distribution family. The set of achievable (skewness, kurtosis) combinations is called the feasible region.
(d) If the sample skewness and kurtosis lie within the feasible region, the shape parameters of the
distribution are determined. Otherwise, it is necessary to find the point on the feasible region
boundary that lies closest to the sample skewness and kurtosis.
(e) Choose the location and scale parameters of the fitted distribution to replicate the sample mean
and variance of the data.
For a more technical discussion of the theory and application of the method of moments, refer to chapter
I.C of Stochastic Modeling – Theory and reality from an actuarial perspective.
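A simpler two-parameter illustration of the method of moments is sketched below in Python: a gamma distribution is chosen so that its mean and variance match the sample mean and variance. The four-parameter procedure above extends the same idea to skewness and kurtosis; the data here are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import gamma

def fit_gamma_by_moments(data):
    """Two-parameter method of moments: choose the gamma shape and scale so that
    the fitted mean and variance equal the sample mean and variance
    (mean = shape * scale, variance = shape * scale**2)."""
    x = np.asarray(data, dtype=float)
    mean, var = x.mean(), x.var()
    scale = var / mean
    shape = mean / scale
    return shape, scale

rng = np.random.default_rng(seed=11)
sample = gamma(a=2.5, scale=400.0).rvs(size=20_000, random_state=rng)
shape_hat, scale_hat = fit_gamma_by_moments(sample)
print(round(shape_hat, 2), round(scale_hat, 1))        # close to the true (2.5, 400.0)
```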
4.4.3 Maximum likelihood
The method of maximum likelihood, due to Fisher, is an alternative approach to parameter estimation. Here
the application described is for continuous random variables.
Its objective is to calculate the likelihood, i.e., the multiplicative product of the probability density evaluated
at each observation given a test parameter value. For large data sets, this can get close to zero or very
large, so it is usually more convenient to look at the empirical mean log likelihood, i.e., the sum of the log
density at each observation, divided by the number of observations. The mean log likelihood is then a
function of the data and of the test parameter. The maximum likelihood estimator is the test parameter that
produces the highest sample mean log likelihood, or equivalently, the highest product of likelihoods.
Stochastic Modeling – Theory and reality from an actuarial perspective provides extensive discussions and
illustrative examples on the theory and application of the maximum likelihood method, which can be found
in chapter I.C and appendix F.
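For illustration, the following Python sketch fits a gamma distribution by maximum likelihood, minimising the negative mean log-likelihood numerically; the simulated data and the starting values are assumptions made purely for the example.

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize

def gamma_mle(data):
    """Maximum likelihood fit of a gamma distribution by minimising the
    negative mean log-likelihood over the shape and scale parameters."""
    x = np.asarray(data, dtype=float)

    def neg_mean_loglik(params):
        shape, scale = params
        if shape <= 0 or scale <= 0:
            return np.inf                      # reject infeasible test parameters
        return -np.mean(gamma.logpdf(x, a=shape, scale=scale))

    start = np.array([1.0, x.mean()])          # crude starting values
    result = minimize(neg_mean_loglik, start, method="Nelder-Mead")
    return result.x

rng = np.random.default_rng(seed=5)
sample = gamma(a=2.5, scale=400.0).rvs(size=20_000, random_state=rng)
print(np.round(gamma_mle(sample), 2))          # close to the true (2.5, 400.0)
```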
4.4.4 Bayesian methods
An alternative class of methods for estimation is known as Bayesian methods. They treat not only the data
observations xt as random variables, but also the unknown parameter θ. This is in contrast to classical
statistical inference in which the “true” parameter is treated as an unknown constant, while estimates of that
parameter are random variables.
The Bayesian model is then specified using two distributions:
1. The distribution of X given the parameter θ, as is required for all methods; and
2. A "prior" distribution for the parameter θ.
Ideally, there would be a large number of historic data series in which the true parameter is known, for
which a distribution of parameters could be calibrated. Using this parameter distribution, a joint distribution
of the parameters and the data could be constructed. A forecast can be constructed by using the distribution
of the next data point given the data already observed.
In symbols, the calculations proceed as follows.
Denote the prior parameter distribution by p(θ), assumed to be a discrete distribution, and denote the
conditional distribution of x given θ by f(x|θ).
Assuming independent samples, the distribution of the parameter given the data is obtained from Bayes' theorem:

$$p_{\text{posterior}}(\theta_i \mid x) = \frac{p(\theta_i) \prod_{j=1}^{n} f(x_j \mid \theta_i)}{A}$$

where A is the sum of the numerator over all values of θ.

The conditional distribution for the next observation $x_{n+1}$ is then given by:

$$f(x_{n+1} \mid x_1, \dots, x_n) = \frac{\sum_{i=1}^{i_{\max}} p(\theta_i) \prod_{j=1}^{n+1} f(x_j \mid \theta_i)}{\sum_{i=1}^{i_{\max}} p(\theta_i) \prod_{j=1}^{n} f(x_j \mid \theta_i)}$$
For a more technical discussion on the Bayesian methods, refer to chapter F.9 of Stochastic Modeling –
Theory and reality from an actuarial perspective.
4.4.5 Use of judgement and prior knowledge
While ideally a large dataset of historic “true” parameter values would be valuable for calibrating prior
distributions, in practice such large datasets are seldom available. Consequently, it is usually not possible
to observe any true parameters at all; at best there is one estimated value.
In the absence of firm evidence to support a prior distribution, statisticians are divided as to the best course of action.
“Bayesians” would advocate subjective processes to choose a prior distribution. Such processes may be
described as a “judgement” or “experience”. These will necessarily be personal to the expert engaged to
choose the distribution. Other experts may produce different distributions. Given the extent of subjectivity
necessary, it is difficult for insurers to know how much confidence to place in the expert views. Comfort
might be gained from three methods:
•	Background checks on the expert, including qualifications, relevant experience, and career history, and the retrospective accuracy of any previously published forecasts;
•	Transparency of the approach taken and assumptions used; and
•	Benchmarking views of more than one expert.
While these methods have some merit, care is needed in assessing the results of expert input, regardless
of the quality of the expert opinion. For example, at any time there are a large number of potential experts,
some of whom will have a good forecasting track record by good luck rather than skill. The bias is
exacerbated when experts selectively report their historic successes and not their failures.
If there are enough relevant data to validate an expert’s predictions, then the data could be used in a
conventional statistical analysis to give a more objective model calibration, side-stepping the need for
judgement. Some of the most challenging areas of judgement, including long-term trends in longevity and the
effects of climate change, will remain untestable for many years pending the emergence of data.
Expert behaviour may change as a result of the widespread use of benchmarking. Experts may find they are
paid more when their views are consistent with those of other experts and with what the industry wants to hear.
Challenging or contrarian views may then not be effectively propagated in the market for opinions, and an
appearance of consensus can arise from the censorship of other views rather than from broadly-based agreement.
4.4.6 Evaluation of models and goodness of fit
The table below compares the advantages and disadvantages of the different methods, including using the
empirical data to construct an empirical distribution.
Figure 4.4 – Advantages and disadvantages of the fitting approaches

Empirical distribution
  Advantages: Free of model assumptions.
  Disadvantages: Lumpy; never produces observations more extreme than the largest historic observation.

Method of moments fit
  Advantages: Easy to compute; parsimonious description of the fitted model.
  Disadvantages: Cannot fit distributions with infinite moments; limited feasible range; arbitrarily assumes a fit over the range of the distribution when only a small number of points are used in the fitting.

Maximum likelihood fit
  Advantages: Easy to compare distributions; asymptotically efficient estimate if the distribution is known to be from the fitted family.
  Disadvantages: Difficult to prove an estimate maximises the likelihood; numerical problems with multiple local minima, unbounded functions, etc.; vulnerable to model misspecification: any distribution is a maximum likelihood fit when the alternatives are implausible.

Bayesian methods
  Advantages: Includes parameter uncertainty in the distribution forecast.
  Disadvantages: Arbitrariness of the prior distribution; calculation of the posterior distribution is numerically challenging.
Once the fitting is conducted, the next step is to evaluate the goodness of fit and make a model choice.
Fitting a distribution to a set of data does not guarantee that the fitted distribution adequately captures
relevant features of the data. The fitted distribution might be the best fit from a convenient mathematical
family, but could still be a poor fit to the data in absolute terms or in the event of changes in circumstances
between the time of the projection and actual experience. On the other hand, any data set is inevitably
lumpy, and alarm at deviations between empirical and fitted distribution may be misplaced if the deviations
could plausibly have arisen from random sampling. It is therefore important to monitor these deviations and
devise tests of whether they are statistically significant.
The simplest test is the binomial test. Suppose a test is needed to determine whether a random sample is
drawn from a continuous distribution with cumulative probability function F. A quantile could be picked, for
example the lower quartile q25% satisfies F(q25%) = 25%. Out of n observations, one quarter are expected
to lie below q25%. However, the observed proportion in a finite data set may not be exactly 25%. Instead,
under the null hypothesis (i.e., assuming F is correctly specified), the sample proportion has a binomial
distribution with mean 0.25 and variance 0.1875/n. For large n, that distribution is approximately normal. A
value far from 0.25 might lead one to reject the null hypothesis, that is, to conclude that the data set is
probably not a random sample from the claimed function F. For a test size of 5%, the null hypothesis is
rejected if:

| observed proportion − 1/4 | > 0.85 / √n
Here, the constant 0.85 is 1.96 (the standard normal 97.5%-ile) multiplied by the square root of (0.1875).
A potential criticism of the binomial approach is the need to select what could be viewed as an arbitrary
percentile. This can be addressed by applying a Kolmogorov-Smirnov test (known informally as a K-S test).
In addition, the chi-squared test is a commonly used non-parametric test. Both the K-S test and the
chi-squared test are discussed in detail in chapter F.6 of Stochastic Modeling – Theory and reality from
an actuarial perspective.
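For illustration, the sketch below applies the binomial quantile check described above, and a K-S test, in R to a simulated sample; the fitted distribution and its parameter values are hypothetical.

```r
set.seed(123)
n <- 500
x <- rnorm(n, mean = 10, sd = 2)     # sample to be tested (simulated here for illustration)

# Hypothesised (fitted) distribution: normal with these parameters (illustrative)
mu_fit <- 10
sd_fit <- 2

# Binomial test at the lower quartile: reject if the observed proportion below q25%
# is further than 0.85 / sqrt(n) from 0.25 (5% test size, as derived above)
q25 <- qnorm(0.25, mu_fit, sd_fit)
prop_below <- mean(x < q25)
reject_binomial <- abs(prop_below - 0.25) > 0.85 / sqrt(n)

# Kolmogorov-Smirnov test against the same fitted distribution
ks <- ks.test(x, "pnorm", mean = mu_fit, sd = sd_fit)

c(prop_below = prop_below, reject_binomial = reject_binomial, ks_p_value = ks$p.value)
```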
Section 4.5 Modelling of fulfilment cash flows
The modelling of fulfilment cash flows is essentially the modelling of the wide variety of risks from insurance
contracts.
Many of the risks involved in insurance contracts are demographic risks or underwriting risks, such as
mortality, morbidity, longevity, incidence of insured event, catastrophe, and policyholder behaviour options.
Insurance contracts sometimes also present exposure to market risks if the fulfilment cash flows are
dependent on equity market performance or interest rate movement.
4.5.1 Parametric versus stochastic
Many educational textbooks cover the modelling of insurance losses and benefits. In general, once the
underlying distribution for the modelled risk or insurance variable is derived (see chapter 4 for approaches
to derive probability distributions), insurers should be able to generate fulfilment cash flows in a
deterministic fashion. If multiple random variables are involved and closed-form parametric solutions are not
possible, a Monte Carlo simulation approach may be more appropriate; in that case, the inverse transform
method can be applied to simulate the insurance variables based on the underlying distribution.
For example, let us assume the number of claims in a year follows a Poisson distribution, and the average
size of a claim follows a Pareto distribution. The aggregate claim in a year then depends on the number of
claims in that year and the size of each of them. In this case, insurers can use a frequency-severity
simulation approach to generate aggregate claims for future years. Take frequency, for example; the
probability function follows:
P(x) = e^(−λ) λ^x / x!,   where x = 0, 1, 2, 3, …

This formula provides the probability of a Poisson-distributed random variable having the value x for a
given parameter λ. Once a uniform random number is simulated, insurers can determine the corresponding
simulated frequency x by comparing the random number with the cumulative probability of the Poisson-distributed
random variable, which is the sum of the probability function over all values less than or equal to x. Once the
frequency x is determined for a year, the size of each of the x claims can then be simulated
based on the claim severity that has a Pareto distribution, assuming that each claim is independent of
another. Pareto has a continuous probability function, so a claim size can be solved for by setting the
cumulative probability function equal to a simulated random number.
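The frequency-severity simulation described above can be sketched as follows; the Poisson and Pareto parameters are purely illustrative, and the Pareto claim sizes are generated with the inverse transform method applied to a Pareto (Lomax) cumulative distribution function.

```r
set.seed(2016)
n_years <- 10000          # number of simulated years
lambda  <- 3              # Poisson claim frequency (illustrative)
alpha   <- 2.5            # Pareto shape (illustrative)
theta   <- 50000          # Pareto scale (illustrative)

# Inverse transform for a Pareto (Lomax) severity: F(y) = 1 - (theta / (theta + y))^alpha
r_pareto <- function(n, alpha, theta) theta * ((1 - runif(n))^(-1 / alpha) - 1)

aggregate_claims <- numeric(n_years)
for (i in seq_len(n_years)) {
  # Frequency: inverse transform of the Poisson CDF (qpois compares a uniform draw
  # with the cumulative probabilities, as described above)
  freq <- qpois(runif(1), lambda)
  # Severity: one independent Pareto draw per claim
  aggregate_claims[i] <- if (freq > 0) sum(r_pareto(freq, alpha, theta)) else 0
}

mean(aggregate_claims)
quantile(aggregate_claims, c(0.75, 0.95, 0.995))
```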
Between the deterministic parametric approach and the Monte Carlo stochastic simulation, scenario
analysis can also be applied. It has been utilized in the area of corporate risk management and asset liability
management, often with a focus on interest rate risk, but it can also be applied for a variety of other risks.
Multiple scenarios are designed to accommodate various events. A scenario is specified as a set of “paths”
that will be taken by relevant risk factors. Scenarios all follow the same time horizon and time steps in order
to allow for consistent projection of events. Scenario analysis provides the benefit of performing a “what if”
analysis to help understand the impact of particular cash flow scenarios. In financial reporting, it can also
aid understanding of anticipated future outcomes for the purpose of estimating the insurance liabilities, but
each scenario will need to be assigned a probability. Under the Monte Carlo simulation, each randomly
generated scenario is assigned an equal probability. Depending on how the scenarios are designed, under
scenario analysis, a limited number of scenarios can be used to reduce the run time and each scenario
may have its own probability.
For reference, detailed discussion of stochastic modelling for general insurance can be found in Stochastic
Claims Reserving in General Insurance, written by P.D. England and R.J. Verrall.
4.5.2 Policyholder behaviour
Policyholder behaviour refers to the decisions that policyholders make in the selection and utilization of
benefits and guarantees embedded in insurance products. Policyholder behaviours can be observed in
lapse, surrender, and partial withdrawal activities, etc., which generally have a direct impact on the financial
performance of insurers. Policyholder behaviour has always been an important aspect in actuarial analysis
and modelling but continues to receive increased attention due to the risks and uncertainties involved. In
some situations, the primary policyholder decision is whether to continue paying premiums or to surrender
or convert the product. The modelling of policyholder behaviour becomes much more complex where more
complex insurance products provide greater flexibility and options to policyholders, such as increased
investment components embedded in the products, over which policyholders have control.
As the insurance needs of customers and financial markets become increasingly connected, policyholder
behaviours have become more dynamic and are subject to change due to the external environment rather
than simply a function of the policy characteristics. For example, when a life insurance product offers a
crediting rate to the account value, policyholder lapse rates may have some dependency on the external
interest rates. When the external interest rates rise, meaning that the rates competitive insurers can afford
to offer are increasing, it is generally expected that policyholders tend to lapse their current policies in order
to realize higher yields.
Policyholder behaviour risks are typically considered non-diversifiable and path-dependent. Hence,
deterministic modelling is generally not appropriate for policyholder behaviour that has a dynamic nature.
In addition, dynamic policyholder behaviour will potentially increase the severity of claims at the tail, which
makes it more important to incorporate the modelling and consideration of dynamic policyholder behaviour
in the risk adjustment calculation.
For example, below is an illustration chart of the dynamic lapse behaviour typically anticipated for variable
annuity living benefit guaranteed riders that are sold in the U.S. The x-axis represents the level of in-the-moneyness, which is the guaranteed benefit base over the variable annuity account value. As seen in the chart,
as the in-the-money level increases—which means the equity market perhaps has performed poorly and
the account value has dropped, resulting in more intrinsic value to the policyholder for the embedded policy
rider—lapse rates decline, and vice versa.
[Chart: Dynamic lapse rates for variable annuity living benefit riders — lapse rate (y-axis, 0% to 25%) plotted against in-the-moneyness (x-axis, 0% to 280%), with lapse rates declining as in-the-moneyness increases.]
In modelling the fulfilment cash flows, if policyholder behaviour has a dynamic nature or is expected to be
sensitive to externalities, it is necessary to simulate different paths of the variables that drive the dynamic
behaviour, such as equity returns or interest rates.
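A purely hypothetical functional form for such dynamic lapse behaviour is sketched below; the base lapse rate, sensitivity, floor, and cap are illustrative assumptions rather than calibrated values.

```r
# Hypothetical dynamic lapse function: lapse rates fall as the guarantee moves into the money
# itm = guaranteed benefit base / account value (in-the-moneyness, assumed > 0)
dynamic_lapse <- function(itm, base_lapse = 0.10, sensitivity = 1.5,
                          floor_factor = 0.25, cap_factor = 2.0) {
  base_lapse * pmin(cap_factor, pmax(floor_factor, itm^(-sensitivity)))
}

itm_grid <- seq(0.4, 2.8, by = 0.4)
round(dynamic_lapse(itm_grid), 3)   # lapse rates decline as in-the-moneyness rises
```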
4.5.3 Market risk modelling
When a product design is more complex and more flexibility is offered to policyholders through embedded
options, market risks (e.g., risk-free interest rates, foreign exchange, inflation, and credit spreads) are
widely encountered in fulfilling an insurance policy’s obligations, leading to the evolution of industry-standard
modelling approaches.
Some possible approaches that have been typically observed in the industry are summarised here. The
IAA stochastic modelling monograph provides further examples and details.
There are two general types of interest rate models: equilibrium and arbitrage-free. The equilibrium models
postulate a stochastic process and model the behaviours of the interest rate term structure over time.
However, they do not necessarily calibrate exactly to current market prices. In comparison, the arbitrage-free models include time-dependent parameters to fit current market prices, but they do not model the
dynamics of the term structure. In general, arbitrage-free models are used to generate risk-neutral
scenarios, which assume that all term premia are zero. Equilibrium models can be used for both risk-neutral
scenarios and real-world scenarios. Real-world scenarios are generally considered in stress testing.
Below, a few single-factor interest rate models that are typically utilized in the industry are introduced:

• A basic single-factor interest rate model with a drift rate of μ takes the form Δr_t = μΔt + σε_t. This
assumes that the change in the interest rate follows a normal distribution, which means negative
interest rates are possible. A slight modification to this formula is to have a time-varying drift, μ_t, which
becomes the Ho-Lee model that can be used to calibrate modelled bond prices to their current market prices.

• The Ho-Lee model does not allow for mean reversion. The Vasicek model, which follows
Δr_t = (μ − κ r_{t−1})Δt + σε_t, allows for mean reversion with a reversion speed of κ and an expected
long-term rate of μ/κ. Similarly, making μ time-dependent gives the Hull-White model, which can be used
for calibration to current market prices.

• The Cox-Ingersoll-Ross (CIR) model, Δr_t = (μ_t − κ_t r_{t−1})Δt + σ_t √(r_{t−1}) ε_t, is a square-root
diffusion model. The volatility σ_t is time-varying in the CIR model rather than a fixed volatility as used in
the above models, and the mean reversion speed κ_t is made time-varying too, both of which allow the model
to better fit modelled bond prices to the market prices. The CIR model addresses the negative interest rate
issue because the volatility is proportional to the square root of the previous value of the series, so it
shrinks towards zero as rates approach zero.

• A lognormal model that is also widely used is the Black-Karasinski model,
Δ ln r_t = (μ_t − κ_t ln r_{t−1})Δt + σ_t ε_t.
Single-factor interest rate models work well for simulating a single interest rate of a particular term, but they
have limitations in modelling different points on a yield curve. Two-factor models address this limitation by
modelling both the short- and long-term rates. For example, the interest rate generator published by the
AAA for the modelling of risk-based capital utilizes a two-factor interest rate model.
As to the modelling of equity returns or stock prices, one simple model is the Geometric Brownian Motion.
It assumes that the random shocks to the price do not depend on past information. In its discrete form, the
formula for the stock price is S_{t+1} = S_t + ΔS_t = S_t + S_t (μΔt + σε_t), where μ and σ represent the
mean return and volatility (for simplicity, assume these parameters remain constant over time here), and
ε_t represents a random variable that follows a standard normal distribution.
When generating multiple stock prices, it is necessary to account for correlations. Cholesky decomposition
is generally used to generate correlated financial scenarios. The first step is to decompose the correlation
matrix R of the correlated financial variables into its Cholesky factors, R = TT′, where T is a lower triangular
matrix with zeros in the upper right corner. The second step is to apply the matrix T to a vector of
uncorrelated variables ε to produce correlated variables η: η = Tε. For example, in a two-variable setting,
one would construct

η_1 = ε_1   and   η_2 = ρε_1 + √(1 − ρ²) ε_2

based on the Cholesky decomposition, where ρ represents the correlation between the two variables. This is
because the two-variable correlation matrix is decomposed as

R = [[1, ρ], [ρ, 1]] = TT′,  with  T = [[1, 0], [ρ, √(1 − ρ²)]].

Hence,

[η_1, η_2]′ = Tε = [[1, 0], [ρ, √(1 − ρ²)]] [ε_1, ε_2]′ = [ε_1, ρε_1 + √(1 − ρ²) ε_2]′.
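A minimal sketch combining the two pieces above: two equity index paths are simulated with the discrete geometric Brownian motion form, and their random shocks are correlated through the Cholesky factor of an illustrative correlation matrix. All parameter values are assumptions for illustration only.

```r
set.seed(42)
n_steps <- 10                     # annual time steps, so the time increment is 1
mu      <- c(0.06, 0.05)          # mean returns for the two indices (illustrative)
sigma   <- c(0.18, 0.12)          # volatilities (illustrative)
rho     <- 0.6                    # correlation between the two indices (illustrative)

R     <- matrix(c(1, rho, rho, 1), 2, 2)
T_low <- t(chol(R))               # lower triangular Cholesky factor, so that R = T T'

S <- matrix(0, nrow = n_steps + 1, ncol = 2)
S[1, ] <- c(100, 100)             # starting index levels (illustrative)

for (t in 1:n_steps) {
  eps <- rnorm(2)                 # uncorrelated standard normal shocks
  eta <- as.vector(T_low %*% eps) # correlated shocks: eta_2 = rho*eps_1 + sqrt(1-rho^2)*eps_2
  S[t + 1, ] <- S[t, ] + S[t, ] * (mu + sigma * eta)   # discrete GBM step with a unit time step
}

S[n_steps + 1, ]                  # simulated index levels at the end of the projection
```

For sub-annual time steps, the random shock would normally also be scaled by the square root of the step length.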
For further reading on the modelling of financial scenarios and market risks, a range of educational materials
is available; one example is Paul Sweeting’s Financial Enterprise Risk Management.
Section 4.6 Risk dependencies and aggregation techniques
Risk adjustment calculations require the aggregation of multiple probability distributions, for example, fulfilment
cash flow distributions from the different segments of business that are selected to be aggregated. Aggregation
of different distributions is also a common problem in risk management and economic capital modelling. In
some situations, insurance risk may be modelled at a line of business level. In such cases, there may be
some knowledge of the correlation among different lines but the joint distribution of different lines may be
unknown. This section deals with the problem of aggregating different marginal distributions with a known
correlation structure so as to create a desired joint distribution. Some practical considerations are also
discussed in section 6.3.
In mathematical terms, the problem can be described as follows: there are n random variables 𝑋𝑖 with
known distribution function (d.f.) 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛; and there also is knowledge about the correlation matrix
∑ = ((𝜌𝑖𝑗 )) , 𝑖, 𝑗 = 1,2, … , 𝑛. The problem to solve is finding an appropriate joint distribution function 𝐹 for the
vector (𝑋1, 𝑋2 , … , 𝑋𝑛 ). However, given a set of marginal distributions and correlations, there is not a unique
joint distribution, except in the case of a multivariate normal distribution. There is more to the dependence
structure than just the correlation matrix. This section discusses dependence among variables and ways of
implementing them.
Section 4.6.1 describes different measures of dependence or association among random variables and
their properties. It also defines tail dependence. Section 4.6.2 describes dependence structures as defined
by copulas. Dependence of two random variables is a property of their copulas. This section talks about
various copulas and their use. Section 4.6.3 describes the techniques to incorporate dependence in
simulated data in order to aggregate them. It also features an algorithm described by Iman and Conover,
and a simple implementation.
For the sake of simplicity, probability distributions are represented in the form of simulated values. Monte
Carlo simulations help to avoid the problem of having to find a closed form mathematical solution or using
numerical methods.
4.6.1 Measures of dependence
Two random variables X and Y are independent if for every a and b,
𝑃𝑟𝑜𝑏 (𝑋 ≤ 𝑎, 𝑌 ≤ 𝑏) = 𝑃𝑟𝑜𝑏 (𝑋 ≤ 𝑎) ∗ 𝑃𝑟𝑜𝑏 (𝑌 ≤ 𝑏)
There are a few measures of association or dependence (commonly known as correlation) between two
random variables X and Y.
The linear or Pearson correlation coefficient between variables X and Y is

ρ(X, Y) = Cov(X, Y) / √(σ²(X) σ²(Y))
where 𝐶𝑜𝑣 (𝑋, 𝑌) is the covariance between X and Y and is defined as 𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌)
This is the measure of linear dependence. The correlation coefficient is a natural measure of dependence
when X and Y come from bivariate normal distribution. For a multivariate normal distribution, correlation
coefficients completely define the dependence structure; however, that cannot be generalized to other
distributions.
Correlation and covariance are easy to manipulate under linear transformations. If Σ is the variance-covariance
matrix of a set of random variables X = (X_1, X_2, …, X_n), and A is a linear transformation, A: ℝⁿ → ℝᵐ,
then AX has variance-covariance matrix AΣA′.
In general, independent variables are uncorrelated, but the converse is not necessarily true; a zero linear
correlation does not mean that the variables are independent. For example, X and X² have a linear correlation
of 0 if X is normally distributed. Another example is a bivariate t-distribution with 0 correlation; the two
components are uncorrelated but not independent.
Another problem with linear correlation is that it is not invariant under non-linear monotonic transformations.
For example, (𝑙𝑜𝑔(𝑋), 𝑙𝑜𝑔(𝑌)) or (𝑒𝑥𝑝(𝑋), 𝑒𝑥𝑝(𝑌)) would have different correlation coefficient from (𝑋, 𝑌).
A second measure of dependence among variables is rank correlation. Spearman’s rank correlation
between two variables X and Y with marginal distributions F and G is defined as
ρ_s(X, Y) = 12 E[(F(X) − 0.5)(G(Y) − 0.5)]
Alternatively, it can be expressed as
𝜌𝑠 (𝑋, 𝑌) = 𝜌 (𝐹(𝑋), 𝐺(𝑌))
This is essentially a linear correlation of the probability-transformed random variables. Another property of
rank correlation is that the rank correlation of a sample is the linear correlation of the ranks of the sample.
A third measure of dependence among variables is Kendall’s tau; between two variables X and Y it is
defined as
𝜌𝜏 (𝑋, 𝑌) = 𝑃{(𝑋 − 𝑋̃)(𝑌 − 𝑌̃) > 0} − 𝑃{(𝑋 − 𝑋̃)(𝑌 − 𝑌̃) < 0}
where (𝑋̃, 𝑌̃ ) is an independent copy of (𝑋, 𝑌).
This is essentially a probability of concordance minus a probability of discordance. Two pairs of data points
(𝑥𝑖 , 𝑦𝑖 ) and (𝑥𝑗 , 𝑦𝑗 ) are concordant if (𝑥𝑖 − 𝑥𝑗 )(𝑦𝑖 − 𝑦𝑗 ) > 0, and discordant if (𝑥𝑖 − 𝑥𝑗 )(𝑦𝑖 − 𝑦𝑗 ) < 0.
Both ρ_s and ρ_τ are measures of monotonic dependence between X and Y. Both assign the value 1
for perfect positive dependence and the value of -1 in case of perfect negative dependence. Also, one of
the main advantages of rank correlation and Kendall’s tau is that they are invariant under monotonic
transformations. However, they do not permit variance-covariance manipulations as would be
possible with linear correlation assumptions.
For multivariate normal distributions, linear correlations can be converted into rank correlation or Kendall’s
tau (and vice versa) using the following relationship:
ρ_s = (6/π) sin⁻¹(ρ/2)

ρ_τ = (2/π) sin⁻¹(ρ)
The relationship with Kendall’s tau also holds true for other elliptical distributions.
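These conversions can be checked by simulation; the brief sketch below uses the mvtnorm package (referred to later in this chapter) with an illustrative linear correlation of 0.7.

```r
library(mvtnorm)

set.seed(7)
rho   <- 0.7
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)
z     <- rmvnorm(10000, mean = c(0, 0), sigma = Sigma)   # bivariate normal sample

# Empirical rank correlations versus the theoretical conversions above
c(spearman_empirical = cor(z[, 1], z[, 2], method = "spearman"),
  spearman_theory    = (6 / pi) * asin(rho / 2),
  kendall_empirical  = cor(z[, 1], z[, 2], method = "kendall"),
  kendall_theory     = (2 / pi) * asin(rho))
```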
Another measure of dependence is tail dependence, the concept of which is relevant for the study of
dependence between extreme values. The coefficient of upper tail dependence between two random
variables X and Y is defined as

λ_u = lim_{α→1⁻} P[ Y > VaR_α(Y) | X > VaR_α(X) ]
Intuitively, it expresses the probability of having a high (low) extreme value of Y given that a high (low)
extreme value of X has occurred. If this limit is 0, the variables are asymptotically independent in the upper
tail; otherwise they are asymptotically dependent. While linear or rank correlation measures average
correlation between two distributions, tail correlation only considers the dependence of the tails of the
distributions.
The coefficient of lower tail dependence is defined in a similar way.
A correlation matrix needs to be positive semi-definite. When estimating a correlation matrix using
judgmentally selected parameters, one needs to keep in mind that the resultant matrix might not be positive
semi-definite. But with the help of existing software packages, a positive semi-definite matrix can be
determined that is reasonably close to the estimated one.
4.6.2 Dependence structure and copula models
The joint distribution of a set of random variables contains all the information about their individual
(marginal) distributions and dependence structure. Dependence is a property of their copula. Copulas allow
one to deal with the dependence among random variables separately from their marginal distributions.
For a set of random variables (𝑋1, 𝑋2 , … , 𝑋𝑛 ) with marginal distribution functions 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛, consider
applying probability integral transformation (𝑈1 , 𝑈2 , … , 𝑈𝑛 ) = (𝐹1 (𝑋1 ), 𝐹2 (𝑋2 ), … , 𝐹𝑛 (𝑋𝑛 )). The copula function
C of (𝑋1, 𝑋2 , … , 𝑋𝑛 ) is essentially the joint distribution function of (𝑈1 , 𝑈2 , … , 𝑈𝑛 ). It contains all information
about the dependence between (𝑋1, 𝑋2 , … , 𝑋𝑛 ); and marginal distribution functions 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛 contain
all information about the individual distributions of 𝑋𝑖 , 𝑖 = 1,2, … , 𝑛.
An n-dimensional copula is defined as a multivariate distribution function, C, with uniformly distributed
marginals on [0, 1] and the following properties:
1. 𝐶: [0,1]𝑛 → [0,1]
2. 𝐶 is grounded and n-increasing
3. 𝐶 has marginals 𝐶𝑖 , which satisfy 𝐶𝑖 (𝑢) = 𝐶(1, … ,1, 𝑢, … ,1) = 𝑢 for all 𝑢 ∈ [0,1]
The following theorem, known as Sklar’s Theorem, provides the theoretical basis for the application of
copulas—essentially, separating dependence and marginal distributions.
Let 𝐹 be an n-dimensional cumulative distribution function of 𝑋𝑖 , 𝑖 = 1,2, … , 𝑛 with continuous marginals
𝐹𝑖 , 𝑖 = 1,2, … , 𝑛. Then there exists a unique copula C, on the cartesian product of the ranges
(𝑅𝑎𝑛(𝐹1 ) × … × 𝑅𝑎𝑛(𝐹𝑛 )), such that 𝐹(𝑥1 , 𝑥2 , … , 𝑥𝑛 ) = 𝐶(𝐹1 (𝑥1 ), 𝐹2 (𝑥2 ), … , 𝐹𝑛 (𝑥𝑛 )). Conversely, given a
copula C and marginals 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛, 𝐶(𝐹1 (𝑥1 ), 𝐹2 (𝑥2 ), … , 𝐹𝑛 (𝑥𝑛 )) defines an n-dimensional cumulative
distribution function.
A copula of independent random variables is of the form 𝐶(𝑢1 , 𝑢2 , … , 𝑢𝑛 ) = 𝑢1 . 𝑢2 … 𝑢𝑛 . Below are some
commonly used copulas.
Gaussian copula
This is the copula of the multivariate normal distribution. It is defined as:
𝐶Σ𝐺𝑎 = ΦΣ (Φ−1 (𝑢1 ), Φ −1 (𝑢2 ), … , Φ−1 (𝑢𝑛 ))
where ΦΣ is the standard multivariate normal distribution function with linear correlation matrix Σ; and Φ−1
is the inverse of standard univariate normal distribution function.
The Gaussian copula is the most popular and easy to implement, as no parameter other than the linear
correlation matrix needs to be estimated. Gaussian dependence is completely determined by that matrix.
Also, there is the option of estimating either the rank correlation coefficients or the linear correlation
coefficients. Under Gaussian copula assumptions, conversion between linear and rank correlation
coefficients is easy.
However, one major disadvantage of the Gaussian copula is that it can understate aggregate outcomes in
the tail. In fact, the theoretical tail correlation is 0 (in the limiting case) under the Gaussian copula, unless
the variables are perfectly correlated. So very low correlation is expected in the tails of, for example, 1-in-100
or 1-in-200 year events.
To simulate random variables from a Gaussian copula with linear correlation matrix Σ = ((𝜌𝑖𝑗 )), one can
use the following procedure:
1. Find the Cholesky decomposition 𝐶 of Σ, such that Σ = C′C
2. Simulate 𝑛 independent standard normal variables 𝑧 = (𝑧1, 𝑧2 , … , 𝑧𝑛 )
3. Set x = C′z
4. Set u_i = Φ(x_i), for i = 1, 2, …, n. The vector (u_1, u_2, …, u_n) is a random sample from the n-dimensional
Gaussian copula
In fact, given a set of marginal cumulative distribution functions 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛, setting 𝑥𝑖 = 𝐹𝑖 −1 (𝑢𝑖 ),
(𝑥1 , 𝑥2 , … , 𝑥𝑛 ) is a random sample from a multivariate distribution with Gaussian dependence with linear
correlation matrix Σ and marginals 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛.
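A brief sketch of the Gaussian copula procedure above, combined with two illustrative marginals (a lognormal and a gamma, with hypothetical parameters) to produce a correlated joint sample:

```r
set.seed(11)
rho   <- 0.5
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)   # illustrative linear correlation matrix

n <- 50000
z <- matrix(rnorm(2 * n), ncol = 2)        # independent standard normals
x <- z %*% chol(Sigma)                     # correlated standard normals with covariance Sigma
u <- pnorm(x)                              # Gaussian copula sample on [0,1]^2

# Apply inverse marginal distribution functions (illustrative marginals)
liab1 <- qlnorm(u[, 1], meanlog = 12, sdlog = 0.6)
liab2 <- qgamma(u[, 2], shape = 2, rate = 1 / 500000)

cor(liab1, liab2, method = "spearman")     # rank correlation close to that implied by rho
```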
t copula
Another common copula is a t copula. It is the copula of a multivariate t-Student distribution and is defined
as:

C^t_{ν,Σ}(u_1, …, u_n) = t_{ν,Σ}(t_ν⁻¹(u_1), t_ν⁻¹(u_2), …, t_ν⁻¹(u_n))

where t_{ν,Σ} is the multivariate t distribution function with ν degrees of freedom and correlation matrix Σ = ((ρ_ij));
and t_ν⁻¹ is the inverse of the univariate t distribution function with ν degrees of freedom.
The t copula is also used in practice and is easy to implement, provided the parameters can be estimated. Under
the t copula, linear correlation can easily be converted into Kendall’s rank correlation (but not into Spearman’s).
The main advantage of the t copula is that it allows one to model tail dependence; between X_i and
X_j, dependence increases as ρ_ij increases and/or ν decreases. In fact, even if ρ_ij is zero, there will be
asymptotic dependence in the tail.
The main problem with t copula is estimating the additional degrees of freedom parameter 𝜈.
To simulate random variables from a t copula with 𝜐 degrees of freedom and linear correlation matrix Σ =
((𝜌𝑖𝑗 )), one can use the following procedure:
1. Find the Cholesky decomposition 𝐶 of Σ, such that Σ = C′C
2. Simulate 𝑛 independent standard normal variables 𝑧 = (𝑧1, 𝑧2 , … , 𝑧𝑛 )
3. Simulate a random variable 𝑠 from 𝜒𝜐2 distribution, independent of 𝑧
4. Set y = C′z
5. Set x = √(υ/s) · y
6. Set u_i = t_υ(x_i), for i = 1, 2, …, n. The vector (u_1, u_2, …, u_n) is a random sample from the n-dimensional
t copula C^t_{υ,Σ}
Again, given a set of marginal cumulative distribution functions 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛, setting 𝑥𝑖 = 𝐹𝑖 −1 (𝑢𝑖 ),
(𝑥1 , 𝑥2 , … , 𝑥𝑛 ) is a random sample from a multivariate distribution with t dependence (with 𝜐 degrees of
freedom) with linear correlation matrix Σ and marginals 𝐹𝑖 , 𝑖 = 1,2, … , 𝑛.
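The t copula procedure can be sketched in the same way; ν = 3 degrees of freedom is an illustrative choice, and the marginals are the same hypothetical ones used in the Gaussian copula sketch above.

```r
set.seed(12)
rho   <- 0.5
nu    <- 3                                  # degrees of freedom (illustrative)
Sigma <- matrix(c(1, rho, rho, 1), 2, 2)

n <- 50000
z <- matrix(rnorm(2 * n), ncol = 2)
y <- z %*% chol(Sigma)                      # correlated standard normals
s <- rchisq(n, df = nu)                     # independent chi-squared draws
x <- y * sqrt(nu / s)                       # multivariate t sample (each row scaled)
u <- pt(x, df = nu)                         # t copula sample on [0,1]^2

# Same illustrative marginals as in the Gaussian copula sketch
liab1 <- qlnorm(u[, 1], meanlog = 12, sdlog = 0.6)
liab2 <- qgamma(u[, 2], shape = 2, rate = 1 / 500000)
```

With the same marginals and correlation parameter, this sample exhibits noticeably more joint extreme values than the Gaussian copula sample.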
Gumbel copula
A Gumbel or Logistic copula belongs to the Archimedean family of copulas. It is non-elliptical and is defined
as:
C^Gu_θ(u_1, …, u_n) = exp( −[ (−ln(u_1))^θ + ⋯ + (−ln(u_n))^θ ]^(1/θ) ),   θ ≥ 1
With this, degree of dependence is measured by 𝜃; 𝜃 = 1 gives independence, while 𝜃 → ∞ provides perfect
dependence.
A Gumbel copula allows one to model tail dependence. But the main problem remains with estimating the
dependence parameter 𝜃. In the non-elliptical world, the intuition about correlation breaks down. Given the
subjectivity and expert judgment involved in estimating the correlation coefficients, this will be a major
problem.
When aggregating risks, correlation is only a part of the story. One needs to choose the right copula as
well. Below is an example where two lognormal distributions were aggregated with 0 correlation under two
different copula approaches—one using a Gaussian copula and the second using a t copula with two
degrees of freedom. The two lognormal distributions are assumed to represent liabilities from two different
lines. It is clear from the correlated scatter plots below that a t copula produces significant tail correlation,
even when the correlation parameter is 0. So, liabilities aggregated using a t copula (with low degrees of
freedom) would be greater than liabilities aggregated using a Gaussian copula in extreme scenarios.
[Scatter plots of the simulated liability pairs from the two lines: “Gaussian Dependence, Corr 0” and “t Dependence with df 2, Corr 0”. Both axes show simulated liability amounts; the t copula plot exhibits noticeably more joint extreme outcomes than the Gaussian copula plot.]
4.6.3 Invoking dependence in simulated data
In the previous section, algorithms were provided to simulate samples from Gaussian and t copulas. Common
statistical packages provide routines for such simulations. For example, R provides the package “copula”,
which contains the classes “normalCopula” and “tCopula”; using these classes, one can simulate samples from
these copulas.
But the problem at hand is not to simulate from specified copulas, but rather how to combine multiple
marginal distributions using an appropriate dependence structure. These distributions could be arbitrary
and may not have a standard statistical form. For example, a frequency-severity model may be used for a
segment of business, so the distribution might be a series of simulated values from a negative binomial-Pareto model.
There are theoretical constraints related to linear correlations; given a set of marginals, a particular linear
correlation matrix might not be attainable. But this is rarely an issue in practice for insurance companies.
There is too much subjectivity in the estimation of the correlation parameters. A few percentage points’
difference between the desired and achieved correlation matrix would hardly be material.
Iman-Conover method
Iman and Conover put forth a simple method using rank correlation.
Given a sample of N values from a set of marginal distributions (𝐹1, 𝐹2 , … , 𝐹𝑛 ) and a linear correlation matrix
Σ, re-order the samples to have the same rank order as a reference distribution, of size N x n, with linear
correlation matrix Σ. Since linear correlation and rank correlation are typically close, the re-ordered output
will have approximately the desired correlation structure.
The algorithm is as follows:
1. Simulate N x n samples from a multivariate reference distribution (for example, Normal or t) such
that columns of the sample matrix, M, are uncorrelated, mean 0 and standard deviation 1. This can
be easily done in R using “rmvnorm” or “rmvt” functions (under package “mvtnorm”) with zero mean
and covariance matrix identity.
2. Compute the correlation matrix E = 𝑁 −1 M′M of M
3. Compute the Cholesky decomposition E = F′F of E
4. Compute the Cholesky decomposition of desired linear correlation matrix Σ = C′C
5. Compute T = M𝐹 −1 𝐶. This N x n matrix T has exactly the desired correlation matrix Σ
6. Re-order the columns of the input samples (with given marginals) to have the exact same rank
ordering as T.
Note that the re-ordered samples have exactly the same rank correlation matrix as T, and a linear correlation
matrix approximately equal to Σ. With large samples, this method typically produces a very close
approximation to the required correlation matrix.
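A compact sketch of the six-step algorithm above, using the mvtnorm package for the reference sample; the marginal samples and the target correlation matrix are illustrative assumptions.

```r
library(mvtnorm)

set.seed(99)
N <- 10000
# Illustrative pre-simulated marginal samples for three lines of business
marginals <- cbind(rlnorm(N, 12, 0.5),
                   rgamma(N, 2, rate = 1 / 4e5),
                   rlnorm(N, 11, 0.9))

Sigma <- matrix(c(1.0, 0.3, 0.1,
                  0.3, 1.0, 0.5,
                  0.1, 0.5, 1.0), 3, 3)     # desired linear correlation matrix (illustrative)

M  <- rmvnorm(N, sigma = diag(3))           # step 1: uncorrelated reference sample
E  <- cor(M)                                # step 2: its sample correlation matrix
Fc <- chol(E)                               # step 3: E = Fc' Fc
Cc <- chol(Sigma)                           # step 4: Sigma = Cc' Cc
T_mat <- M %*% solve(Fc) %*% Cc             # step 5: T now carries the desired correlation structure

# Step 6: re-order each marginal to follow the rank ordering of the corresponding column of T
reordered <- sapply(1:3, function(j)
  sort(marginals[, j])[rank(T_mat[, j], ties.method = "first")])

round(cor(reordered), 2)                    # approximately the target Sigma
```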
Conceptually, the Iman-Conover method is very similar to copula methods. The copula methods generate
the samples by inverting the distribution function of each marginal as a part of the simulation process,
whereas Iman-Conover works on a given set of marginal distributions.
For all practical purposes, the idea underlying it can be used to simplify the implementation even further.
Simplified implementation of dependence
Using Iman and Conover’s idea of re-ordering marginals based on a reference distribution, a desired
dependence structure can be introduced in the following simple steps:
1. Simulate N x n samples from a multivariate reference distribution with desired linear correlation
matrix Σ. Here are a few possible choices:
a. Simulate from multivariate normal or t distributions; or
b. Simulate from normal or t copula; or
c. Simulate using Iman-Conover’s Cholesky trick, with normal or t reference distributions
(compute the matrix T in the Iman-Conover algorithm above);
2. Compute the ranks of the columns of the sample matrix; and
3. Re-order the input marginal distributions using the ranks computed above.
This approach can be implemented in R using all three choices—1.a, 1.b, and 1.c—above to compare the
resultant joint distribution. Start with four pre-simulated (10,000 simulations each) marginal distributions
and a desired correlation matrix. Assume that they represent liabilities from four different lines. Aggregate
them under all three choices and compute a total liability distribution. A common seed can be used for each
of the three choices, and for repeating the process many times. This approach can produce the following
observations:
• There was not much of a difference in the tail of the resulting total liability, as long as a common
reference distribution (normal, or t with specified degrees of freedom) is used. So a Gaussian
dependence structure can be implemented simply by simulating from a multivariate normal distribution
and using the relative ranks to re-order the desired marginals. For all practical purposes, this method
will produce a joint distribution very close to one produced by a Gaussian copula or by Iman-Conover
with a normal reference distribution.
• Simulating ranks from a t distribution with smaller degrees of freedom produces thicker tails for the
joint distribution. If some business segments are known to be correlated in extreme scenarios
(presence of tail correlation), then a t reference distribution with small degrees of freedom may be
used. If there is no significant tail correlation expected, then a normal distribution could be used as
a reference.
Chapter 5 – Qualitative Assessments
and Other Factors to Consider
Abstract
This chapter discusses qualitative factors, not covered in previous chapters, to be considered in determining
risk adjustments in practice.
The five principles set out under IFRS X Insurance Contracts describe considerations that are qualitative
in nature, in addition to those related to the risk preference of the entity. Some characteristics suggested
by them might be incorporated into a selected quantitative method, or quantitative parameters, for
calculating the risk adjustment (for example, a longer duration results in a higher risk adjustment).
The qualitative considerations and assessment based on the five principles are necessary criteria to ensure
the reasonableness of the risk adjustment. For example, the quality of the data utilized for the risk
adjustment calculation relates to the credibility of past experience and the basis for assessing the risk
associated with emerging experience and uncertainty of the liability estimate. Consequently, the quality of
available data may indicate the risk of making erroneous assumptions or selecting incorrect parameters,
and therefore should be reflected in the risk adjustment estimate. The type of risks modelled, whether they
are low frequency and high severity, or high frequency and low severity, can also affect the risk adjustment
estimate. How much is known, or can be determined, about the interaction of components or underlying
risks, such as their dependency structure, may also affect the risk adjustment estimate. In addition, there
are other factors that would affect the entity’s assessment of the risk adjustment, such as the level of
aggregation chosen for the risk adjustment versus the allocation of the risk adjustment to the level at which
the CSMs are accounted for. Furthermore, there are considerations concerning the variation in the value of
risk adjustment estimates under different risk adjustment techniques, assumptions and parameters.
This chapter will discuss these qualitative considerations in some detail.
Section 5.1 Source of inputs and quality of data
It is important for insurers and users of their financial reports to understand how the data utilized for
insurance modelling are managed, and their quality. The appropriateness of the data for the purpose of the
model is also important, and completeness and accuracy are two other important aspects of the quality of
insurance data. Insurance products and claims management continue to evolve, and inevitably there will be
practical difficulties in obtaining data that are appropriate, complete, and accurate. It is thus important to
assess the quality of the data, including their limitations and alternative solutions, and to recognize this
assessment in the derivation of the risk adjustment.
Per the guidance under IFRS X Insurance Contracts as referenced in the abstract to this chapter, “the less
that is known about the current estimate and its trend, the higher the risk adjustment shall be.” And “to the
extent that emerging experience reduces uncertainty, risk adjustments will decrease and vice versa.” If the
data are incomplete or inaccurate, or data points are scarce, all of which increase the uncertainty about the
liability estimate, a higher risk adjustment would be needed.
In practice, for financial reporting purposes it is necessary to translate the qualitative assessment into a
quantitative expression. When the data quality is poor and there is no better alternative, one may choose a
higher confidence level in order to recognize the limitations of the available data. However, for a multi-line
company with varying levels of data availability, this approach may lead to varying confidence levels for
different lines.
For example, if the amount of yearly insurance claims follows a normal distribution, and the mean and
standard deviation of the distribution have been determined based on sufficiently credible historical data, a
risk adjustment at the 95% confidence level would correspond to 1.96 times the standard deviation.
However, if the historical data points are deemed insufficient to determine the true mean of the claims, the
insurer would want to recognize the variability around the estimate of the mean in quantifying the risk
adjustment. According to statistical theory, with a normal distribution, the distribution of the sample mean
follows:

μ̂ ~ N(μ, σ²/N)

where N is the sample size, μ is the true mean and σ is the true standard deviation of the claim distribution.
Hence, the standard estimation error of the mean estimate is σ/√N. Note that the sample standard deviation
σ̂ has an estimation error too, which is approximately σ/√(2N) as the number of observations increases. As
the sample size increases, the sample standard deviation converges faster to the true value than the sample
mean. Assuming that the insurer takes σ̂/√N as the estimation error of the sample mean, the insurer could
effectively create a risk adjustment estimate that is 1.96 times the sample standard deviation plus an addition
related to the estimation error, in order to maintain the confidence level of 95%.
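As a simple numerical sketch of this idea, one possible (assumed, not prescribed) way of allowing for the uncertainty in the estimated mean is to add its variance to the process variance before applying the 1.96 multiplier; the sample statistics below are illustrative.

```r
sigma_hat <- 100    # sample standard deviation of yearly claims (illustrative)
N         <- 25     # number of observed years (illustrative)

# Risk adjustment ignoring parameter uncertainty (1.96 x sample standard deviation)
ra_base <- 1.96 * sigma_hat

# One possible adjustment: add the variance of the estimated mean (sigma_hat^2 / N)
# to the process variance before applying the 1.96 multiplier
ra_adjusted <- 1.96 * sqrt(sigma_hat^2 + sigma_hat^2 / N)

c(ra_base = ra_base, ra_adjusted = ra_adjusted)   # 196 versus approximately 200
```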
Note that the above approach relates to only the sample size. Various approaches could be considered to
recognize the impact of estimation error on the risk adjustment. Even where data points are plentiful,
estimation errors could also arise from process risk, defects in the data, or model risk.
Section 5.2 Type of risks modelled
Risks that drive the insurance fulfilment cash flows arise from multiple sources of uncertainty. For example,
for property/casualty, and general insurance, some businesses may be short-tailed, in that losses are
usually known and paid shortly after the loss occurs. In this case, estimated liabilities are more likely to be
established following deterministic methods (for example, a loss ratio type of approach) given that losses
are reasonably predictable based on historical experience. In contrast, for long-tailed business, losses may
not be known for some time, and claims can take significant time to report and settle—this includes most
non-life lines, such as general liability and motor liability insurance. In light of the uncertainty and volatility
of long-tailed business, companies may consider more sophisticated approaches in establishing liabilities
and risk adjustment, such as making actuarial assumptions for frequency, severity, and loss emergence
patterns. Similarly, for life, annuity, and health insurance, risks also vary by line of business or issue year
cohort depending on the mix of such factors as issue ages and types of insurance products. Furthermore,
even within the same line of business—deferred annuity, for example—depending on the type of products
and guarantees offered (whether a guaranteed minimum death benefit, or guaranteed minimum
accumulation benefit rider, attached to a variable annuity product), the nature of the claims and risks would
be significantly different. Some may be more driven by mortality risk, such as the guaranteed minimum
death benefit rider, and some may be more driven by equity market risk, such as the guaranteed minimum
accumulation benefit.
Changes in the market addressed and underwriting standards also have an impact on the risk profile. When
making actuarial assumptions, it is important to consider the appropriate type or level of aggregation or
segregation of policies, and whether assumptions should be determined based on a blended approach with
credibility weighting, or policies before and after the changes of underwriting standards should be assessed
separately.
With greater product complexity and innovation, more insurance products incorporate features such
as embedded guarantees that relate to financial risk. For an embedded guarantee, when the benchmark
index level is lower than that guaranteed, there will be an impact on the fulfilment cash flow. Therefore, the
modelling of financial risks will need to consider the extent that the guarantee is triggered and affects the
fulfilment cash flows. In this context, financial risk refers to the risk of a possible future change in one or
more financial variables, such as a specified interest rate, financial instrument price, commodity price,
foreign exchange rate, index of prices or rates, credit rating, or credit index.
In addition, the type of risks sometimes determines a potential choice of statistical models and the metric
for risk adjustment. For example, if catastrophe risk is involved, CTE may be more appropriate to measure
the risk adjustment to capture the severity at the tail of the fulfilment cash flow distribution. As another
example, if multiple heavy-tailed risks are modelled, utilizing a single correlation matrix to model the risks
may not be sufficient to capture the interaction in the tail and to quantify an appropriate risk adjustment,
because the variable dependence may change significantly in the tail of the distribution.
The horizon of the risks may drive different types of uncertainties too, which would affect the modelling
decisions. Generally, a longer horizon comes with more potential for policyholder behaviour effects, greater
impact from interest rate movements, and more judgment in the modelling.
Section 5.3 Consideration of parameters and modelling capability
5.3.1 Correlation of variables
In previous chapters the desired correlation between variables is assumed to be known. But in practice, the
exact correlation or dependence structure of different business segments or different sources of risk is not
fully known. If there are enough historical data with joint observations, it may be possible to estimate copula
parameters. Statistical packages typically have inbuilt routines to estimate copula parameters from joint
observations. However, significant challenges exist and many questions need to be addressed in order to
estimate correlations or copula parameters reflecting dependence structure.
Evaluating correlation among various lines of business
The company’s own data, where available, from different lines of business or different products may be
used to evaluate whether there is significant correlation among lines and, if so, the degree of that correlation. Key
considerations while using the data include:
• Do the business lines under consideration have a natural hedge relationship, such as life insurance
and annuity lines? Are they exposed to the same or similar or opposite risks? Are the types of
markets, and demographics of the claimants, relevant in evaluating the correlation?
• Should the amount of losses be used? Or would loss ratio be a better option than the amount of
losses?
• Should the nominal values of ultimate losses be used? Or must historical losses be adjusted for
trends and/or exposure changes?
• When significantly higher losses exist for one line compared to others, would they skew the
calculation if linear correlation is assumed? Would relative ranking be a better option to estimate
the correlation? Or should a different correlation or dependence structure be modelled?
• How many years of historical data should be considered? While increasing the number of years
would increase the credibility of the estimates, data from older years might not be representative
of future correlation. It is also important to take into account the size of the correlation matrix, as
decided by the number of lines of business. For example, it is unreasonable to estimate a 10 x 10
correlation matrix with less than 10 years of data. Some of these problems could be avoided if
quarterly data are available.
• Similarly, if the data are too granular, perhaps it would be better to aggregate some of them to
capture the historical correlation or dependence structure implicitly, instead of explicitly estimating
the dependence structure.
• Are historical large and catastrophic losses included in the data? It is important to recognize the
severity at the tail when considering the correlation or dependence structure between different lines
or products.
• Should the tail correlation be estimated from the company’s own experience? Such experience might
not be sufficient to estimate the tail correlation among different lines. On the other hand, if the exposure
to low-frequency and high-severity losses between the lines is not significant, perhaps the
estimation of tail correlation is not warranted.
In addition, the company’s own experience can be supplemented with industry experience if relevant
industry data are available. This is especially useful for new companies or those writing new lines of
business.
The role of expert judgment in estimating correlation is certainly important. A company may have knowledge
about how different business segments are correlated relative to each other. For example, a motor portfolio
is believed to have a low correlation with a marine portfolio. However, the latter may have relatively high
correlation with an energy portfolio. In this case, the company could construct a correlation matrix with “low”,
“medium”, and “high” values, and use judgment to assign values—say 15%, 30%, and 60%—for “low”,
“medium”, and “high” values correspondingly. Furthermore, as the company accumulates more credible
data, or relevant industry data become available, the correlation matrix constructed in this way can be
tested against historical experience.
Evaluating correlation among different sources of risks
Two of the largest sources of risks for an insurance company are insurance risk (risk that insurance
operations would be unprofitable) and investment risk (risk that company’s investment portfolio would result
in a deficient return). These can be correlated over a specified period. For example, a bad quarter of
insurance operations with major losses may lead company management to invest funds in safer instruments
(or perhaps riskier investments to compensate for the prior losses) in subsequent quarters. Another
example would be where a large catastrophe or a major economic disruption occurs; these sources of risks
would have a significant tail correlation. If a copula model is used, the t copula with lower degrees of
freedom could be a better choice than the Gaussian copula model.
Key considerations while evaluating correlations among different sources of risk for risk adjustment include:
• For risk adjustment under IFRS X, the sources of risk should be those that could impact the
contractual fulfilment cash flows, rather than the broader sources underlying insurance risk and
investment risk.
• Should the data from other companies writing similar business be used, where available? Such data
would give more observations to evaluate the correlations. However, care needs to be taken when using
the resultant correlation matrix, as the risks faced by other companies could be significantly different.
• Use of quarterly data would increase the number of data points for the estimation. But correlations
measured over a quarter might be very different from those measured over a year.
• Economic scenario generators (ESGs), such as stochastic interest rate curves, can be used to
model market investment returns. Therefore, ESGs can be used to discount insurance contract
fulfilment cash flows (i.e., insurance liabilities) under alternative economic scenarios. In this case,
under each scenario the generated market returns used to model fulfilment cash flows are
correlated with the rates used to discount the fulfilment cash flows.
It is worthwhile to evaluate the reasonableness of the correlation matrix in terms of how the risk adjustment
is affected by stress, scenario, and sensitivity testing. It is also useful to develop an understanding of
how the aggregate risk level is driven by the correlation parameters. For example, the company might calculate
the risk adjustment under different scenarios by varying the correlation parameters and/or copula
assumptions.
5.3.2 Extreme events or tail events
Correlations utilized in a variance-covariance approach assume a linear relationship and do not necessarily
depict an accurate dependence structure between variables. It is likely that correlation can be different in
the tail of the distribution where extreme events occur, especially for heavy-tailed risks. In this case, a
different set of correlations for the tail can be used, or a copula model may provide more flexibility in
modelling variable dependence.
5.3.3 Model capability
Despite advances in computing power due to modern technology, the increased prevalence of complex
product features and optionalities, together with developments in stochastic modelling approaches, creates
rapidly increasing demands on model capability. When millions of insurance policies are modelled for the
purpose of financial reporting, it is necessary to properly recognize the trade-off between model complexity
and the added accuracy of results, and to consider approximation techniques.
For example, instead of generating stochastic runs, sensitivity tests could be performed by shocking key risk
parameters to understand the relationship between the risk parameters and the resulting estimate of the
fulfilment cash flows. For a traditional whole life insurance block, shocking the mortality and lapse
rates in different directions (e.g., +/-10%) could reveal the impact of those shocks on the resulting estimation
of the fulfilment cash flows. If probabilities can be associated with the sensitivity scenarios, a chosen
confidence level could be quantified through sensitivity tests rather than generating stochastic scenarios,
which is more computationally intensive.
Section 5.4 Level of aggregation
Under the IFRS X Insurance Contracts guidance for risk adjustments, the level of aggregation for the risk
adjustment is separate and apart from the level of aggregation for the CSM. For the risk adjustment, a
principle-based standard was set; hence extensive guidance was not provided for the level of aggregation
for the risk adjustment. The reason is that the objective of the risk adjustment is to reflect an insurer’s
compensation for the risk it bears in the fulfilment cash flows; therefore, the level of aggregation for
determining the risk adjustment is specific to the insurer’s view of the compensation.
The CSM is measured at issue to represent a current estimate of the net fulfilment cash flows less a risk
adjustment. The CSM is measured at a level of aggregation (“portfolio”) based on the contracts that provide
coverage for similar risks and are managed together as a single pool. Therefore, the computation of the
CSM at inception requires a risk adjustment appropriate for the level of aggregation used for the CSM.
Hence, if the adjustment is determined at a level higher than a portfolio, it will need to be allocated down to
the portfolio level for purposes of computing the CSM. The difference between the level of aggregation for
the risk adjustment and the CSM, if any, needs to be considered. The graphic below illustrates what takes
place at each level of aggregation.
[Diagram: levels of aggregation — the total population of the entity’s insurance contracts (the risk adjustment may be determined entity-wide or lower); within it, a portfolio, i.e., a subset of contracts providing coverage for similar risks and managed together as a single pool; and within that, subsets of contracts with similar inception and end dates, which may apply to reinsurance risk transfer assessment and other purposes.]
In addition, under the IFRS X Insurance Contracts guidance, the CSM is required to be unlocked at
subsequent valuation periods to reflect the changes in the estimates of future cash flows. The CSM also
needs to be adjusted to reflect the current estimate of the risk adjustment that relates to coverage and other
services for future periods, subject to the condition that the CSM should not be negative. Hence, if the risk
adjustment is determined at a level higher than “portfolio”, it will need to be allocated down to the portfolio
level in order for the insurer to appropriately remeasure the unlocked CSM.
Section 5.5 Variation of risk adjustments under different methods
As a principle-based standard, IFRS X Insurance Contracts specifies neither the methods for
determining the risk adjustment nor the level of aggregation to be used. The principles for the risk
adjustment are for the entity to select methods and appropriate levels of aggregation that reflect its
compensation for bearing the risks in the fulfilment cash flows at the reporting date. In addition, IFRS X
Insurance Contracts also requires disclosure of the confidence level to which the total risk adjustment
corresponds. This disclosure requirement is intended to provide users of financial statements with some means
of understanding the level of risk and uncertainty inherent in the risk adjustment amount for the insurer’s
insurance contracts. However, since each entity is free to select methods appropriate to its business, the
standard attempts, through this disclosure, to make the risk adjustments of different entities available on
the same basis. Nevertheless, such comparability will be difficult to achieve; for example, the underlying
distributions of risks faced by two entities may be quite different. Consequently, even the overall risk
adjustment confidence levels they each disclose may not provide a meaningful comparison, given the
differences in the underlying distribution of risk for each entity.
Furthermore, for multi-line insurers, depending on the risk profile and differing nature of business for each
line, it is likely that an entity may adopt multiple approaches for its various lines of business in calculating
the risk adjustment. For example, for the variable annuity line an entity may employ a stochastic approach
incorporating the modelling of dynamic policyholder behaviour and scenario generation. In contrast, for the
traditional life insurance line the entity may utilize a formulaic approach to arrive at a risk adjustment. For
disclosure purposes, it will be challenging for the entity to estimate the overall confidence level for its entire
insurance business to disclose for its total risk adjustment when the risk adjustments for the respective
component lines of business have disparate risks and are derived differently.
To illustrate these challenges, below is a simplified example for a multi-line company.
A company has two product lines, X and Y. The risk adjustment and liability calculation take place at the line of business level given the difference in the nature of business and risk profile. For simplicity, assume that the aggregate amount of risk for both lines can be expressed in terms of normal distributions, as shown in the table below with their respective mean and standard deviation values (mu and sigma). The risk adjustments at the 90% confidence level for lines X and Y are 2.56 and 5.13, respectively. Assuming zero correlation between the two lines, aggregating the confidence-level risk measures using the typical aggregation formula, expressed as the square root of the sum of the squares, gives an aggregated risk measure of 5.73 for the combined lines X and Y. Since the distributions of the two lines are known in this example, and the aggregate, being the sum of two independent normal distributions, is itself normally distributed, the true 90% confidence level can be derived for the two lines in aggregate. With a mean of 3 and a standard deviation of 4.47, where 4.47 is the square root of (2² + 4²), the 90% confidence level for the two lines in aggregate is calculated to be 5.73, which validates the earlier calculation. From risk management theory, risk measures such as confidence level (i.e., value at risk) or CTE can be aggregated in closed form if the underlying distributions are elliptical (see the discussion paper Measurement and Modeling of Dependencies in Economic Capital included in the bibliography). Elliptical distributions include the normal, Laplace, Student's t, Cauchy, and logistic distributions.
                          LOB X    LOB Y    Company (X+Y)
Mu                         1.00     2.00     3.00
Sigma                      2.00     4.00     4.47
90% Confidence Level       2.56     5.13     5.73
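The figures in the table can be reproduced with a short calculation. The following Python sketch is illustrative only; it assumes the same independent normal lines and uses the quantile in excess of the mean as the risk adjustment:

from math import sqrt
from statistics import NormalDist

z90 = NormalDist().inv_cdf(0.90)            # ~1.2816
ra_x = z90 * 2.0                            # line X, sigma = 2.00 -> 2.56
ra_y = z90 * 4.0                            # line Y, sigma = 4.00 -> 5.13
ra_sqrt = sqrt(ra_x**2 + ra_y**2)           # square root of the sum of the squares -> 5.73
sigma_total = sqrt(2.0**2 + 4.0**2)         # 4.47 for the sum of the two normal lines
ra_true = z90 * sigma_total                 # true 90% level in aggregate -> 5.73
print(round(ra_x, 2), round(ra_y, 2), round(ra_sqrt, 2), round(ra_true, 2))

Because both lines are normal and independent, the square-root-of-sum-of-squares aggregation and the true 90% level of the aggregate distribution coincide, as the table shows.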
For the purpose of financial reporting, suppose the company holds 2.56 CU as the risk adjustment for line
X, and 5.13 CU for line Y by using the 90% confidence level for each line. However, the company needs to
report a risk adjustment for lines X and Y combined. Knowing the mu and sigma for the combined lines X
and Y, the corresponding 90% confidence level is 5.73 CU for the company. The straight sum of the two
individual risk adjustments is 7.69 CU, which corresponds to about a 96% confidence level. Therefore, if a
company sets its risk adjustment at a selected confidence level for each line, then this company in
aggregate would be holding a total risk adjustment at a much higher confidence level, which may not be
what the company intended. In that situation, a diversification benefit may need to be reflected.
The difference between 7.69 (the sum of the 90% confidence level risk adjustments for lines X and Y) and 5.73 (the 90% confidence level in aggregate) is 1.96, which is the total diversification benefit that should be allocated down to the two lines. If the benefit is allocated in proportion to the stand-alone risk adjustments, line X gets 0.65 CU, which is 1.96 * (2.56/7.69), and line Y gets 1.31 CU. After deducting the diversification benefit, the adjusted risk adjustments for lines X and Y are 1.91 and 3.82 CU respectively, which sum to 5.73.
As illustrated in the table below, the adjusted risk adjustments for line X and Y correspond to only the 83%
confidence level at the line of business level.
                          LOB X    LOB Y    Company (X+Y)
Mu                         1.00     2.00     3.00
Sigma                      2.00     4.00     4.47
90% confidence level       2.56     5.13     5.73
Diversification benefit   -0.65    -1.31
Adj. risk adjustment       1.91     3.82     5.73
Confidence level            83%      83%      90%
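The allocation and the implied line-level confidence levels can be verified with a similar illustrative Python sketch, again assuming independent normal lines and a pro rata allocation of the diversification benefit:

from math import sqrt
from statistics import NormalDist

nd = NormalDist()
z90 = nd.inv_cdf(0.90)
sigmas = {"X": 2.0, "Y": 4.0}
ra = {line: z90 * s for line, s in sigmas.items()}          # stand-alone 90% risk adjustments
ra_total = z90 * sqrt(sum(s**2 for s in sigmas.values()))   # 5.73 for the company
benefit = sum(ra.values()) - ra_total                       # 7.69 - 5.73 = 1.96
for line, sigma in sigmas.items():
    adjusted = ra[line] - benefit * ra[line] / sum(ra.values())   # 1.91 and 3.82
    implied_cl = nd.cdf(adjusted / sigma)                         # ~83% for each line
    print(line, round(adjusted, 2), round(implied_cl, 2))

The pro rata allocation used here is only one possible choice; other allocation bases would produce different line-level figures while preserving the 5.73 total.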
There are practical challenges associated with the case above if the simplifying assumptions do not hold.
1. The example above utilized the risk adjustment calculated at the company level to determine the diversification benefit that needs to be allocated down to the line level. The risk adjustment for the company at a target confidence level of 90% was derived based on a closed-form aggregation approach. As mentioned above, the closed-form aggregation of risk measures only works for elliptical distributions. In reality, the true distributions are not known for the company or at the line of business level, nor is it known whether they are elliptical. A copula may be necessary to capture the extreme tail risk or any non-linear dependence structure between lines, or stochastic simulation may be needed if a closed-form solution is deemed unsuitable.
2. In the example above, both lines X and Y started with a 90% confidence level. We assumed that
this entity started with the risk adjustment calculation at the line level, targeting a 90% confidence
level, and also targeting a 90% confidence level for the entity in aggregate. This allowed us to
quantify the diversification benefit to reach the defined confidence level for the entity in aggregate.
However, in reality, due to differences in risk profile and other considerations, there may be varying confidence levels for different lines of business. Suppose a 95% confidence level is selected for line X and a 90% confidence level for line Y. What would be an appropriate confidence level for lines X and Y combined? Without choosing an appropriate confidence level for the total company, how would the total diversification benefit be determined and allocated to the line level? As mentioned earlier, if the company takes the straight sum of the risk adjustments for lines X and Y without adjusting for diversification benefits, the resulting estimate of the aggregate risk adjustment may be higher than the compensation the entity requires to bear the uncertainty under the IFRS measurement objective. In this example, the sum of the 95% confidence level risk adjustment for line X and the 90% confidence level risk adjustment for line Y would correspond to about the 97% confidence level for the company with the two lines combined.
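A minimal sketch of this last point, using the same independent normal assumptions as in the tables above, backs out the entity-level confidence level implied by simply adding a 95% risk adjustment for line X to a 90% risk adjustment for line Y:

from math import sqrt
from statistics import NormalDist

nd = NormalDist()
ra_x = nd.inv_cdf(0.95) * 2.0                # line X at 95% -> 3.29
ra_y = nd.inv_cdf(0.90) * 4.0                # line Y at 90% -> 5.13
sigma_total = sqrt(2.0**2 + 4.0**2)          # 4.47, zero correlation assumed
implied_cl = nd.cdf((ra_x + ra_y) / sigma_total)
print(round(implied_cl, 2))                  # ~0.97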
Section 5.6 Other considerations
IFRS X Insurance Contracts defines the risk adjustment for the effects of “uncertainty” about the timing and
the amount of future fulfilment cash flows. However, it does not distinguish between the concepts of risk
and uncertainty. In this section, risk and uncertainty are discussed based on other views that categorise
these terms in different ways, depending on what is being analysed and the application.
In his 1921 book Risk, Uncertainty, and Profit, economist Frank Knight distinguished between situations of risk, where the outcomes are unknown but are governed by probability distributions known at the outset (such as tossing a fair coin), and situations of uncertainty, where the outcomes, although likewise random in nature, are governed by an unknown probability distribution, model, or process:
The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement, while
at other times it is something distinctly not of this character; and there are far-reaching and crucial
differences in the bearings of the phenomenon depending on which of the two is really present and
operating… It will appear that a measurable uncertainty, or ‘risk’ proper . . . is so far different from
an unmeasurable one that it is not in effect an uncertainty at all.
Knight’s concept of uncertainty is often referred to as the “uncertainty” about the extent to which the
expected mean of the probability distribution is incorrect. Following Knight’s distinction, which has been
adopted by some practitioners in the financial world, uncertainty includes parameter mis-estimation and other "unmeasurable" aspects of a financial process, i.e., aspects that cannot be captured through formal processes of logic and model-building. In the context of the IFRS risk adjustment, Knight's distinction might limit the quantification of the risk adjustment primarily to the risk of mis-estimating the chosen assumptions and parameters of an otherwise known probability distribution, and to the variability of cash flows that could arise from the "unmeasurable" aspects of the process. Said another way, if an insured event is known to follow a certain statistical distribution, then even though the results for any given year could differ due to the statistical nature of the variable, there would be no need to establish a risk adjustment, because there is no "uncertainty".
monograph, the variability of cash flows that could arise from the statistical nature of variables (even if the
distribution is known) is also considered to be included in the considerations in estimating risk adjustments.
Therefore, not only is process risk reflected in the risk adjustment, but parameter risk and model risk are
also included.
Chapter 6 – Effect of Risk
Mitigation Techniques
Abstract
The purpose of this chapter is to consider the extent to which certain risk mitigation techniques relate to the
risk adjustment discussed in previous chapters, and how their impact is reflected in the risk adjustments.
Insurers can mitigate in different ways the cash flow risks associated with the insurance contracts they issue. This chapter focuses on two primary forms of risk mitigation to consider with respect to the risk adjustment of insurance contracts:
• Product design features or contract terms that reduce or share certain cash flow risks with policyholders (e.g., participating business), or mitigate investment risks that affect policyholder benefits; and
• Ceded reinsurance (also known as outgoing reinsurance or purchased reinsurance).
It is important to note that not all types of risk mitigation affect the reporting of risk adjustments because not
all risks are reflected in the risk adjustment. In particular, IFRS X Insurance Contracts provides guidance
regarding the risks to be excluded from the risk adjustment, per paragraph B69: “It [the risk adjustment]
shall not reflect risks that do not arise from the insurance contract, such as investment risk (except when
investment risk affects the amount of payments to policyholders), asset-liability mismatch risk or general
operational risk relating to future transactions.” Consequently, the mitigation of the risks not directly affecting
the cash flow risks would not be reflected in the risk adjustment. The risk-mitigating impact of ceded
reinsurance is reflected in the valuation of the ceded reinsurance assets via a separate risk adjustment,
rather than as a reduction in the risk adjustment.
Section 6.1 Product design
6.1.1 General description
Product design is an important aspect of risk management for insurers, because the design of their insurance products directly affects the fulfilment cash flows and the quantification of the risk adjustment. Generally, insurers
develop products to meet their profitability goals within certain risk appetite limits and considerations. They
will invest in developing new product approaches and streamline underwriting and sales processes to
remain competitive and profitable. An important aspect of product design is to evaluate their exposure to
covered events and manage risks that the insurance product covers. For example, policyholder behaviour
in exercising various policy options presents an important risk factor for life insurers. Careful product design
can limit the potential negative impact of the optionalities embedded in such products and reduce the
associated cash flow risks. Another example is to design products that pass asset and investment risk to
policyholders, through participating features that allow the insurer to exercise discretion in issuing policy
dividends, or through market value adjustment features that allow the insurer to adjust the surrender value
based on the current market conditions. In addition, requiring safety programs for commercial accounts and
promoting preventive health practices can not only reduce the amount of expected, and variability of,
fulfilment cash flows, but can also benefit the policyholders at the same time.
6.1.2 A real-life example
In the U.S., guaranteed living benefits for annuity products were introduced in the early 2000s, which played a part in the rapid increase in the sales of variable annuity products and the development of increasingly
sophisticated guarantee designs. The living benefit guarantees provide specified payments that
policyholders could receive either during the accumulation or withdrawal phase, regardless of expected
lifespan. They attracted significant customer interest because they protect the customer’s assets from
declines of equity markets. However, given that the introduction of these guarantees happened during a
rising market, the risks associated with the products were often not adequately considered by insurers.
During the 2008 economic crisis, the total market capitalization of the largest insurers in the U.S. decreased
by more than 50%. For the top variable annuity writers, available capital evaporated. The risk associated
with these guarantees, which is non-diversifiable, was highlighted during the subsequent economic
downturn. The decline in the equity market led to increased value for these guarantees and increased
liabilities and payoffs. It was observed that expected policyholder behaviour assumptions did not hold under the conditions experienced. For example, it had generally been assumed that lapse rates for variable annuity policies
would increase when a policy is out of the money, meaning that the policyholder tends to seek alternative
investments when the current account value is higher than the guaranteed value. Similarly, the lapse rates
would decrease when the guarantee is in the money (i.e., the current account value is less than the
guaranteed value). However, the economic downturn led to significantly increased levels of in-the-moneyness, and it was observed that lapse rates also significantly increased due to "run on the bank" behaviour.
In addition:
1. The nature of the risks in the fulfilment cash flows is linked to the equity market for the variable
annuity guarantees; and
2. There is generally a surrender charge period or a wait period of five to 10 years before the policy
can be surrendered without penalty or guarantees can be exercised.
Consequently, it is particularly challenging to validate the assumptions made concerning policyholder
behaviour (i.e., lapses and utilization of guarantees). For about a decade, most U.S. insurers selling variable annuity living benefit guarantees had to rely on pricing assumptions set before enough data points had emerged to allow those assumptions to be fully validated.
In the calculation of the risk adjustment, it is important to recognize the variability of cash flows that could
arise due to the various optionalities incorporated into the product design. As seen above, ever-evolving
product innovation can result in risks that were not originally anticipated or are challenging to calibrate and quantify. These risks should be considered in the development of the risk adjustment, and product design can be an important risk mitigation technique to remove or reduce certain risks from the
fulfilment cash flows. Typical techniques used in product development to mitigate risks include risk sharing
with the policyholders, imposing stricter limits on policyholder options—such as subsequent premium
deposits—and limiting how frequently policyholders can exercise certain options.
For the variable annuity products mentioned above, some U.S. insurers introduced products that limit
policyholders’ investments to “target volatility funds”, specialized mutual funds designed to rebalance
automatically among a pre-selected set of mutual funds to achieve certain target volatilities. The funds are
less volatile this way relative to the market. While still offering attractive guarantees, the basis point fees deducted from the account value were increased following the economic crisis. Effectively, some of the hedging
that used to be performed by the insurer outside of the product is now managed within the product, through
the rebalancing of the funds to achieve the target volatilities, with part of that cost passed onto the
policyholders. While the nature of the fulfilment cash flows associated with these guarantees remains
unchanged, the change in the product design does have an effect on the magnitude and variability of the
fulfilment cash flows and may lead to a smaller risk adjustment.
Another more apparent and intuitive example of mitigation is the participating feature of life insurance products. A policy that offers a participating feature should generate a smaller risk adjustment than a comparable non-participating life insurance policy.
Section 6.2 Reinsurance contracts – definition and classification
The definition of a reinsurance contract, under IFRS, is “an insurance contract issued by one insurer (the
reinsurer) to compensate another insurer (the cedant) for losses on one or more contracts issued by the
cedant.” In addition, the insurer has an obligation under the insurance contract to compensate a
policyholder if an insured event occurs, and the reinsurer has an obligation defined under the reinsurance
contract to compensate the insurer if an insured event happens. Thus, a reinsurance contract provides
uncertain cash inflows from the reinsurer to the cedant based on the insurer’s (cedant’s) cash outflows
arising from the relevant insurance contracts. Consequently, there is a need to reflect the risk mitigation
achieved by the ceded reinsurance contract.
The IFRS reporting requirements do not allow such mitigation to be used to influence the size of the risk
adjustment that needs to be held. Instead, the insurer estimates the value of the reinsurance asset including
a risk adjustment for ceded reinsurance that increases the value of the reinsurance asset to the extent that
the reinsurance reduces the net cash flow risks of the insurer.
Not all ceded reinsurance contracts are treated as reinsurance for the purposes of IFRS reporting and
hence not all reinsurance arrangements will influence the associated reinsurance asset values. As with
insurance contracts, each reinsurance contract must be properly classified into one of three categories:
• Insurance contract;
• Financial instruments or investment contracts; or
• Service contracts.
The classification of contracts applies to ceded reinsurance contracts as it does to insurance or assumed
reinsurance contracts. The ceded reinsurance contract is evaluated separately by the ceding company,
based on the impact to the company, regardless of the classification or treatment of the contract by the
reinsurer. The ceding company is responsible for the evaluation of each such contract33 and the evaluation
by the reinsurer of its assumed contracts may be the same or different.
For reinsurance between affiliated entities related by common ownership, including some captive
reinsurance transactions, the treatment depends on which entity is the reporting entity. For reporting on a
consolidated basis, it is generally the case that such related party reinsurance transactions are reversed
out or ignored for the risk adjustment. However, if the affiliated entities report separately under IFRS X, then
such reinsurance transactions would be treated in the same manner as a reinsurance transaction between
unrelated parties.
A reinsurance agreement related to one or more insurance contracts may not meet the IFRS X Insurance
Contracts requirements for the transfer of significant insurance risk. For example, a reinsurance contract
that only transfers financial or investment risk would be defined and accounted for as a financial instrument.
In this case, such a reinsurance contract would not be treated as a reinsurance asset.
For a reinsurance arrangement to be treated as a reinsurance asset by the ceding company, with a risk
adjustment included in the asset’s value, it must itself be classed as an insurance contract based on the
insurance risk criteria applied to the ceding company. For example, a ceded reinsurance contract that transfers investment risk as well as insurance risk of the ceding company would need to meet the criterion that the insurance risk transfer is sufficient. IFRS refers to this criterion as "significant insurance risk".
33 Refer to guidance in IFRS X Insurance Contracts regarding the treatment of multiple contracts between the same parties, since there may be conditions where such contracts should be treated as a single contract.
Significant insurance risk
The evaluation of significant insurance risk transferred by a ceded reinsurance contract is based on a
thorough understanding of the transaction in terms of the fulfilment cash flows being transferred to the
reinsurer.
First, the evaluation would be based on terms of the reinsurance contract and any contingent cash flow
commitments made by the reinsurer and ceding company.
Second, it is important to identify the fulfilment cash flows associated with the reinsurance contract even if
such cash flows are independent of the underlying insurance contracts. As with insurance contracts, the
risks considered for the risk adjustment for the reinsurance asset value (“reinsurance risk adjustment”)
would be limited to those contracts considered to be insurance contracts. For example, contracts with
payments based solely on a financial index would not be considered an insurance contract. Third, IFRS
requires that the transferred insurance risks be significant. IFRS guidance indicates that “insurance risk is
significant when an insured event causes the insurer to pay a significant additional benefit in any scenario.”
The additional benefit is an amount paid in excess of amounts payable when no insured event has
happened.
Typical reinsurance contracts would include proportional (quota share and coinsurance34) and non-proportional (excess of loss, aggregate excess of loss, and stop loss). These types of reinsurance would meet the significant insurance risk requirements under IFRS, unless such contracts include provisions that effectively eliminate all significant insurance risk transfer, such as might be the result of loss-sensitive risk-sharing features. For example, significant insurance risk requirements can be evaluated based on
considering the fulfilment cash flows that would result from the occurrence of a single event. There can be
significant insurance risks even when the likelihood of the event may be very low, or when the expected
amount of the reinsurance recoverable may be small.
The significant insurance risk requirements under IFRS do not require statistical, frequency, or severity loss
calculations to be performed. The insurance risk transfer requirements that might be applicable under
government regulations or non-IFRS insurance accounting (e.g., the U.S. Generally Accepted Accounting
Principles—or GAAP—FASB 113/ASC 944) may have different criteria and types of analyses to document
how reinsurance contracts meet those risk transfer regulations. The conclusion reached based on non-IFRS requirements does not apply to IFRS. Consequently, it is important to identify where such differences
may impact the risk adjustment for ceded reinsurance under IFRS versus the other non-IFRS financial
reporting requirements.
Section 6.3 Reinsurance contracts – detailed provisions
The estimation of the risk adjustments associated with the reinsurance asset will depend upon the precise
terms of the ceded reinsurance contracts. The following section discusses the considerations affecting the
risk adjustment based on the structures and features of typical reinsurance contracts.
34 This reference to coinsurance is as a form of reinsurance and includes typical variations, such as modified coinsurance, coinsurance with funds withheld, and coinsurance/modified coinsurance (co/modco). There are other uses of coinsurance for certain types of insurance contracts that include risk sharing on a percentage basis between the insurance company and the insured (policyholder), or an insurance contract with risk sharing among more than one insurance company where each takes a percentage share of the risk under the insurance contract.
6.3.1 General discussion
Many forms of reinsurance are structured where the reinsurer reimburses or indemnifies the ceding
company for claims paid under its contracts (insurance or assumed reinsurance). The typical terms of the
reinsurance contract will identify the reinsurer’s obligations to reimburse the ceding company for its losses
or benefits paid for claims from insurance contracts that it issues, as well as the ceding commissions,
expense reimbursements, and ceded premiums under the contract. The details of the contract terms,
provisions, and features will be needed to evaluate the following main elements for estimating the risk
adjustment for the reinsurance asset:
1. The expected present value of the cash flows between the ceding company and the reinsurer
related to fulfilling the reinsurance contract;
2. The risk in the cash flows between the ceding company and the reinsurer related to fulfilling the
reinsurance contract;
3. The risk in the net fulfilment cash flows of the ceding company after reflecting recoveries to the
ceding company per the specifications of the reinsurance contract;
4. The difference between the risk adjustment without reinsurance versus the risk adjustment
reflecting the reinsurance contract; and
5. The combined impact of the entity’s reinsurance contracts with respect to the diversification of risk
and the entity’s compensation for bearing risk with reinsurance versus without reinsurance.
The basics of the reinsurance risk adjustment can be explained by considering the difference in risk
adjustments as mentioned in the points above. However, there may be cases where the risk adjustment
can be estimated directly from the risk in the ceded cash flows. Applying the measurement objective for the
risk adjustment directly to ceded reinsurance cash flows may be more difficult. The ceding company should
estimate a risk adjustment that reflects the compensation that would make it indifferent between eliminating
the uncertain reinsurance recoveries (which are tied to its policyholder cash flows) and bearing the
uncertain cash flows without such reinsurance.
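A minimal sketch of the gross-minus-net view described above is shown below. It assumes a 40% quota share, a simulated gross loss distribution, and a CTE-based risk measure at an assumed 75% level; all of these choices are illustrative rather than prescribed:

import random
from statistics import mean

def cte(values, level=0.75):
    # conditional tail expectation: average of the worst (1 - level) share of outcomes
    tail = sorted(values)[int(level * len(values)):]
    return sum(tail) / len(tail)

random.seed(1)
gross = [100 * random.lognormvariate(0.0, 0.8) for _ in range(10_000)]  # gross fulfilment outflows
net = [g * (1 - 0.40) for g in gross]                                   # after a 40% quota share
ra_gross = cte(gross) - mean(gross)          # risk adjustment without reinsurance
ra_net = cte(net) - mean(net)                # risk adjustment with reinsurance
ra_reinsurance = ra_gross - ra_net           # reported with the reinsurance asset
print(round(ra_gross, 1), round(ra_net, 1), round(ra_reinsurance, 1))

In this simple proportional case the reinsurance risk adjustment is 40% of the gross risk adjustment; non-proportional structures, discussed later in this chapter, do not reduce to such a simple scaling.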
6.3.2 Typical reinsurance contract provisions
The list below identifies some of the typical reinsurance contract terms that can have a bearing on the risks
transferred under a reinsurance contract as well as the cash flows included in the reinsurance transaction.
1. Reinsurance premiums are paid by the ceding company to the reinsurer. This includes the
reinsurance premium paid on the effective date of the agreement and the quota share percentage
of the policyholder premiums, if applicable. Reinsurance premiums may also include retrospective
premium adjustments or other amounts that are defined under the contract that may, but not
necessarily, be based on policyholder premiums. The IFRS guidance requires that any premiums
that are determined directly from the ceded amount of losses or benefits would be considered a
reduction in the ceded loss/benefit amounts.
2. Ceding commissions may be allowed or paid by the reinsurer to the ceding company to reimburse
the ceding company for expenses associated with acquisition, policy issuance, policy maintenance,
administrative, and overhead expenses. Under IFRS, any ceding commissions or expense
reimbursements are deducted from the reinsurance premiums, such that only the net premium cash
flows are considered. Significant timing differences between the reinsurance premium cash flows
and the ceding commission cash flows would be considered in the present value computations.
3. Reinsurance expense allowances are reinsurance cash flows based on a percentage of premium, per policy, or per CU 1,000 of insurance. The allowances are established to reimburse the ceding company
for premium tax, acquisition, maintenance, and overhead expenses. Most reinsurance agreements
guarantee the reinsurance expense allowances payable to the ceding company. As with ceding
commissions, such expense allowance cash flows are deducted from the reinsurance premiums.
4. Reinsurance recovery cash flows include reimbursement for ceding company insurance losses or
insured policyholder benefits, and the costs to adjust the underlying losses or provide the
underlying benefits, as covered under the reinsurance contract. For example, a reinsurance
contract could provide recovery cash flows for the ceding company based on a quota share
percentage of death benefits, cash surrender, annuity, unit linked, endowment, or dividend (bonus)
payments.
5. Contingent cash flows are cash flows specified in the reinsurance contract that depend on the level
of losses or benefits ceded under the contract, such as a loss ratio corridor, or possibly an index of
some sort, such as an industry loss warranty. Other forms of such cash flows might include
retrospective premium adjustments, contingent commissions, profit commissions, sliding scale
commissions, and experience account payments. Under IFRS, such amounts are treated as
reductions in ceded premiums or reductions in losses/benefits ceded.
6.3.3 Reinsurance contracts with funds withheld and modified coinsurance
Reinsurance contracts with funds withheld and modified coinsurance contracts involve the ceding company
retaining a significant portion of the insurance and investment cash flows subject to the reinsurance for long
periods. In addition, typically investment returns from the retained amounts by the ceding company are
credited to the ultimate benefit of the reinsurer. Investment returns may vary based on actual investment
results (including capital gains or losses) or the investment rate can be a guaranteed rate specified in the
contract. The reinsurance contract will specify how the amounts will be determined and credited or debited
to the contract, based on the underlying policies or risks, such as policyholder premiums, expense
allowances, investment returns, investment credits, benefit claims, insured losses, and policyholder
dividends. Under these types of reinsurance transactions few or no actual cash flows occur between the
ceding company and the reinsurer unless and until the accumulated underlying cash flows are such that
the contract specifies a payment (cash flow) by the reinsurer or by the ceding company to the other party.
The estimate of the risk adjustment to be included as a reinsurance asset will likely be based on an analysis
of the impact of such reinsurance on the ceding company. The main characteristic of this type of reinsurance
contract is the mixture of investment risk and insurance risk that can directly impact the actual cash flows
needed to fulfil the contract.
For example, there may be a notional (or memorandum) account that accounts for the credits and debits
considered under the contract. This is a simple ledger of the items comprising the reinsurance account
balance related to a specific contract. It is reported periodically by the ceding company to the reinsurer.
Under the terms of the contract, actual cash flows may be required, based on the notional account balance,
and perhaps also based on a date or time period that is specified or contingent on specified events or
amounts. While the reinsurance risk adjustment is based on the risk in the reinsurance fulfilment cash flows,
such cash flows under this type of reinsurance contract are dependent on the mixture of the underlying
amounts that are contingent on insurance risk elements (timing and amount of premiums, benefits or
losses), combined with investment risk elements (timing, amount and rate of return).
In the case where the investment crediting rate is fixed, it may be useful to evaluate the risk transferred to
the reinsurer by estimating the notional account balance based on a stochastic risk model of the underlying
insurance cash flow timing and amounts for the full life of the contract. The reinsurance cash flows (ceding
company) would be modelled based on what the contract specifies regarding the amount and timing of the
cash flows between the reinsurer and the ceding company. The reinsurance risk adjustment would then be
based on applying an appropriate risk adjustment technique to the modelling of the ceding company’s net
cash flows with and without the reinsurance contract.
In the case where the investment crediting rate can vary, it may be useful to first evaluate the risk transferred
to the reinsurer by estimating the investment credit in the notional account balance based on a stochastic
risk model of rates of return on investment (or a stochastic risk model of the investment yield curve) for the
full life of the contract, modelled with the timing and amounts of insurance loss and benefit payments fixed
to the unbiased estimates of the timing and amount. Then the results of that model can be compared to a
full stochastic risk model of the investment returns and the timing and amounts of the insurance cash flows.
The risk adjustment, estimated using the full stochastic risk model, would then be reduced by the risk
adjustment estimated using the first stochastic investment return risk model.
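A minimal one-period sketch of this two-step comparison is given below; the distributions, the crediting mechanism, and the percentile-based risk measure are all assumptions chosen for illustration rather than a prescribed method:

import random
from statistics import mean

def percentile(values, p=0.90):
    return sorted(values)[int(p * len(values)) - 1]

random.seed(7)
n = 10_000
premium = 100.0
mean_loss = 81.5                                                 # unbiased estimate of the insured losses
returns = [random.gauss(0.03, 0.10) for _ in range(n)]           # stochastic crediting rate
losses = [random.lognormvariate(4.30, 0.45) for _ in range(n)]   # stochastic insured losses (mean ~81.5)

# notional account shortfall: losses less the premium accumulated at the credited return
full = [l - premium * (1 + r) for r, l in zip(returns, losses)]  # investment and insurance both stochastic
inv_only = [mean_loss - premium * (1 + r) for r in returns]      # losses fixed at the unbiased estimate

ra_full = percentile(full) - mean(full)
ra_inv_only = percentile(inv_only) - mean(inv_only)
ra_insurance_related = ra_full - ra_inv_only                     # isolates the insurance-driven risk
print(round(ra_insurance_related, 1))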
While the type of reinsurance described in this section is proportional reinsurance, the modifications and
funds-withheld features do impact the nature of the fulfilment cash flows under the reinsurance contract.
Therefore, the analysis of the reinsurance risk adjustments for such contracts is similar to reinsurance risk
adjustment methods used for non-proportional reinsurance.
Risk adjustment analysis may also include additional considerations, some of which are discussed in the
following paragraphs.
6.3.4 Considerations regarding reinsurance contract boundaries
Reinsurance contracts that provide the reinsurer with the option to raise premiums, adjust the cost of
insurance or expense charges, or allow the reinsurer to reduce benefit reimbursements will have a bearing
on reinsurance recoverable accounting. As with insurance contracts, the contract’s term is defined by the
contract boundaries, which include options such as those mentioned. The ability to change premiums,
adjust charges, or reduce benefits creates a contract boundary and therefore affects the period of risk
considered under the reinsurance contract. Only those fulfilment cash flows that can occur during the
reinsurance contract period, i.e., within the boundaries, are considered in estimating the reinsurance
recoverable and the reinsurance risk adjustment. The same requirements applied under IFRS for the
boundaries of insurance contracts apply to ceded reinsurance contracts. Consequently, the reinsurance
risk adjustment will be limited to the reinsurance cash flows within the boundaries of the reinsurance
contract, even if the underlying insurance risks extend beyond those contract boundaries.
6.3.5 Considerations regarding the level of aggregation
Another consideration is that the risks transferred under a reinsurance contract can include risks from many underlying insurance contracts. Consequently, the estimation of the reinsurance risk adjustment may
be quite different from the underlying risk adjustment estimations. For example, the level of aggregation
selected for the risk adjustment without considering ceded reinsurance may be for specific lines of business,
business groups, or legal entities. The level of aggregation for ceded reinsurance may be a single contract,
multiple years of similar contracts, or a combination of ceded reinsurance contracts applied to a type of loss
or line of business. The IFRS principle is to aggregate such contracts in a way that the entity believes
appropriately represents its compensation for bearing the risks. The actual levels of aggregation choices
that the entity selects also depend on the entity’s ability to measure the cash flow risks and apply a risk
adjustment technique that it believes is appropriate for them.
6.3.6 Considerations with non-proportional reinsurance
Non-proportional reinsurance typically involves the transfer of risk from events with low probability and high
severity, and with high levels of uncertainty in estimating the probabilities of such events occurring and in
estimating the severity distribution of such events. Consequently, risk adjustment techniques based on
confidence level and CTE may be difficult to apply in practice. For example, a 95th percentile confidence level might produce a zero risk adjustment35, because the non-proportional losses have less than a 5% probability of occurring. In the case of the CTE technique applied to the same extreme value probability distribution, the difference between the CTE value and the unbiased mean value is likely to be zero or very close to it. This indicates that a 95th percentile confidence level may not be appropriate to capture the risk.
35 When the 95th percentile confidence level is less than the unbiased mean value, such as in the case of an extreme value distribution, the risk adjustment is zero because the IFRS guidance does not permit a negative risk adjustment.
Similar difficulties are likely to occur when using such risk adjustment techniques for the risk in the fulfilment cash flows with reinsurance, i.e., net of reinsurance recoveries. This may cause the risk adjustment estimated with reinsurance to be about the same as the risk adjustment estimated without reinsurance, so these approaches would likely produce a zero or very small value for the reinsurance risk adjustment. Alternative approaches are needed, such as a cost-of-capital technique, which can better reflect the appropriate difference in risk capital with and without reinsurance, or transformed probability distributions, which adjust the tail probabilities to reflect the risk tolerance and risk preference with regard to the mitigation of the tail risks.
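A small numerical sketch of this difficulty is given below, using an assumed excess layer that attaches with only a 2% probability; the 6% cost-of-capital rate and the 99.5th percentile capital proxy are illustrative assumptions:

import random
from statistics import mean

random.seed(3)
n = 200_000
# ceded recovery from a low-frequency, high-severity layer: zero in 98% of scenarios
ceded = sorted(random.lognormvariate(3.0, 1.0) if random.random() < 0.02 else 0.0
               for _ in range(n))
p95 = ceded[int(0.95 * n) - 1]                         # 0.0: a 95% quantile cannot see the layer
capital = ceded[int(0.995 * n) - 1] - mean(ceded)      # capital proxy at the 99.5th percentile
ra_cost_of_capital = 0.06 * capital                    # single-period cost-of-capital measure
print(p95, round(ra_cost_of_capital, 2))

The percentile-based measure assigns no value to the ceded layer, whereas the cost-of-capital measure produces a positive amount because the capital requirement responds to the tail.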
6.3.7 Considerations of values to apply risk adjustment techniques
When applying one of the risk adjustment techniques to a selected level of aggregation of cash flow risks,
it may be appropriate to consider different values, e.g., the confidence level percentage, for calculations
without reinsurance versus with reinsurance. For example, the confidence level without reinsurance might
be selected at 98% because of the unknowns in estimating the probability of high severity risks. However,
when computing the risk adjustment with reinsurance, it is recognized that the impact of the unknown high-severity risk probabilities is mitigated by the ceded reinsurance and therefore the entity is comfortable
selecting a 95% confidence level with reinsurance. The shift to a lower confidence level, as well as the
impact on the cash flow risks with reinsurance, would be reflected in the estimate of the reinsurance risk
adjustment. Similarly, under a cost-of-capital technique, the amount of capital required with reinsurance
versus without reinsurance might be reflected by using a different formula or percentile for the capital
amount. Alternatively, the risk mitigation from the reinsurance may be reflected in a lower cost-of-capital
rate with reinsurance.
Section 6.4 Reinsurance ceded – key financial reporting principles
This section of the report will discuss the key financial reporting implications under IFRS affecting
reinsurance risk adjustments.
6.4.1 Reinsurance recoverable (ceded reinsurance asset)
The guidance under IFRS requires the separate reporting of a reinsurance recoverable asset for ceded
reinsurance, including a risk adjustment. Consequently, the risk mitigation impact of reinsurance is not
accounted for by a reduction in the risk adjustment liabilities. Rather, the value of the risk mitigation from
reinsurance is separately estimated and reported as a separate reinsurance asset.
The reinsurance recoverable and risk adjustment are first estimated under the assumption that all the
fulfilment cash flows payable by the reinsurer will be collected in a timely manner by the ceding company.
However, the IFRS guidance requires that the reinsurance recoverable asset values should be adjusted to
reflect non-performance risk. This is the risk that the reinsurance fulfilment cash flows become uncollectible.
The non-performance adjustment is reflected on the basis of objective evidence and is measured by a
reliable estimate of the expected reduction in the amount of reinsurance cash flows for current recoverables
and future amounts to be collected by the ceding company. The non-performance adjustment would also
reflect the effects of available collateral (more collateral would reduce the non-performance adjustment)
and the reduction in reinsurance collections due to disputes between the ceding company and its
reinsurer(s). This adjustment for the risk of non-performance by the issuer of the reinsurance contract would
also be applied to the reinsurance risk adjustment. However, the techniques used to estimate the
reinsurance risk adjustment would not reflect this non-performance risk. Rather, the adjustment for non-performance would be estimated on an expected (mean) present value basis and applied to both the
reinsurance recoverables and the reinsurance risk adjustment.
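A minimal sketch of such an expected-value non-performance adjustment, using purely assumed figures for the default probability, loss given default, and collateral offset, is shown below:

recoverable = 1000.0                # expected present value of reinsurance recoveries
reinsurance_ra = 80.0               # reinsurance risk adjustment before non-performance
default_prob = 0.02                 # assumed probability of reinsurer non-performance
loss_given_default = 0.60           # assumed uncollectible share on non-performance
collateral_offset = 0.50            # assumed share of the exposure secured by collateral

expected_shortfall_rate = default_prob * loss_given_default * (1 - collateral_offset)
recoverable_reported = recoverable * (1 - expected_shortfall_rate)
reinsurance_ra_reported = reinsurance_ra * (1 - expected_shortfall_rate)
print(round(recoverable_reported, 1), round(reinsurance_ra_reported, 1))

The same expected (mean) reduction factor is applied to both the recoverable and the reinsurance risk adjustment, consistent with the treatment described above.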
6.4.2 Balance sheet reporting
The insurance liabilities and the risk adjustment associated with the insurance liabilities are reported on the
balance sheet on a gross basis before the impact of reinsurance ceded is taken into account. Consequently,
the value of the risk mitigation that relates to the risk transferred via ceded reinsurance is reflected in the
balance sheet via a reinsurance recoverable asset and a reinsurance risk adjustment asset. The table
below provides the basic reporting framework of reinsurance and risk adjustments in the assets and
liabilities reported on the balance sheet.
Assets                                           Liabilities
Investment assets                                Insurance liabilities (mean present value, gross)
Reinsurance recoverable (mean present value)     Risk adjustment (gross)
Reinsurance risk adjustment36                    Contractual Service Margin
Other assets                                     Other liabilities
Section 6.5 Reinsurance contracts – consideration for different types of
reinsurance
The reinsurance contract identifies the risks transferred and the cash flows to be paid by each party. The
terms of insurance risk transfer, the reinsurance premiums, the amounts and timing of reinsurance
recoverable, the period of coverage, the business included or excluded, limits of coverage, risk sharing,
and the reinsurance collateral, if applicable, are described in detail in the reinsurance contracts. These are
essential components with respect to the obligations between the reinsurer and the ceding company.
The paragraphs below discuss the implications arising from risk mitigation associated with four different
types of reinsurance contracts. These include basic proportional reinsurance, financial reinsurance, stop loss reinsurance, and reinsurance of participating contracts.
6.5.1 Basic proportional reinsurance
Under a basic proportional reinsurance contract (e.g., quota share, coinsurance) the ceding company
reinsures a percentage of the risk associated with a reinsured policy. Proportional reinsurance is utilized
for many types of insurance business, both short- and long-duration insurance contracts. The primary
business reason for entering into a basic proportional reinsurance is to reduce the size or volume of an
entity’s insurance risks (e.g., mortality, health, accidents, longevity, and property/casualty risks) in return
for a proportion of the profit and loss from the business.
The terms of basic proportional reinsurance transfer a fixed percentage of the losses or benefits for an
identified insurance contract or group of insurance contracts issued by the ceding company in return for
that same percentage of the original premium for the underlying insurance contracts that the ceding
company receives. However, there are usually other provisions within the reinsurance contract, such as
ceding commissions, expense allowances, and reimbursement of loss expenses, as explained in section
6.3.2, which can have a financial impact for the ceding company and for the reinsurer that is not proportional.
36 The reinsurance risk adjustment is an asset that is an estimate of the difference between the risk adjustment on a gross basis (without reinsurance) and the risk adjustment estimated on a net basis (with reinsurance).
In addition, the impact of the transfer of risk usually differs between the ceding company and the reinsurer, particularly as reflected in the risk adjustment.
Consequently, the reinsurance recoverables reported by the ceding company for a reinsurance contract
will not be the same as the corresponding value of the obligations reported by the reinsurer.
6.5.2 Financial reinsurance
Financial reinsurance can contain contract provisions that may affect whether significant insurance risk
transfer exists, as well as the timing and amount of the reinsurance cash flows. In general, the economics
of a financial reinsurance contract primarily transfer timing risk to the reinsurer. This is indicated when the
ceding company is reimbursed for loss, but then repays the reinsurer from future profits earned under the
contract or possibly from future renewals of the reinsurance contract.
Those provisions may include:
• A provision for experience refunds payable to the ceding company as a form of profit sharing, which is based on a formula contained in the reinsurance contract that includes a computation of profits generated from the policies reinsured by the ceding company.
• A notional (or memorandum) account or "loss carry over" or "experience" account created to accumulate losses paid by the reinsurer in excess of the net premiums received after deduction of other reinsurer charges, with interest charges accrued on negative balances. Future profits earned under the contract, and possibly future renewals of the contract, are deducted by the reinsurer from the notional account balance to the extent that there are losses carried over in the account.
• A risk and profit charge paid by the ceding company to the reinsurer. The amount of the risk fee is generally defined in the contract and is based on a percentage of assets, liabilities, or capital base associated with the reinsured policies.
• For interest-sensitive products (e.g., universal and variable insurance contracts) and participating life insurance policies, reinsurance contract terms may include funds withheld or modified coinsurance. This type of reinsurance is explained in section 6.3.3 and might be considered a form of financial reinsurance.
Unless significant insurance risks are transferred to the reinsurer, the contract would be classified as a
financial instrument or service contract. For example, where the effect of the reinsurance contract provisions
is such that future losses under the contract will be reimbursed by the ceding company, this may result in
the establishment of a deposit liability on the books of that company. Reinsurance contract terms that
require the ceding company to reimburse the reinsurer for losses may impact the extent of risk mitigation
present in the agreement and the amount of the reinsurance receivable asset reported on the ceding
company’s balance sheet. These types of contracts are typically complex and require a thorough analysis
of the contingent cash flows under different scenarios. Such analysis can be used in the estimation of the
reinsurance risk adjustment for such contracts.
6.5.3 Reinsurance of participating contracts
Participating life insurance and annuity contracts share the investment and mortality experience of the
insured block of policies with the individual policyholder, by means of a participating dividend payment to
the policyholders. The formulas for such dividends in a participating insurance contract may include cost of
insurance charges, policyholder bonuses, endowment benefits, and credited interest rates. Examples of
participating life and annuity products include variable life and annuities, participating whole life insurance,
fixed and deferred variable annuity contracts, indexed universal life insurance, and fixed universal life
insurance contracts. For variable life insurance the policyholder has the choice to invest the contract’s cash
value in different investment funds.
Participating life insurance and annuity contracts provide specific minimum death and annuity benefit
guarantees and include cost of insurance and expense charges levied against the policyholder’s fund value.
Variable contracts contain death and income benefit guarantees. Risk mitigation strategies for the insurer
may include the use of investment hedging activities by the life insurance company, or through reinsurance
contracts with an affiliated or captive reinsurance company or a third-party reinsurer. There is risk to the
ceding company from stock market volatility when the market value of the separate policyholder account
assets is less than the actuarial value of the income guarantees provided under the variable annuity
contracts.
Reinsurance of participating policies can involve the reimbursement of policyholder dividends as part of the
reinsurance cash flows. The reimbursement of the policyholder dividends may affect the estimation of the
reinsurance risk adjustment.
Interest credited to the policyholder’s fixed and variable universal life insurance contract has a direct impact
to the cash value, death benefits, and other policyholder benefits. The reinsurance of such insurance
contracts may include provisions that the reinsurer will assume some risks associated with the interest
crediting and the resulting cash value and benefits to the policyholder.
Variable annuity contracts provide death and income benefit guarantees based on separate policyholder
account investment results. Where capital markets reinsurance contracts are used to provide benefits that
are determined by a specific formula linked to the performance of capital markets, such contracts may not meet the requirements for significant insurance risk transfer. In that case, there would be no reinsurance risk adjustment, and such capital market reinsurance transactions would be treated as financial instruments or service contracts under IFRS.
The reinsurance of participating insurance contracts can be complex and requires a thorough analysis of
the contingent cash flows under different scenarios. Where such analysis results in meeting the
requirements for sufficient insurance risk transfer, the scenarios or probability analyses can be used in the
estimation of the reinsurance risk adjustment for such contracts.
6.5.4 Stop loss reinsurance
Stop loss reinsurance is a specific type of non-proportional reinsurance. Reinsurance payments to the
ceding company are based on aggregate claim amounts, usually aggregated over some period of time,
such as one to three years. Stop loss reinsurance agreements provide protection against large
accumulations of losses. Their contract terms usually include a definition of reinsured events, attachment
point (or retention), and maximum aggregate limit. Stop loss reinsurance is commonly used by life insurance
companies to protect against catastrophic losses resulting from a large number of death claims.
The frequency of events causing the aggregate claims can be extremely low and the expected value of
ceded losses arising from life catastrophe reinsurance is typically quite small. Stop loss reinsurance
contracts will seldom have issues with meeting the requirements regarding sufficient transfer of significant
insurance risk.
Estimating the reinsurance risk adjustment for stop loss reinsurance exemplifies the challenges discussed in section 6.3.6 for non-proportional reinsurance. That is, the reinsurance fulfilment cash flows are linked to events with low probability and high severity, and with high levels of uncertainty in estimating the very remote probabilities. Consequently, a confidence level or CTE technique would produce about the same level of risk adjustment with reinsurance as without reinsurance, indicating an unrealistic zero value for the reinsurance risk adjustment. Section 6.3.6 discusses alternative approaches for estimating risk adjustments in such cases.
Section 6.6 Conclusion
To account for the risk mitigation strategies reflected in an insurer’s financial statements, the strategies
used need to be thoroughly identified and understood. This chapter identifies two common forms of risk
mitigation techniques: reinsurance and product design. In both cases, as for risk management strategies
generally, the expected effect of the risk mitigation strategy should be reflected in the computation of the
risk adjustment. For reinsurance, from a ceding company’s perspective, this reflection is achieved through
the reporting of a risk adjustment before considering reinsurance, and the reporting of the ceded portion of
the business with an offsetting risk adjustment asset that recognises the value of the risk mitigation.
Chapter 7 – Validation of Risk
Adjustments
Abstract
Once the risk adjustment is estimated, it is necessary, as with other financial reporting values, to validate
the calculated results.
This chapter starts with an introduction of the general validation framework for the risk adjustment, which
includes validation of data, assumptions, process, model and results.
Further details regarding the validation of the risk adjustment estimates before and after aggregation are
also discussed.
Section 7.1 General validation framework
The risk adjustment calculation is the result of a complex process including multiple business decisions and
requiring several levels of data transformation. As a consequence, the final results will depend strongly on
the robustness of the framework defined by, and the processes followed by, the company. To ensure reliable
calculation of the risk adjustment, the following considerations may be helpful.
7.1.1 Validation of data
Data encompass both raw and transformed data. Raw data refers to the data coming from the source
systems that can be either internal or external data. In addition, data after mapping or format changes are
also considered raw. Examples of raw data are policyholder, contract, individual claims, or Bloomberg
market data. Transformed data are intermediary outputs. Examples include smoothed yield curves,
smoothed lapses, claims ratios, or liability model points.
Relevant data validation is designed to ensure the accuracy, completeness, and appropriateness of both
raw and transformed data.
7.1.2 Validation of assumptions
Assumptions are parameters that are determined based on internal or external data, and often involve the
use of expert judgment. In the case of actuarial calculations, an assumption might be, for example, the
future lapse rates that will be used by the model projection. An actuarial model requires input of data and
assumptions in order to produce outputs.
[Diagram: data and assumptions are inputs to the model, which produces the outputs.]
Validating the quality of assumptions requires a control framework. Aspects of such a framework include: assumptions set in a realistic manner and derived consistently from year to year; assumptions credible for the purpose used; and an appropriate governance structure that includes peer review, sign-offs, and elimination of conflicts of interest, in order to maintain the completeness and accuracy of the utilized data and the integrity of the procedures.
Those controls are typically integrated in the calculation process due to the significant misstatement risk of
using incorrect, inaccurate, or unapproved assumptions.
7.1.3 Validation of process
Process contains the activity steps applied at each level of risk adjustment calculation. Those steps can be
clearly described by the entity both at a high level and from a detailed perspective, illustrating the activity
steps applied and business divisions involved.
The validation of the process can ensure that the underlying framework is functioning as expected, this
framework is generating complete and coherent results, and effective controls are being followed. A non-exhaustive list of the requirements regarding process validation would be as follows:

Documentation—The overall and detailed procedures are fully documented (including sources of
the inputs and runs of the models). Dictionaries of the elements of the process (such as data,
assumptions, and variables) allow external readers to understand the process.

Management—Process management is established: one approach is to designate an owner for each
process involved who is responsible for its development, documentation, improvement, and
controls.

Company policy—The company defines a set of organisation-wide requirements regarding the
expectations from process management.

Controls—The process includes specific quality controls for levels of identified risk, including IT
security and possibly conflicts of interest.

Audit trail—The process is auditable by a regulator or internal or external auditors.
7.1.4 Validation of computation model
The validation of the model includes validation of the underlying mechanics (i.e., whether it truthfully
represents the contractual features of the insurance products) and assumption implementations (i.e.,
whether it appropriately incorporates the intended assumptions).
Some of the questions that could be asked are:

Are the expected values from the stochastic model materially different from the central estimates?
In some cases, a simpler model might be used to determine the risk adjustment than is used for
the central estimate. Any simplifications, especially where they involve judgement, would be
examined carefully to see if they are likely to materially affect the risk adjustment.

Is there systematic variation in the historical time periods that is not taken into account? Uncertainty
affecting the future trends can be a major source of risk.

Are the cash flows projected from the model consistent with the stated assumptions?

Can a stream of cash flows be validated following an audit trail?

Can the change of results from period to period be reasonably attributed to changes in
methodologies, assumptions, and input data?
7.1.5 Validation of results
Even when the data, the assumptions, the parameters, and the computation models have already been
validated, it is still necessary to validate the results. This validation is performed on each result produced,
including intermediary results not reported, since such results are eventually used as an element in the
computation of another reported amount.
Techniques for validating results will depend on the actual method of computation of the risk adjustment,
and are presented separately in the following sections.
Section 7.2 Validation of risk adjustments before aggregation
7.2.1 List of main elements to validate
When a confidence level, CTE, or cost-of-capital method is used, the following elements are subject to
validation:

The form of the statistical distributions being used;

The parameters of the statistical distributions chosen;

The sufficiency of the number of simulations, if simulation is involved;

The quantile of the statistical distribution at the appropriate level for the computation of the risk
adjustment; and

The basis for reflection of the risk preference of the entity with respect to the compensation for
bearing risks.
In addition, when a cost-of-capital method is being used, based on stress scenarios, it may also be
necessary to validate:

The impact of the stress scenarios on the remaining fulfilment cash flows—this may require a stress
model of the future cash flows, with testing of each set of assumptions being used;

The accurate application of the criteria selected for the determination of the capital amounts that
the entity will hold relative to the remaining risk in the fulfilment cash flows;

The projection of the capital allocated to the risks aggregated for the purpose of determining the
risk adjustment; and

The rate of the cost of capital being used to reflect the compensation that the entity requires to be
indifferent between the uncertain future cash flows and the expected value of the future cash flows.
7.2.2 Validation of the statistical distribution for each risk
For each risk a statistical distribution has to be chosen (e.g., lognormal), so it is necessary to validate the
appropriateness of this statistical distribution against the considered risk.
One validation technique is to use stochastic reserving methodologies such as bootstrap GLM or Bayesian
techniques to assess an empirical distribution of the liabilities, then to compare its result with the statistical
distribution being assumed, to determine whether the statistical distribution chosen is a reasonable fit for
the considered risk.
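As an illustration only, the following sketch compares an empirical distribution of liabilities (standing in for the output of a bootstrap or Bayesian reserving exercise) with an assumed lognormal distribution; the data, the chosen distribution, and the diagnostics are hypothetical assumptions rather than prescribed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

# Hypothetical empirical liability distribution, e.g., produced by a bootstrap
# of a claims triangle (here simulated for illustration only).
empirical_liabilities = rng.gamma(shape=50.0, scale=2_000.0, size=10_000)

# Fit the assumed statistical distribution (lognormal) to the empirical sample.
shape, loc, scale = stats.lognorm.fit(empirical_liabilities, floc=0.0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

# Kolmogorov-Smirnov test of the fitted lognormal against the empirical sample.
# Note: fitting and testing on the same sample makes the p-value optimistic.
ks_stat, p_value = stats.kstest(empirical_liabilities, fitted.cdf)
print(f"KS statistic: {ks_stat:.4f}, p-value: {p_value:.4f}")

# Compare selected quantiles, which drive confidence-level risk adjustments.
for q in (0.75, 0.90, 0.995):
    emp_q = np.quantile(empirical_liabilities, q)
    fit_q = fitted.ppf(q)
    print(f"quantile {q:.3f}: empirical {emp_q:,.0f} vs fitted {fit_q:,.0f}")
```

A poor fit would typically show up as a large KS statistic, a small p-value, or material differences in the quantiles relevant to the risk adjustment.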
In particular, the focus of the validation is on the adequacy of the chosen statistical distribution to represent
the cash flow risks. However, it is also important to validate other risk modelling considerations, such as
the assessment of the model risk, parameter estimation risk, and correlation assumptions, as previously
discussed.
In determining the risk adjustment amount using a simulation approach, a sufficient number of simulations
is needed to accurately represent the cash flow risks. For example, the number of simulations might be set
by running the model several times, with a different seed for the random number generator, and with the
same number of simulations. For each run of the simulation model, the actuary would compare the moments
of the output from the model run. When the simulated moments for several model runs are stable, this can
be an indication that the number of simulations is reasonable. However, if the nature of the simulated risks
involves extreme values with low probabilities, the simulation approach might need to be tested to ensure
that the extreme event risk is appropriately included in the risk modelling.
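A minimal sketch of such a convergence check, assuming a purely illustrative lognormal cash-flow model, runs the same number of simulations under several random seeds and compares the stability of the resulting moments and a tail quantile; all parameters below are hypothetical.

```python
import numpy as np

def simulate_liability(n_sims: int, seed: int) -> np.ndarray:
    """Illustrative stochastic liability model: lognormal outcomes (placeholder)."""
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=10.0, sigma=0.25, size=n_sims)

n_sims = 50_000
results = []
for seed in (1, 2, 3, 4, 5):
    sims = simulate_liability(n_sims, seed)
    results.append((sims.mean(), sims.std(), np.quantile(sims, 0.995)))

means, stds, q995s = zip(*results)
# If the spread across seeds is small relative to the level of each measure,
# the number of simulations may be regarded as sufficient for that measure.
print(f"mean    : spread {max(means) - min(means):,.1f} on level {np.mean(means):,.1f}")
print(f"std dev : spread {max(stds) - min(stds):,.1f} on level {np.mean(stds):,.1f}")
print(f"99.5%   : spread {max(q995s) - min(q995s):,.1f} on level {np.mean(q995s):,.1f}")
```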
7.2.3 Validation of capital amount for stand-alone risks
In the case that a cost-of-capital method is used, capital amounts may be determined based on stress
testing using selected stress assumptions. Once the stress assumptions for each stand-alone risk have
been determined, it is necessary to assess their combined impact in terms of corresponding amount of
capital needed to avoid severe financial distress as a result of the stress assumptions. For short-term risks
(e.g., motor insurance) this can be straightforward as the impact may be measured by the increase of the
value of claims. For long-term risks (e.g., life insurance) it may be necessary to use a financial model that
also simulates the uncertain future cash flows for in-force policies.
Examples of some common validation techniques include:

For each risk handled through deterministic estimation methods (as opposed to stochastic
methods), proof that the deterministic method gives a good approximation of the mean value;

For each risk handled through stochastic methods, demonstration that the number of simulations
being used produces a good approximation of the mean value (“convergence tests”); and

When input data are grouped, testing that the computation with grouped data gives approximately the same result as a computation with ungrouped data (a minimal sketch of such a check follows at the end of this subsection).
In the calculations using stressed assumptions, particular effort is usually applied to check that the modelled
cash flows meet the definition of cash flows associated with the fulfilment of the insurance contracts.
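As an illustration only, the grouped-versus-ungrouped check mentioned in the list above could be sketched as follows; the policy data, banding choice, and tolerance are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical individual policies: sum assured and mortality rate per policy.
sum_assured = rng.uniform(20_000, 80_000, size=5_000)
mortality_q = rng.uniform(0.001, 0.004, size=5_000)

# Ungrouped computation: expected one-year death benefit, policy by policy.
ungrouped = float(np.sum(sum_assured * mortality_q))

# Grouped computation: bucket policies into model points by mortality band
# and apply the band-average assumption to the banded totals.
bands = np.digitize(mortality_q, np.linspace(0.001, 0.004, 11))
grouped = 0.0
for b in np.unique(bands):
    mask = bands == b
    grouped += sum_assured[mask].sum() * mortality_q[mask].mean()

rel_diff = abs(grouped - ungrouped) / ungrouped
print(f"ungrouped: {ungrouped:,.0f}  grouped: {grouped:,.0f}  relative difference: {rel_diff:.4%}")
```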
7.2.4 Validation of aggregate capital amount
When using a cost-of-capital approach and the level of the capital for each stand-alone risk has been
determined, they must be aggregated to determine the overall amount of required capital. That amount
should also reflect the appropriate diversification benefits. For example, the following tests may be
appropriate:

Checking the robustness of the aggregation methods by aggregating each risk step by step (as
opposed to aggregating all risks within the same step of the process) and measuring the
diversification benefits after each step;

Checking that the groupings of risks for the purpose of aggregation produce results that reflect the
anticipated benefit of diversification;

Applying a reverse scenario: once the level of capital after diversification is determined, checking
which level of stress (on each assumption) corresponds to this level of capital, and assessing
whether this level of stress makes sense compared to the level of standalone stress; and

Checking if the aggregation method being used has been verified with respect to the contracts/risks
being considered. For instance, if a correlation matrix is used, checking whether the risks
associated with the contracts being evaluated have statistically correlated properties that are
appropriately represented by the matrix (e.g., the same level of dependence in good or bad
scenarios).
Similar tests may be used to validate the diversification among different lines of business when determining the diversified level of risk adjustment; a simple numerical sketch of the step-by-step aggregation check is shown below.
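The following minimal sketch illustrates the step-by-step aggregation check referred to above, using a square-root correlation aggregation formula; the stand-alone capital amounts and the correlation matrix are hypothetical assumptions, not prescribed values.

```python
import numpy as np

# Hypothetical stand-alone capital amounts per risk (in CU thousands).
risks = ["mortality", "lapse", "expense", "catastrophe"]
capital = np.array([120.0, 80.0, 40.0, 60.0])

# Hypothetical correlation matrix between the risks.
corr = np.array([
    [1.00, 0.25, 0.25, 0.00],
    [0.25, 1.00, 0.50, 0.00],
    [0.25, 0.50, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

def aggregate(c: np.ndarray, r: np.ndarray) -> float:
    """Square-root-of-correlation aggregation: sqrt(c' R c)."""
    return float(np.sqrt(c @ r @ c))

# Aggregate all risks in one step and measure the total diversification benefit.
total = aggregate(capital, corr)
print(f"sum of stand-alone capital : {capital.sum():8.1f}")
print(f"aggregated capital         : {total:8.1f}")
print(f"total diversification      : {capital.sum() - total:8.1f}")

# Step-by-step aggregation: add one risk at a time and track the benefit.
for k in range(2, len(risks) + 1):
    step = aggregate(capital[:k], corr[:k, :k])
    benefit = capital[:k].sum() - step
    print(f"after adding {risks[k - 1]:<12}: aggregated {step:8.1f}, benefit {benefit:6.1f}")
```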
7.2.5 Validation of the projection of capital amount
Appropriateness of the projection factor
When the capital amount is projected by using some ratios, drivers, or carriers, as opposed to an exact
calculation, there is a need to validate whether this projection gives an appropriate projection of the risk
adjustment according to the risks.
For instance, the following factors may not be appropriate drivers in some situations:

Premiums may not be appropriate drivers for contracts where they may be flat for the life of the
contract, while risks may be increasing (e.g., whole life contracts with fixed periodic premiums, or
long-term health contracts with fixed periodic premiums); and

Liabilities in local GAAP or regulatory reporting may not be appropriate drivers when local GAAP
(or regulatory) liabilities are based on methods and assumptions that are independent of the actual
risks of the insurance contract, or based on prudent parameters that would over-estimate the actual
risks of the insurance contract.
For short-term risks (e.g., motor insurance) the payment pattern of the liabilities can often be used as an
appropriate projection driver. It can be assessed using extrapolation techniques on claims triangles, for
instance.
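As a purely illustrative sketch, a payment pattern could be derived from a cumulative paid-claims triangle using volume-weighted chain-ladder development factors; the triangle below is hypothetical.

```python
import numpy as np

# Hypothetical cumulative paid-claims triangle (rows: accident years,
# columns: development years); np.nan marks future, unobserved cells.
triangle = np.array([
    [100.0, 160.0, 190.0, 200.0],
    [110.0, 175.0, 205.0, np.nan],
    [120.0, 190.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

# Volume-weighted chain-ladder development factors.
factors = []
for j in range(triangle.shape[1] - 1):
    observed = ~np.isnan(triangle[:, j + 1])
    factors.append(triangle[observed, j + 1].sum() / triangle[observed, j].sum())

# Cumulative development to ultimate, then the implied incremental payment
# pattern, which can serve as a projection driver for short-term liabilities.
cum_to_ultimate = np.cumprod(factors[::-1])[::-1]   # factor to ultimate from each development age
pct_paid = np.append(1.0 / cum_to_ultimate, 1.0)    # proportion paid by the end of each age
pattern = np.diff(np.insert(pct_paid, 0, 0.0))      # incremental payout pattern
print("development factors :", np.round(factors, 3))
print("incremental pattern :", np.round(pattern, 3))
```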
Appropriateness of a constant projection ratio
Using a constant projection ratio makes the implicit assumption that the ratio between the capital amount
and the projection driver is a constant. This assumption ought to be validated—in particular, the actuary
may check:

Whether this is true for each period of projection; and

Whether this is true for each scenario being projected.
In particular, a constant ratio also implies the diversification benefits are constant over time. Checking this
is important, as some risks will typically be extinguished before others, which could lower the diversification
benefits and hence increase the appropriate proportion of capital against the projection driver being used.
Length of projection
The projection of capital is typically done on a long-enough time horizon so that the remaining risks after
that period are immaterial.
7.2.6 Validation of the rate of return on capital
The rate of return on capital used for the cost-of-capital approach ought to reflect the compensation for
bearing risk with the specific risks associated with the fulfilment cash flows for the obligations as of the
financial reporting date. The validations will mainly operate by comparisons with available information for
similar risks. For example:

Capital Allocation—For instance, an insurance entity may allocate its capital to specific lines of
business or blocks of business. The capital allocation can be tested for consistency with respect to
measures of the primary risk drivers among capital allocation segments.

Rate of return on capital—The rate of return on capital may differ among business risk segments.
Such differences may reflect some of the five qualitative principles for risk adjustment, as discussed
in chapter 5.
Whatever the method to determine the level of capital and the rate of return on capital, the validation tests
are normally structured to identify inconsistencies between the capital amount and the rate of return on
capital. The main concern is to avoid gaps or overlaps being reflected in the considerations used to set the
capital amount and the rate of return.
In general, differences in risk metrics would affect the capital amount based on those metrics. The rate of
return on capital tends to reflect the compensation desired for a given level of risk. However, because there
can be qualitative considerations regarding different risks, the metrics may not be a sufficient means to
capture the differences in risk. Consequently, the rate of return on capital may be an alternate means to
calibrate the risk adjustment calculations.
Section 7.3 Validation for risk adjustment aggregation/allocation
7.3.1 Reminders about risk adjustment sub-additivity
Risk adjustments can be calculated at different levels of granularity (for instance, at the level of the
insurance contract, or insurance line of business, or legal entity). In each case, the risk adjustments
calculated will need to be aggregated (e.g., consolidation of results at the level of the insurance group) or
allocated (e.g., allocation of diversification benefits calculated at aggregated level).
The uncertainty in the estimation for two contracts computed together cannot be larger than the sum of
uncertainties in the estimation of each contract separately. As a direct consequence, the risk adjustment is
a sub-additive measure.
In other words, if risk adjustments are computed separately for insurance liabilities “A” and “B”, the risk
adjustment calculated at aggregated level “A+B” will be equal to or lower than the sum of the risk adjustment
“A” and the risk adjustment “B”:
Risk Adjustment(A + B) ≤ Risk Adjustment(A) + Risk Adjustment(B)
This last inequality applies regardless of the granularity of insurance liabilities “A” and “B”. Hence,
regardless of the methodology used for the aggregation or the allocation of risk adjustments, this principle
of sub-additivity can be validated.
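A minimal numerical check of this inequality, assuming an illustrative CTE-based definition of the risk adjustment (CTE in excess of the mean) and two simulated, positively correlated liability portfolios, might look as follows; all parameters are hypothetical.

```python
import numpy as np

def risk_adjustment_cte(losses: np.ndarray, level: float = 0.75) -> float:
    """Illustrative risk adjustment: CTE(level) in excess of the mean."""
    threshold = np.quantile(losses, level)
    return float(losses[losses >= threshold].mean() - losses.mean())

rng = np.random.default_rng(42)
# Two hypothetical, positively correlated liability portfolios.
z = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, 0.5], [0.5, 1.0]], size=100_000)
liab_a = np.exp(10.0 + 0.20 * z[:, 0])
liab_b = np.exp(9.5 + 0.30 * z[:, 1])

ra_a = risk_adjustment_cte(liab_a)
ra_b = risk_adjustment_cte(liab_b)
ra_ab = risk_adjustment_cte(liab_a + liab_b)
print(f"RA(A) + RA(B) = {ra_a + ra_b:,.0f}")
print(f"RA(A + B)     = {ra_ab:,.0f}  (expected to be <= the sum above)")
```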
7.3.2 Validation of ex-ante aggregation of risk adjustments
Ex-ante aggregation is defined as when the aggregation occurs before the actual calculation of risk
adjustments; for instance, when the aggregation occurs directly at the level of assumptions.
In this case, specific attention is usually given to:

Consistency of assumptions between elements aggregated;

Being exhaustive but avoiding double-counting; and

Qualitative analysis of the global risk adjustment value.
The input for the calculation method assumes homogeneous data (e.g., if two cash flows are expressed in
different currencies, they need to be converted in the same currency before being aggregated).
This consistency of assumptions can be assessed by testing that the:

Economic assumptions are the same for each component aggregated; and

Components being aggregated have the same level of granularity.
7.3.3 Validation of ex-post aggregation of risk adjustments
Ex-post aggregation is defined as when the aggregation of the risk adjustments occurs after the risk
adjustments have been computed.
In that case, the validation of the aggregated risk adjustment focuses on the:

Consistency of the calculation methods used for each pre-aggregation risk adjustment;

Calculation of the diversification effects, and their qualitative analysis; and

Aggregation’s sub-additivity.
In particular, the aggregation of risk adjustments will typically be based on an underlying dependence
structure (e.g., correlation matrix or more sophisticated copulas). It is then necessary to validate the chosen
dependence structure, which can be done either:

Directly, when enough data are available, by statistical tests on the dependence structure; or

Indirectly, by testing the properties of the assumed dependence structure. In that case it is
especially important to test the properties of the assumed dependence structure in the tail region of the statistical distributions; a brief sketch of such a tail check follows below.
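As a sketch of such an indirect tail check, the following example compares the empirical joint tail behaviour of pseudo-observations simulated from a Gaussian copula (no asymptotic tail dependence) with those from a Student-t copula; the correlation, degrees of freedom, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def empirical_tail_dependence(u: np.ndarray, v: np.ndarray, q: float = 0.99) -> float:
    """Estimate P(V > q | U > q) from pseudo-observations u, v in (0, 1)."""
    exceed_u = u > q
    return float(np.mean(v[exceed_u] > q)) if exceed_u.any() else 0.0

rng = np.random.default_rng(0)
n = 200_000
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]

# Pseudo-observations under a Gaussian copula.
g = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u_gauss = stats.norm.cdf(g)

# Pseudo-observations under a Student-t copula with 4 degrees of freedom
# (positive tail dependence), built from the same correlated normals.
chi2 = rng.chisquare(4, size=n)
t_sample = g / np.sqrt(chi2 / 4.0)[:, None]
u_t = stats.t.cdf(t_sample, df=4)

for name, u in (("Gaussian copula", u_gauss), ("t copula (df=4)", u_t)):
    lam = empirical_tail_dependence(u[:, 0], u[:, 1], q=0.99)
    print(f"{name:<16}: empirical upper-tail dependence at the 99% level: {lam:.3f}")
```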
Section 7.4 Validation of the final result
7.4.1 Validation through sensitivities
Sensitivities on assumptions will highlight the parameters that have the most material impact on the level
of the risk adjustment and that will need specific tests to assess their value (statistical tests, back-testing,
etc.). To assess the robustness of the risk adjustments calculated, it is appropriate to calculate sensitivities
to each significant parameter of the computation, including sensitivities to:

The statistical distribution retained for each standalone risk (e.g., testing the level of the risk
adjustment if a different statistical distribution is used);

Each parameter of each statistical distribution (e.g., testing the level of the risk adjustment when a
given parameter is increased/decreased by a given percentage);

The dependence structure retained (e.g., testing the level of the risk adjustment if a different copula
is used, or a copula with different parameters).
In cases where parametrization is challenging and qualitative assessments are required, sensitivity testing can also be utilized. A range of parameters or scenarios, translated from the qualitative assessments discussed in chapter 5 to incorporate considerations around data quality, the type of risk modelled, the correlation of extreme events, and model capability, can be tested to assess the robustness of the risk adjustments.
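A minimal sketch of such a sensitivity loop, assuming an illustrative quantile-based risk adjustment and hypothetical lognormal parameters, could be structured as follows.

```python
import numpy as np
from scipy import stats

LEVEL = 0.75  # illustrative confidence level for the risk adjustment

def risk_adjustment(dist) -> float:
    """Quantile-based risk adjustment: chosen percentile in excess of the mean."""
    return float(dist.ppf(LEVEL) - dist.mean())

# Base case: lognormal liabilities with hypothetical parameters.
base = stats.lognorm(s=0.25, scale=np.exp(10.0))
print(f"base risk adjustment: {risk_adjustment(base):,.0f}")

# Sensitivity to the distribution family (same mean and variance, different shape).
m, v = base.mean(), base.var()
alt = stats.gamma(a=m**2 / v, scale=v / m)
print(f"alternative (gamma) : {risk_adjustment(alt):,.0f}")

# Sensitivity to a single parameter (+/- 10% on the volatility parameter).
for shock in (-0.10, 0.10):
    shocked = stats.lognorm(s=0.25 * (1 + shock), scale=np.exp(10.0))
    print(f"sigma at {1 + shock:.0%} of base: {risk_adjustment(shocked):,.0f}")
```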
7.4.2 Validation through analysis of change
The variation of the risk adjustment from the previous reporting period to the current reporting period is
typically assessed through an analysis of change that will split the variation of the risk adjustment by
changing, step by step, each assumption between the two calculation dates. During this process, the
qualitative analysis ought to demonstrate that at each step the risk adjustment variation is consistent with
the variation of assumptions.
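As an illustration, the step-by-step analysis of change could be scripted as below; the risk adjustment definition, the assumption set, and the values are hypothetical, and the attribution depends on the order in which the assumptions are updated.

```python
import numpy as np
from scipy import stats

def risk_adjustment(sigma: float, mu: float, level: float = 0.75) -> float:
    """Illustrative quantile-based risk adjustment for lognormal liabilities."""
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return float(dist.ppf(level) - dist.mean())

# Assumptions at the previous and current reporting dates (hypothetical values).
prior = {"sigma": 0.25, "mu": 10.00}
current = {"sigma": 0.28, "mu": 9.95}

steps = {}
working = dict(prior)
previous_ra = risk_adjustment(**working)
# Change one assumption at a time and attribute the movement to that step;
# note that the attributed amounts depend on the chosen order of the steps.
for name in ("sigma", "mu"):
    working[name] = current[name]
    new_ra = risk_adjustment(**working)
    steps[name] = new_ra - previous_ra
    previous_ra = new_ra

print(f"risk adjustment, prior period  : {risk_adjustment(**prior):,.0f}")
for name, impact in steps.items():
    print(f"  impact of updating {name:<5}: {impact:+,.0f}")
print(f"risk adjustment, current period: {risk_adjustment(**current):,.0f}")
```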
7.4.3 Validation through benchmarks
Once the final values are computed, a validation step is to check if the value is consistent with benchmarks
for the considered line of business. For instance, if the ratio of risk adjustment to the net present value of
fulfilment cash flows is significantly different from corresponding measures historically used by the business
or between products or group of claims, this could be indicative of an error. These consistency ratios or
measures are relatively easy to compute and therefore useful for detecting modelling errors.
The Solvency II quantitative impact studies (QIS) gave benchmarks of the risk adjustment, based on
percentages of the best estimate liabilities, for property/casualty lines of business. These values can be
used to benchmark the computations. Being able to explain the difference between the insurer’s calculation
and the QIS benchmark, or any other market benchmark, is a reasonable validation step.
7.4.4 Validation through proxies
Other tests of the risk adjustment include the computation of proxies of risk capital. For instance, the
Solvency II QIS gave a comprehensive approach for calculating proxies for one-year risk capital calculation,
i.e., for the risk in a property/casualty insurance contract, knowing the liability volatility of a line of business
is enough to compute its proxy-SCR. Those proxies can then be projected using the same projection drivers
as those used in the model to assess a proxy-risk adjustment.
Chapter 8 – Remeasurement of the
Risk Adjustment
Abstract
This chapter will discuss the remeasurement of the risk adjustment as the facts and circumstances affecting
the underlying calculations change over time.
Its first part discusses actuarial and accounting principles surrounding the remeasurement of the risk
adjustment.
Its second part discusses the components of the risk adjustment that can be changed, common triggers for
changes in them, and additional practical considerations relevant to the remeasurement process.
Section 8.1
Principles of risk adjustment remeasurement
An entity’s view of the uncertainty related to the fulfilment cash flows can change significantly over time. As
such, its ability to revise its estimate of the liability attributable to its in-force contracts is a key tenet of the
IFRS insurance accounting framework. The Conceptual Framework for Financial Reporting states in QC4
that:
If financial information is to be useful, it must be relevant . . . and faithfully represent what it purports
to represent. The usefulness of financial information is enhanced if it is comparable, verifiable,
timely and understandable.
Accordingly, in order for the information represented in the risk adjustment to conform to the principles of
timeliness and relevance, IFRS X Insurance Contracts states that:
The present value of the fulfilment cash flows shall reflect all available information at the end of the
reporting period (i.e., it shall reflect current estimates of the amount, timing and uncertainty of the
remaining future cash flows, current discount rates and a current risk adjustment). An insurer shall
review its estimates at that date and update them if evidence indicates that previous estimates are
no longer valid. In doing so, an insurer shall consider both of the following:
(a) whether the updated estimates represent faithfully the conditions at the end of the reporting
period, and
(b) whether changes in estimates represent faithfully changes in conditions during the period.
Updated information is continuously received that could affect the determination of the risk adjustment. A
remeasurement of the risk adjustment at every reporting period will ensure that users of the financial
statements in question will have the most up-to-date information about the uncertainty of the cash flows.
As discussed in the next part of this chapter, many components affect the risk adjustment calculation, and
the frequency at which they might be estimated varies significantly. For instance, information regarding the
number of policies in force, the distribution of policies among established risk cohorts, and unpaid claim
liabilities is readily available and will drive changes in the risk adjustment in each reporting period. In
contrast, claim experience data that can be used to update cash flow assumptions for future claims may
require a significant amount of time and volume to be deemed credible, and therefore may be updated less
frequently. Professional judgment, in light of materiality and the availability of resources, is needed to determine when each of the components in the risk adjustment should be remeasured.
The measurement framework as established by the IASB differs from other paradigms in which insurers
may operate. For instance, some companies may report in certain required financial statements (not IFRS)
using assumptions to calculate the liability for future policy benefits (i.e., active life reserves) that are locked
in for the remaining life of the policy. Such a lock-in concept would dictate that the liability calculation
assumptions cannot be changed unless the underlying business is found to be in a loss position. In the
event that experience is favourable relative to the original valuation assumptions, the booked liability will
not reflect the improvement under such reporting requirements. Therefore, while the IFRS risk adjustment
will change to the extent that emerging experience indicates a reduction in risk, other such reporting
requirements would not reflect the change in risk.
The continual remeasurement of the risk adjustment has a number of implications for insurers subject to
IFRS X Insurance Contracts. Entities required to also report financial information using a different
remeasurement structure (such as provided by local laws or regulations) will need additional valuation
processes to satisfy the IFRS X Insurance Contracts requirement that the insurance contract liability,
including the risk adjustment, consistently reflects the most up-to-date information. All insurers will need to
consider and update their policies and procedures to establish and monitor the criteria used to determine
when a component of the risk adjustment calculation should be updated. As discussed in the next chapter,
companies will also need to conform to various disclosure requirements under IFRS X Insurance Contracts
related to the remeasurement of the risk adjustment.
Section 8.2
Components of remeasurement
The need for remeasurement of the common components underlying risk adjustment calculations may vary.
However, changes in the following components are the most likely to require remeasurement:

Volume/demographic nature of business—At a minimum, the risk adjustment needs to be
remeasured in every reporting period to take into account newly issued policies, policies that have
lapsed, policies that have changed benefit amounts, losses that have been settled, new claims that
have made, changes in the estimated value of individual claims, and other changes in the expected
value or uncertainties in the remaining insurance liability cash flows. Additionally, as time passes,
the make-up of the portfolios of policies in force or the liabilities will change and, therefore, the risks
and uncertainties in the future cash flows can change. As newly issued policies are added to the
portfolio, as policies lapse, and as liabilities change, it is also possible that major changes to the
risk characteristics will cause the insurer to have a different view of the level of risk adjustment it
requires to meet the measurement objective.

Valuation assumptions—An update to any common valuation assumption, such as mortality,
morbidity, lapse, loss development patterns, cash flow patterns, IBNR factors, assumptions utilized
in the probability distributions, or the present value discount rates 37 can create the need to change
the risk adjustment.

Risk adjustment techniques and the parameters for the risk adjustment calculation—A change to
the techniques, methods, or parameters used to determine the risk adjustment will also be a
remeasurement itself. Chapters 3–5 of this monograph contain a detailed discussion of common
techniques, methods, and parameters. In some situations, the entity may decide that the risk
adjustment techniques it has been using are no longer appropriate for the measurement objective
for a block of business, or some element of the technique can no longer be applied appropriately.
37 In particular, to the extent that the valuation discount rate is based on observable market forces, it is generally expected that changes in the discount rate assumption will occur frequently.
For example, it might have been previously using a CTE technique to set its level of risk adjustment.
Based on updated experience or possibly updated industry studies, it decides a confidence level
technique would be a more appropriate method of risk adjustment for some or all of its business.
Additionally, for a cost-of-capital technique, it may decide it is more appropriate for the level of
capital it requires to be adjusted to correspond to changes in some of the more qualitative
characteristics of the uncertainty in the tail of the distribution. An example of a parameter change
would be an entity using a confidence level technique that had previously set its risk adjustment at a 90% confidence level; the entity then revises its current view of the compensation it requires for uncertainty to a 95% confidence level. Generally, changes to risk
adjustment techniques, and the associated methods or technical parameters, would be expected
to be infrequent and are likely to flow from the entity’s reassessment of the uncertainty in the
remaining future cash flows from its business, particularly in light of emerging economic or marketplace circumstances, or other factors affecting its views about how the measurement
objectives are best applied to its insurance liabilities.
Section 8.3
Common triggers for changes in the risk adjustment calculation
Throughout the life of a contract or a portfolio of contracts, including the remaining settlement period for
claims, changes in conditions can trigger the remeasurement of risk adjustment. Some might be passive,
with the entity merely observing alterations in the marketplace that cause it to have a different view on its
assessment of the level of risk, either through changes in risk measures or in its evaluation of non-quantitative risk characteristics. For other changes, the entity might take a very deliberate action or actions
to purposefully alter the level of compensation it requires to bear uncertainty. For example, it may have a
change in its risk preferences. Regardless of the cause and effect, remeasurement would be necessary.
8.3.1 Size and composition of underlying business
The size and composition of a portfolio of insurance liabilities is constantly changing over time and will
therefore require a frequent reassessment of how the remeasurement of the risk adjustment will be
accomplished. Below are examples of how changes in liabilities’ size or composition could affect a change
in the risk adjustment calculation.

The demographic composition of a block of life insurance liabilities often changes over time and
frequently alters the entity’s expectation of the inherent risk present in those liabilities. Certain
policyholder characteristics, such as age, sex, marital status, and underwriting standard are often
clearly delineated to maintain the most precise measurement of the risk exposure as is practical.
As time progresses, changes to such characteristics or the manner in which they are classified will
result in the need to update the calculation.

A change in the volume of business, in terms of the number of active policyholders, level of in-force
benefits, or the composition of the unreported or unsettled claims, may trigger a change in the
calculation. For example, life insurance liabilities are frequently calculated based on previously
determined liability factors that represent the level of liability necessary for policies with similar
characteristics and a base number of units. These factors are then applied to the number of units
present in a block of business for in-force policies with similar characteristics to arrive at the total
liability for that block. Therefore, a change in the volume of business will have a direct impact on
the level of liabilities. However, as discussed below, changes in business volume will influence the
perception of the credibility of the reserving method. The law of large numbers states that the risk
that the average of the cohorts’ claim activity will deviate from expectations grows as the size of a
cohort of insurance policies decreases. In such cases, a large decrease in the volume of the in-force business could conversely justify an increase in the relative size of the risk adjustment.

Additionally, a segment of in-force business considered to be immaterial that uses a simplified
method or approach to estimate the risk adjustment might become more material due to changes
in volume or observed risk. In such a situation, the risk adjustment calculation would need to be
refined to be more robust and more reflective of the materiality of the uncertainty from that segment
of business.
8.3.2
Experience studies and assumption updates
As actuaries monitor experience as it relates to current pricing and valuation or reserving assumptions, the
experience related to the risk and uncertainty in those assumptions may also change. For example, in life
insurance a change in the expectation of the underlying financial dynamics of an insurance product, in
particular regarding policyholder behaviour, will most likely change the assessment of uncertainty in the
product’s cash flows, and therefore the risk adjustment. There are many instances where examining
experience could result in a change in the uncertainty that drives the risk adjustment calculation.
A number of factors will influence the type and frequency of the experience studies that are performed that
will impact key assumptions, including—but not limited to—the materiality of the in-force business, the
credibility of the experience, and the entity’s views on the relevance of the experience studies. The results
of such studies may also expose the need for more exhaustive analysis in future studies that could lead to
a refinement of the understanding of product risks. For example, when experience deviates from
expectations, it is common to thoroughly analyse the results to more fully understand the source of the
deviation, leading to a refinement of the categorization of risks and a more reliable estimate of the inherent
risk and uncertainty associated with estimates of the fulfilment cash flows. Also, the underwriting
procedures or risk classifications may be changed, which could ultimately result in a change in the risk
adjustment calculation in future periods.
Updates of experience studies are not the only events that could trigger a change in expectations. For
example, a change to claims management practices could trigger a change in the risk adjustment
calculation. To the extent claims management becomes more or less aggressive in its investigation and
scrutiny of claim payments, the entity's view of the riskiness of the cash flows may change, and thus a
modification to the risk adjustment calculation may be appropriate. A change in economic performance,
such as investments, inflation, and interest rates, is likely to result in a change in the risk adjustment
calculation to the extent the economic indicators are deemed relevant and credible with respect to the risk
and uncertainty in the future cash flows or in the compensation required by the entity.
8.3.3
Governance and controls on assumptions
There are also supervisory and regulatory bodies with separate requirements regarding how insurance
liabilities are determined for regulatory purposes, which could indirectly influence how an entity views the
compensation required for bearing risk with respect to IFRS. For example, capital requirements imposed
by solvency standards, such as Solvency II in Europe, Risk-Based Capital in the U.S., or Minimum
Continuing Capital and Surplus Requirements in Canada, can affect how capital is viewed by the entity.
Solvency considerations, for instance, may be used to attribute or allocate capital internally based on the
uncertainty in the fulfilment cash flows versus other risks related to maintaining a reasonably conservative
capital position. However, such solvency regulations and related capital considerations can influence the
compensation an entity requires for bearing risk in its insurance liabilities. Therefore, to the extent that such
solvency considerations are impacted by changes in the uncertainty in the insurance liabilities, there may
also be a corresponding change in the risk adjustment.
8.3.4
Industry experience studies and market consistency
Some actuarial professional organizations, such as the Society of Actuaries and LIMRA in the U.S., provide
the facility for the analysis of aggregated industry experience of many of the most significant valuation
assumptions used by actuaries. Actuaries will reference such studies for a number of reasons, including
but not limited to:

In cases where the experience of an entity’s business is not available or not credible, the results of
industry experience are commonly used as the basis for pricing and valuation assumptions;

As a reasonableness check on the entity’s experience-based assumptions; and

As a means of comparing the entity’s experience to the broader industry.
Given the reliance placed on industry experience, it can be expected that any widely communicated
updated information, whether in the form of formal experience studies or bulletins discussing recent
experience trends, will impact the identification and measurement of the uncertainty inherent in an entity’s
insurance liabilities and is likely to affect the risk adjustment calculations.
Section 8.4
Practical considerations
The underlying changes in unpaid claim development, policyholder behaviour, or external market forces
that will drive changes in the components of the risk adjustment over time can be difficult to detect. This
may be because there may not be a credible base of experience over which to evaluate relevant metrics,
or there may be a belief that past experience is not indicative of future experience or that the assumptions
that are studied and modelled are slow moving and changes may be difficult to detect over short periods.
Even if changes in underlying policyholder behaviour or claimant behaviour were easy to detect, there may
also be other practical limitations to being able to update the risk adjustment assumptions. Examples
include resource availability, systems flexibility, and the materiality of the effect of changing assumptions
or method parameters on financial statements.
The remainder of this section will address in greater detail other considerations, be they theoretical or
practical, to be made in determining whether to update an underlying assumption or method parameter.
8.4.1
Credibility and actuarial judgment
It is common during the course of evaluating insurance experience to assign a certain level of credibility to
the experience being studied. As the volume of data underlying the experience population grows, it is
typically deemed more credible. Actuaries will often employ various techniques to enhance the credibility
of the data set being studied, including looking further back in time or broadening the base of experience.
The actuary will balance the benefits of such adjustments with the costs and relevance of that other
experience. The benefit of such practice is to enrich the credibility of the overall pool of data from which
insights are derived. The cost of doing so is that the broader data set with which an entity’s own experience
is blended may not be of a similar risk profile, and that the experience of the broader data set may not be available
at the frequency necessary or desired by the actuary studying an entity’s experience.
Actuarial judgment is crucial when analysing the credibility of a block of past experience to determine if
recently observed experience trends are accurate depictions of future events. For example, the actuarial
profession has produced significant evidence over the last several decades that rates of mortality have
improved (decreased) over time. That said, such improvement can be slow, and the rate of improvement is
difficult to estimate, especially over short periods. For types of insurance coverage with relatively infrequent
claim events (e.g., mortality and morbidity risk for accidental death and dismemberment insurance
products), even if experience data sets are fully credible, it is not unusual for emerging experience related
to certain assumptions to vary extremely over time. The actuary must then exercise judgment to determine
whether very recent experience is an emerging trend, or simply an outlier in an emerging time series.
Reacting too quickly to recent experience in the determination of actuarial assumptions can add
unnecessary and possibly misleading variability to the values in the reported financial statements. To
temper reactiveness to emerging experience, actuaries sometimes use auto-regressive or moving average
techniques to more gradually recognize emerging experience patterns. However, such smoothing
techniques will need to be reconciled against the IFRS measurement objectives concerning current
estimates, including an unbiased, probability-weighted estimate of the mean (building block one). In
addition, where the emergence of new trends or patterns is not considered credible, the risk assessment
underlying the risk adjustment should consider the possible risk impact of emerging experience or new
evidence of changes in variability of key experience values.
8.4.2
Sensitivity analysis and materiality
The calculation of insurance contract liabilities and component risk adjustments is likely to be more sensitive
to some assumptions than others. Having an awareness of the sensitivity of the amount of insurance
contract liabilities to each relevant assumption will help the actuary understand which assumptions are
more critical to track and re-estimate on an ongoing basis. In situations where the implementation of new
actuarial valuation assumptions costs significant time and effort on the part of the actuary (e.g., to load
valuation models, run attribution analyses to explain the impact of the new assumptions, and produce
related disclosures), actuaries have frequently established well-articulated policies and procedures that
consider the liabilities’ sensitivity to the assumptions and the materiality of the liabilities to determine an
appropriate frequency for the reconsideration of the assumptions used to estimate the insurance liabilities.
IFRS X Insurance Contracts requires that the assumptions used need to be current. Actuaries may
determine that updating certain assumptions less frequently is appropriate, if the impact on the financial
statements is acceptable as reasonable and meets materiality considerations.
8.4.3
Reasonableness checks
When risk adjustments are re-measured, controls over financial reporting typically include a certain level of
checking/validation as to whether the remeasured risk adjustment is reasonable. When there are no
changes in assumptions or method parameters, such controls might include a simple comparison of the
change in the risk adjustment relative to previous valuation periods. When there are changes to underlying
assumptions or method parameters, an effective check of the reasonableness of those changes might
include an estimate of the change in the insurance liabilities absent changes in assumptions or method
parameters, and then a comparison of the change in the insurance liabilities when taking into consideration
the change in valuation assumption or method parameter. Such analyses are likely to become the
expectation as disclosures of changes in liabilities become more uniform across preparers of
financial statements over time.
8.4.4
Other practical considerations
There are practical limitations to updating assumptions and resultant risk adjustment remeasurement. The
most common is having the staff and resource availability to analyse the experience, update and receive
approvals for change, and update relevant models. Because assumption studies typically require a large
amount of work, insurers balance the need for experience analysis with the many other responsibilities
delegated to the actuary.
Another critical practical consideration is the flexibility of models and systems to effectively store and
manage the assumptions used to generate historical data for financial statements, and back-end data
warehouses to catalogue output tracked to various assumptions or method parameters. System limitations
or other constraints concerning the amount of data available may impact how to store and track
assumptions related to historical financial statement data. Even with extensive system data capacity,
back-end data warehouses used to store experience and valuation input will need to be able to track data,
assumptions, and results for each historical financial statement and will ideally be sufficiently nimble to be
able to perform attribution analyses of the drivers of the change in the insurance contract liabilities and the
risk adjustments.
Section 8.5
Interaction between risk adjustment and CSM
As discussed in chapter 5, under the IFRS X Insurance Contracts guidance, the CSM is unlocked at
subsequent valuation periods to reflect the changes in the estimates of future cash flows. It needs to be
adjusted to reflect the current estimates of the risk adjustment that relate to coverage and other services
for future periods, subject to the condition that the CSM cannot be negative. Hence, if the risk adjustment
is determined at a level higher than the level of aggregation at which the CSM is calculated, for each
subsequent valuation period, it will need to be allocated down to the CSM’s calculation level in order for the
insurer to conduct the unlocking of the CSM.
Chapter 9 – Disclosure and
Communication
Abstract
The objective of the disclosure requirement for the IFRS financial reporting is to enable users of financial
statements to understand the nature, amount, timing, and uncertainty of future cash flows that arise from
contracts within the scope of the applicable IFRS standard. Under the IFRS balance sheet, the risk
adjustment is required to be separately identified and disclosed. In addition, the change in the risk
adjustment from period to period is required to be recognized in profit or loss as part of the insurance
contract revenue. An example of this presentation is included in section 11.1. Specifically, related to the
risk adjustment, the following are required for disclosure by insurance entities:
1. Quantified amounts;
2. The judgements and changes in those judgements made when applying the IFRS standard; and
3. The nature and extent of the risks that arise from insurance contracts should be disclosed by
insurance entities, including the changes in the risk adjustment during the period.
This chapter discusses each of the requirements above.
Section 9.1
Disclosure of quantification
The quantified risk adjustment amount needs to be disclosed. Depending on the level of aggregation for
the risk adjustment calculation, there may be one or multiple risk adjustment amounts disclosed for an
entity.
No extensive guidance is provided under IFRS for the level of aggregation for the risk adjustment. In
general, the guidance under IFRS is principle-based such that the level of aggregation for the risk
adjustment reflects the entity’s perception of the compensation it requires to be indifferent between
accepting the original uncertain cash flows or accepting the expected cash flows with no uncertainty.
Insurance entities consider how to present their disclosures so that useful information is communicated clearly and not obscured by inappropriate aggregation or disaggregation. If disclosures are disaggregated more than necessary, they may include a large amount of insignificant detail; conversely, if items with different characteristics, such as types of contracts or reporting segments, are aggregated together, the reported results may be distorted.
If the entity uses a technique other than the confidence level technique for determining the risk adjustment,
it discloses a translation of the result of that technique into a confidence level (for example, that the risk
adjustment was estimated using technique Y and corresponds to a confidence level of Z%). However, IFRS
X Insurance Contracts requires the disclosure of the equivalent confidence level at the entity level or
reporting segment. As discussed in chapter 6, challenges may exist for multi-line insurers in reporting an
aggregate confidence level.
In order to translate the risk adjustment estimated using other techniques to a confidence level, it is
necessary to understand and estimate the distribution of the liabilities. In the most simplistic case, for a
uniform distribution, the CTE90 corresponds to a 95% confidence level. Nonetheless, for a line of business
that may have extreme losses at the tail, CTE90 will correspond to a confidence level higher than 95%. If
the CTE measure is used, it may or may not be possible to translate to a confidence level based on a closed-form solution. Take a normal distribution for example. Mary Hardy 38 indicates that the CTE measure can be derived in closed form based on the confidence level measure and the probability density function, as follows:

CTE(Q) = φ(z_Q) / (1 − Q), with z_Q = Φ⁻¹(Q)

where φ is the standard normal probability density function, Φ⁻¹ is the inverse of the standard normal cumulative distribution function, and Q is the quantile measure (i.e., confidence level). For a standard normal distribution, based on the formula above, one can easily derive the corresponding confidence levels for CTE measures.
Confidence Level   Risk Adjustment (Confidence Level)   Risk Adjustment (CTE)   Equivalent Confidence Level (CTE)
80.0%              0.842                                1.400                   91.9%
85.0%              1.036                                1.554                   94.0%
90.0%              1.282                                1.755                   96.0%
95.0%              1.645                                2.063                   98.0%
97.5%              1.960                                2.338                   99.0%
99.0%              2.326                                2.665                   99.6%
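For the standard normal case, the translation shown in the table above can be reproduced with a short script; the following sketch assumes only the closed-form relationship given above.

```python
from scipy.stats import norm

# Reproduce the standard normal translation between confidence-level and CTE
# measures shown in the table above.
print(f"{'Conf. level':>12} {'RA (conf.)':>11} {'RA (CTE)':>9} {'Equiv. conf. (CTE)':>19}")
for q in (0.80, 0.85, 0.90, 0.95, 0.975, 0.99):
    z_q = norm.ppf(q)                    # quantile (confidence-level) measure
    cte_q = norm.pdf(z_q) / (1.0 - q)    # closed-form CTE for a standard normal
    equiv = norm.cdf(cte_q)              # confidence level with the same risk adjustment
    print(f"{q:>12.1%} {z_q:>11.3f} {cte_q:>9.3f} {equiv:>19.1%}")
```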
When other approaches are used that do not directly utilize the distribution of liabilities, such as the cost-of-capital approach, it becomes necessary to model out the liability distribution to determine an equivalent confidence level.
Section 9.2
Disclosure of judgements applied
As discussed throughout this monograph, there are judgements involved in risk adjustment estimation, such
as selecting data inputs for the estimation of a risk distribution, selecting a distribution for a modelled risk,
selecting a risk adjustment technique, and other approximations if deemed appropriate by the insurers. Key
judgements applied in the risk adjustment estimation are required for disclosure under the IFRS X Insurance
Contracts guidance.
IFRS X Insurance Contracts does not specify the use of a particular risk adjustment technique. However, it
does require a disclosure of the corresponding confidence level if the entity uses a technique other than
the confidence level technique for determining the risk adjustment. Actuarial judgements need to be applied
in selecting a technique or method and the required inputs and assumptions.
The methods and inputs used to estimate the risk adjustment are required for disclosure. In addition, if there
are any changes in those methods and inputs, explanations of the reason for each, and their effects, need
to be disclosed as well.
38 An Introduction to Risk Measures for Actuarial Applications.
Section 9.3
Disclosure of the nature and extent of risks
IFRS X Insurance Contracts specifically requires that an entity disclose exposures to risks, how the risks
arise, and its objectives, policies, and processes for managing risks that arise from insurance contracts.
Any changes in the risks and how they are managed from previous reporting periods also need to be
disclosed. While this disclosure is not uniquely related to risk adjustments, it reflects many of the inputs,
judgments, and results associated with risk adjustments.
With respect to disclosure related to insurance risks, the entity would disclose information about its
insurance risks before and after risk mitigation (for example, on a gross basis and a net basis for ceded
reinsurance). In addition, there is required disclosure of sensitivity analysis for each type of insurance risk
in relation to its effect on profit or loss and equity. Concentrations of insurance risk, including a description
of how management determines the concentrations and a description of the shared characteristic that
identifies each concentration (for example, the type of insured event, geographical area, or currency), are
also required for disclosure. For further details, IFRS X Insurance Contracts provides guidance around the
required disclosure for insurance risks, as well as disclosures required for credit risk, liquidity risk, and
market risk.
An entity is further required to disclose information about the effect of each regulatory framework in which
the entity operates. The nature and extent of the risks are sometimes affected by the regulatory framework.
For example, two entities may issue the same type of insurance contract to populations with similar
demographics. If one entity is subject to a regulatory requirement for an interest rate guarantee while the other is not, it would then have a different risk exposure, which would affect its risk adjustment quantification.
In this case, the effect of the regulatory requirements shall be disclosed.
Section 9.4
Communication
In addition to required disclosures as discussed above, there are important communications concerning
risk adjustments. To effectively communicate the aspects of risk adjustments, based on a top-down view,
it is important to communicate the following to readers of financial statements and disclosure, as well as
those responsible for financial reporting within the entity:

The risk appetite or the insurer’s view of risks that may drive the estimation of risk adjustments;

The mix of business and differences in lines of business that may drive diversification and its
effects;

Differences among risk adjustment techniques and methods for various lines of business;

Differences and sensitivities in assumptions and parameters for different lines of business; and

Changes in key assumptions and parameters from period to period.
These points are covered under the IFRS X Insurance Contracts disclosure requirements. However, IFRS
X Insurance Contracts does not specify a format of communication. Insurers could adopt such a top-down
framework in communicating the various aspects of risk adjustments to users of financial statements. In
practice, from one financial reporting period to another, insurers could perform an attribution analysis and
use the results of that as an effective communication tool around the change in risk adjustments. An
attribution analysis is a quantitative tool often used in the financial field to analyse the change in certain
financial metrics, such as portfolio performance. It breaks down a given result into the fundamental drivers
of the change to help management or users of financial statements understand why a certain result occurred and how it can be improved. When it is applied to the risk adjustment, insurers could easily
translate the top-down framework into key drivers, and calculate an attribution amount for each driver to
evaluate the impact on the risk adjustment. The key drivers in this quantitative analysis could include changes in the view of risks, in business mix (i.e., change in diversification), in risk adjustment techniques, and in each of the key assumptions for each line, such as in-force demographics. The results of the attribution analysis will help management understand the impact of each driver, which will help
improve the communication to users of financial statements. The results can also be fed back into the
business forecasting process, allowing management to provide more guidance in managing the business.
Chapter 10 – Case Studies
Section 10.1 Cost-of-capital approach for a five-year term life insurance
product (without an endowment feature)
10.1.1 Learning objectives
This case study illustrates the development of risk adjustment for a simple five-year term life insurance
policy without an endowment feature using the cost-of-capital approach. The key learning objective is to
understand the practical application of the cost-of-capital technique in determining the risk adjustment.
10.1.2 Product description
This case study is for a block of five-year term life insurance policies without an endowment feature, issued
to 45-year-old males with a total face amount of CU 50 million. None of the policies in this block have a
surrender value. The table below shows the assumptions for the per-unit premium and the estimated
unbiased probability-weighted expected value (mean) cash flows utilized in this example:
Table 10.1.1 Assumption Table

Year from date of policy issue       1         2         3         4         5
Premium CU / 1000 CU face amount     4.5       4.5       4.5       4.5       4.5
Number of policies                   1,000 at time 0
Face amount per policy (CU)          50,000
Mortality rate                       0.0021    0.0023    0.0024    0.0026    0.0027
Lapse rate                           0.0500    0.0500    0.0500    0.0500    1.0000
Commissions                          75%       5%        5%        5%        5%
Acquisition per policy (CU)          75
Maintenance per policy (CU)          10        10        10        10        10
Annual inflation on maintenance      3.00%
Tax rate                             35%
Discount rate                        2.65%     2.65%     2.65%     2.65%     2.65%
The discount rate is the sum of a risk-free rate and a liquidity premium, assumed to be 2.28% and 0.37%, respectively.
10.1.3 Application of cost-of-capital technique
Using the approach defined in section 3.1.2, the future fulfilment cash flows are projected and capital
amounts are developed for each future year. The cost-of-capital rate is applied to the capital amount for
each year to determine the amount of capital cost. The cost-of-capital risk adjustment is the sum of the
present values of the cost of capital by year over all future years until all of the fulfilment cash flows have
been completed.
Therefore,
Risk Adjustment = PV{ ∑_{t=1}^{n} r_c × C_t }
where
C_t is the capital amount at time t and r_c is the assumed cost-of-capital rate.
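As a purely illustrative sketch of the mechanics of this formula, the calculation could be scripted as follows; the capital amounts, the cost-of-capital rate, and the end-of-year discounting convention are assumptions for illustration only and are not the case-study results developed later in this section.

```python
import numpy as np

# Purely illustrative inputs: projected capital amounts C_t for each future
# year, in thousands of CU (hypothetical values, not the case-study results).
capital = np.array([30.0, 25.0, 19.0, 13.0, 7.0])
cost_of_capital_rate = 0.06   # r_c, assumed for illustration
discount_rate = 0.0265        # consistent with the case-study discount rate

t = np.arange(1, len(capital) + 1)
cost_of_capital = cost_of_capital_rate * capital                 # r_c * C_t for each year
risk_adjustment = np.sum(cost_of_capital / (1 + discount_rate) ** t)  # discounted at end of year t
print(f"risk adjustment (CU thousands): {risk_adjustment:.2f}")
```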
Determining cost of capital
For this technique, the capital amount is based on the determination of a probability distribution for future cash
flows related to the insurance liability. Such capital amounts are not defined based on regulatory capital
adequacy requirements nor on the entity’s actual capital, because the IFRS X Insurance Contracts
measurement objectives are stated in terms of the entity’s requirements for bearing the risk of the uncertain
future cash flows, rather than any external requirements. A confidence level from the estimated probability
distribution of the fulfilment cash flows is selected by the entity that corresponds to the amount of capital
based on the entity’s criteria for the compensation for bearing risk. The IFRS X Insurance Contracts
guidance suggests that this confidence level for determining the capital amount would be at a high degree
of certainty that actual future fulfilment cash outflows will not exceed the amount. It attempts to take into
account the extreme risk in the tail of the probability distribution by using a capital amount large enough to
reflect almost the entire distribution.
There are a variety of methods for computing capital requirements, such as those needed to satisfy the
solvency oversight required by local insurance laws, those required by regulatory provisions of local
insurance supervisory authorities, or those considered by the insurance market as a prerequisite for
insurance policyholders’ choice of insurer. An entity’s risk preferences will depend on the level of security
desired, an assessment of the probabilities that unfavourable cash flow outcomes will consume some or all
of the capital, and the entity’s level of risk aversion regarding the uncertain, unfavourable outcomes.
Determining the cost-of-capital rate
For risk adjustments under IFRS X Insurance Contracts, the entity’s cost-of-capital rate would be chosen
to meet the specific measurement objectives for IFRS risk adjustments, i.e., reflecting a rate consistent with
the entity being indifferent between fulfilling the uncertain insurance contract liability cash flows versus
fulfilling a notional liability where the cash flows are fixed in time and amount, and equal in amount to the
unbiased probability-weighted expected value (mean) of the remaining fulfilment cash flows for each future
time period.
For this case study, a simple cost-of-capital method is applied by using a single rate of return on capital to
a single capital amount for a period of time. The method is a function of the following key variables:
1. The capital amounts appropriate for the risk and uncertainty related to the future fulfilment cash
flows;
2. The period applicable to the capital amount;
3. The rate of return (cost-of-capital rate) applied to the capital amount; and
4. The probability distribution of the uncertain fulfilment cash flows, i.e., the amount and timing of the
cash flows.
Calculations for illustration under consideration (amounts in thousands)
Below, the table presents the estimated fulfilment cash flows for the next five years. Note the assumption
of beginning-of-year timing for premium, commission, acquisition expense, and maintenance expenses,
and an end-of-year timing for benefit payments.
Table 10.1.2 Estimated Cash Flows (in thousands CU)

Year                              1        2        3        4        5
In-force policies: beg of yr      1,000    948.0    898.6    851.6    806.9
Deaths (end of year)              2.1      2.2      2.2      2.2      2.2
Lapses (end of year)              49.9     47.3     44.8     42.5     804.7
In-force policies: end of yr      948.0    898.6    851.6    806.9    0.0

Fulfilment cash inflows:
Premium                           225      213      202      192      182

Fulfilment cash outflows:
Benefits                          104      108      110      109      110
Commissions                       169      11       10       10       9
Acquisition expense               56       0        0        0        0
Maintenance expense               10       10       10       10       9

Net fulfilment cash flows         (114)    85       73       63       53

Capital amount                    104      98       93       88       84
Cost of capital                   6.2      5.9      5.6      5.3      5.0
Discount rate                     2.65%

The capital amount at time t is selected to represent the 99.5th percentile of the probability distribution of the present value of future cash flows in the internal capital calculation. In our case, for simplicity we have adopted an approximation formula to calculate the capital amount at time t:

C_t = 0.18% of face amount_t + 6.16% of premium_t

and a cost-of-capital rate of r_c = 6.00%.
Risk adjustments are calculated in the following table. Note that the change in the risk adjustment from
period to period is required to be recognized in profit or loss as part of the insurance contract revenue.
Table 10.1.3 Risk Adjustment

Time                              0
NPV premium                       965
NPV benefits                      500
NPV commissions                   206
NPV acquisition expense           56
NPV maintenance expense           47
NPV net fulfilment cash flows     157
Risk adjustment                   26
where the 26 CU of risk adjustment is calculated as the sum of present values of the cost-of-capital amounts
6.2 CU, 5.9 CU, 5.6 CU, 5.3 CU, and 5.0 CU for the five projection years.
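A minimal Python sketch of this calculation follows; it uses the rounded capital amounts from Table 10.1.2 and assumes the cost of capital emerges at the end of each year and is discounted at the 2.65% rate.

```python
# Sketch of the cost-of-capital risk adjustment for the term life case study, using
# the rounded capital amounts from Table 10.1.2 (thousands CU). End-of-year timing
# for the cost-of-capital amounts is an assumption made for this illustration.
capital = [104, 98, 93, 88, 84]        # capital amount C_t for projection years 1-5
coc_rate, disc_rate = 0.06, 0.0265     # cost-of-capital rate and discount rate

cost_of_capital = [coc_rate * c for c in capital]            # approx. 6.2, 5.9, 5.6, 5.3, 5.0
risk_adjustment = sum(cc / (1 + disc_rate) ** t
                      for t, cc in enumerate(cost_of_capital, start=1))
print(round(risk_adjustment))          # 26 (thousands CU)
```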
Section 10.2 Risk adjustment for a single-premium fixed deferred annuity
with 10-year deferral period using Wang Transform technique
10.2.1 Learning objectives
This case study illustrates the development of the risk adjustment for a single-premium fixed deferred
annuity with 10-year deferral period using the Wang Transform approach. The key learning objective is to
understand the practical application of this technique in determining the risk adjustment. As a supplement
to this case study, the risk adjustment for this example is also estimated using a cost-of-capital approach.
10.2.2 Product description
The product is a single-premium fixed annuity with a 10-year deferral period. Premium in the amount of
100,000 CU is collected at the beginning of the 10-year period. Annuity payments commence at the end of
the tenth year if the person insured survives past the deferral period. The annuity payments are based on
the account value at the end of the deferred period, which will be converted to a payout benefit stream based
on the then-prevailing market rates. The policy is issued to a 55-year-old female.
10.2.3 Modelled assumptions
The table below shows the relevant assumptions:

Earned interest rate                         Risk-free rate (market rate)
Spread                                       150 bps
Credited rate                                Market rate minus spread
Min. credited interest rate                  1.00%
Gross single premium                         100,000 CU
1st year commission                          7.5%
Other initial expenses                       500 CU
Maintenance expenses                         50 CU, inflating at 3% per year the policy is in force
Surrender rate (lapse rate)                  8% per year (including mortality)
Surrender charge (by year since inception)   5%, 5%, 4%, 3%, 1%, 0%
Front-end load                               3,000 CU (deduction from account balance)
Model simulation (normal distribution)       Risk-free rate (market rate each year for the policy duration)
Mean of account earned rate                  3.85% (risk-free market rate)
Volatility (standard deviation)              3.36% (implied volatility*)
Additional lapse when market rate is high    [(market rate – credited rate – 1%) × 100] × 80%

* Based on an 80/20 mix of bonds and cash. Implied volatility for bonds is based on Merrill Lynch's MOVE index as of 12/31/2009. Implied volatility for cash is assumed to be 1%.
10.2.4 Wang Transform technique
One methodology for the calculation of risk adjustments is a technique called the Wang Transform, as introduced in chapter 3. For an arbitrary insurance risk X, where X is, for example, a distribution of insurance losses, the Wang Transform describes a distortion of the cumulative distribution function (CDF) F(X). The distorted CDF F*(X) can then be used to determine the compensation for bearing risk, with the price (premium) set equal to the expected value of X under the distorted distribution.
For normally distributed risks, Wang Transform applies the Sharpe ratio concept as used in the capital
markets world. However, it can also extend the concept to the skewed distributions. Basically, the
distribution F(x) describes the real-world set of probabilities attributable to a range of possible outcomes,
while F*(x) describes the risk-adjusted probability distribution for the range of outcomes. Essentially, Wang
Transform increases the probability of severe outcomes by reducing their implied percentile. Wang
Transform enables the actuary to calculate a margin for the fair value of liabilities using notions of
compensation for bearing risk, usually denoted by a risk reference parameter, lambda, and a measure of
risk that is determined by the entire distribution.
The normal distribution, being a symmetrical two-tailed distribution, is not a very realistic one for modelling behavioural risks. The lognormal is a skewed, heavier-tailed distribution that better depicts policyholder behaviour. For the lognormal, the CDF transformed by the Wang Transform is also lognormal.
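A minimal Python sketch illustrating this closure property is shown below; the lognormal parameters and lambda used here are illustrative assumptions and are not taken from the case study.

```python
import numpy as np
from scipy.stats import lognorm, norm

# Wang Transform of a lognormal risk: F*(x) = Phi(Phi^{-1}(F(x)) - lambda). For a
# lognormal with log-mean mu and log-sd sigma, the distorted CDF is again lognormal
# with log-mean mu + lambda*sigma. Parameters below are illustrative assumptions.
mu, sigma, lam = 11.0, 0.25, 0.30

x = np.linspace(1.0, 400_000.0, 200_000)
F = lognorm.cdf(x, s=sigma, scale=np.exp(mu))     # original (real-world) CDF
F_star = norm.cdf(norm.ppf(F) - lam)              # distorted CDF under the Wang Transform

# Numerical check that the distorted CDF matches a lognormal with shifted log-mean.
assert np.allclose(F_star, lognorm.cdf(x, s=sigma, scale=np.exp(mu + lam * sigma)), atol=1e-9)

mean_original = np.exp(mu + 0.5 * sigma ** 2)
mean_transformed = np.exp(mu + lam * sigma + 0.5 * sigma ** 2)
print(round(mean_transformed - mean_original))    # margin implied by the transform
```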
Estimating lambda
The transform parameter lambda is the compensation for bearing risk that is needed in applying this
technique to estimate the risk adjustment. That is, it shows by how much the price will increase, should a
measure of risk increase by one unit. It is independent of the nature of risk and is closely related to the
overall risk tolerance of the entity. There are two approaches to estimate lambda: entity-specific and market.
With the entity-specific approach the question should be asked: at what level of extra return, in excess of
the risk-free rate, will the entity be indifferent between accepting the additional marginal unit of risk versus
not accepting the risk? If a market view is adopted, the definition of lambda can be extended to the capital markets, where it is essentially the Sharpe ratio. The Sharpe ratio based on historical returns for a
broad range of domestic equity ranges between 0.3 and 0.4. As mentioned in chapter 3, the entity’s risk
preferences need to be reflected if Wang Transform is adopted for the purpose of estimating risk
adjustments.
10.2.5 Application of the Wang Transform to the case study under consideration

•  1,000 scenarios of cash flows are projected with a risk-neutral growth rate and implied volatility;
•  Market-consistent discount rates are used; and
•  The risk adjustment is calculated using the Wang Transform to capture the risk related to the inherent uncertainty in the central estimate of the present value of the expected future payments. It is calculated as the difference between the mean of the original distribution and the mean of the shifted (or transformed) distribution. Lambda is assumed to be 30% for creating the transformed distribution.
The following graphs show the liability distribution at inception of the policy (year 0), as well as the
comparison between the original probability distribution and the transformed distribution.
[Figure: Distribution of Liability Value (year 0) — histogram of simulated liability values, ranging from approximately 83,596 CU to 109,346 CU]

[Figure: Transformed Distribution — comparison of the original and transformed probability distributions over the same range of liability values]
Results
Table 10.2.1 Wang Transform Approach

Year from date of issue    Liability    Risk adjustment
0                          94,928       1,387
1                          92,193       1,792
2                          89,089       1,917
3                          86,281       2,061
4                          75,544       1,744
5                          72,686       1,944
6                          69,986       2,008
7                          67,381       2,219
8                          64,872       2,102
9                          62,429       2,207
10                         0            0
We note that the sudden drop of risk adjustments to zero at year 10 is due to two reasons:
1. The end of the deferral period is reached, at which point the deferred annuity pays out the account value as a lump sum that gets converted into a pay-out annuity. The risk adjustment is forced to zero at year 10 because the calculation of the risk adjustment for the pay-out annuity (including the risk associated with the future annuity pay-out rates in the market) is beyond the scope of this case study, as it relates to a different product;
2. One could argue that as it gets closer to the end of the deferred period, the variability of the cash
flows for the deferred annuity—driven by mortality risk, lapse risk, and market risk that has an
impact on policyholder behaviour—decreases because the uncertainty associated with the lump
sum is reduced as the deferral period ends. This is a valid argument for one particular policy or a closed block, but in reality, when a mix of in-force and new business exists and the block is steadily growing, the entity would most likely use a fairly stable risk parameter (in our case, lambda in the Wang Transform technique). In this case study we have assumed a static risk parameter for the risk adjustment calculation in all future years, which contributes to the sudden drop in risk adjustments from year 9 to year 10.
Section 10.3 Cost-of-capital approach for a single-premium deferred
annuity product
For the same case as given in section 10.2, using the cost-of-capital technique with capital set at the CTE 90 level and a cost-of-capital rate of 6.0%, the results are shown below.

As illustrated, the risk adjustments calculated are greater than those from section 10.2 produced by the Wang Transform approach. The comparison of the two approaches may serve as a validation of the results. If more confidence is placed on the cost-of-capital approach, the entity could use the results from that approach to calibrate and further review the Wang Transform parameter. A quick analysis suggests that a lambda of 80%, rather than 30%, would produce risk adjustments more in line with the results from the cost-of-capital approach.
Table 10.3.1 Cost-of-Capital Approach

Year from date of issue    Liability    Risk adjustment
0                          94,928       6,258
1                          92,193       6,231
2                          89,089       6,121
3                          86,281       6,008
4                          75,544       5,303
5                          72,686       5,201
6                          69,986       5,042
7                          67,381       4,912
8                          64,872       4,753
9                          62,429       4,595
10                         0            0
Section 10.4 Value-at-risk approach for a block of group long-term disability
policies
10.4.1 Learning objectives
This case study illustrates the development of a risk adjustment for a block of group long-term disability
(LTD) policies using the value-at-risk measure, also known as the confidence level technique, for claims
liabilities of disabled lives. The key learning objective is to understand the practical application of this
measure in determining risk adjustment.
10.4.2 Product description
A group LTD product is usually offered through employers and provides a portion of the insured individual's income during an extended period of disabling illness or injury. The coverage is yearly renewable, i.e., the liability during the coverage period is determined under the simplified approach and therefore does not require an explicit risk adjustment during the coverage period.
10.4.3 Modelled assumptions
For illustration purposes, simplified assumptions are utilized. It is assumed that this modelled group contains 1,000 members in year 2009, when the group coverage begins. Although the LTD coverage is yearly renewable and in reality the rates are not guaranteed from year to year, for simplicity no rate increase is assumed. No claim cost trend assumption is made, to be consistent with the premium assumption. Disability incidences, as well as terminations caused by recovery or death, are modelled each year. The group of members actively paying premiums declines in size due to disablement, and no new members enter the group over time. It is assumed that members whose claims terminate through recovery do not return to the same workforce, so termination of a claim is equivalent to the member exiting the group.

Monthly premium per member is assumed to be 60 CU, so the annual premium is 720 CU. The yearly incidence rate is 0.5%. Termination rates are assumed as follows; the rate in year 10 after disablement is assumed to be 100% to keep the projection horizon at 10 years, for simplicity.
Year after disablement    1      2      3      4      5      6      7      8      9      10
Termination rate          15%    15%    15%    10%    10%    10%    5%     5%     5%     100%
Based on historical experience, the average claim per member per year is estimated to be 21,000 CU, paid at the end of the year and viewed as a static payment amount once a life enters disablement. No waiting period is assumed.
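The following minimal Python sketch shows the deterministic projection implied by these assumptions; it reproduces, to within rounding, the premium and claim figures presented in the next subsection.

```python
# Deterministic projection of the group LTD block from the stated assumptions.
INCIDENCE, ANNUAL_PREMIUM, ANNUAL_CLAIM = 0.005, 720, 21_000
term_rate = [0.15, 0.15, 0.15, 0.10, 0.10, 0.10, 0.05, 0.05, 0.05, 1.00]

members, triangle, premium = 1_000.0, {}, {}
for year in range(2009, 2015):
    premium[year] = members * ANNUAL_PREMIUM
    lives, row = members * INCIDENCE, []      # new disablements in this calendar year
    for d in range(10):                       # project durations 1-10 since disablement
        row.append(lives)
        lives *= 1 - term_rate[d]
    triangle[year] = row
    members -= row[0]                         # disabled members leave the active group

# Claim payments made during calendar year 2014 (the 2014 diagonal of the claim triangle)
claims_2014 = sum(ANNUAL_CLAIM * triangle[y][2014 - y] for y in triangle)
print(round(premium[2014]), round(claims_2014))   # approx. 702,179 and 438,426
```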
10.4.4 Risk adjustment calculation
Following the incidence rate and termination rate assumptions set out above, the disabled lives triangle can be shown as follows (with no rounding to integers; note that the triangle has been extended to a full matrix, so that the entries beyond the calendar-year 2014 diagonal are projections):
Year of        Year after disablement
disablement    1      2      3      4      5      6      7      8      9      10
2009           5.00   4.25   3.61   3.07   2.76   2.49   2.24   2.13   2.02   1.92
2010           4.98   4.23   3.59   3.06   2.75   2.47   2.23   2.12   2.01   1.91
2011           4.95   4.21   3.58   3.04   2.74   2.46   2.22   2.11   2.00   1.90
2012           4.93   4.19   3.56   3.02   2.72   2.45   2.21   2.09   1.99   1.89
2013           4.90   4.17   3.54   3.01   2.71   2.44   2.19   2.08   1.98   1.88
2014           4.88   4.14   3.52   2.99   2.70   2.43   2.18   2.07   1.97   1.87
For valuation year 2014, at the end of the year, the total disabled lives are:
20.88 = 4.88 + 4.17 + 3.56 + 3.04 + 2.75 + 2.49
The claim triangle can be shown in a similar fashion. In a given year, the claim payment is the average
claim per member per year, which is 21,000 CU, multiplied by the number of disabled lives. As shown
below, the total payments made during year 2014 are 438,426 CU, which is the sum of the 2014 diagonal
values.
Year of        Year after disablement
disablement    1         2        3        4        5        6        7        8        9        10
2009           105,000   89,250   75,863   64,483   58,035   52,231   47,008   44,658   42,425   40,304
2010           104,475   88,804   75,483   64,161   57,745   51,970   46,773   44,434   42,213   40,102
2011           103,953   88,360   75,106   63,840   57,456   51,710   46,539   44,212   42,002   39,902
2012           103,433   87,918   74,730   63,521   57,169   51,452   46,307   43,991   41,792   39,702
2013           102,916   87,478   74,357   63,203   56,883   51,195   46,075   43,771   41,583   39,504
2014           102,401   87,041   73,985   62,887   56,598   50,939   45,845   43,552   41,375   39,306
A calendar view of the progression of this group can be presented as follows:

Calendar year    Members    Premium    Incidences    Terminations    Disabled lives    Claims
2009             1,000      720,000    5.00          -               5.00              105,000
2010             995        716,400    4.98          0.75            9.23              193,725
2011             990        712,818    4.95          1.38            12.79             268,619
2012             985        709,254    4.93          1.92            15.80             331,759
2013             980        705,708    4.90          2.22            18.48             388,135
2014             975        702,179    4.88          2.48            20.88             438,426
In this case, the claim liability for valuation year 2014 is simply the sum of all expected future payments in the claim triangle. The total of all future payments amounts to 1,943,739 CU.

We assume that the claim payment is constant once a life enters disablement, so the variability of the cash flows is mainly due to the timing of termination for the existing disabled lives. Also assuming that each disablement year is independent of the others, the remaining terminations can be modelled through a Monte Carlo simulation to understand the length of future payments for the remaining lives associated with each disablement year.
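The following minimal Python sketch illustrates the Monte Carlo idea; it simulates whole open claims against the termination-rate schedule, which is a simplification, so it will not exactly reproduce the representative figures in the table that follows.

```python
import numpy as np

# Monte Carlo simulation of future payment years for the open claims of one
# disablement-year cohort. The use of whole claims is a simplifying assumption.
rng = np.random.default_rng(seed=2014)
term_rate = [0.15, 0.15, 0.15, 0.10, 0.10, 0.10, 0.05, 0.05, 0.05, 1.00]
ANNUAL_CLAIM = 21_000   # CU paid at the end of each year a claim remains open

def future_payment_years(open_claims, completed_years, n_sims=10_000):
    """Total future payment years for claims already paid for `completed_years` years."""
    totals = np.zeros(n_sims)
    for s in range(n_sims):
        years = 0
        for _ in range(open_claims):
            k = completed_years + 1
            while k <= 10:
                # Claim survives from year k-1 into year k with probability 1 - term_rate[k-2]
                if rng.random() < term_rate[k - 2]:
                    break
                years += 1
                k += 1
        totals[s] = years
    return totals

# Example: roughly five open claims from the 2014 disablement year, each having
# completed one payment year at the valuation date.
sims = future_payment_years(open_claims=5, completed_years=1)
print(sims.mean(), np.percentile(sims, 90), np.percentile(sims, 90) * ANNUAL_CLAIM)
```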
For illustrative purposes, the following table contains representative results as would be obtained from the
simulation analysis:
Year of        Expected payment years    Standard deviation    90% confidence    Future
disablement    for remaining lives       (payment years)       level             payments (CU)
2009           8.30                      0.21                  8.65              181,566
2010           10.74                     0.38                  11.36             238,474
2011           13.42                     0.60                  14.41             302,681
2012           16.38                     0.90                  17.86             375,047
2013           19.84                     1.29                  21.96             461,107
2014           23.88                     1.79                  26.83             563,398
Total          92.56                                           101.06            2,122,274
The 8.30 for disablement year 2009 can be derived from the disabled lives triangle above as the sum of 2.24, 2.13, 2.02, and 1.92 for years 7–10; that is, the expected number of future payment years for the remaining lives associated with disablement year 2009 is 8.30, each payment averaging 21,000 CU per member per year. The standard deviation for disablement year 2009, from the simulation analysis, is 0.21 payment years. Therefore, assuming a normal distribution, the 90% confidence level corresponds to 8.65 payment years, which translates to total future payments of 181,566 CU. Aggregating all disablement years, the total future payments amount to 2,122,274 CU. This means that, to ensure the liability is sufficient 90% of the time, the liability needs to be 9.2% higher than the expected future claim payments. This 9.2% additional liability, the risk adjustment, amounts to 178,536 CU.
Section 10.5 Risk adjustment for a block of participating contracts
10.5.1 Learning Objectives
This case study illustrates the development of the risk adjustment for a block of with-profits (participating) life insurance policies. The key learning objective is to understand how to approach the risk adjustment for insurance products that have a discretionary participation feature.
10.5.2 Product Description
A with-profits policy (Commonwealth) or participating policy (U.S.) is an insurance contract that participates in the profits of a life insurance company. The insurance company aims to distribute part of its profit to the with-profits policyholders in the form of a bonus or dividend. The bonus rate is decided after considering a variety of factors, such as the return on the underlying assets, the level of bonuses declared in previous years and other actuarial assumptions (especially future liabilities and anticipated investment returns), as well as marketing considerations. For illustration purposes, we have made simplified assumptions for this case study. The with-profits product in this case offers 90% profit sharing on the increase in value of the underlying pool of assets that supports this block of business, and a guaranteed interest rate of 3%.
10.5.3 Modelled Assumptions
For illustration purposes, simplified assumptions are utilized in this case. It is assumed that the asset pool consists of available-for-sale bonds (with a duration of 5 years) and earns 5% interest each year. Mortality and lapse decrements are not modelled here for simplicity, and capital considerations are not taken into account either, which means that scenarios requiring capital injections cannot be analysed.

An initial premium of 1,000 CU is paid. The policy is assumed to mature after 5 years.
10.5.4 Risk Adjustment Calculation
To be developed using the variable fee approach.
Section 10.6 Risk adjustment for auto liability (motor insurance) product
using Wang Transform
10.6.1 Learning objectives
This case study illustrates the development of risk adjustment using the Wang Transform approach for a
commercial auto liability (motor liability) product.
10.6.2 Case description
In this case study, the risk adjustment is estimated for an entity’s liabilities for unpaid claims (unpaid losses)
at the end of the annual financial reporting period (e.g., December 31). The entity prepares detailed
schedules and exhibits for this line of business, for each individual accident year where there are unpaid
claims.
The actuarial approach used in this case study was to develop risk probability distributions for unpaid losses
based on an analysis of historical estimates of ultimate losses. The uncertainty exhibited in the past
actuarial estimates, specifically for unpaid losses, was incorporated into the analysis. The approach used
to develop a probability distribution for the unpaid losses was selected independently of the method used to
determine the unpaid claim estimates from each point in time.
10.6.3 Application of Wang Transform technique
In order to estimate risk adjustments associated with the entity’s unpaid claim liabilities for its commercial
auto line of business as of December 31, the following process was performed:
1. Historical data for this line of business was used to calibrate the entity's risk preference parameter (λ), the parameter within the Wang Transform that calibrates the risk measure to the entity's compensation for bearing risk. Because the data used for this calibration had a cash flow duration different from the cash flow duration of the unpaid claims liabilities, the risk preference parameter (λ₁) was the parameter value adjusted to a duration of one year.
2. A cash flow pattern for the pay-out of the unpaid claims liabilities was developed to estimate future
cash flows for the entity based on the entity’s distribution of unpaid claims according to the maturity
of the unpaid claim estimates for each accident year.
3. The entity's historical pattern of changes in estimated ultimate losses from year to year was used to estimate the parameters of the risk probability distribution of the entity's portfolio of unpaid claims by accident year as of December 31, by applying the Klugman-Rehmann method.39
4. The cash flow duration (D) of the entity’s portfolio of unpaid claims as of December 31 was
determined.
5. The present value factors by accident year as of December 31 for application to the entity’s unpaid
claims estimates were determined based on the cash flow pattern and a risk-free yield curve.
6. Using a lognormal probability risk distribution with parameters μ and σ, ultimate losses were
simulated for the run-off of each past accident year. The accident year unpaid losses for each
simulation were calculated by subtracting the known paid losses for each accident year as of
December 31.
7. From the 500 simulations of the entity’s value of unpaid claims for each accident year, the results
for each simulation were compiled and totalled for the unpaid losses for all accident years.
8. The sample mean (𝜇̂ ) and sample standard deviation (𝜎̂) from the 500 simulations were computed
for the logarithm of the total (all accident years) of the simulated unpaid losses.
9. Using the simulated sample mean (μ̂) and sample standard deviation (σ̂) for the entity from step 8, the expected mean value of the unpaid losses was computed using the formula for the mean of the lognormal distribution, exp(μ̂ + ½·σ̂²).
10. As in step 9, the μ̂ and σ̂ from step 8 were used to compute the risk-adjusted expected value of the unpaid losses using the entity's risk preference parameter (λ₁), the duration (D) of the unpaid losses and the formula for the mean of the lognormal distribution after application of the Wang Transform, exp(μ̂ + ½·σ̂² + λ₁·σ̂·√D).
11. The risk adjustment applicable to the entity's liabilities was then computed as the difference between the probability-weighted expected value from the transformed probability distribution from step (10) and the unbiased probability-weighted expected value from the original probability distribution of the unpaid losses from step (9).
39
Rehmann, Zia and Stuart Klugman. "Quantifying Uncertainty in Reserve Estimates".
The numerical results of each of the above steps are shown in the table below:
                                                                              Simulated unpaid claims
                                                                              CU (millions)
(1)   25th percentile                                                         196,974
(2)   50th percentile                                                         206,468
(3)   75th percentile                                                         216,956
(4)   Average                                                                 206,773
(5)   Standard deviation                                                      14,753
(6)   Simulated sample μ = average[log(simulated unpaid claims)]              12.237
(7)   Simulated sample σ = standard deviation[log(simulated unpaid claims)]   0.071
(8)   Expected value of unpaid claims = exp(μ + ½·σ²)                         206,870
(9)   Risk preference parameter for risk compensation (λ₁)* – 1-year duration 0.671
(10)  Duration of unpaid claims (D)                                           1.807
(11)  Risk-adjusted expected value of unpaid claims
      = exp(μ + ½·σ² + λ₁·σ·√D)                                               220,460
(12)  Risk adjustment = (11) – (8)                                            13,590
* Lambda (λ₁), the risk preference parameter used for the risk compensation (one-year duration of risk), is calculated as:

λ₁ = [ln(1 - ER) - ln(1 + ULAE) - ln(PV) - combined μ - ½ · combined σ²] / [combined σ · √D]

where:

1 - ER = 69.5% (100% - expense ratio)
1 + ULAE = 1.106 (ULAE factor = loss and LAE ratio / loss and ALAE ratio)
PV = 0.973 (present value factor based on a risk-free/neutral yield curve and the expected payout pattern)
μ = -0.385% (sample mean of the development of estimated ultimate losses)
σ² = 0.656% (variance of the development of estimated ultimate losses)
σ = 8.099% (standard deviation of the development of estimated ultimate losses = square root of σ²)
D = 2.466 (duration)
μ_AY ULR = -54.1% (sample mean of the logarithm of the accident year ultimate loss ratio)
combined μ = -54.5% = μ + μ_AY ULR
σ²_AY ULR = 0.261% (sample variance of the logarithm of the accident year ultimate loss ratio)
σ²_12-ult = 0.577% (sample variance of the logarithm of the developed accident year ultimate losses)
Cov(AY ULR, 12-ult) = 0.061% (covariance of the accident year loss ratio and the development)
combined σ² = 0.960% = σ²_AY ULR + σ²_12-ult + 2 × Cov(AY ULR, 12-ult)
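As a rough cross-check, items (8), (11) and (12) can be recomputed directly from the rounded figures shown above; a minimal Python sketch follows, with the small differences from the published values attributable to that rounding.

```python
import numpy as np

# Recompute items (8), (11) and (12) from the rounded inputs shown in the table above.
mu_hat, sigma_hat = 12.237, 0.071      # simulated sample mean and sd of log unpaid claims
lam_1, duration = 0.671, 1.807         # one-year risk preference parameter and duration D

expected_unpaid = np.exp(mu_hat + 0.5 * sigma_hat ** 2)                       # item (8)
risk_adjusted = np.exp(mu_hat + 0.5 * sigma_hat ** 2
                       + lam_1 * sigma_hat * np.sqrt(duration))               # item (11)
print(round(expected_unpaid), round(risk_adjusted), round(risk_adjusted - expected_unpaid))
# approx. 206,800 / 220,480 / 13,680 versus the 206,870 / 220,460 / 13,590 shown above
```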
Section 10.7 Risk adjustment for auto liability product using cost-of-capital
approach
10.7.1 Learning objectives
This case study illustrates the development of risk adjustment using the cost-of-capital approach in a non-life context.
10.7.2 Case description and assumptions
The assumptions used to derive the risk adjustment for a non-life insurance product, motor third-party
liability, are as follows:
Initial current estimate of liabilities         100
Cost of capital (rate of return)                6.0%
Initial capital (% of liabilities)              39%
Annual increase to capital %                    10%

Discounted unpaid % (using 4% discount rate):
End of period    1      2      3      4      5
                 58%    27%    6%     2%     0%
Note that in this case study the entity determines its compensation for bearing risk by assigning a capital requirement as a percentage of the discounted estimate of the unpaid losses, i.e., the unbiased probability-weighted expected value (mean). The percentage selected by the entity is higher for the later portions of the liabilities, i.e., it is assumed to increase as a percentage of the remaining liabilities as the liabilities mature.
10.7.3 Application of cost-of-capital technique
As described in section 10.1:

Risk Adjustment = PV( Σ_{t=1}^{n} r_c × C_t )

where C_t is the capital amount at time t and r_c is the assumed cost-of-capital rate.
Determining cost of capital
The initial capital in this example is based on an assigned amount of capital such that the sum of the capital
and the unbiased probability-weighted expected value (mean) estimate of the liabilities is equal to the
estimate of the unpaid losses at the 99.5% confidence level as of December 31. This is a percentile approach to determining the entity's selected amount of capital assigned to support the insurance liabilities: the discounted value of the current estimate, plus the assigned capital, is sufficient to cover outcomes of the liabilities up to the 99.5% aggregate probability level. The capital can be determined using the probability distribution of the liabilities. This is sometimes referred to as the value-at-risk (VaR) approach for capital. Similar approaches include the tail value at risk or CTE, which give additional consideration in the capital measurement to the impact of extreme scenarios. A 99.5% confidence level for determining the capital is used as an example to illustrate the cost-of-capital methodology.
Determining how capital is released over time
To the extent the liabilities are paid out as expected, and capital is not required to absorb increases in the
estimate of the liabilities, the capital supporting the liabilities may be reduced. The reduction of capital
should theoretically mirror the reduction in aggregate risk of the remaining liabilities. Thus, as claims are
paid and the remaining liabilities are reduced over time, so too should the capital supporting those remaining
liabilities be reduced (to the extent capital remains available given the current estimate). However, it is
generally the case that the capital reduction is not directly proportional to the liability reduction. This is due
to the fact that the relative risk of the remaining liabilities at different points in time can (and will) vary.
The assumption in this case study is that the relative risk of the remaining liabilities is likely to increase over
time; this is the reason for the assumption in the example of a 10% increase in the capital requirement as
a percentage of remaining liabilities in each successive period of runoff. This assumption is based on the
premise that relatively straightforward claims are settled in early periods and the remaining unpaid claims
are more complex and the uncertainty in the value of those unpaid claims increases, albeit on a decreasing
amount of the remaining liabilities. The 10% increase in the capital requirement was shown only to illustrate
its impact on risk adjustments, given the other assumptions made for the case study.
Determining the cost-of-capital rate
The cost-of-capital rate should be inversely related to the capital requirement. This is consistent with the premise that there is a unique market risk adjustment given the risk profile of the liabilities, i.e., the probability distribution of the ultimate value of those liabilities. Thus, the two key determinants of the appropriate risk adjustment in a cost-of-capital approach, namely the amount of capital needed and the rate of return on that capital required by market participants, should together yield one answer. That one answer, the risk adjustment based on the cost of capital, represents the discounted present value of the product of the capital required and the rate of return on that capital. Examples of the cost of capital might include 6% at the 99.5th percentile or 4% at the 99.95th percentile, where the rate-of-return-on-capital assumption decreases as the percentile assumption increases.
Calculations for illustration under consideration
With the above assumptions in place, the risk adjustment at the beginning of the period can be illustrated
using this cost-of-capital methodology as follows:
Risk Adjustment Using Cost of Capital Methodology – Time 0

Runoff    Current liability    Capital (% of          Capital    Cost of    Discounted cost
period    estimate             liability estimate)               capital    of capital
0         100                  39%                    39         2.3        2.3
1         58                   43%                    25         1.5        1.4
2         27                   47%                    13         0.8        0.7
3         6                    52%                    3          0.2        0.2
4         2                    57%                    1          0.1        0.1
5         0                    63%                    0          0.0        0.0

Total (risk adjustment, time 0)                                             4.5
% of current estimate                                                       4.5%
Note that the cost-of-capital rate of return is selected to be 6.0% of the required capital in each period and
the resulting cost-of-capital amount at the end of each period is discounted to time 0 at the same 6.0% rate
to determine the discounted amount of the cost of capital, which is the value used as the risk adjustment in
this case study.
For illustrative purposes, the table below displays the computation of the risk adjustment at the end of the
first period given no changes to the assumptions shown above. In practice, companies will update
assumptions at the end of each period based on information available at that time, and therefore those
assumptions would not be “locked-in” at the outset.
Risk Adjustment Using Cost of Capital Methodology – Time 1

Period    Current liability    Capital %    Capital    Cost of    Discounted cost
          estimate                                     capital    of capital
1         58                   43%          25         1.5        1.5
2         27                   47%          13         0.8        0.7
3         6                    52%          3          0.2        0.2
4         2                    57%          1          0.1        0.1
5         0                    63%          0          0.0        0.0

Total (risk adjustment, time 1)                                   2.4
% of current estimate                                             4.1%
The figures at time 1 are identical to those for time 0 for periods 1–5, with one exception: the discounted amount of the cost of capital. This is because at time 1 the cost-of-capital amount is discounted back to the beginning of time 1 rather than to time 0. The resulting margin of 2.4 is stated as a percentage of the current estimate at the beginning of time 1 (58 in this example), resulting in the risk adjustment of 4.1% shown above.
This process will be repeated in each successive period. A summary of the indicated risk adjustments using
this cost-of-capital approach, and holding all assumptions constant, is shown below:
Cost of Capital Risk Adjustments – Motor Liability

Period since      Liability    Capital %    Capital    Cost of    Risk          Risk adjustment
reporting date                                         capital    adjustment    as % of liability
0                 100          39.1%        39.1       2.3        4.5           4.5%
1                 58           43.0%        25.0       1.5        2.4           4.1%
2                 27           47.3%        12.8       0.8        1.0           3.6%
3                 6            52.1%        3.1        0.2        0.2           4.1%
4                 2            57.3%        1.1        0.1        0.1           3.3%
5                 0            63.0%        0.0        0.0        0.0           0.0%
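A minimal Python sketch of the runoff calculation is shown below. Discounting the period-end cost-of-capital amounts at the 4% rate used for the discounted liability estimates is an assumption of the sketch, chosen so that the rounded totals shown above are reproduced.

```python
# Cost-of-capital runoff for the motor liability example. Discounting the period-end
# cost-of-capital amounts at the 4% liability discount rate is an assumption of this
# sketch chosen to reproduce the rounded totals in the tables above.
liabilities = [100, 58, 27, 6, 2, 0]              # discounted unpaid estimate at each period
coc_rate, capital_growth, disc_rate = 0.06, 0.10, 0.04
initial_capital_pct = 0.391

capital_pct = [initial_capital_pct * (1 + capital_growth) ** t for t in range(len(liabilities))]
capital = [pct * liab for pct, liab in zip(capital_pct, liabilities)]
cost_of_capital = [coc_rate * c for c in capital]

def risk_adjustment(valuation_period):
    """Present value at the valuation period of all remaining period-end cost-of-capital amounts."""
    return sum(cc / (1 + disc_rate) ** (t - valuation_period + 1)
               for t, cc in enumerate(cost_of_capital) if t >= valuation_period)

for tau, liab in enumerate(liabilities):
    ra = risk_adjustment(tau)
    pct = ra / liab if liab else 0.0
    print(tau, round(ra, 1), f"{pct:.1%}")        # e.g., 4.5 (4.5%) at time 0, 2.4 (4.1%) at time 1
```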
Section 10.8 Aggregation of risk adjustment using copula
10.8.1 Learning objectives
This case study illustrates how a copula can be utilized in the aggregation and disclosure of the risk adjustment at the entity level when the entity has multiple lines of business, each of which performs its own risk adjustment calculation because of its distinct risk profile and nature of business.
10.8.2 Case description
An entity has two lines of business, X and Y. Each performs its own calculation of the risk adjustment and has a probability distribution of liability values. The entity's experience has shown that large losses tend to occur together for both lines, while the correlation between the two lines is weaker for small losses. The entity decides to utilize a copula to model the aggregate risk adjustment, specifically a Gumbel copula, a member of the Archimedean class with a fat upper tail, in order to capture the higher correlation for heavier losses. The Gumbel copula is mathematically expressed as follows:
C(u, v) = exp{ -[ (-ln u)^θ + (-ln v)^θ ]^(1/θ) }
where the parameter theta controls the behaviour of the copula, including the correlation between losses.
The entity has reviewed its past experience and decided to use a theta of 3 to best represent the
dependency between the two lines.
10.8.3 Aggregation of risk adjustment
Copula pairs will first need to be generated to utilize the distribution of liability values from the two business
lines. The following steps are followed in generating the copula pairs (u, v):
1. Generate two independent uniform variates, u' and v'.
2. Calculate S = ln(-ln u') - ln(θ - 1) - ln(u'·v') / (θ - 1).
3. Solve z + ln z = S for z, and calculate v = exp{ -[ ((θ - 1)·z)^θ - (-ln u')^θ ]^(1/θ) }.
4. Then u = u', and a desired copula pair (u, v) is generated.
5. Repeat steps 1–4 to generate more pairs.
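The following Python sketch is a minimal implementation of steps 1–5, with the marginal liability distributions from section 10.8.4 applied to the generated pairs; the parameter values are those stated in the case study, and the simulated results will vary with the random seed.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Gumbel copula simulation following steps 1-5 above, then aggregation of two
# normally distributed lines (parameters as stated in section 10.8.4).
rng = np.random.default_rng(seed=42)
theta = 3.0                                       # dependence parameter used in the case study

def gumbel_pair(theta):
    """Generate one (u, v) pair from a Gumbel copula via the conditional method."""
    u_p, v_p = rng.uniform(), rng.uniform()       # step 1: independent uniforms u', v'
    s = (np.log(-np.log(u_p)) - np.log(theta - 1.0)
         - np.log(u_p * v_p) / (theta - 1.0))     # step 2
    z = brentq(lambda z: z + np.log(z) - s, 1e-12, 1e6)   # step 3: solve z + ln z = S
    v = np.exp(-(((theta - 1.0) * z) ** theta
                 - (-np.log(u_p)) ** theta) ** (1.0 / theta))
    return u_p, v                                 # step 4: u = u'

pairs = np.array([gumbel_pair(theta) for _ in range(1000)])   # step 5: repeat
liab_x = norm.ppf(pairs[:, 0], loc=5_000, scale=400)          # line X liabilities
liab_y = norm.ppf(pairs[:, 1], loc=6_500, scale=500)          # line Y liabilities
total = liab_x + liab_y

risk_adjustment = np.percentile(total, 99) - total.mean()     # 99% confidence level
print(round(risk_adjustment))
```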
With the copula pairs and the distributions of liability values from the two business lines, it is possible to generate, for each copula pair, a simulated aggregate liability value and hence the distribution of the aggregate liability for the entity.

Following this approach, the 90% confidence level for the aggregate liability distribution captures the tail dependency of the two business lines. In comparison, a typical correlation matrix calculation using a constant correlation to represent dependency may understate the aggregate risk adjustment.
10.8.4 Illustration of an example
Assume that the liability value for each of the two lines of business follows a normal distribution. Line X has an expected liability value of 5,000 CU and a standard deviation of 400 CU. Line Y has an expected liability value of 6,500 CU and a standard deviation of 500 CU. At the 99% confidence level, the risk adjustment for line X is 931 CU, and the risk adjustment for line Y is 1,163 CU.
The table below shows the generation of copula pairs following the steps described above. Copula pairs (u, v) are generated for 1,000 simulation runs, of which only 20 are displayed below for illustration purposes.
Simulation    u'        v'        S          Z         u         v
1             0.6164    0.5058    -0.8362    0.3160    0.6164    0.5957
2             0.6895    0.7918    -1.3801    0.2049    0.6895    0.7714
3             0.1610    0.9792    0.8330     0.9183    0.1610    0.6272
4             0.4583    0.6656    -0.3476    0.4503    0.4583    0.5302
5             0.0045    0.8905    3.7569     2.7584    0.0045    0.1183
6             0.8865    0.4339    -2.3319    0.0889    0.8865    0.8548
7             0.8491    0.2233    -1.6720    0.1601    0.8491    0.7369
8             0.0449    0.4577    2.3820     1.7962    0.0449    0.0785
9             0.1691    0.8294    0.8644     0.9334    0.1691    0.3823
10            0.0756    0.5972    1.8044     1.4421    0.0756    0.1507
11            0.8912    0.2728    -2.1471    0.1052    0.8912    0.8203
12            0.0783    0.4549    1.9097     1.5031    0.0783    0.1110
13            0.7863    0.3218    -1.4314    0.1964    0.7863    0.6976
14            0.6061    0.7412    -0.9847    0.2818    0.6061    0.6860
15            0.9317    0.2828    -2.6747    0.0646    0.9317    0.8854
16            0.9393    0.3997    -2.9739    0.0487    0.9393    0.9159
17            0.9421    0.9121    -3.4376    0.0312    0.9421    0.9693
18            0.4167    0.9848    -0.3811    0.4400    0.4167    0.8012
19            0.4322    0.1219    0.6031     0.8117    0.4322    0.2133
20            0.4809    0.5782    -0.3649    0.4450    0.4809    0.5074
Then, the liability values corresponding to the copula pairs can be generated based on the assumed distributions for lines X and Y, which then give rise to the distribution of the aggregate liability. The 99th percentile of the aggregate distribution is 11,888 CU and the mean value is 10,004 CU, which translates to a risk adjustment of 1,884 CU at the 99% confidence level.
Simulation    u         v         Liability(X)    Liability(Y)    Total liability
1             0.6164    0.5957    5,118           5,097           10,215
2             0.6895    0.7714    5,198           5,297           10,495
3             0.1610    0.6272    4,604           5,130           9,734
4             0.4583    0.5302    4,958           5,030           9,988
5             0.0045    0.1183    3,955           4,527           8,481
6             0.8865    0.8548    5,483           5,423           10,906
7             0.8491    0.7369    5,413           5,254           10,667
8             0.0449    0.0785    4,321           4,434           8,755
9             0.1691    0.3823    4,617           4,880           9,497
10            0.0756    0.1507    4,426           4,587           9,013
11            0.8912    0.8203    5,493           5,367           10,860
12            0.0783    0.1110    4,433           4,511           8,945
13            0.7863    0.6976    5,317           5,207           10,524
14            0.6061    0.6860    5,108           5,194           10,301
15            0.9317    0.8854    5,595           5,481           11,076
16            0.9393    0.9159    5,620           5,551           11,171
17            0.9421    0.9693    5,629           5,748           11,377
18            0.4167    0.8012    4,916           5,338           10,254
19            0.4322    0.2133    4,932           4,682           9,614
20            0.4809    0.5074    4,981           5,007           9,988
Chapter 11 – Bibliography
Allaben, Mark, Christopher Diamantoukos, Arnold Dicke, et al. The Principles Underlying Actuarial Science.
Paper presented at the International Congress of Actuaries, Cape Town, South Africa. 2008.
American Academy of Actuaries’ Consistency: Principles, Summary, Definitions & Report Format Work
Group. “Principles-Based Approach Definitions”. Presented to the National Association of Insurance
Commissioners’ Life and Health Actuarial Task Force. St. Louis, MO. 2006.
Australian Accounting Standards Board. AASB 1023 – General Insurance Contracts. 2010.
Blum, Kathleen A., and David J. Otto. "Best Estimate Loss Reserving: An Actuarial Perspective". Casualty
Actuarial Society Forum (CAS Forum). Fall. Vol. 1: 55. 1998.
Brewster, Rachel, and Sam Gutterman. The Volatility in Long-Term Care Insurance. Research report
supported by the Society of Actuaries' Long-Term Care Insurance Section. 2014.
Cairns, Andrew. “A Discussion of Parameter and Model Uncertainty in Insurance: Mathematics and
Economics”. Insurance: Mathematics and Economics: 27. 2000.
Casualty Actuarial Society Research Working Party. The Report of the Research Working Party on
Correlations and Dependencies Among All Risk Sources. 2006.
Dacorogna, Michel M., and Davide Canestrato. The Influence of Risk Measures and Tail Dependencies on
Capital Allocation. SCOR Publications. 2010.
Embrechts, Paul, Alexander McNeil, and Daniel Straumann. Correlation and Dependency in Risk
Management: Properties and Pitfalls. 1998.
Gutterman, Sam. The Valuation of Future Cash Flows: An Actuarial Issues Paper. Springer. 1999.
Halpern, Joseph Y. Using Sets of Probability Measures to Represent Uncertainty. 2008.
Bui, Hao, and Briallen Cummings. Risk Margin for Life Insurers. Presented to the Institute of Actuaries of Australia 4th Financial Services Forum, 19–20 May 2008.
Hardy, Mary R. An Introduction to Risk Measures for Actuarial Applications. Society of Actuaries. 2006.
Hogg, Robert V., and Stuart A. Klugman. Loss Distributions. Wiley-Blackwell. 1984.
Institute of Actuaries of Australia Risk Adjustments Taskforce. A Framework for Assessing Risk
Adjustments. 2008.
International Accounting Standards Board. Conceptual Framework for Financial Reporting 2010. 2013.
International Actuarial Association. Discount Rates in Financial Reporting: A Practical Guide. 2013.
———. Measurement of Liabilities for Insurance Contracts: Current Estimates and Risk Adjustments. 2009.
———. Stochastic Modeling – Theory and Reality from an Actuarial Perspective. 2010.
Johnson, Norman L., Samuel Kotz, and N. Balakrishnan. Continuous Univariate Distributions, vols. 1 and
2. Wiley. 1994.
Jorion, Philippe. Value at Risk: The New Benchmark for Managing Financial Risk. Third edition. McGraw-Hill Professional. 2006.
Joseph, A. W. “The Whittaker-Henderson Method of Graduation”. Journal of the Institute of Actuaries. Vol.
78: 1. 1952.
Knight, Frank H. Risk, Uncertainty, and Profit. Hart, Schaffner & Marx. 1921.
Levine, Damon. “Modeling Tail Behavior with Extreme Value Theory”. Risk Management: 17. 2009.
Mari, Dominique, and Samuel Kotz. Correlation and Dependence. Imperial College Press. 2001.
McNeil, Alexander J., Rüdiger Frey, and Paul Embrechts. Quantitative Risk Management: Concepts,
Techniques, and Tools. Princeton University Press. 2005.
Miccolis, Robert, and David Heppen. A Practical Approach to Risk Margins in the Measurement of
Insurance Liabilities for Property and Casualty (General Insurance) under Developing International
Financial Reporting Standards. Paper presented at the International Congress of Actuaries, Cape Town,
South Africa. 2010.
Milholland, Jim. “The Risk Adjustment – Accounting Perspective”. The Financial Reporter: 88. 2012.
Milliman, Inc. Aggregation of risks and Allocation of capital. 2009.
———. Economic Capital Modeling: Practical Considerations. 2006.
Nelsen, R. An Introduction to Copulas. Springer Verlag. 1999.
England, P.D., and R.J. Verrall. "Stochastic Claims Reserving in General Insurance". Presented to the Institute of Actuaries, 28 January 2002.
Rehmann, Zia, and Stuart Klugman. "Quantifying Uncertainty in Reserve Estimates". Variance. Vol. 4: 1.
2010.
Shapland, Mark R. "Loss Reserve Estimates: A Statistical Approach for Determining ‘Reasonableness’".
Variance Vol. 1:1. 2007.
Shaw, R. A., A. D. Smith, and G. S. Spivak. Measurement and Modeling of Dependencies in Economic
Capital. 2010.
Stein, Richard, and Michael Stein. “Sources of Bias and Inaccuracy in the Development of a Best Estimate”.
CAS Forum. Summer 1998. 1998.
Sweeting, Paul. Financial Enterprise Risk Management. Cambridge University Press. 2011.
Swiss Federal Office of Private Insurance. White Paper of the Swiss Solvency Test. 2004.
Marshall, Karl, Scott Collings, Matt Hodson, and Conor O'Dowd (The Risk Margins Taskforce). A Framework for Assessing Risk Margins. Presented to the Institute of Actuaries of Australia 16th General Insurance Seminar, 9–12 November 2008.
Vozian, Ecaterina. “Value-at-Risk: Evolution, Deficiencies, and Alternatives”. Risk Professional. 2010.
Wang, Shaun S. “A Universal Framework for Pricing Financial and Insurance Risks”. ASTIN Bulletin. 2002.
Young, Virginia R. “Premium principles”. Encyclopedia of Actuarial Science. John Wiley & Sons. 2004.