Book Review
IFRS 9 and CECL Credit Risk Modelling and Validation
A Practical Guide with Examples Worked in R and SAS
First Edition, 2019
Author - Tiziano Bellini
Academic Press, Elsevier, London
ISBN: 978-0-12-814940-9
The Allowance for Loan and Lease Losses (ALLL), an incurred loss model based on historical loss data, could not provide financial institutions a robust framework for recognizing credit losses in times of economic shock, as the 2007-08 financial crisis showed. This prompted the Financial Accounting Standards Board (FASB) to come up with an alternative framework, the Current Expected Credit Loss (CECL) model, which accounts for expected credit losses over the estimated life of loans. US financial institutions have adopted CECL from January 1, 2020.
A new standard, International Financial Reporting Standard 9 (IFRS 9), was issued by the International Accounting Standards Board (IASB) in 2014 to calculate expected credit loss (ECL). Implementation went live in 2018, and the standard was adopted by European banks.
IFRS 9 relies on a three-bucket classification where one-year or lifetime expected credit
losses are computed, while CECL follows a lifetime perspective as a rule. Both IFRS 9 and
CECL accounting standards require banks to adopt a new perspective in assessing Expected
Credit Losses.
In general, as per the new standards, ECL is calculated as:

ECL = Σ (PD × LGD × EAD × DF)

where:
- Expected Credit Loss (ECL): (i) lifetime ECL is the present value of all cash shortfalls expected over the remaining life of the instrument; (ii) 12-month ECL is the portion of lifetime ECL associated with default within 12 months.
- Probability of Default (PD): (i) a point-in-time PD is to be calculated; (ii) the PD is to be extrapolated over the remaining expected lifetime of the asset; (iii) reasonable and supportable forward-looking information must be considered.
- Loss Given Default (LGD): (i) a point-in-time LGD is to be calculated (not a downturn LGD); (ii) only costs directly attributable to the collection of recoveries are included; (iii) reasonable and supportable forward-looking information must be considered.
- Exposure at Default (EAD): (i) cash flows through the lifetime of the asset; (ii) all contractual terms over the lifetime are considered.
- Discount Factor (DF): calculated using the current market rate or the Effective Interest Rate method.
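To make the formula concrete, here is a minimal R sketch (with invented figures, not taken from the book) that computes lifetime and 12-month ECL for a single amortizing loan:

```r
# Illustrative ECL for one loan; all inputs are hypothetical.
pd  <- c(0.020, 0.025, 0.030)   # marginal PD in years 1-3
lgd <- c(0.45, 0.45, 0.45)      # point-in-time LGD per year
ead <- c(100000, 80000, 60000)  # exposure at default per year
eir <- 0.05                     # effective interest rate for discounting
df  <- 1 / (1 + eir)^(1:3)      # discount factors

lifetime_ecl <- sum(pd * lgd * ead * df)          # ~ 2373
ecl_12m      <- pd[1] * lgd[1] * ead[1] * df[1]   # ~ 857
```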
This review aims to study the approach taken by the author to conform to the new standards and the modelling techniques explained in the book.
This book explores a wide range of models and corresponding validation procedures to
calculate the ECL and is divided into six chapters. It attempts to provide a comprehensive
guide to modelling and validating ECL under both frameworks. By including examples, problems, and software solutions (in both R and SAS), it also aims to appeal to students and academics.
The author describes both the IFRS 9 and CECL standards and their mechanics, enabling readers to appreciate the changes the standards bring, and ends with a comparison of the two. The book leverages their major similarities and tries to simplify a measurement methodology that financial institutions can apply to meet both standards' requirements.
While pointing out the standards' similarities, the move from incurred loss to expected loss and their non-prescriptive nature, the author spells out the fundamental difference: IFRS 9 requires a staging allocation process (Stages 1, 2 and 3), while CECL focuses on the lifetime loss perspective without making distinctions among credit pools.
As institutions are free to decide on the methodology, the book attempts to provide a comprehensive calculation approach that minimizes the effort of calculating ECL under both standards.
Probability of Default – Since neither accounting standard defines default, the book compares the standards' flexibility on the definition of default with that of Basel II. It explains the quantitative and qualitative indicators of default, which are the basis for preparing data for modelling.
The book describes ways to build the dataset: how to include historical data, an account-level panel structure, behavioural characteristics of assets, and macroeconomic variables. Though it cites certain behavioural variables of the asset, it is not explicit about which macroeconomic variables should be considered while building the dataset.
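As a rough illustration of such a panel (the variable names below are hypothetical, not the book's), each row is one account in one reporting period, combining behavioural and macroeconomic fields:

```r
# Hypothetical account-level panel: one row per account per period.
panel <- data.frame(
  account_id    = c(1, 1, 1, 2, 2),
  period        = as.Date(c("2019-03-31", "2019-06-30", "2019-09-30",
                            "2019-03-31", "2019-06-30")),
  balance       = c(100000, 98000, 96500, 50000, 49000),  # behavioural
  days_past_due = c(0, 0, 30, 0, 0),                      # behavioural
  gdp_growth    = c(0.4, 0.3, 0.2, 0.4, 0.3),             # macroeconomic
  unemployment  = c(4.1, 4.2, 4.4, 4.1, 4.2),             # macroeconomic
  default_flag  = c(0, 0, 0, 0, 1)                        # response
)
```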
PD calculation is divided into one-year and lifetime horizons. This simplifies the approach to calculating PD and would not change the existing procedures adopted by financial institutions. The point-in-time PD (PIT PD) calculation focuses on the existing practice financial institutions follow, and thus the forward-looking perspective is not yet included.
The PIT PD model is based on Generalized Linear Models (GLMs), which are in widespread use among banks. The book then explores CART, bagging, random forests, and boosting to develop alternative models in a machine learning environment.
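As a flavour of these techniques (a minimal sketch on simulated data, not the book's own code), a logistic GLM yields a PIT PD per account, with a classification tree as one machine-learning alternative:

```r
library(rpart)  # CART

# Simulated account-level data standing in for a real default dataset.
set.seed(7)
accounts <- data.frame(
  default_flag  = rbinom(500, 1, 0.05),
  days_past_due = rpois(500, 2),
  utilization   = runif(500)
)

# PIT PD via logistic GLM, the workhorse approach among banks.
pd_fit <- glm(default_flag ~ days_past_due + utilization,
              family = binomial(link = "logit"), data = accounts)
accounts$pit_pd <- predict(pd_fit, type = "response")

# CART alternative, one of the machine-learning methods the book explores.
tree_fit <- rpart(factor(default_flag) ~ days_past_due + utilization,
                  data = accounts, method = "class")
```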
Lifetime PD calculation is a contemporary topic in risk management, and here the book provides ways to include the forward-looking perspective in PD, a requirement under both standards. It seeks to achieve this by adding a forward-looking perspective to the PIT PD and extending it over the lifetime of the assets.
Four modelling approaches are studied, viz., Lifetime GLM framework, Survival Modelling,
Lifetime ML Modelling, and Transition Matrix Modelling.
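As a flavour of the transition matrix approach (a hand-rolled sketch with invented ratings and probabilities, not the book's worked example), powers of a one-year migration matrix give a cumulative lifetime PD term structure:

```r
# Hypothetical one-year migration matrix; Default is absorbing, rows sum to 1.
M <- matrix(c(0.90, 0.08, 0.02,
              0.30, 0.55, 0.15,
              0.00, 0.00, 1.00),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("Good", "Bad", "Default"),
                            c("Good", "Bad", "Default")))

# Cumulative PD over years 1-5 for a loan starting in "Good":
# the (Good, Default) entry of the t-th matrix power.
cum_pd <- sapply(1:5, function(t) {
  Mt <- Reduce(`%*%`, replicate(t, M, simplify = FALSE))
  Mt["Good", "Default"]
})
cum_pd
```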
An attempt has been made to provide relevant examples and exercises so that students learn by doing.
Loss Given Default – The book devotes most attention to the workout LGD approach, which is more internal to a financial institution than the other two approaches it discusses. It suggests distinguishing, in both classification and calculation, between cured and written-off accounts. This modelling is done at the portfolio level.
The book explains the LGD data concepts in detail, since the modelling depends on adequate and sufficient information in the database. These concepts are further elaborated with an example to help the reader understand the importance of data and its structure in LGD modelling.
The book adopts a micro-structure approach, providing a comprehensive view of the post-default recovery process. It focuses on the probability of cure and the severity, using GLM and classification tree methods.
A multi-step procedure is adopted to capture the relationship of the probability of cure and the severity with macroeconomic variables, bringing the forward-looking perspective into the modelling. These steps are explained with a real estate model, giving a complete framework for LGD modelling. The book also attempts to provide a framework for cases where data is scarce and portfolios have low default rates.
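A minimal sketch of this cure-versus-severity decomposition (hypothetical recovery data and variable names, not the book's real estate example) might look like:

```r
# Hypothetical defaulted-loan data: cure flag plus loss severity when not cured.
defaults <- data.frame(
  cured      = c(1, 0, 1, 0, 0, 1, 0, 1),          # 1 = returned to performing
  severity   = c(0, 0.6, 0, 0.4, 0.7, 0, 0.5, 0),  # loss rate if not cured
  ltv        = c(0.50, 0.90, 0.60, 0.80, 1.10, 0.40, 0.95, 0.55),
  gdp_growth = c(0.5, -0.2, 0.3, 0.0, -0.5, 0.6, -0.1, 0.4)  # forward-looking link
)

# Step 1: probability of cure via logistic GLM.
cure_fit <- glm(cured ~ ltv + gdp_growth, family = binomial, data = defaults)
p_cure   <- predict(cure_fit, type = "response")

# Step 2: severity on non-cured accounts via a Gaussian GLM (a simplification).
sev_fit <- glm(severity ~ ltv + gdp_growth, data = subset(defaults, cured == 0))
sev_hat <- predict(sev_fit, newdata = defaults)

# Combine: expected LGD = P(not cured) * expected severity, bounded to [0, 1].
lgd_hat <- (1 - p_cure) * pmax(pmin(sev_hat, 1), 0)
```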
Exposure at Default – The book analyses the exposure under two different scenarios: (i) modelling full prepayments and overpayments (partial repayments) on their own, and (ii) combining them with defaults.
For prepayments and overpayments, a GLM is used; for prepayments, overpayments, and default together, multinomial regression is used. These two approaches apply to loan-type products, where the financial institution has no further commitment to lend. For uncommitted lines and revolving facilities, the book utilizes Tobit and beta regression techniques to model the EAD.
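As an indication of how such regressions can be set up in R (simulated data; the book works its own examples), multinomial regression handles the competing outcomes and beta regression the utilization of revolving lines:

```r
library(nnet)     # multinom()
library(betareg)  # betareg()

# Simulated account-level data for illustration only.
set.seed(1)
loans <- data.frame(
  outcome         = factor(sample(c("perform", "prepay", "overpay", "default"),
                                  200, replace = TRUE)),
  seasoning       = runif(200, 0, 10),    # years on book
  rate_gap        = rnorm(200),           # contract rate minus market rate
  utilization     = rbeta(200, 2, 5),     # drawn / limit, strictly in (0, 1)
  behaviour_score = rnorm(200, 600, 50)
)

# Competing outcomes (perform / prepay / overpay / default).
mn_fit <- multinom(outcome ~ seasoning + rate_gap, data = loans, trace = FALSE)

# Utilization of uncommitted and revolving facilities on (0, 1).
br_fit <- betareg(utilization ~ behaviour_score, data = loans)
```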
Expected Credit Loss – Though FASB does not prescribe scenario analysis (IFRS 9 does), the author suggests that using scenario analysis would help bring the forward-looking perspective into the estimate. However, to address the reversion to the long-term mean loss rate, which is central to the FASB standard, the author explores the use of time series analysis.
The author uses UK macroeconomic time series from 2000 to 2013 to describe vector autoregression (VAR) and vector error correction (VEC) techniques. A detailed analysis is attempted with a case in the book.
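A minimal sketch of the VAR step using the vars package (simulated series standing in for the UK macro data):

```r
library(vars)  # VARselect(), VAR(), predict()

# Simulated quarterly macro series, 2000-2013, in place of the actual UK data.
set.seed(42)
macro <- ts(cbind(gdp_growth   = rnorm(56, 0.5, 0.4),
                  unemployment = rnorm(56, 5.5, 0.6)),
            start = c(2000, 1), frequency = 4)

# Choose the lag order by information criteria, then fit the VAR.
p_sel   <- VARselect(macro, lag.max = 4, type = "const")$selection["AIC(n)"]
var_fit <- VAR(macro, p = p_sel, type = "const")

# Forward-looking macro scenario: forecast eight quarters ahead.
fc <- predict(var_fit, n.ahead = 8)
```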
More focus is devoted to IFRS 9 staging requirements and validation of the ECL model using
simulation and other methods.
The book follows a series of steps to address the changed requirements for calculating ECL in compliance with the standards. The discount rate for the PIT estimation of ECL is left to the financial institutions, and the book does not attempt to calculate or estimate it.
Each parameter (PD, LGD, and EAD) is explained starting with the data requirements: how to collect data, the structure of the database, and the modelling techniques, followed by validation methods. The Generalized Linear Model, a traditional regression method, is used extensively to estimate the risk parameters, and the author builds on this foundation toward more innovative methods such as machine learning, survival analysis, and competing risk modelling.
Each section of the book includes examples and case studies worked in R and SAS, the software packages most widely used by practitioners in credit risk management, to enable the reader to understand the code and modelling described.
A more elaborate comparison of the parameter calculations involved in estimating EAD and LGD would have given new readers and practitioners further insight into the material. The examples and cases are limited; with more of them, first-time readers and students would have benefitted more.
One major missing part is the calculation of ECL across multiple years. Though briefly explained and shown in a few examples, a more detailed discussion of the PD term structure along a lifetime path, the effect of macroeconomic variables, and more examples of scenario-based ECL calculation would have enriched the book.
The book comprehensively covers ECL modelling with various techniques, including traditional GLMs and more contemporary machine learning methods, with hands-on training in R and SAS. It would serve as a starting point for new practitioners and students to understand the concepts and the various methodologies in modelling ECL. The models and methods discussed in the book can be used as a benchmark to implement and validate ECL estimates.