Article

The Bayesian Approach to Capital Allocation at Operational Risk: A Combination of Statistical Data and Expert Opinion

Department of Management Sciences, University of Mohamed 5, Rabat 10052, Morocco
* Author to whom correspondence should be addressed.
Int. J. Financial Stud. 2020, 8(1), 9; https://doi.org/10.3390/ijfs8010009
Submission received: 3 October 2019 / Revised: 21 January 2020 / Accepted: 31 January 2020 / Published: 14 February 2020

Abstract

Operational risk management remains a major concern for financial institutions, which are required to hold own funds to hedge this risk. In this paper, we propose an approach to allocating own funds that combines historical data and expert opinion using the loss distribution approach (LDA) and Bayesian logic. The results show that internal models are of great importance in the capital allocation process, and that the use of the Delphi method for modelling expert opinion is very useful in ensuring the reliability of estimates.
JEL Classification:
C11; C13; C15; G21; G32

1. Introduction

Since the 1990s, the Basel Committee and researchers have tried to define an incontestable framework for modelling and managing operational risk. However, the efforts made have shown that theoretical and practical mastery of this risk is far from being achieved.
Operational risk management practice is based on an approach composed of four steps: Identification, assessment of impact, classification of risks, and implementation of action plans. Indeed, the risk management process must be able to ensure perfect knowledge and control of operational risk at the level of the various activities exercised.
With regard to the minimum capital requirement, under Basel II the regulator offered banks several approaches and methods for calculating operational risk capital, depending on the degree of control and the availability of the information required for internal modelling. On the one hand, the regulator proposes simple, unified, and standardized approaches whose characteristics it sets itself; on the other hand, it allows more sophisticated approaches whose characteristics are determined by the banks.
In terms of quantification, the committee presented some methods that could be used in the Advanced Measurement Approach (AMA) in a document published in 2001 entitled the “Working Paper on the Regulatory Treatment of Operational Risk”.
The AMA approach allows banks to use internal models for risk measurement. Three approaches have been proposed: the Scorecard approach, the IMA (internal measurement approach), and the LDA approach (loss distribution approach).
The use of the LDA approach for the calculation of capital requirements has become very complex given the multitude of theories and models used, such as the probabilistic approach, the Bayesian approach, the Markov chain Monte Carlo approach, and the use of copulas to model the correlation. This situation has generated a significant model risk because it has become impossible to compare and benchmark between banks and assess the evolution of the risk profile.
Following the financial crisis, the minimum capital requirements for operational risk were reviewed by the Basel Committee (BCBS). The publication in December 2017 of the document entitled “Basel III: Finalizing post-crisis reforms” set the orientation of banking regulation after 2022, which consists of replacing the existing operational risk measurement approaches with a single approach known as the “Standardized Measurement Approach” (SMA), entering into effect in January 2022.
The Basel Committee justified the decision to abandon internal models for the calculation of capital requirements by the complexity of the models used, and proposed a simple standardized approach instead.
Until the Basel III reform enters into effect, banks will continue to use their own models for calculating minimum capital requirements. Indeed, banks opt for two types of modelling approaches: the Top-Down approach or the Bottom-Up approach.
The Top-Down approach quantifies operational risk without attempting to identify the events or causes of losses; operational losses are measured from overall historical data. The Bottom-Up approach quantifies operational risk from knowledge of events, identifying internal events and their generating factors in detail at the level of each task and entity. The information collected is then included in the overall calculation of the capital charge.
Despite the Basel Committee’s decision to abandon the AMA approach, the use of internal models is essential for operational risk management, notably for the risk appetite process and the capital allocation process.
In this study, we show the value of internal models in the allocation of equity capital based on the LDA approach. We propose a practical approach based on the Delphi method to adjust historical data with expert opinions, use Bayesian logic to determine the risk measure employed in the capital allocation, and apply the proposed approach to the allocation of capital for the retail banking business line of a Moroccan bank.
The remainder of this article is organized as follows: the second section presents the literature review, the third the methodology, and the fourth the empirical study.

2. Literature Review

The modelling and management of operational risk remains a major concern for financial institutions, particularly in the absence of a total consensus on the approach to be followed between BCBS and the academicians and professionals in the domain. Indeed, research has focused on the approaches to be used for loss modelling, severity modelling, frequency modelling, correlation between losses, correlation between losses and total income, capital allocation, etc.
Several approaches have been proposed to quantify operational risk; the best known are the following:
(1)
The IMA approach based on a proportionality assumption between the expected loss and unexpected loss, presented by Akkizidis and Bouchereau (2005) and Cruz et al. (2015);
(2)
The scorecard approach based on calculating a score for the risks measured by an entity and acting on its changing values, presented by Niven (2006), Akkizidis and Bouchereau (2005), Figini and Giudici (2013), and Facchinetti et al. (2019);
(3)
The LDA approach based on the distribution of frequency and the severity of losses.
The latter approach comes in three forms. The first is the classic LDA approach, which consists of determining the distributions that fit the loss data and their parameters. In this case, the parameters are estimated by the method of moments or by maximum likelihood. This technique has been studied by a large number of researchers, such as Frachot et al. (2001, 2003), King (2001), Cruz (2002), Alexander (2003), Chernobai et al. (2005), Bee (2006), and P. V. Shevchenko (2010). The second is the Bayesian LDA approach with conjugate distributions, which considers that the parameters of the frequency and severity distributions are random variables distributed according to prior laws. This approach has been the subject of various studies, such as Giudici and Bilotta (2004), P. V. Shevchenko (2011), Dalla Valle (2009), Figini et al. (2014), and Benbachir and Habachi (2018). The last is the LDA approach by Markov chain Monte Carlo (MCMC), which uses non-informative laws and Markov chain properties. This method has been studied by Peters and Sisson (2006), Dalla Valle and Giudici (2008), and Shevchenko and Temnov (2009).
The dependence between the operational losses using mathematical copulas has been studied by various researchers, such as Cope and Antonini (2008), Brechmann et al. (2013), Groenewald (2014), and Abdymomunov and Ergen (2017). The opinions are divergent on this point because some consider it to be weak or inconclusive (Cope and Antonini 2008; Groenewald 2014).
The Basel III reform abandoned the AMA approach in favour of a new standard approach (SMA). The latter has been the subject of several critical studies that have shown the importance of the associated model risk, including the studies by Mignola et al. (2016), Peters et al. (2016), and McConnell (2017). As a result, some researchers have proposed other types of models based on historical losses (Cohen 2016, 2018).
Capital allocation is an important area in risk management. Indeed, various studies have addressed this subject in different categories of risks including studies by Denault (2001), Tasche (2007), Dhaene et al. (2012), and Boonen (2019). In terms of operational risk, this issue is treated by Urbina and Guillén (2014).

3. Methodology

3.1. The Risk Appetite Process

Risk appetite is defined as the maximum loss that the bank is willing to bear in order to achieve its profitability objectives. Indeed, the Board of Directors must define the risks that shareholders accept in order to achieve the objectives set for Senior Management.
Risk appetite must be defined by the Senior Management at the level of each business line and activity, by defining risk tolerance at the intermediate level and risk limits at the operational level.
Risk appetite is directly related to the current risk profile and its evolution in correlation with the evolution of the bank’s activity. As a result, the bank must determine its risk profile at the date of preparing its risk appetite policy and must estimate the evolution of its profile in accordance with the progress of its development and expansion plan.
The risk profile is determined internally by the bank and may differ from its regulatory profile, as determined by the regulatory capital. Indeed, the actual profile is determined by the bank’s economic capital, while the regulatory profile is defined by the minimum capital requirement according to the standard approach of Basel III.
For the deployment of a risk appetite framework, Shang and Chen (2012) identified seven steps:
(1)
A Bottom-up analysis of the company’s current risk profile;
(2)
Interviews with the board of directors regarding the level of risk tolerance;
(3)
Alignment of risk appetite with the company’s goal and strategy;
(4)
Formalization of the risk appetite statement with approval from the board of directors;
(5)
Establishment of risk policies, risk limits, and risk-monitoring processes consistent with risk appetite;
(6)
Design and implementation of a risk-mitigation plan consistent with risk appetite;
(7)
Communication with local senior management for their buy in.
Indeed, this approach should be able to define three components:
(1)
The risk profile;
(2)
The risk tolerance process;
(3)
The process for defining operational risk limits.

3.2. The Process of Capital Allocation

Capital allocation is the process that defines the capital allocated by the bank to a given entity to achieve the intended profitability objective. Indeed, the capital $K_i$ allocated to unit $(i)$ is defined according to the risk incurred by that unit.
The definition of a risk measure $\rho$ is an essential component of the capital allocation process. For operational risk, two measures can be used: Value at Risk ($VaR$), which is a non-coherent risk measure, and the Expected Shortfall ($ES$), which is a coherent risk measure. The Expected Shortfall is defined by
$$ES_\alpha = \frac{1}{\alpha}\int_0^{\alpha} F^{-1}(p)\,dp,$$
where $F$ is the cumulative distribution function of operational losses.
Let $X_i$, $i=1,\dots,n$, be the random variables representing the individual losses of $n$ business units and $K_i$, $i=1,\dots,n$, the capital allocated to each probable individual loss $(i)$. The total operational loss $(P)$ and total risk capital $(K)$ are expressed as
$$P=\sum_{i=1}^{n} X_i, \qquad K=\sum_{i=1}^{n} K_i.$$
For the allocation of risk capital for operational risk, several methods can be used, such as the proportionality allocation method (Hamlen et al. 1977), the beta method (Panjer 2002), the incremental method (Jorion 2001), the cost gap method (Driessen and Tijs 1985), the Shapley method (Shapley 1953), and Euler allocation (Aumann and Shapley 1974).
In operational risk, Urbina and Guillén (2014) applied the proportional allocation method with the VaR for capital allocation in the case of fraud.
In our study, we will use the same method for the allocation of capital at the retail banking business line level, using a Bayesian risk measure to integrate expert estimates.
This method is based on an assumption of proportionality between the allocated capital and the stand-alone risk measures:
$$K_j = \frac{K}{\sum_{i=1}^{n}\rho(X_i)}\,\rho(X_j),$$
where $\rho(X_j)=F_{X_j}^{-1}(\alpha)=VaR_\alpha(X_j)$ or $\rho(X_j)=ES_\alpha(X_j)=\frac{1}{\alpha}\int_0^{\alpha} F_{X_j}^{-1}(p)\,dp$.
The capital allocated by this principle neglects the dependence of the losses of the different business lines and risk categories. The Haircut allocation method considers that the correlation between risk categories and business lines is weak or insignificant. Indeed, the studies of Cope and Antonini (2008) and Groenewald (2014), cited above, encourage the use of this method.
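As a minimal illustration of this proportional (haircut) principle, the allocation formula above can be sketched as follows; the total capital and the stand-alone risk-measure values are placeholders, not the figures of the empirical study.

```python
import numpy as np

def haircut_allocation(total_capital, risk_measures):
    """Allocate total_capital proportionally to the stand-alone risk measures
    (VaR or ES per business unit / risk category)."""
    rho = np.asarray(risk_measures, dtype=float)
    return total_capital * rho / rho.sum()

# Hypothetical stand-alone VaRs (in million MAD) for three units
var_by_unit = [120.0, 45.0, 35.0]
allocation = haircut_allocation(200.0, var_by_unit)
print(allocation)  # capital K_j allocated to each unit, summing to 200
```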
Under the second pillar, the allocation of capital and the implementation of the risk appetite process strengthen the use of internal models despite the suppression of their use for the calculation of the minimum capital requirement under the first pillar. Indeed, the piloting of the activity by the risk requires an individual monitoring of the risk by business line in order to guarantee adequacy between the risk incurred and the capital allocated. Consequently, the bank must develop its own models for estimating the economic capital needed to develop its business independently of the regulatory constraint of measuring the solvency ratio based on the standard approach of Basel III.

3.3. The Risk Mapping and Capital Requirements

3.3.1. The Risk Mapping

Operational risk mapping is an inventory, at a given date, of the probable risks incurred by a bank. This mapping represents all operational risk situations broken down by business line and risk category. An operational risk situation is composed of three elements:
(1)
The generating factor of the risk (hazard), which comprises the factors that favor the occurrence of a risk incident, such as inexperienced personnel or a malfunction of the control system;
(2)
The operational risk event (incident), which is the single incident whose occurrence can generate losses for the bank, such as internal fraud or external fraud;
(3)
The impact (loss), which is the amount of financial damage resulting from an event.
To normalize the identification of an operational risk situation, the BCBS (2006) defines the generic mapping of operational risks within credit institutions, comprising eight business lines and seven categories of operational risks.

3.3.2. The Operational Risk Categories

The operational risk categories ($RT_c$, $1 \le c \le 7$) are: $RT_1$, execution, delivery, and process management; $RT_2$, business disruption and system failures; $RT_3$, damage to physical assets; $RT_4$, clients, products, and business practices; $RT_5$, employment practices and workplace safety; $RT_6$, external fraud; $RT_7$, internal fraud.

3.3.3. The Business Lines

The business lines ($BL_i$, $1 \le i \le 8$) are: $BL_1$, corporate finance; $BL_2$, trading and sales; $BL_3$, retail banking; $BL_4$, commercial banking; $BL_5$, payment and settlement; $BL_6$, agency services; $BL_7$, asset management; $BL_8$, retail brokerage.

3.3.4. Capital Requirements

The quantification of operational risk remains a major problem for the Basel Committee. Indeed, several approaches were adopted in the Basel II framework, including the AMA approach based on internal models, which is considered the most important.
The use of internal models has been strongly criticized by the Basel Committee. Indeed, a new orientation of the Basel Committee has emerged, which consists of abandoning all Basel II approaches and adopting a new standardized approach, the SMA, which replaces all previous approaches.
The standardized approach SMA defined by the BCBS (2016, 2017) is based on the Business Indicator ($BI$), defined as follows:
$$BI = ILDC + SC + FC.$$
The components $ILDC$, $SC$, and $FC$ are calculated by the following formulas:
$$ILDC = \min\!\left[\frac{1}{3}\sum_{i=1}^{3}\left|PI_i - CI_i\right|;\; 2.25\%\times\frac{1}{3}\sum_{i=1}^{3}API_i\right] + \frac{1}{3}\sum_{i=1}^{3}D_i,$$
$$SC = \max\!\left[\frac{1}{3}\sum_{i=1}^{3}ACE_i;\; \frac{1}{3}\sum_{i=1}^{3}APE_i\right] + \max\!\left[\frac{1}{3}\sum_{i=1}^{3}PHC_i;\; \frac{1}{3}\sum_{i=1}^{3}CHC_i\right],$$
$$FC = \frac{1}{3}\sum_{i=1}^{3}\left|PLT_i\right| + \frac{1}{3}\sum_{i=1}^{3}\left|PLB_i\right|,$$
where:
-
$PI_i$ and $CI_i$ are, respectively, the Interest Income and the Interest Expense for year $(i)$; $API_i$ is the Interest Earning Assets for year $(i)$; $D_i$ is the Dividend Income for year $(i)$;
-
$ACE_i$ and $APE_i$ are the Other Operating Income and the Other Operating Expense for year $(i)$;
-
$PHC_i$ and $CHC_i$ are, respectively, the Fee Income and the Fee Expenses for year $(i)$;
-
$PLT_i$ is the Net P&L of the Trading Book for year $(i)$;
-
$PLB_i$ is the Net P&L of the Banking Book for year $(i)$.
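For illustration only, a minimal sketch of how the three-year averages enter the Business Indicator, following the $BI$, $ILDC$, $SC$, and $FC$ definitions reproduced above; the input figures are invented and the function name is ours, not part of the regulatory text.

```python
import numpy as np

def business_indicator(PI, CI, API, D, ACE, APE, PHC, CHC, PLT, PLB):
    """Compute BI = ILDC + SC + FC from three-year series."""
    avg = lambda x: np.mean(np.asarray(x, dtype=float))
    ildc = min(avg(np.abs(np.array(PI) - np.array(CI))), 0.0225 * avg(API)) + avg(D)
    sc = max(avg(ACE), avg(APE)) + max(avg(PHC), avg(CHC))
    fc = avg(np.abs(PLT)) + avg(np.abs(PLB))
    return ildc + sc + fc

# Invented three-year figures (same currency units)
bi = business_indicator(PI=[100, 110, 120], CI=[60, 65, 70], API=[3000, 3200, 3400],
                        D=[5, 6, 7], ACE=[20, 22, 25], APE=[18, 19, 21],
                        PHC=[30, 32, 35], CHC=[25, 27, 28],
                        PLT=[4, -3, 5], PLB=[2, 3, -1])
print(bi)
```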

3.4. The LDA Approach and the VaR of Operational Risk

3.4.1. The Loss Distribution Approach LDA

The LDA approach uses the distributions of the frequency and severity of operational losses to determine the operational losses over a time horizon $T$.

The Classical LDA Model

i. Mathematical formulation of the model
In the LDA approach, the operational loss over horizon $T$ is considered a random variable $P$, defined as follows:
$$P_N = \sum_{i=1}^{N} X_i,$$
where:
-
$X_i$ is the random variable that represents the individual impact of operational risk incidents;
-
$N$ is the random variable that represents the number of occurrences over a horizon $T$.
The random variables X i are independent and identically distributed. The random variable N is independent of variables X i .
The mathematical expectation and variance of the compound random variable P are defined as follows:
$$E(P) = E(X)\times E(N) = \lambda\,E(X),$$
$$\mathrm{Var}(P) = E(N)\,\mathrm{Var}(X) + \mathrm{Var}(N)\,E(X)^2.$$
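For instance, with illustrative values $\lambda=10$, $E(X)=5000$, and $\mathrm{Var}(X)=4\times10^{6}$, and a Poisson frequency so that $\mathrm{Var}(N)=\lambda$, the two relations above give
$$E(P)=10\times5000=50{,}000,\qquad \mathrm{Var}(P)=10\times4\times10^{6}+10\times5000^{2}=2.9\times10^{8},$$
i.e., a standard deviation of approximately $17{,}000$ for the aggregate loss.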
ii. Presentation of the classical LDA approach.
The classical LDA approach considers that severity and frequency can be modelled by the usual theoretical distributions, whose parameters are estimated from the observed data.
For modelling the individual severity of losses $X_i$, several distributions can be used to represent the severity random variable $X$, such as the LogNormal distribution, the Beta distribution, the Weibull distribution, or other distributions, which are detailed in Chernobai et al. (2007). In our study, we limit ourselves to the LogNormal distribution $LN(\mu,\sigma)$:
$$E(X)=e^{\mu+\frac{\sigma^2}{2}},$$
$$\mathrm{Var}(X)=\left(e^{\sigma^2}-1\right)e^{2\mu+\sigma^2}.$$
With regard to the modelling of the loss frequency $N$, we use the Poisson distribution $P(\lambda)$ or the Negative Binomial distribution $BN(a,b)$.
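As a minimal sketch of this classical estimation step, assuming lognormal severity and Poisson frequency, the parameters can be obtained by maximum likelihood as below; the loss and frequency data are simulated placeholders, not the bank's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: individual losses and per-period loss counts
hist_losses = rng.lognormal(mean=8.0, sigma=1.5, size=500)
semiannual_counts = rng.poisson(lam=25, size=40)

# Lognormal MLE: mean and (biased) std of the log-losses
log_losses = np.log(hist_losses)
mu_h = log_losses.mean()
sigma_h = log_losses.std()        # maximum-likelihood estimate (ddof=0)

# Poisson MLE: the sample mean of the period counts
lambda_h = semiannual_counts.mean()

print(mu_h, sigma_h, lambda_h)
```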

The Pure Bayesian LDA Approach

In the pure Bayesian LDA approach, the parameters of the distributions of the frequency $N$ and the individual losses $X_i$ are considered random variables with a probability density function.
The pure Bayesian approach thus treats the parameters $(\mu,\sigma)$ and $\lambda$ of the density functions of $X_i$ and $N$ as random variables whose densities are, respectively, $\pi_\mu$, $\pi_\sigma$, and $\pi_\lambda$.
i. Description of the pure Bayesian L D A approach.
Let $Y=(Y_1,\dots,Y_m)$ be a vector of independent and identically distributed (i.i.d.) random variables, $(y_1,\dots,y_m)$ a realization of vector $Y$, and $\theta=(\theta_1,\theta_2,\dots,\theta_p)$ the vector of random variables representing the parameters of the density of vector $Y$.
The density function $f(Y,\theta)$ of the vector $(Y,\theta)=(Y_1,\dots,Y_m,\theta_1,\theta_2,\dots,\theta_p)$ is defined by
$$f(Y,\theta)=f(Y/\theta)\,\pi(\theta)=\pi(\theta/Y)\,f(Y),$$
where
  • $\pi(\theta)$ is the probability density of the parameter $\theta$, called the “prior density”;
  • $\pi(\theta/Y)$ is the conditional probability density of the parameter $\theta$ given $Y$, called the “posterior density”;
  • $f(Y,\theta)$ is the joint probability density of the couple $(Y,\theta)$;
  • $f(Y/\theta)$ is the conditional density of $Y$ given $\theta$; this is the likelihood function $f(Y/\theta)=\prod_{i=1}^{m} f_i(Y_i/\theta)$, with $f_i(Y_i/\theta)$ the conditional probability density of $Y_i$;
  • $f(Y)$ is the marginal density of $Y$, which can be written as $\int f(Y/\theta)\,\pi(\theta)\,d\theta$.
Hence
$$\pi(\theta/Y)\propto f(Y/\theta)\,\pi(\theta),$$
where $f(Y)$ is a normalization constant, and the posterior distribution $\pi(\theta/Y)$ can be viewed as a combination of the a priori knowledge $\pi(\theta)$ with the likelihood function $f(Y/\theta)$ of the observed data. Since $f(Y)$ is a normalization constant, the posterior distribution is often written in the form (13), where the symbol $\propto$ signifies “is proportional to”, with a constant of proportionality independent of the parameter $\theta$.
ii.
The Bayesian Estimator $\hat{\theta}_{Bay}$
The parameter $\theta$ can be univariate or multivariate. The estimate of the Bayesian posterior mean $\hat{\theta}_{Bay}$ of $\theta$ is defined as follows:
iii.
If the parameter $\theta$ is univariate, the estimate of the Bayesian posterior mean of $\theta$, denoted $\hat{\theta}_{Bay}$, is the conditional expectation of $\theta$ given $Y$, defined by
$$\hat{\theta}_{Bay}=E(\theta/Y)=\int \theta\,\pi(\theta/Y)\,d\theta=\frac{\int \theta\,f(Y/\theta)\,\pi(\theta)\,d\theta}{f(Y)}.$$
iv.
In a multidimensional context, where $\theta=(\theta_1,\theta_2,\dots,\theta_p)$, the estimate of the Bayesian posterior mean of $\theta$, denoted $\hat{\theta}_{Bay}$, is the conditional expectation of the vector $\theta$ given $Y$:
$$\hat{\theta}_{Bay}=E(\theta/Y)=\left(E(\theta_1/Y),E(\theta_2/Y),\dots,E(\theta_p/Y)\right)=\left(\int \theta_1\,\pi(\theta_1/Y)\,d\theta_1,\int \theta_2\,\pi(\theta_2/Y)\,d\theta_2,\dots,\int \theta_p\,\pi(\theta_p/Y)\,d\theta_p\right).$$
a. Calculation of the estimate of the Bayesian posterior mean
To determine the estimate of the Bayesian posterior mean defined by Formulas (13) and (14), we must determine the prior and posterior laws of the random variable $\theta$. We limit our study to a Lognormal distribution for the loss severity, $X_i \sim LN(\mu,\sigma)$, $1\le i\le m$, and to the Poisson distribution for the frequency of the losses, $N \sim P(\lambda)$. The parameters $\mu$, $\sigma$, and $\lambda$ are considered random variables.
Therefore, we must determine the following estimates of the Bayesian posterior mean:
$$\hat{\theta}_{Bay}=(\hat{\mu},\hat{\sigma})=E\big((\mu,\sigma)/X_1,\dots,X_m\big),$$
$$\hat{\theta}_{Bay}=\hat{\lambda}=E(\lambda/N).$$
b. Determination of the prior law of the parameters
The Bayesian approach depends on the accuracy of the information provided by the experts on the parameters of the prior law. Below, we present the approach adopted.
– The prior law of the parameter $\lambda$, with $N \sim P(\lambda)$
The choice of the prior distribution of the parameter $\lambda$ depends on the description of the characteristics of the random variable given by the experts. In our study, we consider that the prior law is a Gamma distribution $\Gamma$ with parameters $(a,b)$ to be determined by the experts.
– The prior law of $\mu$ and $\sigma$, with $X_i \sim LN(\mu,\sigma)$
In this paper, we limit ourselves to the case where $\mu$ is a Gaussian random variable, $\mu \sim N(\mu_0,\sigma_0)$, and $\sigma$ is a known constant. P. V. Shevchenko (2011), however, represented $\sigma^2$ by the inverse Chi-square distribution (Inv.Chi.Sq) with parameters $(\alpha,\beta)$.
c. Determination of the posterior law of the parameters λ and μ
The posterior distribution is determined from the likelihood function and the prior distribution by Formula (12). We thus derive the posterior laws of the frequency and severity parameters.
– The posterior law of the parameter $\lambda$, with $N \sim P(\lambda)$
Let $N=(N_1,\dots,N_l)$ be a vector of frequency random variables and $(n_1,\dots,n_l)$ a realization of vector $N$. We suppose that $N_j \sim P(\lambda)$, and we consider that $\lambda \sim \Gamma(a,b)$.
The posterior law conjugate to the prior law of $\lambda$ is defined by
$$\pi(\lambda/N)\propto f(N/\lambda)\,\pi(\lambda)\propto f(N/\lambda)\,\frac{(\lambda/b)^{a-1}}{\Gamma(a)\,b}\,e^{-\lambda/b},$$
and we have
$$f(N/\lambda)=\prod_{j=1}^{l} f_j(N_j/\lambda)=\prod_{j=1}^{l}\frac{\lambda^{n_j}}{n_j!}\,e^{-\lambda}.$$
Thus,
$$\pi(\lambda/N)\propto \frac{(\lambda/b)^{a-1}}{\Gamma(a)\,b}\,e^{-\lambda/b}\prod_{j=1}^{l}\frac{\lambda^{n_j}}{n_j!}\,e^{-\lambda}\propto \lambda^{a-1}\,\lambda^{\sum_{j=1}^{l}n_j}\,e^{-\lambda\left(\frac{1}{b}+l\right)}\propto \lambda^{\left(a+\sum_{j=1}^{l}n_j\right)-1}\,e^{-\lambda\,\frac{1+b\,l}{b}}.$$
We set $a_l=a+\sum_{j=1}^{l}n_j$ and $b_l=\frac{b}{1+b\,l}$. Thus,
$$\pi(\lambda/N)\propto \lambda^{a_l-1}\,e^{-\lambda/b_l}.$$
From Formula (17), we deduce that the posterior law is a Gamma law $\Gamma(a_l,b_l)$.
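A small numerical sketch of this conjugate update, with an invented Gamma prior $(a,b)$ and invented semi-annual counts $n_j$ (the variable names are ours):

```python
import numpy as np

# Invented Gamma prior (shape a, scale b) supplied by the experts
a, b = 4.0, 5.0                                # prior mean lambda_0 = a*b = 20
counts = np.array([18, 22, 25, 19, 27, 21])    # invented observed frequencies n_j
l = len(counts)

# Conjugate posterior Gamma(a_l, b_l)
a_l = a + counts.sum()
b_l = b / (1.0 + b * l)

lambda_bay = a_l * b_l                         # posterior mean = Bayesian estimator
print(a_l, b_l, lambda_bay)
```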
– The posterior law of the parameter $\mu \sim N(\mu_0,\sigma_0)$, with $\sigma$ a constant
Let $x_1,\dots,x_m$ be the realizations of the random variables $X_1,\dots,X_m$ representing the collected losses. For the Bayesian modelling of the severity, we suppose that $\mu \sim N(\mu_0,\sigma_0)$ and that $\sigma$ is a constant, which we estimate from the sample by the maximum likelihood method. We set $Z_i=\ln(X_i)$, so that $Z_i \sim N(\mu,\sigma)$, and we consider the random vector $Z=(Z_1,\dots,Z_m)$. The prior distribution of $\mu$ is given by
$$\pi(\mu)=\frac{1}{\sigma_0\sqrt{2\pi}}\,e^{-\frac{(\mu-\mu_0)^2}{2\sigma_0^2}},$$
and the conditional distribution of the random vector $Z$ is given by
$$f(Z/\mu,\sigma)=\prod_{i=1}^{m}\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(Z_i-\mu)^2}{2\sigma^2}}.$$
Hence, the posterior law of $\mu$:
$$\pi(\mu/Z)\propto f(Z/\mu)\,\pi(\mu),$$
$$\pi\big(\mu/Z=(z_1,\dots,z_m)\big)\propto \prod_{i=1}^{m}\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(z_i-\mu)^2}{2\sigma^2}}\times\frac{1}{\sigma_0\sqrt{2\pi}}\,e^{-\frac{(\mu-\mu_0)^2}{2\sigma_0^2}},$$
$$\pi\big(\mu/Z=(z_1,\dots,z_m)\big)\propto e^{-\frac{(\mu-\mu_{0m})^2}{2\sigma_{0m}^2}},$$
where
$$\mu_{0m}=\frac{\mu_0+m\,\varepsilon\,\bar{Z}}{1+m\,\varepsilon},\qquad \sigma_{0m}^2=\frac{\sigma_0^2}{1+m\,\varepsilon},\qquad \bar{Z}=\frac{1}{m}\sum_{i=1}^{m}z_i,\qquad \varepsilon=\frac{\sigma_0^2}{\sigma^2}.$$
Formula (18) shows that the posterior law of $\mu$ is a Gaussian law $N(\mu_{0m},\sigma_{0m})$.
d. Calculation of the Bayesian estimator μ ^ B a y and λ ^ Bay
The Bayesian estimator $\hat{\lambda}_{Bay}$ of the parameter $\lambda$ is given by
$$\hat{\lambda}_{Bay}=E(\lambda/N).$$
Result (17) shows that the posterior law of $\lambda$ is a $\Gamma(a_l,b_l)$ distribution with $(a_l,b_l)=\big(a+\sum_{j=1}^{l}n_j,\ \frac{b}{1+b\,l}\big)$. Consequently, the estimator $\hat{\lambda}_{Bay}$ is the mathematical expectation of the posterior law of $\lambda$:
$$\hat{\lambda}_{Bay}=a_l\,b_l=\Big(a+\sum_{j=1}^{l}n_j\Big)\frac{b}{1+b\,l}=\frac{a\,b+b\,l\,\bar{N}}{1+b\,l}=\frac{\lambda_0+b\,l\,\bar{N}}{1+b\,l},$$
$$\hat{\lambda}_{Bay}=\varepsilon_0\,\lambda_0+(1-\varepsilon_0)\,\bar{N}=\varepsilon_0\,\lambda_0+(1-\varepsilon_0)\,\lambda_{observed},$$
where $\varepsilon_0=\frac{1}{1+b\,l}$, $\lambda_{observed}=\bar{N}=\frac{\sum_{j=1}^{l}n_j}{l}$, and $\lambda_0=E(\lambda)$. The parameter $\lambda_0$ is estimated by the experts.
The Bayesian estimator $\hat{\mu}_{Bay}$ of the parameter $\mu$ is given by
$$\hat{\mu}_{Bay}=E\big(\mu/(\sigma;X_1,\dots,X_m)\big)=E\big(\mu/(\sigma;x_1,\dots,x_m)\big),$$
where $\sigma$ is a constant and $x_1,\dots,x_m$ are realizations of the random variables $X_1,\dots,X_m$.
Result (18) shows that the posterior law of $\mu$ is a Gaussian distribution $N(\mu_{0m},\sigma_{0m})$. Consequently, the estimator $\hat{\mu}_{Bay}$ is the mathematical expectation of the posterior law of $\mu$. Thus,
$$\hat{\mu}_{Bay}=\mu_{0m}=\frac{\mu_0+m\,\varepsilon\,\bar{Z}}{1+m\,\varepsilon},$$
which can be written as
$$\hat{\mu}_{Bay}=\varepsilon_2\,\mu_0+(1-\varepsilon_2)\,\bar{Z}=\varepsilon_2\,\mu_0+(1-\varepsilon_2)\,\mu_{observed},$$
where
$$\varepsilon_2=\frac{1}{1+m\,\varepsilon},\qquad \varepsilon=\frac{\sigma_0^2}{\sigma^2},\qquad \bar{Z}=\frac{1}{m}\sum_{i=1}^{m}z_i=\mu_{observed},\qquad z_i=\ln(x_i),\qquad \mu_0=E(\mu),$$
and the parameter $\mu_0$ is estimated by the experts.
Consequently, the parameters of the LogNormal law used in the simulation are $\hat{\mu}_{Bay}$ and $\sigma$.
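In practice, once the expert parameters and the historical estimates are available, the credibility forms (19) and (20) reduce to a weighted average; the sketch below uses placeholder values and fixes the weight at 25%, as in the empirical study.

```python
def bayes_combine(expert_value, observed_value, expert_weight=0.25):
    """Credibility combination of expert and historical estimates:
    eps * value_expert + (1 - eps) * value_observed."""
    return expert_weight * expert_value + (1.0 - expert_weight) * observed_value

# Placeholder values
lambda_bay = bayes_combine(expert_value=30.0, observed_value=22.0)   # frequency
mu_bay = bayes_combine(expert_value=9.2, observed_value=8.7)         # log-severity location
print(lambda_bay, mu_bay)
```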

3.4.2. Value at Risk of Operational Risk

Value at Risk (VaR) is a measure adopted by the Basel Committee on Banking Supervision under Basel II to measure credit risk, market risk, and operational risk in the framework of advanced approaches based on internal models. Indeed, the committee requires that the internal model be very robust and meet very high requirements by fixing the threshold for the VaR of operational risk at 99.9%.
In terms of operational risk, the VaR model is the main component in the calculation of capital requirements by the LDA approach, which is based on determining the distribution of aggregate operational losses and the 99.9% percentile of this distribution.
The determination of the VaR therefore depends on the determination of the aggregate operational loss distribution, which can be calculated analytically, determined by numerical algorithms, or obtained by Monte Carlo simulation.

Presentation of Value at Risk (VaR)

Let $X_t$, $t=1,\dots,n$, be a series of stationary data with cumulative distribution function $F$. The value at risk (VaR) for a given probability $\alpha$ is defined mathematically by
$$VaR_\alpha=\inf\{u \,/\, F(u)\ge\alpha\}.$$

Definition of the Capital at Operational Risk

We consider the aggregated loss $P_N=\sum_{i=1}^{N}X_i$ over a given horizon $T$ and fix the level of confidence $1-\alpha=99.9\%$.
The capital required to cover the operational risk is measured by the Value at Risk (VaR). The $VaR$ is the quantile of order $1-\alpha$ of the aggregated loss $P_N$, defined by
$$F_{P_N}(VaR)=F_{\sum_{i=1}^{N}X_i}(VaR)=P(P_N\le VaR)=1-\alpha,$$
where $F_{P_N}$ is the cumulative distribution function of $P_N$. The $VaR$ is given by
$$VaR=F_{P_N}^{-1}(1-\alpha).$$
The simulation of the VaR is presented in Appendix A.

3.5. Bayesian Modelling of the Expert Opinion

3.5.1. Collecting and Modelling the Expert Opinion

Organization of the Process for Collecting the Expert Opinion

Obtaining an expert opinion can be defined as the process of collecting information and data or answering questions about problems to be solved. In this study, we must define the parameters of the frequency and severity of operational risk events. Therefore, the approach adopted must ensure a high level of accuracy and reliability of the expert opinion in order to reduce the impact of this data on the bank’s risk profile.
The modelling of expert opinions has been the subject of various studies that have used various techniques for collecting expert opinions, such as the Delphi technique defined by Helmer (1968) and the practical guides proposed by Ayyub (2001).
In our study, we use the Delphi technique after adapting it to the specificities of collecting information from experts in the field of operational risk.

Presentation of the Delphi Method

The Delphi method includes eight steps according to Ayyub (2001), which are defined as follows:
(1)
Selection of issues or questions and development of questionnaires;
(2)
Selection of experts who are most knowledgeable about issues or questions of concern;
(3)
Issue familiarization of experts by providing sufficient details on the issues via questionnaires;
(4)
Elicitation of experts about the pertinent issues. The experts might not know who the other respondents are;
(5)
Aggregation and presentation of the results in the form of median values and using an inter-quartile range (i.e., 25% and 75% values);
(6)
Review of results and revision of the initial answers by experts. This iterative re-examination of issues sometimes increases the accuracy of results. Respondents who provide answers outside the inter-quartile range need to provide written justifications or arguments during the second cycle of completing the questionnaires;
(7)
Revision of results and re-review for another cycle. The process should be repeated until a complete consensus is achieved. Typically, the Delphi method requires two to four cycles or iterations;
(8)
A summary of the results is prepared with an argument summary for out of inter-quartile range values.

3.5.2. Summary Presentation of the Process for Collecting Expert Opinions

The approach for collecting expert opinion is based on that defined by Ayyub (2001) with readjustments to better adapt the process to the area of operational risk:
(1)
Definition of the information requested;
(2)
Definition of interveners in the data collecting process;
(3)
Identification of problems, information sources and insufficiencies;
(4)
Analysis and collecting of pertinent information;
(5)
Choice of interveners in the data collecting process;
(6)
Knowledge of the operation’s objectives by the experts and a formation of those objectives.
(7)
Soliciting and collecting opinions;
(8)
Simulation, revision of assumptions, and estimates. If the expert provides his consent, we pass to the next step; otherwise, we repeat steps 6, 7, and 8;
(9)
Aggregation of estimates and overall validation;
(10)
Preparation of reporting and determination of results.

Definition of the Information Requested

The collecting of information from experts has two objectives:
(1)
The first consists in modelling the a priori law of the frequency and severity of the data by risk category. Indeed, the expert must provide the forms of the prior laws of frequency and severity and an estimation of their parameters $(\lambda_e,\mu_e,\sigma_e)$;
(2)
The second objective is the estimation of the expert weighting with the control functions (internal audit and permanent control).
i. Modelling the a priori law.
In this case, the expert must provide:
(1)
The estimation of the parameter $\mu_e$ of the lognormal law $LN(\mu,\sigma)$, which models the severity $X_i$ by risk category, knowing that $\sigma$ is a constant and $\mu \sim N(\mu_e,\sigma_0)$;
(2)
The estimation of the parameter $\lambda_e$ of the Poisson law $P(\lambda)$, which models the frequency $N$ by risk category over a horizon $(T)$, knowing that $\lambda \sim \Gamma(a_0,b_0)$.
ii. Weighting of the expert opinion.
The objective of weighting the expert opinion is to determine the parameters of the a posteriori law. Indeed, for frequency, this weighting permits one to determine the parameter λ ^ B a y of Poisson’s law relating to the frequency of losses by risk category R T c . For severity, this weighting allows one to determine parameter μ ^ B a y of the LogNormal law relating to the severity of losses by risk category.

Definition of Interveners in the Data Collecting Process

The evaluation of the parameters of the a priori law involves all the operational entities concerned, as well as the risk management function:
i. The Risk managers.
The risk managers have the status of evaluators because they must conduct the evaluation process with the various experts.
ii. Person in charge of incident reporting (risk correspondents) and their managers.
This is an essential population with great added value, given their experience in collecting incidents and their contributions to correcting collection biases.
iii. Experts from the operating entities and the business lines.
The operational losses are dependent on the business line and the activity exercised. Indeed, the severity and frequency generally reflect the risk profile of each activity and business line because they depend on the size of the transactions concluded by the business line (or activity exercised) and on their frequencies.
Consequently, the use of experienced and well-qualified experts is the first step in the evaluation process, which will be followed by a phase of estimation and an aggregation of the data collected, which takes into account the specificities of the activity targeted by the evaluation.
iv. Internal auditors and permanent controllers.
The internal audit and permanent control functions have the right to supervise all activities and executions on a permanent or periodic basis, as well as audit and control missions for the various business lines and operational entities. Their verification approaches are based on a risk identification approach using risk mapping and the database of events collected. Therefore, recourse to the service of this category for the weighting of experts’ opinions is necessary.

Identification of Problems, Information Sources, and Insufficiencies

The main reason for using expert opinion modelling is to reduce the uncertainty due to the change in the bank’s risk profile caused by changes at the organization level and in the process of control and risk management, given that the distributions of observed historical losses in frequency and severity follow Poisson’s law and the LogNormal law, respectively. Indeed, uncertainty is linked to a change in the parameters of the two laws because the use of historical data alone can bias the estimation of risk capital.
Consequently, the expert opinion makes it possible to define the a priori law on the one hand and to weight the experts’ estimate on the other hand. To do this, we will estimate, with the business experts, the average loss defined by Formula (4), which will allow us to determine the parameters λ e and μ e , respectively.

Analysis and Collecting of Pertinent Information

In order to carry out the evaluation mission and ensure the acceptable reliability of the expert opinion, we have collected a series of relevant information, such as
(1)
The evolution of the bank’s size in terms of net banking income, the number of transactions, the number of incidents, the size of the banking network, and the number of customer claims.
(2)
The organizational and business changes, such as the introduction of new products, the industrialization of sales, control and treatment processes, external audits, control activities, and outsourcing of activities.
(3)
The major losses suffered and the action plans implemented, as well as their impact on the control and risk management device.
(4)
The operational risk training programs and their frequency.

Choice of Interveners in the Data Collecting Process

i. Choice of expert from the operating entities and choice of person in charge of incident reporting.
In our study, we weighted the expert opinion at 25%. However, the approach used is valid for any desired weighting.
Therefore, we carried out an estimation with experts who can be weighted at 25%. To choose them, we drew up a list of experts from the operating entities and of the persons in charge of incident reporting at the level of each business line, and we scaled the estimate that each of them can provide with a scoring system that we constructed. Then, we selected only those whose estimates can be weighted at 25%.
The determination of the score is made with the hierarchical managers and validated with internal audit and permanent control functions on the basis of the following elements:
(1)
Relevant expertise, academic and professional training, as well as professional experience;
(2)
The number of risk incidents declared and treated;
(3)
Knowledge and mastery of the control device;
(4)
The level of training in, and knowledge of, operational risk;
(5)
The level of knowledge of descriptive and inferential statistics;
(6)
Excellent communication abilities, flexibility, impartiality, and a capacity to generalize and simplify.
The score must give a value that corresponds to a grid of 10%, 25%, 40%, 50%, and 75%. Each criterion is given a qualification of low, medium, or high. To calculate the score, a rating was assigned to each qualification. The ratings assigned are presented in Table 1 as follows:
The expert score function that we retained for our study is equal to the sum of the ratings assigned to all criteria, and the weighting is defined according to the score obtained. Expert weighting according to the score function is presented in Table 2 as follows:
ii. Choice of evaluators for internal audit and permanent control.
To choose the evaluators for permanent control and internal audit, we based our choice on the following elements:
(1)
Relevant expertise, academic and professional training, as well as professional experience;
(2)
The number of control and audit missions conducted annually;
(3)
The level of training in and knowledge of operational risk;
(4)
The level of knowledge of descriptive and inferential statistics;
(5)
Excellent communication abilities, flexibility, impartiality, and a capacity to generalize and simplify.
The designation of evaluators is made by consensus with the audit function and the permanent control function.
iii. Knowledge of the operation’s objectives by the experts and the formation of those objectives.
After we selected the experts and evaluators, we organized an introductory session on the evaluation mission by presenting the main lines of the mission, the objectives, the speakers, and the realization planning. Then, the following elements were sent to the participants before launching the evaluation meetings and workshops:
(1)
The description of the operation’s objectives;
(2)
The list of experts from the operating entities and person in charge of incident reporting, as well as hierarchical managers and the evaluators for internal audit and permanent control;
(3)
A summary description of the risks, tools, and operating system, as well as the organization and controls;
(4)
Basic terminology and definitions, which should include probability density, arithmetic and weighted mean, standard deviation, mode, median, etc.;
(5)
A detailed description of the process by which meetings and workshops to collect expert opinions are conducted and the average duration of their conduct;
(6)
Methods for aggregating expert opinions.
iv. Simulation, revision of assumptions, and estimates.
To have the expert’s consent to the estimates obtained, we proceeded as follows:
(1)
The expert estimates the average loss per risk category, which will be used to determine the parameters of the frequency law $P(\hat{\lambda}_{expert})$ and the severity law $LN(\hat{\mu}_{expert},\hat{\sigma}_{expert})$, knowing that $\hat{\sigma}_{expert}$ is equal to $\sigma$ as determined by maximum likelihood. These parameters are used to simulate, via Monte Carlo, three samples of realizations concerning, respectively, the individual loss $X_i$, the frequency $N$, and the annual loss $P=\sum_{i=1}^{n}X_i$. We then analyze the characteristics of these samples with the expert, particularly the mean, median, maximum, and minimum values, etc.;
(2)
If the expert accepts the simulations and their characteristics, the estimation of the parameters λ ^ e x p e r t , μ ^ e x p e r t , and σ ^ e x p e r t will be validated;
(3)
If the expert rejects the simulations, we will eliminate the outliers rejected by the expert and revise the expert’s initial estimates and proposed simulations in an iterative manner until the expert’s consent is obtained.
v. Aggregation of estimates and validation.
In our study, the expert’s estimate concerns the parameters λ ^ e x p e r t ,     μ ^ e x p e r t and σ ^ e x p e r t . Therefore, we need to aggregate historical and expert estimates to determine the Bayesian estimator.

3.5.3. Determination of the Bayesian Estimator

In the theoretical study, we showed that the Bayesian Estimators of the parameters of the severity and the frequency distributions of losses are defined as follows:
(1)
For the frequency, Formula (19) defines the Bayesian estimator of $\lambda$ by
$$\hat{\lambda}_{Bay}=\varepsilon_1\,\lambda_{expert}+(1-\varepsilon_1)\,\lambda_{observed}.$$
(2)
For the severity, Formula (20) defines the Bayesian estimator of $\mu$ by
$$\hat{\mu}_{Bay}=\varepsilon_2\,\mu_{expert}+(1-\varepsilon_2)\,\mu_{observed}.$$
In our study, the weights $\varepsilon_1$ and $\varepsilon_2$ are fixed at 25%, which corresponds to the scores of the selected experts.

4. Results

4.1. Data Description

In this study, we used a database of loss incidents concerning the retail banking business line of a Moroccan banking institution. The database was constituted from the losses registered by the bank since the 1990s, as well as the reports and missions of the audit.
The database is composed of 3581 individual losses, i.e., 2069 distinct amounts. The descriptive statistics of the losses are summarized in Table 3 as follows.
The distribution of the database by risk category $RT_c$ shows that the losses of category $RT_3$ represent 45%, followed by those of $RT_6$, which represent 19%. In third position is $RT_1$, which represents 12%, followed by $RT_7$ with 10%; the other categories represent 15%. The statistical characteristics of the individual losses $X_i$ by risk category are summarized in Table 4 as follows.
To determine the frequency of losses, we will segment the database according to a semi-annual horizon. The choice of horizon is based on the data available for modelling, which must be greater than 30 observations.
The statistical characteristics of the frequency by risk category are presented in Table 5 as follows.

4.2. The LDA Approach

4.2.1. Statistical Estimation of Parameters

The estimation of the parameters of the laws of severity and frequency based on the observed data by risk category is presented as follows.

The Parameters of the Severity

The adjustment test of the data with the lognormal law L N ( μ h , σ h ) is based on the Kolmogorov-Smirnov test. As a result, the estimation of the parameters and the results of the adjustment tests by risk category are presented in Table 6 as follows.
The Kolmogorov-Smirnov fit test shows that the data of all categories fit the lognormal distribution, except category $RT_6$.

The Parameters of the Frequency

The test for adjusting the frequency data with the Poisson law and the negative binomial law is based on the chi-square test. As a result, the estimation of the parameters and the results of the adjustment tests by risk category are presented in Table 7 as follows.
The fit test shows that, at the 5% threshold, the data fit neither the Poisson nor the Negative Binomial distribution, except for category $RT_7$, which fits the Negative Binomial distribution. At the 1% threshold, no category fits the Poisson distribution, but all fit the Negative Binomial distribution, except category $RT_7$, which does not fit the Negative Binomial distribution.

4.2.2. Experts’ Estimates

The mean annual loss is determined by risk category while maintaining the same allocation structure for the mean annual losses. The mean loss for the business line is defined as a percentage of the activity level of the business line $(N_a)$. Indeed, the level of activity is deduced from the activity indicator presented above. In our research, the activity level $N_a$ is defined as follows:
$$N_a=\min\!\left[\frac{1}{3}\sum_{i=1}^{3}\left|PI_i-CI_i\right|;\; 2.25\%\times\frac{1}{3}\sum_{i=1}^{3}API_i\right]+\max\!\left[\frac{1}{3}\sum_{i=1}^{3}PHC_i;\; \frac{1}{3}\sum_{i=1}^{3}CHC_i\right].$$
For the bank studied, the semi-annual level of activity of the Retail Banking line is equal to 4.5 million MAD.
The experts’ estimates of the mean loss for the business line are set at 1.5%, i.e., a mean loss of 67.5 million MAD, allocated by risk category in Table 8 as follows.
The estimation by the experts is made in two steps. We will first estimate the mean semi-annual frequency ( λ e ) and then estimate the parameter μ e of L N ( μ e   , σ h ) from the Formula (9) using the mean loss per risk category R T c .

Expert Estimation $\lambda_e$ of Parameter $\lambda$

The experts’ estimate of parameter λ e is based on the approach defined above. Indeed, the expert gives the first estimate based on historical data. This estimate is used to simulate the realization of Poisson’s law according to the algorithm presented in Appendix A. Then, the values judged to be outliers by the expert are deleted. We determine the new mean of the simulated sample after deleting the values judged to be outliers, which will be confirmed by the expert. This simulation is repeated until the expert’s validation of the mean frequency by risk category is obtained.
The results of this approach are presented in Table 9 as follows.

Estimation of the Parameter μ e

The expert estimate of the parameter $\mu_e$ is derived from the estimated mean loss and mean frequency using the following formulas, obtained from Formulas (7) and (9):
$$PM=\lambda_e\,E(X),\qquad \mu_e=\ln\big(E(X)\big)-\frac{\sigma_h^2}{2}.$$
As a result, the estimate of the parameter $\mu_e$ by risk category is presented in Table 10 as follows.
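A minimal sketch of this two-step computation: from the expert’s mean semi-annual loss $PM$ and mean frequency $\lambda_e$ for a category, the implied mean individual loss $E(X)$ and then $\mu_e$ are recovered, with $\sigma_h$ taken from the historical fit; all figures below are invented.

```python
import numpy as np

def mu_expert(mean_period_loss, lambda_e, sigma_h):
    """Recover mu_e of LN(mu_e, sigma_h) from PM = lambda_e * E(X)
    and E(X) = exp(mu_e + sigma_h**2 / 2)."""
    mean_individual_loss = mean_period_loss / lambda_e
    return np.log(mean_individual_loss) - sigma_h**2 / 2.0

# Invented expert inputs for one risk category
print(mu_expert(mean_period_loss=9_000_000.0, lambda_e=150.0, sigma_h=1.8))
```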

4.2.3. The Bayesian Estimators of Parameters

The Bayesian estimators of frequency and severity are determined by the following relationships:
$$\hat{\lambda}_{Bay}=\varepsilon_1\,\lambda_e+(1-\varepsilon_1)\,\lambda_h,\qquad \hat{\mu}_{Bay}=\varepsilon_2\,\mu_e+(1-\varepsilon_2)\,\mu_h,\qquad \text{with } \varepsilon_1=\varepsilon_2=25\%.$$
As a result, the Bayesian estimators of severity and frequency by risk category, knowing that the variance is a constant determined by maximum likelihood, are presented in Table 11 as follows.

4.2.4. Determination of the VaR by Risk Category

The determination of the VaR is made according to the approach presented above. The breakdown of the VaR based on the historical LDA and the Bayesian LDA by risk category is presented in Table 12 as follows.
The use of expert opinion reduced the VaR of each risk category. Indeed, the experts readjusted the parameters of the severity and frequency distributions for all categories in order to take into account organizational changes and the strengthening of the control system.

4.3. Capital Allocation

The capital is allocated in accordance with Formula (1). Each category receives a share of the capital allocated to the retail banking business line equal to the ratio of its $VaR_c$ to the sum of the VaRs of all categories $\big(\sum_{c=1}^{7}VaR_c\big)$. The capital share of each category is presented in Table 13 as follows.
The allocation of capital in retail banking shows that the integration of expert opinion increases the weight given to certain types of risk, in particular categories $RT_1$ and $RT_6$.
Indeed, the experts consider that the losses recorded do not represent the bank’s actual exposure to these two risks because:
  • For R T 1 , the database only includes proven losses, while risk events are generally adjusted without an accounting impact. However, they can have consequences if the losses recorded are not recovered;
  • For $RT_6$, the experts believe that fraud attempts target large amounts of money, especially the attempts that have not succeeded. If such attempts were to succeed, the impact would be great.
On the other hand, our approach is sensitive to several factors:
  • The bank studied is a medium-sized bank whose main activity is the granting of bank loans. Therefore, the use of simple and easy to implement approaches is its principal concern. However, other allocation approaches can be used to refine the allocation process;
  • The approach we propose is based on the average loss per risk category, which favors the category R T 7 . However, the collection approach used by the bank may bias the results because the bank accounts for the losses per fraud file even if a fraud is composed of different amounts distributed over several years;
  • We have defined a list of criteria to score the experts and define their weighting, which makes the process very sensitive to the choice of scoring tool.

5. Discussion and Conclusions

Internal models permit us to determine the economic capital independent of the regulatory capital and the impact of the occurrence of risk events at the level of the different entities and at the aggregate level under the Bottom-Up approach or the Top-Down approach.
For the risk identification process, banks are free to use their own models to achieve the objective of risk supervision in accordance with the second pillar relating to prudential risk management. This situation encourages the use of internal models that can be based on historical data, expert opinion, or a combination of historical data and expert opinions.
The use of expert opinion is essential in risk management given the recurrent changes in organization, business size, and control device. Indeed, expert opinions permit one to readjust estimates and assumptions based on historical data by considering the changes that have been operated.
The reliability of models incorporating expert opinions depends on the approach used to collect the requested information. Indeed, it is necessary to adopt rigorous procedures and approaches at the theoretical and practical levels in order to avoid model risk.
In this context, we have presented in this paper a process for collecting information from experts that is specific to operational risk, based on the Delphi method, which we believe will give relevant results for risk measurement if correctly administered.
For the prospects of internal models for quantifying operational risk, banks must separate regulatory capital requirements from internal requirements for managing the return/risk trade-off. Indeed, they must develop internal risk measures allowing them to manage their activities through risks and allocate the necessary equity capital for their business plans.

Author Contributions

All authors contributed to the entire process of writing this paper. H.M. wrote the original draft and carried out the statistical studies; all authors reviewed and edited the draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Simulation of Aggregate Operational Losses

To simulate the losses, we use the appropriate estimator. For the classical LDA approach, we use the maximum likelihood estimators $(\hat{\lambda},\hat{\alpha}_i,\hat{\beta}_i)$ of $(\lambda,\alpha,\beta)$, respectively the parameters of $P(\lambda)$ and $LN(\alpha,\beta)$. For the Bayesian approach, we use the Bayesian estimators $(\hat{\lambda}_{Bay},\hat{\mu}_{Bay},\hat{\sigma}_{Bay})$.

Appendix A.1. Presentation of the Simulation by the Inverse Cumulative Distribution Function

The Monte Carlo method consists of simulating a large sample of realizations $p_j$ of size $J=100{,}000$ in the following manner:
For $1\le j\le J$:
(1)
Simulate a realization $n_j$ of the frequency $N$ from the chosen frequency law ($P(\lambda)$ or $BN(a,b)$);
(2)
Simulate $n_j$ realizations $x_i$, $1\le i\le n_j$, of the severity $X$ from the chosen severity law ($LN(\alpha,\beta)$ or $Wei(\alpha,\beta)$);
(3)
Calculate $p_j=\sum_{i=1}^{n_j}x_i$, which constitutes a realization of the loss $P_N=\sum_{i=1}^{N}X_i$.
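A minimal sketch of these three steps, assuming a Poisson frequency and a lognormal severity; numpy's generators are used in place of the inverse-CDF routines detailed below, and the parameters are placeholders.

```python
import numpy as np

def simulate_aggregate_losses(lam, mu, sigma, n_sims=100_000, seed=0):
    """Monte Carlo simulation of P_N = sum_{i=1}^N X_i with N ~ P(lam)
    and X_i ~ LN(mu, sigma); returns the simulated aggregate losses p_j."""
    rng = np.random.default_rng(seed)
    frequencies = rng.poisson(lam, size=n_sims)            # step (1)
    losses = np.array([
        rng.lognormal(mu, sigma, size=n).sum()             # steps (2)-(3)
        for n in frequencies
    ])
    return losses

# Placeholder parameters (could be the MLE or the Bayesian estimators)
p = simulate_aggregate_losses(lam=25.0, mu=8.5, sigma=1.6)
var_999 = np.quantile(p, 0.999)                            # 99.9% percentile = VaR
print(var_999)
```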
Before presenting the simulation by the Monte Carlo method, we first cite the theorem of the inverse cumulative function that allows the simulation of continuous random variables.
Theorem A1.
Suppose $U$ is a uniform random variable on the interval $[0,1]$ and $F$ is a cumulative distribution function that is continuous and strictly increasing. Let $Y$ be the random variable defined from the inverse cumulative distribution function $F^{-1}$ by $Y=F^{-1}(U)$. Then, the cumulative distribution function of $Y$ is $F$.
Consequently, to simulate a realization $y_i$ of the random variable $Y$, which has $F$ as its cumulative distribution function, it suffices to:
  • Simulate a realization $u_i$ of the Uniform distribution $U[0,1]$;
  • Calculate the inverse cumulative distribution function $y_i=F^{-1}(u_i)$. Then, $y_i$ is considered a realization of $Y$.

Appendix A.1.1. Simulation of the Realizations $n_j$ for $1\le j\le 100{,}000$

To simulate the realizations of frequency N , we use Poisson’s distribution P ( λ ) or the gamma distribution Γ ( a , b ) .
(1) Simulation of Poisson’s distribution.
Property: Let $(V_i)_{i\ge1}$ be a sequence of exponential random variables of parameter $\lambda$. Then the random variable defined by
$$M=\begin{cases}\sup\Big\{k\in\mathbb{N}^{*}\,/\,\sum_{i=1}^{k}V_i\le1\Big\}, & V_1\le1\\[4pt] 0, & V_1>1\end{cases}$$
is a Poisson random variable of parameter $\lambda$.
To simulate the realizations of Poisson’s law of parameter λ, we use the following algorithm.
Step 1: Simulation of $n_1$
To simulate the realization $n_1$ of the frequency, we proceed as follows.
  • We simulate a realization $v_1$ of the law $Exp(\lambda)$ by the inverse cumulative distribution function. For that, we must
    • Simulate a realization $u_1$ of the Uniform law $U[0,1]$;
    • Use the inverse cumulative distribution function of the exponential law $Exp(\lambda)$, $F^{-1}(u)=-\frac{\ln(1-u)}{\lambda}$. We then deduce $v_1=F^{-1}(u_1)=-\frac{\ln(1-u_1)}{\lambda}$.
  • If $v_1>1$, then $n_1=0$.
If not, we simulate a second realization $v_2$ of the exponential law $Exp(\lambda)$ according to the same procedure. If $v_1+v_2>1$, then $n_1=1$ is a realization of the Poisson law of parameter $\lambda$; otherwise, we simulate $k$ realizations $v_i$, $1\le i\le k$, until $\sum_{i=1}^{k}v_i\le1$ and $\sum_{i=1}^{k+1}v_i>1$. The value $k$ that verifies the last two inequalities is the realization $n_1=k$ of the frequency.
Step j: Simulation of $n_j$, $2\le j\le 100{,}000$
We repeat Step 1 100,000 times and thereby obtain 100,000 realizations $n_j$.
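A sketch of this algorithm, which accumulates exponential inter-arrival times obtained by the inverse CDF until their sum exceeds 1; the parameter value and sample size are arbitrary.

```python
import numpy as np

def poisson_by_exponentials(lam, rng):
    """Simulate one Poisson(lam) realization by summing Exp(lam) variables,
    each obtained by inverse CDF: v = -ln(1 - u) / lam with u ~ U[0, 1)."""
    total, k = 0.0, 0
    while True:
        u = rng.uniform()
        total += -np.log(1.0 - u) / lam
        if total > 1.0:
            return k
        k += 1

rng = np.random.default_rng(0)
sample = [poisson_by_exponentials(4.0, rng) for _ in range(10_000)]
print(np.mean(sample))   # should be close to lam = 4.0
```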
(2) Simulation of the law $LN(\alpha,\beta)$.
To simulate the law $LN(\alpha,\beta)$, we use the inverse cumulative distribution function method, as follows:
  • Simulate a realization $u_i$ of the Uniform law $U[0,1]$;
  • Calculate $x_i=F_{(\alpha,\beta)}^{-1}(u_i)$, where $F_{(\alpha,\beta)}$ is the cumulative distribution function of the law $LN(\alpha,\beta)$. As $F_{(\alpha,\beta)}^{-1}(u_i)$ has no analytical expression, we compute $x_i$ numerically.

Appendix A.1.2. Determination of Operating Losses

For each realization n_j of the frequency distribution, we simulate n_j realizations of the severity distribution. The simulated loss p_j is the sum of these simulated realizations:
p_j = ∑_{i=1}^{n_j} x_i.

Appendix A.2. Calculation of the Capital at Operational Risk ( V a R )

The capital at operational risk is calculated by determining the 99.9% percentile of the empirical distribution of the losses p_j = ∑_{i=1}^{n_j} x_i, for 1 ≤ j ≤ 100,000, simulated by the Monte Carlo method.
Let F_P be the empirical cumulative distribution function of the loss P determined from the simulated realizations p_j. The function F_P is given by
F_P(y) = (number of p_j ≤ y) / (number of p_j).
The value at risk VaR is expressed by the following formula:
VaR = Inf{ y / F_P(y) ≥ 99.9% } = Inf{ y / (number of p_j ≤ y) / (number of p_j) ≥ 99.9% }.
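A sketch of this percentile computation, applied to the array of losses produced by the simulation sketch of Appendix A.1 (the parameter values are illustrative):

```python
import numpy as np

def value_at_risk(losses, level=0.999):
    """VaR = inf{ y : F_P(y) >= level }, where F_P is the empirical cumulative
    distribution function of the simulated losses."""
    sorted_losses = np.sort(losses)
    j = int(np.ceil(level * len(sorted_losses)))  # smallest rank j with j/J >= level
    return sorted_losses[j - 1]

losses = simulate_annual_losses(lam=10.57, mu=10.60, sigma=1.67)  # defined above
var_99_9 = value_at_risk(losses)
```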
In this paper, the frequency is modelled for a horizon of one year, T = 12 months, or by dividing the year T into k sub-horizons T_k = T/k for an integer k, 2 ≤ k ≤ 12.

Appendix A.2.1. The Annual V a R with Segmentation of the Database by Risk Category

The operational loss P_c of the risk category RT_c is a random variable defined by P_c = ∑_{i=1}^{N_c} X_{ci}, where:
N_c: the random variable that represents the frequency of the losses of the risk category RT_c;
X_{ci}: the random variable, for 1 ≤ i ≤ N_c, that represents the severity of the losses of the risk category RT_c.
Let n_{jc}, 1 ≤ j ≤ 100,000, be the simulated annual frequencies of the losses for the risk category RT_c, and let x_{ci} be the simulated realizations of the losses of the risk category RT_c. The realizations p_{jc} = ∑_{i=1}^{n_{jc}} x_{ci}, 1 ≤ j ≤ 100,000, make it possible to calculate the capital at risk VaR_c for each risk category RT_c. The annual VaR is the sum of the VaR_c, since the risk categories are assumed to be independent. The loss frequency is modelled for a horizon of one year, T = 12 months, or by dividing the year T into k sub-horizons T_k = T/k for an integer k, 2 ≤ k ≤ 12.

Appendix A.2.2. The Modelling of the Loss Frequency for the Annual Horizon

The chosen horizon is one year, T = 12 months, and the confidence level is 1 − α = 99.9%.
Let F_{P_c} be the empirical cumulative distribution function of the losses for the risk category RT_c. The capital at operational risk for the risk category RT_c is
VaR_c = Inf{ y / F_{P_c}(y) ≥ 99.9% }
The capital at risk on the annual horizon is the sum of the V a R c :
VaR = ∑_{c=1}^{7} VaR_c
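Combining the pieces above, a sketch of the per-category computation under the independence assumption; the parameter dictionary is illustrative and only shows two of the seven categories, using the classical estimates of Tables 6 and 7:

```python
# Illustrative (lambda, mu, sigma) triplets per risk category; only two are shown.
category_params = {
    "RT1": (10.57, 10.60, 1.67),
    "RT7": (7.12, 11.52, 2.49),
}

annual_var = 0.0
for name, (lam, mu, sigma) in category_params.items():
    losses_c = simulate_annual_losses(lam, mu, sigma)  # simulated losses of RT_c
    annual_var += value_at_risk(losses_c)              # VaR_c at the 99.9% level
```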

Appendix A.2.3. The Modelling of the Loss Frequency for the Sub-Horizon T_k = T/k, 2 ≤ k ≤ 12

Let F_{P_T,c} be the empirical cumulative distribution function of the operational loss of a given risk category RT_c for the horizon T = 1 year, determined from the simulated realizations p_{jc}, where n_{jc} is a realization of the loss frequency on the horizon T. The cumulative distribution function F_{P_T,c} is simulated k times over the horizon T. Let F_{P_T,c}^i be the ith simulation and VaR_{c,i} the ith capital at operational risk determined from the ith simulation of the losses. The capital at risk of the category on the annual horizon is the sum of the VaR_{c,i}, 1 ≤ i ≤ k:
VaR_c = ∑_{i=1}^{k} VaR_{c,i}.
The capital at risk on the annual horizon is the sum of the VaR_c:
VaR = ∑_{c=1}^{7} ∑_{i=1}^{k} VaR_{c,i}.
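A sketch of the sub-horizon aggregation for one risk category, reusing the functions defined above; scaling the Poisson parameter to λ/k on the sub-horizon T_k is an assumption of this sketch, the severity parameters being left unchanged:

```python
def var_subhorizon(lam, mu, sigma, k):
    """VaR_c when the year is split into k sub-horizons: the sub-horizon loss
    distribution is simulated k times and the k VaR_{c,i} figures are summed."""
    var_c = 0.0
    for _ in range(k):
        losses_i = simulate_annual_losses(lam / k, mu, sigma)  # i-th sub-horizon simulation
        var_c += value_at_risk(losses_i)                       # VaR_{c,i}
    return var_c
```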

Note 1: The rubrics for calculating the BI are detailed in Appendix A: the definition of the components of the BI of the Basel III reform.
Table 1. Rating of the criteria for the scoring experts.
Qualification | Low | Medium | High
Rating | 1 | 2 | 3
Table 2. Expert weighting according to the score function.
Score | 6 to 7 | 8 to 9 | 10 to 11 | 12 to 14 | 15 to 18
Weighting | 10% | 25% | 40% | 50% | 75%
Table 3. Descriptive statistics of the mean of individual losses X_i (in amounts).
Mean of individual losses X_i | Standard deviation | Skewness | Kurtosis
468,730 | 8,719,755.32 | 36.28 | 1430.99
Table 4. Statistical characteristics of individual losses X_i by risk category (in amounts).
RT_c | Distribution of X_i in number | Mean of X_i (in MAD) | Distribution of X_i by amount | Standard deviation | Skewness | Kurtosis
RT_1 | 12% | 781,175.04 | 16.8% | 9,355,985.39 | 15.33 | 234.22
RT_2 | 9% | 13,213.10 | 0.3% | 63,647.81 | 7.123 | 52.090
RT_3 | 45% | 34,573.71 | 0.7% | 363,852.27 | 20.47 | 457.38
RT_4 | 4% | 183,689.13 | 3.9% | 781,324.16 | 5.81 | 32.690
RT_5 | 2% | 199,146.42 | 4.3% | 84,075.40 | 1.853 | 4.475
RT_6 | 19% | 190,746.84 | 4.1% | 1,379,733.78 | 10.248 | 113.639
RT_7 | 10% | 3,249,849.84 | 69.9% | 25,490,889.73 | 13.408 | 184.914
Table 5. Statistical characteristics of frequency by risk category.
RT_c | Mean | Standard deviation
RT_1 | 10.57 | 11
RT_2 | 11.87 | 14.54
RT_3 | 52.96 | 54.92
RT_4 | 3.17 | 2.71
RT_5 | 2.92 | 3.15
RT_6 | 38.72 | 52.82
RT_7 | 7.12 | 10.08
Table 6. Estimation and adjustment test of the LN(μ_h, σ_h) by risk category.
RT_c | μ_h | σ_h | Kolmogorov–Smirnov p-value
RT_1 | 10.60 | 1.67 | 0.084
RT_2 | 7.51 | 1.58 | 0.258
RT_3 | 8.59 | 1.49 | 0.419
RT_4 | 9.84 | 2.09 | 0.831
RT_5 | 12.14 | 0.35 | 0.649
RT_6 | 8.08 | 2.49 | <0.0001
RT_7 | 11.52 | 2.49 | 0.723
Table 7. Estimation and adjustment test of the P(λ_h) and BN(a_h, b_h) by risk category.
RT_c | Poisson λ_h | Poisson chi-square p-value | Negative binomial a_h | Negative binomial b_h | Negative binomial chi-square p-value
RT_1 | 10.57 | <0.0001 | 0.83 | 12.87 | 0.01
RT_2 | 11.87 | <0.0001 | 0.74 | 16.12 | 0.040
RT_3 | 52.96 | <0.0001 | 0.87 | 60.84 | 0.385
RT_4 | 3.17 | <0.0001 | 2.93 | 1.08 | 0.017
RT_5 | 2.92 | <0.0001 | 2.38 | 1.23 | 0.030
RT_6 | 38.72 | <0.0001 | 0.42 | 92.66 | 0.054
RT_7 | 7.12 | <0.0001 | 1.25 | 5.68 | <0.0001
Table 8. Expert estimate of mean losses (PM) by risk category RT_c in 1000s of MAD.
RT_c | Mean annual empirical losses (PM) by category | Structure of mean losses by category | Expert estimate of mean losses (PM) by category
RT_1 | 18,966 | 31.6% | 21,313
RT_2 | 314 | 0.5% | 352
RT_3 | 2349 | 3.9% | 2640
RT_4 | 2266 | 3.8% | 2546
RT_5 | 1704 | 2.8% | 1915
RT_6 | 4706 | 7.8% | 5289
RT_7 | 29,762 | 49.5% | 33,445
Table 9. The experts' estimate of the parameter λ_e by risk category.
RT_c | λ_e
RT_1 | 11.5
RT_2 | 14.3
RT_3 | 54.6
RT_4 | 4.07
RT_5 | 3.40
RT_6 | 5.8
RT_7 | 3.5
Table 10. The expert estimate of the parameter μ_e by risk category.
RT_c | PM | λ_e | μ_e
RT_1 | 21,313 | 11.5 | 6.13
RT_2 | 352 | 14.3 | 1.95
RT_3 | 2640 | 54.6 | 2.77
RT_4 | 2546 | 4.07 | 4.25
RT_5 | 1915 | 3.40 | 6.27
RT_6 | 5289 | 5.8 | 3.72
RT_7 | 33,445 | 3.5 | 6.06
Table 11. The Bayesian estimators of parameters by risk category.
RT_c | λ̂_Bay for P(λ) | μ̂_Bay for LN(μ, σ) | σ_h
RT_1 | 10.80 | 9.48 | 1.67
RT_2 | 12.48 | 6.12 | 1.58
RT_3 | 53.37 | 7.14 | 1.49
RT_4 | 3.40 | 8.44 | 2.09
RT_5 | 3.04 | 10.67 | 0.35
RT_6 | 30.49 | 6.99 | 2.49
RT_7 | 6.22 | 10.16 | 2.49
Table 12. The VaR under the historical and Bayesian LDA by risk category (in 1000s of MAD).
RT_c | VaR_{C,h}(λ_h, μ_h, σ_h) | VaR_{C,Bay}(λ̂_Bay, μ̂_Bay, σ_h)
RT_1 | 45,526 | 14,449
RT_2 | 1560 | 367
RT_3 | 6926 | 1767
RT_4 | 45,526 | 11,764
RT_5 | 4026 | 955
RT_6 | 164,222 | 45,554
RT_7 | 1,641,160 | 384,680
Table 13. Capital allocation through the historical and Bayesian approaches.
RT_c | Capital allocated under the classical LDA (%) | Capital allocated under the Bayesian LDA (%) | Deviation (%)
RT_1 | 2.38 | 3.14 | 31.93
RT_2 | 0.08 | 0.08 | 0.00
RT_3 | 0.36 | 0.38 | 5.56
RT_4 | 2.38 | 2.56 | 7.56
RT_5 | 0.21 | 0.21 | 0.00
RT_6 | 8.60 | 9.91 | 15.23
RT_7 | 85.97 | 83.71 | −2.63
