Risks, Volume 7, Issue 2 (June 2019) – 36 articles

Cover Story: The cover figure shows the treatment-level healthcare data modeling approach. In the paper, the authors predict daily, weekly, and monthly medical charge amounts using an extension of the frequency-severity approach to modeling insurance expenditures from the actuarial science literature. The figure illustrates the prediction process, which begins by feeding the frequency-severity model with raw data and ends with the predicted daily, weekly, and monthly charge amounts. Each component of the model is fitted within a generalized linear modeling framework.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 2386 KiB  
Article
Predicting Motor Insurance Claims Using Telematics Data—XGBoost versus Logistic Regression
by Jessica Pesantez-Narvaez, Montserrat Guillen and Manuela Alcañiz
Risks 2019, 7(2), 70; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020070 - 20 Jun 2019
Cited by 86 | Viewed by 14188
Abstract
XGBoost is recognized as an algorithm with exceptional predictive capacity. Models for a binary response indicating the existence of accident claims versus no claims can be used to identify the determinants of traffic accidents. This study compared the relative performances of logistic regression and XGBoost approaches for predicting the existence of accident claims using telematics data. The dataset contained information from an insurance company about the individuals’ driving patterns—including total annual distance driven and percentage of total distance driven in urban areas. Our findings showed that logistic regression is a suitable model given its interpretability and good predictive capacity. XGBoost requires numerous model-tuning procedures to match the predictive performance of the logistic regression model and demands greater effort for interpretation. Full article
(This article belongs to the Special Issue Machine Learning in Insurance)
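As a rough illustration of the comparison the abstract describes, the sketch below fits a logistic regression and an XGBoost classifier to synthetic, telematics-like claim data and compares their AUCs. The feature names, sample size and hyperparameters are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch (not the authors' code): logistic regression vs. XGBoost
# on synthetic telematics-like data with a binary claim indicator.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier  # assumes the xgboost package is installed

rng = np.random.default_rng(0)
n = 5000
km_per_year = rng.gamma(shape=4.0, scale=3000.0, size=n)   # total annual distance driven (km)
pct_urban = rng.beta(2.0, 3.0, size=n)                     # share of distance driven in urban areas
age = rng.integers(18, 75, size=n)

# Hypothetical claim probability: more driving and more urban exposure raise risk.
lin = -3.0 + 0.00006 * km_per_year + 1.2 * pct_urban - 0.01 * (age - 40)
claim = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

X = np.column_stack([km_per_year, pct_urban, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, claim, test_size=0.3, random_state=0)

logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                    subsample=0.8, eval_metric="logloss").fit(X_tr, y_tr)

print("logistic regression AUC:", roc_auc_score(y_te, logreg.predict_proba(X_te)[:, 1]))
print("XGBoost AUC:            ", roc_auc_score(y_te, xgb.predict_proba(X_te)[:, 1]))
```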

18 pages, 2008 KiB  
Article
Smallholder Farmers’ Willingness to Pay for Agricultural Production Cost Insurance in Rural West Java, Indonesia: A Contingent Valuation Method (CVM) Approach
by Dadang Jainal Mutaqin and Koichi Usami
Risks 2019, 7(2), 69; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020069 - 20 Jun 2019
Cited by 22 | Viewed by 5846
Abstract
To reduce the negative impacts of risks in farming due to climate change, the government implemented agricultural production cost insurance in 2015. Although a huge amount of subsidy has been allocated by the government (80 percent of the premium), farmers’ participation rate is still low (23 percent of the target in 2016). In order to solve the issue, it is indispensable to identify farmers’ willingness to pay (WTP) for and determinants of their participation in agricultural production cost insurance. Based on a field survey of 240 smallholder farmers in the Garut District, West Java Province in August–October 2017 and February 2018, the contingent valuation method (CVM) estimated that farmers’ mean willingness to pay (WTP) was Rp 30,358/ha/cropping season ($2.25/ha/cropping season), which was 16 percent lower than the current premium (Rp 36,000/ha/cropping season = $2.67/ha/cropping season). Farmers who participated in agricultural production cost insurance shared some characteristics: operating larger farmland, more contact with agricultural extension service, lower expected production for the next cropping season, and a downstream area location. Full article
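For readers unfamiliar with CVM, a minimal sketch of the single-bounded dichotomous-choice setup is given below: with a linear-in-bid logit model, the (unrestricted) mean WTP equals minus the intercept divided by the bid coefficient. The bid design, preference parameters and use of a plain logit are assumptions for illustration, not the authors' survey design.

```python
# Minimal single-bounded dichotomous-choice CVM sketch (illustrative, not the authors' design):
# with a linear-in-bid logit, mean WTP = -intercept / bid coefficient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 240                                                       # survey size, as in the abstract
bids = rng.choice([15_000., 25_000., 35_000., 45_000., 55_000.], size=n)  # Rp/ha/season (hypothetical)

a_true, b_true = 3.0, -1e-4                                   # hypothetical preference parameters
p_yes = 1.0 / (1.0 + np.exp(-(a_true + b_true * bids)))
answer_yes = rng.binomial(1, p_yes)

# Fit on bids in thousands of rupiah for better numerical conditioning.
logit = LogisticRegression(C=1e6, max_iter=5000).fit((bids / 1000.0).reshape(-1, 1), answer_yes)
a_hat, b_hat = logit.intercept_[0], logit.coef_[0, 0]         # b_hat is per thousand rupiah
print("estimated mean WTP (Rp/ha/cropping season):", round(-a_hat / b_hat * 1000.0))
```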

16 pages, 407 KiB  
Article
Ruin Probability Functions and Severity of Ruin as a Statistical Decision Problem
by Emilio Gómez-Déniz, José María Sarabia and Enrique Calderín-Ojeda
Risks 2019, 7(2), 68; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020068 - 17 Jun 2019
Cited by 4 | Viewed by 3215
Abstract
It is known that the classical ruin function under an exponential claim-size distribution depends on two parameters, the mean claim size and the relative security loading. These parameters are assumed to be unknown and random; thus, one can consider a loss function that measures the loss sustained by a decision-maker who takes as valid a ruin function that is not correct. Using a squared-error loss function and appropriate distribution functions for these parameters, the problem of estimating the ruin function reduces to a mixture procedure. First, a bivariate distribution for mixing the two parameters jointly is considered; second, different univariate distributions for mixing the parameters separately are examined. As a consequence, a catalogue of ruin probability and severity-of-ruin functions, which are more flexible than the original ones, is obtained. The methodology is also extended to the Pareto claim size distribution. Several numerical examples illustrate the performance of these functions. Full article
(This article belongs to the Special Issue Loss Models: From Theory to Applications)
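For context, the classical ruin function referred to here is, for exponential claims with mean mu and relative security loading theta, psi(u) = exp(-theta*u/(mu*(1+theta)))/(1+theta). The sketch below evaluates it and, to illustrate the mixing idea in the abstract, averages it over a gamma prior on the loading; the choice of prior is an assumption, not the paper's.

```python
# Classical Cramer-Lundberg ruin probability for exponential claims, and a simple mixture
# over an uncertain security loading (the gamma prior is an illustrative assumption).
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

def ruin_prob(u, mu, theta):
    """psi(u) = exp(-theta*u / (mu*(1+theta))) / (1+theta) for exponential claims with mean mu."""
    return np.exp(-theta * u / (mu * (1.0 + theta))) / (1.0 + theta)

mu = 1.0
prior = gamma(a=2.0, scale=0.1)          # hypothetical prior on theta, with mean 0.2

def mixed_ruin_prob(u):
    value, _ = quad(lambda th: ruin_prob(u, mu, th) * prior.pdf(th), 0.0, np.inf)
    return value

for u in (0.0, 5.0, 10.0, 20.0):
    print(f"u={u:5.1f}   psi(theta=0.2)={ruin_prob(u, mu, 0.2):.4f}   mixed psi={mixed_ruin_prob(u):.4f}")
```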
23 pages, 621 KiB  
Article
Credit Risk Assessment Model for Small and Micro-Enterprises: The Case of Lithuania
by Rasa Kanapickiene and Renatas Spicas
Risks 2019, 7(2), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020067 - 13 Jun 2019
Cited by 13 | Viewed by 4999
Abstract
In this research, trade credit is analysed from a seller (supplier) perspective. Trade credit allows the supplier to increase sales and profits but creates the risk that the customer will not pay, and at the same time increases the risk of the supplier’s insolvency. If the supplier is a small or micro-enterprise (SMiE), it usually faces constraints on human and technical resources. Therefore, when dealing with these issues, the supplier needs a highly accurate but simple and easily interpretable trade credit risk assessment model that allows for assessing the risk of insolvency of buyers (who are usually SMiE). The aim of the research is to create a statistical enterprise trade credit risk assessment (ETCRA) model for Lithuanian small and micro-enterprises (SMiE). In the empirical analysis, the financial and non-financial data of 734 small and micro-sized enterprises in the period of 2010–2012 were chosen as the sample. Based on logistic regression, the ETCRA model was developed using financial and non-financial variables. In the ETCRA model, the enterprise’s financial performance is assessed from different perspectives: profitability, liquidity, solvency, and activity. Different model variants were created using (i) only financial ratios and (ii) financial ratios and non-financial variables. The inclusion of non-financial variables does not substantially improve the characteristics of the model, which means that models using only financial ratios can be used in practice, although models that include non-financial variables are also valid. The designed models can be used by suppliers when deciding whether to grant trade credit to small or micro-enterprises. Full article
(This article belongs to the Special Issue Advances in Credit Risk Modeling and Management)

22 pages, 824 KiB  
Article
Risk Factor Evolution for Counterparty Credit Risk under a Hidden Markov Model
by Ioannis Anagnostou and Drona Kandhai
Risks 2019, 7(2), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020066 - 12 Jun 2019
Cited by 3 | Viewed by 4534
Abstract
One of the key components of counterparty credit risk (CCR) measurement is generating scenarios for the evolution of the underlying risk factors, such as interest and exchange rates, equity and commodity prices, and credit spreads. Geometric Brownian Motion (GBM) is a widely used method for modeling the evolution of exchange rates. An important limitation of GBM is that, due to the assumption of constant drift and volatility, stylized facts of financial time-series, such as volatility clustering and heavy-tailedness in the returns distribution, cannot be captured. We propose a model where volatility and drift are able to switch between regimes; more specifically, they are governed by an unobservable Markov chain. Hence, we model exchange rates with a hidden Markov model (HMM) and generate scenarios for counterparty exposure using this approach. A numerical study is carried out and backtesting results for a number of exchange rates are presented. The impact of using a regime-switching model on counterparty exposure is found to be profound for derivatives with non-linear payoffs. Full article
(This article belongs to the Special Issue Advances in Credit Risk Modeling and Management)
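A minimal sketch of the regime-switching scenario generation described here: a two-state hidden Markov chain drives the drift and volatility of a GBM step for an exchange rate. The transition matrix and regime parameters are illustrative assumptions, not calibrated values from the paper.

```python
# Illustrative regime-switching GBM exchange-rate scenario generator
# (two hidden regimes; all parameter values are assumptions).
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.98, 0.02],        # transition matrix of the hidden Markov chain
              [0.05, 0.95]])
mu = np.array([0.01, -0.02])       # per-regime annualised drift
sigma = np.array([0.08, 0.25])     # per-regime annualised volatility (calm vs. turbulent)

def simulate_paths(s0=1.0, n_steps=250, n_paths=10_000, dt=1.0 / 250):
    s = np.full(n_paths, s0)
    state = np.zeros(n_paths, dtype=int)
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = s
    for t in range(1, n_steps + 1):
        # regime switch (two states, so only the probability of moving to state 1 is needed) ...
        state = (rng.random(n_paths) < P[state, 1]).astype(int)
        # ... followed by a GBM step with regime-dependent drift and volatility
        z = rng.standard_normal(n_paths)
        s = s * np.exp((mu[state] - 0.5 * sigma[state] ** 2) * dt + sigma[state] * np.sqrt(dt) * z)
        paths[t] = s
    return paths

scenarios = simulate_paths()
print("1-year scenario quantiles:", np.quantile(scenarios[-1], [0.01, 0.50, 0.99]).round(4))
```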

9 pages, 319 KiB  
Article
Generalized Multiplicative Risk Apportionment
by Hongxia Wang
Risks 2019, 7(2), 65; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020065 - 12 Jun 2019
Cited by 1 | Viewed by 2198
Abstract
This work examines apportionment of multiplicative risks by considering three dominance orderings: first-degree stochastic dominance, Rothschild and Stiglitz’s increase in risk and downside risk increase. We use the relative nth-degree risk aversion measure and decreasing relative nth-degree risk aversion to provide conditions guaranteeing the preference for “harm disaggregation” of multiplicative risks. Further, we relate our conclusions to the preference toward bivariate lotteries, which interpret correlation-aversion, cross-prudence and cross-temperance. Full article
(This article belongs to the Special Issue Model Risk and Risk Measures)
17 pages, 414 KiB  
Article
Default Ambiguity
by Tolulope Fadina and Thorsten Schmidt
Risks 2019, 7(2), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020064 - 10 Jun 2019
Cited by 8 | Viewed by 2838
Abstract
This paper discusses ambiguity in the context of single-name credit risk. We focus on uncertainty in the default intensity but also discuss uncertainty in the recovery in a fractional recovery of the market value. This approach is a first step towards integrating uncertainty in credit-risky term structure models and can profit from its simplicity. We derive drift conditions in a Heath–Jarrow–Morton forward rate setting in the case of ambiguous default intensity in combination with zero recovery, and in the case of ambiguous fractional recovery of the market value. Full article
(This article belongs to the Special Issue Advances in Credit Risk Modeling and Management)

8 pages, 334 KiB  
Article
A Renewal Shot Noise Process with Subexponential Shot Marks
by Yiqing Chen
Risks 2019, 7(2), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020063 - 05 Jun 2019
Cited by 2 | Viewed by 2524
Abstract
We investigate a shot noise process with subexponential shot marks occurring at renewal epochs. Our main result is a precise asymptotic formula for its tail probability. In doing so, some recent results regarding sums of randomly weighted subexponential random variables play a crucial role. Full article
(This article belongs to the Special Issue Heavy-Tail Phenomena in Insurance, Finance, and Other Related Fields)
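As a reminder of the object being studied, a shot noise process here has the form S(t) = sum over shots arriving before t of X_i * h(t - tau_i). The sketch simulates such a process with gamma inter-arrival times (renewal epochs), Pareto shot marks (a subexponential distribution) and an exponential decay kernel, and estimates a tail probability by crude Monte Carlo; all distributional choices are assumptions for illustration.

```python
# Crude Monte Carlo for a renewal shot noise process S(t) = sum_{tau_i <= t} X_i * exp(-delta*(t - tau_i)),
# with gamma inter-arrivals and Pareto (subexponential) marks; all choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)
t_horizon, delta, alpha = 10.0, 0.5, 1.8      # time horizon, decay rate, Pareto tail index

def simulate_shot_noise(n_sims=20_000):
    out = np.empty(n_sims)
    for i in range(n_sims):
        epochs = []
        s = rng.gamma(2.0, 0.5)               # renewal inter-arrival times
        while s <= t_horizon:
            epochs.append(s)
            s += rng.gamma(2.0, 0.5)
        epochs = np.array(epochs)
        marks = rng.pareto(alpha, size=epochs.size) + 1.0   # Pareto(alpha) marks with minimum 1
        out[i] = np.sum(marks * np.exp(-delta * (t_horizon - epochs)))
    return out

samples = simulate_shot_noise()
for x in (10.0, 20.0, 40.0):
    print(f"estimated P(S(t) > {x}): {np.mean(samples > x):.4f}")
```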
27 pages, 558 KiB  
Article
Analysis of Stochastic Reserving Models By Means of NAIC Claims Data
by László Martinek
Risks 2019, 7(2), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020062 - 04 Jun 2019
Cited by 6 | Viewed by 3153
Abstract
In the past two decades increasing computational power resulted in the development of more advanced claims reserving techniques, allowing the stochastic branch to overcome the deterministic methods, resulting in forecasts of enhanced quality. Hence, not only point estimates, but predictive distributions can be generated in order to forecast future claim amounts. The significant expansion in the variety of models requires the validation of these methods and the creation of supporting techniques for appropriate decision making. The present article compares and validates several existing and self-developed stochastic methods on actual data applying comparison measures in an algorithmic manner. Full article

22 pages, 1751 KiB  
Article
The Investigation of a Forward-Rate Mortality Framework
by Daniel H. Alai, Katja Ignatieva and Michael Sherris
Risks 2019, 7(2), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020061 - 01 Jun 2019
Cited by 3 | Viewed by 2799
Abstract
Stochastic mortality models have been developed for a range of applications from demographic projections to financial management. Financial risk based models are built on methods used for interest rates and apply these to mortality rates. They have the advantage of being applicable to financial pricing and the management of longevity risk. Olivier and Jeffery (2004) and Smith (2005) proposed a model based on a forward-rate mortality framework with stochastic factors driven by univariate gamma random variables irrespective of age or duration. We assess and further develop this model. We generalize random shocks from a univariate gamma to a univariate Tweedie distribution and allow for the distributions to vary by age. Furthermore, since dependence between ages is an observed characteristic of mortality rate improvements, we formulate a multivariate framework using copulas. We find that dependence increases with age and introduce a suitable covariance structure, one that is related to the notion of a minimum. The resulting model provides a more realistic basis for capturing the risk of mortality improvements and serves to enhance longevity risk management for pension and insurance funds. Full article

31 pages, 607 KiB  
Article
A General Framework for Portfolio Theory. Part III: Multi-Period Markets and Modular Approach
by Stanislaus Maier-Paape, Andreas Platen and Qiji Jim Zhu
Risks 2019, 7(2), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020060 - 01 Jun 2019
Viewed by 2632
Abstract
This is Part III of a series of papers which focus on a general framework for portfolio theory. Here, we extend the general framework for portfolio theory in a one-period financial market, as introduced in Part I [Maier-Paape and Zhu, Risks 2018, 6(2), 53], to multi-period markets. This extension is reasonable for applications. More importantly, we take a new approach, the “modular portfolio theory”, which is built from the interaction among four related modules: (a) multi-period market model; (b) trading strategies; (c) risk and utility functions (performance criteria); and (d) the optimization problem (efficient frontier and efficient portfolio). An important concept that allows dealing with the more general framework discussed here is a trading strategy generating function. This concept limits the discussion to a special class of manageable trading strategies, which is still wide enough to cover many frequently used trading strategies, for instance “constant weight” (fixed fraction). As an application, we discuss the utility function of compounded return and the risk measure of relative log drawdowns. Full article

20 pages, 871 KiB  
Article
American Options on High Dividend Securities: A Numerical Investigation
by Francesco Rotondi
Risks 2019, 7(2), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020059 - 21 May 2019
Viewed by 3081
Abstract
I document a sizeable bias that might arise when valuing out of the money American options via the Least Square Method proposed by Longstaff and Schwartz (2001). The key point of this algorithm is the regression-based estimate of the continuation value of an American option. If this regression is ill-posed, the procedure might deliver biased results. The price of the American option might even fall below the price of its European counterpart. For call options, this is likely to occur when the dividend yield of the underlying is high. This distortion is documented within the standard Black–Scholes–Merton model as well as within its most common extensions (the jump-diffusion, the stochastic volatility and the stochastic interest rates models). Finally, I propose two easy and effective workarounds that fix this distortion. Full article
(This article belongs to the Special Issue Applications of Stochastic Optimal Control to Economics and Finance)

24 pages, 1151 KiB  
Article
Revisiting Calibration of the Solvency II Standard Formula for Mortality Risk: Does the Standard Stress Scenario Provide an Adequate Approximation of Value-at-Risk?
by Rokas Gylys and Jonas Šiaulys
Risks 2019, 7(2), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020058 - 19 May 2019
Cited by 3 | Viewed by 5040
Abstract
The primary objective of this work is to analyze model based Value-at-Risk associated with mortality risk arising from issued term life assurance contracts and to compare the results with the capital requirements for mortality risk as determined using Solvency II Standard Formula. In particular, two approaches to calculate Value-at-Risk are analyzed: one-year VaR and run-off VaR. The calculations of Value-at-Risk are performed using stochastic mortality rates which are calibrated using the Lee-Carter model fitted using mortality data of selected European countries. Results indicate that, depending on the approach taken to calculate Value-at-Risk, the key factors driving its relative size are: sensitivity of technical provisions to the latest mortality experience, volatility of mortality rates in a country, policy term and benefit formula. Overall, we found that Solvency II Standard Formula on average delivers an adequate capital requirement, however, we also highlight particular situations where it could understate or overstate portfolio specific model based Value-at-Risk for mortality risk. Full article
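Since the stochastic mortality rates here are calibrated with a Lee-Carter model, a minimal SVD-based Lee-Carter fit is sketched below on synthetic central death rates, using the usual identification constraints (the b_x sum to one, the k_t sum to zero). The synthetic data, age range and years are assumptions; the paper's full Value-at-Risk calculation is not reproduced.

```python
# Minimal SVD-based Lee-Carter fit, log m_{x,t} = a_x + b_x * k_t, on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
ages, years = np.arange(30, 91), np.arange(1980, 2017)

# Synthetic central death rates with an overall downward mortality trend plus noise.
a_true = -9.0 + 0.09 * (ages - 30)
b_true = np.full(ages.size, 1.0 / ages.size)
k_true = np.linspace(15.0, -15.0, years.size)
log_m = a_true[:, None] + b_true[:, None] * k_true[None, :] + rng.normal(0.0, 0.02, (ages.size, years.size))

# Lee-Carter estimation: a_x = row means, then a rank-1 SVD of the centred matrix.
a_hat = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()               # normalise so that the b_x sum to one
k_hat = s[0] * Vt[0, :] * U[:, 0].sum()       # the k_t then sum to zero by construction

# A random walk with drift for k_t is the usual next step before simulating mortality scenarios.
drift, vol = np.diff(k_hat).mean(), np.diff(k_hat).std(ddof=1)
print("estimated drift and volatility of k_t:", round(drift, 3), round(vol, 3))
```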

15 pages, 987 KiB  
Article
The Determinants of Market-Implied Recovery Rates
by Pascal François
Risks 2019, 7(2), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020057 - 18 May 2019
Cited by 4 | Viewed by 3515
Abstract
In the presence of recovery risk, the recovery rate is a random variable whose risk-neutral expectation can be inferred from the prices of defaultable instruments. I extract market-implied recovery rates from the term structures of credit default swap spreads for a sample of 497 United States (U.S.) corporate issuers over the 2005–2014 period. I analyze the explanatory factors of market-implied recovery rates within a linear regression framework and also within a Tobit model, and I compare them with the determinants of historical recovery rates that were previously identified in the literature. In contrast to their historical counterparts, market-implied recovery rates are mostly driven by macroeconomic factors and long-term, issuer-specific variables. Short-term financial variables and industry conditions significantly impact the slope of market-implied recovery rates. These results indicate that the design of a recovery risk model should be based on specific market factors, not on the statistical evidence that is provided by historical recovery rates. Full article
(This article belongs to the Special Issue Advances in Credit Risk Modeling and Management)

14 pages, 6427 KiB  
Article
Statistical Inference for the Beta Coefficient
by Taras Bodnar, Arjun K. Gupta, Valdemar Vitlinskyi and Taras Zabolotskyy
Risks 2019, 7(2), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020056 - 15 May 2019
Cited by 4 | Viewed by 4089
Abstract
The beta coefficient plays a crucial role in finance as a risk measure of a portfolio in comparison to the benchmark portfolio. In the paper, we investigate statistical properties of the sample estimator for the beta coefficient. Assuming that both the holding portfolio and the benchmark portfolio consist of the same assets whose returns are multivariate normally distributed, we provide the finite sample and the asymptotic distributions of the sample estimator for the beta coefficient. These findings are used to derive a statistical test for the beta coefficient and to construct a confidence interval for the beta coefficient. Moreover, we show that the sample estimator is an unbiased estimator for the beta coefficient. The theoretical results are implemented in an empirical study. Full article
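The sample beta in question is the ratio of the sample covariance between portfolio and benchmark returns to the sample variance of benchmark returns. The sketch computes it on simulated normal returns and adds a simple bootstrap interval as a generic stand-in; the exact finite-sample and asymptotic distributions derived in the paper are not reproduced.

```python
# Sample beta of a portfolio against a benchmark, with a generic bootstrap confidence interval
# (the paper's exact finite-sample results are not reproduced here; data are simulated).
import numpy as np

rng = np.random.default_rng(5)
n, true_beta = 250, 1.3
r_bench = rng.normal(0.0004, 0.01, n)                       # benchmark returns
r_port = true_beta * r_bench + rng.normal(0.0, 0.005, n)    # portfolio returns

def sample_beta(rp, rb):
    return np.cov(rp, rb, ddof=1)[0, 1] / np.var(rb, ddof=1)

boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                             # resample return pairs with replacement
    boot[i] = sample_beta(r_port[idx], r_bench[idx])

print("beta estimate:   ", round(sample_beta(r_port, r_bench), 3))
print("95% bootstrap CI:", np.quantile(boot, [0.025, 0.975]).round(3))
```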

16 pages, 557 KiB  
Article
Model Efficiency and Uncertainty in Quantile Estimation of Loss Severity Distributions
by Vytaras Brazauskas and Sahadeb Upretee
Risks 2019, 7(2), 55; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020055 - 15 May 2019
Cited by 1 | Viewed by 2553
Abstract
Quantiles of probability distributions play a central role in the definition of risk measures (e.g., value-at-risk, conditional tail expectation) which in turn are used to capture the riskiness of the distribution tail. Estimates of risk measures are needed in many practical situations such as in pricing of extreme events, developing reserve estimates, designing risk transfer strategies, and allocating capital. In this paper, we present the empirical nonparametric and two types of parametric estimators of quantiles at various levels. For parametric estimation, we employ the maximum likelihood and percentile-matching approaches. Asymptotic distributions of all the estimators under consideration are derived when data are left-truncated and right-censored, which is a typical loss variable modification in insurance. Then, we construct relative efficiency curves (REC) for all the parametric estimators. Specific examples of such curves are provided for exponential and single-parameter Pareto distributions for a few data truncation and censoring cases. Additionally, using simulated data we examine how wrong quantile estimates can be when one makes incorrect modeling assumptions. The numerical analysis is also supplemented with standard model diagnostics and validation (e.g., quantile-quantile plots, goodness-of-fit tests, information criteria) and presents an example of when those methods can mislead the decision maker. These findings pave the way for further work on RECs with potential for them being developed into an effective diagnostic tool in this context. Full article

22 pages, 1179 KiB  
Article
Direct and Hierarchical Models for Aggregating Spatially Dependent Catastrophe Risks
by Rafał Wójcik, Charlie Wusuo Liu and Jayanta Guin
Risks 2019, 7(2), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020054 - 08 May 2019
Cited by 1 | Viewed by 6165
Abstract
We present several fast algorithms for computing the distribution of a sum of spatially dependent, discrete random variables to aggregate catastrophe risk. The algorithms are based on direct and hierarchical copula trees. Computing speed comes from the fact that loss aggregation at branching nodes is based on a combination of a fast approximation to brute-force convolution, arithmetization (regridding), and a linear-complexity method for computing the distribution of a comonotonic sum of risks. We discuss the impact of tree topology on the second-order moments and tail statistics of the resulting distribution of the total risk. We test the performance of the presented models by accumulating ground-up loss for 29,000 risks affected by hurricane peril. Full article

11 pages, 1383 KiB  
Article
The Population Accuracy Index: A New Measure of Population Stability for Model Monitoring
by Ross Taplin and Clive Hunt
Risks 2019, 7(2), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020053 - 06 May 2019
Cited by 12 | Viewed by 11165
Abstract
Risk models developed on one dataset are often applied to new data and, in such cases, it is prudent to check that the model is suitable for the new data. An important application is in the banking industry, where statistical models are applied to loans to determine provisions and capital requirements. These models are developed on historical data, and regulations require their monitoring to ensure they remain valid on current portfolios—often years after the models were developed. The Population Stability Index (PSI) is an industry standard to measure whether the distribution of the current data has shifted significantly from the distribution of the data used to develop the model. This paper explores several disadvantages of the PSI and proposes the Population Accuracy Index (PAI) as an alternative. The superior properties and interpretation of the PAI are discussed, and it is concluded that the PAI can more accurately summarise the level of population stability, helping risk analysts and managers determine whether the model remains fit-for-purpose. Full article
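The industry-standard PSI that the paper critiques is the binned measure PSI = sum_i (a_i - e_i) * ln(a_i / e_i), where e_i and a_i are the development-sample and current-sample proportions in bin i. The sketch computes it on simulated score distributions; the PAI proposed in the paper has its own definition and is not reproduced here.

```python
# Standard Population Stability Index on binned model scores (simulated data; the proposed PAI
# is defined in the paper and is not reproduced here).
import numpy as np

rng = np.random.default_rng(6)
dev_scores = rng.beta(2.0, 5.0, 50_000)                    # scores at model development
cur_scores = rng.beta(2.2, 4.5, 20_000)                    # scores on the current portfolio (shifted)

bins = np.quantile(dev_scores, np.linspace(0.0, 1.0, 11))  # decile bins from the development sample
bins[0], bins[-1] = -np.inf, np.inf

expected = np.histogram(dev_scores, bins=bins)[0] / dev_scores.size
actual = np.histogram(cur_scores, bins=bins)[0] / cur_scores.size

psi = np.sum((actual - expected) * np.log(actual / expected))
print("PSI:", round(psi, 4))   # common rules of thumb flag values above roughly 0.1 to 0.25
```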

26 pages, 512 KiB  
Article
Spatial Risk Measures and Rate of Spatial Diversification
by Erwan Koch
Risks 2019, 7(2), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020052 - 02 May 2019
Cited by 3 | Viewed by 3014
Abstract
An accurate assessment of the risk of extreme environmental events is of great importance for populations, authorities and the banking/insurance/reinsurance industry. Koch (2017) introduced a notion of spatial risk measure and a corresponding set of axioms which are well suited to analyze the risk due to events having a spatial extent, precisely such as environmental phenomena. The axiom of asymptotic spatial homogeneity is of particular interest since it allows one to quantify the rate of spatial diversification when the region under consideration becomes large. In this paper, we first investigate the general concepts of spatial risk measures and corresponding axioms further and thoroughly explain the usefulness of this theory for both actuarial science and practice. Second, in the case of a general cost field, we give sufficient conditions such that spatial risk measures associated with expectation, variance, value-at-risk as well as expected shortfall and induced by this cost field satisfy the axioms of asymptotic spatial homogeneity of order 0, −2, −1 and −1, respectively. Last but not least, in the case where the cost field is a function of a max-stable random field, we provide conditions on both the function and the max-stable field ensuring the latter properties. Max-stable random fields are relevant when assessing the risk of extreme events since they appear as a natural extension of multivariate extreme-value theory to the level of random fields. Overall, this paper improves our understanding of spatial risk measures as well as of their properties with respect to the space variable and generalizes many results obtained in Koch (2017). Full article
(This article belongs to the Special Issue Risk, Ruin and Survival: Decision Making in Insurance and Finance)
30 pages, 3450 KiB  
Article
The Optimum Leverage Level of the Banking Sector
by Sagara Dewasurendra, Pedro Judice and Qiji Zhu
Risks 2019, 7(2), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020051 - 01 May 2019
Cited by 6 | Viewed by 4254
Abstract
Banks make profits from the difference between short-term and long-term loan interest rates. To issue loans, banks raise funds from capital markets. Since the long-term loan rate is relatively stable, but short-term interest is usually variable, there is an interest rate risk. Therefore, banks need information about the optimal leverage strategies based on the current economic situation. Recent studies on the economic crisis by many economists showed that the crisis was due to too much leveraging by “big banks”. This leveraging turns out to be close to Kelly’s optimal point. It is known that Kelly’s strategy does not address risk adequately. We used the return–drawdown ratio and inflection point of Kelly’s cumulative return curve in a finite investment horizon to derive more conservative leverage levels. Moreover, we carried out a sensitivity analysis to determine strategies during a period of interest rates increase, which is the most important and risky period to leverage. Thus, we brought theoretical results closer to practical applications. Furthermore, by using the sensitivity analysis method, banks can change the allocation sizes to loans with different maturities to mediate the risks corresponding to different monetary policy environments. This provides bank managers flexible tools in mitigating risk. Full article
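For a lognormal wealth process with excess drift mu - r and volatility sigma, the long-run growth rate at leverage f is g(f) = r + f*(mu - r) - 0.5*(f*sigma)^2, maximised at the Kelly point f* = (mu - r)/sigma^2; beyond f*, extra leverage reduces growth while increasing drawdown risk, which is why more conservative levels are preferred. The sketch evaluates this with made-up parameters; the paper's return-drawdown criterion and sensitivity analysis are not reproduced.

```python
# Kelly-optimal leverage for a lognormal wealth process (parameter values are illustrative assumptions).
# g(f) = r + f*(mu - r) - 0.5*(f*sigma)^2 is maximised at f* = (mu - r) / sigma^2.
mu, r, sigma = 0.06, 0.02, 0.10        # asset drift, funding rate, volatility (hypothetical)

def growth_rate(f):
    return r + f * (mu - r) - 0.5 * (f * sigma) ** 2

f_kelly = (mu - r) / sigma ** 2
for f in (0.25 * f_kelly, 0.5 * f_kelly, f_kelly, 1.25 * f_kelly):
    print(f"leverage {f:5.2f}x  ->  long-run growth rate {growth_rate(f):.4f}")
```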

20 pages, 585 KiB  
Article
Practice Oriented and Monte Carlo Based Estimation of the Value-at-Risk for Operational Risk Measurement
by Francesca Greselin, Fabio Piacenza and Ričardas Zitikis
Risks 2019, 7(2), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020050 - 01 May 2019
Cited by 4 | Viewed by 3838
Abstract
We explore the Monte Carlo steps required to reduce the sampling error of the estimated 99.9% quantile within an acceptable threshold. Our research is of primary interest to practitioners working in the area of operational risk measurement, where the annual loss distribution cannot be analytically determined in advance. Usually, the frequency and the severity distributions should be adequately combined and elaborated with Monte Carlo methods, in order to estimate the loss distributions and risk measures. Naturally, financial analysts and regulators are interested in mitigating sampling errors, as prescribed in EU Regulation 2018/959. In particular, the sampling error of the 99.9% quantile is of paramount importance, along the lines of EU Regulation 575/2013. The Monte Carlo error for the operational risk measure is here assessed on the basis of the binomial distribution. Our approach is then applied to realistic simulated data, yielding a comparable precision of the estimate with a much lower computational effort, when compared to bootstrap, Monte Carlo repetition, and two other methods based on numerical optimization. Full article
(This article belongs to the Special Issue Risk, Ruin and Survival: Decision Making in Insurance and Finance)
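The binomial argument mentioned in the abstract can be used as follows: for n independent simulated annual losses, the count of observations not exceeding the true 99.9% quantile is Binomial(n, 0.999), which yields a distribution-free order-statistic confidence interval for that quantile. The sketch builds such an interval for a hypothetical lognormal loss distribution; it is a generic construction, not the authors' exact procedure.

```python
# Distribution-free, binomial-based confidence interval for the 99.9% quantile of a simulated
# annual loss distribution (the lognormal loss model and sample sizes are illustrative).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)

def quantile_with_ci(losses, p=0.999, level=0.95):
    x = np.sort(losses)
    n = x.size
    lo = max(int(binom.ppf((1 - level) / 2, n, p)), 1)           # lower order-statistic rank
    hi = min(int(binom.ppf(1 - (1 - level) / 2, n, p)) + 1, n)   # upper order-statistic rank
    point = x[int(np.ceil(n * p)) - 1]
    return point, x[lo - 1], x[hi - 1]

for n in (100_000, 1_000_000):
    losses = rng.lognormal(mean=10.0, sigma=2.0, size=n)         # stand-in for the annual loss distribution
    q, lo, hi = quantile_with_ci(losses)
    print(f"n={n:>9,}  VaR 99.9% ~ {q:14,.0f}  95% CI=({lo:14,.0f}, {hi:14,.0f})  rel. width={(hi - lo) / q:.1%}")
```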

23 pages, 907 KiB  
Article
Stackelberg Equilibrium Premium Strategies for Push-Pull Competition in a Non-Life Insurance Market with Product Differentiation
by Søren Asmussen, Bent Jesper Christensen and Julie Thøgersen
Risks 2019, 7(2), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020049 - 01 May 2019
Cited by 6 | Viewed by 3826
Abstract
Two insurance companies $I_1, I_2$ with reserves $R_1(t), R_2(t)$ compete for customers, such that in a suitable differential game the smaller company $I_2$ with $R_2(0) < R_1(0)$ aims at minimizing $R_1(t) - R_2(t)$ by using the premium $p_2$ as control and the larger $I_1$ at maximizing by using $p_1$. Deductibles $K_1, K_2$ are fixed but may be different. If $K_1 > K_2$ and $I_2$ is the leader choosing its premium first, conditions for a Stackelberg equilibrium are established. For gamma-distributed rates of claim arrivals, explicit equilibrium premiums are obtained, and shown to depend on the running reserve difference. The analysis is based on the diffusion approximation to a standard Cramér-Lundberg risk process extended to allow investment in a risk-free asset. Full article
(This article belongs to the Special Issue Recent Development in Actuarial Science and Related Fields)

23 pages, 1315 KiB  
Article
Optimal Excess-of-Loss Reinsurance for Stochastic Factor Risk Models
by Matteo Brachetta and Claudia Ceci
Risks 2019, 7(2), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020048 - 01 May 2019
Cited by 10 | Viewed by 3200
Abstract
We study the optimal excess-of-loss reinsurance problem when both the intensity of the claims arrival process and the claim size distribution are influenced by an exogenous stochastic factor. We assume that the insurer’s surplus is governed by a marked point process with dual-predictable projection affected by an environmental factor and that the insurance company can borrow and invest money at a constant real-valued risk-free interest rate r. Our model allows for stochastic risk premia, which take into account risk fluctuations. Using stochastic control theory based on the Hamilton-Jacobi-Bellman equation, we analyze the optimal reinsurance strategy under the criterion of maximizing the expected exponential utility of the terminal wealth. A verification theorem for the value function in terms of classical solutions of a backward partial differential equation is provided. Finally, some numerical results are discussed. Full article
(This article belongs to the Special Issue Applications of Stochastic Optimal Control to Economics and Finance)

35 pages, 1739 KiB  
Article
Contingent Convertible Debt: The Impact on Equity Holders
by Delphine Boursicot, Geneviève Gauthier and Farhad Pourkalbassi
Risks 2019, 7(2), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020047 - 29 Apr 2019
Cited by 1 | Viewed by 4277
Abstract
Contingent Convertible (CoCo) is a hybrid debt issued by banks with a specific feature forcing its conversion to equity in the event of the bank’s financial distress. CoCo carries two major risks: the risk of default, which threatens any type of debt instrument, plus the exclusive risk of mandatory conversion. In this paper, we propose a model to value CoCo debt instruments as a function of the debt ratio. Although the CoCo is a more expensive instrument than traditional debt, its presence in the capital structure lowers the cost of ordinary debt and reduces the total cost of debt. For preliminary equity holders, the presence of CoCo in the bank’s capital structure increases the shareholder’s aggregate value. Full article
(This article belongs to the Special Issue Advances in Credit Risk Modeling and Management)

19 pages, 389 KiB  
Article
Measuring and Allocating Systemic Risk
by Markus K. Brunnermeier and Patrick Cheridito
Risks 2019, 7(2), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020046 - 26 Apr 2019
Cited by 34 | Viewed by 7681
Abstract
In this paper, we develop a framework for measuring, allocating and managing systemic risk. SystRisk, our measure of total systemic risk, captures the a priori cost to society for providing tail-risk insurance to the financial system. Our allocation principle distributes the total systemic risk among individual institutions according to their size-shifted marginal contributions. To describe economic shocks and systemic feedback effects, we propose a reduced form stochastic model that can be calibrated to historical data. We also discuss systemic risk limits, systemic risk charges and a cap and trade system for systemic risk. Full article
18 pages, 418 KiB  
Article
Sound Deposit Insurance Pricing Using a Machine Learning Approach
by Hirbod Assa, Mostafa Pouralizadeh and Abdolrahim Badamchizadeh
Risks 2019, 7(2), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020045 - 19 Apr 2019
Cited by 4 | Viewed by 3603
Abstract
While the main conceptual issue related to deposit insurance is moral hazard risk, the main technical issue is inaccurate calibration of the implied volatility, which can create the risk of generating an arbitrage. In this paper, we first argue that, once the no-moral-hazard condition is imposed, removing arbitrage is equivalent to removing static arbitrage. Then, we propose a simple quadratic model to parameterize implied volatility and remove the static arbitrage. The process of removing the static risk is as follows: using a machine learning approach with a regularized cost function, we update the parameters so that butterfly arbitrage is ruled out, and, by implementing a calibration method, we impose conditions on the parameters of each time slice to rule out calendar spread arbitrage. Eliminating the effects of both butterfly and calendar spread arbitrage therefore makes the implied volatility surface free of static arbitrage. Full article
(This article belongs to the Special Issue Machine Learning in Insurance)

12 pages, 1046 KiB  
Article
Bank Competition in India: Some New Evidence Using Risk-Adjusted Lerner Index Approach
by Rakesh Arrawatia, Arun Misra, Varun Dawar and Debasish Maitra
Risks 2019, 7(2), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020044 - 18 Apr 2019
Cited by 5 | Viewed by 4019
Abstract
Banks in India have gone through structural changes in the last three decades. The prices that banks charge depend on the competitive level in the banking sector and the risk that the assets and liabilities carry in banks’ balance sheets. The traditional Lerner Index indicates competitive levels; however, this measure does not account for risk, and this study introduces a risk-adjusted Lerner Index for evaluating competition in Indian banking for the period 1996 to 2016. The market power estimated through the adjusted Lerner Index has been declining since 1996, which indicates an improvement in competitive conditions over the period. Further, as indicated by the risk-adjusted Lerner Index, the Indian banking system exerts much less market power, and hence is more competitive, than what is suggested by the traditional Lerner Index. Full article
(This article belongs to the Special Issue Financial Risks and Regulation)

22 pages, 834 KiB  
Article
Treatment Level and Store Level Analyses of Healthcare Data
by Kaiwen Wang, Jiehui Ding, Kristen R. Lidwell, Scott Manski, Gee Y. Lee and Emilio Xavier Esposito
Risks 2019, 7(2), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020043 - 17 Apr 2019
Viewed by 2999
Abstract
The presented research discusses general approaches to analyze and model healthcare data at the treatment level and at the store level. The paper consists of two parts: (1) a general analysis method for store-level product sales of an organization and (2) a treatment-level analysis method of healthcare expenditures. In the first part, our goal is to develop a modeling framework to help understand the factors influencing the sales volume of stores maintained by a healthcare organization. In the second part of the paper, we demonstrate a treatment-level approach to modeling healthcare expenditures. In this part, we aim to improve the operational-level management of a healthcare provider by predicting the total cost of medical services. From this perspective, treatment-level analyses of medical expenditures may help provide a micro-level approach to predicting the total amount of expenditures for a healthcare provider. We present a model for analyzing a specific type of medical data, which may arise commonly in a healthcare provider’s standardized database. We do this by using an extension of the frequency-severity approach to modeling insurance expenditures from the actuarial science literature. Full article
(This article belongs to the Special Issue Young Researchers in Insurance and Risk Management)
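A minimal two-part (frequency-severity) sketch in the GLM spirit the abstract describes: a Poisson regression for the number of treatments and a gamma regression with a log link for the average cost per treatment, with the expected charge obtained as the product of the two predictions. The covariates and data are invented, and the paper's actual extension is richer than this.

```python
# Minimal frequency-severity (two-part) GLM sketch: Poisson frequency times gamma severity,
# expected charge = E[number of treatments] * E[cost per treatment]. Data are invented.
import numpy as np
from sklearn.linear_model import GammaRegressor, PoissonRegressor

rng = np.random.default_rng(8)
n = 4000
age = rng.integers(20, 85, n)
chronic = rng.binomial(1, 0.3, n)
X = np.column_stack([age / 100.0, chronic])

n_treat = rng.poisson(np.exp(-1.0 + 1.5 * age / 100.0 + 0.8 * chronic))
mean_cost = np.exp(4.0 + 1.0 * age / 100.0 + 0.5 * chronic)
cost_per_treat = rng.gamma(2.0, mean_cost / 2.0)            # gamma severity with the intended mean

freq = PoissonRegressor(alpha=1e-6).fit(X, n_treat)         # log-link Poisson frequency model
sev = GammaRegressor(alpha=1e-6).fit(X[n_treat > 0], cost_per_treat[n_treat > 0])  # log-link gamma severity

expected_charge = freq.predict(X) * sev.predict(X)
print("predicted total charges over the portfolio:", round(expected_charge.sum()))
```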

20 pages, 1985 KiB  
Article
Defining Geographical Rating Territories in Auto Insurance Regulation by Spatially Constrained Clustering
by Shengkun Xie
Risks 2019, 7(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020042 - 17 Apr 2019
Cited by 7 | Viewed by 4022
Abstract
Territory design and analysis using geographical loss cost are a key aspect in auto insurance rate regulation. The major objective of this work is to study the design of geographical rating territories by maximizing the within-group homogeneity, as well as maximizing the among-group heterogeneity from statistical perspectives, while maximizing the actuarial equity of pure premium, as required by insurance regulation. To achieve this goal, the spatially-constrained clustering of industry level loss cost was investigated. Within this study, in order to meet the contiguity, which is a legal requirement on the design of geographical rating territories, a clustering approach based on Delaunay triangulation is proposed. Furthermore, an entropy-based approach was introduced to quantify the homogeneity of clusters, while both the elbow method and the gap statistic are used to determine the initial number of clusters. This study illustrated the usefulness of the spatially-constrained clustering approach in defining geographical rating territories for insurance rate regulation purposes. The significance of this work is to provide a new solution for better designing geographical rating territories. The proposed method can be useful for other demographical data analysis because of the similar nature of the spatial constraint. Full article
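One generic way to impose the contiguity requirement mentioned above is sketched here: build an adjacency graph from a Delaunay triangulation of region centroids and pass it as a connectivity constraint to agglomerative clustering of the loss costs. The random centroids, the loss-cost surface and the use of Ward linkage are assumptions; the paper's entropy-based homogeneity measure and elbow/gap-statistic selection are not reproduced.

```python
# Generic spatially constrained clustering sketch: Delaunay adjacency of region centroids used as a
# connectivity constraint when clustering loss costs (illustrative, not the paper's full algorithm).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.spatial import Delaunay
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(9)
n_regions, n_territories = 300, 8
centroids = rng.uniform(0.0, 100.0, size=(n_regions, 2))                      # hypothetical region centroids
loss_cost = 200.0 + 2.0 * centroids[:, 0] + rng.normal(0.0, 40.0, n_regions)  # spatially trending loss cost

# Adjacency from the Delaunay triangulation: regions sharing a triangle edge are neighbours.
adjacency = lil_matrix((n_regions, n_regions), dtype=int)
for simplex in Delaunay(centroids).simplices:
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        adjacency[a, b] = adjacency[b, a] = 1

model = AgglomerativeClustering(n_clusters=n_territories,
                                connectivity=adjacency.tocsr(),
                                linkage="ward")
territory = model.fit_predict(loss_cost.reshape(-1, 1))
print("regions per territory:", np.bincount(territory))
```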

29 pages, 5938 KiB  
Article
Pricing of Longevity Derivatives and Cost of Capital
by Fadoua Zeddouk and Pierre Devolder
Risks 2019, 7(2), 41; https://0-doi-org.brum.beds.ac.uk/10.3390/risks7020041 - 15 Apr 2019
Cited by 14 | Viewed by 4572
Abstract
Annuity providers are becoming more and more exposed to longevity risk due to the increase in life expectancy. To hedge this risk, new longevity derivatives have been proposed (longevity bonds, q-forwards, S-swaps, etc.). Although academic researchers, policy makers and practitioners have discussed them for years, longevity-linked securities are not widely traded in financial markets, due in particular to the difficulty of pricing them. In this paper, we compare different existing pricing methods and propose a Cost of Capital approach. Our method is designed to be more consistent with the Solvency II requirement that longevity risk assessment be based on a one-year time horizon. The price of longevity risk is determined for an S-forward and an S-swap but can be used to price other longevity-linked securities. We also compare this Cost of Capital method with some classical pricing approaches. The Hull and White and extended CIR models are used to represent the evolution of mortality over time. We use data for the Belgian population to derive prices for the proposed longevity-linked securities based on the different methods. Full article
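For orientation, a cost-of-capital price component in the Solvency II spirit is typically computed as the cost-of-capital rate times the sum of projected capital requirements discounted one extra period, RM = CoC * sum_t SCR_t / (1 + r)^(t+1), with a 6% cost-of-capital rate. The sketch uses an invented SCR run-off pattern; it illustrates only this ingredient, not the paper's full pricing of S-forwards and S-swaps under Hull-White or extended CIR mortality dynamics.

```python
# Generic cost-of-capital risk margin in the Solvency II spirit: RM = CoC * sum_t SCR_t / (1 + r)^(t+1).
# The projected SCR run-off and the flat discount rate are invented for illustration.
import numpy as np

coc_rate = 0.06                                                     # Solvency II cost-of-capital rate
r = 0.015                                                           # flat risk-free rate (assumption)
scr = np.array([100.0, 92.0, 83.0, 72.0, 60.0, 45.0, 28.0, 12.0])   # projected SCR run-off (hypothetical)

t = np.arange(scr.size)
risk_margin = coc_rate * np.sum(scr / (1.0 + r) ** (t + 1))
print("cost-of-capital risk margin:", round(risk_margin, 2))
```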
