Preventing Bias in Machine Learning Models of Credit Risk

A special issue of Journal of Risk and Financial Management (ISSN 1911-8074). This special issue belongs to the section "Financial Technology and Innovation".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 9101

Special Issue Editor


Guest Editor
Prescient Models LLC, Santa Fe, NM 87505, USA
Interests: stress testing; loss reserves; credit risk modeling; survival models; machine learning

Special Issue Information

Dear Colleagues,

The greatest obstacle to widespread adoption of machine learning models in credit risk modeling and loan underwriting is the risk of unintended ethical bias. This is a case of asking the model to do what humans and regulations expect, not what the data reflects. Researchers are exploring ways to modify the data, constrain the algorithms, or alter the modeling process to eliminate these unwanted biases.

For this Special Issue, we invite researchers with novel work on any of these approaches to eliminating bias in the application of machine learning to loan credit risk modeling to submit their papers for consideration. These issues are critical in regulated environments such as lending, but they also arise in almost any area where machine learning is applied to human behavior.

Dr. Joseph Breeden

Keywords

  • AI
  • machine learning
  • bias
  • fairness
  • fair lending
  • credit risk modeling
  • loan underwriting

Published Papers (2 papers)


Research

10 pages, 218 KiB  
Article
Time to Assess Bias in Machine Learning Models for Credit Decisions
by Liming Brotcke
J. Risk Financial Manag. 2022, 15(4), 165; https://0-doi-org.brum.beds.ac.uk/10.3390/jrfm15040165 - 05 Apr 2022
Cited by 2 | Viewed by 6008
Abstract
Focus on fair lending has intensified recently as bank and non-bank lenders apply artificial-intelligence (AI)-based credit determination approaches. The data analytics techniques behind AI and machine learning (ML) have proven powerful in many application areas. However, ML can be less transparent and explainable than traditional regression models, which may raise unique questions about its compliance with fair lending laws. ML may also reduce the potential for discrimination by reducing discretionary and judgmental decisions. As financial institutions continue to explore ML applications in loan underwriting and pricing, the fair lending assessments typically led by compliance and legal functions will likely continue to evolve. In this paper, the author discusses unique considerations around ML in existing fair lending risk assessment practice for underwriting and pricing models and proposes additional evaluations to be added to present practice.
(This article belongs to the Special Issue Preventing Bias in Machine Learning Models of Credit Risk)
15 pages, 2023 KiB  
Article
Creating Unbiased Machine Learning Models by Design
by Joseph L. Breeden and Eugenia Leonova
J. Risk Financial Manag. 2021, 14(11), 565; https://0-doi-org.brum.beds.ac.uk/10.3390/jrfm14110565 - 22 Nov 2021
Cited by 4 | Viewed by 2004
Abstract
Unintended bias against protected groups has become a key obstacle to the widespread adoption of machine learning methods. This work presents a modeling procedure that carefully builds models around protected class information in order to ensure that the final machine learning model is independent of protected class status, even in a nonlinear sense. The procedure works for any machine learning method. It was tested on subprime credit card data combined with demographic data by zip code from the US Census. The census data is an imperfect proxy for borrower demographics but suffices to illustrate the procedure.
(This article belongs to the Special Issue Preventing Bias in Machine Learning Models of Credit Risk)
