EXPLORING FAIRNESS: BIAS DETECTION AND MITIGATION IN AUTOMATED LENDING SYSTEMS
- Published on January 15, 2025
Muna Abdelrahim, Jackson State University, United States of America / Sungbum Hong, Jackson State University, United States of America / Tor Kwembe, Jackson State University, United States of America
(Presented at the International Conference on Social and Education Sciences (IConSES), held October 17-20, 2024 in Chicago, USA (www.istes.org/...), and at the International Conference on Engineering, Science and Technology (IConEST) (www.istes.org/...), both organized by the International Society for Technology, Education and Science (ISTES), www.istes.org.)
In today's dynamic financial landscape, lending institutions constantly seek ways to improve their decision-making processes. As the lending ecosystem evolves, there is a growing need for more nuanced, data-driven approaches to evaluate and predict loan approval outcomes. Machine learning (ML) algorithms play an increasing role in automated decision-making systems as potentially promising tools to reduce the cost of credit and increase financial inclusion. However, ML methods remain essentially a black box for nontechnical audiences, and discrepancies and biases in the loan approval process, particularly in mortgage and business loan approvals, have been well documented. Studies have shown that racial and gender bias persists in mortgage approvals, with applicants of color and female applicants more likely to be denied mortgages than white and male applicants with similar financial backgrounds. In this research, we explore different approaches to understand and quantify bias and assess its impact on automated lending algorithms, and we experiment with ways to enhance fairness and equity in the lending system. We evaluate two corrective techniques to remove unfairness: 1) pre-processing, in which dataset imbalances are treated, since research has shown that data imbalances are strongly connected to biased outcomes; and 2) post-processing, deploying explainable AI (XAI) as a tool to understand how an algorithm produces a given outcome, giving the domain expert a role in correcting the algorithm's result. The desired benefit of this research is to explore techniques to detect bias in datasets and to develop an effective and secure method to mitigate bias in lending algorithms. This endeavor aims to enhance fairness and transparency in loan approval systems, thereby fostering community equity.
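The sketch below is a minimal illustration of the two ideas named in the abstract: measuring a group-level disparity in approval rates (bias detection) and pre-processing the training data so that the protected attribute and the outcome are statistically decoupled. The column names, the toy dataset, and the use of a Kamiran-and-Calders-style reweighing scheme are illustrative assumptions, not the exact pipeline used in this work.

```python
# Minimal sketch: bias detection + pre-processing reweighing on a
# hypothetical loan dataset. All names and data here are illustrative.
import pandas as pd

# Hypothetical applicant data: protected group and label (1 = approved).
data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0,   1,   0 ],
})

# --- Bias detection: demographic parity difference ----------------------
rates = data.groupby("group")["approved"].mean()
dp_diff = rates.max() - rates.min()
print("Approval rate by group:")
print(rates)
print(f"Demographic parity difference: {dp_diff:.2f}")

# --- Pre-processing: reweighing the training instances ------------------
# Each instance receives weight P(group) * P(label) / P(group, label),
# which removes the statistical dependence between the protected
# attribute and the outcome before a model is trained on the data.
p_group = data["group"].value_counts(normalize=True)
p_label = data["approved"].value_counts(normalize=True)
p_joint = data.groupby(["group", "approved"]).size() / len(data)

data["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(data["group"], data["approved"])
]
print(data)
```

In practice, these weights would be passed to a downstream classifier (for example, through a `sample_weight` argument), and the post-processing step described in the abstract would then apply XAI tooling to the trained model's predictions so a domain expert can inspect and correct individual outcomes.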