AI Can Make Bank Loans More Fair

Many financial institutions are turning to AI to reverse past discrimination in lending and to foster a more inclusive economy. But many lenders are finding that AI-based credit engines exhibit many of the same biases as humans. How can they ensure that the biases of the past are not baked into the algorithms and credit decisions of the future? The key lies in building AI-driven systems designed not to maximize historical accuracy but to deliver greater equity. That means training and testing AI systems not merely on loans or mortgages issued in the past, but on how the money should have been lent in a more equitable world. Armed with a deeper awareness of the bias lurking in the data, and with objectives that reflect both financial and social goals, we can develop AI models that do well and do good.

As banks increasingly deploy artificial intelligence tools to make credit decisions, they are having to revisit an unwelcome fact about the practice of lending: Historically, it has been riddled with biases against protected groups defined by characteristics such as race, gender, and sexual orientation. Such biases are evident in institutions’ choices about who gets credit and on what terms. In this context, relying on algorithms to make credit decisions rather than deferring to human judgment seems like an obvious fix. What machines lack in warmth, they surely make up for in objectivity, right?
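To make the idea of "objectives that reflect both financial and social goals" concrete, here is a minimal, hypothetical sketch. It trains a toy credit-scoring model on synthetic data in which historical approvals correlate with a protected group, then adds a fairness penalty (the squared gap in mean approval scores between groups, one common demographic-parity measure) to the usual log-loss. All data and parameter choices here are illustrative assumptions, not any lender's actual method.

```python
import math
import random

random.seed(0)

# Synthetic loan data (hypothetical): one income-like score plus a protected
# group flag. The historical disparity is deliberately baked in: group 1 got
# a 0.8 boost to the score that past approvals were based on.
n = 400
group = [random.randint(0, 1) for _ in range(n)]
income = [random.gauss(0, 1) + 0.8 * g for g in group]
y = [1.0 if x + random.gauss(0, 0.5) > 0 else 0.0 for x in income]  # past approvals

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def approval_gap(w, b):
    """Difference in mean predicted approval score between the two groups."""
    scores = [sigmoid(w * x + b) for x in income]
    g1 = [s for s, g in zip(scores, group) if g == 1]
    g0 = [s for s, g in zip(scores, group) if g == 0]
    return sum(g1) / len(g1) - sum(g0) / len(g0)

def train(lam, steps=800, lr=0.1):
    """Gradient descent on: log-loss + lam * approval_gap^2.

    lam = 0 reproduces a purely accuracy-driven model; larger lam trades
    historical accuracy for equity across groups."""
    w = b = 0.0
    for _ in range(steps):
        # Standard logistic-regression gradient on the historical labels.
        gw = gb = 0.0
        for x, t in zip(income, y):
            p = sigmoid(w * x + b)
            gw += (p - t) * x / n
            gb += (p - t) / n
        # Fairness-penalty gradient, estimated numerically for brevity.
        eps = 1e-4
        gap = approval_gap(w, b)
        dgw = (approval_gap(w + eps, b) - gap) / eps
        dgb = (approval_gap(w, b + eps) - gap) / eps
        w -= lr * (gw + lam * 2 * gap * dgw)
        b -= lr * (gb + lam * 2 * gap * dgb)
    return w, b

results = {}
for lam in (0.0, 5.0):
    w, b = train(lam)
    results[lam] = approval_gap(w, b)
    print(f"lambda={lam}: approval-rate gap = {results[lam]:.3f}")
```

With the penalty switched on, the gap between groups shrinks relative to the accuracy-only baseline, illustrating the trade the article describes: a little less fidelity to how money was lent, in exchange for a model closer to how it should have been lent. Real systems use more sophisticated fairness criteria and mitigation methods; this sketch only shows the shape of the objective.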