Three questions to ask your AI vendor about explainability

Are you looking to transform your credit decision processes with AI to reveal the bigger picture of your loan applicants? To increase conversion, reduce credit losses, and retrain and deploy new models more effectively so you stay competitive as payment behaviors change?

Before making any investment in AI, banks and lenders must ask the right questions in order to capture all the benefits AI can deliver. Take explainability, for example. Regulators require that certain decisions can be explained, both the risk your organization takes on and the assurance that the model holds no bias against people on grounds such as ethnicity or gender. Traditional credit models typically use around 20 variables with no interactions between them, which makes their results easy to understand. An AI model, by contrast, analyzes hundreds or even thousands of variables, along with the interactions between them, which no human can follow unaided.

To capture the benefits of AI, it is crucial that banks and lenders transforming their credit decision processes retain the ability to get accurate, consistent, and fast explanations from their credit models. We have listed three key questions to ask any AI software vendor before deploying AI credit decision models.

1) Does your explainability technique analyze the final model?

This might sound obvious, but it is not: you need to know whether your explainer is looking at the final model. Some technologies approximate the final model with an interpretable surrogate, such as a linear model, which is easy to explain. In that case, however, you are not analyzing the final model that runs in production, and a lot of valuable information is lost, as the sketch below shows.
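One way to see the problem is to fit an interpretable surrogate to a complex model's scores and measure how faithfully it reproduces them. This is a minimal sketch using scikit-learn on synthetic data; the dataset and models are illustrative assumptions, not Evispot's production stack:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for a loan book: 2,000 applications, 20 variables.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)  # the "final model"

# Surrogate: a linear model trained to mimic the final model's scores.
scores = model.predict_proba(X)[:, 1]
surrogate = LinearRegression().fit(X, scores)

# Any fidelity below R^2 = 1.0 is production-model behaviour that the
# surrogate's explanations simply cannot see.
print("surrogate fidelity (R^2):", round(surrogate.score(X, scores), 3))
```

Explanations read off the surrogate describe the approximation, not the model that is actually deciding on applications.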

2) Does your explainability technique have the speed to interpret your AI model in real time?

If your explainability technique can only analyze the AI model's behavior one variable at a time, you will eventually run into speed issues. With hundreds of variables in an AI model, this brute-force analysis can take hours or even days, making the technique impossible to use in production. The sketch below shows why the cost grows with the number of variables.
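As a back-of-the-envelope illustration, a one-variable-at-a-time approach needs at least one full re-scoring pass over the model per variable, so its cost grows in step with model width. This sketch uses permutation-style re-scoring in scikit-learn on synthetic data (the model and sizes are illustrative assumptions):

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical wide credit model: 200 variables.
X, y = make_classification(n_samples=2000, n_features=200, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
rng = np.random.default_rng(0)

start = time.perf_counter()
for j in range(X.shape[1]):      # one full re-scoring pass per variable
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])    # scramble variable j, keep the rest intact
    model.predict_proba(X_perm)  # re-score everything just to probe one variable
elapsed = time.perf_counter() - start
print(f"{X.shape[1]} variables -> {elapsed:.1f}s for a single analysis pass")
```

Doubling the number of variables doubles the runtime, and probing interactions between pairs of variables grows quadratically on top of that.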

3) Does your explainability technique always produce the same, consistent results?

Consistency is key for your explainability technique. Every time the same borrower application is scored, the AI model must give the same prediction, and if the application is rejected, the explainer must give the same reasons for the rejection. In short, the technique you use to analyze your AI model should return the same answer whenever the input is the same.
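This property is easy to test before you deploy. Below is a minimal sketch of such a check; `explain` is a hypothetical callable that returns per-variable attributions for one application:

```python
import numpy as np

def is_consistent(explain, application, runs=5):
    """Ask the explainer the same question repeatedly; demand identical answers."""
    first = np.asarray(explain(application))
    return all(
        np.array_equal(first, np.asarray(explain(application)))
        for _ in range(runs - 1)
    )
```

A sampling-based explainer without a fixed seed will typically fail a check like this, giving a rejected applicant different reasons on different days.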

At Evispot we use an explainability technique based on competitive game theory that generates explanations from the actual underlying AI model. The result is explainability tailored for lending: 100% accuracy, 100% consistency, and high speed.
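For readers who want to experiment with this family of techniques, the open-source shap package implements Shapley-value explanations, which come from the same branch of game theory. The sketch below is an illustration of the general approach on synthetic data, not Evispot's own implementation:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative data and model only.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer reads the trained trees directly, so the attributions
# describe the final model itself, deterministically and quickly.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])  # per-variable contributions for one applicant
print(contributions)
```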


If you would like to learn more or have any further questions, don't hesitate to ask.
