Evaluation of Explainable ML models

Zahra Ahmad
6 min read · Jul 31, 2022

How to understand the decisions made by machine learning models using explainable AI.

Photo by Evan Dennis on Unsplash

Why is Explainable AI important?

  • A self-driving car can misinterpret objects in its surroundings, and the car could crash into them. Who is responsible for the crash: the passenger, the software developer, or the car manufacturer?
  • A cancer detection system’s goal is to detect cancer. It could be disastrous if the system misinterprets healthy tissue as cancer or classifies a tumor as malignant when it is actually benign. In such cases, the patient could become distressed over nothing.
  • A mortgage system decides whether a person is eligible for a mortgage. Knowing why they did not get the loan can put applicants on a path toward fixing the underlying problems.

These are just a few examples where humans see an AI system as a black box, and there is a need to make it a white box or, if that is not entirely possible, at least to provide some degree of AI explainability.

Evaluating ML models is quite straightforward, since accuracy gives a simple numeric value for how good the model is. The evaluation of explainable models, however, is intended for humans to read. This means that there might not be one right and correct answer for every case but multiple good…
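To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and using its built-in breast-cancer dataset as a stand-in for a cancer detection system. Model quality collapses into a single accuracy number, while the explanation is a list of per-feature importances that a human still has to judge:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Breast-cancer dataset as a stand-in for a cancer detection system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Standard ML evaluation: one number says how good the model is.
print(f"accuracy: {model.score(X_test, y_test):.3f}")

# Explanation: permutation importance yields one value per feature.
# There is no single correct score here -- a human must decide whether
# the highly ranked features make clinical sense.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Note the asymmetry: the accuracy line has an unambiguous reading, whereas deciding whether the top-ranked features make sense depends entirely on the human reading them.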
