Explainable AI for Software Defect Prediction with Gradient Boosting Classifier


GEZİCİ B., Tarhan A. K.

7th International Conference on Computer Science and Engineering, UBMK 2022, Diyarbakır, Turkey, 14 - 16 September 2022, pp.49-54

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/ubmk55850.2022.9919490
  • City: Diyarbakır
  • Country: Turkey
  • Page Numbers: pp.49-54
  • Keywords: artificial intelligence, ELI5, explainability, LIME, post-hoc methods, SHAP, software defect prediction, XAI
  • Hacettepe University Affiliated: Yes

Abstract

© 2022 IEEE. Explainability is one of the most investigated quality attributes, and it has been attracting growing interest from stakeholders who use Artificial Intelligence (AI), especially Machine Learning software. Since AI-based software differs from traditional software in its black-box nature, understanding the logic behind the predictions it makes has become very important. In this study, we focus on the explainability of the Gradient Boosting (GB) classifier used for software defect prediction (SDP). We apply post-hoc model-agnostic methods, namely 'Explain Like I'm 5' (ELI5), 'Local Interpretable Model-agnostic Explanations' (LIME), and 'SHapley Additive exPlanations' (SHAP), over an SDP dataset offered by NASA, in order to shed light on the explainability of the GB classifier. More specifically, we use ELI5 and LIME to explain instances locally, and SHAP to obtain both local and global explanations. The results suggest a post-hoc and model-agnostic way to quantify explainability, and indicate that the three methods produced results consistent with one another while explaining the GB model.
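To illustrate the kind of local explanation the abstract refers to, the sketch below computes exact Shapley values for one prediction of a Gradient Boosting classifier; SHAP approximates this same quantity efficiently. Everything here is illustrative: the synthetic dataset stands in for the NASA SDP data (which is not reproduced here), absent features are masked with the background mean, and the model configuration is an assumption, not the paper's setup.

```python
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a defect-prediction dataset (4 features for tractability).
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

background = X.mean(axis=0)  # reference values used to "remove" a feature

def value(instance, subset):
    """v(S): model output when features outside S are replaced by the background mean."""
    z = background.copy()
    z[list(subset)] = instance[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def shapley_values(instance):
    """Exact Shapley values: weighted marginal contribution of each feature
    over every coalition of the remaining features."""
    n = instance.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(instance, S + (i,)) - value(instance, S))
    return phi

x = X[0]
phi = shapley_values(x)
# Efficiency property: the contributions sum exactly to the gap between
# this instance's prediction and the baseline (all-masked) prediction.
total = value(x, tuple(range(4))) - value(x, ())
```

The exact computation enumerates 2^n coalitions per feature, which is why SHAP's tree-specific and sampling-based approximations are used in practice for models with many features.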