Explainable AI for Software Defect Prediction with Gradient Boosting Classifier


GEZİCİ B., Tarhan A. K.

7th International Conference on Computer Science and Engineering, UBMK 2022, Diyarbakır, Türkiye, 14 - 16 September 2022, pp.49-54

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/ubmk55850.2022.9919490
  • City of Publication: Diyarbakır
  • Country of Publication: Türkiye
  • Page Numbers: pp.49-54
  • Keywords: artificial intelligence, ELI5, explainability, LIME, post-hoc methods, SHAP, software defect prediction, XAI
  • Hacettepe University Affiliated: Yes

Abstract

© 2022 IEEE. Explainability is one of the most investigated quality attributes, and it is attracting growing interest from stakeholders who use Artificial Intelligence (AI), especially Machine Learning, software. Since AI-based software differs from traditional software in its black-box nature, understanding the logic behind the predictions it makes has become very important. In this study, we focus on the explainability of a Gradient Boosting (GB) classifier used for software defect prediction (SDP). We apply post-hoc, model-agnostic methods, namely 'Explain like I am a 5-year-old' (ELI5), 'Local Interpretable Model-Agnostic Explanations' (LIME), and 'SHapley Additive exPlanations' (SHAP), over an SDP dataset offered by NASA, in order to shed light on the explainability of the GB classifier. More specifically, we use ELI5 and LIME to explain instances locally, and SHAP to obtain both local and global explanations. The results suggest a post-hoc and model-agnostic way to quantify explainability, and indicate that all three methods produced results consistent with one another while explaining the GB model.
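
The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact code: the NASA SDP dataset is not reproduced here, so a synthetic stand-in is generated with scikit-learn's make_classification, and the feature names, hyperparameters, and train/test split are illustrative assumptions.

```python
# Sketch of the paper's setup: a Gradient Boosting classifier explained
# locally with ELI5 and LIME, and both locally and globally with SHAP.
import eli5
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a NASA SDP dataset: code metrics as features,
# defect-proneness as the binary label (assumed structure).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"metric_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the Gradient Boosting (GB) classifier.
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# ELI5: local explanation as per-feature contributions for one instance.
print(eli5.format_as_text(
    eli5.explain_prediction(gb, X_test[0], feature_names=feature_names)))

# LIME: local explanation via a sparse linear surrogate fitted around
# the instance's neighborhood.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["non-defective", "defective"], mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], gb.predict_proba)
print(lime_exp.as_list())

# SHAP: TreeExplainer computes Shapley values for tree ensembles.
# Per-instance values are local explanations; the summary plot
# aggregates them into a global view of feature importance.
shap_explainer = shap.TreeExplainer(gb)
shap_values = shap_explainer.shap_values(X_test)
print("Local SHAP values for one instance:", shap_values[0])
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```

Comparing the feature rankings across the three outputs (the ELI5 contribution table, the LIME weight list, and the SHAP summary plot) is one way to check the cross-method consistency the abstract reports.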