© 2022 IEEE. Explainability is one of the most investigated quality attributes and is of increasing interest to stakeholders using Artificial Intelligence (AI), and Machine Learning software in particular. Since AI-based software differs from traditional software in its black-box nature, understanding the logic behind its predictions has become very important. In this study, we focus on the explainability of the Gradient Boosting (GB) classifier used for software defect prediction (SDP). We apply post-hoc, model-agnostic methods, namely 'Explain Like I'm 5' (ELI5), 'Local Interpretable Model-Agnostic Explanations' (LIME), and 'SHapley Additive exPlanations' (SHAP), to an SDP dataset provided by NASA in order to shed light on the explainability of the GB classifier. More specifically, we use ELI5 and LIME to explain instances locally, and SHAP to obtain both local and global explanations. The results suggest a post-hoc, model-agnostic way to quantify explainability, and indicate that all three methods produced explanations of the GB model that were consistent with one another.
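To illustrate the kind of post-hoc, model-agnostic explanation the abstract describes, the following is a minimal sketch. The actual study uses ELI5, LIME, and SHAP on a NASA SDP dataset; here, to keep the example self-contained with scikit-learn alone, permutation importance (also a post-hoc, model-agnostic technique) stands in as the global explainer, and the data is synthetic rather than the NASA dataset.

```python
# Hypothetical sketch: post-hoc, model-agnostic explanation of a
# Gradient Boosting classifier. Synthetic data stands in for the
# NASA SDP dataset; permutation importance stands in for SHAP's
# global explanations.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (defective vs. non-defective).
X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Global explanation: how much does shuffling each feature, one at a
# time, degrade held-out accuracy? Larger drops mean the model leans
# on that feature more.
result = permutation_importance(gb, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f}")
```

In the study itself, SHAP additionally yields per-instance (local) attributions, which is what ELI5 and LIME are used for here as well; permutation importance only gives the global view.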