Artificial intelligence (AI) and machine learning (ML) are widely employed to make solutions more accurate and autonomous in many smart and intelligent Internet of Things (IoT) applications. In these applications, the performance and accuracy of AI/ML models are the main concerns, whereas the transparency, interpretability, and responsibility of the models’ decisions are often neglected. Moreover, AI/ML-supported next-generation IoT applications call for more reliable, transparent, and explainable systems. In particular, regardless of whether a decision is simple or complex, how it is made, which features influence it, and how people or domain experts can interpret and adopt it are crucial issues. People also tend to view unpredictable or opaque AI outcomes with skepticism, which hinders the adoption and proliferation of IoT applications. To that end, Explainable Artificial Intelligence (XAI) has emerged as a promising research area that makes the ante-hoc and post-hoc stages of black-box models transparent, understandable, and interpretable. In this paper, we provide an in-depth and systematic review of recent studies that apply XAI models in the IoT domain. We classify the studies according to their methodology and application areas. Additionally, we highlight the challenges and open issues and outline promising future directions to guide researchers in future investigations.