The impact of oversampling with “ubSMOTE” on the performance of machine learning classifiers in prediction of catastrophic health expenditures

Çinaroğlu S.

Operations Research for Health Care, vol.27, no.100275, pp.1-13, 2020 (Journal Indexed in ESCI)

  • Publication Type: Article
  • Volume: 27, Article Number: 100275
  • Publication Date: 2020
  • Doi Number: 10.1016/j.orhc.2020.100275
  • Title of Journal: Operations Research for Health Care
  • Page Numbers: pp.1-13


As a common problem in classification tasks, class imbalance degrades classifier performance. Catastrophic out-of-pocket (OOP) health expenditure is a specific example of a rare event, faced by very few households. The objective of the present study is to demonstrate a two-step learning approach for modeling highly imbalanced catastrophic OOP health expenditure data. The data are retrieved from the nationally representative Household Budget Survey collected in 2012 by the Turkish Statistical Institute; in total, 9987 households returned valid survey responses. The predictive models are based on eight common risk factors for catastrophic OOP health expenditure. The minority class in the training dataset is oversampled using the synthetic minority oversampling technique (SMOTE), and classification models are built on both the original and the balanced, oversampled training datasets. Logistic regression (LR), random forest (RF) with 100 trees, support vector machine (SVM), and neural network (NN) are used as classifiers. The weighted percentage of households facing catastrophic OOP health expenditure is 0.14. Balanced oversampling increases the area under the receiver operating characteristic (ROC) curve of LR, RF, SVM, and NN by 0.08%, 0.62%, 0.20%, and 0.23%, respectively. The ROC curves show NN and RF to be the best classifiers on the balanced oversampled dataset. Identifying a classifier for highly imbalanced catastrophic OOP health expenditure data therefore requires the two-stage procedure of (i) balancing the classes and (ii) comparing alternative classifiers. NN and RF prove to be good classifiers in a prediction task with imbalanced catastrophic OOP health expenditure data.
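The oversampling step the abstract describes can be illustrated with a minimal numpy sketch of the core SMOTE idea: each synthetic minority sample is a random interpolation between a minority point and one of its k nearest minority-class neighbours. This is a simplified illustration, not the paper's "ubSMOTE" implementation (which comes from the R `unbalanced` package); the function name `smote_oversample` and all parameters here are illustrative assumptions.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=None):
    """Generate n_new synthetic minority samples (simplified SMOTE).

    X_min : (n, d) array of minority-class feature vectors.
    n_new : number of synthetic samples to create.
    k     : number of nearest minority neighbours to interpolate with.
    """
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]     # indices of k nearest neighbours

    base = rng.integers(0, n, size=n_new)              # random seed points
    neigh = nn[base, rng.integers(0, k, size=n_new)]   # one neighbour each
    gap = rng.random((n_new, 1))                       # interpolation factor
    # Synthetic point lies on the segment between seed and neighbour.
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

# Illustrative use: oversample a rare class until the classes are balanced,
# then the augmented training set would be passed to LR/RF/SVM/NN.
rng = np.random.default_rng(0)
X_majority = rng.normal(0.0, 1.0, size=(200, 8))   # 8 risk factors (toy data)
X_minority = rng.normal(2.0, 1.0, size=(10, 8))    # rare "catastrophic" class
X_synth = smote_oversample(X_minority, len(X_majority) - len(X_minority),
                           k=3, seed=1)
X_minority_balanced = np.vstack([X_minority, X_synth])
```

After this step, both classes have 200 training samples, and the balanced set can be used to fit and compare the four classifiers by ROC AUC, as in the paper's second stage.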