Feature selection and effective classifiers


DEOGUN J., CHOUBEY S., RAGHAVAN V., Sever H.

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE, vol.49, no.5, pp.423-434, 1998 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 49 Issue: 5
  • Publication Date: 1998
  • Journal Name: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Social Sciences Citation Index (SSCI), Scopus
  • Page Numbers: pp.423-434
  • Affiliated with Hacettepe University: Yes

Abstract

In this article, we develop and analyze four algorithms for feature selection in the context of rough set methodology. The initial state and the feasibility criterion of all these algorithms are the same: they start with a given feature set and progressively remove features, while controlling the amount of degradation in classification quality. The algorithms differ, however, in the heuristics used for pruning the search space of features. Our experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. Our experiments demonstrate that a theta-reduct of a given feature set can be found efficiently. Although we have adopted upper classifiers in our investigations, the algorithms presented can be used with any method of deriving a classifier in which the quality of classification is a monotonically decreasing function of the size of the feature set. We compare the performance of upper classifiers with that of lower classifiers and find that upper classifiers perform better than lower classifiers for a duodenal ulcer data set. This should generally be true when the boundary region contains a small number of elements. An upper classifier has some important features that make it suitable for data mining applications. In particular, we have shown that upper classifiers can be summarized at a desired level of abstraction by using extended decision tables. We also point out that an upper classifier results in an inconsistent decision algorithm, which can be interpreted deterministically or non-deterministically to obtain a consistent decision algorithm.
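
The paper evaluates four specific heuristics that are not reproduced here; the Python sketch below only illustrates the shared skeleton the abstract describes: start from the full feature set and greedily remove attributes as long as the rough-set quality of classification stays within a tolerance theta of the quality obtained with all features, which is the sense in which a theta-reduct is sought. The function names, the dictionary-based data layout, and the first-improvement removal order are illustrative assumptions, not the authors' algorithms.

```python
def classification_quality(rows, features, decision):
    """Rough-set gamma measure: the fraction of objects whose values on the
    chosen features determine the decision uniquely (the positive region)."""
    groups = {}
    for row in rows:
        key = tuple(row[f] for f in features)
        groups.setdefault(key, set()).add(row[decision])
    consistent = sum(
        1 for row in rows
        if len(groups[tuple(row[f] for f in features)]) == 1
    )
    return consistent / len(rows)


def backward_select(rows, features, decision, theta=0.0):
    """Greedy backward elimination: drop one feature at a time as long as the
    quality of classification degrades by no more than theta."""
    baseline = classification_quality(rows, features, decision)
    selected = list(features)
    changed = True
    while changed:
        changed = False
        for f in list(selected):
            trial = [g for g in selected if g != f]
            if trial and classification_quality(rows, trial, decision) >= baseline - theta:
                selected = trial
                changed = True
                break
    return selected


# Toy decision table: condition attributes a, b, c and decision attribute d.
rows = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 1, "d": "yes"},
    {"a": 0, "b": 0, "c": 0, "d": "no"},
    {"a": 0, "b": 1, "c": 0, "d": "no"},
]
print(backward_select(rows, ["a", "b", "c"], "d", theta=0.0))  # -> ['c']
```

On this toy data a single attribute already preserves the full classification quality, so the procedure returns it; with theta > 0 the same loop accepts reducts that trade a bounded amount of quality for fewer features.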