Efficient Learning From Two-Class Categorical Imbalanced Healthcare Data


Lincy Mathews, Hari Seetha
Copyright: © 2021 | Volume: 16 | Issue: 1 | Pages: 20
ISSN: 1555-3396 | EISSN: 1555-340X | EISBN13: 9781799859789 | DOI: 10.4018/IJHISI.2021010105
Cite Article

MLA

Mathews, Lincy, and Hari Seetha. "Efficient Learning From Two-Class Categorical Imbalanced Healthcare Data." International Journal of Healthcare Information Systems and Informatics (IJHISI), vol. 16, no. 1, 2021, pp. 81-100. http://doi.org/10.4018/IJHISI.2021010105

APA

Mathews, L., & Seetha, H. (2021). Efficient learning from two-class categorical imbalanced healthcare data. International Journal of Healthcare Information Systems and Informatics (IJHISI), 16(1), 81-100. http://doi.org/10.4018/IJHISI.2021010105

Chicago

Mathews, Lincy, and Hari Seetha. "Efficient Learning From Two-Class Categorical Imbalanced Healthcare Data." International Journal of Healthcare Information Systems and Informatics (IJHISI) 16, no. 1 (2021): 81-100. http://doi.org/10.4018/IJHISI.2021010105

Abstract

When one class is represented by far fewer samples than the other in the data to be mined, the result is the two-class imbalanced data challenge. Many health-related datasets comprising categorical data face this class imbalance. This paper addresses the limitations of learning from imbalanced two-class categorical data and presents a re-sampling solution, 'Syn_Gen_Min' (SGM), to improve the class imbalance ratio. SGM finds the greedy neighbors of a given minority sample and uses them to generate synthetic minority samples. To the best of the authors' knowledge, the accepted approach is to convert categorical attributes into numeric equivalents before classification, which results in a loss of information. The novelty of this contribution is that the categorical attributes are kept in their raw form. Five distinct categorical similarity measures are employed and tested against six real-world healthcare datasets. Each similarity measure leads to the generation of different synthetic samples, which significantly improves the classifier's performance measures. The work further shows that no single similarity measure fits all datasets.
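
The abstract describes SGM only at a high level, so the following Python snippet is a minimal illustrative sketch rather than the authors' algorithm: it assumes a simple overlap (matching-attribute) similarity as one stand-in categorical measure, selects the most similar ("greedy") minority neighbors, and mixes raw categorical attribute values to form a synthetic sample. The function names (`overlap_similarity`, `greedy_neighbors`, `synthesize`) and the toy data are hypothetical.

```python
# Illustrative sketch only: the paper's exact SGM procedure is not given in the
# abstract, so this uses overlap similarity and a SMOTE-like attribute mix as stand-ins.
import random

def overlap_similarity(a, b):
    """Fraction of categorical attributes on which two samples agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_neighbors(sample, minority, k=3):
    """Return the k minority samples most similar to `sample` (excluding itself)."""
    scored = [(overlap_similarity(sample, other), other)
              for other in minority if other is not sample]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [other for _, other in scored[:k]]

def synthesize(sample, neighbors, rng=random):
    """Build one synthetic minority sample by mixing raw categorical attribute
    values between the sample and one of its greedy neighbors."""
    neighbor = rng.choice(neighbors)
    return tuple(rng.choice((s, n)) for s, n in zip(sample, neighbor))

# Toy minority class kept in raw categorical form (no numeric encoding).
minority = [("female", "smoker", "urban"),
            ("female", "smoker", "rural"),
            ("male",   "smoker", "urban")]

for sample in minority:
    new = synthesize(sample, greedy_neighbors(sample, minority, k=2))
    print(new)
```

Swapping `overlap_similarity` for another categorical measure (e.g., an inverse-frequency-weighted variant) would yield different neighbors and hence different synthetic samples, which is the effect the abstract reports when comparing five similarity measures.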