Evaluation of Parameter Settings for Training Neural Networks Using Backpropagation Algorithms: A Study With Clinical Datasets

Leema N., Khanna H. Nehemiah, Elgin Christo V. R., Kannan A.
International Journal of Operations Research and Information Systems (IJORIS), Volume 11, Issue 4 (2020), pp. 62-85
ISSN: 1947-9328 | EISSN: 1947-9336 | EISBN13: 9781799806561 | DOI: 10.4018/IJORIS.2020100104

Abstract

Artificial neural networks (ANNs) are widely used for classification, and the backpropagation (BP) algorithm is the most commonly used training algorithm. The major bottleneck in training a backpropagation neural network is fixing appropriate values for the network parameters. For a classification task, these parameters are the initial weights, biases, activation function, number of hidden layers, number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term. The objective of this work is to investigate the performance of 12 different BP algorithms and the impact of variations in these parameter values on neural network training. The algorithms were evaluated on different training and testing samples drawn from three benchmark clinical datasets, namely the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets, obtained from the University of California Irvine (UCI) machine learning repository.
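
To make the parameter list concrete, the minimal Python sketch below shows how these settings map onto a gradient-descent backpropagation classifier, here using scikit-learn's MLPClassifier on the Pima Indian Diabetes data. This is not the configuration or implementation reported in the paper; the file name "pima_indians_diabetes.csv", its column layout, and all parameter values are illustrative assumptions.

# Illustrative sketch (not the authors' code): each MLPClassifier argument
# corresponds to one of the network parameters listed in the abstract.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Assumed layout: 8 clinical attributes followed by a binary class label.
data = pd.read_csv("pima_indians_diabetes.csv", header=None).to_numpy()
X, y = data[:, :8], data[:, 8]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = MLPClassifier(
    hidden_layer_sizes=(10,),   # one hidden layer with 10 neurons
    activation="logistic",      # sigmoid activation function
    solver="sgd",               # gradient-descent backpropagation training
    learning_rate_init=0.01,    # learning rate
    momentum=0.9,               # momentum term
    max_iter=1000,              # maximum number of training epochs
    tol=1e-4,                   # minimum error (stopping tolerance)
    random_state=1,             # fixes the initial weights and biases
)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))

Varying any of these arguments (for example the hidden-layer size, learning rate, or momentum) and re-running the fit is one simple way to observe the sensitivity of training to the parameter settings that the study examines.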