An Integrated Process for Verifying Deep Learning Classifiers Using Dataset Dissimilarity Measures

Darryl Hond, Hamid Asgari, Daniel Jeffery, Mike Newman
Copyright: © 2021 | Volume: 11 | Issue: 2 | Pages: 21
ISSN: 2642-1577 | EISSN: 2642-1585 | EISBN13: 9781799864110 | DOI: 10.4018/IJAIML.289536
Cite Article

MLA

Hond, Darryl, et al. "An Integrated Process for Verifying Deep Learning Classifiers Using Dataset Dissimilarity Measures." International Journal of Artificial Intelligence and Machine Learning (IJAIML), vol. 11, no. 2, 2021, pp. 1-21. http://doi.org/10.4018/IJAIML.289536

APA

Hond, D., Asgari, H., Jeffery, D., & Newman, M. (2021). An Integrated Process for Verifying Deep Learning Classifiers Using Dataset Dissimilarity Measures. International Journal of Artificial Intelligence and Machine Learning (IJAIML), 11(2), 1-21. http://doi.org/10.4018/IJAIML.289536

Chicago

Hond, Darryl, et al. "An Integrated Process for Verifying Deep Learning Classifiers Using Dataset Dissimilarity Measures." International Journal of Artificial Intelligence and Machine Learning (IJAIML) 11, no. 2 (2021): 1-21. http://doi.org/10.4018/IJAIML.289536


Abstract

The specification and verification of algorithms are vital for safety-critical autonomous systems that incorporate deep learning elements. We propose an integrated process for verifying artificial neural network (ANN) classifiers. This process consists of an off-line verification phase and an on-line performance prediction phase. The process is intended to verify ANN classifier generalisation performance, and to this end it makes use of dataset dissimilarity measures. We introduce a novel measure for quantifying the dissimilarity between the dataset used to train a classification algorithm and the test dataset used to evaluate and verify classifier performance. A system-level requirement could specify the permitted form of the functional relationship between classifier performance and a dissimilarity measure; such a requirement could be verified by dynamic testing. Experimental results, obtained using publicly available datasets, suggest that the measures have relevance to real-world practice, both for quantifying dataset dissimilarity and for specifying and verifying classifier performance.
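
To illustrate the kind of computation the abstract describes, the sketch below pairs a simple train/test dissimilarity score with classifier accuracy on several test splits, producing points whose trend could be checked against a system-level requirement. The nearest-neighbour-distance score, the function names, and the Keras-style predict interface are assumptions made for illustration only; they are not the measure or process defined in the paper.

```python
import numpy as np


def dataset_dissimilarity(train_x, test_x):
    # Illustrative stand-in measure (an assumption, not the paper's measure):
    # the mean Euclidean distance from each test sample to its nearest
    # training sample, computed on flattened feature vectors.
    train = np.asarray(train_x, dtype=np.float64).reshape(len(train_x), -1)
    test = np.asarray(test_x, dtype=np.float64).reshape(len(test_x), -1)
    nearest = [np.linalg.norm(train - sample, axis=1).min() for sample in test]
    return float(np.mean(nearest))


def performance_vs_dissimilarity(model, train_x, test_splits):
    # Pair each test split's dissimilarity from the training set with the
    # classifier's accuracy on that split. The resulting (dissimilarity,
    # accuracy) points are what a requirement on the performance/dissimilarity
    # relationship would be evaluated against.
    results = []
    for test_x, test_y in test_splits:
        d = dataset_dissimilarity(train_x, test_x)
        preds = np.argmax(model.predict(test_x), axis=1)  # Keras-style classifier assumed
        acc = float(np.mean(preds == np.asarray(test_y)))
        results.append((d, acc))
    return results
```

In this sketch, `test_splits` would be a list of (inputs, labels) pairs, for example splits of increasing distance from the training distribution; the returned list can then be inspected or plotted to see how accuracy varies with the dissimilarity score.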