Audio-Visual Emotion Recognition System Using Multi-Modal Features

Anand Handa, Rashi Agarwal, Narendra Kohli
Copyright: © 2021 | Volume: 15 | Issue: 4 | Pages: 14
ISSN: 1557-3958 | EISSN: 1557-3966 | EISBN13: 9781799859857 | DOI: 10.4018/IJCINI.20211001.oa34
Cite Article

MLA

Handa, Anand, et al. "Audio-Visual Emotion Recognition System Using Multi-Modal Features." IJCINI, vol. 15, no. 4, 2021, pp. 1-14. http://doi.org/10.4018/IJCINI.20211001.oa34

APA

Handa, A., Agarwal, R., & Kohli, N. (2021). Audio-Visual Emotion Recognition System Using Multi-Modal Features. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 15(4), 1-14. http://doi.org/10.4018/IJCINI.20211001.oa34

Chicago

Handa, Anand, Rashi Agarwal, and Narendra Kohli. "Audio-Visual Emotion Recognition System Using Multi-Modal Features." International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) 15, no. 4 (2021): 1-14. http://doi.org/10.4018/IJCINI.20211001.oa34

Abstract

Due to highly variant face geometry and appearance, facial expression recognition (FER) remains a challenging problem. Convolutional neural networks (CNNs) are well suited to characterizing 2-D signals such as images. Therefore, for emotion recognition from video, the authors propose a feature selection model within the AlexNet architecture to extract and filter facial features automatically. Similarly, for emotion recognition from audio, the authors use a deep LSTM-RNN. Finally, they propose a probabilistic model for the fusion of the audio and visual models, combining a subject's facial features and speech. The model combines all the extracted features and uses them to train linear SVM (support vector machine) classifiers. The proposed model outperforms existing models and achieves state-of-the-art performance for the audio, visual, and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness, and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
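
To make the final classification step concrete, the following is a minimal sketch (not the authors' implementation) of fusing per-clip visual and audio feature vectors and training a linear SVM over the combined representation. The feature dimensions, the random stand-in data, and the simple concatenation-based fusion are illustrative assumptions only; the paper itself describes a probabilistic fusion model over the AlexNet-based and LSTM-RNN branches.

# Minimal sketch of feature-level fusion with a linear SVM (Python, scikit-learn).
# Assumptions (not from the paper): the AlexNet-style CNN and the LSTM-RNN have
# already been run offline, each producing one fixed-length feature vector per
# clip; the dimensions and the concatenation-based fusion are illustrative.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

NUM_CLIPS = 500      # hypothetical number of labelled clips
VISUAL_DIM = 4096    # e.g. an AlexNet fc7-sized visual descriptor (assumed)
AUDIO_DIM = 256      # e.g. the final LSTM hidden state (assumed)
EMOTIONS = ["anger", "happiness", "surprise", "fear", "disgust", "sadness", "neutral"]

# Stand-ins for features produced by the visual and audio branches.
visual_feats = rng.normal(size=(NUM_CLIPS, VISUAL_DIM)).astype(np.float32)
audio_feats = rng.normal(size=(NUM_CLIPS, AUDIO_DIM)).astype(np.float32)
labels = rng.integers(0, len(EMOTIONS), size=NUM_CLIPS)

# Fuse the two modalities by concatenating them into one vector per clip.
fused = np.concatenate([visual_feats, audio_feats], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, random_state=0, stratify=labels
)

# Linear SVM classifier over the fused features, as named in the abstract.
clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With real CNN and LSTM features in place of the random stand-ins, the same split-train-evaluate loop would yield the per-modality and fusion accuracies the abstract reports.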