Multimodal Information Fusion of Audiovisual Emotion Recognition Using Novel Information Theoretic Tools

Zhibing Xie, Ling Guan
Copyright: © 2013 | Volume: 4 | Issue: 4 | Pages: 14
ISSN: 1947-8534 | EISSN: 1947-8542 | EISBN13: 9781466635005 | DOI: 10.4018/ijmdem.2013100101
Cite Article

MLA

Xie, Zhibing, and Ling Guan. "Multimodal Information Fusion of Audiovisual Emotion Recognition Using Novel Information Theoretic Tools." IJMDEM vol.4, no.4 2013: pp.1-14. http://doi.org/10.4018/ijmdem.2013100101

APA

Xie, Z. & Guan, L. (2013). Multimodal Information Fusion of Audiovisual Emotion Recognition Using Novel Information Theoretic Tools. International Journal of Multimedia Data Engineering and Management (IJMDEM), 4(4), 1-14. http://doi.org/10.4018/ijmdem.2013100101

Chicago

Xie, Zhibing, and Ling Guan. "Multimodal Information Fusion of Audiovisual Emotion Recognition Using Novel Information Theoretic Tools," International Journal of Multimedia Data Engineering and Management (IJMDEM) 4, no.4: 1-14. http://doi.org/10.4018/ijmdem.2013100101

Abstract

This paper provides a general theoretical analysis of multimodal information fusion and applies novel information theoretic tools to a multimedia application. The most essential issues in information fusion are feature transformation and reduction of feature dimensionality. Most previous solutions rely largely on second-order statistics, which are optimal only for Gaussian-like distributions. This paper instead describes kernel entropy component analysis (KECA), which uses information entropy as a descriptor and achieves improved performance through entropy estimation. The authors present a new solution that integrates information fusion theory with these information theoretic tools and apply it to audiovisual emotion recognition, fusing the audio and video channels at both the feature level and the decision level. Experimental results demonstrate that the proposed algorithm outperforms existing methods, especially when the dimension of the feature space is substantially reduced.
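Although the article itself is behind the publisher's access wall, the following is a minimal NumPy sketch of the KECA projection step summarized in the abstract: build a kernel matrix, rank its eigenpairs by their contribution to the Renyi quadratic entropy estimate, and project the samples onto the top entropy-preserving components. The Gaussian kernel, the width sigma, and the synthetic data below are illustrative assumptions, not values taken from the paper.

import numpy as np

def keca_projection(X, n_components=2, sigma=1.0):
    """Minimal kernel entropy component analysis (KECA) projection sketch.

    X: (n_samples, n_features) array of, e.g., audio or visual features.
    The Gaussian kernel and sigma are illustrative assumptions, not values
    reported in the paper.
    """
    # Gaussian (RBF) kernel matrix between all pairs of samples
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))

    # Eigendecomposition of the symmetric kernel matrix
    eigvals, eigvecs = np.linalg.eigh(K)

    # Contribution of each eigenpair to the Renyi quadratic entropy estimate:
    # V_hat = (1 / N^2) * sum_i lambda_i * (1^T e_i)^2
    entropy_contrib = eigvals * (eigvecs.sum(axis=0) ** 2)

    # Keep the eigenpairs that preserve the most entropy (this is where KECA
    # differs from kernel PCA, which simply keeps the largest eigenvalues)
    idx = np.argsort(entropy_contrib)[::-1][:n_components]

    # Project every sample onto the selected entropy components
    return eigvecs[:, idx] * np.sqrt(np.clip(eigvals[idx], 0.0, None))

# Toy usage: reduce 20-dimensional synthetic "audio" features to 2 components
rng = np.random.default_rng(0)
audio_features = rng.normal(size=(100, 20))
Z = keca_projection(audio_features, n_components=2, sigma=2.0)
print(Z.shape)  # (100, 2)

Note that, unlike kernel PCA, the selected components are those with the largest entropy contributions lambda_i * (1^T e_i)^2, which need not coincide with the largest eigenvalues.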
