The Multimodal Emotion Information Analysis of E-Commerce Online Pricing in Electronic Word of Mouth

Jinyu Chen, Ziqi Zhong, Qindi Feng, Lei Liu
Copyright: © 2022 | Volume: 30 | Issue: 11 | Pages: 17
ISSN: 1062-7375 | EISSN: 1533-7995 | EISBN13: 9781668464434 | DOI: 10.4018/JGIM.315322
Cite Article

MLA

Chen, Jinyu, et al. "The Multimodal Emotion Information Analysis of E-Commerce Online Pricing in Electronic Word of Mouth." Journal of Global Information Management (JGIM), vol. 30, no. 11, 2022, pp. 1-17. http://doi.org/10.4018/JGIM.315322

APA

Chen, J., Zhong, Z., Feng, Q., & Liu, L. (2022). The Multimodal Emotion Information Analysis of E-Commerce Online Pricing in Electronic Word of Mouth. Journal of Global Information Management (JGIM), 30(11), 1-17. http://doi.org/10.4018/JGIM.315322

Chicago

Chen, Jinyu, Ziqi Zhong, Qindi Feng, and Lei Liu. "The Multimodal Emotion Information Analysis of E-Commerce Online Pricing in Electronic Word of Mouth," Journal of Global Information Management (JGIM) 30, no. 11 (2022): 1-17. http://doi.org/10.4018/JGIM.315322


Abstract

E-commerce has developed rapidly, and product promotion is the means by which e-commerce platforms stimulate consumers' purchasing activity. Modeling market demand and managing the computational complexity of the decision-making process are the key obstacles to optimizing dynamic pricing for e-commerce product lines. Therefore, building on multimodal emotion information recognition and analysis, a Q-learning model based on a neural network is proposed, and the dynamic pricing problem of the product line is studied. A multimodal fusion model combining speech emotion recognition and image emotion recognition is established to classify consumers' emotions, and the resulting classifications serve as auxiliary evidence for understanding and analyzing market demand. The long short-term memory (LSTM) classifier performs excellently in image feature extraction: its accuracy is 3.92%-6.74% higher than that of comparable classifiers, and the accuracy of the best image single-feature model is 9.32% higher than that of the speech single-feature model.
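The abstract names the building blocks (a fused speech/image emotion classifier feeding a neural-network Q-learning pricing policy) but gives no implementation details. The PyTorch sketch below is only an illustration of how those pieces could fit together; the emotion classes, the five discrete price levels, the fusion weight, and the network shape are all assumptions for the example, not the authors' published method.

```python
# Hypothetical sketch of emotion-informed dynamic pricing via Q-learning.
# All constants and shapes are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

N_EMOTIONS = 4   # assumed number of consumer emotion classes
N_PRICES = 5     # assumed number of discrete price levels for the product line
GAMMA = 0.9      # discount factor on future revenue

class QNet(nn.Module):
    """Maps a demand state (fused emotion distribution + current price index)
    to one Q-value per candidate price level."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_EMOTIONS + 1, 32), nn.ReLU(),
            nn.Linear(32, N_PRICES),
        )

    def forward(self, x):
        return self.net(x)

def fuse_emotions(speech_probs, image_probs, w_image=0.6):
    # Late fusion: weighted average of the two modalities' class probabilities.
    # Weighting the image branch higher reflects the abstract's finding that
    # the image-only model outperformed the speech-only model; the exact
    # weight here is an assumption.
    return w_image * image_probs + (1 - w_image) * speech_probs

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def q_update(state, action, reward, next_state):
    """One temporal-difference step with the network as function approximator:
    Q(s, a) <- r + gamma * max_a' Q(s', a')."""
    q_sa = qnet(state)[action]
    with torch.no_grad():
        target = reward + GAMMA * qnet(next_state).max()
    loss = (q_sa - target).pow(2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy transition: fused consumer emotion shifts after a price change,
# and the observed revenue acts as the reward signal.
s = torch.cat([fuse_emotions(torch.tensor([0.1, 0.2, 0.3, 0.4]),
                             torch.tensor([0.05, 0.15, 0.4, 0.4])),
               torch.tensor([2.0])])   # current price index
s2 = torch.cat([torch.tensor([0.2, 0.2, 0.3, 0.3]), torch.tensor([3.0])])
q_update(s, action=3, reward=1.25, next_state=s2)
```

The design choice in the sketch is late fusion: each modality produces its own emotion distribution and the two are averaged, which keeps the emotion classifiers and the pricing policy independently trainable.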