A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis

Benjamin Ghansah, Ben-Bright Benuwa, Augustine Monney
Copyright: © 2021 |Volume: 17 |Issue: 1 |Pages: 24
ISSN: 1548-3657|EISSN: 1548-3665|EISBN13: 9781799859628|DOI: 10.4018/IJIIT.2021010105
Cite Article

MLA

Ghansah, Benjamin, et al. "A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis." International Journal of Intelligent Information Technologies (IJIIT), vol. 17, no. 1, 2021, pp. 1-24. http://doi.org/10.4018/IJIIT.2021010105

APA

Ghansah, B., Benuwa, B., & Monney, A. (2021). A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis. International Journal of Intelligent Information Technologies (IJIIT), 17(1), 1-24. http://doi.org/10.4018/IJIIT.2021010105

Chicago

Ghansah, Benjamin, Ben-Bright Benuwa, and Augustine Monney. "A Discriminative Locality-Sensitive Dictionary Learning With Kernel Weighted KNN Classification for Video Semantic Concepts Analysis." International Journal of Intelligent Information Technologies (IJIIT) 17, no. 1 (2021): 1-24. http://doi.org/10.4018/IJIIT.2021010105


Abstract

Video semantic concept analysis has recently received considerable research attention in the area of human-computer interaction. Reconstruction-error classification methods based on sparse coefficients do not consider discrimination, which is essential for classification performance between video samples. To further improve the accuracy of video semantic classification, this paper proposes a video semantic concept classification approach based on a sparse coefficient vector (SCV) and a kernel-based weighted KNN (KWKNN). In the proposed approach, a loss function that integrates reconstruction error and discrimination is put forward. The authors compute the loss function value between the test sample and the training samples of each class according to the loss function criterion and then vote on the statistical results. Finally, the vote results are modified by the kernel weight coefficient of each class to determine the video semantic concept. Experimental results show that this method effectively improves classification accuracy for video semantic analysis and shortens the time used in semantic classification compared with some baseline approaches.
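The vote-then-reweight scheme described in the abstract can be illustrated with a minimal kernel-weighted KNN sketch. This is not the authors' implementation: the abstract does not give the exact loss function or kernel, so the code below assumes squared Euclidean distance as a stand-in for the reconstruction-plus-discrimination loss and a Gaussian kernel for the weight coefficients; the function name `kernel_weighted_knn` and the parameters `k` and `gamma` are illustrative choices.

```python
import numpy as np

def kernel_weighted_knn(test_x, train_X, train_y, k=3, gamma=1.0):
    """Classify one sample by kernel-weighted KNN voting.

    Assumption: the per-sample loss is plain squared Euclidean distance;
    the paper's actual loss also includes a discrimination term whose
    form is not stated in the abstract.
    """
    # Loss value between the test sample and every training sample
    losses = np.sum((train_X - test_x) ** 2, axis=1)
    # Indices of the k training samples with the smallest loss
    nearest = np.argsort(losses)[:k]
    # Gaussian kernel turns losses into weights: closer samples vote more
    weights = np.exp(-gamma * losses[nearest])
    # Accumulate the weighted vote for each class, then pick the winner
    votes = {}
    for idx, w in zip(nearest, weights):
        label = train_y[idx]
        votes[label] = votes.get(label, 0.0) + float(w)
    return max(votes, key=votes.get)

# Tiny illustration with two well-separated classes
train_X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
train_y = [0, 0, 1, 1]
print(kernel_weighted_knn(np.array([0.2, 0.1]), train_X, train_y))  # → 0
```

Weighting the votes by a kernel of the loss, rather than counting neighbors equally, is what lets a single very close training sample outweigh several distant ones in the final decision.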