Boosting Convolutional Neural Networks Using a Bidirectional Fast Gated Recurrent Unit for Text Categorization

Assia Belherazem, Redouane Tlemsani
Copyright: © 2022 | Volume: 12 | Issue: 1 | Pages: 20
ISSN: 2642-1577 | EISSN: 2642-1585 | EISBN13: 9781683183907 | DOI: 10.4018/IJAIML.308815
Cite Article

MLA

Belherazem, Assia, and Redouane Tlemsani. "Boosting Convolutional Neural Networks Using a Bidirectional Fast Gated Recurrent Unit for Text Categorization." IJAIML vol.12, no.1 2022: pp.1-20. http://doi.org/10.4018/IJAIML.308815

APA

Belherazem, A. & Tlemsani, R. (2022). Boosting Convolutional Neural Networks Using a Bidirectional Fast Gated Recurrent Unit for Text Categorization. International Journal of Artificial Intelligence and Machine Learning (IJAIML), 12(1), 1-20. http://doi.org/10.4018/IJAIML.308815

Chicago

Belherazem, Assia, and Redouane Tlemsani. "Boosting Convolutional Neural Networks Using a Bidirectional Fast Gated Recurrent Unit for Text Categorization," International Journal of Artificial Intelligence and Machine Learning (IJAIML) 12, no.1: 1-20. http://doi.org/10.4018/IJAIML.308815

Abstract

This paper proposes a hybrid text classification model, termed CNN-BiFaGRU, that combines a 1D CNN with a single bidirectional fast GRU (BiFaGRU). A single convolutional layer with 128 filters slides its kernels over the word embeddings to extract local features; Spatial Dropout then drops entire 1D feature maps, and a max-pooling layer condenses the resulting vectors. Next, a bidirectional CuDNNGRU block extracts temporal features; its output is normalized by a batch-normalization layer and passed to a fully connected layer, and the output layer produces the final classification. Precision and loss were used as the main criteria on five datasets (WebKB, R8, R52, AG News, and 20NG) to assess the performance of the proposed model. The results indicate that the precision of the classifier on the WebKB, R8, and R52 datasets improved significantly, from 90% up to 97%, compared with the best results achieved by other methods such as LSTM and Bi-LSTM. Thus, the proposed model achieves higher precision and lower loss than the competing methods.
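The pipeline described in the abstract can be sketched in Keras. This is a minimal, hypothetical reconstruction, not the authors' implementation: values such as the vocabulary size, embedding dimension, sequence length, kernel size, dropout rate, and GRU units are assumptions, since the abstract specifies only the 128 convolutional filters and the layer ordering.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bifagru(vocab_size=20000, embed_dim=100,
                      seq_len=200, num_classes=4):
    """Sketch of the CNN-BiFaGRU architecture from the abstract.

    All hyperparameters except the 128 conv filters are assumed.
    """
    inputs = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    # Single 1D convolution applying 128 filters over the embeddings
    x = layers.Conv1D(128, kernel_size=5, activation="relu")(x)
    # Spatial dropout removes entire 1D feature maps, not single units
    x = layers.SpatialDropout1D(0.2)(x)
    # Max-pooling condenses the convolved feature vectors
    x = layers.MaxPooling1D(pool_size=2)(x)
    # Bidirectional GRU extracts temporal features (cuDNN-backed on GPU,
    # standing in for the CuDNNGRU block named in the abstract)
    x = layers.Bidirectional(layers.GRU(64))(x)
    # Batch normalization before the fully connected layers
    x = layers.BatchNormalization()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

In recent TensorFlow versions `layers.GRU` dispatches to the fused cuDNN kernel automatically when run on a GPU with default arguments, which is why no separate `CuDNNGRU` class appears in the sketch.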