Chinese Named Entity Recognition Method Combining ALBERT and a Local Adversarial Training and Adding Attention Mechanism


Zhang Runmei, Li Lulu, Yin Lei, Liu Jingjing, Xu Weiyi, Cao Weiwei, Chen Zhong
Copyright: © 2022 |Volume: 18 |Issue: 1 |Pages: 20
ISSN: 1552-6283|EISSN: 1552-6291|EISBN13: 9781799893967|DOI: 10.4018/IJSWIS.313946
Cite Article

MLA

Runmei, Zhang, et al. "Chinese Named Entity Recognition Method Combining ALBERT and a Local Adversarial Training and Adding Attention Mechanism." IJSWIS, vol. 18, no. 1, 2022, pp. 1-20. http://doi.org/10.4018/IJSWIS.313946

APA

Runmei, Z., Lulu, L., Lei, Y., Jingjing, L., Weiyi, X., Weiwei, C., & Zhong, C. (2022). Chinese Named Entity Recognition Method Combining ALBERT and a Local Adversarial Training and Adding Attention Mechanism. International Journal on Semantic Web and Information Systems (IJSWIS), 18(1), 1-20. http://doi.org/10.4018/IJSWIS.313946

Chicago

Runmei, Zhang, et al. "Chinese Named Entity Recognition Method Combining ALBERT and a Local Adversarial Training and Adding Attention Mechanism," International Journal on Semantic Web and Information Systems (IJSWIS) 18, no. 1 (2022): 1-20. http://doi.org/10.4018/IJSWIS.313946


Abstract

For Chinese NER tasks, very little annotated data is available. To enlarge the dataset, improve the accuracy of Chinese NER, and strengthen model stability, the authors propose adding local adversarial training to a transfer learning model and integrating an attention mechanism. The model uses ALBERT for transfer pre-training and adds a perturbation factor to the output matrix of the embedding layer to form the local adversarial training. A BiLSTM encodes the task's shared and private features, and an attention mechanism is introduced to focus on the characters most relevant to entities. Finally, the best entity labeling is obtained with a CRF. Experiments are conducted on the People's Daily 2004 dataset and Tsinghua University's open-source text classification dataset. The results show that, compared with the SOTA model, the F1 scores of the proposed method improve by 7.32 and 7.98 on the two datasets, respectively, demonstrating improved accuracy in the Chinese domain.
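The abstract says a perturbation factor is added to the embedding layer's output to form the local adversarial training. The exact formulation is not given in the abstract; a common choice for such embedding-level perturbations is an FGM-style step, where the perturbation is the gradient scaled to a fixed L2 norm epsilon. The sketch below assumes that formulation (the function name `fgm_perturb` and the epsilon value are illustrative, not from the paper):

```python
import numpy as np

def fgm_perturb(embeddings, grad, epsilon=1.0):
    """Return the adversarially perturbed embedding matrix.

    Assumed FGM-style rule: r = epsilon * g / ||g||_2, added to the
    embedding-layer output (shape: seq_len x hidden_size).
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        # No gradient signal: leave the embeddings unchanged.
        return embeddings
    return embeddings + epsilon * grad / norm

# Toy example: a 4-token sequence with hidden size 8.
emb = np.zeros((4, 8))          # embedding-layer output
g = np.ones((4, 8))             # gradient of the loss w.r.t. emb
adv = fgm_perturb(emb, g, epsilon=0.5)
```

By construction the added perturbation has L2 norm exactly epsilon, so the adversarial example stays in a small ball around the original embeddings; training on both the clean and perturbed embeddings is what makes the scheme "local" adversarial training at the embedding layer.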