Understanding Universal Adversarial Attack and Defense on Graph

Tianfeng Wang, Zhisong Pan, Guyu Hu, Yexin Duan, Yu Pan
Copyright: © 2022 | Volume: 18 | Issue: 1 | Pages: 21
ISSN: 1552-6283 | EISSN: 1552-6291 | EISBN13: 9781799893967 | DOI: 10.4018/IJSWIS.308812
Cite Article

MLA

Wang, Tianfeng, et al. "Understanding Universal Adversarial Attack and Defense on Graph." IJSWIS, vol. 18, no. 1, 2022, pp. 1-21. http://doi.org/10.4018/IJSWIS.308812

APA

Wang, T., Pan, Z., Hu, G., Duan, Y., & Pan, Y. (2022). Understanding Universal Adversarial Attack and Defense on Graph. International Journal on Semantic Web and Information Systems (IJSWIS), 18(1), 1-21. http://doi.org/10.4018/IJSWIS.308812

Chicago

Wang, Tianfeng, et al. "Understanding Universal Adversarial Attack and Defense on Graph." International Journal on Semantic Web and Information Systems (IJSWIS) 18, no. 1 (2022): 1-21. http://doi.org/10.4018/IJSWIS.308812


Abstract

Compared with traditional machine learning models, graph neural networks (GNNs) have distinct advantages in processing unstructured data. However, the vulnerability of GNNs cannot be ignored. A graph universal adversarial attack is a special type of attack on graphs that can attack any targeted victim by flipping edges connected to anchor nodes. In this paper, we propose the forward-derivative-based graph universal adversarial attack (FDGUA). First, we point out that a single node as training data is sufficient to generate an effective continuous attack vector. We then discretize the continuous attack vector based on the forward derivative. FDGUA achieves impressive attack performance: three anchor nodes yield an attack success rate higher than 80% on the Cora dataset. Moreover, we propose the first graph universal adversarial training (GUAT) to defend against universal adversarial attacks. Experiments show that GUAT effectively improves the robustness of GNNs without degrading model accuracy.
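The abstract describes flipping edges between a victim node and a small set of anchor nodes, selected by discretizing a continuous attack vector. The paper's exact FDGUA algorithm is not given here; the sketch below only illustrates the general discretization idea under assumptions: the continuous vector is treated as a per-node score (e.g. a forward derivative with respect to each candidate edge flip), and the `budget` highest-scoring nodes become anchors whose edges to the target are flipped. The function name and interface are hypothetical.

```python
import numpy as np

def discretize_attack_vector(adj, target, cont_vec, budget):
    """Illustrative discretization step (not the paper's exact method).

    adj      : (n, n) symmetric 0/1 adjacency matrix
    target   : index of the victim node whose incident edges may be flipped
    cont_vec : (n,) continuous attack vector, assumed to score each
               candidate anchor node (e.g. a forward-derivative magnitude)
    budget   : number of anchor nodes, i.e. edges to flip
    """
    scores = np.abs(np.asarray(cont_vec, dtype=float))
    scores[target] = -np.inf                 # the victim cannot anchor itself
    anchors = np.argsort(-scores)[:budget]   # top-`budget` scoring nodes
    perturbed = adj.copy()
    for a in anchors:
        flip = 1 - perturbed[target, a]      # 0 -> 1 (add) or 1 -> 0 (remove)
        perturbed[target, a] = flip
        perturbed[a, target] = flip          # keep the graph undirected
    return perturbed, anchors
```

With a budget of three anchor nodes, this kind of top-k rounding is what turns the continuous relaxation into the discrete edge flips the attack actually applies.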