RABA: A Robust Avatar Backdoor Attack on Deep Neural Network

Ying He, Zhili Shen, Chang Xia, Jingyu Hua, Wei Tong, Sheng Zhong

With the development of deep neural networks (DNNs) and the growing demand for third-party DNN models, a gap has opened for backdoor attacks. A backdoor can be injected into a third-party model while remaining highly stealthy on normal inputs, and it has therefore been widely discussed. Backdoor attacks on deep neural networks have attracted considerable attention, and a large body of research on both attacks and defenses has followed. In this paper, we propose a robust avatar backdoor attack integrated with adversarial attack techniques. Our attack evades mainstream, widely adopted detection schemes that check whether a model contains a backdoor before deployment. This reveals that, although many effective backdoor defense schemes have been proposed, backdoor attacks on DNNs still deserve attention. We evaluate our attack on three popular datasets against two high-impact detection schemes to show that it achieves strong aggressivity and stealthiness.
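To make the threat model concrete, the sketch below shows classic training-time backdoor poisoning (in the style of BadNets): a trigger patch is stamped onto a fraction of the training images, which are then relabeled to the attacker's target class. This is a generic illustration of backdoor injection, not the paper's avatar attack; the function name and parameters are hypothetical.

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, patch_size=3, seed=0):
    """Stamp a bright square trigger on a fraction of images and relabel
    them to the attacker's target class (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# Toy example: 100 grayscale 8x8 "images", all labeled class 0.
X = np.zeros((100, 8, 8))
y = np.zeros(100, dtype=int)
X_p, y_p, idx = poison(X, y, target_label=7, rate=0.1)
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is the stealthiness property the abstract refers to.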
