Crafting Adversarial Perturbations via Transformed Image Component Swapping
ISSN
1057-7149
Date Issued
2022-01-01
DOI
10.1109/TIP.2022.3204206
Abstract
Adversarial attacks have been shown to fool deep classification networks. These attacks share two key characteristics: first, the perturbations are mostly additive noise carefully crafted using the deep neural network itself; second, the noise is added to the whole image rather than treating the image as a combination of the multiple components from which it is formed. Motivated by these observations, in this research we first study the role of various image components and their impact on image classification. These manipulations require neither knowledge of the network nor external noise to be effective, and hence have the potential to be one of the most practical options for real-world attacks. Based on the significance of particular image components, we also propose a transferable adversarial attack against unseen deep networks. The proposed attack uses the projected gradient descent strategy to add the adversarial perturbation to the manipulated component image. Experiments are conducted on a wide range of networks and four databases, including ImageNet and CIFAR-100, and show that the proposed attack achieves better transferability, giving an attacker the upper hand. On the ImageNet database, the success rate of the proposed attack reaches 88.5%, whereas the current state-of-the-art attack success rate on this database is 53.8%. We further test the resiliency of the attack against one of the most successful defenses, namely adversarial training, to measure its strength. Comparison with several challenging attacks shows that (i) the proposed attack has a higher transferability rate against multiple unseen networks and (ii) its impact is hard to mitigate. We claim that, based on an understanding of the image components, this research has identified a new adversarial attack that has not been seen so far and cannot be resolved by current defense mechanisms.
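The abstract describes the core mechanism only at a high level: a projected gradient descent (PGD) perturbation is accumulated on a manipulated image component rather than on the raw pixels. The sketch below illustrates that general idea under stated assumptions; the component split (a simple low/high-frequency separation via blurring), the epsilon, step size, and iteration count are all illustrative stand-ins and not the paper's actual transform or settings.

# Minimal sketch of PGD applied to a transformed image component.
# The low/high-frequency split and all hyperparameters are assumptions
# for illustration, not the paper's exact method.
import torch
import torch.nn.functional as F

def pgd_on_component(model, image, label, eps=8/255, alpha=2/255, steps=10):
    # Hypothetical component split: a blurred copy stands in for the
    # low-frequency component, the residual for the high-frequency one.
    low = F.avg_pool2d(image, kernel_size=3, stride=1, padding=1)
    high = image - low

    # Perturbation is accumulated on the component, not the full image.
    delta = torch.zeros_like(low, requires_grad=True)
    for _ in range(steps):
        # Recompose the image from the perturbed component before classifying.
        adv = torch.clamp(low + delta + high, 0, 1)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        with torch.no_grad():
            # Standard PGD update: signed-gradient step, then L_inf projection.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return torch.clamp(low + delta + high, 0, 1).detach()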