
INTELLIGENT AND ADAPTIVE MIXUP TECHNIQUE FOR ADVERSARIAL ROBUSTNESS

ISSN
1522-4880
Date Issued
2021-01-01
Author(s)
Agarwal, Akshay
Vatsa, Mayank
Singh, Richa
Ratha, Nalini
DOI
10.1109/ICIP42928.2021.9506180
Abstract
Deep neural networks are generally trained on large amounts of data to achieve state-of-the-art accuracy in a wide range of computer vision and image analysis applications, from object recognition to natural language processing. It has also been claimed that these networks memorize the training data, which can be extracted from network parameters such as weights and gradient information. The adversarial vulnerability of deep networks is usually evaluated on the unseen test set of a database. If the network is memorizing the data, then small perturbations added to the training images should not drastically change its performance. Based on this assumption, we first evaluate the robustness of deep neural networks to small perturbations added to the training images used for learning the network parameters. We observe that, even though the network has seen these images, it remains vulnerable to such small perturbations. We further propose a novel data augmentation technique to increase the robustness of deep neural networks to these perturbations.
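For orientation, the paper builds on mixup-style data augmentation, which trains on convex combinations of image pairs and their labels. The sketch below shows plain mixup (Zhang et al., 2018) with NumPy; the paper's intelligent and adaptive variant chooses the mixing more carefully, and the function name and interface here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mixup_batch(images, labels, alpha=0.2, rng=None):
    """Plain mixup sketch: blend each image/label with a randomly
    paired partner from the same batch.

    images : float array, shape (batch, ...)
    labels : one-hot float array, shape (batch, num_classes)
    alpha  : Beta-distribution parameter controlling mixing strength
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)           # mixing coefficient in (0, 1)
    perm = rng.permutation(len(images))    # random pairing within the batch
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_images, mixed_labels, lam
```

Because the mixed labels stay valid probability distributions (they still sum to one), the batch can be fed directly to a cross-entropy loss during training.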
Subjects
  • Adversarial perturbat...
  • Data augmentation tec...
  • Deep networks
  • Object recognition
  • Robustness

Copyright © 2016-2025  Indian Institute of Technology Jodhpur

Developed and maintained by S. R. Ranganathan Learning Hub, IIT Jodhpur.

Built with DSpace-CRIS software - Extension maintained and optimized by 4Science
