Block switching: a stochastic approach for deep learning security
Wang, Xiao; Lin, Xue; Wang, Siyue; Chen, Pin-Yu; Chin, Sang
Recent studies of adversarial attacks have revealed the vulnerability
of modern deep learning models: subtly crafted perturbations of the
input can cause a trained network with high accuracy to produce
arbitrary incorrect predictions, while remaining imperceptible to the
human visual system. In this paper, we introduce Block Switching (BS),
a defense strategy against adversarial attacks based on stochasticity.
BS replaces a block of model layers with multiple parallel channels,
and the active channel is randomly assigned at run time, making it
unpredictable to the adversary. We show empirically that BS leads to a
more dispersed input gradient distribution and superior defense
effectiveness compared with other stochastic defenses such as
stochastic activation pruning (SAP). Compared to other defenses, BS is
also characterized by the following features: (i) BS causes a smaller
drop in test accuracy; (ii) BS is attack-independent; and (iii) BS is
compatible with other defenses and can be used jointly with them.
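
As an illustration only (not part of the paper), the sketch below shows one way the run-time channel switching described above could be implemented. It assumes a PyTorch-style model; the BlockSwitch class, the make_channel helper, and the choice of three channels are hypothetical names and parameters chosen for demonstration.

    # Minimal sketch of a Block Switching layer, assuming PyTorch
    # (the paper does not prescribe a specific framework).
    import random
    import torch.nn as nn

    class BlockSwitch(nn.Module):
        """Replaces one block of a model with several parallel channels.

        On each forward pass a single channel is selected uniformly at
        random, so the effective network seen by an adversary changes
        from run to run.
        """
        def __init__(self, channels):
            super().__init__()
            # 'channels' is a list of sub-networks with matching
            # input/output shapes.
            self.channels = nn.ModuleList(channels)

        def forward(self, x):
            # Randomly assign the active channel at run time.
            idx = random.randrange(len(self.channels))
            return self.channels[idx](x)

    # Hypothetical usage: the lower block of a small CNN is replaced by
    # three parallel, independently initialized copies.
    def make_channel():
        return nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )

    model = nn.Sequential(
        BlockSwitch([make_channel() for _ in range(3)]),
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 10),
    )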