Article

Random Untargeted Adversarial Example on Deep Neural Network

by Hyun Kwon, Yongchul Kim, Hyunsoo Yoon and Daeseon Choi *
1 School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
2 Department of Electrical Engineering, Korea Military Academy, Seoul 01805, Korea
3 Department of Medical Information, Kongju National University, Gongju-si 32588, Korea
* Author to whom correspondence should be addressed.
† Current address: KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, Korea.
This paper is an extended version of our paper published in MILCOM 2018.
Symmetry 2018, 10(12), 738; https://doi.org/10.3390/sym10120738
Received: 19 November 2018 / Revised: 5 December 2018 / Accepted: 9 December 2018 / Published: 10 December 2018
Abstract: Deep neural networks (DNNs) have demonstrated remarkable performance in machine learning areas such as image recognition, speech recognition, intrusion detection, and pattern analysis. However, DNNs have been shown to be vulnerable to adversarial examples, which are created by adding a small amount of noise to an original sample so that the DNN misclassifies it. Such adversarial examples can lead to fatal accidents in applications such as autonomous vehicles and disease diagnostics, so their generation has recently attracted extensive research attention. An adversarial example is categorized as either targeted or untargeted. In this paper, we focus on the untargeted scenario because it requires less learning time and less distortion than the targeted one. Untargeted adversarial examples, however, suffer from a pattern vulnerability: because the original class is more similar to certain specific classes than to others, a defender may be able to infer the original class by analyzing the output classes of the untargeted adversarial examples. To overcome this problem, we propose a new method for generating untargeted adversarial examples that uses an arbitrary, randomly chosen class during the generation process. We also show that the proposed scheme can be applied to steganography. Experiments show that the scheme achieves a 100% attack success rate with minimum distortion (1.99 on MNIST and 42.32 on CIFAR10) and without the pattern vulnerability. A steganography test shows that the scheme can also fool humans: the probability of their detecting the hidden classes equals that of random selection.
Keywords: deep neural network; adversarial example; untargeted adversarial example; random selection
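To make the core idea concrete, the following is a minimal sketch of random-class untargeted generation, assuming a PyTorch classifier that returns logits over num_classes classes and an input x with a batch dimension. The Adam optimizer, the L2 distortion term, the weighting constant c, and the step count are illustrative assumptions, not the authors' exact formulation.

```python
import random

import torch
import torch.nn.functional as F

def random_untargeted_example(model, x, y_true, num_classes,
                              steps=200, lr=0.01, c=1.0):
    """Craft an adversarial example whose output class is drawn
    uniformly at random from the non-original classes, so that the
    output class leaks no information about the original class."""
    # Choose a random wrong class (stand-in for the paper's
    # arbitrary-class selection).
    y_rand = torch.tensor([random.choice(
        [k for k in range(num_classes) if k != int(y_true)])])

    delta = torch.zeros_like(x, requires_grad=True)  # perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Trade off distortion (L2 norm of the noise) against
        # confidence in the randomly chosen class.
        loss = (delta ** 2).sum() + c * F.cross_entropy(logits, y_rand)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```

Because the adversarial class is sampled uniformly from the wrong classes, the output classes of many such examples carry no statistical trace of the originals; the same property plausibly underlies the steganographic application, with the randomly chosen class replaced by a class that encodes hidden information.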