Article

AESOP: Adjustable Exhaustive Search for One-Pixel Attacks in Deep Neural Networks

1 Department of Computer Science and Engineering, Konkuk University, Seoul 05029, Republic of Korea
2 Department of Software, Korea Aerospace University, Goyang 10540, Republic of Korea
* Author to whom correspondence should be addressed.
Submission received: 13 March 2023 / Revised: 11 April 2023 / Accepted: 14 April 2023 / Published: 19 April 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Deep neural networks have achieved remarkable performance in various fields such as image recognition and natural language processing. However, recent research has revealed that even a small, imperceptible perturbation can confound well-trained neural network models and yield incorrect answers. Such adversarial examples are regarded as a key hazard to the application of machine learning techniques to safety-critical systems, such as unmanned vehicle navigation and security systems. In this study, we propose an efficient technique for searching for one-pixel attacks in deep neural networks, a recently reported class of adversarial examples. Using an exhaustive search, our method can identify one-pixel attacks that existing methods cannot detect. Moreover, the method can adjust its exhaustiveness to reduce the search space dramatically while still identifying most attacks. We present experiments on the MNIST data set to demonstrate that our adjustable search method efficiently identifies one-pixel attacks in well-trained deep neural networks that include convolutional layers.

1. Introduction

In the past decade, machine learning techniques, including deep neural networks, have made significant achievements [1,2]. The precision of artificial intelligence, particularly in image classification, recommendation systems, and natural language processing, matches or outperforms human cognitive capabilities. Despite these successes, recent research has revealed that imperceptible modifications of input images can make well-trained networks unstable; that is, adversarial perturbations induce them to misclassify input images [3,4]. Thus, adversarial examples containing imperceptible perturbations are considered significant obstacles to the deployment of neural networks in safety-critical systems, such as autonomous vehicles and air traffic collision avoidance systems [5,6].
In this study, we investigate one-pixel attacks [7] among adversarial examples. These are input images that differ from the original input image in only one pixel, but cause a given neural network to classify them incorrectly. Existing methods [7,8] for identifying one-pixel attacks utilize elaborate searches. However, they sometimes miss one-pixel attacks since they do not explore the entire search space. Moreover, the constraint solvers that they employ as a core engine cannot utilize the parallelism of state-of-the-art GPUs.
Therefore, in this paper, we propose an efficient method for identifying one-pixel attacks using an exhaustive search. Our technique explores the entire search space in parallel using GPUs. An exhaustive search can identify attacks that existing methods overlook, but in certain cases it may require a large amount of time owing to the size of the search space. Hence, we analyze the patterns of one-pixel attacks on well-trained neural networks for the MNIST data set [9]. Our experiments reveal that even convolutional neural networks (CNNs) [10,11] with over 99.7% accuracy (i.e., roughly 0.3% error rate) are subject to one-pixel attacks for approximately 2% of input images. More importantly, one-pixel attacks occur consecutively with respect to location and pixel value. Based on these results, we propose an adjustable exhaustive search that can dramatically reduce the search space while still detecting most attacks. Our method can adjust the exhaustiveness of the search: we probe alternately, skipping pixels and pixel values rather than examining every pixel index and pixel value, and our search exploits the locality of the attack distribution. We present our analysis and experimental results on the MNIST data set to demonstrate that our technique can efficiently detect one-pixel attacks. The contributions of this paper are as follows:
  • This study performs thorough experiments on one-pixel attacks against well-trained CNNs using the entire MNIST testing set. The experimental results provide deep insights into one-pixel attacks on handwritten digits, which can help improve the robustness of neural networks.
  • Based on the attack patterns we have observed, we propose a novel adjustable exhaustive search algorithm for the one-pixel attack problem.
  • We develop an efficient tool that identifies one-pixel attacks via an adjustable exhaustive search. It identifies over 99% of one-pixel attacks in approximately 10% of the time required by the full exhaustive search.
The remainder of this paper is organized as follows: Section 2 presents the related research. In Section 3, we formalize the one-pixel attack problem we study in this paper. In Section 4, we present our first algorithm for exhaustively searching one-pixel attacks and provide insights gained from exhaustive experiments. In Section 5, we propose an adjustable exhaustive search to identify attacks efficiently and present experimental results to validate the proposed adjustable technique. Finally, the conclusions are presented in Section 6.

2. Related Work

Recently, Szegedy et al. [3] reported adversarial examples, inputs with imperceptible perturbations of a given original input image that nonetheless cause a neural network model to classify them incorrectly. They generated adversarial examples that were extremely close to, and visually difficult to distinguish from, the originals, yet were misclassified by all the networks they studied. Subsequently, several studies [12,13,14,15] have been conducted to identify, detect, and defend against such adversarial examples. Goodfellow et al. [12] demonstrated that linear behavior in high-dimensional spaces is sufficient to yield adversarial examples; they designed a fast method for producing adversarial examples that makes adversarial training practical. Athalye et al. [13] demonstrated the existence of robust 3D adversarial objects and presented an algorithm for synthesizing adversarial examples over a chosen distribution of transformations. In addition, several studies [14,15] proposed efficient methods to defend against subtle adversarial examples and thereby improve the robustness of neural networks.
Su et al. [7] first presented the one-pixel attack problem and applied a differential evolution method to perturb input images. Nguyen-Son et al. [16] presented a framework called OPA2D, which generates, detects, and defends against one-pixel attacks; their technique identifies vulnerable pixels by considering differences in confidence scores. Korpihalkola et al. [17] applied the differential evolution method used in [7] to digital pathology images. Whereas these previous efforts [7,16,17] identify one-pixel attacks using differential evolution, our method is based on rigorous experiments on the MNIST data set and exploits GPU parallelism together with the observed properties of one-pixel attacks.
The formal verification of neural networks [18,19,20,21,22] is another valuable research direction for improving the robustness of neural networks. Pulina and Tacchella [18] first presented a case study on the formal verification of neural networks; their work [18,19] proposed a technique for verifying local and global invariants of multilayer perceptrons. Gehr et al. [20] presented AI², the first sound and scalable analyzer for deep neural networks, which can automatically prove safety properties of realistic CNNs by abstracting sets of input points into zonotopes. For finer-grained abstraction, Singh et al. [21] utilized polyhedra with intervals, and Tran et al. [22] employed ImageStar, which represents a set of input images.

3. One-Pixel Attacks

An adversarial example is an input to a neural network that includes a small perturbation of the original input but confounds the network by inducing it to classify the input into an incorrect class. The more imperceptible the perturbation is to the human eye, the more dangerous such an adversarial example becomes. However, many studies have not specifically investigated the closeness between an adversarial example and the original input. Su et al. [7] first proposed one-pixel attacks in the image classification field; these are input images that differ from the original input in only one pixel.
Now, we formalize the notion of one-pixel attacks studied in this paper. A neural network f, as an image classifier, takes an input image x = (x_1, ..., x_n), a vector in which each scalar element x_i represents a pixel, and classifies x into a class t_x. An additive adversarial perturbation for the input x is a vector e = (e_1, ..., e_n) of the same size as x. Given a neural network f and its input x, an adversarial example with regard to f and x is an input x' = x + e such that f classifies x' into a class t_{x'} with t_{x'} ≠ t_x and ||e|| ≤ δ for some distance threshold δ. As a special case of an adversarial example, a one-pixel attack is an input x' = x + e such that f classifies x' into a class t_{x'} with t_{x'} ≠ t_x and ||e||_0 ≤ 1. Finally, given a neural network f and its input x, the one-pixel attack problem studied in this paper is to determine whether there exists a perturbation e that produces a one-pixel attack x' with regard to f and x. In the one-pixel attack problem, the perturbation e can be described simply by the index and value of its single nonzero element.
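This definition translates directly into a simple predicate on images. The following Python sketch is an illustration only; the classifier f (a callable returning a class label) and the array-based image representation are assumptions, not part of the formalization.

```python
import numpy as np

def is_one_pixel_attack(f, x, x_prime):
    """Return True if x_prime is a one-pixel attack on x with respect to classifier f.

    f       : callable mapping an image (numpy array) to a predicted class label
    x       : original input image
    x_prime : candidate perturbed image of the same shape as x
    """
    e = x_prime.astype(int) - x.astype(int)   # additive perturbation e = x' - x (avoid uint8 wraparound)
    if np.count_nonzero(e) > 1:               # ||e||_0 <= 1: at most one pixel may differ
        return False
    return f(x_prime) != f(x)                 # misclassification: t_{x'} != t_x
```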
Figure 1 shows an example of a one-pixel attack on a well-trained network for the MNIST data set. Figure 1a shows the original input classified as 7, and Figure 1b shows a one-pixel attack classified incorrectly as 9.

4. Exhaustive Search for One-Pixel Attacks

In this section, we propose our first search algorithm for finding one-pixel attacks and present its experimental results.

4.1. Exhaustive Search Algorithm

A simple method for finding one-pixel attacks is to generate, for each pixel index and each pixel value, an input image that differs from the original in that single pixel, and to examine whether a given neural network still classifies the one-pixel-different input correctly. In the case of MNIST data, an exhaustive search explores the entire search space (i.e., 28 × 28 × 256 = 200,704 candidates per image) to determine whether a one-pixel attack exists for a given input image. Since an exhaustive search inspects all possible one-pixel attacks with respect to pixel locations and values, we can capture all the characteristics of one-pixel attacks. Although this method must evaluate a large number of one-pixel-different images, we can exploit the parallelism of GPUs. Algorithm 1 presents our exhaustive search algorithm for the one-pixel attack problem described in Section 3. First, it identifies the class t_x of a given original input image x (line 2). For each pixel (i, j) and each possible integer value val of the pixel, our algorithm generates an input x' that differs only at the pixel (i, j), which is set to the value val (lines 6–7). It then examines whether the output class t_{x'} of x' differs from the output class t_x of the original input x (line 9). The loops are executed in parallel on GPUs. After the loops terminate, it returns the set E of perturbations that yield a one-pixel attack (line 13).
Algorithm 1: Exhaustive search for one-pixel attacks for the MNIST data set
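Algorithm 1 itself appears as a figure in the published article; the following Python sketch reconstructs its steps from the prose description above, under the assumption that the classifier f maps a 28 × 28 image to a class label (all function and variable names are illustrative). The nested loops are written sequentially for clarity, whereas the implementation dispatches the candidate images to the GPU in parallel batches.

```python
import numpy as np

def exhaustive_search(f, x):
    """Exhaustive search for one-pixel attacks on a 28 x 28 MNIST image.

    f : classifier mapping a 28 x 28 uint8 image to a class label
    x : original input image (28 x 28 array, values 0..255)

    Returns the set E of perturbations (i, j, val) that yield a one-pixel attack.
    """
    t_x = f(x)                                   # class of the original input (cf. line 2)
    E = set()
    for i in range(28):                          # pixel row
        for j in range(28):                      # pixel column
            for val in range(256):               # candidate pixel value
                if val == x[i, j]:
                    continue                     # identical image, not a perturbation
                x_prime = x.copy()               # one-pixel-different input (cf. lines 6-7)
                x_prime[i, j] = val
                if f(x_prime) != t_x:            # misclassified -> attack found (cf. line 9)
                    E.add((i, j, val))
    return E                                     # all one-pixel perturbations (cf. line 13)
```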

4.2. Experimental Results of Exhaustive Search

We implemented a tool for the exhaustive search algorithm described in Section 4.1 and experimented on the MNIST data set [9] using three well-trained CNNs [10,11]. All experiments were performed on a PC with a 3.70 GHz 10-core processor, 128 GB of memory, and a GPU with 328 Tensor cores. To analyze the patterns of one-pixel attacks and the performance of our exhaustive search algorithm, we used the 10,000 images (size: 28 × 28) of the MNIST testing set. Table 1 presents the architectures, accuracies, and OPA (one-pixel attack) error rates of the three networks, where the OPA error rate is the rate of input images admitting a one-pixel attack. For example, the neural network f1 consists of 10 convolutional layers and a fully connected layer; its error rate is 0.22%, and its OPA error rate is 2.16%. Note that all three networks (f1, f2, and f3) exhibit OPA error rates that are 3–10 times higher than their error rates. This result implies that even very well-trained networks can be significantly vulnerable to one-pixel attacks; thus, it is essential to identify such attacks to prevent errors in neural networks.
Figure 2 illustrates an example of a one-pixel attack for each digit in MNIST. For each digit, the left figure is the original input, the middle figure presents the pixels that can be attacked, and the right figure represents the pixel values that cause a one-pixel attack, where blue indicates that the corresponding pixel is modified to a lighter value and red indicates that it is modified to a darker value. The attacks shown in Figure 2 reveal that one-pixel attacks generally emerge consecutively rather than individually. The right columns of Figure 2 show the pixel values that cause a one-pixel attack; the plots show that a one-pixel attack occurs over a long interval of pixel values rather than at one particular value. One reason why one-pixel attacks occur consecutively with respect to location is that most deep neural networks employ convolutional layers to improve their accuracy, and these layers consider the correlation with neighboring pixels rather than extracting the features of a single pixel. In addition, we attribute the consecutiveness of pixel values to the fact that a one-pixel attack succeeds as long as the pixel value exceeds a certain threshold.
Table 2 and Table 3 present the consecutiveness of one-pixel attacks in further detail. Table 2 describes the consecutiveness of the vulnerable pixels. The leftmost column indicates the number of pixels attacked consecutively, either horizontally or vertically. For instance, 2 indicates the case in which two consecutive pixels are vulnerable, with a frequency of 605 and a rate of 15.32%. In this case, even when we inspect only one of the two consecutive pixels, we can detect the corresponding attack. The rightmost column, 'Accumulated rate', indicates the accumulated rate of attacks whose number of consecutive pixels is greater than or equal to that of the corresponding row. That is, in the second row, the accumulated rate of 94.53% indicates that even when we examine only alternating pixels horizontally and vertically (i.e., a 2 × 2 lattice), which saves 75% of the search space, we can still identify 94.53% of the one-pixel attacks.
Table 3 represents the consecutiveness of the vulnerable pixel values. The leftmost column indicates the number of consecutive values that can cause a one-pixel attack. For example, 5 indicates the case in which five consecutive values induce a one-pixel attack; its frequency and rate are 22 and 0.56%, respectively. In this case, similar to consecutive pixels, even if we examine only one value out of every five pixel values, we can detect the corresponding one-pixel attack. The rightmost column presents the accumulated rate; for example, even if we inspect only one value out of every five, which saves 80% of the search space, we can identify 99.02% of one-pixel attacks.
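The accumulated rates in Tables 2 and 3 can be reproduced directly from their frequency columns. The short sketch below uses the Table 2 frequencies and computes, for each row, the share of attacks whose run of consecutive vulnerable pixels is at least as long as that row's value; it is a minimal illustration of how the tables are read, not part of the original tool.

```python
freqs = [216, 605, 630, 545, 404, 1549]   # Table 2 frequencies for runs of 1, 2, 3, 4, 5, and 6-18 pixels
total = sum(freqs)                        # 3949 one-pixel attacks in total

# Accumulated rate of row k: fraction of attacks whose run length is >= k.
remaining = total
for k, freq in enumerate(freqs, start=1):
    print(f"run >= {k}: {100 * remaining / total:.2f}%")   # 100.00%, 94.53%, 79.21%, ...
    remaining -= freq
```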
Figure 3 shows the distribution of one-pixel attacks for each digit. In each figure, the darkness represents the relative frequency with which a one-pixel attack occurs at each pixel location. In the case of digit 1, many one-pixel attacks appear vertically, following the shape of the digit. Overall, most one-pixel attacks occur in the center rather than on the edges. Table 4 presents these results in detail. The outermost edge contains only 1.77% of the one-pixel attacks, and the second outermost edge contains only 2.13%. Accordingly, even if we search for one-pixel attacks while excluding these two edges, we can identify 96.10% of the attacks.

5. Adjustable Exhaustive Search for One-Pixel Attacks

5.1. Adjustable Exhaustive Search Algorithm

The experiment described in Section 4.2 provides the following insights:
  • Even well-trained networks can suffer one-pixel attacks on the MNIST data set.
  • Over 90% of one-pixel attacks occur at consecutive pixels.
  • In terms of the pixel value, more than 99% of one-pixel attacks occur on consecutive values.
  • One-pixel attacks rarely occur at the outermost edge.
Based on these results, we propose an adjustable exhaustive search for one-pixel attacks in which users provide parameters that tune the degree of alternation. The pixel alternation parameters a_v and a_h specify how many pixels are skipped when inspecting for a one-pixel attack; that is, we examine only one pixel out of every a_v pixels vertically and one out of every a_h pixels horizontally. Similarly, the pixel value alternation parameter a_val determines how many pixel values are skipped. In addition, the parameter e configures the search to exclude the e outermost edge layers of the image.
Algorithm 2 presents our adjustable exhaustive search algorithm. First, it identifies the class t_x of a given original input image x (line 2). Then, the algorithm generates a one-pixel-different input x' only when the pixel index and value satisfy the constraint given by the parameters a_h, a_v, a_val, and e (lines 3–9). It then examines whether the output class t_{x'} of x' differs from the output class t_x of the original input image x (line 13). The loops are executed in parallel on GPUs. Once the loops terminate, it returns the set E of perturbations that yield one-pixel attacks (line 17).
Algorithm 2: AESOP (Adjustable Exhaustive Search for One-Pixel attacks) for the MNIST data set
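Like Algorithm 1, Algorithm 2 appears as a figure in the published article. The sketch below reconstructs it from the prose description, assuming that the edge parameter e excludes the e outermost rows and columns from the search and that pixel values are sampled every a_val steps; names and default values are illustrative only.

```python
import numpy as np

def aesop_search(f, x, a_h=1, a_v=1, a_val=10, e=2):
    """Adjustable exhaustive search (AESOP) for one-pixel attacks on a 28 x 28 image.

    a_h, a_v : examine only one pixel out of every a_h (horizontally) / a_v (vertically)
    a_val    : examine only one out of every a_val candidate pixel values
    e        : skip the e outermost rows and columns of the image
    """
    t_x = f(x)                                   # class of the original input (cf. line 2)
    E = set()
    for i in range(e, 28 - e, a_v):              # rows, skipping the edge and alternating
        for j in range(e, 28 - e, a_h):          # columns, likewise
            for val in range(0, 256, a_val):     # pixel values, sampled every a_val steps
                if val == x[i, j]:
                    continue
                x_prime = x.copy()               # candidate satisfying the constraint (cf. lines 3-9)
                x_prime[i, j] = val
                if f(x_prime) != t_x:            # misclassified -> attack found (cf. line 13)
                    E.add((i, j, val))
    return E                                     # perturbations found (cf. line 17)
```

With the parameters (a_h, a_v, a_val, e) = (1, 1, 1, 0), this sketch degenerates to the exhaustive search of Algorithm 1.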

5.2. Experimental Results of Adjustable Exhaustive Search

We implemented a tool for the adjustable exhaustive search algorithm described in Section 5.1 and performed experiments in the same environment as in Section 4.2 to demonstrate that our adjustable algorithm can save search space while still identifying most one-pixel attacks.
Table 5 presents a performance comparison for various parameter values, where ES refers to the exhaustive search described in Section 4 and AESOP (Adjustable Exhaustive Search for One-Pixel attacks) denotes the adjustable exhaustive search described in Section 5. In addition, OPA is the number of one-pixel attacks identified, Time is the average running time (in seconds) to determine whether a given input image is vulnerable to one-pixel attacks, and the final column (%) indicates how many of the one-pixel attacks found by ES are also identified by AESOP. In particular, for model f1, AESOP with the parameters (1, 1, 10, 2) identifies 99.1% of the one-pixel attacks while consuming only about one-tenth of the time, and the parameters (1, 2, 10, 2) identify 89.4% of the attacks in about 1/22 of the time. For f2 and f3, AESOP exhibits similarly remarkable performance.
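The running-time savings in Table 5 roughly track the fraction of the full 28 × 28 × 256 search space that a parameter setting inspects. The rough estimate below again assumes that e excludes the e outermost rows and columns; it ignores per-image overheads, so the figure is only indicative.

```python
import math

def search_fraction(a_h, a_v, a_val, e, size=28, values=256):
    """Rough fraction of the exhaustive 28 x 28 x 256 search space inspected by AESOP."""
    pixels = math.ceil((size - 2 * e) / a_h) * math.ceil((size - 2 * e) / a_v)
    vals = math.ceil(values / a_val)
    return (pixels * vals) / (size * size * values)

# Example: the setting (1, 1, 10, 2) from Table 5
print(f"{search_fraction(1, 1, 10, 2):.3f}")   # about 0.075 of the full space
```

For the setting (1, 1, 10, 2), this estimate gives roughly 7.5% of the full space, which is broadly consistent with the roughly one-tenth running time reported in Table 5.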
Figure 4 illustrates the relationship between each parameter (i.e., a_h, a_v, and a_h × a_v), the OPA rate, and the running time. That is, each plot shows how the OPA rate and the running time vary as a_h, a_v, or a_h × a_v increases while the other parameters are held constant. These results show that the running time decreases considerably more rapidly than the OPA rate. Based on Table 5 and Figure 4, users can adjust the parameter values according to their preferences, and our AESOP tool saves search space according to the user's configuration.

6. Conclusions

In this study, we have presented experimental results for one-pixel attacks on well-trained CNNs. These results provide valuable insights for improving the robustness of deep neural networks. Moreover, based on the attack patterns observed in the experiments, we have proposed an adjustable exhaustive search algorithm that reduces the search space dramatically while still identifying most attacks. Finally, we have implemented an efficient tool for the algorithm, and our experiments have yielded promising results.
There are several noteworthy issues for future study. First, although our technique has been applied to MNIST handwritten digits, various other data sets, such as the tumor data set [23] and the CIFAR image data set [24], need to be investigated for one-pixel attacks. Second, we wish to study efficient ways to adapt our technique to more state-of-the-art models, such as transformer models and contrastive learning. Finally, we plan to conduct further experiments to improve the accuracy and gains of the proposed technique.

Author Contributions

Conceptualization, W.N. and H.K.; methodology, W.N. and H.K.; software, W.N.; validation, W.N. and H.K.; formal analysis, W.N. and H.K.; investigation, W.N. and H.K.; resources, H.K.; data curation, W.N. and H.K.; writing—original draft preparation, W.N. and H.K.; writing—review and editing, W.N. and H.K.; visualization, W.N.; supervision, W.N. and H.K.; project administration, W.N. and H.K.; funding acquisition, W.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) under grant number 2021R1F1A105038911.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  2. Sze, V.; Chen, Y.H.; Yang, T.J.; Emer, J.S. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proc. IEEE 2017, 105, 2295–2329. [Google Scholar] [CrossRef]
  3. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.J.; Fergus, R. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
  4. Huang, X.; Kroening, D.; Ruan, W.; Sharp, J.; Sun, Y.; Thamo, E.; Wu, M.; Yi, X. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Comput. Sci. Rev. 2020, 37, 100270. [Google Scholar] [CrossRef]
  5. Huang, X.; Kwiatkowska, M.; Wang, S.; Wu, M. Safety Verification of Deep Neural Networks. In Proceedings of the 29th International Conference of Computer Aided Verification (CAV), Heidelberg, Germany, 24–28 July 2017; pp. 3–29. [Google Scholar]
  6. Katz, G.; Barrett, C.W.; Dill, D.L.; Julian, K.; Kochenderfer, M.J. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In Proceedings of the 29th International Conference of Computer Aided Verification (CAV), Heidelberg, Germany, 24–28 July 2017; pp. 97–117. [Google Scholar]
  7. Su, J.; Vargas, D.V.; Sakurai, K. One Pixel Attack for Fooling Deep Neural Networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841. [Google Scholar] [CrossRef]
  8. Gopinath, D.; Pasareanu, C.S.; Wang, K.; Zhang, M.; Khurshid, S. Symbolic execution for attribution and attack synthesis in neural networks. In Proceedings of the 41st International Conference on Software Engineering: Companion Proceedings (ICSE), Montreal, QC, Canada, 25–31 May 2019; pp. 282–283. [Google Scholar]
  9. LeCun, Y.; Cortes, C.; Burges, C.J. The MNIST Database of Handwritten Digits. 1998. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 21 August 2022).
  10. Lawrence, S.; Giles, C.; Tsoi, A.C.; Back, A. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural Netw. 1997, 8, 98–113. [Google Scholar] [CrossRef] [PubMed]
  11. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NeurIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1106–1114. [Google Scholar]
  12. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  13. Athalye, A.; Engstrom, L.; Ilyas, A.; Kwok, K. Synthesizing Robust Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning (ICML), Vienna, Austria, 25–31 July 2018; Volume 80, pp. 284–293. [Google Scholar]
  14. Yuan, X.; He, P.; Zhu, Q.; Li, X. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2805–2824. [Google Scholar] [CrossRef] [PubMed]
  15. Luo, W.; Zhang, H.; Kong, L.; Chen, Z.; Tang, K. Defending Adversarial Examples by Negative Correlation Ensemble. In Proceedings of the International Conference on Data Mining and Big Data, Beijing, China, 21–24 November 2022; pp. 424–438. [Google Scholar]
  16. Nguyen-Son, H.; Thao, T.P.; Hidano, S.; Bracamonte, V.; Kiyomoto, S.; Yamaguchi, R.S. OPA2D: One-Pixel Attack, Detection, and Defense in Deep Neural Networks. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–10. [Google Scholar]
  17. Korpihalkola, J.; Sipola, T.; Kokkonen, T. Color-Optimized One-Pixel Attack Against Digital Pathology Images. In Proceedings of the 29th Conference of Open Innovations Association (FRUCT), Tampere, Finland, 12–14 May 2021; pp. 206–213. [Google Scholar]
  18. Pulina, L.; Tacchella, A. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. In Proceedings of the 22nd International Conference of Computer Aided Verification (CAV), Edinburgh, UK, 15–19 July 2010; pp. 243–257. [Google Scholar]
  19. Pulina, L.; Tacchella, A. Challenging SMT solvers to verify neural networks. AI Commun. 2012, 25, 117–135. [Google Scholar] [CrossRef]
  20. Gehr, T.; Mirman, M.; Drachsler-Cohen, D.; Tsankov, P.; Chaudhuri, S.; Vechev, M.T. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 21–23 May 2018; pp. 3–18. [Google Scholar]
  21. Singh, G.; Gehr, T.; Püschel, M.; Vechev, M.T. An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 2019, 3, 41:1–41:30. [Google Scholar] [CrossRef]
  22. Tran, H.; Bak, S.; Xiang, W.; Johnson, T.T. Verification of Deep Convolutional Neural Networks Using ImageStars. In Proceedings of the 32nd International Conference of Computer Aided Verification (CAV), Los Angeles, CA, USA, 21–24 July 2020; pp. 18–42. [Google Scholar]
  23. Kaggle. Brain Tumor Data Set. 2020. Available online: https://www.kaggle.com/datasets/jakeshbohaju/brain-tumor (accessed on 2 January 2023).
  24. Krizhevsky, A. CIFAR Data Set. 2009. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 2 January 2023).
Figure 1. Example of a one-pixel attack.
Figure 2. One-pixel attack for each digit in MNIST.
Figure 3. Distribution of one-pixel attacks for each digit.
Figure 4. Experiment for parameters of the AESOP algorithm.
Table 1. Architectures and accuracies of networks.

Network | f1 | f2 | f3
Architecture | 10 × Conv(3 × 3), FC(11,264 × 10) | 5 × Conv(5 × 5), FC(10,240 × 10) | 4 × Conv(7 × 7), FC(3072 × 10)
Accuracy | 99.78% | 99.73% | 99.67%
Error rate | 0.22% | 0.27% | 0.33%
OPA error rate | 2.16% | 1.02% | 0.96%
Table 2. Index consecutiveness of one-pixel attacks.

Consecutive Pixels | Frequency | Rate | Accumulated Rate
1 | 216 | 5.47% | 100.00%
2 | 605 | 15.32% | 94.53%
3 | 630 | 15.95% | 79.21%
4 | 545 | 13.80% | 63.26%
5 | 404 | 10.23% | 49.46%
6–18 | 1549 | 39.23% | 39.23%
Total | 3949 | 100.00% |
Table 3. Value consecutiveness of one-pixel attacks.

Consecutive Values | Frequency | Rate | Accumulated Rate
1 | 20 | 0.51% | 100.00%
2 | 20 | 0.51% | 99.49%
3 | 17 | 0.43% | 98.99%
4 | 21 | 0.53% | 98.56%
5 | 22 | 0.56% | 99.02%
6 | 23 | 0.58% | 97.47%
7 | 30 | 0.76% | 96.89%
8 | 22 | 0.56% | 96.13%
9 | 23 | 0.58% | 95.57%
10–256 | 3726 | 94.99% | 94.99%
Total | 3949 | 100.00% |
Table 4. Regional distribution of one-pixel attacks.

Outermost Edge | Frequency | Rate | Accumulated Rate
1st | 70 | 1.77% | 100.00%
2nd | 84 | 2.13% | 98.23%
3rd | 181 | 4.58% | 96.10%
4th | 229 | 5.80% | 91.52%
5th | 367 | 9.29% | 85.72%
6th | 422 | 10.68% | 76.42%
7th | 425 | 10.76% | 65.74%
8th | 453 | 11.47% | 54.98%
9th–14th | 3018 | 43.50% | 43.50%
Total | 3949 | 100.00% |
Table 5. Performance comparison.

Network | ES OPA | ES Time | a_h | a_v | a_val | e | AESOP OPA | AESOP % | AESOP Time
f1 | 216 | 18.3 | 1 | 1 | 5 | 1 | 216 | 100.0% | 3.74
f1 | | | 1 | 1 | 5 | 2 | 216 | 100.0% | 3.25
f1 | | | 1 | 1 | 10 | 2 | 214 | 99.1% | 1.69
f1 | | | 1 | 2 | 10 | 2 | 193 | 89.4% | 0.85
f1 | | | 2 | 2 | 10 | 2 | 167 | 77.3% | 0.49
f1 | | | 2 | 2 | 15 | 2 | 167 | 77.3% | 0.32
f2 | 102 | 15.6 | 1 | 1 | 5 | 1 | 102 | 100.0% | 3.27
f2 | | | 1 | 1 | 5 | 2 | 102 | 100.0% | 2.79
f2 | | | 1 | 1 | 10 | 2 | 100 | 98.0% | 1.42
f2 | | | 1 | 2 | 10 | 2 | 88 | 86.3% | 0.73
f2 | | | 2 | 2 | 10 | 2 | 79 | 77.5% | 0.39
f2 | | | 2 | 2 | 15 | 2 | 79 | 77.5% | 0.28
f3 | 96 | 15.0 | 1 | 1 | 5 | 1 | 96 | 100.0% | 3.18
f3 | | | 1 | 1 | 5 | 2 | 96 | 100.0% | 2.74
f3 | | | 1 | 1 | 10 | 2 | 93 | 96.9% | 1.41
f3 | | | 1 | 2 | 10 | 2 | 83 | 86.5% | 0.73
f3 | | | 2 | 2 | 10 | 2 | 78 | 81.3% | 0.39
f3 | | | 2 | 2 | 15 | 2 | 78 | 81.3% | 0.28