Article

Environmental Noise Classification Using Convolutional Neural Networks with Input Transform for Hearing Aids

Department of Electronic Engineering, Inha University, Incheon 22212, Korea
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2020, 17(7), 2270; https://doi.org/10.3390/ijerph17072270
Received: 31 January 2020 / Revised: 24 March 2020 / Accepted: 25 March 2020 / Published: 27 March 2020
(This article belongs to the Special Issue Environmental Exposures and Hearing Loss)

Abstract

Hearing aids are essential for people with hearing loss, and noise estimation and classification are among the most important technologies used in such devices. This paper presents an environmental noise classification algorithm for hearing aids that uses convolutional neural networks (CNNs) with image signals transformed from sound signals. The algorithm was developed using data for ten types of noise acquired from the living environments in which such noises occur. Spectrogram images transformed from the sound data are used as the input of the CNNs after processing of the images by a sharpening mask and a median filter. The classification results of the proposed algorithm were compared with those of other noise classification methods. The proposed algorithm achieved a maximum correct classification accuracy of 99.25% for a spectrogram time length of 1 s, with the accuracy decreasing as the spectrogram time length increased to 8 s. For a spectrogram time length of 8 s with the sharpening mask and median filter, the classification accuracy was 98.73%, which is comparable with the 98.79% achieved by the conventional method for a time length of 1 s. The proposed hearing aid noise classification algorithm thus offers lower computational complexity without compromising performance.
Keywords: hearing loss; hearing aids; environmental noise; deep learning; convolutional neural networks

1. Introduction

Hearing difficulty is a symptom of hearing loss caused by anomalies in the human sound signal transmission process. The difficulty in hearing a particular sound is due to an increase in the corresponding hearing threshold and narrowing of the dynamic range [1,2]. The use of a hearing aid is one of the methods for solving hearing difficulty and compensating for hearing loss [3]. Hearing aids use various technologies such as noise reduction, sound compensation, directional microphones, and feedback cancelation, and are tuned to the hearing characteristics of the users and the environments of use [4].
Daily life is full of noises, such as those in restaurants, car horns, buzzing from electrical equipment, and random voices in the surroundings, and hearing aid technologies are continuously being developed to reduce them. However, because the operating environment of a hearing aid varies with time, place, and other factors, fully satisfactory performance is not achieved [5,6]. One of the biggest complaints of hearing aid users is the inability to completely reduce ambient noise, and the tendency of the noise to be amplified along with the human voice [7]. Speech intelligibility suffers when surrounding noise is incorrectly interpreted as a human voice, or when the voice is misinterpreted as noise and removed along with it. This is due to performance problems and inaccurate sound classification in the noise reduction algorithm [8].
The traditional noise classification algorithm in hearing aids extracts characteristic features from the data, finds the class with the highest probability given those features, and assigns the input to that class [9]. The noise classification algorithm mainly focuses on the performance of the hearing aid, which has to operate with low computational complexity and low power [10].
However, with the recent development of hearing aid chips whose CPU performance approaches that of early smartphones, such as the Ezioro 71XX, environmental noise classification algorithms that use deep learning are now feasible. Compared with image signals, it is generally not easy to extract sound signal characteristics that can be used as input data for deep learning, because the time-domain representation of a sound reveals little about its signal content or its frequency-domain characteristics. Therefore, various feature extraction algorithms are used to transform the data into the frequency domain and to distinguish the frequency characteristics of different sounds. Nevertheless, real sound signals are a mixture of different sounds, and it remains difficult to distinguish the characteristics of the contained noises and voices.
In this study, noise signal spectrograms were used to transform sound signals in the time-frequency domain into image signals for hearing aid noise classification, as an alternative to extracted frequency-domain features. A long noise estimation period was employed, and deep learning was used to overcome the low classification accuracy that such a period would otherwise incur. The image data transformed from the sound signals were used in the present study for the classification of environmental noises with the aid of convolutional neural networks (CNNs), which are among the best-performing methods for image classification.

2. Previous Research

2.1. Conventional Noise Classification Algorithms

One of the most basic classification algorithms in use is the Bayesian classifier [11]. It classifies with the help of histograms of the class-specific probabilities. The K-nearest neighbors classification algorithm is a simple process that determines the class of a new input [12]. It is suitable for simple classification problems with relatively few training features, because both the computational complexity and the time increase as the number of training features grows. Support vector machines [13,14] and neural networks [15] are discriminative classification algorithms. These algorithms can be effective when there is enough sufficiently varied data to train the classifier, and can work even in situations where the underlying probability distributions of the features are unknown. Hidden Markov models (HMMs) [10,16,17] are a widely used statistical method for speech recognition. One major advantage of HMMs over the previously described classifiers is that they account for the temporal statistics of the occurrence of different states in the features. Clustering refers to a group of unsupervised processes that group features based on their measured similarity; it is related to classification in that both divide unknown inputs into classes.

2.2. Convolutional Neural Networks

A CNN is a deep learning technology based on supervised learning, and is widely used for image processing because it maintains the spatial information of the image [18,19]. As shown in Figure 1, convolutional and pooling layers are added between the input and output layers of the present CNN, giving excellent performance in processing data composed of multi-dimensional arrays, such as color images. The convolutional layer extracts high-level features, such as edges, from the input data. The pooling layer reduces the spatial size of the convolved feature maps [20], decreasing the computational power required to process the data through dimensionality reduction.
The feature map of the input data is produced by moving a convolution filter across the input in the convolutional layer, and values are then extracted from the final feature maps in the pooling layer to reduce the computational complexity and improve accuracy [21].
In this paper, the CNN has two hidden layers: a convolutional layer with 5 × 5 filters and a max pooling layer with a 2 × 2 window. The activation function is the ReLU function, which is the most commonly used, and the loss function is the cross-entropy function. The overall data were divided into training and test sets at a ratio of 75:25, the training batch size was set to 16, the number of epochs was 12, and the learning rate was set to 0.001.
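As a rough illustration of these layer operations (not the authors' implementation; the input size and filter values below are arbitrary, and training is omitted), the conv → ReLU → max-pool forward path can be sketched with NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN frameworks)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size          # crop to a multiple of the window
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((28, 28))            # stand-in for a spectrogram patch
kernel = rng.standard_normal((5, 5))           # one 5x5 convolution filter
feat = max_pool(relu(conv2d(img, kernel)))     # conv -> ReLU -> 2x2 max pooling
print(feat.shape)                              # (12, 12)
```

In the full network, many such filters are learned from the data, and the pooled maps feed the output layer trained with the cross-entropy loss.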

3. The Proposed Algorithm

In this paper, the spectrogram images of noise signals were used as input data for the CNNs, without feature extraction or conversion to the frequency domain. A spectrogram is a visual representation of the frequency spectrum of a signal with respect to time, in which the amplitude at each frequency is indicated by color. Because the spectrogram encodes energy as color, it has the advantage of presenting both the time and the frequency-domain energy information over a given period. The characteristics of the spectrogram images therefore vary with the amplitude at each frequency and with the time information.
Figure 2 shows a flow chart of the proposed environmental noise classification algorithm. The input sound signal data were transformed into spectrogram images represented in RGB colors for noise classification using the CNNs. Two types of filters were combined and used to distinguish the noise characteristics, because, unlike normal image signals, the spectrogram images of the sound signals contained irregular amplitude changes over time. Each filter was applied in turn so that its effect on the results of the proposed algorithm could be compared.
The first image filter uses a sharpening mask (method #1: spectrogram + Sharpening Mask), which enables enhancement of the boundaries of the noise characteristics [22]. The filter clearly identifies the boundaries of the colors, so that the area of the high-energy noise signals can be more clearly displayed.
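A sharpening mask of this kind can be sketched as a small convolution kernel applied to the image; the specific 3 × 3 coefficients below are a common choice and an assumption, since the paper does not list them:

```python
import numpy as np

# A common 3x3 sharpening mask (assumed coefficients; the paper does not
# specify them). The centre weight amplifies the pixel, the negative
# neighbours subtract the local average, exaggerating colour boundaries.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def sharpen(image):
    """Apply the sharpening mask with edge replication ('same' output size)."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * SHARPEN)
    return out

# A flat region is unchanged, while a step edge overshoots on both sides,
# making the boundary of a high-energy region stand out.
step = np.repeat([[1.0, 1.0, 5.0, 5.0]], 4, axis=0)
print(sharpen(step))
```

Each row of the result is [1, -3, 9, 5]: the flat pixels keep their values while the two pixels at the boundary are pushed apart, which is the boundary-enhancement effect described above.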
The second image filter uses a median filter (method #2: spectrogram + Median Filter) [23]. In a conventional noise signal spectrogram, there are irregular low-energy pixels between the noise feature pixels that appear red. The use of the median filter compensates for these low-energy pixels when the data is used as input for the CNNs. Sets of input data with four different time lengths (Sets A, B, C, and D) were fed to the CNNs, and the corresponding noise classification accuracies were compared.
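The median filter step can be sketched as follows; the 5 × 5 window matches the size given in Section 4.2, while the padding mode and test patch are illustrative assumptions:

```python
import numpy as np

def median_filter(image, size=5):
    """5x5 median filter: replaces each pixel with the median of its
    neighbourhood, suppressing isolated low-energy pixels that appear
    between the high-energy (red) noise feature pixels."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A single low-energy "dropout" pixel inside a bright region is filled in.
patch = np.full((7, 7), 9.0)
patch[3, 3] = 0.0
print(median_filter(patch)[3, 3])   # 9.0
```

Because the median ignores outliers rather than averaging them, the isolated pixel is replaced without blurring the surrounding energy pattern.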

4. Materials and Methods

The noise data and signal-processing conditions are first described in Section 4.1; Section 4.2 then describes the noise classification experiment and the input data transformation process. Overall, the determination of the input data and the detailed pre-processing of the image data for the CNNs are described for the noise classification algorithm.

4.1. Recording Environmental Noises

Ten kinds of noise were recorded from real environments in which hearing aids are used: white noise (white, N0), café noise around Inha University, Korea (café, N1), interior noise in a moving car (car_interior, N2), single fan noise in a laboratory (fan, N3), laundry noise in a laundry room (laundry, N4), noise in the library of Inha University (library, N5), normal noise in a university laboratory (office, N6), various noises in a restaurant (restaurant, N7), noise in a subway car (subway, N8), and traffic noise around an intersection (traffic, N9).
Each noise was recorded three times at different times on different weekdays, generating 30 min of noise data for each noise type. To match typical hearing aid use environments closely, recording locations such as a Starbucks café and the largest restaurant in the Inha Student Union building were selected. The noises were recorded on an iPhone 6S at 44.1 kHz, the highest available sampling frequency, and the artificial noise generated at the beginning and end of each recording was excluded. The noise data were subsequently down-sampled to 16 kHz, a suitable frequency for hearing aid signal processing.

4.2. Experiment Data

The Matlab R2019b program developed by MathWorks was used to divide the recorded noise data into fixed time intervals. The noise signals consisted of 16,000 samples per second and were divided into four sets with time lengths of 1.0, 2.0, 4.0, and 8.0 s, respectively. Each frame overlapped its neighbors by 25% to achieve a continuous noise signal and prevent the loss of data [24]. The spectrogram images obtained from the sound signals comprised 23,960 images with a time length of 1 s (Set A), 11,960 of length 2 s (Set B), 6,000 of length 4 s (Set C), and 3,000 of length 8 s (Set D) for the 10 noise types.
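The framing step can be sketched as follows, under the assumption that 25% overlap means each frame shares 25% of its samples with the next (i.e., the hop is 75% of the frame length):

```python
import numpy as np

FS = 16000  # sampling rate after down-sampling (Hz)

def split_frames(signal, frame_sec, overlap=0.25):
    """Split a signal into frames of frame_sec seconds, where consecutive
    frames overlap by the given fraction (hop = 75% of the frame length).
    The overlap interpretation is an assumption about the paper's setup."""
    frame_len = int(frame_sec * FS)
    hop = int(frame_len * (1 - overlap))
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop:i * hop + frame_len] for i in range(n)])

x = np.arange(FS * 4)                 # 4 s of dummy samples
frames = split_frames(x, frame_sec=1.0)
print(frames.shape)                   # (5, 16000)
```

With this hop, a 4 s signal yields five 1 s frames rather than four, which is how overlapping increases the number of spectrogram images available for training.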
The conversion functions in the signal processing tool of the Matlab Toolbox were used to transform the noise signals into spectrograms. Each spectrogram image had a resolution of 904 × 713 pixels and was used as the input of the CNNs. To increase the classification accuracy for the spectrogram images of the noise signals, a 3 × 3 sharpening mask was used to enhance the color boundaries, while a 5 × 5 median filter was used to represent the color patterns clearly and smooth random noise pixels. Figure 3 shows the result of transforming a sound signal into a spectrogram image and applying the sharpening mask and median filter. These processed spectrogram images were also subsequently used as input data.
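A magnitude spectrogram of the kind rendered into these images can be sketched with a short-time FFT; the FFT size, hop, and window below are illustrative assumptions, not the Matlab Toolbox settings used by the authors:

```python
import numpy as np

def spectrogram(signal, nfft=256, hop=128):
    """Magnitude spectrogram in dB from Hann-windowed FFT frames; the image
    the CNN consumes is a colour-mapped rendering of such a matrix.
    nfft and hop are assumed values for illustration."""
    win = np.hanning(nfft)
    n = 1 + (len(signal) - nfft) // hop
    frames = np.stack([signal[i * hop:i * hop + nfft] * win for i in range(n)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # time frames x frequency bins
    return 20 * np.log10(mag + 1e-10).T         # frequency bins x time frames

t = np.arange(16000) / 16000
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))   # 1 s of a 1 kHz tone
print(spec.shape)                                  # (129, 124)
```

For this pure tone the energy concentrates in bin 16 (1000 Hz / 62.5 Hz per bin), which would appear as a single bright horizontal band in the rendered image.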
Using Set A as an example, there were four types of input data (spectrogram image; spectrogram image + sharpening mask; spectrogram image + median filter; and spectrogram image + sharpening mask + median filter) for each of the 10 considered environments, from which 23,960 spectrogram images were obtained. The same number of images was obtained after the application of the sharpening mask and the median filter, respectively.

5. Experimental Results

In this section, the classification results for hearing aids are presented under various conditions, showing the detailed performance of the proposed algorithm. Results are presented as a confusion matrix and a receiver operating characteristic (ROC) curve. A confusion matrix is a table that is often used to describe the performance of a classification on a set of test data [25]. A ROC curve is a graph showing the performance of a classification model at all classification thresholds [26].

5.1. Performance Evaluations

This section presents the experimental results of the proposed environmental noise classification for hearing aids using CNNs. The classification produced varying results because the noise signals were randomly divided into training (0.75) and test (0.25) sets, and the spectrogram images corresponded to different times. The total number of test data was 5990 when the time length was 1 s, with 599 test data for each noise type. Because the number of input data depended on the time length, the number of test data was 2990 for 2 s, 1500 for 4 s, and 750 for 8 s.
To obtain significant classification results, the training and test sets were randomly re-divided at a constant ratio for every experiment. As indicated in Table 1, the values in the tables are the classification accuracy (%), i.e., the ratio of the number of correct predictions to the total number of inputs, and the noise classification was performed 10 times. The bold numbers in the bottom two rows of each table are the average and standard deviation of the classification accuracies, for comparison with other conditions. The conventional method is based on the deep convolutional neural network that became famous as the winner of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012 [21].
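The accuracy and the summary statistics in Table 1 follow the standard definitions; a minimal sketch (the run values below are illustrative, not the actual Table 1 numbers):

```python
import numpy as np

def accuracy(predicted, target):
    """Classification accuracy: correct predictions / total inputs."""
    predicted, target = np.asarray(predicted), np.asarray(target)
    return float(np.mean(predicted == target))

# Repeated runs are summarised by their mean and (population) standard
# deviation, as in the bottom two rows of each table.
runs = [98.98, 98.92, 98.80, 98.87, 98.78]
print(round(float(np.mean(runs)), 2), round(float(np.std(runs)), 2))
```

Reporting the standard deviation alongside the mean makes it possible to judge whether the differences between the four methods exceed the run-to-run variation from the random train/test split.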
Method #1 was used to classify the image data using the sharpening mask to emphasize the boundary of colors, while method #2 used the median filter to remove ambient noise pixels. The proposed algorithm involved the combined use of the sharpening mask and median filter for clear representation and removal of the noise pixels in the spectrogram.
Regarding the classification accuracy for the different time divisions, Set A produced the highest accuracy of all the sets, and the accuracy decreased with increasing spectrogram time length when using the CNNs. This was because the noise environment has more time to change within a longer spectrogram, and the reduced number of images increases the probability of classification error. The detailed confusion matrices of the classification results are further analyzed in Section 5.2, below. Each number is the average of 10 classifications.

5.2. Data Analysis

Table 2 is a confusion matrix of the classification results for the time length of Set A using the CNNs. The vertical noise numbers in the table represent the true class (Target Class), while the horizontal noise numbers represent the predicted class (Output Class). The numbers in the diagonal cells are the numbers of correct classifications, while those in the off-diagonal cells are the numbers of incorrect classifications. The percentage of correct classifications relative to the total number of observations is also shown for each noise number. The results reveal high classification accuracies irrespective of the use or type of filter. In addition, there are no significant differences between the spectrogram image classifications for the four different methods, because there was enough input data for classification and the performance of the CNN was excellent.
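The confusion matrix layout described above can be sketched as follows (a toy three-class example, not the paper's data):

```python
import numpy as np

def confusion_matrix(target, output, n_classes=10):
    """Rows = true class (Target Class), columns = predicted class (Output
    Class); diagonal cells count the correct classifications."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, o in zip(target, output):
        cm[t, o] += 1
    return cm

# Toy example with 3 classes: one class-1 sample misclassified as class 2.
cm = confusion_matrix([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2], n_classes=3)
print(cm.diagonal().sum() / cm.sum())   # overall accuracy ~ 0.833
```

The off-diagonal cells show exactly which pairs of noises are confused, which is how the N1/N8/N9 confusions in the following tables are identified.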
Figure 4 shows the receiver operating characteristic (ROC) curves of the multilabel classification for Table 2. The classification performance was confirmed through the ROC curves of all noises, which are close to the top and left-hand borders. All areas under the ROC curves (AUC) were 1.0, indicating excellent classification performance.
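The AUC values reported here can be computed without drawing the curve, via the rank (Mann-Whitney) statistic; a minimal sketch with illustrative scores:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a positive sample
    scores above a negative one (ties count half)."""
    s_pos, s_neg = np.asarray(scores_pos), np.asarray(scores_neg)
    wins = (s_pos[:, None] > s_neg[None, :]).sum()
    ties = (s_pos[:, None] == s_neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(s_pos) * len(s_neg))

# Perfectly separated scores give AUC = 1.0, the value reported for Set A.
print(auc([0.9, 0.8, 0.95], [0.1, 0.2, 0.3]))   # 1.0
```

For the multilabel case, one such one-vs-rest AUC is computed per noise class from that class's output scores.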
As can be seen from Table 3, when the environmental noises of the spectrogram image with a time length of Set B were classified using the four different methods, the classification accuracies were similar to those of Set A. In the cases of subway noise (N8) and traffic noise (N9), the classification rates using the sharpening mask and median filter were much better than when the filters were not used.
Café noise (N1) was misclassified as subway noise (N8) and traffic noise (N9), because the irregular high-frequency noises in the café present energy distributions similar to those of subway noise (N8) and traffic noise (N9). Subway noise (N8) was also misclassified as traffic noise (N9). Because the energy distributions of these two noises are similar, neither the sharpening mask (c) nor the median filter (d) could separate them significantly.
Figure 5 shows the receiver operating characteristic (ROC) curves of the multilabel classification for Table 3. The classification performance was confirmed through the ROC curves of all noises, which are close to the top and left-hand borders. All areas under the ROC curves (AUC) were 1.0, indicating excellent classification performance.
Table 4 presents the environmental noise classification results for the time length of Set C, which produced decreased image classification accuracies for some noise types compared with the classifications for Sets A and B. Specifically, the classification accuracies when using the median filter (Method #2) were lower than the other results. This means that the characteristics and distribution of the noise could not be distinguished over longer time lengths because the median filter caused a smoothing effect. In Table 4c, café noise (N1) is incorrectly classified as subway noise (N8) and traffic noise (N9), and traffic noise (N9) is incorrectly classified as café noise (N1) and subway noise (N8), resulting in a reduced overall classification accuracy. Café noise (N1) and traffic noise (N9) have similar energy distributions because both contain multiple voices in complex environments, with the sounds concentrated in the low-frequency range. In the cases of café noise (N1), subway noise (N8), and traffic noise (N9), for which the conventional method produces relatively low classification accuracies, the proposed algorithm affords significant improvements.
Figure 6 shows the receiver operating characteristic (ROC) curves of the multilabel classification for Table 4. The classification performance was confirmed through the ROC curves of all noises, which are close to the top and left-hand borders. All areas under the ROC curves (AUC) were 0.99, indicating excellent classification performance.
Table 5 presents the environmental noise classification results for the time length of Set D. Comparison of Table 5a,d shows that the proposed algorithm produces a 97.93% classification accuracy for café noise (N1), considerably higher than that of the other methods. The classification accuracy for traffic noise (N9) with the proposed algorithm was likewise increased, to 98.22%.
Overall, the proposed algorithm produces >96.4% classification accuracy for all environmental noises. These results show that the classification accuracy does not decrease significantly, even for the time length of Set D, when the two types of filters are applied to the input data of the CNNs.
Figure 7 shows the receiver operating characteristic (ROC) curves of the multilabel classification for Table 5. The classification performance was confirmed through the ROC curves of all noises, which are close to the top and left-hand borders. All areas under the ROC curves (AUC) were 1.0, indicating excellent classification performance.

6. Conclusions

In this study, we proposed an algorithm for the classification of environmental noises in hearing aids and verified its performance. The proposed algorithm transforms sound data into image data for use as the input of CNNs. The spectrogram images were generated by dividing 10 environmental noises into four different time lengths, and the correct classification accuracies were compared for cases in which a sharpening mask, a median filter, and both were applied to the image data, respectively. We found that the correct noise classification accuracies for hearing aids using CNNs gradually decreased with increasing time length of the spectrogram images, owing to the randomly changing noise characteristics. Regarding the type of filter used, the classification accuracy for the sharpening mask was higher than that for the median filter. In other words, it was more effective to sharpen the boundaries of the energy distribution in the spectrogram images than to remove the noise pixels from images with obvious colors. In particular, the combined use of the sharpening mask and median filter for the spectrogram time length of Set D increased the classification accuracy from 95.24% when no filter is used to 98.73%, which is comparable to the classification accuracy (98.79%) without a filter (conventional method) for the time length of Set A.
The proposed noise classification algorithm is thus effective for low computational complexity in long-term noise estimation and classification for hearing aids, as well as for environmental noise monitoring over a period of time, eliminating the need for real-time noise estimation. In addition, other types of filters that can clearly identify noise characteristics can be combined to further improve the use of CNNs for noise classification toward enhancing the performance of hearing aids.

Author Contributions

Conceptualization, G.P. and S.L.; data curation, G.P. and S.L.; formal analysis, G.P. and S.L.; funding acquisition, S.L.; investigation, G.P. and S.L.; methodology, G.P. and S.L.; project administration, S.L.; resources, S.L.; software, G.P.; supervision, S.L.; validation, G.P. and S.L.; visualization, G.P.; writing—original draft, G.P.; writing—review and editing, G.P. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2020R1A2C2004624) and supported by INHA UNIVERSITY Research Grant.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Festen, J.M.; Plomp, R. Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing. J. Acoust. Soc. Am. 1990, 88, 1725–1736. [Google Scholar] [CrossRef] [PubMed]
  2. Hygge, S.; Rönnberg, J.; Larsby, B.; Arlinger, S. Normal-hearing and hearing-impaired subjects’ ability to just follow conversation in competing speech, reversed speech, and noise backgrounds. J. Speech Lang. Hear. Res. 1992, 35, 208–215. [Google Scholar] [CrossRef] [PubMed]
  3. Plomp, R. Auditory handicap of hearing impairment and the limited benefit of hearing aids. J. Acoust. Soc. Am. 1978, 63, 533–549. [Google Scholar] [CrossRef] [PubMed]
  4. Dillon, H. Hearing Aids; Hodder Arnold: London, UK, 2008. [Google Scholar]
  5. Seo, S.; Yook, S.; Nam, K.W.; Han, J.; Kwon, S.Y.; Hong, S.H.; Kim, D.; Lee, S.; Jang, D.P.; Kim, I.Y. Real time environmental classification algorithm using neural network for hearing aids. J. Biomed. Eng. Res. 2013, 34, 8–13. [Google Scholar] [CrossRef]
  6. Duquesnoy, A. Effect of a single interfering noise or speech source upon the binaural sentence intelligibility of aged persons. J. Acoust. Soc. Am. 1983, 74, 739–743. [Google Scholar] [CrossRef] [PubMed]
  7. Knudsen, L.V.; Öberg, M.; Nielsen, C.; Naylor, G.; Kramer, S.E. Factors influencing help seeking, hearing aid uptake, hearing aid use and satisfaction with hearing aids: A review of the literature. Trends Amplif. 2010, 14, 127–154. [Google Scholar] [CrossRef] [PubMed]
  8. Cox, R.M.; McDaniel, D.M. Development of the Speech Intelligibility Rating (SIR) test for hearing aid comparisons. J. Speech Lang. Hear. Res. 1989, 32, 347–352. [Google Scholar] [CrossRef] [PubMed]
  9. Kates, J.M. Digital Hearing Aids; Plural publishing: San Diego, CA, USA, 2008. [Google Scholar]
  10. Nordqvist, P.; Leijon, A. An efficient robust sound classification algorithm for hearing aids. J. Acoust. Soc. Am. 2004, 115, 3033–3041. [Google Scholar] [CrossRef] [PubMed]
  11. Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; Wiley: New York, NY, USA, 1973. [Google Scholar]
  12. Dudani, S.A. The distance-weighted k-nearest-neighbor rule. IEEE Trans. Syst. Man Cybern. 1976, SMC-6, 325–327. [Google Scholar] [CrossRef]
  13. Burges, C.J. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
  14. Gunn, S.R. Support vector machines for classification and regression. ISIS Tech. Rep. 1998, 14, 5–16. [Google Scholar]
  15. Beale, H.D.; Demuth, H.B.; Hagan, M. Neural Network Design; Pws: Boston, MA, USA, 1996. [Google Scholar]
  16. Juang, B.H.; Rabiner, L.R. Hidden Markov models for speech recognition. Technometrics 1991, 33, 251–272. [Google Scholar] [CrossRef]
  17. Büchler, M.; Allegro, S.; Launer, S.; Dillier, N. Sound classification in hearing aids inspired by auditory scene analysis. EURASIP J. Adv. Signal Process. 2005, 2005, 387845. [Google Scholar] [CrossRef]
  18. Kim, Y. Convolutional neural networks for sentence classification. arXiv 2014, arXiv:1408.5882. [Google Scholar]
  19. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  20. Abdel-Hamid, O.; Mohamed, A.-R.; Jiang, H.; Deng, L.; Penn, G.; Yu, D. Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 2014, 22, 1533–1545. [Google Scholar] [CrossRef]
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks; Advances in Neural Information Processing Systems: Lake Tahoe, NV, USA, 2012; pp. 1097–1105. [Google Scholar]
  22. Yang, C.-C. Improving the overshooting of a sharpened image by employing nonlinear transfer functions in the mask-filtering approach. Opt.-Int. J. Light Electron Opt. 2013, 124, 2784–2786. [Google Scholar] [CrossRef]
  23. Hong, S.-W.; Kim, N.-H. A study on median filter using directional mask in salt & pepper noise environments. J. Korea Inst. Inf. Commun. Eng. 2015, 19, 230–236. [Google Scholar]
  24. Heinzel, G.; Rüdiger, A.; Schilling, R. Spectrum and Spectral Density Estimation by the Discrete Fourier Transform (DFT), Including a Comprehensive List of Window Functions and Some New Flat-Top Windows; Max-Planck-Institut für Gravitationsphysik: Hannover, Germany, 2002. [Google Scholar]
  25. Stehman, S.V. Selecting and interpreting measures of thematic classification accuracy. Remote Sens. Environ. 1997, 62, 77–89. [Google Scholar] [CrossRef]
  26. Hanley, J.A.; McNeil, B.J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982, 143, 29–36. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The structure of the convolutional neural network.
Figure 2. The process of the noise classification with convolutional neural networks by the proposed algorithm.
Figure 3. The transformed spectrogram image (b) from the noise signal (a), and when applying the sharpening mask (c) and the median filter (d).
Figure 4. The Receiver Operating Characteristic (ROC) curve for Table 2: (a) Conventional Method; (b) Method #1, only the sharpening mask is applied; (c) Method #2, only the median filter is applied; (d) proposed algorithm, both the sharpening mask and the median filter are applied; the length of time of Set A is 1 s.
Figure 5. The Receiver Operating Characteristic (ROC) curve for Table 3: (a) Conventional Method; (b) Method #1, only the sharpening mask is applied; (c) Method #2, only the median filter is applied; (d) proposed algorithm, both the sharpening mask and the median filter are applied; the length of time of Set B is 2 s.
Figure 6. The Receiver Operating Characteristic (ROC) curve for Table 4: (a) Conventional Method; (b) Method #1, only the sharpening mask is applied; (c) Method #2, only the median filter is applied; (d) proposed algorithm, both the sharpening mask and the median filter are applied; the length of time of Set C is 4 s.
Figure 7. The receiver operating characteristic (ROC) curves for Table 5: (a) Conventional Method; (b) Method #1, only the sharpening mask applied; (c) Method #2, only the median filter applied; (d) proposed algorithm, both the sharpening mask and the median filter applied. The time length of Set D is 8 s.
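The ROC curves in Figures 4–7 are traced per noise class in one-vs-rest fashion: a decision threshold is swept over the classifier's scores for that class, and each threshold yields one (false-positive rate, true-positive rate) point. A minimal sketch of this construction (illustrative only; the authors' tooling is not specified):

```python
import numpy as np

def roc_points(scores, labels):
    """One-vs-rest ROC: sort samples by descending score for one class,
    then accumulate true/false positives as the threshold is lowered."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y == 1) / max(np.sum(y == 1), 1)  # true-positive rate
    fpr = np.cumsum(y == 0) / max(np.sum(y == 0), 1)  # false-positive rate
    return fpr, tpr
```

For a perfectly separating classifier the curve rises to a true-positive rate of 1 before any false positive appears, giving an area under the curve of 1.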
Table 1. Summary of the classification accuracy (%) obtained with the different methods: (a) Conventional Method; (b) Method #1, only the sharpening mask applied; (c) Method #2, only the median filter applied; (d) proposed algorithm, both the sharpening mask and the median filter applied. The time lengths are 1 s (Set A), 2 s (Set B), 4 s (Set C), and 8 s (Set D).
(a) Conventional Method (spectrogram only)

| Test # | Set A (1 s) | Set B (2 s) | Set C (4 s) | Set D (8 s) |
|--------|-------------|-------------|-------------|-------------|
| 1 | 98.98 | 98.6 | 97.93 | 95.21 |
| 2 | 98.92 | 98.66 | 97.8 | 95.07 |
| 3 | 98.8 | 98.43 | 97.4 | 95.73 |
| 4 | 98.87 | 98.66 | 98.27 | 95.2 |
| 5 | 98.78 | 98.6 | 98.27 | 94 |
| 6 | 98.61 | 98.7 | 98.13 | 96.4 |
| 7 | 98.73 | 98.86 | 98.27 | 94.8 |
| 8 | 98.58 | 98.83 | 98.47 | 93.87 |
| 9 | 98.83 | 98.63 | 96.47 | 95.73 |
| 10 | 98.8 | 99.26 | 98.27 | 96.4 |
| AVG ¹ | 98.79 | 98.72 | 97.93 | 95.24 |
| SD ² | 0.12 | 0.23 | 0.6 | 0.87 |

(b) Method #1 (spectrogram + sharpening mask)

| Test # | Set A (1 s) | Set B (2 s) | Set C (4 s) | Set D (8 s) |
|--------|-------------|-------------|-------------|-------------|
| 1 | 98.95 | 99.13 | 98.87 | 97.6 |
| 2 | 98.9 | 99.03 | 99 | 98.27 |
| 3 | 98.88 | 99 | 98.8 | 97.87 |
| 4 | 98.9 | 99.2 | 98.73 | 98 |
| 5 | 98.95 | 98.83 | 98.73 | 98.13 |
| 6 | 98.95 | 98.46 | 97.8 | 95.87 |
| 7 | 98.9 | 98.86 | 98.73 | 98.13 |
| 8 | 98.92 | 98.6 | 99.2 | 96.13 |
| 9 | 98.82 | 98.96 | 98.6 | 97.87 |
| 10 | 98.87 | 98.9 | 97.93 | 95.73 |
| AVG ¹ | 98.9 | 98.9 | 98.64 | 97.36 |
| SD ² | 0.04 | 0.23 | 0.44 | 1.02 |
(c) Method #2 (spectrogram + median filter)

| Test # | Set A (1 s) | Set B (2 s) | Set C (4 s) | Set D (8 s) |
|--------|-------------|-------------|-------------|-------------|
| 1 | 99.12 | 99.06 | 96.2 | 95.47 |
| 2 | 99.12 | 98.83 | 96 | 97.47 |
| 3 | 99.07 | 99 | 96.07 | 96.13 |
| 4 | 99.15 | 98.83 | 96.53 | 96.27 |
| 5 | 98.95 | 98.53 | 96.13 | 95.33 |
| 6 | 99.05 | 97.99 | 95.87 | 94.27 |
| 7 | 98.95 | 99.03 | 95.53 | 97.6 |
| 8 | 99.07 | 98.63 | 97.53 | 97.07 |
| 9 | 98.93 | 99.16 | 98 | 95.33 |
| 10 | 98.97 | 99 | 97.27 | 95.73 |
| AVG ¹ | 99.04 | 98.81 | 96.51 | 96.07 |
| SD ² | 0.08 | 0.35 | 0.81 | 1.06 |

(d) Proposed algorithm (spectrogram + sharpening mask + median filter)

| Test # | Set A (1 s) | Set B (2 s) | Set C (4 s) | Set D (8 s) |
|--------|-------------|-------------|-------------|-------------|
| 1 | 99.37 | 99.13 | 99.13 | 99.07 |
| 2 | 99.23 | 99.16 | 99.33 | 98.93 |
| 3 | 99.25 | 99.2 | 99.33 | 98.67 |
| 4 | 99.35 | 99.23 | 99.27 | 98.67 |
| 5 | 99.42 | 99.26 | 99.13 | 99.2 |
| 6 | 99.2 | 99.26 | 98.93 | 97.73 |
| 7 | 98.97 | 98.5 | 98.67 | 98.8 |
| 8 | 99.38 | 98.83 | 99.47 | 98.53 |
| 9 | 99.07 | 99.33 | 98.8 | 99.2 |
| 10 | 99.28 | 99.43 | 98.87 | 98.53 |
| AVG ¹ | 99.25 | 99.13 | 99.09 | 98.73 |
| SD ² | 0.14 | 0.27 | 0.27 | 0.43 |
¹ AVG: average of the classification accuracies. ² SD: standard deviation of the classification accuracies.
Table 2. Summary of the classification accuracy (%) applying different methods in Set A: (a) Conventional Method; (b) Method #1, only the sharpening mask is applied; (c) Method #2, only the median filter is applied; (d) proposed algorithm, both the sharpening mask and the median filter are applied; the length of time of Set A is 1 s.
(a) Conventional Method (Set A)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 582.8 | 0 | 0.2 | 0.8 | 0.4 | 4 | 0.9 | 4.3 | 5.6 | 97.29 |
| N2 | 0 | 0 | 597.7 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0.9 | 99.78 |
| N3 | 0 | 0 | 0 | 599 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N4 | 0 | 0.3 | 0.1 | 0.2 | 594.1 | 0.4 | 1.1 | 0.1 | 0.3 | 2.2 | 99.18 |
| N5 | 0.1 | 0.3 | 0 | 0 | 0.2 | 593.1 | 4 | 0.3 | 0 | 0.9 | 99.02 |
| N6 | 0 | 3.4 | 0 | 0.3 | 0 | 4.3 | 587.3 | 0.8 | 0 | 2.8 | 98.05 |
| N7 | 0 | 0.8 | 0 | 0 | 0 | 0 | 0.1 | 596.8 | 0.3 | 1 | 99.63 |
| N8 | 0 | 9.7 | 0.1 | 0 | 0.1 | 0.7 | 0 | 0 | 585.9 | 2.6 | 97.81 |
| N9 | 0.2 | 8 | 0.1 | 0.1 | 0.4 | 3.1 | 1.3 | 0.4 | 3.3 | 581.9 | 97.14 |

Average Classification Accuracy: 98.79

(b) Method #1 (Set A)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 584.1 | 0 | 0.2 | 1 | 1.1 | 3.9 | 1.2 | 2.8 | 4.7 | 97.51 |
| N2 | 0 | 0 | 597.6 | 0 | 0 | 0 | 0 | 0 | 1.1 | 0.3 | 99.76 |
| N3 | 0 | 0 | 0 | 598.8 | 0 | 0 | 0 | 0 | 0 | 0.2 | 99.96 |
| N4 | 0 | 0.9 | 0 | 0 | 593.6 | 0.4 | 1.1 | 0.8 | 1.1 | 1.1 | 99.09 |
| N5 | 0.1 | 0.3 | 0 | 0 | 0 | 593.4 | 4.1 | 0.7 | 0 | 0.3 | 99.07 |
| N6 | 0 | 2.3 | 0 | 0 | 0 | 3.4 | 591.1 | 0.9 | 0 | 1.2 | 98.68 |
| N7 | 0 | 0.8 | 0 | 0 | 0 | 0.2 | 0.2 | 595.7 | 0 | 2 | 99.46 |
| N8 | 0 | 8.3 | 0 | 0 | 0 | 1.1 | 0 | 0 | 585.8 | 3.9 | 97.77 |
| N9 | 0.2 | 5.9 | 0.7 | 0 | 0.3 | 2.8 | 1.2 | 0.4 | 2.6 | 584.9 | 97.64 |

Average Classification Accuracy: 98.9
(c) Method #2 (Set A)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 581.9 | 0 | 0.3 | 0.4 | 0.9 | 5.9 | 1.1 | 3.4 | 5 | 97.14 |
| N2 | 0 | 0 | 598.6 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.3 | 99.93 |
| N3 | 0 | 0 | 0 | 598.9 | 0 | 0 | 0 | 0 | 0 | 0.1 | 99.98 |
| N4 | 0 | 0.9 | 0 | 0 | 595.3 | 0.8 | 0.8 | 0.2 | 0.8 | 0.2 | 99.39 |
| N5 | 0.1 | 0.4 | 0 | 0 | 0.1 | 594.6 | 3.2 | 0.4 | 0 | 0.1 | 99.26 |
| N6 | 0 | 1.4 | 0 | 0.3 | 0 | 2.7 | 592.8 | 1 | 0 | 0.8 | 98.96 |
| N7 | 0 | 0.9 | 0 | 0 | 0.1 | 0 | 0.1 | 597.2 | 0 | 0.7 | 99.7 |
| N8 | 0 | 6.4 | 0.1 | 0 | 0.1 | 0.4 | 0 | 0 | 589 | 2.9 | 98.33 |
| N9 | 0 | 6.2 | 0.3 | 0 | 1.3 | 2 | 1.2 | 0.1 | 2.2 | 585.6 | 97.76 |

Average Classification Accuracy: 99.04

(d) Proposed algorithm (Set A)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 586.1 | 0 | 0.2 | 0.6 | 0.6 | 3.4 | 0.6 | 3.9 | 3.7 | 97.85 |
| N2 | 0 | 0 | 598.1 | 0 | 0 | 0 | 0 | 0 | 0.6 | 0.3 | 99.85 |
| N3 | 0 | 0.1 | 0 | 598.9 | 0 | 0 | 0 | 0 | 0 | 0 | 99.98 |
| N4 | 0 | 0.6 | 0 | 0 | 597 | 0 | 0 | 0.2 | 0.4 | 0.2 | 99.67 |
| N5 | 0 | 0.1 | 0 | 0 | 0 | 595.4 | 2.8 | 0 | 0 | 0.7 | 99.41 |
| N6 | 0 | 1.3 | 0 | 0.2 | 0 | 2.8 | 593 | 1.4 | 0 | 0.2 | 99 |
| N7 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0.1 | 597.8 | 0 | 0.7 | 99.8 |
| N8 | 0 | 5.9 | 0 | 0 | 0.2 | 0.4 | 0 | 0 | 590.7 | 1.8 | 98.61 |
| N9 | 0 | 4.9 | 0.3 | 0 | 0.6 | 1.3 | 1.1 | 0.1 | 1.8 | 588.9 | 98.31 |

Average Classification Accuracy: 99.25
Note: in the original article, the correct classifications (diagonal cells) and the incorrect classifications (off-diagonal cells) are shown in different colors.
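The ACC column in Tables 2–5 is the per-class accuracy: the diagonal (correctly classified) count divided by the row total, and the final figure is the mean over the ten classes. A small sketch of this computation, using a hypothetical 3-class confusion matrix rather than the paper's data:

```python
import numpy as np

def per_class_accuracy(cm):
    """ACC column: diagonal count over row total, in percent.
    Rows are true classes, columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    return 100.0 * np.diag(cm) / cm.sum(axis=1)

# Hypothetical 3-class confusion matrix, 60 samples per class
cm = [[59, 1, 0],
      [2, 56, 2],
      [0, 3, 57]]

acc = per_class_accuracy(cm)  # per-class ACC values
avg = acc.mean()              # "Average Classification Accuracy"
```

Averaging the per-class accuracies (rather than pooling all samples) is what the tables report, which weights every noise class equally even if class counts differ.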
Table 3. Summary of the classification accuracy (%) applying different methods in Set B: (a) Conventional Method; (b) Method #1, only the sharpening mask is applied; (c) Method #2, only the median filter is applied; (d) proposed algorithm, both the sharpening mask and the median filter are applied; the length of time of Set B is 2 s.
(a) Conventional Method (Set B)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 289 | 0 | 0.2 | 0.9 | 0.6 | 1.3 | 1 | 2.8 | 3.2 | 96.66 |
| N2 | 0 | 0 | 298.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.8 | 99.74 |
| N3 | 0 | 0.1 | 0 | 298.9 | 0 | 0 | 0 | 0 | 0 | 0 | 99.96 |
| N4 | 0 | 0.2 | 0.1 | 0 | 294.3 | 0.2 | 0.4 | 0.9 | 1.3 | 1.4 | 98.44 |
| N5 | 0 | 0 | 0 | 0 | 0 | 297.2 | 1.6 | 0 | 0 | 0.2 | 99.41 |
| N6 | 0 | 0.2 | 0 | 0 | 0 | 1.8 | 297 | 0 | 0 | 0 | 99.33 |
| N7 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0.1 | 298.4 | 0 | 0 | 99.81 |
| N8 | 0 | 4.9 | 0 | 0 | 0.4 | 0.4 | 0 | 0 | 291.6 | 1.7 | 97.51 |
| N9 | 0 | 4.1 | 0.8 | 0.1 | 0.6 | 0.7 | 2 | 0.6 | 2.2 | 288 | 96.32 |

Average Classification Accuracy: 98.72

(b) Method #1 (Set B)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 293.1 | 0 | 0 | 0.2 | 0.9 | 0.9 | 0.9 | 0.9 | 2.1 | 98.03 |
| N2 | 0 | 0 | 298.4 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.4 | 99.81 |
| N3 | 0 | 0.2 | 0 | 298.4 | 0 | 0 | 0 | 0 | 0 | 0.3 | 99.81 |
| N4 | 0 | 0 | 0.2 | 0 | 295 | 0 | 0.8 | 0 | 0.3 | 2.7 | 98.66 |
| N5 | 0.2 | 0.7 | 0 | 0 | 0 | 296.3 | 1.3 | 0 | 0 | 0.4 | 99.11 |
| N6 | 0 | 0.3 | 0 | 0 | 0 | 1 | 296.2 | 0.3 | 0 | 1.1 | 99.07 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 298.7 | 0 | 0.1 | 99.89 |
| N8 | 0 | 3.3 | 0 | 0 | 0.8 | 1.1 | 0 | 0 | 290 | 3.8 | 96.99 |
| N9 | 0 | 1.7 | 0 | 0 | 0.7 | 1.7 | 1.1 | 0.8 | 1.1 | 292 | 97.66 |

Average Classification Accuracy: 98.9
(c) Method #2 (Set B)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 292.3 | 0 | 0 | 0.2 | 0.8 | 1.2 | 0.6 | 2.7 | 2.1 | 97.77 |
| N2 | 0 | 0 | 298.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 99.96 |
| N3 | 0 | 0 | 0 | 298.8 | 0.2 | 0 | 0 | 0 | 0 | 0 | 99.93 |
| N4 | 0 | 0.6 | 0.2 | 0 | 295.9 | 0 | 0.2 | 0.2 | 0.4 | 1.4 | 98.96 |
| N5 | 0 | 0.4 | 0 | 0 | 0 | 296.2 | 2.1 | 0 | 0.1 | 0.1 | 99.07 |
| N6 | 0 | 0.2 | 0.1 | 0.1 | 0.1 | 2.7 | 295.3 | 0.2 | 0 | 0.2 | 98.77 |
| N7 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 298.6 | 0.1 | 0.1 | 99.85 |
| N8 | 0 | 6.4 | 0 | 0 | 0.4 | 0.1 | 0 | 0 | 289.7 | 2.3 | 96.88 |
| N9 | 0 | 5.4 | 0.3 | 0 | 0.4 | 0.7 | 0.6 | 0 | 1.8 | 289.8 | 96.92 |

Average Classification Accuracy: 98.81

(d) Proposed algorithm (Set B)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 291 | 0 | 0.2 | 0.3 | 0.4 | 1.2 | 0 | 2.6 | 3.2 | 97.32 |
| N2 | 0 | 0 | 298.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 99.93 |
| N3 | 0 | 0.1 | 0 | 298.7 | 0.1 | 0 | 0 | 0 | 0 | 0.1 | 99.89 |
| N4 | 0 | 0.1 | 0.1 | 0 | 296.4 | 0.1 | 0.8 | 0 | 0.3 | 1.1 | 99.15 |
| N5 | 0 | 0.4 | 0 | 0 | 0 | 297.4 | 1.1 | 0 | 0 | 0 | 99.48 |
| N6 | 0 | 0.3 | 0 | 0 | 0 | 1.8 | 296.3 | 0.2 | 0 | 0.3 | 99.11 |
| N7 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 298.3 | 0.1 | 0.4 | 99.78 |
| N8 | 0 | 2 | 0 | 0 | 0.2 | 0.3 | 0 | 0 | 294.2 | 2.2 | 98.4 |
| N9 | 0.1 | 2.2 | 0.1 | 0 | 0.4 | 0.6 | 0.3 | 0.2 | 1.3 | 293.7 | 98.22 |

Average Classification Accuracy: 99.13
Note: in the original article, the correct classifications (diagonal cells) and the incorrect classifications (off-diagonal cells) are shown in different colors.
Table 4. Summary of the classification accuracy (%) applying different methods in Set C: (a) Conventional Method; (b) Method #1, only the sharpening mask is applied; (c) Method #2, only the median filter is applied; (d) proposed algorithm, both the sharpening mask and the median filter are applied; the length of time of Set C is 4 s.
(a) Conventional Method (Set C)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 150 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 144.7 | 0 | 0 | 0 | 0.4 | 0.8 | 0.4 | 2 | 1.7 | 96.44 |
| N2 | 0 | 0 | 149.1 | 0 | 0 | 0 | 0 | 0 | 0.3 | 0.6 | 99.41 |
| N3 | 0 | 0 | 0 | 149.7 | 0.3 | 0 | 0 | 0 | 0 | 0 | 99.78 |
| N4 | 0 | 0.6 | 0.3 | 0.1 | 145.7 | 0 | 0.2 | 0.9 | 0.7 | 1.6 | 97.11 |
| N5 | 0 | 0 | 0 | 0 | 0.1 | 149.7 | 0.1 | 0 | 0 | 0.1 | 99.78 |
| N6 | 0 | 0.6 | 0 | 0 | 0.1 | 2.8 | 146.2 | 0.3 | 0 | 0 | 97.48 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 149.8 | 0 | 0.1 | 99.85 |
| N8 | 0 | 2.4 | 0.6 | 0 | 0 | 0 | 0 | 0.2 | 145.2 | 1.6 | 96.81 |
| N9 | 0.2 | 7.4 | 0 | 0.1 | 0.2 | 0.6 | 0 | 0.6 | 2 | 138.9 | 92.59 |

Average Classification Accuracy: 97.93

(b) Method #1 (Set C)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 150 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 146.4 | 0 | 0 | 0.1 | 0.3 | 0.8 | 0.1 | 0.3 | 1.9 | 97.63 |
| N2 | 0 | 0 | 149.6 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.2 | 99.7 |
| N3 | 0 | 0 | 0 | 150 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N4 | 0 | 0.6 | 0 | 0.1 | 146.3 | 0.2 | 0.7 | 0.2 | 0.4 | 1.4 | 97.56 |
| N5 | 0 | 0 | 0 | 0 | 0 | 149.4 | 0.6 | 0 | 0 | 0 | 99.63 |
| N6 | 0 | 0 | 0 | 0 | 0 | 1 | 148.9 | 0 | 0 | 0.1 | 99.26 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 150 | 0 | 0 | 100 |
| N8 | 0 | 4.3 | 0 | 0 | 0.1 | 0 | 0 | 0 | 144.1 | 1.4 | 96.07 |
| N9 | 0 | 2.6 | 0 | 0.1 | 0.3 | 0.8 | 0.8 | 0.1 | 0.4 | 144.9 | 96.59 |

Average Classification Accuracy: 98.64
(c) Method #2 (Set C)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 150 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 136.2 | 0 | 0 | 0.7 | 0.2 | 2.6 | 0.7 | 3.6 | 6.1 | 90.81 |
| N2 | 0 | 0 | 148.8 | 0 | 0.1 | 0 | 0 | 0 | 0.6 | 0.6 | 99.19 |
| N3 | 0 | 0 | 0 | 149.8 | 0.2 | 0 | 0 | 0 | 0 | 0 | 99.85 |
| N4 | 0 | 0.3 | 0.3 | 1.8 | 142.7 | 0 | 0.6 | 0.3 | 2.2 | 1.8 | 95.11 |
| N5 | 0 | 0.4 | 0 | 0 | 0 | 148.6 | 0.8 | 0 | 0.2 | 0 | 99.04 |
| N6 | 0 | 0.6 | 0 | 0.2 | 0 | 3.4 | 145.1 | 0.3 | 0 | 0.3 | 96.74 |
| N7 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0.1 | 148.8 | 0.1 | 0.9 | 99.19 |
| N8 | 0 | 4.8 | 0 | 0 | 0.4 | 0.2 | 0 | 0.2 | 141.8 | 2.6 | 94.52 |
| N9 | 0 | 5.4 | 1.2 | 0.6 | 0.2 | 0.2 | 1.8 | 0.8 | 3.8 | 136 | 90.67 |

Average Classification Accuracy: 96.51

(d) Proposed algorithm (Set C)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 150 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 147.4 | 0 | 0 | 0.2 | 0 | 0.6 | 0.1 | 0.7 | 0.9 | 98.37 |
| N2 | 0 | 0 | 149.6 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.2 | 99.7 |
| N3 | 0 | 0 | 0 | 149.9 | 0.1 | 0 | 0 | 0 | 0 | 0 | 99.93 |
| N4 | 0 | 0.3 | 0.1 | 0.2 | 147.1 | 0 | 0.2 | 0.3 | 0.1 | 1.1 | 98.37 |
| N5 | 0 | 0 | 0 | 0 | 0 | 149.4 | 0.6 | 0 | 0 | 0 | 99.63 |
| N6 | 0.1 | 0 | 0 | 0.1 | 0 | 1.4 | 147.9 | 0.1 | 0 | 0.1 | 98.74 |
| N7 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 149.7 | 0 | 0.2 | 99.78 |
| N8 | 0 | 1.7 | 0 | 0 | 0.1 | 0.1 | 0 | 0 | 147.1 | 0.7 | 98.3 |
| N9 | 0 | 1.4 | 0 | 0.1 | 0.3 | 0.1 | 0.3 | 0.3 | 0.2 | 146.9 | 98.07 |

Average Classification Accuracy: 99.09
Note: in the original article, the correct classifications (diagonal cells) and the incorrect classifications (off-diagonal cells) are shown in different colors.
Table 5. Summary of the classification accuracy (%) applying different methods in Set D: (a) Conventional Method; (b) Method #1, only the sharpening mask applied; (c) Method #2, only the median filter applied; (d) proposed algorithm, both the sharpening mask and the median filter applied. The time length of Set D is 8 s.
(a) Conventional Method (Set D)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 64.6 | 0 | 0 | 0.3 | 0.4 | 1.4 | 0.1 | 3.2 | 4.9 | 86.07 |
| N2 | 0 | 0 | 74.2 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.7 | 98.96 |
| N3 | 0 | 0 | 0 | 74.6 | 0.4 | 0 | 0 | 0 | 0 | 0 | 99.41 |
| N4 | 0 | 0.3 | 0.1 | 0 | 71.6 | 0.2 | 0 | 0.6 | 0.4 | 1.8 | 95.41 |
| N5 | 0 | 0.1 | 0 | 0 | 0 | 74.3 | 0.3 | 0 | 0.1 | 0.1 | 99.11 |
| N6 | 0 | 0 | 0 | 0 | 0 | 4.2 | 70 | 0.1 | 0 | 0.7 | 93.33 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 74.3 | 0 | 0.6 | 99.11 |
| N8 | 0 | 2.1 | 0 | 0 | 2 | 0.2 | 0 | 0 | 69.2 | 1.4 | 92.3 |
| N9 | 0 | 3.7 | 0.1 | 0.1 | 0.1 | 0.6 | 0 | 0.6 | 3.3 | 66.6 | 88.74 |

Average Classification Accuracy: 95.24

(b) Method #1 (Set D)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 69.3 | 0 | 0 | 0 | 0 | 2.3 | 0 | 1.9 | 1.4 | 92.44 |
| N2 | 0 | 0 | 74.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 99.85 |
| N3 | 0 | 0 | 0 | 74.9 | 0.1 | 0 | 0 | 0 | 0 | 0 | 99.85 |
| N4 | 0 | 0 | 0.4 | 0.2 | 70.8 | 0.2 | 0.6 | 0.4 | 0.3 | 2 | 94.37 |
| N5 | 0 | 0 | 0 | 0 | 0 | 74.9 | 0.1 | 0 | 0 | 0 | 99.85 |
| N6 | 0 | 0 | 0 | 0 | 0 | 0.7 | 74.3 | 0 | 0 | 0 | 99.11 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 74.9 | 0 | 0 | 99.85 |
| N8 | 0 | 2.1 | 0 | 0 | 0.3 | 0.1 | 0.1 | 0 | 70.6 | 1.8 | 94.07 |
| N9 | 0 | 2 | 0 | 0 | 0.7 | 0.6 | 0.1 | 0.3 | 0.7 | 70.7 | 94.22 |

Average Classification Accuracy: 97.36
(c) Method #2 (Set D)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 66.4 | 0 | 0 | 1 | 0.1 | 1.8 | 0 | 1.8 | 3.9 | 88.59 |
| N2 | 0 | 0 | 74.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.9 | 98.81 |
| N3 | 0 | 0 | 0 | 74.8 | 0.2 | 0 | 0 | 0 | 0 | 0 | 99.7 |
| N4 | 0 | 0.2 | 0.3 | 1.1 | 70.7 | 0.2 | 0.3 | 0 | 0.7 | 1.4 | 94.22 |
| N5 | 0 | 0 | 0 | 0 | 0 | 74.6 | 0.4 | 0 | 0 | 0 | 99.41 |
| N6 | 0 | 0 | 0 | 0 | 0 | 2.4 | 72.4 | 0 | 0 | 0.1 | 96.59 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 74.2 | 0 | 0.3 | 98.96 |
| N8 | 0 | 1.3 | 0.1 | 0 | 0.4 | 0.1 | 0 | 0 | 71 | 2 | 94.67 |
| N9 | 0 | 1.8 | 1.7 | 0.3 | 1 | 0.4 | 1.6 | 0.4 | 0.4 | 67.3 | 89.78 |

Average Classification Accuracy: 96.07

(d) Proposed algorithm (Set D)

| True\Pred | N0 | N1 | N2 | N3 | N4 | N5 | N6 | N7 | N8 | N9 | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| N0 | 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
| N1 | 0 | 73.3 | 0 | 0 | 0.1 | 0.1 | 0.4 | 0 | 0.2 | 0.8 | 97.78 |
| N2 | 0 | 0 | 74.7 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.2 | 99.56 |
| N3 | 0 | 0 | 0 | 74.8 | 0.2 | 0 | 0 | 0 | 0 | 0 | 99.7 |
| N4 | 0 | 0.4 | 0.3 | 0 | 72.9 | 0.1 | 0 | 0 | 0.7 | 0.6 | 97.19 |
| N5 | 0 | 0 | 0 | 0 | 0 | 74.9 | 0.1 | 0 | 0 | 0 | 99.85 |
| N6 | 0 | 0 | 0 | 0 | 0 | 0.8 | 74.2 | 0 | 0 | 0 | 98.96 |
| N7 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 74.9 | 0 | 0 | 99.85 |
| N8 | 0 | 1.1 | 0 | 0 | 0.2 | 0.3 | 0 | 0 | 72.6 | 0.8 | 96.74 |
| N9 | 0 | 0.8 | 0 | 0 | 0.1 | 0.4 | 0.3 | 0.1 | 0 | 73.2 | 97.63 |

Average Classification Accuracy: 98.73
Note: in the original article, the correct classifications (diagonal cells) and the incorrect classifications (off-diagonal cells) are shown in different colors.