Article

Inline Monitoring of Battery Electrode Lamination Processes Based on Acoustic Measurements

1 Institute of Machine Tools and Production Technology, Technische Universität Braunschweig, 38106 Braunschweig, Germany
2 Battery LabFactory Braunschweig, Technische Universität Braunschweig, 38106 Braunschweig, Germany
3 Fraunhofer Institute for Surface Engineering and Thin Films IST, 38108 Braunschweig, Germany
4 Fraunhofer Project Center for Energy Storage and Systems ZESS, 38108 Braunschweig, Germany
5 Fraunhofer Institute for Ceramic Technologies and Systems IKTS, 01109 Dresden, Germany
* Authors to whom correspondence should be addressed.
Authors contributed equally.
Submission received: 3 February 2021 / Revised: 24 February 2021 / Accepted: 2 March 2021 / Published: 8 March 2021

Abstract

Due to the energy transition and the growth of electromobility, the demand for lithium-ion batteries has increased in recent years. Great demands are placed on the quality of battery cells and their electrochemical properties. Therefore, understanding the interactions between products and processes and implementing quality management measures are essential factors that require inline capable process monitoring. In battery cell lamination processes, a typical source of quality issues is missing or misaligned components (anodes, cathodes and separators). An automatic detection of missing or misaligned components, however, has not been established thus far. In this study, acoustic measurements were applied to detect components during battery cell lamination. Although the use of acoustic measurement methods for process monitoring has already proven its usefulness in various fields of application, it has not yet been applied to battery cell production. While laminating battery electrodes and separators, acoustic emissions were recorded. Signal analysis and machine learning techniques were used to acoustically distinguish the individual components being processed. In this way, the detection of components with a balanced accuracy of up to 83% was possible, proving the feasibility of the concept as an inline capable monitoring system.

1. Introduction

The current increasing importance of lithium-ion batteries for the consumer and automotive sectors poses complex challenges for industry and research [1]. Overall, the demand for lithium-ion battery cells is constantly increasing, and the quality requirements placed on these cells, such as fast charging capability [2,3,4], are rising. Battery management systems for battery cells are continuously being developed to extend battery life, and cell-embedded sensors are used to collect valuable data that can be used to optimize battery cell operating strategies [5,6]. New active material composites are being developed to achieve higher energy and power densities, and cell components such as separators and electrolytes are continuously improved, for example, to enhance cell safety [7,8]. Likewise, the production processes of lithium-ion battery cells face the challenges of increased demand and quality requirements. This implies the necessity of refining existing production concepts and implementing new processes in the process chain [9,10]. Therefore, a profound understanding of processes and the resulting products is essential. In particular, this requires the identification of process–product interdependencies. It is this understanding that allows for the improvement and further development of processes and enables a quality-assured production process.
Inline process monitoring is crucial to build up process knowledge by monitoring the quality of all intermediate products, and it allows interdependencies with process parameters and conditions to be detected along the process chain. For example, the coating thickness of active material on the electrode current collector foil is typically measured subsequent to the coating process [11]. On the one hand, this intermediate product parameter is used for controlling the coating process; on the other hand, the coating thickness is an essential quality criterion of the electrode itself. Regarding the process step of cell stack assembly, the deposition position and orientation of the electrodes are recorded to check the complete coverage of the cathode area by the respective anode, since incorrect deposition of electrodes has a drastic influence on the electrochemical performance of battery cells [12]. Visualization of the electrolyte during the electrolyte filling and wetting process is another example of inline process monitoring, since battery components that are not properly wetted are not electrochemically active [13,14]. Especially for new processes such as lamination, it is therefore important to find suitable inline process monitoring methods.
Lamination is a promising process for integration into the battery cell production chain to increase throughput and even improve certain aspects of battery cell performance [15]. In this process, the electrode and separator are bonded by means of temperature and pressure to form an electrode–separator laminate (ESL). The ESL is an intermediate product with advanced mechanical properties, such as greater bending stiffness, compared to the individual electrode or separator. Furthermore, it can reduce the risk of errors such as cross-contamination or separator displacement by fixing the laminated components in place. These properties reduce the complexity of subsequent process steps, such as stacking and winding, and thereby offer the potential for process acceleration.
In the lamination process, temperature sensors are usually used to monitor the lamination temperature and, if necessary, to control the surface temperature of the ESL after lamination. Imaging techniques can also be used to detect possible surface defects such as wrinkling of the separator after lamination. However, there is no inline capable measuring method that can be used to verify which product components are laminated and whether electrodes or separators are misaligned or missing. Incorrect positioning of the electrodes can lead to a decrease in the electrical performance of the battery cell or, in the worst case, to cell failure due to a local short circuit. A missing separator leads directly to a cell-internal short circuit. Due to the multi-layer material setup, imaging methods are not established for this purpose. A promising approach is the use of acoustic measurement methods, which can easily be installed in existing lamination equipment as a low-cost measurement system.
Apart from battery cell production, acoustic monitoring methods have been widely used in other fields of application for several decades. Their utilization ranges from hard turning and grinding processes [16] and precision machining [17] to additive manufacturing technology. In additive manufacturing, acoustic monitoring is applied in fused filament fabrication [18], laser metal deposition [19,20], selective laser melting [21], etc. For this purpose, microphones, acoustic emission sensors or fiber Bragg grating sensors are used. To the authors' knowledge, this monitoring technology has not yet been used in battery cell production.
This paper presents a novel approach for inline capable process monitoring of the lamination process used in battery cell production based on acoustic measurement methods. To this end, a feasibility study was carried out in which acoustic emissions were recorded during the lamination of battery cell components. The obtained data were assessed using neural networks, and different classification algorithms were evaluated to accomplish the most accurate signal analysis. The resulting confusion matrices show that missing components in the lamination process are detectable by acoustic measurements. Accordingly, the development of an inline capable acoustic measuring method contributes towards maintaining and extending process and product quality in battery cell production while being low cost and easily retrofittable at the same time.

2. Materials and Methods

The materials, intermediate products and processes on which the approach is based originate from the pilot production line for lithium-ion pouch cells at the Battery LabFactory Braunschweig (BLB). The shown approach was conceptualized in cooperation with the Fraunhofer Project Center for Energy Storage and Systems (ZESS). The respective composition of the components as well as the experimental setup and the applied methods are described in the following.

2.1. Materials

The active material of the anode is surface modified graphite (SMGA5), which accounts for 93 wt.% of the anode raw material. As binder material, carboxymethyl cellulose (CMC) and styrene-butadiene (SBR) were used with mass percentages of 1.33 wt.% and 2.67 wt.%, respectively. As conductive additives, 2 wt.% conductive carbon black (C65) and 1 wt.% conductive carbon black (SFG6L) were used. Distilled water was added to the dry raw material as a solvent in a ratio of 50% solid to 50% solvent. The anode paste was coated and dried on both sides of a 10 µm thick copper foil. After the subsequent calendering process, the anode thickness measured approximately 120 µm.
Nickel-cobalt-manganese oxide (NMC-622) is the active material of the cathodes used in all experiments and accounts for 93 wt.% of the dry raw material mass. The remaining components of the cathode material were 4 wt.% polyvinylidene fluoride (PVDF) binder, 2 wt.% conductive carbon black (C65) and 1 wt.% conductive carbon black (SFG6L). As a solvent, N-methyl-2-pyrrolidone (NMP) was added to the dry components in a ratio of 70% solid to 30% solvent, and the resulting cathode paste was coated onto a 10 µm thick Hydro aluminum foil. The double-sided coated cathode has a thickness of approximately 120 µm after the calendering process.
A three-layer separator was used for all experiments. The base of the separator consists of an inner layer of polyethylene (PE) and two surrounding layers of polypropylene (PP). As a laminable layer, a 0.5 µm thick polyvinylidene fluoride (PVDF) coating was applied over the entire surface on both sides. The nominal thickness of the separator is 20 µm with a calculated porosity of 46%. The air permeability (Gurley value) given by the manufacturer is 240 s per 100 mL, and the shrinkage at 105 °C is stated as 1.8% after 1 h of exposure.
Before lamination, laser cutting was used to cut the anodes and cathodes into single sheets with lateral dimensions of 70 × 50 mm² and 65 × 45 mm², respectively. The separator was cut manually into individual sheets using ceramic shears. Both the laser cutting and the lamination process were carried out in a dry room atmosphere at a room temperature of 20 °C and a dew point of −45 °C.

2.2. Test Setup and Recording Conditions

The lamination system used consists of two heated lamination rollers, which join the laminable separators and the electrodes under adjustable temperature and pressure. To avoid high local tension peaks, the upper roller has an aluminum core with a fluoroelastomer coating with a Shore hardness of 70 Sh, while the lower roller has an aluminum core with a non-stick coating. The system was manufactured by Polatek SL-Laminiertechnik GmbH. Essential process parameters of lamination are temperature, lamination contact force and feed rate, which can be adjusted in the ranges of 20–200 °C for the temperature, 0–2 kN for the contact force and 1–45 mm/s for the feed rate.
The experimental setup for acoustic measurements in the lamination process consisted of two Beyerdynamic MM1 measurement microphones, positioned directly above and below the focused area of material feed-through, as shown in Figure 1. The material was fed through the central area of the available working width of 500 mm. With the described electrode width, this corresponded to a feed-through area of approximately 80 mm (small deviations of a few mm cannot be ruled out due to manual handling) with a respective distance of 210 mm to the roller ends. One of the measuring microphones was placed in the center of the working width below the lower roller at the gap between roller and infeed table. The distance between the microphone and the system components was kept to a minimum, taking into account the avoidance of physical contact and the smallest possible distance to the noise source. The other measuring microphone was positioned centrally with respect to the working width, 84 mm above the material feed-through; owing to the system design, no closer distance to the noise source was possible. Additionally, a mid-side microphone in close proximity was used to record the ambient noise in the surrounding area. Due to the experimental execution in a pilot-scale battery cell production facility, background noise had to be expected. A Zoom H6 portable recorder was used to record the different channels of the measurement and ambient microphones. The sample rate of the audio recordings was set to 96 kHz with a resolution of 24 bit.
Initially, test recordings were made to evaluate the general noise emission during the lamination process with varying components (cathode, anode, separator and protective liner) and parameters (feed rate, roller temperature and pressure). These preliminary tests were used to identify acoustically measurable parameters and values in order to design the subsequent experiments. The evaluation showed that, in contrast to roller temperature and pressure, both component variation and feed rate could be identified acoustically. Since the feed rate is a control parameter, it was not considered further in the course of the study and was kept constant. The same applies to the pressure and the roller temperature. Subsequent to the preliminary tests, a test plan with quasi-randomized component composition was created and executed. A total of 148 lamination tests were conducted with the same experimental setup at a constant contact force of 1 kN, a feed rate of 16 mm/s and unheated rollers at an ambient temperature of 20 °C. The component composition was varied between two different anodes and cathodes, each with and without separator, and one stand-alone separator (as shown in Table 1). The layered components surrounded by the protective liner were manually inserted into the contact area of the rollers. The same protective liner was used for each component composition (see ID in Table 1). Subsequently, the lamination rollers were moved together, grasping the protruding protective liner and drawing in and laminating the material. Finally, the rollers were moved back to their default position and the created ESL was removed. On average, the duration of the acoustic signal of the respective compound was 4 s at the set material feed rate, which corresponds to the time between the closing and opening of the lamination rollers including the lamination process itself.
During the execution of the experiments, all disturbing background noises and their specific times of occurrence were logged along with the time frame of the actual lamination process. Seven of the 148 lamination tests could not be evaluated due to loud background noises. These were noises with a high sound pressure caused by manual work (punching out test specimens with a hammer) at a workplace in direct proximity to the laminating system in the laboratory environment. For processing, the audio recordings were exported from the recording device and transferred to a computer along with the CSV file exported from the machine control of the lamination unit. A complete overview of all measurements performed, including the associated time stamps, can be found in Appendix A (see Table A1).

2.3. Signal Processing and Classification Algorithms

The audio signals were recorded as one WAV file per microphone over the entire execution time. The first step of signal processing comprised a segmentation of the audio signals into single lamination process segments (see Figure 2). The signals were split roughly by extracting the segment from 500 ms before the machine control data indicated the start of the process to 2000 ms after its end. The segmentation was then fine-tuned using a signal amplitude trigger on microphone 1 with an activation threshold of −24 dBFS and a release threshold of −28 dBFS at the end of the segment. These parameters were set to detect the closing and opening sounds of the rollers. To remove these sounds from the recordings, an additional 800 ms from the beginning and 200 ms from the end of each signal were cut.
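A minimal sketch of how such an amplitude-trigger segmentation could look is given below. The function names, the 10 ms analysis window and the use of the soundfile package are illustrative assumptions and not the authors' implementation; only the thresholds and the head/tail cuts are taken from the description above.

```python
import numpy as np
import soundfile as sf  # assumption: the WAV recordings are read with the soundfile package


def dbfs_to_amplitude(dbfs):
    """Convert a threshold in dBFS to a linear full-scale amplitude."""
    return 10.0 ** (dbfs / 20.0)


def refine_segment(signal, fs, start_s, end_s,
                   on_db=-24.0, off_db=-28.0, head_cut=0.8, tail_cut=0.2, win_s=0.01):
    """Fine-tune a rough segment [start_s, end_s] (in seconds) with an amplitude trigger.

    on_db / off_db     -- activation / release thresholds in dBFS (-24 / -28 dBFS as above)
    head_cut, tail_cut -- seconds removed to discard the roller closing/opening sounds
    win_s              -- short-time RMS window length (assumption: 10 ms)
    """
    seg = signal[int(start_s * fs):int(end_s * fs)]
    hop = int(win_s * fs)
    frames = seg[: len(seg) // hop * hop].reshape(-1, hop)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))           # short-time level per window

    above_on = np.where(rms >= dbfs_to_amplitude(on_db))[0]
    above_off = np.where(rms >= dbfs_to_amplitude(off_db))[0]
    if above_on.size == 0:
        return None                                       # no lamination sound found
    refined = seg[above_on[0] * hop:(above_off[-1] + 1) * hop]
    # discard the roller closing (800 ms) and opening (200 ms) sounds
    return refined[int(head_cut * fs): len(refined) - int(tail_cut * fs)]


# usage: signal, fs = sf.read("mic1.wav"); segment = refine_segment(signal, fs, t_start - 0.5, t_end + 2.0)
```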
To find the optimal analysis method with suitable parameters, different algorithms and configurations were evaluated. The results are compared in Section 3. First, the signal of each lamination was transformed into the frequency domain using a short-time Fourier transform. Four different setups were used with resolutions of 43, 21, 11 and 5 ms in the time domain and 12, 23, 47 and 94 Hz in the frequency domain, respectively. For classification, only short parts of the signal of each lamination process were used for training and evaluation. Here, three different time lengths were investigated, namely 200 and 500 ms frames without overlap as well as the time resolution of the Fourier transform. Either the signals of both microphones or only the signal of microphone 1 (below the area of material feed-through) were used, the latter because of its shortest distance to the source of acoustic emission.
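At the recording sample rate of 96 kHz, the four resolution pairs above correspond approximately to FFT lengths of 8192, 4096, 2048 and 1024 samples with 50% overlap; this correspondence, as well as the use of scipy for the transform, is an assumption made for the following sketch.

```python
import numpy as np
from scipy.signal import stft

FS = 96_000                              # sample rate of the recordings
FFT_LENGTHS = [8192, 4096, 2048, 1024]   # assumed lengths for ~12/23/47/94 Hz resolution


def spectrogram_frames(signal, n_fft, frame_len_s=0.2):
    """Magnitude STFT of one lamination segment, cut into classification time frames.

    frame_len_s -- classification frame length (0.2 s, 0.5 s, or None to keep the
                   raw STFT time resolution, i.e. one feature vector per STFT column).
    """
    f, t, Z = stft(signal, fs=FS, nperseg=n_fft, noverlap=n_fft // 2)
    S = np.abs(Z)                                    # shape: (frequency bins, time columns)
    if frame_len_s is None:
        return [S[:, i] for i in range(S.shape[1])]
    hop_s = (n_fft // 2) / FS                        # time resolution of the STFT
    cols = max(1, int(round(frame_len_s / hop_s)))   # STFT columns per classification frame
    return [S[:, i:i + cols] for i in range(0, S.shape[1] - cols + 1, cols)]
```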
Two classification tasks were distinguished: a four-class task, in which the classifier should differentiate between four configurations (electrode with separator [E + S], only electrode [E], only separator [S] and neither of both [N]), and a six-class task, in which the classifier should additionally differentiate between the electrode types (anode [A, A + S] or cathode [C, C + S]). Table 2 shows both tasks with their respective class labels.
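The four-class labels are simply a coarsening of the six-class labels; a minimal mapping following Table 2 could look as follows (the dictionary name is arbitrary):

```python
# Mapping from the six-class labels to the four-class labels (see Table 2)
SIX_TO_FOUR = {
    "A + S": "E + S",  # anode with separator      -> electrode with separator
    "C + S": "E + S",  # cathode with separator    -> electrode with separator
    "A": "E",          # anode without separator   -> only electrode
    "C": "E",          # cathode without separator -> only electrode
    "S": "S",          # only separator
    "N": "N",          # neither electrode nor separator
}
```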
With these signal features, neural networks were trained using an experimentation system developed at the Fraunhofer IKTS. This experimentation system is a software framework implemented in Python for different artificial intelligence algorithms, methods and toolkits and is also used for data management and organization. For this study, we used algorithms supported by the Keras toolkit (Python deep learning API) and the TensorFlow library [22]. Neural networks have become very popular for classifying many different types of data in recent years. In many cases, they perform as well as or better than other classifiers. Therefore, this small study focused on the use of neural networks as classifiers. Four different layer configurations for two neural network types were tested: multi-layer perceptron (MLP) and convolutional neural network (CNN), each with a small (S) and a large (L) network size.
  • MLP-S with three fully connected layers using 300, 20 and 4 or 6 neurons
  • MLP-L with three fully connected layers using 1000, 50 and 4 or 6 neurons
  • CNN-S with one convolutional layer using 8 filters as well as three fully connected layers with 300, 20 and 4 or 6 neurons
  • CNN-L with two convolutional layers using 8 and 16 filters as well as three fully connected layers with 300, 20 and 4 or 6 neurons
Between the layers, batch normalization [23] and dropout [24] were used to prevent overfitting. A two-fold cross validation was applied. The training was performed on all signals involving anode 1, cathode 1 and half of the signals with a missing electrode (Dataset X). The subsequent evaluation was performed on the remaining signals (Dataset Y). Training and evaluation were then repeated with the respective other dataset. This procedure ensures that an evaluation result is obtained for every signal, while the classifier never sees a signal of the same electrode or protective liner in both training and evaluation (see Figure 3). Finally, two different recognition ranges were investigated: either the result for each time frame (200 ms, 500 ms or the Fourier transform time resolution) was counted individually, or the most frequently recognized class label within a lamination process was used as the result for the whole process.
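To illustrate the network configurations listed above, a sketch of the CNN-S and MLP-S topologies in Keras is given below. The kernel size, dropout rate, optimizer and input shape are assumptions for illustration; the actual hyperparameters of the experimentation system are not reported here.

```python
from tensorflow.keras import layers, models


def build_cnn_s(input_shape, n_classes=4, dropout_rate=0.3):
    """CNN-S sketch: one convolutional layer with 8 filters followed by fully connected
    layers of 300, 20 and n_classes neurons, with batch normalization and dropout
    between the layers. Kernel size, dropout rate and optimizer are assumptions."""
    model = models.Sequential([
        layers.Input(shape=input_shape),                 # e.g. (freq_bins, time_cols, n_mics)
        layers.Conv2D(8, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(300, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(dropout_rate),
        layers.Dense(20, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(dropout_rate),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def build_mlp_s(n_features, n_classes=4, dropout_rate=0.3):
    """MLP-S sketch: fully connected layers of 300, 20 and n_classes neurons."""
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(300, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(dropout_rate),
        layers.Dense(20, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(dropout_rate),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

For the per-recording recognition range, the frame-wise class predictions of one lamination process would then be aggregated, for example by a majority vote over the predicted labels.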
As neural networks are initialized with random numbers, results may be misinterpreted if particularly good or bad initialization values happen to be chosen. Therefore, the whole training and evaluation procedure was repeated 20 times for each parameter and algorithm set with different random seeds. The accuracy for each class and repetition is defined as:

$$\mathrm{ACC}_{c,r} = \frac{\mathrm{COR}_{c,r}}{N_c}$$
where $N_c$ is the number of samples in class $c$ and $\mathrm{COR}_{c,r}$ is the number of correctly recognized samples of class $c$ in repetition $r$. The balanced accuracy over all classes for each repetition was calculated according to the authors of [25] by:

$$\mathrm{BACC}_r = \frac{1}{C} \sum_{c=1}^{C} \mathrm{ACC}_{c,r}$$
where $C$ is the number of classes. The balanced accuracy over all repetitions is the mean of the individual values:

$$\mathrm{BACC} = \frac{1}{R} \sum_{r=1}^{R} \mathrm{BACC}_r$$
where $R$ stands for the number of repetitions. To rate the stability of the training algorithm, the 95% confidence interval was estimated from the balanced accuracies of all repetitions, assuming a normal distribution:

$$\mathrm{CONF} = \frac{2}{\sqrt{R}} \times \mathrm{std}(\mathrm{BACC}_r) = \frac{2}{\sqrt{R}} \times \sqrt{\frac{1}{R} \sum_{r=1}^{R} \left(\mathrm{BACC}_r - \mathrm{BACC}\right)^2}$$
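A compact numpy sketch of these three quantities is given below, assuming per-sample reference labels and predictions for one repetition; the $2/\sqrt{R}$ factor follows the reconstruction of the confidence formula above.

```python
import numpy as np


def balanced_accuracy(y_true, y_pred, n_classes):
    """BACC_r for one repetition: per-class accuracy ACC_c = COR_c / N_c, averaged over classes."""
    per_class = [np.mean(y_pred[y_true == c] == c) for c in range(n_classes)]
    return float(np.mean(per_class))


def confidence_interval(bacc_per_repetition):
    """95% confidence half-width over R repetitions: CONF = 2 / sqrt(R) * std(BACC_r)."""
    bacc_r = np.asarray(bacc_per_repetition, dtype=float)
    return 2.0 / np.sqrt(len(bacc_r)) * bacc_r.std()   # population std, as in the formula


# usage (hypothetical): baccs = [balanced_accuracy(y, p, 4) for y, p in repetitions]
#                       BACC, CONF = float(np.mean(baccs)), confidence_interval(baccs)
```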

3. Results

In this section, the results of the feasibility study are presented to demonstrate the potential of acoustic monitoring for the lamination process, and a comparison of the different analysis and classification methods used in this study is given. The aim was to acoustically distinguish different battery components, such as electrode and separator, during lamination.

3.1. Achieved Accuracies for Classification

In the feasibility study, models were trained using one part of the data. In the evaluation with the other part of the data, a result class is automatically assigned to each recording or each frame. The result class is compared to the reference class, which is known from the recordings. Figure 4 shows the best results achieved. The depicted confusion matrices show the relative amounts of recordings of the reference class (rows) which were classified as the result class (columns). For example, the second-to-last row in Figure 4a shows that, of all recordings without electrode and separator (N), 86% were recognized correctly and 13% as anode (A). The missing 1% is caused by rounding some low values to zero. The diagonal values, representing correct recognition, are higher for the four-class task, which results in a better recognition performance compared to the six-class task. Especially for the six-class task, a grid pattern indicates several confusions of anodes and cathodes. In addition, the balanced accuracy (average of the relative amount of correctly classified recordings per class) for the four-class task, at 0.84, is 0.18 higher than that for the six-class task, at 0.66.
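Such row-normalized confusion matrices can be computed directly from the reference and result labels; the short sketch below uses scikit-learn, which is an assumption and not necessarily the toolkit used by the authors.

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def relative_confusion(y_true, y_pred, labels):
    """Confusion matrix with rows (reference classes) normalized to relative amounts,
    as depicted in Figure 4."""
    cm = confusion_matrix(y_true, y_pred, labels=labels).astype(float)
    return cm / cm.sum(axis=1, keepdims=True)


# e.g. relative_confusion(refs, results, labels=["S", "N", "A", "A + S", "C", "C + S"])
```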
This study indicates that the automatic recognition of a missing separator, a missing electrode or both is possible by acoustic observation, with balanced accuracies of 0.95, 0.94 and 0.96, respectively. The differentiation of the individual electrode types for the detection of a wrong component (anode instead of cathode or vice versa) remains a challenge with the database underlying this feasibility study; the achievable balanced accuracy was below 0.78. An increase in performance is expected with an extended base of recordings.

3.2. Comparison of Different Analysis and Classification Methods

To determine ideal analysis and classification algorithms as well as parameters, different algorithms and parameters (two classification tasks, one or two microphone channels, four different frequency resolutions in signal analysis, three different classification time frame lengths, four different neural network topologies and two different recognition ranges) were tested. Altogether, 304 different classification exercises were performed in the feasibility study. A detailed table of all exercises can be found in Appendix A (see Table A2). For further studies, it would be helpful to know which algorithm choices and parameters influence the recognition performance and which have no influence. The results were therefore analyzed by calculating the difference in balanced accuracy for each pair of classification exercises that differ in only one algorithm type or parameter. The frequency of occurrence of the differences in balanced accuracy is plotted in Figure 5 for the variation of the different algorithms and parameters.
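Such a pairwise comparison can be reproduced from a tabular export of the results (e.g., Table A2). The sketch below uses pandas; the file name and column names are hypothetical.

```python
import pandas as pd

# Hypothetical export of Table A2: one row per classification exercise with the columns
# task, mics, freq_res, frame_len, topology, rec_range, bacc
results = pd.read_csv("table_a2.csv")

PARAMS = ["task", "mics", "freq_res", "frame_len", "topology", "rec_range"]


def pairwise_differences(df, varied, value_a, value_b):
    """BACC differences between exercise pairs that differ only in the column `varied`."""
    keys = [p for p in PARAMS if p != varied]
    a = df[df[varied] == value_a].set_index(keys)["bacc"]
    b = df[df[varied] == value_b].set_index(keys)["bacc"]
    return (a - b).dropna()          # one difference per matching parameter combination


# e.g. the distribution underlying Figure 5a: four-class minus six-class task
diff = pairwise_differences(results, "task", "4-class", "6-class")
print(diff.mean())                   # averaged difference in balanced accuracy
```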
In Figure 5a, a comparison of the six-class and four-class classification tasks is given. The four-class task performs better in all classification exercises, which was expected due to the fact that the six classes are subsets of the four classes. The averaged difference in balanced accuracy is 0.21. Figure 5b shows the comparison of using one or both microphone signals. Here, the use of both microphone signals is better in most cases (average difference 0.05). Although the upper measuring microphone above the material feed-through was farther away from the noise source, its use resulted in a detection advantage. The comparison of different frequency resolutions in signal analysis in Figure 5c shows that higher frequency resolutions give a slightly better recognition performance. The variation of the classification time frame lengths in Figure 5d shows the best results for a frame length of 200 ms. Figure 5e compares the different neural network topologies; here, no notable differences between the topologies are visible. Finally, Figure 5f shows that processing the whole recording (one whole lamination process) always gives better results than processing each time frame on its own, with an average difference of 0.11 in balanced accuracy.
In conclusion, it can be recommended to process the whole recording with both microphones, a frequency resolution of 12 Hz and a classification time frame length of 200 ms, while the choice of the neural network topology (CNN or MLP) is of minor importance for the achieved accuracy.

4. Discussion and Conclusions

The feasibility study shows that it is possible to detect the absence of components in the lamination process by acoustic measurements, since the respective acoustic signals can be distinguished by neural networks. Even with the small amount of data recorded, missing components could be identified with a balanced accuracy of up to 83.8% for the four-class classification. The best results were achieved using both microphone signals, a low frequency resolution and a small convolutional neural network. Since the microphones were positioned at different distances from the noise source and the combination of both microphone signals nevertheless proved advantageous, the question of the best positioning of the measurement instruments arises. In further studies, the influence of the distance to the noise source could be investigated in more detail.
The choice of pre-processing and feature analysis has a major influence on the achievable recognition performance. Therefore, different parameters and algorithms were investigated and compared. That the four-class task as well as the use of both microphone signals improves the performance is obvious and was expected. The clear difference between the recognition ranges shows that classifying the whole signal is mandatory for good performance. Changing the other parameters leads to performance changes less frequently; their influence is much lower. The slightly better performance with higher frequency resolution indicates that greater precision in the frequency domain improves the recognition.
Based on the underlying data, it is not yet possible to distinguish the individual electrode types. An expanded database could be a suitable solution. Furthermore, it may be possible to detect faulty components as well. Fault patterns such as incomplete or damaged components and misplaced layers in the stack, which have serious consequences for the electrochemistry of the cell [12,26], could potentially be sorted out as well. Therefore, error proofing by acoustic measurement is a promising way to assure product and process quality in lamination processes. It should be noted that, in addition to the non-recognition of components, there may also be incorrect classification of components. Both cases (non-recognition and incorrect classification) are critical with regard to cell performance. A missing electrode leads to a reduced cell capacity after cell assembly and possibly to accelerated cell aging. A missing separator leads to a cell-internal short circuit after cell assembly and therefore, with high probability, to cell failure. Incorrect classification can also lead to considerable impairment of cell performance and even cell failure. If the costs of incorrect classification and of non-recognition in the production of electrode–separator laminates, e.g., due to rejects or rework, are known, the classification method can be adjusted accordingly so that certain classification errors occur less frequently.
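One common way to implement such a cost-aware adjustment is to replace the usual maximum-probability decision with a minimum-expected-cost decision over the classifier's output probabilities. The sketch below illustrates this general technique; the cost values are purely illustrative and are not derived from the study.

```python
import numpy as np


def min_expected_cost_decision(class_probs, cost_matrix):
    """Choose the class with minimal expected cost instead of maximal probability.

    class_probs -- classifier output probabilities, shape (n_samples, n_classes)
    cost_matrix -- cost_matrix[i, j]: cost of deciding class j when the true class is i
    """
    expected_cost = class_probs @ cost_matrix      # shape (n_samples, n_classes)
    return np.argmin(expected_cost, axis=1)


# Illustrative 4-class example with class order [E + S, E, S, N]: overlooking a missing
# separator (true class E, decided E + S) is penalized most heavily here.
cost = np.array([[0.0, 1.0, 1.0, 1.0],
                 [10.0, 0.0, 1.0, 1.0],
                 [1.0, 1.0, 0.0, 1.0],
                 [1.0, 1.0, 1.0, 0.0]])
```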
Besides these quality-related benefits, material scrap occurring in production can be reduced, along with the associated costs and environmental impacts. During the study, acoustic measurement proved to be a robust measurement approach in a production environment. Only surrounding acoustic emissions with high sound pressure from manual work hindered the evaluation in a few cases. In an automated production environment, such influences are not to be expected. Although the probability of a missing-component failure is generally low, it would have a major impact on the produced cell. Therefore, acoustic measurement, as a low-cost measurement method, is a well-suited quality gate for the detection of missing components.
In the future, acoustic measurements may complement or, in some cases, even replace optical measurement methods, or function as an additional low-cost inline process monitoring concept in battery production. Similar to image processing in optical measurement methods, acoustic measurement generates large datasets that must be evaluated in real time and require high computing capacity. However, this has already been demonstrated for optical methods and should therefore not be an obstacle. A next step would be to evaluate the presented method in an automated production environment and test it for real-time capability with direct feedback to the process control system. Furthermore, the acoustic measurement approach can be transferred to and tested in further process steps of battery cell production, such as mixing electrode slurries and material comminution. Extending the method from airborne acoustic emission to structure-borne sound could also be of great interest, e.g., when grinding and milling containers are involved.

Author Contributions

Conceptualization, R.L., N.D., F.D., S.B. and C.T.; methodology, F.D. and C.T.; validation, R.L., N.D., F.D., S.B. and C.T.; investigation, R.L., N.D., F.D., S.B. and C.T.; writing—original draft preparation, R.L., N.D., F.D., S.B., D.L. and C.T.; writing—review and editing, R.L., N.D., F.D., S.B., C.H. and K.D.; visualization, N.D. and F.D.; supervision, C.H. and K.D.; project administration, R.L., S.B., C.H. and K.D.; and funding acquisition, C.H. and K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by the German Federal Ministry for Economic Affairs and Energy within the research project DaLion 4.0 (Reference No. 03ETE017A) and the Fraunhofer Gesellschaft within the research project BattLTech (11—76251-99-2/17 (ZN3402)).

Institutional Review Board Statement

Not applicable, as studies on humans and animals are not involved.

Informed Consent Statement

Not applicable, as studies on humans are not involved.

Data Availability Statement

Part of the data presented in this study is available in Appendix A of this manuscript. The remaining data are not publicly available because the data required to reproduce these findings form part of an ongoing study.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Table A1. Complete list of all acoustic measurements.
Measurement | Electrode | Separator | Protective Liner | Time From | Time Until | Comment
1WithoutWithout113:57:0013:57:07
2WithoutWithout113:57:2013:57:28
3WithoutWithout113:57:4513:57:53
4WithoutSeparator213:58:3013:58:34
5WithoutSeparator213:59:0013:59:04
6WithoutSeparator213:59:1513:59:19
7Anode 1Without314:00:0014:00:04
8Anode 1Without314:00:1514:00:19
9Anode 1Without314:00:4014:00:44
10Anode 2Without414:01:0514:01:19
11Anode 2Without414:01:3014:01:34
12Anode 2Without414:01:4514:01:49
13Cathode 1Without714:02:2014:02:24
14Cathode 1Without714:02:3514:02:39
15Cathode 1Without714:02:5514:02:59
16Cathode 2Without814:05:3514:05:39
17Cathode 2Without814:05:5014:05:54Background noise
18Cathode 2Without814:06:1514:06:19
19Anode 2Without414:06:5514:06:59
20Anode 2Without414:07:1014:07:14
21Anode 2Without414:07:2514:07:29
22WithoutSeparator214:11:1514:11:19Background noise
23WithoutSeparator214:12:0514:12:09
24WithoutSeparator214:12:3014:12:34
25WithoutWithout114:14:1014:14:17
26WithoutWithout114:14:2514:14:33
27WithoutWithout114:14:4014:14:48
28Cathode 2Without814:15:1014:15:14Background noise
29Cathode 2Without814:15:2514:15:29
30Cathode 2Without814:16:0014:16:04Background noise
31Anode 1Without314:17:4014:17:44
32Anode 1Without314:17:5014:17:54
33Anode 1Without314:18:0014:18:04
34WithoutWithout114:19:5014:19:56
35WithoutWithout114:20:0514:20:12Background noise
36WithoutWithout114:20:2014:20:27
37Cathode 1Without714:20:4514:20:48
38Cathode 1Without714:20:5514:20:58
39Cathode 1Without714:21:0514:21:08
40Anode 1With314:32:0014:32:04
41Anode 1With314:32:1514:32:20
42Anode 1With314:32:2514:32:30
43Cathode 2With814:32:5514:32:59
44Cathode 2With814:33:0514:33:09
45Cathode 2With814:33:1514:33:19
46WithoutSeparator214:33:5614:34:00
47WithoutSeparator214:34:1014:34:14
48WithoutSeparator214:34:2514:34:30
49Anode 2With414:35:0814:35:10Aborted
50Anode 2With414:35:3514:35:40
51Anode 2With414:35:5014:35:54
52Anode 2With414:36:0514:36:09
53Cathode 2With814:36:5514:36:59
54Cathode 2With814:37:0514:37:09
55Cathode 2With814:37:1514:37:19Background noise
56Cathode 1With714:38:1514:38:19
57Cathode 1With714:38:2514:38:29
58Cathode 1With714:38:3514:38:39
59Anode 1With314:39:1514:39:19
60Anode 1With314:39:2514:39:29
61Anode 1With314:39:3514:39:39
62WithoutWithout114:40:1514:40:22
63WithoutWithout114:40:3014:40:37
64WithoutWithout114:40:4514:40:52
65Anode 2With414:41:1514:41:19
66Anode 2With414:41:2514:41:29
67Anode 2With414:41:3514:41:39
68Cathode 1With714:42:1514:42:19
69Cathode 1With714:42:2514:42:29
70Cathode 1With714:42:3514:42:39
71WithoutWithout114:43:1514:43:22
72WithoutWithout114:43:3014:43:37
73WithoutWithout114:43:4514:43:52
74WithoutSeparator214:44:1514:44:19
75WithoutSeparator214:44:2514:44:29
76WithoutSeparator214:44:3514:44:39
77WithoutWithout114:46:1514:46:22
78WithoutWithout114:46:3014:46:37Background noise
79WithoutWithout114:46:4514:46:53
80WithoutSeparator214:47:1514:47:19
81WithoutSeparator214:47:2514:47:29
82WithoutSeparator214:47:3514:47:39
83Anode 1With314:48:1514:48:19
84Anode 1With314:48:2514:48:29
85Anode 1With314:48:3514:48:39
86Anode 2With414:49:1514:49:19
87Anode 2With414:49:2514:49:29
88Anode 2With414:49:3514:49:39Background noise
89Cathode 1With714:50:1514:50:19Background noise
90Cathode 1With714:50:2714:50:31
91Cathode1With714:50:3514:50:39
92Cathode 2With814:51:1514:51:19
93Cathode 2With814:51:2714:51:31
94Cathode 2With814:51:3514:51:39
95Anode 2With414:52:1514:52:19
96Anode 2With414:52:2514:52:29
97Anode 2With414:52:3514:52:39
98Anode 1With314:53:1514:53:19
99Anode 1With314:53:2514:53:29
100Anode 1With314:53:3514:52:39
101WithoutSeparator214:54:1514:54:19
102WithoutSeparator214:54:2514:54:28
103WithoutSeparator214:54:3514:54:39
104Cathode 2With814:55:1514:55:19
105Cathode 2With814:55:2514:55:28
106Cathode 2With814:55:3514:55:39
107WithoutWithout114:56:1514:56:22
108WithoutWithout114:56:3014:56:37
109WithoutWithout114:56:4514:56:53Background noise
110Cathode 2With814:57:1514:57:19Background noise
111Cathode 2With814:57:2514:57:29
112Cathode 2With814:57:3514:57:39
113WithoutWithout114:58:1514:58:22Background noise
114WithoutWithout114:58:3014:58:37
115WithoutWithout114:58:4514:58:53
116Cathode 1With714:59:1514:59:19Background noise
117Cathode 1With714:59:2514:59:29Background noise
118Cathode 1With714:59:3514:59:39
119Anode 1Without315:02:1515:02:19
120Anode 1Without315:02:2515:02:29Background noise
121Anode 1Without315:02:3515:02:39
122Cathode 1Without715:03:1515:03:19
123Cathode 1Without715:03:2515:03:29
124CathodeWithout715:03:3515:03:39
125Anode 2Without415:04:1515:04:19
126Anode 2Without415:04:2515:04:29
127Anode 2Without415:04:3515:04:39
128WithoutSeparator215:11:1515:11:19
129WithoutSeparator215:11:2515:11:29
130WithoutSeparator215:11:3515:11:39
131Cathode 2Without815:12:1515:12:19
132Cathode 2Without815:12:2515:12:29
133Cathode 2Without815:12:3515:12:39
134Cathode 1Without715:13:1515:13:19
135Cathode 1Without715:13:2515:13:29
136Cathode 1Without715:13:3515:13:39
137Anode 1Without315:14:1515:14:19Background noise
138Anode 1Without315:14:2515:14:29Background noise
139Anode 1Without315:14:3515:14:39
140Cathode 2Without815:15:1515:15:19
141Cathode 2Without815:15:2515:15:29
142Cathode 2Without815:15:3515:15:39
143WithoutSeparator215:16:1515:16:19
144WithoutSeparator215:16:2515:16:29Background noise
145WithoutSeparator215:16:3515:16:39Background noise
146Anode 2Without415:17:1515:17:19Background noise
147Anode 2Without415:17:2515:17:29Background noise
148Anode 2Without415:17:3515:17:39Background noise
Table A2. Detailed results of all performed exercises. Abbreviations: classification task (Class. Task), microphones (Mics.), frequency resolution (Freq. Res.), classification time frame length (Frame Len.), neural network topology (Top.), recognition range (Rec. Range), balanced accuracy (BACC) and confidence interval (CONF).
Class. Task | Mics. | Freq. Res. | Frame Len. | Top. | Rec. Range | BACC | CONF
4-classMic 194 Hz500 msCNN-Sper frame0.59±0.014
4-classMic 194 Hz500 msCNN-Sper rec.0.68±0.023
4-classMic 194 Hz500 msCNN-Lper frame0.57±0.021
4-classMic 194 Hz500 msCNN-Lper rec.0.63±0.031
4-classMic 194 Hz500 msMLP-Lper frame0.62±0.009
4-classMic 194 Hz500 msMLP-Lper rec.0.69±0.015
4-classMic 194 Hz500 msMLP-Sper frame0.60±0.010
4-classMic 194 Hz500 msMLP-Sper rec.0.66±0.018
4-classMic 194 HzFFTCNN-Sper frame0.46±0.008
4-classMic 194 HzFFTCNN-Sper rec.0.67±0.021
4-classMic 194 HzFFTCNN-Lper frame0.45±0.009
4-classMic 194 HzFFTCNN-Lper rec.0.64±0.026
4-classMic 194 HzFFTMLP-Lper frame0.47±0.004
4-classMic 194 HzFFTMLP-Lper rec.0.73±0.018
4-classMic 194 HzFFTMLP-Sper frame0.46±0.005
4-classMic 194 HzFFTMLP-Sper rec.0.73±0.020
4-classBoth94 Hz200 msCNN-Sper frame0.67±0.009
4-classBoth94 Hz200 msCNN-Sper rec.0.81±0.020
4-classBoth94 Hz200 msCNN-Lper frame0.69±0.014
4-classBoth94 Hz200 msCNN-Lper rec.0.82±0.022
4-classBoth94 Hz200 msMLP-Lper frame0.62±0.016
4-classBoth94 Hz200 msMLP-Lper rec.0.71±0.030
4-classBoth94 Hz200 msMLP-Sper frame0.66±0.011
4-classBoth94 Hz200 msMLP-Sper rec.0.78±0.023
4-classBoth94 Hz500 msCNN-Sper frame0.63±0.011
4-classBoth94 Hz500 msCNN-Sper rec.0.72±0.019
4-classBoth94 Hz500 msCNN-Lper frame0.65±0.015
4-classBoth94 Hz500 msCNN-Lper rec.0.74±0.025
4-classBoth94 Hz500 msMLP-Lper frame0.51±0.023
4-classBoth94 Hz500 msMLP-Lper rec.0.54±0.033
4-classBoth94 Hz500 msMLP-Sper frame0.65±0.010
4-classBoth94 Hz500 msMLP-Sper rec.0.73±0.018
4-classBoth94 HzFFTCNN-Sper frame0.50±0.008
4-classBoth94 HzFFTCNN-Sper rec.0.81±0.023
4-classBoth94 HzFFTCNN-Lper frame0.49±0.008
4-classBoth94 HzFFTCNN-Lper rec.0.78±0.018
4-classBoth94 HzFFTMLP-Lper frame0.52±0.006
4-classBoth94 HzFFTMLP-Lper rec.0.81±0.027
4-classBoth94 HzFFTMLP-Sper frame0.51±0.006
4-classBoth94 HzFFTMLP-Sper rec.0.81±0.024
4-classBoth47 Hz200 msCNN-Sper frame0.70±0.009
4-classBoth47 Hz200 msCNN-Sper rec.0.84±0.019
4-classBoth47 Hz200 msCNN-Lper frame0.68±0.016
4-classBoth47 Hz200 msCNN-Lper rec.0.81±0.023
4-classBoth47 Hz200 msMLP-Lper frame0.66±0.015
4-classBoth47 Hz200 msMLP-Lper rec.0.76±0.026
4-classBoth47 Hz200 msMLP-Sper frame0.68±0.012
4-classBoth47 Hz200 msMLP-Sper rec.0.77±0.024
4-classBoth47 Hz500 msCNN-Sper frame0.67±0.012
4-classBoth47 Hz500 msCNN-Sper rec.0.76±0.014
4-classBoth47 Hz500 msCNN-Lper frame0.65±0.013
4-classBoth47 Hz500 msCNN-Lper rec.0.74±0.021
4-classBoth47 Hz500 msMLP-Lper frame0.53±0.020
4-classBoth47 Hz500 msMLP-Lper rec.0.56±0.030
4-classBoth47 Hz500 msMLP-Sper frame0.66±0.009
4-classBoth47 Hz500 msMLP-Sper rec.0.75±0.015
4-classBoth47 HzFFTCNN-Sper frame0.58±0.009
4-classBoth47 HzFFTCNN-Sper rec.0.82±0.028
4-classBoth47 HzFFTCNN-Lper frame0.56±0.010
4-classBoth47 HzFFTCNN-Lper rec.0.82±0.026
4-classBoth47 HzFFTMLP-Lper frame0.56±0.009
4-classBoth47 HzFFTMLP-Lper rec.0.81±0.032
4-classBoth47 HzFFTMLP-Sper frame0.56±0.009
4-classBoth47 HzFFTMLP-Sper rec.0.82±0.034
4-classBoth23 Hz200 msCNN-Sper frame0.70±0.015
4-classBoth23 Hz200 msCNN-Sper rec.0.83±0.029
4-classBoth23 Hz200 msCNN-Lper frame0.64±0.018
4-classBoth23 Hz200 msCNN-Lper rec.0.77±0.029
4-classBoth23 Hz200 msMLP-Lper frame0.66±0.018
4-classBoth23 Hz200 msMLP-Lper rec.0.75±0.034
4-classBoth23 Hz200 msMLP-Sper frame0.68±0.018
4-classBoth23 Hz200 msMLP-Sper rec.0.78±0.029
4-classBoth23 Hz500 msCNN-Sper frame0.70±0.014
4-classBoth23 Hz500 msCNN-Sper rec.0.80±0.021
4-classBoth23 Hz500 msCNN-Lper frame0.66±0.024
4-classBoth23 Hz500 msCNN-Lper rec.0.74±0.036
4-classBoth23 Hz500 msMLP-Lper frame0.58±0.026
4-classBoth23 Hz500 msMLP-Lper rec.0.63±0.040
4-classBoth23 Hz500 msMLP-Sper frame0.69±0.012
4-classBoth23 Hz500 msMLP-Sper rec.0.79±0.017
4-classBoth23 HzFFTCNN-Sper frame0.62±0.014
4-classBoth23 HzFFTCNN-Sper rec.0.80±0.035
4-classBoth23 HzFFTCNN-Lper frame0.59±0.012
4-classBoth23 HzFFTCNN-Lper rec.0.79±0.035
4-classBoth23 HzFFTMLP-Lper frame0.61±0.009
4-classBoth23 HzFFTMLP-Lper rec.0.81±0.027
4-classBoth23 HzFFTMLP-Sper frame0.60±0.010
4-classBoth23 HzFFTMLP-Sper rec.0.83±0.029
4-classBoth12 Hz500 msCNN-Sper frame0.71±0.013
4-classBoth12 Hz500 msCNN-Sper rec.0.80±0.022
4-classBoth12 Hz500 msCNN-Lper frame0.67±0.019
4-classBoth12 Hz500 msCNN-Lper rec.0.77±0.027
4-classBoth12 Hz500 msMLP-Lper frame0.57±0.020
4-classBoth12 Hz500 msMLP-Lper rec.0.62±0.029
4-classBoth12 Hz500 msMLP-Sper frame0.71±0.015
4-classBoth12 Hz500 msMLP-Sper rec.0.80±0.023
4-classBoth12 HzFFTCNN-Sper frame0.64±0.019
4-classBoth12 HzFFTCNN-Sper rec.0.76±0.040
4-classBoth12 HzFFTCNN-Lper frame0.63±0.023
4-classBoth12 HzFFTCNN-Lper rec.0.79±0.039
4-classBoth12 HzFFTMLP-Lper frame0.63±0.017
4-classBoth12 HzFFTMLP-Lper rec.0.76±0.038
4-classBoth12 HzFFTMLP-Sper frame0.64±0.015
4-classBoth12 HzFFTMLP-Sper rec.0.78±0.032
4-classMic 147 Hz500 msCNN-Sper frame0.63±0.013
4-classMic 147 Hz500 msCNN-Sper rec.0.70±0.020
4-classMic 147 Hz500 msCNN-Lper frame0.56±0.019
4-classMic 147 Hz500 msCNN-Lper rec.0.61±0.028
4-classMic 147 Hz500 msMLP-Lper frame0.63±0.007
4-classMic 147 Hz500 msMLP-Lper rec.0.71±0.009
4-classMic 147 Hz500 msMLP-Sper frame0.63±0.009
4-classMic 147 Hz500 msMLP-Sper rec.0.68±0.016
4-classMic 147 HzFFTCNN-Sper frame0.53±0.012
4-classMic 147 HzFFTCNN-Sper rec.0.73±0.031
4-classMic 147 HzFFTCNN-Lper frame0.53±0.012
4-classMic 147 HzFFTCNN-Lper rec.0.70±0.034
4-classMic 147 HzFFTMLP-Lper frame0.52±0.007
4-classMic 147 HzFFTMLP-Lper rec.0.77±0.026
4-classMic 147 HzFFTMLP-Sper frame0.52±0.006
4-classMic 147 HzFFTMLP-Sper rec.0.77±0.021
4-classMic 123 Hz500 msCNN-Sper frame0.67±0.017
4-classMic 123 Hz500 msCNN-Sper rec.0.75±0.019
4-classMic 123 Hz500 msCNN-Lper frame0.55±0.027
4-classMic 123 Hz500 msCNN-Lper rec.0.59±0.034
4-classMic 123 Hz500 msMLP-Lper frame0.67±0.015
4-classMic 123 Hz500 msMLP-Lper rec.0.76±0.022
4-classMic 123 Hz500 msMLP-Sper frame0.66±0.015
4-classMic 123 Hz500 msMLP-Sper rec.0.74±0.020
4-classMic 123 HzFFTCNN-Sper frame0.56±0.010
4-classMic 123 HzFFTCNN-Sper rec.0.70±0.028
4-classMic 123 HzFFTCNN-Lper frame0.56±0.009
4-classMic 123 HzFFTCNN-Lper rec.0.71±0.022
4-classMic 123 HzFFTMLP-Lper frame0.57±0.008
4-classMic 123 HzFFTMLP-Lper rec.0.79±0.023
4-classMic 123 HzFFTMLP-Sper frame0.57±0.008
4-classMic 123 HzFFTMLP-Sper rec.0.77±0.026
4-classMic 112 Hz500 msCNN-Sper frame0.66±0.017
4-classMic 112 Hz500 msCNN-Sper rec.0.75±0.025
4-classMic 112 Hz500 msCNN-Lper frame0.55±0.023
4-classMic 112 Hz500 msCNN-Lper rec.0.59±0.030
4-classMic 112 Hz500 msMLP-Lper frame0.68±0.014
4-classMic 112 Hz500 msMLP-Lper rec.0.76±0.020
4-classMic 112 Hz500 msMLP-Sper frame0.68±0.018
4-classMic 112 Hz500 msMLP-Sper rec.0.75±0.026
4-classMic 112 HzFFTCNN-Sper frame0.57±0.011
4-classMic 112 HzFFTCNN-Sper rec.0.70±0.027
4-classMic 112 HzFFTCNN-Lper frame0.57±0.020
4-classMic 112 HzFFTCNN-Lper rec.0.69±0.036
4-classMic 112 HzFFTMLP-Lper frame0.58±0.011
4-classMic 112 HzFFTMLP-Lper rec.0.73±0.033
4-classMic 112 HzFFTMLP-Sper frame0.58±0.010
4-classMic 112 HzFFTMLP-Sper rec.0.74±0.028
6-classMic 194 Hz500 msCNN-Sper frame0.38±0.011
6-classMic 194 Hz500 msCNN-Sper rec.0.41±0.017
6-classMic 194 Hz500 msCNN-Lper frame0.36±0.016
6-classMic 194 Hz500 msCNN-Lper rec.0.38±0.023
6-classMic 194 Hz500 msMLP-Lper frame0.41±0.009
6-classMic 194 Hz500 msMLP-Lper rec.0.45±0.015
6-classMic 194 Hz500 msMLP-Sper frame0.40±0.010
6-classMic 194 Hz500 msMLP-Sper rec.0.42±0.017
6-classMic 194 HzFFTCNN-Sper frame0.31±0.006
6-classMic 194 HzFFTCNN-Sper rec.0.35±0.011
6-classMic 194 HzFFTCNN-Lper frame0.30±0.005
6-classMic 194 HzFFTCNN-Lper rec.0.34±0.009
6-classMic 194 HzFFTMLP-Lper frame0.32±0.005
6-classMic 194 HzFFTMLP-Lper rec.0.42±0.020
6-classMic 194 HzFFTMLP-Sper frame0.32±0.005
6-classMic 194 HzFFTMLP-Sper rec.0.41±0.014
6-classBoth94 Hz200 msCNN-Sper frame0.47±0.008
6-classBoth94 Hz200 msCNN-Sper rec.0.56±0.019
6-classBoth94 Hz200 msCNN-Lper frame0.48±0.014
6-classBoth94 Hz200 msCNN-Lper rec.0.57±0.026
6-classBoth94 Hz200 msMLP-Lper frame0.43±0.010
6-classBoth94 Hz200 msMLP-Lper rec.0.49±0.022
6-classBoth94 Hz200 msMLP-Sper frame0.45±0.012
6-classBoth94 Hz200 msMLP-Sper rec.0.52±0.026
6-classBoth94 Hz500 msCNN-Sper frame0.42±0.014
6-classBoth94 Hz500 msCNN-Sper rec.0.46±0.021
6-classBoth94 Hz500 msCNN-Lper frame0.43±0.011
6-classBoth94 Hz500 msCNN-Lper rec.0.47±0.018
6-classBoth94 Hz500 msMLP-Lper frame0.41±0.021
6-classBoth94 Hz500 msMLP-Lper rec.0.45±0.029
6-classBoth94 Hz500 msMLP-Sper frame0.44±0.012
6-classBoth94 Hz500 msMLP-Sper rec.0.47±0.018
6-classBoth94 HzFFTCNN-Sper frame0.34±0.007
6-classBoth94 HzFFTCNN-Sper rec.0.39±0.019
6-classBoth94 HzFFTCNN-Lper frame0.32±0.005
6-classBoth94 HzFFTCNN-Lper rec.0.38±0.014
6-classBoth94 HzFFTMLP-Lper frame0.36±0.005
6-classBoth94 HzFFTMLP-Lper rec.0.58±0.035
6-classBoth94 HzFFTMLP-Sper frame0.36±0.006
6-classBoth94 HzFFTMLP-Sper rec.0.53±0.031
6-classBoth47 Hz200 msCNN-Sper frame0.49±0.009
6-classBoth47 Hz200 msCNN-Sper rec.0.60±0.025
6-classBoth47 Hz200 msCNN-Lper frame0.50±0.010
6-classBoth47 Hz200 msCNN-Lper rec.0.61±0.021
6-classBoth47 Hz200 msMLP-Lper frame0.46±0.012
6-classBoth47 Hz200 msMLP-Lper rec.0.54±0.023
6-classBoth47 Hz200 msMLP-Sper frame0.48±0.013
6-classBoth47 Hz200 msMLP-Sper rec.0.56±0.028
6-classBoth47 Hz500 msCNN-Sper frame0.43±0.010
6-classBoth47 Hz500 msCNN-Sper rec.0.46±0.021
6-classBoth47 Hz500 msCNN-Lper frame0.45±0.013
6-classBoth47 Hz500 msCNN-Lper rec.0.52±0.017
6-classBoth47 Hz500 msMLP-Lper frame0.43±0.022
6-classBoth47 Hz500 msMLP-Lper rec.0.47±0.031
6-classBoth47 Hz500 msMLP-Sper frame0.44±0.011
6-classBoth47 Hz500 msMLP-Sper rec.0.47±0.016
6-classBoth47 HzFFTCNN-Sper frame0.41±0.009
6-classBoth47 HzFFTCNN-Sper rec.0.54±0.034
6-classBoth47 HzFFTCNN-Lper frame0.38±0.010
6-classBoth47 HzFFTCNN-Lper rec.0.46±0.020
6-classBoth47 HzFFTMLP-Lper frame0.40±0.007
6-classBoth47 HzFFTMLP-Lper rec.0.61±0.030
6-classBoth47 HzFFTMLP-Sper frame0.39±0.005
6-classBoth47 HzFFTMLP-Sper rec.0.60±0.028
6-classBoth23 Hz200 msCNN-Sper frame0.50±0.013
6-classBoth23 Hz200 msCNN-Sper rec.0.62±0.031
6-classBoth23 Hz200 msCNN-Lper frame0.44±0.016
6-classBoth23 Hz200 msCNN-Lper rec.0.51±0.033
6-classBoth23 Hz200 msMLP-Lper frame0.47±0.015
6-classBoth23 Hz200 msMLP-Lper rec.0.55±0.029
6-classBoth23 Hz200 msMLP-Sper frame0.49±0.013
6-classBoth23 Hz200 msMLP-Sper rec.0.58±0.026
6-classBoth23 Hz500 msCNN-Sper frame0.47±0.013
6-classBoth23 Hz500 msCNN-Sper rec.0.50±0.021
6-classBoth23 Hz500 msCNN-Lper frame0.45±0.016
6-classBoth23 Hz500 msCNN-Lper rec.0.51±0.021
6-classBoth23 Hz500 msMLP-Lper frame0.48±0.022
6-classBoth23 Hz500 msMLP-Lper rec.0.53±0.030
6-classBoth23 Hz500 msMLP-Sper frame0.49±0.016
6-classBoth23 Hz500 msMLP-Sper rec.0.55±0.027
6-classBoth23 HzFFTCNN-Sper frame0.46±0.010
6-classBoth23 HzFFTCNN-Sper rec.0.65±0.034
6-classBoth23 HzFFTCNN-Lper frame0.42±0.010
6-classBoth23 HzFFTCNN-Lper rec.0.53±0.022
6-classBoth23 HzFFTMLP-Lper frame0.42±0.007
6-classBoth23 HzFFTMLP-Lper rec.0.61±0.025
6-classBoth23 HzFFTMLP-Sper frame0.42±0.010
6-classBoth23 HzFFTMLP-Sper rec.0.60±0.033
6-classBoth12 Hz500 msCNN-Sper frame0.49±0.014
6-classBoth12 Hz500 msCNN-Sper rec.0.54±0.025
6-classBoth12 Hz500 msCNN-Lper frame0.49±0.014
6-classBoth12 Hz500 msCNN-Lper rec.0.55±0.020
6-classBoth12 Hz500 msMLP-Lper frame0.51±0.021
6-classBoth12 Hz500 msMLP-Lper rec.0.58±0.031
6-classBoth12 Hz500 msMLP-Sper frame0.51±0.014
6-classBoth12 Hz500 msMLP-Sper rec.0.57±0.019
6-classBoth12 HzFFTCNN-Sper frame0.48±0.014
6-classBoth12 HzFFTCNN-Sper rec.0.66±0.040
6-classBoth12 HzFFTCNN-Lper frame0.48±0.014
6-classBoth12 HzFFTCNN-Lper rec.0.64±0.028
6-classBoth12 HzFFTMLP-Lper frame0.45±0.012
6-classBoth12 HzFFTMLP-Lper rec.0.57±0.033
6-classBoth12 HzFFTMLP-Sper frame0.45±0.011
6-classBoth12 HzFFTMLP-Sper rec.0.57±0.030
6-classMic 147 Hz500 msCNN-Sper frame0.40±0.012
6-classMic 147 Hz500 msCNN-Sper rec.0.42±0.020
6-classMic 147 Hz500 msCNN-Lper frame0.37±0.014
6-classMic 147 Hz500 msCNN-Lper rec.0.39±0.019
6-classMic 147 Hz500 msMLP-Lper frame0.41±0.008
6-classMic 147 Hz500 msMLP-Lper rec.0.45±0.012
6-classMic 147 Hz500 msMLP-Sper frame0.40±0.008
6-classMic 147 Hz500 msMLP-Sper rec.0.44±0.013
6-classMic 147 HzFFTCNN-Sper frame0.37±0.009
6-classMic 147 HzFFTCNN-Sper rec.0.48±0.024
6-classMic 147 HzFFTCNN-Lper frame0.36±0.014
6-classMic 147 HzFFTCNN-Lper rec.0.44±0.029
6-classMic 147 HzFFTMLP-Lper frame0.36±0.005
6-classMic 147 HzFFTMLP-Lper rec.0.55±0.029
6-classMic 147 HzFFTMLP-Sper frame0.36±0.005
6-classMic 147 HzFFTMLP-Sper rec.0.56±0.025
6-classMic 123 Hz500 msCNN-Sper frame0.43±0.011
6-classMic 123 Hz500 msCNN-Sper rec.0.45±0.015
6-classMic 123 Hz500 msCNN-Lper frame0.36±0.018
6-classMic 123 Hz500 msCNN-Lper rec.0.40±0.024
6-classMic 123 Hz500 msMLP-Lper frame0.45±0.009
6-classMic 123 Hz500 msMLP-Lper rec.0.49±0.015
6-classMic 123 Hz500 msMLP-Sper frame0.44±0.012
6-classMic 123 Hz500 msMLP-Sper rec.0.48±0.017
6-classMic 123 HzFFTCNN-Sper frame0.38±0.013
6-classMic 123 HzFFTCNN-Sper rec.0.49±0.028
6-classMic 123 HzFFTCNN-Lper frame0.40±0.008
6-classMic 123 HzFFTCNN-Lper rec.0.49±0.019
6-classMic 123 HzFFTMLP-Lper frame0.39±0.006
6-classMic 123 HzFFTMLP-Lper rec.0.57±0.017
6-classMic 123 HzFFTMLP-Sper frame0.39±0.006
6-classMic 123 HzFFTMLP-Sper rec.0.56±0.018
6-classMic 112 Hz500 msCNN-Sper frame0.45±0.014
6-classMic 112 Hz500 msCNN-Sper rec.0.48±0.022
6-classMic 112 Hz500 msCNN-Lper frame0.36±0.016
6-classMic 112 Hz500 msCNN-Lper rec.0.39±0.020
6-classMic 112 Hz500 msMLP-Lper frame0.48±0.010
6-classMic 112 Hz500 msMLP-Lper rec.0.53±0.017
6-classMic 112 Hz500 msMLP-Sper frame0.45±0.011
6-classMic 112 Hz500 msMLP-Sper rec.0.48±0.017
6-classMic 112 HzFFTCNN-Sper frame0.40±0.007
6-classMic 112 HzFFTCNN-Sper rec.0.51±0.020
6-classMic 112 HzFFTCNN-Lper frame0.40±0.007
6-classMic 112 HzFFTCNN-Lper rec.0.52±0.020
6-classMic 112 HzFFTMLP-Lper frame0.41±0.005
6-classMic 112 HzFFTMLP-Lper rec.0.54±0.019
6-classMic 112 HzFFTMLP-Sper frame0.41±0.005
6-classMic 112 HzFFTMLP-Sper rec.0.57±0.017

References

  1. Cano, Z.P.; Banham, D.; Ye, S.; Hintennach, A.; Lu, J.; Fowler, M.; Chen, Z. Batteries and fuel cells for emerging electric vehicle markets. Nat. Energy 2018, 3, 279.
  2. Zhu, G.; Zhao, C.; Huang, J.; He, C.; Zhang, J.; Chen, S.; Xu, L.; Yuan, H.; Zhang, Q. Fast Charging Lithium Batteries: Recent Progress and Future Prospects. Small 2019, 15, e1805389.
  3. Tomaszewska, A.; Chu, Z.; Feng, X.; O’Kane, S.; Liu, X.; Chen, J.; Ji, C.; Endler, E.; Li, R.; Liu, L.; et al. Lithium-ion battery fast charging: A review. eTransportation 2019, 1, 100011.
  4. Liu, Y.; Zhu, Y.; Cui, Y. Challenges and opportunities towards fast-charging battery materials. Nat. Energy 2019, 4, 540–550.
  5. Wei, Z.; Zhao, J.; He, H.; Ding, G.; Cui, H.; Liu, L. Future smart battery and management: Advanced sensing from external to embedded multi-dimensional measurement. J. Power Sources 2021, 489, 229462.
  6. Wei, Z.; Zhao, J.; Ji, D.; Tseng, K.J. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model. Appl. Energy 2017, 204, 1264–1274.
  7. Adams, R.A.; Mistry, A.N.; Mukherjee, P.P.; Pol, V.G. Materials by Design: Tailored Morphology and Structures of Carbon Anodes for Enhanced Battery Safety. ACS Appl. Mater. Interfaces 2019, 11, 13334–13342.
  8. Wang, L.; Wang, Z.; Sun, Y.; Liang, X.; Xiang, H. Sb2O3 modified PVDF-CTFE electrospun fibrous membrane as a safe lithium-ion battery separator. J. Membr. Sci. 2019, 572, 512–519.
  9. Kwade, A.; Haselrieder, W.; Leithoff, R.; Modlinger, A.; Dietrich, F.; Droeder, K. Current status and challenges for automotive battery production technologies. Nat. Energy 2018, 3, 290–300.
  10. Schröder, R.; Glodde, A.; Aydemir, M.; Seliger, G. Increasing Productivity in Grasping Electrodes in Lithium-ion Battery Manufacturing. Procedia CIRP 2016, 57, 775–780.
  11. Li, J.; Du, Z.; Ruther, R.E.; An, S.J.; David, L.A.; Hays, K.; Wood, M.; Phillip, N.D.; Sheng, Y.; Mao, C.; et al. Toward Low-Cost, High-Energy Density, and High-Power Density Lithium-Ion Batteries. JOM 2017, 69, 1484–1496.
  12. Leithoff, R.; Fröhlich, A.; Dröder, K. Investigation of the Influence of Deposition Accuracy of Electrodes on the Electrochemical Properties of Lithium-Ion Batteries. Energy Technol. 2020, 8, 1900129.
  13. Weydanz, W.; Reisenweber, H.; Gottschalk, A.; Schulz, M.; Knoche, T.; Reinhart, G.; Masuch, M.; Franke, J.; Gilles, R. Visualization of electrolyte filling process and influence of vacuum during filling for hard case prismatic lithium ion cells by neutron imaging to optimize the production process. J. Power Sources 2018, 380, 126–134.
  14. Schilling, A.; Gümbel, P.; Möller, M.; Kalkan, F.; Dietrich, F.; Dröder, K. X-ray Based Visualization of the Electrolyte Filling Process of Lithium Ion Batteries. J. Electrochem. Soc. 2018, 166, A5163–A5167.
  15. Frankenberger, M.; Trunk, M.; Seidlmayer, S.; Dinter, A.; Dittloff, J.; Werner, L.; Gernhäuser, R.; Revay, Z.; Märkisch, B.; Gilles, R.; et al. SEI Growth Impacts of Lamination, Formation and Cycling in Lithium Ion Batteries. Batteries 2020, 6, 21.
  16. Tönshoff, H.K.; Jung, M.; Männel, S.; Rietz, W. Using acoustic emission signals for monitoring of production processes. Ultrasonics 2000, 37, 681–686.
  17. Lee, D.E.; Hwang, I.; Valente, C.M.O.; Oliveira, J.F.G.; Dornfeld, D.A. Precision Manufacturing Process Monitoring with Acoustic Emission. In Advances in Design; Springer: Berlin/Heidelberg, Germany, 2006; Volume 21, pp. 33–54.
  18. Wu, H.; Yu, Z.; Wang, Y. A New Approach for Online Monitoring of Additive Manufacturing Based on Acoustic Emission. In Proceedings of the 11th International Manufacturing Science and Engineering Conference, Blacksburg, VA, USA, 27 June–1 July 2016; Volume 3: Joint MSEC-NAMRC Symposia.
  19. Gaja, H.; Liou, F. Defects monitoring of laser metal deposition using acoustic emission sensor. Int. J. Adv. Manuf. Technol. 2016, 90, 561–574.
  20. Koester, L.W.; Taheri, H.; Bigelow, T.A.; Bond, L.J.; Faierson, E.J. In-situ acoustic signature monitoring in additive manufacturing processes. AIP Conf. Proc. 2018, 1949, 020006.
  21. Shevchik, S.A.; Masinelli, G.; Kenel, C.; Leinenbach, C.; Wasmer, K. Deep Learning for In Situ and Real-Time Quality Monitoring in Additive Manufacturing Using Acoustic Emission. IEEE Trans. Ind. Inf. 2019, 15, 5194–5203.
  22. Chollet, F. Deep Learning mit Python und Keras: Das Praxis-Handbuch vom Entwickler der Keras-Bibliothek; MITP: Frechen, Germany, 2018.
  23. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
  24. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
  25. Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The Balanced Accuracy and Its Posterior Distribution. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3121–3124.
  26. Mohanty, D.; Hockaday, E.; Li, J.; Hensley, D.; Daniel, C.; Wood, D. Effect of electrode manufacturing defects on electrochemical performance of lithium-ion batteries: Cognizance of the battery failure sources. J. Power Sources 2016, 312, 70–79.
Figure 1. Experimental setup of the acoustic measurements during lamination.
Figure 2. Dataflow graph for pre-processing, feature analysis and evaluation.
Figure 3. Two-fold cross validation on dataset X and Y with training (Train.) and recognition (Rec.).
Figure 4. Confusion matrices for (a) the six-class and (b) the four-class classification task, showing the relative amounts of recordings of the reference class classified as the result class (for the class label nomenclature, see Table 2).
Figure 5. Comparison of different analysis and classification methods by the frequency of occurrence of differences in balanced accuracy between classification exercises which differ in one parameter: (a) comparison of the six-class and four-class classification tasks, (b) comparison of the usage of one or both microphone signals, (c) comparison of different frequency resolutions, (d) comparison of different frame lengths, (e) comparison of different neural network topologies, and (f) comparison of processing the whole recording and the time frame.
Table 1. Experimental design for acoustic measurement process monitoring.

Electrode | Separator | Protective Liner ID | Recordings Total | Recordings Used | Size (MB)
Without | With | 1 | 24 | 23 | 245
Without | Without | 2 | 24 | 23 | 133
Anode 1 | With | 3 | 12 | 12 | 70
Anode 1 | Without | 3 | 12 | 12 | 72
Anode 2 | With | 4 | 13 | 12 | 71
Anode 2 | Without | 4 | 12 | 12 | 71
Cathode 1 | With | 7 | 12 | 12 | 65
Cathode 1 | Without | 7 | 12 | 12 | 70
Cathode 2 | With | 8 | 12 | 8 | 42
Cathode 2 | Without | 8 | 15 | 15 | 84
Total | | | 148 | 141 | 922
Table 2. Class labels for six-class and four-class classification task.

Electrode | Separator | 6-Class Task | 4-Class Task | Dataset
Without | With | S | S | 50% X + 50% Y
Without | Without | N | N | 50% X + 50% Y
Anode 1 | With | A + S | E + S | X
Anode 1 | Without | A | E | X
Anode 2 | With | A + S | E + S | Y
Anode 2 | Without | A | E | Y
Cathode 1 | With | C + S | E + S | X
Cathode 1 | Without | C | E | X
Cathode 2 | With | C + S | E + S | Y
Cathode 2 | Without | C | E | Y
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
