Manifold Feature Fusion with Dynamical Feature Selection for Cross-Subject Emotion Recognition
Abstract
1. Introduction
2. Related Works
3. Materials and Methods
3.1. Database Descriptions
3.2. EEG Data Preprocessing
3.3. Feature Extraction
3.3.1. Classical EEG Features
3.3.2. Differential Entropies
3.4. Manifold Feature Fusion and Dynamical Feature Selection
3.4.1. Neighborhood Component Analysis
- Partition the EEG feature data into K subsets, each containing the EEG data of one subject;
- Perform K-fold leave-one-subject-out validation;
- For each fold, train an NCA model on K − 1 subsets and validate the trained model on the remaining subset;
- Return the classification loss of the current fold, defined as the mean square error;
- Repeat steps (2)–(4) to find the lowest loss and the corresponding optimal value of the regularization parameter;
- Perform NCA feature selection with the optimal regularization parameter.
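The validation loop above can be sketched as a generic leave-one-subject-out grid search. Here `fit_score` is a hypothetical stand-in for training an NCA model with a given regularization strength and returning its mean-square loss on the held-out subject; the function name and signature are ours, not the paper's:

```python
import numpy as np

def loso_lambda_search(features, labels, subject_ids, lambdas, fit_score):
    """Leave-one-subject-out search for the NCA regularization strength.

    fit_score(train_X, train_y, test_X, test_y, lam) -> loss stands in for
    training an NCA model with regularization lam and returning its
    mean-square classification loss on the held-out subject.
    """
    subjects = np.unique(subject_ids)
    mean_loss = []
    for lam in lambdas:
        fold_losses = []
        for s in subjects:                 # each fold holds out one subject
            test = subject_ids == s
            train = ~test
            fold_losses.append(fit_score(features[train], labels[train],
                                         features[test], labels[test], lam))
        mean_loss.append(np.mean(fold_losses))
    # Regularization value whose mean held-out loss is lowest.
    return lambdas[int(np.argmin(mean_loss))]
```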
3.4.2. Geodesic Flow Kernel
- (1) Obtain the optimal dimension of the subspaces;
- (2) Build the geodesic flow;
- (3) Calculate the geodesic flow kernel.
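Steps (2) and (3) admit a compact numerical sketch. Gong et al. compute the kernel in closed form; here, as an illustration only, the integral is approximated by averaging over sampled points along the Grassmann geodesic, and step (1), choosing the subspace dimension, is assumed already done:

```python
import numpy as np

def geodesic_flow_kernel(Ps, Pt, n_steps=50):
    """Approximate G = integral over t in [0,1] of Phi(t) Phi(t)^T.

    Ps, Pt: (D, d) orthonormal bases of the source and target subspaces.
    Phi(t) interpolates along the geodesic from span(Ps) at t = 0 to
    span(Pt) at t = 1; the integral is replaced by an average over n_steps.
    """
    D, d = Ps.shape
    Q, _ = np.linalg.qr(Ps, mode="complete")
    Rs = Q[:, d:]                                # orthonormal complement of Ps
    U1, cos_t, Vt = np.linalg.svd(Ps.T @ Pt)     # principal angles: cos(theta)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    sin_t = np.sin(theta)
    B = Rs.T @ Pt @ Vt.T                         # equals -U2 diag(sin(theta))
    U2 = np.zeros((D - d, d))
    nz = sin_t > 1e-12                           # guard fully aligned directions
    U2[:, nz] = -B[:, nz] / sin_t[nz]
    G = np.zeros((D, D))
    for t in np.linspace(0.0, 1.0, n_steps):
        Phi = (Ps @ U1 @ np.diag(np.cos(t * theta))
               - Rs @ U2 @ np.diag(np.sin(t * theta)))
        G += Phi @ Phi.T / n_steps
    return G
```

With `G` in hand, the domain-invariant similarity between a source sample `x` and a target sample `z` is the quadratic form `x.T @ G @ z`.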
3.4.3. Dynamical Feature Selection and Performance Evaluation of the MF-DFS
- Perform the leave-one-subject-out training and testing procedure;
- Select a CL or DE feature set from a database with K subjects and compute the corresponding feature matrix;
- Define a testing set whose EEG data are drawn from a single subject;
- Train a predefined emotion classifier on the remaining subjects' EEG data; the dimension of the EEG feature vector is denoted n;
- Rank the features according to the feature weights of the trained classifier;
- Remove the feature with the lowest weight and update the feature matrix;
- Retrain the SVM classifier on the current feature matrix and update the feature weights;
- Repeat steps (5)–(7) until all features have been removed;
- Generate a feature ranking from the order of removal: the first (or last) removed feature receives the lowest (or highest) ranking;
- Given the classifier, compute classification accuracies: the 1st accuracy uses only the top-ranked feature to train the classifier, the 2nd accuracy uses the top two features, and the nth accuracy uses all features;
- Determine the optimal feature combination as the one yielding the highest accuracy in step (10);
- Repeat steps (3)–(11) for all testing subjects.
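The ranking-and-reselection loop (steps (5)–(11)) can be sketched with two hypothetical helpers: `fit_weights(X, y)` stands in for training the classifier and returning its per-feature weights, and `score(...)` for measuring accuracy on the held-out subject; both names are ours:

```python
import numpy as np

def dynamic_feature_selection(fit_weights, score, X_tr, y_tr, X_te, y_te):
    """Sketch of the MF-DFS ranking-and-selection loop."""
    remaining = list(range(X_tr.shape[1]))
    removal_order = []
    while remaining:                          # steps (5)-(8): rank, drop, retrain
        w = fit_weights(X_tr[:, remaining], y_tr)
        worst = int(np.argmin(w))
        removal_order.append(remaining.pop(worst))
    ranking = removal_order[::-1]             # last removed = highest ranked
    # Step (10): accuracy using the top-k ranked features, for every k.
    accs = [score(X_tr[:, ranking[:k]], y_tr, X_te[:, ranking[:k]], y_te)
            for k in range(1, len(ranking) + 1)]
    best_k = int(np.argmax(accs)) + 1         # step (11): best feature count
    return ranking[:best_k], accs
```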
4. Results
4.1. NCA Model Selection
4.2. Feature Selection Performance with Different Classifiers
4.3. Statistical Test of Feature Selection Performance
4.4. Performance Comparison between the MF-DFS and Original EEG Features
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Preprocessing Steps | Annotations | Applied Databases |
|---|---|---|
| Channel selection | 32 channels (Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, and O2) according to the 10–20 system. | All |
| Downsampling | DEAP and MAHNOB-HCI were downsampled to 128 Hz; SEED was downsampled to 200 Hz. | All |
| Rereferencing | Subtract the average amplitude of all 32 channels. | MAHNOB-HCI |
| Bandpass filtering | Fifth-order Butterworth filter with cutoff frequencies of 4 and 45 Hz. | All |
| Highpass filtering | Seventh-order Butterworth filter with a cutoff frequency of 3 Hz. | MAHNOB-HCI and SEED |
| Lowpass filtering | Seventh-order Butterworth filter with a cutoff frequency of 45 Hz. | SEED |
| Data segmentation | | All |
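As one concrete reading of the band-pass row above, a fifth-order 4–45 Hz Butterworth filter could be built with SciPy as follows. This is a sketch: the function name is ours, and the zero-phase `filtfilt` application is our choice, since the table does not specify the filtering direction:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(x, fs, low=4.0, high=45.0, order=5):
    """Fifth-order Butterworth band-pass, applied forward and backward
    (zero phase) to a single-channel EEG signal sampled at fs Hz."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

# Example: a 10 Hz rhythm riding on a DC offset; the offset is removed
# while the in-band oscillation passes through essentially unchanged.
fs = 128
t = np.arange(512) / fs
x = np.sin(2 * np.pi * 10 * t) + 2.0
y = bandpass_eeg(x, fs)
```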
| Feature Type | Feature Description | Feature Dimension |
|---|---|---|
| Classical features | | 160 |
| | | 128 |
| | Central scalp: , , , ; Parietal scalp: , , , | 56 |
| | | 20 |
| Differential entropy | The DE values of the 4–8 Hz, 9–12 Hz, 13–30 Hz, and 31–45 Hz frequency bands of each EEG channel. | 128 |
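For a band-passed EEG segment modeled as Gaussian, the DE row above reduces to a one-liner, DE = 0.5 ln(2πeσ²); computing it for each of the four bands on each of the 32 channels yields the 128-dimensional feature vector. A sketch with our own function names:

```python
import numpy as np

def differential_entropy(segment):
    """DE of a (near-)Gaussian signal segment: 0.5 * ln(2 * pi * e * var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

def de_features(bandpassed):
    """bandpassed: (n_bands, n_channels, n_samples) array of filtered EEG.
    Returns the flattened n_bands * n_channels DE feature vector."""
    n_bands, n_channels, _ = bandpassed.shape
    return np.array([differential_entropy(bandpassed[b, c])
                     for b in range(n_bands) for c in range(n_channels)])
```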
| Classifier | Hyper-Parameter Settings |
|---|---|
| RF | Number of estimators = 50 |
| AdaBoost | Number of estimators = 50, maximum depth = 24 |
| GBDT | Number of estimators = 50, maximum depth = 16 |
| XGBoost | Number of estimators = 50, maximum depth = 22 |
| DT | Maximum depth = 10, minimum samples in the leaf node = 12 |
| Performance | Classifier | NCA-GFK-CSBS | NCA-GFK-MI | NCA-GFK-RR | NCA-GFK-ERF | MF-DFS |
|---|---|---|---|---|---|---|
| CL-Average valence | RF | 0.4490 (5.60 × 10⁻²) | 0.4445 (5.59 × 10⁻²) | 0.4449 (6.26 × 10⁻²) | 0.4678 (5.10 × 10⁻²) | 0.4791 (6.63 × 10⁻²) |
| | AdaBoost | 0.4455 (4.74 × 10⁻²) | 0.4283 (5.48 × 10⁻²) | 0.4230 (6.12 × 10⁻²) | 0.4400 (5.56 × 10⁻²) | 0.4959 (7.76 × 10⁻²) |
| | GBDT | 0.4328 (5.05 × 10⁻²) | 0.4357 (4.71 × 10⁻²) | 0.4254 (6.83 × 10⁻²) | 0.4373 (5.88 × 10⁻²) | 0.4807 (7.85 × 10⁻²) |
| | XGBoost | 0.4447 (6.36 × 10⁻²) | 0.4486 (6.30 × 10⁻²) | 0.4355 (5.77 × 10⁻²) | 0.4584 (5.67 × 10⁻²) | 0.4814 (7.10 × 10⁻²) |
| | DT | 0.4039 (3.85 × 10⁻²) | 0.3980 (4.30 × 10⁻²) | 0.3887 (4.61 × 10⁻²) | 0.3967 (5.73 × 10⁻²) | 0.4754 (5.52 × 10⁻²) |
| CL-Average arousal | RF | 0.4477 (4.62 × 10⁻²) | 0.4543 (4.60 × 10⁻²) | 0.4465 (4.73 × 10⁻²) | 0.4717 (6.15 × 10⁻²) | 0.4984 (1.04 × 10⁻¹) |
| | AdaBoost | 0.4609 (3.57 × 10⁻²) | 0.4506 (3.28 × 10⁻²) | 0.4541 (3.62 × 10⁻²) | 0.4625 (4.35 × 10⁻²) | 0.5166 (1.30 × 10⁻²) |
| | GBDT | 0.4326 (4.88 × 10⁻²) | 0.4408 (4.90 × 10⁻²) | 0.4328 (4.73 × 10⁻²) | 0.4490 (4.96 × 10⁻²) | 0.5053 (1.40 × 10⁻¹) |
| | XGBoost | 0.4336 (5.04 × 10⁻²) | 0.4375 (5.03 × 10⁻²) | 0.4252 (5.22 × 10⁻²) | 0.4510 (6.42 × 10⁻²) | 0.5015 (1.40 × 10⁻¹) |
| | DT | 0.4123 (4.24 × 10⁻²) | 0.3982 (4.14 × 10⁻²) | 0.4027 (4.08 × 10⁻²) | 0.3957 (5.56 × 10⁻²) | 0.4879 (9.14 × 10⁻²) |
| DE-Average valence | RF | 0.4293 (3.42 × 10⁻²) | 0.4248 (3.03 × 10⁻²) | 0.4256 (2.52 × 10⁻²) | 0.4389 (2.36 × 10⁻²) | 0.4779 (6.64 × 10⁻²) |
| | AdaBoost | 0.4422 (2.85 × 10⁻²) | 0.4482 (2.61 × 10⁻²) | 0.4447 (2.90 × 10⁻²) | 0.4445 (2.13 × 10⁻²) | 0.4754 (7.42 × 10⁻²) |
| | GBDT | 0.4162 (3.72 × 10⁻²) | 0.4211 (3.72 × 10⁻²) | 0.4188 (3.34 × 10⁻²) | 0.4201 (3.31 × 10⁻²) | 0.4831 (7.32 × 10⁻²) |
| | XGBoost | 0.3885 (3.43 × 10⁻²) | 0.3822 (3.56 × 10⁻²) | 0.3809 (3.56 × 10⁻²) | 0.3924 (3.64 × 10⁻²) | 0.4754 (7.55 × 10⁻²) |
| | DT | 0.4102 (3.61 × 10⁻²) | 0.4162 (4.41 × 10⁻²) | 0.4031 (3.70 × 10⁻²) | 0.3891 (3.87 × 10⁻²) | 0.4738 (6.08 × 10⁻²) |
| DE-Average arousal | RF | 0.4531 (2.52 × 10⁻²) | 0.416 (2.74 × 10⁻²) | 0.4564 (2.44 × 10⁻²) | 0.4635 (1.96 × 10⁻²) | 0.4971 (1.28 × 10⁻¹) |
| | AdaBoost | 0.4686 (1.74 × 10⁻²) | 0.4703 (1.63 × 10⁻²) | 0.4773 (2.09 × 10⁻²) | 0.4684 (1.88 × 10⁻²) | 0.5000 (1.50 × 10⁻¹) |
| | GBDT | 0.4433 (3.18 × 10⁻²) | 0.4379 (3.46 × 10⁻²) | 0.4469 (2.69 × 10⁻²) | 0.4443 (3.37 × 10⁻²) | 0.4988 (1.49 × 10⁻¹) |
| | XGBoost | 0.4037 (3.11 × 10⁻²) | 0.3980 (3.86 × 10⁻²) | 0.4086 (3.58 × 10⁻²) | 0.3977 (4.87 × 10⁻²) | 0.4998 (1.55 × 10⁻¹) |
| | DT | 0.4268 (3.59 × 10⁻²) | 0.4143 (3.96 × 10⁻²) | 0.4053 (3.66 × 10⁻²) | 0.3904 (4.03 × 10⁻²) | 0.4953 (1.22 × 10⁻¹) |
| Performance | Classifier | NCA-GFK-CSBS | NCA-GFK-MI | NCA-GFK-RR | NCA-GFK-ERF | MF-DFS |
|---|---|---|---|---|---|---|
| CL-Average valence | RF | 0.4333 (8.23 × 10⁻²) | 0.4213 (8.41 × 10⁻²) | 0.4427 (8.00 × 10⁻²) | 0.4421 (8.37 × 10⁻²) | 0.5380 (9.07 × 10⁻²) |
| | AdaBoost | 0.4182 (8.15 × 10⁻²) | 0.4317 (7.31 × 10⁻²) | 0.4208 (6.62 × 10⁻²) | 0.4177 (9.14 × 10⁻²) | 0.5047 (3.56 × 10⁻²) |
| | GBDT | 0.4208 (8.68 × 10⁻²) | 0.4015 (8.43 × 10⁻²) | 0.4344 (8.38 × 10⁻²) | 0.4244 (7.01 × 10⁻²) | 0.5234 (9.56 × 10⁻²) |
| | XGBoost | 0.4208 (6.83 × 10⁻²) | 0.4093 (6.31 × 10⁻²) | 0.4307 (7.90 × 10⁻²) | 0.4375 (6.87 × 10⁻²) | 0.5146 (7.16 × 10⁻²) |
| | DT | 0.3969 (6.60 × 10⁻²) | 0.3838 (5.02 × 10⁻²) | 0.3974 (6.29 × 10⁻²) | 0.3875 (7.06 × 10⁻²) | 0.5078 (7.50 × 10⁻²) |
| CL-Average arousal | RF | 0.4477 (6.68 × 10⁻²) | 0.3854 (6.95 × 10⁻²) | 0.3771 (7.65 × 10⁻²) | 0.4031 (7.37 × 10⁻²) | 0.4755 (6.80 × 10⁻²) |
| | AdaBoost | 0.4609 (4.59 × 10⁻²) | 0.3708 (5.22 × 10⁻²) | 0.3516 (5.57 × 10⁻²) | 0.3948 (6.85 × 10⁻²) | 0.4495 (6.39 × 10⁻²) |
| | GBDT | 0.3719 (6.32 × 10⁻²) | 0.3734 (5.92 × 10⁻²) | 0.3677 (6.33 × 10⁻²) | 0.3828 (8.23 × 10⁻²) | 0.4682 (8.57 × 10⁻²) |
| | XGBoost | 0.3740 (6.56 × 10⁻²) | 0.3614 (6.63 × 10⁻²) | 0.3521 (6.58 × 10⁻²) | 0.3796 (7.25 × 10⁻²) | 0.4646 (4.42 × 10⁻²) |
| | DT | 0.3458 (6.87 × 10⁻²) | 0.3802 (6.33 × 10⁻²) | 0.3656 (6.72 × 10⁻²) | 0.3750 (6.58 × 10⁻²) | 0.4521 (4.54 × 10⁻²) |
| DE-Average valence | RF | 0.4198 (9.21 × 10⁻²) | 0.4072 (8.56 × 10⁻²) | 0.4026 (8.37 × 10⁻²) | 0.4286 (8.12 × 10⁻²) | 0.5120 (9.10 × 10⁻²) |
| | AdaBoost | 0.3932 (9.48 × 10⁻²) | 0.3979 (7.47 × 10⁻²) | 0.3990 (6.39 × 10⁻²) | 0.4073 (7.22 × 10⁻²) | 0.4750 (6.99 × 10⁻²) |
| | GBDT | 0.3911 (9.33 × 10⁻²) | 0.3822 (8.44 × 10⁻²) | 0.4021 (7.31 × 10⁻²) | 0.4062 (8.36 × 10⁻²) | 0.4953 (9.25 × 10⁻²) |
| | XGBoost | 0.4156 (8.50 × 10⁻²) | 0.3744 (8.42 × 10⁻²) | 0.4177 (7.63 × 10⁻²) | 0.4020 (8.73 × 10⁻²) | 0.4859 (6.56 × 10⁻²) |
| | DT | 0.3964 (6.58 × 10⁻²) | 0.3713 (8.85 × 10⁻²) | 0.3818 (6.37 × 10⁻²) | 0.3838 (9.31 × 10⁻²) | 0.4745 (6.43 × 10⁻²) |
| DE-Average arousal | RF | 0.4010 (7.33 × 10⁻²) | 0.3786 (6.89 × 10⁻²) | 0.4208 (7.67 × 10⁻²) | 0.4083 (7.95 × 10⁻²) | 0.4661 (5.21 × 10⁻²) |
| | AdaBoost | 0.3776 (6.85 × 10⁻²) | 0.3718 (6.17 × 10⁻²) | 0.3927 (6.17 × 10⁻²) | 0.3870 (6.26 × 10⁻²) | 0.4589 (6.34 × 10⁻²) |
| | GBDT | 0.4208 (8.02 × 10⁻²) | 0.3867 (7.26 × 10⁻²) | 0.3818 (6.83 × 10⁻²) | 0.3630 (5.09 × 10⁻²) | 0.4630 (7.57 × 10⁻²) |
| | XGBoost | 0.3625 (5.32 × 10⁻²) | 0.3546 (6.18 × 10⁻²) | 0.3724 (6.03 × 10⁻²) | 0.3651 (6.55 × 10⁻²) | 0.4578 (4.87 × 10⁻²) |
| | DT | 0.3578 (6.42 × 10⁻²) | 0.3635 (6.32 × 10⁻²) | 0.3646 (5.43 × 10⁻²) | 0.3505 (8.24 × 10⁻²) | 0.4609 (5.00 × 10⁻²) |
| Performance | Classifier | NCA-GFK-CSBS | NCA-GFK-MI | NCA-GFK-RR | NCA-GFK-ERF | MF-DFS |
|---|---|---|---|---|---|---|
| CL-Average valence | RF | 0.3326 (3.17 × 10⁻²) | 0.3207 (2.48 × 10⁻²) | 0.3396 (3.74 × 10⁻²) | 0.3478 (4.56 × 10⁻²) | 0.3956 (2.14 × 10⁻²) |
| | AdaBoost | 0.3415 (2.57 × 10⁻²) | 0.3385 (3.40 × 10⁻²) | 0.3552 (5.68 × 10⁻²) | 0.3478 (5.07 × 10⁻²) | 0.3859 (2.35 × 10⁻²) |
| | GBDT | 0.3219 (4.42 × 10⁻²) | 0.3381 (2.92 × 10⁻²) | 0.3311 (3.27 × 10⁻²) | 0.3400 (5.09 × 10⁻²) | 0.3867 (2.04 × 10⁻²) |
| | XGBoost | 0.3178 (3.60 × 10⁻²) | 0.3237 (2.72 × 10⁻²) | 0.3274 (3.34 × 10⁻²) | 0.3363 (4.94 × 10⁻²) | 0.3907 (2.04 × 10⁻²) |
| | DT | 0.3326 (3.64 × 10⁻²) | 0.3307 (3.78 × 10⁻²) | 0.3370 (5.34 × 10⁻²) | 0.3407 (3.24 × 10⁻²) | 0.4037 (1.95 × 10⁻²) |
| DE-Average valence | RF | 0.3463 (4.28 × 10⁻²) | 0.3263 (3.70 × 10⁻²) | 0.3411 (4.57 × 10⁻²) | 0.3415 (3.44 × 10⁻²) | 0.3901 (2.33 × 10⁻²) |
| | AdaBoost | 0.3374 (3.38 × 10⁻²) | 0.3189 (3.29 × 10⁻²) | 0.3452 (3.30 × 10⁻²) | 0.3267 (2.52 × 10⁻²) | 0.3889 (2.68 × 10⁻²) |
| | GBDT | 0.3437 (3.96 × 10⁻²) | 0.3278 (3.52 × 10⁻²) | 0.3356 (3.13 × 10⁻²) | 0.3278 (3.16 × 10⁻²) | 0.3822 (2.61 × 10⁻²) |
| | XGBoost | 0.3452 (4.36 × 10⁻²) | 0.3263 (4.09 × 10⁻²) | 0.3459 (3.67 × 10⁻²) | 0.3315 (3.03 × 10⁻²) | 0.3833 (3.34 × 10⁻²) |
| | DT | 0.3326 (3.35 × 10⁻²) | 0.3130 (3.24 × 10⁻²) | 0.3330 (3.26 × 10⁻²) | 0.3370 (2.45 × 10⁻²) | 0.3974 (2.09 × 10⁻²) |
DEAP database:

| Classifier | Valence, without Feature Selection | Valence, MF-DFS | Arousal, without Feature Selection | Arousal, MF-DFS |
|---|---|---|---|---|
| RF-CL | 0.4367 | 0.4984 | 0.4613 | 0.4984 |
| AdaBoost-CL | 0.4215 | 0.4959 | 0.4719 | 0.5166 |
| GBDT-CL | 0.4418 | 0.4807 | 0.4660 | 0.5053 |
| XGBoost-CL | 0.4418 | 0.4814 | 0.4582 | 0.5016 |
| DT-CL | 0.3975 | 0.4754 | 0.4037 | 0.4879 |
| RF-DE | 0.4223 | 0.4779 | 0.4441 | 0.4970 |
| AdaBoost-DE | 0.4307 | 0.4754 | 0.4541 | 0.5000 |
| GBDT-DE | 0.4383 | 0.4831 | 0.4646 | 0.4988 |
| XGBoost-DE | 0.4156 | 0.4754 | 0.4383 | 0.4998 |
| DT-DE | 0.4174 | 0.4738 | 0.4098 | 0.4953 |
MAHNOB-HCI database:

| Classifier | Valence, without Feature Selection | Valence, MF-DFS | Arousal, without Feature Selection | Arousal, MF-DFS |
|---|---|---|---|---|
| RF-CL | 0.4365 | 0.5380 | 0.3844 | 0.4755 |
| AdaBoost-CL | 0.4135 | 0.5047 | 0.3922 | 0.4495 |
| GBDT-CL | 0.4250 | 0.5234 | 0.4193 | 0.4682 |
| XGBoost-CL | 0.4271 | 0.5146 | 0.4073 | 0.4646 |
| DT-CL | 0.4047 | 0.5078 | 0.3589 | 0.4521 |
| RF-DE | 0.4302 | 0.5120 | 0.3615 | 0.4661 |
| AdaBoost-DE | 0.4104 | 0.475 | 0.3734 | 0.4589 |
| GBDT-DE | 0.4271 | 0.4953 | 0.3766 | 0.4630 |
| XGBoost-DE | 0.4260 | 0.4860 | 0.3651 | 0.4578 |
| DT-DE | 0.4297 | 0.4745 | 0.3250 | 0.4609 |
SEED database (valence):

| Classifier | SEED-CL, without Feature Selection | SEED-CL, MF-DFS | SEED-DE, without Feature Selection | SEED-DE, MF-DFS |
|---|---|---|---|---|
| RF | 0.3685 | 0.3956 | 0.3393 | 0.3901 |
| AdaBoost | 0.3530 | 0.3859 | 0.3285 | 0.3889 |
| GBDT | 0.3477 | 0.3867 | 0.3315 | 0.3822 |
| XGBoost | 0.3555 | 0.3907 | 0.3211 | 0.3833 |
| DT | 0.3470 | 0.4037 | 0.3241 | 0.3974 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hua, Y.; Zhong, X.; Zhang, B.; Yin, Z.; Zhang, J. Manifold Feature Fusion with Dynamical Feature Selection for Cross-Subject Emotion Recognition. Brain Sci. 2021, 11, 1392. https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci11111392