
Neighboring Discriminant Component Analysis for Asteroid Spectrum Classification

1 Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau 999078, China
2 State Key Laboratory of Lunar and Planetary Sciences, Macau University of Science and Technology, Taipa, Macau 999078, China
3 School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
4 Global Information and Telecommunication Institute, Waseda University, Tokyo 169-8050, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(16), 3306; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163306
Submission received: 1 July 2021 / Revised: 15 August 2021 / Accepted: 16 August 2021 / Published: 20 August 2021
(This article belongs to the Special Issue Recent Advances in Hyperspectral Image Processing)

Abstract

With the rapid development of aeronautic and deep space exploration technologies, a large volume of high-resolution asteroid spectral data has been gathered, providing diagnostic information for identifying different categories of asteroids as well as their surface composition and mineralogical properties. However, owing to the noise of observation systems and ever-changing external observation environments, the observed asteroid spectral data always contain noise and outliers, so that different classes exhibit poorly separable pattern characteristics, which poses great challenges to the precise classification of asteroids. To alleviate this problem and to improve the separability and classification accuracy for different kinds of asteroids, this paper presents a novel Neighboring Discriminant Component Analysis (NDCA) model for asteroid spectrum feature learning. The key motivation is to transform the asteroid spectral data from the observation space into a feature subspace in which the negative effects of outliers and noise are minimized while the key category-related knowledge in the asteroid spectral data is well explored. The effectiveness of the proposed NDCA model is verified on real-world asteroid reflectance spectra measured over the wavelength range from 0.45 to 2.45 μm, and promising classification performance is achieved by the NDCA model in combination with different classifier models, such as the nearest neighbor (NN), support vector machine (SVM) and extreme learning machine (ELM).

Graphical Abstract

1. Introduction

Deep space exploration is a focus of space activities around the world; it aims to explore the mysteries of the universe, search for extraterrestrial life and acquire new knowledge [1,2,3]. Planetary science plays an increasingly important role in the high-quality and sustainable development of deep space exploration [4,5]. Asteroids, a special kind of celestial body revolving around the sun, are of great scientific significance for studying the origin and evolution of the solar system, exploring mineral resources and protecting the safety of the earth, owing to their large number, diverse individual characteristics and special orbits [6,7,8]. Studies have shown that the thermal radiation from an asteroid mainly depends on its size, shape, albedo, thermal inertia and surface roughness [9,10]. Asteroids of different types (such as the S-type, V-type, etc.) in different regions (such as the Jupiter trojans, Hungaria group, etc.) show different spectral characteristics, which establishes the foundation for identifying different kinds of asteroids via remote spectral observation [11,12]. For example, near-infrared data can reveal diagnostic compositional information, and the salient features in the 1 and 2 μm bands can indicate the existence or absence of olivine and pyroxene [12]. Astronomers have developed many remote observation methods for asteroids, such as spectral and polychromatic photometry as well as infrared and radio radiation methods [13,14,15,16]. Thus, a large volume of asteroid visible and near-infrared spectral data has been collected with the development of space- and ground-based telescope observation technologies, which has driven great progress in asteroid taxonomy based on spectral characteristics [17,18,19,20].
The Eight-Color Asteroid Survey (ECAS) is the most remarkable ground-based asteroid observation survey, which gathered spectrophotometric observations of about 600 large asteroids [14]. However, very few small main-belt asteroids were observed due to their faintness. With the advent of the charge-coupled device (CCD), it became possible to study large-scale spectral data of small main-belt asteroids with diameters of less than 1 km [21]. The first phase of the Small Main-belt Asteroid Spectroscopic Survey (SMASSI) was carried out from 1991 to 1993 at the Michigan-Dartmouth-MIT Observatory [15,20]. The main objective of SMASSI was to measure the spectral properties of small and medium-sized asteroids, and it primarily focused on objects in the inner main belt in order to study the correlations between meteorites and asteroids. Based on this survey, abundant spectral measurements for 316 different asteroids were collected. In view of the successes of SMASSI, the second phase of the Small Main-belt Asteroid Spectroscopic Survey (SMASSII) focused on gathering an even larger and internally consistent asteroid dataset, with spectral observations and reductions carried out as consistently as possible [20]. Thus, SMASSII has provided a new basis for studying the composition and structure of the asteroid belt [9].
For asteroid taxonomy, Tholen et al. applied the minimal tree method in combination with principal component analysis (PCA) to classify nearly 600 asteroid spectra from the ECAS [14]. For a more comprehensive and accurate classification of asteroids, DeMeo et al. developed an extended taxonomy to characterize visible and near-infrared wavelength spectra [20]. The asteroid spectral data used for that taxonomy are based on reflectance spectral characteristics measured in the wavelength range from 0.45 to 2.45 μm with 379–688 bands. In total, the dataset comprised 371 objects with both visible and near-infrared data. The SMASSII dataset provided most of the visible wavelength spectra, and the near-infrared spectral measurements from 0.8 to 2.5 μm were obtained using SpeX, the low- to medium-resolution near-infrared spectrograph and imager at the 3-m NASA IRTF on Mauna Kea, Hawaii [20]. A detailed description of the dataset is given in Table 1. Based on this dataset, DeMeo et al. presented the taxonomy, as well as the method and rationale, for the class definitions of different kinds of asteroids. Specifically, three main complexes, i.e., the S-complex, C-complex and X-complex, were defined based on empirical spectral characteristics/features, such as the spectral curve slope, absorption bands and so on.
Nevertheless, how to automatically discover the key category-related spectral characteristics/features of different kinds of asteroids remains an open problem [9,22,23]. Meanwhile, owing to the noise of observation systems and ever-changing external conditions, the observed spectral data usually contain noise and distortions, which cause spectrum mixture due to the random perturbations of electronic observation devices. As a result, the observed asteroid spectral data often show poorly separable pattern characteristics [24,25]. Furthermore, the observed spectral data always cover wide bands, such as the visible and near-infrared bands. Thus, the reflectance at one wavelength is usually correlated with the reflectance at adjacent wavelengths [26]. Accordingly, adjacent spectral bands are usually redundant, and some bands may not contain discriminant information for asteroid classification. Moreover, the abundant spectral information results in high data dimensionality containing useless or even harmful information and brings about the “curse of dimensionality” problem, i.e., for a fixed and limited number of training samples, the classification accuracy of spectral data might decrease as the dimensionality of the spectral feature increases [27]. Therefore, it is necessary to develop effective low-dimensional asteroid spectral feature learning methods and to find the latent key discriminative knowledge for different kinds of asteroids, which will be very beneficial for the precise classification of asteroids.
Machine learning techniques for spectral data processing and applications, such as classification and target detection, have developed rapidly in recent years [28,29,30,31,32,33,34,35]. For example, the classic PCA has been applied to extract meaningful features from observed spectral data without using prior label information. PCA is also useful for asteroid and meteorite spectra analysis because many of the variables, i.e., the reflectance values at different wavelengths, are highly correlated [15,20,36]. Linear discriminant analysis (LDA) can make full use of the label priors by concurrently minimizing the within-class scatter and maximizing the between-class scatter in a dimension-reduced subspace [37]. In addition to the above statistics-based methods, some geometry-based methods have also been proposed for data dimensionality reduction. For example, locality preserving projections (LPP) assume that neighboring samples are likely to share similar labels and that the affinity relationships among samples should be preserved in subspace learning/dimension reduction [38]. Locality preserving discriminant projections (LPDP) have also been developed with locality and Fisher criteria, and can be seen as a combination of LDA and LPP [39,40].
To define the class boundaries for asteroid classification, traditional methods always determine the spectral features empirically by relying on the presence or absence of specific features, such as the spectral curve slope and absorption wavelengths, which can be intricate and less reliable. Based on the well-labeled asteroid spectral dataset described in Table 1, the main objective of this paper is to study the pattern characteristics of different categories of asteroids from the perspective of data-driven machine learning and to develop an efficient asteroid spectral feature learning and classification method in a supervised fashion, as shown in Figure 1. Specifically, it is assumed that not only the specific absorption bands, such as the 1 μm and 2 μm bands, but all the spectral wavelengths might carry useful diagnostic information for asteroid category identification and contribute to the accurate classification of different kinds of asteroids. Accordingly, the spectral data spanning the visible to near-infrared wavelengths, i.e., from 0.45 to 2.45 μm, are treated as a whole in order to automatically discover the key category-related discriminative information for efficient asteroid spectral feature learning and classification using a supervised data-driven machine learning methodology. The novelties and contributions of this paper are summarized below.
(1)
Instead of empirically determining the spectral features via the presence or absence of specific spectral features to define asteroid class boundaries for classification, this paper presents a novel supervised Neighboring Discriminant Component Analysis (NDCA) model for discriminative asteroid spectral feature learning. NDCA simultaneously maximizes the neighboring between-class scatter and the data variance and minimizes the neighboring within-class scatter, which alleviates the overfitting problem caused by outliers and enhances the discrimination and generalization ability of the model.
(2)
With the neighboring discrimination learning strategy, the proposed NDCA model has stronger robustness to abnormal samples and outliers, and the generalization performance can thus be improved. In addition, the NDCA model transforms the data from the observation space into a more separable subspace, and the key category-related knowledge can be well discovered and preserved for different classes of asteroids with neighboring structure preservation and label prior guidance.
(3)
The performance of the proposed NDCA model is verified on a real-world asteroid dataset covering the spectral wavelengths from 0.45 to 2.45 μm in combination with different baseline classifier models, including the nearest neighbor (NN), support vector machine (SVM) and extreme learning machine (ELM). In particular, the best result is achieved with ELM, with a classification accuracy of about 95.19%.
The remainder of this paper is structured as follows. Section 2 reviews related work on subspace learning/dimension reduction and machine learning classifier models. Section 3 introduces the proposed NDCA model in detail. Section 4 presents the experimental results and discussions. Section 5 concludes the paper.

2. Related Work

2.1. Notations Used in This Paper

In this paper, the observed asteroid visible and near-infrared spectroscopy dataset is denoted as $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{D \times N}$, comprising $N$ spectral samples of dimensionality $D$ from $C$ classes. $N_i$ is the number of samples in the $i$-th class. The label matrix for $X$ is denoted as $T = [t_1, t_2, \ldots, t_N] \in \mathbb{R}^{C \times N}$, with $t_i$ the label vector for $x_i$. The label of each sample in $X$ is coded as a $C$-dimensional vector: the $j$-th entry of $t_i$ is $+1$ and the remaining entries are $0$, indicating that sample $x_i$ belongs to the $j$-th category. The basic idea of linear low-dimensional feature learning, i.e., dimension reduction, is to automatically learn an optimal transformation matrix $P = [p_1, p_2, \ldots, p_d] \in \mathbb{R}^{D \times d}$ with $d < D$, which projects the observed spectral data from the original high $D$-dimensional observation space into a lower $d$-dimensional feature subspace and obtains the low-dimensional meaningful features $Y \in \mathbb{R}^{d \times N}$ of $X$ via $Y = P^T X = [y_1, y_2, \ldots, y_N]$. Table 2 summarizes the important notations used in this paper.
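For readers who prefer code to notation, the following short NumPy sketch instantiates these symbols; the random data and the arbitrary orthonormal projection are placeholders for illustration, not part of the method:

```python
import numpy as np

# Toy instantiation of the notation: D spectral bands, N samples, C classes
# and a target dimension d (the values mirror the dataset used in Section 4).
D, N, C, d = 41, 361, 9, 8
rng = np.random.default_rng(0)
X = rng.random((D, N))                      # columns x_i are spectral samples
class_ids = rng.integers(C, size=N)

# One-hot label matrix T (C x N): the j-th entry of t_i is +1 when x_i
# belongs to the j-th category, and all remaining entries are 0.
T = np.zeros((C, N))
T[class_ids, np.arange(N)] = 1.0

# Linear dimension reduction is Y = P^T X with P of size D x d; an arbitrary
# orthonormal P stands in here for a learned projection matrix.
P = np.linalg.qr(rng.standard_normal((D, d)))[0]
Y = P.T @ X                                 # low-dimensional features, d x N
```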

2.2. Low-Dimensional Feature Learning for Spectral Data

In the process of low-dimensional feature learning, the key data knowledge and information, such as the discriminative structures, should be preserved and enhanced, while the noise and redundant information should be removed and suppressed. Principal component analysis (PCA) is a widely applied unsupervised statistical dimension reduction and feature learning method that focuses on maximizing the variance of the data along its significant principal components [33]. A formulation of PCA can be derived by solving the following least squares problem:
$$\min_P \| X - P P^T X \|_F^2 \quad \text{s.t.} \quad P^T P = I_d \qquad (1)$$
where $\| \cdot \|_F^2$ denotes the squared Frobenius norm of a matrix, and $I_d$ is the $d \times d$ identity matrix. Formula (1) is equivalent to maximizing the variance of the transformed data as follows [33]:
$$\max_P \mathrm{Tr}(P^T X X^T P) \quad \text{s.t.} \quad P^T P = I_d \qquad (2)$$
Unlike PCA, LDA is a supervised dimension reduction method that aims to maximize the separability between different classes and enhance the compactness within each class under the guidance of label information, as described below [34,41,42,43]:
$$\max_P \frac{\mathrm{Tr}(P^T S_B P)}{\mathrm{Tr}(P^T S_W P)} \qquad (3)$$
where $S_W$ and $S_B$ are the within-class and between-class scatter matrices of the data, respectively, which are calculated as follows [34,41,42,43]:
$$S_W = \sum_{i=1}^{C} \sum_{j=1}^{N_i} (x_i^j - \mu_i)(x_i^j - \mu_i)^T \qquad (4)$$
$$S_B = \sum_{i=1}^{C} N_i (\mu_i - \mu)(\mu_i - \mu)^T \qquad (5)$$
where $x_i^j$ is the $j$-th sample of the $i$-th class, and $\mu_i$ and $\mu$ are the mean of the samples in the $i$-th class and the mean of all the samples in $X$, respectively.
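As a concrete reference point, here is a minimal NumPy/SciPy sketch of the textbook LDA solution to Equations (3)–(5). It is an illustration of the standard formulas, not the authors' implementation, and the small ridge term is an added numerical safeguard:

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, labels, d):
    """Textbook LDA of Eqs. (3)-(5). X: D x N data matrix (columns are
    samples); labels: length-N integer class indices; d: target dimension."""
    D, _ = X.shape
    mu = X.mean(axis=1, keepdims=True)                    # global mean
    S_W = np.zeros((D, D))
    S_B = np.zeros((D, D))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mu_c = Xc.mean(axis=1, keepdims=True)             # class mean
        S_W += (Xc - mu_c) @ (Xc - mu_c).T                # Eq. (4)
        S_B += Xc.shape[1] * (mu_c - mu) @ (mu_c - mu).T  # Eq. (5)
    # Generalized eigenproblem S_B p = lambda S_W p; the ridge keeps S_W
    # positive definite when D exceeds the number of training samples.
    w, V = eigh(S_B, S_W + 1e-6 * np.eye(D))
    return V[:, np.argsort(w)[::-1][:d]]                  # projection P, D x d
```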

2.3. Classifier Models for Spectral Data Classification

Classifier models, such as NN, SVM [44] and ELM [45,46,47,48], have been commonly used in the machine learning and pattern recognition communities to recognize and classify spectral data. In particular, the extreme learning machine (ELM) is a machine learning paradigm for generalized single-hidden-layer feedforward neural networks and has been widely studied and applied due to some unique characteristics, such as its high learning speed and good generalization and universal approximation abilities [47]. The most noteworthy characteristic of ELM is that the weights between the input and hidden layers are randomly generated and require no further adjustment. The objective function of ELM is formulated as below:
$$\min_\beta \frac{1}{2} \|\beta\|_F^2 + \frac{\alpha}{2} \sum_{i=1}^{N} \|\xi_i\|_F^2 \quad \text{s.t.} \quad h(x_i)\beta = t_i - \xi_i,\ i = 1, 2, \ldots, N, \ \text{i.e.,} \ H\beta = T - \xi \qquad (6)$$
where $\beta \in \mathbb{R}^{L \times C}$ denotes the output weights connecting the hidden layer and the output layer, $\xi = [\xi_1, \xi_2, \ldots, \xi_N]^T \in \mathbb{R}^{N \times C}$ is the prediction error matrix with respect to the training data, and $H \in \mathbb{R}^{N \times L}$ is the hidden layer output matrix, computed as follows:
$$H = \begin{bmatrix} h(w_1^T x_1 + b_1) & \cdots & h(w_L^T x_1 + b_L) \\ \vdots & \ddots & \vdots \\ h(w_1^T x_N + b_1) & \cdots & h(w_L^T x_N + b_L) \end{bmatrix} \qquad (7)$$
where $h(\cdot)$ is the activation function of the hidden layer, for example the sigmoid function. $W = [w_1, w_2, \ldots, w_L] \in \mathbb{R}^{d \times L}$ and $b = [b_1, b_2, \ldots, b_L] \in \mathbb{R}^{L}$ are the randomly generated input weights and biases, respectively. The output weight matrix $\beta$ transforms the data from the $L$-dimensional hidden layer space into the $C$-dimensional label space and is calculated analytically in the following manner:
$$\beta^* = \begin{cases} \left( H^T H + \dfrac{I_{L \times L}}{\alpha} \right)^{-1} H^T T, & \text{if } N \geq L \\[2mm] H^T \left( H H^T + \dfrac{I_{N \times N}}{\alpha} \right)^{-1} T, & \text{if } N < L \end{cases} \qquad (8)$$
With the optimal output weight matrix $\beta^*$ obtained, the predicted label vector for a new test sample $z$ is computed as:
$$\text{label}(z) = h(z)\beta^* \qquad (9)$$
where $h(z)$ is the hidden layer output for the test sample $z$; the predicted class is the index of the largest entry of $\text{label}(z)$.
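The following self-contained NumPy sketch re-implements the ELM training and prediction rules of Equations (6)–(9); the sigmoid activation and the default values of L and alpha are illustrative assumptions:

```python
import numpy as np

def elm_train(X, T, L=1000, alpha=10.0, seed=0):
    """ELM training via Eqs. (6)-(8). X: N x d feature matrix (rows are
    samples); T: N x C one-hot label matrix; L: number of hidden neurons;
    alpha: regularization trade-off parameter."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))   # random input weights, fixed
    b = rng.standard_normal(L)                 # random biases, never retrained
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # sigmoid hidden output, Eq. (7)
    N = X.shape[0]
    if N >= L:                                 # closed-form solution, Eq. (8)
        beta = np.linalg.solve(H.T @ H + np.eye(L) / alpha, H.T @ T)
    else:
        beta = H.T @ np.linalg.solve(H @ H.T + np.eye(N) / alpha, T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Eq. (9): the predicted class is the largest entry of h(z) @ beta."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)
```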

3. The Proposed Neighboring Discriminant Component Analysis Model: Formulation and Optimization

The remotely observed asteroid spectral data usually contain noise and outliers, which mix different categories of asteroids and make them less separable. In addition, learning with outliers easily causes overfitting, which decreases the generalization ability of machine learning models on testing samples. Thus, the key problem is to distinguish the outliers, select the most valuable samples for learning the low-dimensional feature subspace and preserve the key discriminative data knowledge for different classes of asteroids.
To this end, the idea of neighboring learning is introduced to find a neighboring group of valuable samples from all the training samples, as well as from the samples in each class, so that outliers and noisy samples are excluded from dimension reduction learning and the generalization ability of the model is enhanced. As shown in Figure 2, the normalized asteroid spectral data are first input as in (a). Second, (b) finds the neighboring samples in each asteroid class to characterize the neighboring within-class and between-class properties of the data. Meanwhile, (c) finds the neighboring samples among all the samples for neighboring principal components, preserving the most valuable data information. With the basic principles of (b) and (c), a clearer class boundary can be found, as shown in (d), which alleviates the overfitting problem caused by outliers and noisy samples and enhances the neighboring and discriminative information of the data for efficient spectral feature learning. To achieve this goal, the neighboring between-class and within-class scatter matrices need to be calculated in order to characterize the neighboring discriminative properties of the observed asteroid spectra.
Neighboring between-class scatter matrix $S_{Nb}$ computation: First, calculate the global centroid $m_b = \frac{1}{N} \sum_{i=1}^{N} x_i$ of all the samples in the training dataset $X$ and find the $N_b = R_b \cdot N$ samples nearest to $m_b$, where $R_b$ ($0 < R_b < 1$) is the between-class neighboring ratio. This yields the $N_b$ global neighboring samples $X_b$, distributed over the classes as $[N_{b1}, N_{b2}, \ldots, N_{bc}, \ldots, N_{bC}]$, with $N_{bc}$ the number of neighboring samples from the $c$-th class used for computing the neighboring between-class scatter matrix. Second, compute the local centroid $m_{bc} = \frac{1}{N_{bc}} \sum_{j=1}^{N_{bc}} x_{bc}^j$ of the $c$-th class, where $x_{bc}^j$ is the $j$-th sample of the $c$-th class among the neighboring samples $X_b$. Finally, the neighboring between-class scatter matrix is calculated as follows:
$$S_{Nb} = \sum_{c=1}^{C} N_{bc} (m_{bc} - m_b)(m_{bc} - m_b)^T \qquad (10)$$
At the same time, the $N_b$ global neighboring samples are used to calculate the covariance matrix as below:
$$\sum_{i=1}^{N_b} x_b^i (x_b^i)^T \qquad (11)$$
Neighboring within-class scatter matrix $S_{Nw}$ computation: First, calculate the basic local centroid $m_{wc} = \frac{1}{N_c} \sum_{i=1}^{N_c} x_c^i$ for each class, where $x_c^i$ is the $i$-th sample of the $c$-th class in $X$, and then find the group of $N_{wc} = R_w \cdot N_c$ samples nearest to $m_{wc}$ in the $c$-th class, where $R_w$ ($0 < R_w < 1$) is the within-class neighboring ratio. Second, refine the local centroid of each class using the samples in the obtained neighboring group $X_{wc}$. Finally, compute the neighboring within-class scatter matrix as follows:
$$S_{Nw} = \sum_{c=1}^{C} \sum_{i=1}^{N_{wc}} (x_{wc}^i - m_{wc})(x_{wc}^i - m_{wc})^T \qquad (12)$$
where $m_{wc} = \frac{1}{N_{wc}} \sum_{i=1}^{N_{wc}} x_{wc}^i$ is the refined centroid of each class based on the neighboring sample group $X_{wc}$, and $x_{wc}^i$ is the $i$-th sample of the $c$-th class in the neighboring group. By comprehensively considering Equations (10)–(12) in a dimension-reduced subspace, the following optimization problem is formulated:
$$\max_P \mathrm{Tr}(P^T S_{Nb} P) + \gamma \mathrm{Tr}(P^T X_b X_b^T P) - \mu \mathrm{Tr}(P^T S_{Nw} P) \qquad (13)$$
The details of deriving Equation (13) from Equations (10)–(12) are given in Appendix A. In Equation (13), $\gamma$ and $\mu$ are tradeoff parameters that balance the corresponding components of the objective function. One can observe that, in the subspace formed by $P$, the goals of neighboring between-class scatter maximization, neighboring within-class scatter minimization and neighboring principal component preservation are achieved simultaneously. Accordingly, the side effects of outliers and noisy samples are suppressed to the largest extent, and the global and local neighboring discriminative structures and principal components are enhanced and preserved by the neighboring learning mechanism. Furthermore, optimization problem (13) can be transformed into the following one by introducing an equality constraint [49]:
$$\max_P \mathrm{Tr}(P^T S_{Nb} P) \quad \text{s.t.} \quad \mu \mathrm{Tr}(P^T S_{Nw} P) - \gamma \mathrm{Tr}(P^T X_b X_b^T P) = \varpi \qquad (14)$$
where $\varpi$ is a constant used to ensure a unique solution for model (13). The objective function of model (14) can be formulated as the following unconstrained one by introducing the Lagrange multiplier $\lambda$:
$$\mathcal{L}(P, \lambda) = \mathrm{Tr}(P^T S_{Nb} P) - \lambda \left( \mu \mathrm{Tr}(P^T S_{Nw} P) - \gamma \mathrm{Tr}(P^T X_b X_b^T P) - \varpi \right) \qquad (15)$$
Then, the partial derivative of the objective function (15) with respect to $P$ is calculated and set to zero, resulting in the following equations:
$$\frac{\partial \mathcal{L}(P, \lambda)}{\partial P} = S_{Nb} P - \lambda (\mu S_{Nw} P - \gamma X_b X_b^T P) = 0 \qquad (16)$$
$$S_{Nb} P = \lambda (\mu S_{Nw} - \gamma X_b X_b^T) P \qquad (17)$$
from which the projection matrix $P = [p_1, p_2, \ldots, p_d]$ can be acquired; it is composed of the eigenvectors corresponding to the $d$ largest eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_d$ of the eigenvalue decomposition problem described below:
$$(\mu S_{Nw} - \gamma X_b X_b^T)^{-1} S_{Nb} \, p = \lambda p \qquad (18)$$
Once the optimal projection matrix $P$ is calculated, the training data are projected into the subspace using $P$ to acquire the low-dimensional discriminative features of the observed spectral data. A classifier model is then trained on the dimension-reduced training data. For testing, an asteroid spectral sample with an unknown label is first transformed into the subspace by the optimal projection matrix $P$ and then classified by the trained classifier model.
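Putting Section 3 together, the following NumPy sketch implements the NDCA training procedure as described in Equations (10)–(18). The use of Euclidean distance to select the neighboring samples and the default parameter values are assumptions on our part; this is a reading of the paper, not the authors' released code:

```python
import numpy as np

def ndca_fit(X, labels, d, Rb=0.9, Rw=0.9, gamma=1.0, mu=1.0):
    """X: D x N spectra (columns are samples); labels: length-N integer
    array of class ids; returns the D x d projection matrix P."""
    D, N = X.shape
    # --- neighboring between-class scatter S_Nb, Eq. (10) ---
    m_b = X.mean(axis=1, keepdims=True)                  # global centroid
    dist = np.linalg.norm(X - m_b, axis=0)
    nb_idx = np.argsort(dist)[: int(Rb * N)]             # Nb samples nearest m_b
    Xb, lb = X[:, nb_idx], labels[nb_idx]
    S_Nb = np.zeros((D, D))
    for c in np.unique(lb):
        Xbc = Xb[:, lb == c]
        diff = Xbc.mean(axis=1, keepdims=True) - m_b     # local centroid m_bc
        S_Nb += Xbc.shape[1] * diff @ diff.T
    # --- neighboring within-class scatter S_Nw, Eq. (12) ---
    S_Nw = np.zeros((D, D))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        m_wc = Xc.mean(axis=1, keepdims=True)            # basic local centroid
        dc = np.linalg.norm(Xc - m_wc, axis=0)
        keep = np.argsort(dc)[: max(1, int(Rw * Xc.shape[1]))]
        Xwc = Xc[:, keep]
        m_wc = Xwc.mean(axis=1, keepdims=True)           # refined centroid
        S_Nw += (Xwc - m_wc) @ (Xwc - m_wc).T
    # --- eigenproblem of Eq. (18); pinv guards against singularity ---
    A = np.linalg.pinv(mu * S_Nw - gamma * (Xb @ Xb.T)) @ S_Nb
    w, V = np.linalg.eig(A)
    order = np.argsort(-w.real)
    return V[:, order[:d]].real                          # projection matrix P
```

A test spectrum z is then mapped to P.T @ z and passed to whatever classifier was trained on the projected training data, exactly as described above.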

4. Experiments

4.1. Preprocessing for the Asteroid Spectral Data

As shown in Table 3, a subset of the samples described in Table 1 was used in this study. Data preprocessing was performed to preliminarily reduce the influence of noise and ease classification. First, the original spectral data were filtered and smoothed using a data filtering method such as the moving average filter. Second, the discrete spectral measurements were fitted with a high-order polynomial. Third, the fitted spectral curves within the wavelength range from 0.45 to 2.45 μm were sampled with a fixed step interval. Several examples of the original, smoothed and fitted spectra for different kinds of asteroids are shown in Figure 3, Figure 4 and Figure 5, from which one can see that the abnormal noise in some spectral bands was suppressed to a certain extent.
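The three preprocessing steps can be sketched as follows; the moving-average window length and the polynomial order are illustrative guesses, since the text does not specify them:

```python
import numpy as np

def preprocess_spectrum(wavelengths, reflectance, window=5, order=8):
    """Smooth, polynomial-fit and resample one asteroid spectrum.
    wavelengths, reflectance: 1-D arrays of the raw measurements (um)."""
    # 1) moving average filter to suppress band-wise noise
    kernel = np.ones(window) / window
    smoothed = np.convolve(reflectance, kernel, mode="same")
    # 2) high-order polynomial fit of the smoothed discrete measurements
    coeffs = np.polyfit(wavelengths, smoothed, deg=order)
    # 3) resample the fitted curve from 0.45 to 2.45 um with a 0.05 um step,
    #    giving the 41 measurements per spectrum used in Section 4.2
    grid = np.arange(0.45, 2.45 + 1e-9, 0.05)
    return np.polyval(coeffs, grid)
```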

4.2. Experimental Setup and Results

As previously mentioned, the smoothed asteroid spectral curves were fitted with a high-order polynomial, which was further sampled in the wavelength region from 0.45 to 2.45 μm with a step interval of 0.05 μm, yielding 41 measurements for each asteroid spectrum. To validate the effectiveness of the proposed method, the data from each class were first divided into five approximately equal groups, as shown in Figure 6. Afterwards, the groups from the different classes were merged into 5 folds, so that each fold contains one group of samples from every class. The data partition of the 5 folds is illustrated in Table 4. Then, the 5-fold cross validation (CV) strategy was adopted for the performance evaluation of the different methods. Specifically, four folds of samples were selected as the training dataset and the remaining fold was used for testing; thus, five experiments were carried out. A detailed description of the five experimental settings is given in Table 5, and the individual and average classification accuracies of the different methods over the five experiments are reported. All the experiments were conducted under the same settings and computing platform, so a fair comparison between the different methods is guaranteed. The proposed NDCA model was compared with several representative subspace learning methods, including PCA, LDA, LPP and LPDP. Moreover, the sampled raw asteroid spectral data without feature learning were also included for comparison. In addition, the baseline classifier models, i.e., the nearest neighbor (NN), SVM and ELM, were adopted in the experiments for the classification of the asteroid features. A sketch of this evaluation protocol is given below.
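Grouping each class into five parts and merging them fold by fold is, in effect, a stratified 5-fold split. A minimal sketch of the protocol with scikit-learn (an assumed tool; the authors do not state their software stack), reusing the ndca_fit helper sketched in Section 3 and a 1-NN classifier:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def evaluate_ndca_nn(features, labels, d=9):
    """features: N x 41 preprocessed spectra; labels: N integer class ids."""
    accs = []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train, test in skf.split(features, labels):
        # learn the projection on the four training folds only
        P = ndca_fit(features[train].T, labels[train], d)   # Section 3 sketch
        Ytr, Yte = features[train] @ P, features[test] @ P  # project to subspace
        clf = KNeighborsClassifier(n_neighbors=1).fit(Ytr, labels[train])
        accs.append(clf.score(Yte, labels[test]))           # per-fold accuracy
    return float(np.mean(accs))
```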
The performance of the different dimension reduction methods under a gradually increasing reduced dimension d (from 2 to 41 with an interval of 1) with the baseline classifier models, i.e., NN, SVM and ELM, is illustrated in Figure 7, Figure 8 and Figure 9. In addition, the highest classification accuracy of each comparative method over the varying dimensions in each experiment is reported in Table 6, Table 7 and Table 8, respectively. Based on the experimental results, all the comparative methods tend to achieve improved classification performance as the feature dimension grows. For the proposed NDCA method, the classification accuracy first increases, then decreases and finally tends to be stable. This could be because too many feature dimensions introduce redundant and harmful information that decreases classification performance. It is also notable that the classification performance of LPP first stabilizes and then increases when the feature dimension grows to about 33 in the cases of SVM and ELM, whereas with the NN classifier the performance of LPP first increases and finally tends to be stable. Even though LPP only achieves comparable classification performance at a relatively high dimensionality, the best classification accuracies of LPP in combination with NN, SVM and ELM still reach 89.7565%, 92.8158% and 94.4711%, respectively.
Generally speaking, the proposed NDCA method yields the best classification accuracies of 94.1971%, 93.6377% and 95.1895% with the different classifiers. Table 9 further summarizes the performance improvement of the proposed NDCA method over the different comparative methods with the different classifiers. Specifically, the maximal performance improvement of the NDCA method is 4.9886%, obtained over the raw features and the PCA method with NN as the classifier, and the minimal performance improvement is 0.4521%, obtained over LPDP plus ELM. The average performance improvement over all the experimental settings is 2.045%. Therefore, the effectiveness and superiority of the proposed NDCA method can be clearly observed from the experimental verifications.
In addition, the results show that the raw data without feature learning achieve the worst classification performance among all the comparative methods. In contrast, the proposed NDCA model achieves the highest classification accuracy in combination with the different classifier models. Moreover, it should be noted that the highest accuracy is achieved when the feature dimension is around nine. Thus, the optimal reduced dimension d can be searched around the total number of categories in the asteroid spectral dataset.
Furthermore, the scatter points of the first two dimensions acquired by the different methods are visualized in Figure 10 to intuitively assess the low-dimensional feature learning performance. The scatter points obtained by the comparative methods suffer from serious data mixture between different classes, especially the “K”, “L” and “Q” classes, which results in lower classification performance. From Figure 10e, it can be observed that the scatter points derived by the proposed NDCA model show better within-class compactness and between-class separation, with relatively clearer category boundaries. Accordingly, the spectral characteristics within each class and the discrimination between different classes of asteroids are fully explored and enhanced by the proposed NDCA model. In combination with off-the-shelf classifier models, the class boundaries between different kinds of asteroid spectral data can be easily found, which results in promising generalization and classification performance.

4.3. Analysis for NDCA Parameters

Apart from the dimensionality d of the derived feature subspace, the proposed NDCA model has several other key parameters, including the between-class neighboring ratio Rb, the within-class neighboring ratio Rw and the balance parameters γ and μ in model formulation (13). Obviously, different parameter settings result in fluctuating performance, so parameter sensitivity analyses were conducted to show how the classification performance varies with these parameters. Specifically, the four parameters were divided into two groups, i.e., (γ, μ) and (Rw, Rb). Among them, γ and μ were selected from the candidate set {10^g, g = −4, −3, −2, −1, 0, 1, 2, 3, 4}, while Rw and Rb were selected from the candidate set {0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1}. As shown in Figure 11, Figure 12 and Figure 13, the average classification performance surfaces in sub-figures (a) are smooth and stable over a wide parameter range, which means that the classification is not very sensitive to the settings of the parameter pair (γ, μ). By contrast, the classification performance changes more sharply with the variations of the parameter pair (Rw, Rb).

4.4. Analysis for ELM Classifier Parameters

The preceding experiments show that the proposed NDCA method generally achieves promising and higher classification accuracy in combination with ELM. As shown in formulation (6), ELM has two key hyper-parameters, i.e., the number of hidden neurons L and the balance parameter α. Figure 14 shows how the classification performance changes under different settings of L and α. In general, as the number of hidden neurons increases, the classification accuracy first increases and then tends to be stable; in the experiments, L was empirically set to around 9000. As for the trade-off parameter α, the classification accuracy first improves as α increases from 10^−5 to 10 and then degrades as α increases from 10 to 10^5; thus α can be set to around 10, for which promising performance can be expected. A simple sweep over such a grid is sketched below.
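One way to reproduce this kind of sensitivity study is a grid sweep over (L, α), reusing the elm_train/elm_predict helpers sketched in Section 2.3. The variables Ytr, Ttr, Yte and labels_te are assumed to come from one train/test split of the cross-validation sketch above, with Ttr the one-hot encoding of the training labels; the grid values are illustrative:

```python
import numpy as np

# Sweep the ELM hyper-parameters on one train/test split; the grids below
# mirror the ranges discussed in the text (L around 9000, alpha in 10^-5..10^5).
best_acc, best_setting = 0.0, None
for L in [1000, 3000, 5000, 7000, 9000, 11000]:
    for alpha in [10.0 ** g for g in range(-5, 6)]:
        W, b, beta = elm_train(Ytr, Ttr, L=L, alpha=alpha)
        acc = np.mean(elm_predict(Yte, W, b, beta) == labels_te)
        if acc > best_acc:
            best_acc, best_setting = acc, (L, alpha)
print(f"best accuracy {best_acc:.4f} at (L, alpha) = {best_setting}")
```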
From the above experimental results, we observe the following:
(1)
The benefits of feature learning for asteroid spectrum classification. In the experiments reported in Table 6, Table 7 and Table 8, the original raw spectral data without feature learning were fed directly into the classifier models, i.e., NN, SVM and ELM. The average classification accuracies achieved by NN, SVM and ELM were 89.2085%, 91.7047% and 92.6963%, respectively, which were generally the worst among all the comparative methods. In contrast, the classification performance achieved by the same classifier models after feature learning improved; for example, LPP plus NN, SVM and ELM achieved improved classification accuracies of 89.7565%, 92.8158% and 94.4711%, respectively. These results verify the benefits of feature learning for improving asteroid spectral classification accuracy.
(2)
The advantages of the proposed NDCA model. In comparison with several representative low-dimensional feature learning methods, the proposed NDCA model generally achieves better classification performance in combination with different classifier models. Specifically, NDCA plus NN, SVM and ELM achieve the highest classification accuracies of 94.1971%, 93.6377% and 95.1895%, respectively. The improvements are mainly due to two aspects. First, the NDCA model is a supervised dimension reduction method and inherits the merits of existing methods; it fully utilizes label knowledge to find the key category-related information of the spectral data for discriminative asteroid spectral feature learning and classification. Second, the neighboring learning methodology significantly reduces the side effects of outliers and noisy samples and alleviates the overfitting problem, which enhances the robustness of the learnt low-dimensional features and finally improves the generalization ability and classification performance of the proposed model in testing.
(3)
The superiority of ELM. Three baseline classifier models, NN, SVM and ELM, were used in the experiments. In particular, the best results are obtained by NDCA plus ELM, with a classification accuracy of about 95.19%, which is generally superior to the other classifier models. To the best of our knowledge, this work is the first attempt to apply ELM to asteroid spectrum classification, and very competitive performance has been achieved, which can provide new application scenarios and perspectives for the ELM community.
(4)
Future work. First, future work will consider employing feature selection methods to study asteroid spectral characteristics. Distinct from feature learning/extraction methods, which adopt the idea of data transformation, feature/band selection methods aim to automatically select a small subset of representative spectral bands, removing spectral redundancy while preserving the significant spectral knowledge. Since feature selection is performed in the original observation space, the selected bands have clearer physical meanings and better interpretability. As a result, feature/band selection is an important technique for spectral dimensionality reduction with room for further improvement. Second, the visualization in Figure 10 of the scatter points of the first two components acquired by the different methods shows that some classes of asteroid spectra with limited training samples are seriously mixed and overlapped. One possible reason is that the numbers of training samples from the different classes are unbalanced; for example, the ‘S’ class has 199 samples, while the ‘A’ class has only six. When classifying data with a complex class distribution, a regular learning algorithm has a natural tendency to favor the majority class, since it assumes a balanced class distribution or equal misclassification costs. As a result, the sample imbalance problem results in learning bias, and the generalization ability of the obtained model is thus restricted. In future work, it will be important to handle the data imbalance problem and establish a balanced data distribution through sampling or algorithmic methods, which can further improve the accuracy of asteroid spectral data analysis.

5. Conclusions

This paper has introduced a novel supervised NDCA learning model for asteroid spectral feature learning and classification. The key idea is to distinguish the outliers and noisy samples in order to alleviate the overfitting problem and to find the significant category-related features such that the classification performance can be improved. These goals are technically achieved by simultaneously maximizing the neighboring between-class scatter, minimizing the neighboring within-class scatter and preserving the neighboring principal components. Experimental results on reflectance spectra measured across the wavelength range from 0.45 to 2.45 μm show the effectiveness of the proposed model in combination with different baseline classifier models, including NN, SVM and ELM; the highest classification accuracy is achieved with the ELM classifier, which also verifies the superiority of ELM for multiclass classification problems.

Author Contributions

All the authors made significant contributions to the study. T.G. and X.-P.L. conceived and designed the global structure and methodology of the manuscript; T.G. analyzed the data and wrote the manuscript. Y.-X.Z. and K.Y. provided some valuable advice and proofread the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by The Science and Technology Development Fund, Macau SAR (No. 0073/2019/A2). Tan Guo is also funded by the Macao Young Scholars Program under Grant AM2020008, the Natural Science Foundation of Chongqing under Grant cstc2020jcyj-msxmX0636, the Key Scientific and Technological Innovation Project for the “Chengdu-Chongqing Double City Economic Circle” under Grant KJCXZD2020025, the National Key Research and Development Program of China under Grant 2019YFB2102001 and the 2019 Outstanding Chinese and Foreign Youth Exchange Program of the China Association of Science and Technology (CAST).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the author.

Acknowledgments

The authors would like to thank Francesca E. DeMeo from MIT for providing the asteroid spectral dataset.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Equation (13) gives the model formulation of the proposed NDCA method and contains three key components, i.e., $\mathrm{Tr}(P^T S_{Nb} P)$, $\mathrm{Tr}(P^T X_b X_b^T P)$ and $\mathrm{Tr}(P^T S_{Nw} P)$. The three components can be derived from Equations (10)–(12), respectively. The details are as follows.
1. Deriving $\mathrm{Tr}(P^T S_{Nb} P)$ from Equation (10).
Denote $m_{bc}$ and $m_b$ as the local and global centroids in the original space. Similarly, $\bar{m}_{bc}$ and $\bar{m}_b$ denote the local and global centroids in the feature space, which are calculated as $\bar{m}_{bc} = P^T m_{bc}$ and $\bar{m}_b = P^T m_b$ using the subspace projection matrix $P$. The neighboring between-class scatter matrix in the feature space is described below:
$$\sum_{c=1}^{C} N_{bc} (\bar{m}_{bc} - \bar{m}_b)(\bar{m}_{bc} - \bar{m}_b)^T \qquad (A1)$$
Since $\bar{m}_{bc} = P^T m_{bc}$ and $\bar{m}_b = P^T m_b$, the following two formulations can be obtained from Equation (A1):
$$\sum_{c=1}^{C} N_{bc} (P^T m_{bc} - P^T m_b)(P^T m_{bc} - P^T m_b)^T \qquad (A2)$$
$$P^T \left[ \sum_{c=1}^{C} N_{bc} (m_{bc} - m_b)(m_{bc} - m_b)^T \right] P \qquad (A3)$$
It can easily be observed that $\sum_{c=1}^{C} N_{bc} (m_{bc} - m_b)(m_{bc} - m_b)^T$ is the neighboring between-class scatter matrix in the original space, which is exactly Equation (10):
$$S_{Nb} = \sum_{c=1}^{C} N_{bc} (m_{bc} - m_b)(m_{bc} - m_b)^T \qquad (A4)$$
Thus, Equation (A3) can be rewritten as below:
$$P^T S_{Nb} P \qquad (A5)$$
Furthermore, the trace of Equation (A5) is used for the optimization of the subspace projection matrix $P$, resulting in the following formulation:
$$\mathrm{Tr}(P^T S_{Nb} P) \qquad (A6)$$
Following the derivations from Equations (A1)–(A6), the component $\mathrm{Tr}(P^T S_{Nb} P)$ in Equation (13) is obtained from Equation (10).
2. Deriving $\mathrm{Tr}(P^T X_b X_b^T P)$ from Equation (11).
Denote $\bar{x}_b^i$ as the low-dimensional feature of $x_b^i$ projected by $P$, i.e., $\bar{x}_b^i = P^T x_b^i$. Following the idea of PCA, the variance of the projected data is maximized:
$$\sum_{i=1}^{N_b} \bar{x}_b^i (\bar{x}_b^i)^T \qquad (A7)$$
$$\sum_{i=1}^{N_b} (P^T x_b^i)(P^T x_b^i)^T \qquad (A8)$$
Equation (A8) can be transformed into the following form:
$$P^T \left[ \sum_{i=1}^{N_b} x_b^i (x_b^i)^T \right] P \qquad (A9)$$
$\sum_{i=1}^{N_b} x_b^i (x_b^i)^T$ is the covariance matrix of the dataset $X_b$ shown in Equation (11) and can be expressed as $X_b X_b^T$. Therefore, Equation (A9) can be written as:
$$P^T X_b X_b^T P \qquad (A10)$$
Furthermore, the trace of Equation (A10) is used for optimization:
$$\mathrm{Tr}(P^T X_b X_b^T P) \qquad (A11)$$
In this way, the component $\mathrm{Tr}(P^T X_b X_b^T P)$ in Equation (13) is obtained from Equation (11) via the derivations in Equations (A7)–(A11).
3. Deriving $\mathrm{Tr}(P^T S_{Nw} P)$ from Equation (12).
Denote $x_{wc}^i$ and $m_{wc}$ as the samples and within-class centroid of the $c$-th class in the original space, and $\bar{x}_{wc}^i$ and $\bar{m}_{wc}$ as their counterparts in the feature space, calculated as $\bar{x}_{wc}^i = P^T x_{wc}^i$ and $\bar{m}_{wc} = P^T m_{wc}$ using the projection matrix $P$. The neighboring within-class scatter in the feature space is described below:
$$\sum_{c=1}^{C} \sum_{i=1}^{N_{wc}} (\bar{x}_{wc}^i - \bar{m}_{wc})(\bar{x}_{wc}^i - \bar{m}_{wc})^T \qquad (A12)$$
Substituting $\bar{x}_{wc}^i = P^T x_{wc}^i$ and $\bar{m}_{wc} = P^T m_{wc}$ into Equation (A12), the following two formulations are successively obtained:
$$\sum_{c=1}^{C} \sum_{i=1}^{N_{wc}} (P^T x_{wc}^i - P^T m_{wc})(P^T x_{wc}^i - P^T m_{wc})^T \qquad (A13)$$
$$P^T \left[ \sum_{c=1}^{C} \sum_{i=1}^{N_{wc}} (x_{wc}^i - m_{wc})(x_{wc}^i - m_{wc})^T \right] P \qquad (A14)$$
It can be observed that $\sum_{c=1}^{C} \sum_{i=1}^{N_{wc}} (x_{wc}^i - m_{wc})(x_{wc}^i - m_{wc})^T = S_{Nw}$ is the neighboring within-class scatter matrix in the original space, i.e., Equation (12). Thus, Equation (A14) can be rewritten as:
$$P^T S_{Nw} P \qquad (A15)$$
Similarly, the trace of Equation (A15) is used for the optimization of the subspace projection matrix $P$:
$$\mathrm{Tr}(P^T S_{Nw} P) \qquad (A16)$$
According to the derivations in Equations (A12)–(A16), the component $\mathrm{Tr}(P^T S_{Nw} P)$ in Equation (13) is acquired from Equation (12). In summary, Equation (13) is obtained from Equations (10)–(12) by the above procedures.

References

  1. Zhang, Y.; Jiang, J.; Zhang, G. Compression of remotely sensed astronomical image using wavelet-based compressed sensing in deep space exploration. Remote Sens. 2021, 13, 288.
  2. Wu, W.; Liu, W.; Qiao, D.; Jie, D. Investigation on the development of deep space exploration. Sci. China Technol. Sci. 2012, 55, 1086–1091.
  3. Dorsky, L.I. Trends in instrument systems for deep space exploration. IEEE Aerosp. Electron. Syst. Mag. 2001, 16, 3–12.
  4. Seager, S.; Bains, W. The search for signs of life on exoplanets at the interface of chemistry and planetary science. Sci. Adv. 2015, 1, e1500047.
  5. Cole, G.H. Planetary Science: The Science of Planets around Stars; Taylor & Francis: Abingdon, UK, 2002.
  6. Keil, K. Thermal alteration of asteroids: Evidence from meteorites. Planet. Space Sci. 2000, 48, 887–903.
  7. Carry, B. Density of asteroids. Planet. Space Sci. 2012, 73, 98–118.
  8. Lu, X.P.; Jewitt, D. Dependence of light curves on phase angle and asteroid shape. Astron. J. 2019, 158, 220.
  9. Bus, S.J.; Binzel, R.P. Phase II of the small main-belt asteroid spectroscopic survey: A feature-based taxonomy. Icarus 2002, 158, 146–177.
  10. Xu, S.; Binzel, R.P.; Burbine, T.H.; Bus, S.J. Small main-belt asteroid spectroscopic survey. Bull. Am. Astron. Soc. 1993, 25, 1135.
  11. Howell, E.S.; Merényi, E.; Lebofsky, L.A. Classification of asteroid spectra using a neural network. J. Geophys. Res. 1994, 99, 10847–10865.
  12. Binzel, R.P.; Harris, A.W.; Bus, S.J.; Burbine, T.H. Spectral properties of near-Earth objects: Palomar and IRTF results for 48 objects including spacecraft targets (9969) Braille and (10302) 1989 ML. Icarus 2001, 151, 139–149.
  13. Vilas, F.; McFadden, L.A. CCD reflectance spectra of selected asteroids: I. Presentation and data analysis considerations. Icarus 1992, 100, 85–94.
  14. Zellner, B.; Tholen, D.J.; Tedesco, E.F. The eight-color asteroid survey: Results for 589 minor planets. Icarus 1985, 61, 355–416.
  15. Xu, S.; Binzel, R.P.; Burbine, T.H.; Bus, S.J. Small main-belt asteroid spectroscopic survey: Initial results. Icarus 1995, 115, 1–35.
  16. Burbine, T.H.; Binzel, R.P. Small main-belt asteroid spectroscopic survey in the near-infrared. Icarus 2002, 159, 468–499.
  17. Bus, S.J.; Binzel, R.P. Phase II of the small main-belt asteroid spectroscopic survey: The observations. Icarus 2002, 158, 106–145.
  18. Bus, S.J. Compositional Structure in the Asteroid Belt: Results of a Spectroscopic Survey. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1999.
  19. Tholen, D.J. Asteroid Taxonomy from Cluster Analysis of Photometry. Ph.D. Thesis, University of Arizona, Tucson, AZ, USA, 1984.
  20. DeMeo, F.E.; Binzel, R.P.; Slivan, S.M.; Bus, S.J. An extension of the Bus asteroid taxonomy into the near-infrared. Icarus 2009, 202, 160–180.
  21. Xu, S. CCD Photometry and Spectroscopy of Small Main-Belt Asteroids. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1994.
  22. Imani, M.; Ghassemian, H. Band clustering-based feature extraction for classification of hyperspectral images using limited training samples. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1325–1329.
  23. Taşkın, G.; Kaya, H.; Bruzzone, L. Feature selection based on high dimensional model representation for hyperspectral images. IEEE Trans. Image Process. 2017, 26, 2918–2928.
  24. Wood, X.H.; Kuiper, G.P. Photometric studies of asteroids. Astrophys. J. 1963, 137, 1279.
  25. Gaffey, M.J.; Burbine, T.H.; Binzel, R.P. Asteroid spectroscopy: Progress and perspectives. Meteoritics 1993, 28, 161–187.
  26. Sun, W.; Du, Q. Hyperspectral band selection: A review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139.
  27. Herrmann, F.J.; Friedlander, M.P.; Yilmaz, O. Fighting the curse of dimensionality: Compressive sensing in exploration seismology. IEEE Signal Process. Mag. 2012, 29, 88–100.
  28. Zhang, L.; Zhang, L.; Tao, D.; Du, B. Hyperspectral remote sensing image subpixel target detection based on supervised metric learning. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4955–4965.
  29. Dong, Y.; Liang, T.; Zhang, Y.; Du, B. Spectral-spatial weighted kernel manifold embedded distribution alignment for remote sensing image classification. IEEE Trans. Cybern. 2021, 51, 3185–3197.
  30. Guo, T.; Luo, F.; Zhang, L.; Zhang, B.; Tan, X.; Zhou, X. Learning structurally incoherent background and target dictionaries for hyperspectral target detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3521–3533.
  31. Rodger, A.; Laukamp, C.; Fabris, A. Feature extraction and clustering of spectrally measured drill core to identify mineral assemblages and potential spatial boundaries. Minerals 2021, 11, 136.
  32. Luo, F.; Zhang, L.; Zhou, X.; Guo, T.; Cheng, Y.; Yin, T. Sparse-adaptive hypergraph discriminant analysis for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1082–1086.
  33. Luo, F.; Zhang, L.; Du, B.; Zhang, L. Dimensionality reduction with enhanced hybrid-graph discriminant learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5336–5353.
  34. Guo, T.; Luo, F.; Zhang, L.; Tan, X.; Liu, J.; Zhou, X. Target detection in hyperspectral imagery via sparse and dense hybrid representation. IEEE Geosci. Remote Sens. Lett. 2020, 17, 716–720.
  35. Luo, F.; Du, B.; Zhang, L.; Zhang, L.; Tao, D. Feature learning using spatial-spectral hypergraph discriminant analysis for hyperspectral image. IEEE Trans. Cybern. 2019, 49, 2406–2419.
  36. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417–441, 498–520.
  37. Fisher, R.A. The statistical utilization of multiple measurements. Ann. Hum. Genet. 1938, 8, 376–386.
  38. He, X.; Niyogi, P. Locality preserving projections. Adv. Neural Inf. Process. Syst. 2004, 16, 153–160.
  39. Gui, J.; Wang, C.; Zhu, L. Locality preserving discriminant projections. In Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence, Proceedings of the International Conference on Intelligent Computing, Ulsan, Korea, 16–19 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 566–572.
  40. Zhang, L.; Wang, X.; Huang, G.B.; Liu, T.; Tan, X. Taste recognition in E-tongue using local discriminant preservation projection. IEEE Trans. Cybern. 2018, 49, 947–960.
  41. Martinez, A.M.; Kak, A.C. PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 228–233.
  42. He, X.; Yan, S.; Hu, Y.; Niyogi, P.; Zhang, H.J. Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 328–340.
  43. Yan, S.; Xu, D.; Zhang, B.; Zhang, H.; Yang, Q.; Lin, S. Graph embedding and extensions: A general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51.
  44. Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
  45. Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B 2011, 42, 513–529.
  46. Zhang, L.; Zhang, D. Domain adaptation extreme learning machines for drift compensation in E-nose systems. IEEE Trans. Instrum. Meas. 2014, 64, 1790–1801.
  47. Guo, T.; Zhang, L.; Tan, X. Neuron pruning based discriminative extreme learning machine for pattern classification. Cogn. Comput. 2017, 9, 581–595.
  48. Zhang, L.; Zhang, D. Robust visual knowledge transfer via extreme learning machine based domain adaptation. IEEE Trans. Image Process. 2016, 25, 4959–4973.
  49. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: New York, NY, USA, 2009.
Figure 1. Overview of the asteroid feature learning and classification scheme.
Figure 2. Illustration of the proposed Neighboring Discriminant Component Analysis (NDCA) model.
Figure 3. The spectral preprocessing for (1) Ceres with category ‘C’. (a) Original spectra; (b) smoothed spectra; (c) fitted spectra spanning the wavelength range from 0.45 to 2.45 μm.
Figure 4. The spectral preprocessing for (2957) Tatsuo with category ‘K’. (a) Original spectra; (b) smoothed spectra; (c) fitted spectra spanning the wavelength range from 0.45 to 2.45 μm.
Figure 5. The spectral preprocessing for (1807) Slovakia with category ‘S’. (a) Original spectra; (b) smoothed spectra; (c) fitted spectra spanning the wavelength range from 0.45 to 2.45 μm.
Figure 6. Five-fold cross validation scheme for asteroid spectral data.
Figure 7. The performance of different dimension reduction methods under different reduced dimensions using NN as the classifier.
Figure 8. The performance of different dimension reduction methods under different reduced dimensions using SVM as the classifier.
Figure 9. The performance of different dimension reduction methods under different reduced dimensions using ELM as the classifier.
Figure 10. Visualization of the scatter points of the first two components acquired by different methods. By comparison, the proposed NDCA model shows better within-class compactness and between-class separation characteristics.
Figure 11. NN plus NDCA performance under different combinations of parameters. (a) μ and γ (best: 90.60%; worst: 89.76%); (b) Rb and Rw (best: 92.81%; worst: 87.82%).
Figure 12. SVM plus NDCA performance under different combinations of parameters. (a) μ and γ (best: 91.7009%; worst: 90.8676%); (b) Rb and Rw (best: 91.9787%; worst: 87.2679%).
Figure 13. ELM plus NDCA performance under different combinations of parameters. (a) μ and γ (best: 90.0894%; worst: 88.4365%); (b) Rb and Rw (best: 90.3676%; worst: 87.2660%).
Figure 14. Classification performance variations of NDCA plus ELM under different settings of L and α.
Table 1. Description of the asteroid spectral datasets for 371 asteroids with 24 classes.

Class       A    B    C    Cb   Cg   Cgh  Ch   D
# samples   6    4    13   3    1    10   18   16
Class       K    L    O    Q    R    S    Sa   Sq
# samples   16   22   1    8    1    144  2    29
Class       Sr   Sv   T    V    X    Xc   Xe   Xk
# samples   22   2    4    17   4    3    7    18
Table 2. Important notations used in this paper.

Notation            Meaning                                            Notation            Meaning
P                   Subspace projection matrix                         T                   Label matrix
X                   High-dimensional dataset with dimension D          xi, xj              Data points with indices i and j
Y                   Lower-dimensional features of X with dimension d   N                   Number of data points
C                   Number of classes in X and Y                       Ni, i = 1, 2, …, C  Number of data points in the i-th class
α                   Balance parameter in the ELM model                 yi, yj              Lower-dimensional features for xi, xj
Table 3. Description of the asteroid spectral datasets used in the experiments.

Class       A    C    D    K    L    Q    S     V    X    Total
# samples   6    45   16   16   22   8    199   17   32   361
Table 4. Experimental data partition of the 5 folds.

Class       Fold 1   Fold 2   Fold 3   Fold 4   Fold 5   Total
‘A’         1        1        1        1        2        6
‘C’         9        9        9        9        9        45
‘D’         3        3        3        4        3        16
‘K’         4        3        3        3        3        16
‘L’         4        4        4        5        5        22
‘Q’         2        2        1        2        1        8
‘S’         40       40       40       40       39       199
‘V’         3        4        4        3        3        17
‘X’         6        6        7        6        7        32
# samples   72       72       72       73       72       361
Table 5. Experiment settings with different fold partitions.

Experiments   Training Dataset                                           Testing Dataset
Exp. 1        fold 1, fold 2, fold 3 and fold 4 (289 samples in total)   fold 5 (72 samples in total)
Exp. 2        fold 1, fold 2, fold 3 and fold 5 (288 samples in total)   fold 4 (73 samples in total)
Exp. 3        fold 1, fold 2, fold 4 and fold 5 (289 samples in total)   fold 3 (72 samples in total)
Exp. 4        fold 1, fold 3, fold 4 and fold 5 (289 samples in total)   fold 2 (72 samples in total)
Exp. 5        fold 2, fold 3, fold 4 and fold 5 (289 samples in total)   fold 1 (72 samples in total)
Table 6. Classification accuracy (%) of different dimension reduction algorithms using NN as the classifier.

Methods   Exp. 1    Exp. 2    Exp. 3    Exp. 4    Exp. 5    Average
Raw       94.4444   84.9315   87.5000   93.0556   86.1111   89.2085
PCA       94.4444   84.9315   87.5000   93.0556   86.1111   89.2085
LDA       95.8333   90.4110   88.8889   97.2222   88.8889   92.2489
LPP       90.2778   87.6712   90.2778   91.6667   88.8889   89.7565
LPDP      95.8333   90.4110   91.6667   97.2222   88.8889   92.8044
NDCA      97.2222   89.0411   93.0556   98.6111   93.0556   94.1971
Table 7. Classification accuracy (%) of different dimension reduction algorithms using SVM as the classifier.

Methods   Exp. 1    Exp. 2    Exp. 3    Exp. 4    Exp. 5    Average
Raw       94.4444   86.3014   93.0556   93.0556   91.6667   91.7047
PCA       94.4444   89.0411   93.0556   93.0556   91.6667   92.2527
LDA       94.4444   90.4110   88.8889   94.4444   91.6667   91.9711
LPP       97.2222   86.3014   90.2778   95.8333   94.4444   92.8158
LPDP      94.4444   90.4110   93.0556   94.4444   91.6667   92.8044
NDCA      94.4444   90.4110   94.4444   95.8333   93.0556   93.6377
Table 8. Classification accuracy (%) of different dimension reduction algorithms using ELM as the classifier.

Methods   Exp. 1    Exp. 2    Exp. 3    Exp. 4    Exp. 5    Average
Raw       94.8611   89.3151   92.0833   95.6944   91.5278   92.6963
PCA       95.0000   90.4110   92.0833   96.5278   92.3611   93.2766
LDA       95.4167   94.5205   91.8056   97.2222   93.4722   94.4874
LPP       95.8333   90.4110   93.0556   98.6111   94.4444   94.4711
LPDP      95.9722   94.5205   92.6389   97.2222   93.3333   94.7374
NDCA      97.7778   91.7808   93.4722   97.2222   95.6944   95.1895
Table 9. Performance improvement between different method pairs using different classifiers.

Classifiers   <Ours, Raw>   <Ours, PCA>   <Ours, LDA>   <Ours, LPP>   <Ours, LPDP>
NN            ↑ 4.9886%     ↑ 4.9886%     ↑ 1.9482%     ↑ 4.4406%     ↑ 1.3927%
SVM           ↑ 1.9330%     ↑ 1.3850%     ↑ 1.6666%     ↑ 0.8219%     ↑ 0.8333%
ELM           ↑ 2.4932%     ↑ 1.9129%     ↑ 0.7021%     ↑ 0.7184%     ↑ 0.4521%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
