Information-Theoretical Methods in Data Mining

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (20 September 2019) | Viewed by 47264

Special Issue Editor


Prof. Kenji Yamanishi
Guest Editor
Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
Interests: information-theoretic learning theory; data mining; knowledge discovery; data science; big data analysis; machine learning; minimum description length principle; model selection; anomaly detection; change detection; health care data analysis; glaucoma progression prediction

Special Issue Information

Dear Colleagues,

Data mining is a rapidly growing field, in academia and industry alike, whose aim is to analyze big data. Information-theoretical methods play a key role in it by helping to discover useful knowledge in large amounts of data. For example, probabilistic modeling of data sources based on information-theoretical tools such as maximum entropy, the minimum description length principle, rate-distortion theory, and Kolmogorov complexity has proven very effective for machine learning problems in data mining, including model selection, regression, clustering, classification, structural/relational learning, association/causality analysis, transfer learning, change/anomaly detection, stream data mining, and sparse modeling. As real data become more complex, further advanced information-theoretical methods are emerging to handle data from realistic sources such as non-i.i.d. sources, heterogeneous sources, network-type data sources, and sparse sources. Information-theoretical data mining methods have been applied successfully to a wide range of application areas, including finance, education, marketing, intelligent transportation systems, multimedia processing, health care, and network science.

This special issue specifically emphasizes research that addresses data mining problems using information-theoretical methods. It includes research on novel developments of information-theoretical methods, their application to specific data mining tasks, and new data mining problems formulated with information theory. Submissions at the boundaries of information theory, data mining, and related areas such as machine learning and network science are also welcome.

Prof. Kenji Yamanishi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Data Mining
  • Knowledge Discovery
  • Data Science
  • Machine Learning
  • Big Data
  • Information Theory
  • Minimum Description Length Principle
  • Source Coding
  • Probabilistic Modeling
  • Latent Variable Modeling
  • Network Science

Published Papers (13 papers)


Research

20 pages, 1065 KiB  
Article
An Efficient, Parallelized Algorithm for Optimal Conditional Entropy-Based Feature Selection
by Gustavo Estrela, Marco Dimas Gubitoso, Carlos Eduardo Ferreira, Junior Barrera and Marcelo S. Reis
Entropy 2020, 22(4), 492; https://0-doi-org.brum.beds.ac.uk/10.3390/e22040492 - 24 Apr 2020
Cited by 4 | Viewed by 3085
Abstract
In machine learning, feature selection is an important step in classifier design. It consists of finding a subset of features that is optimal for a given cost function. One possibility for solving feature selection is to organize all possible feature subsets into a Boolean lattice and to exploit the fact that the costs of chains in that lattice describe U-shaped curves. Minimization of such a cost function is known as the U-curve problem. Recently, a study proposed U-Curve Search (UCS), an optimal algorithm for that problem, which was successfully used for feature selection. However, despite the algorithm's optimality, the time required by UCS in computational assays was exponential in the number of features. Here, we report that this scalability issue arises from the fact that the U-curve problem is NP-hard. We then introduce Parallel U-Curve Search (PUCS), a new algorithm for the U-curve problem. In PUCS, we present a novel way to partition the search space into smaller Boolean lattices, thus rendering the algorithm highly parallelizable. We also provide computational assays with both synthetic data and machine learning datasets, in which the performance of PUCS was assessed against UCS and other gold-standard algorithms for feature selection. Full article
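As a rough, hedged sketch of the kind of cost function this line of work optimizes (not the authors' UCS or PUCS code), the snippet below estimates the empirical conditional entropy of the class given a candidate feature subset; all names and the toy data are illustrative.

```python
import numpy as np
from collections import Counter

def conditional_entropy(X, y, subset):
    """Empirical H(Y | X_S) in bits for a candidate feature subset S.

    X: (n_samples, n_features) array of discrete feature values.
    y: (n_samples,) array of class labels.
    subset: list of feature indices defining S.
    """
    n = len(y)
    Xs = X[:, list(subset)]
    h = 0.0
    for key, count in Counter(map(tuple, Xs)).items():
        mask = np.all(Xs == np.array(key), axis=1)
        label_counts = Counter(y[mask])
        h_group = -sum((c / count) * np.log2(c / count) for c in label_counts.values())
        h += (count / n) * h_group
    return h

# Toy usage: feature 0 determines the class, feature 1 is pure noise.
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
print(conditional_entropy(X, y, [0]))  # 0.0 -> fully informative subset
print(conditional_entropy(X, y, [1]))  # 1.0 -> uninformative subset
```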

24 pages, 1942 KiB  
Article
Detecting Metachanges in Data Streams from the Viewpoint of the MDL Principle
by Shintaro Fukushima and Kenji Yamanishi
Entropy 2019, 21(12), 1134; https://0-doi-org.brum.beds.ac.uk/10.3390/e21121134 - 20 Nov 2019
Cited by 3 | Viewed by 3952
Abstract
This paper addresses the issue of how we can detect changes of changes, which we call metachanges, in data streams. A metachange refers to a change in the patterns of when and how changes occur, referred to as “metachanges along time” and “metachanges along state”, respectively. Metachanges along time mean that the intervals between change points vary significantly, whereas metachanges along state mean that the magnitude of changes varies. It is practically important to detect metachanges because they may be early warning signals of important events. This paper introduces a novel notion of metachange statistics as a measure of the degree of a metachange. The key idea is to integrate metachanges along both time and state in terms of “code length” according to the minimum description length (MDL) principle. We develop an online metachange detection algorithm (MCD) based on these statistics and apply it to data streams. With synthetic datasets, we demonstrated that MCD detects metachanges earlier and more accurately than existing methods. With real datasets, we demonstrated that MCD can lead to the discovery of important events that might be overlooked by conventional change detection methods. Full article
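The following is a minimal, hedged illustration of the MDL-style building block such methods rely on (not the MCD algorithm itself): a two-part code-length score that favors splitting a stream segment when doing so shortens its description under a Gaussian model. The function names, the parameter-cost term, and the toy data are assumptions.

```python
import numpy as np

def gaussian_code_length(x):
    """Two-part code length (in nats) of a segment under a Gaussian model:
    negative maximum log-likelihood plus a (k/2) log n parameter cost."""
    n = len(x)
    var = np.var(x) + 1e-12                      # MLE variance, guarded against zero
    nll = 0.5 * n * (np.log(2 * np.pi * var) + 1.0)
    return nll + 0.5 * 2 * np.log(n)             # k = 2 parameters (mean, variance)

def change_score(x, t):
    """Code-length reduction obtained by splitting x at index t.
    Large positive values suggest a change point near t."""
    return gaussian_code_length(x) - (gaussian_code_length(x[:t]) + gaussian_code_length(x[t:]))

# Toy usage: a mean shift is planted at index 100.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
scores = [change_score(x, t) for t in range(10, 190)]
print(10 + int(np.argmax(scores)))   # close to 100
```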

21 pages, 18026 KiB  
Article
Information Theoretic Modeling of High Precision Disparity Data for Lossy Compression and Object Segmentation
by Ioan Tăbuş and Emre Can Kaya
Entropy 2019, 21(11), 1113; https://0-doi-org.brum.beds.ac.uk/10.3390/e21111113 - 13 Nov 2019
Viewed by 2674
Abstract
In this paper, we study the geometry data associated with disparity map or depth map images in order to extract easy-to-compress polynomial surface models at different bitrates, proposing an efficient mining strategy for geometry information. The segmentation, or partition of the image pixels, is viewed as a model structure selection problem, where the decisions are based on the implementable codelength of the model, akin to minimum description length for lossy representations. The intended usage of the extracted disparity map is, first, to provide the decoder with the geometry information at a very small fraction of what is required for a lossless compressed version, and second, to convey to the decoder a segmentation describing the contours of the objects in the scene. We first propose an algorithm for constructing a hierarchical segmentation based on the persistency of the contours of regions in an iterative re-estimation algorithm. We then propose a second algorithm for constructing a new sequence of segmentations by selecting the order in which the persistent contours are included in the model, driven by decisions based on the descriptive codelength. We consider real disparity datasets that provide the geometry information at high precision, in floating-point format, for which encoding the raw information, at about 32 bits per pixel, is too expensive; we then demonstrate good approximations preserving the object structure of the scene, achieved at rates below 0.2 bits per pixel. Full article
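As a hedged sketch of the surface-model idea described above (not the authors' codec), the snippet below fits a quadratic polynomial surface to a disparity patch by least squares, so that the patch can be summarized by six coefficients instead of one floating-point value per pixel; the function name and the toy patch are illustrative.

```python
import numpy as np

def fit_quadratic_surface(patch):
    """Least-squares fit of d(x, y) ~ a + bx + cy + dx^2 + exy + fy^2 to a
    disparity patch; returns the coefficients and the reconstructed patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, (xs * ys).ravel(), ys.ravel() ** 2])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    return coeffs, (A @ coeffs).reshape(h, w)

# Toy usage: a smooth synthetic disparity patch is represented by 6 numbers
# instead of 32 x 32 floating-point values.
ys, xs = np.mgrid[0:32, 0:32]
patch = 10 + 0.05 * xs - 0.02 * ys + 0.001 * xs * ys
coeffs, approx = fit_quadratic_surface(patch)
print(np.max(np.abs(patch - approx)))   # near machine precision
```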

21 pages, 774 KiB  
Article
Study on the Influence of Diversity and Quality in Entropy Based Collaborative Clustering
by Jérémie Sublime, Guénaël Cabanes and Basarab Matei
Entropy 2019, 21(10), 951; https://0-doi-org.brum.beds.ac.uk/10.3390/e21100951 - 28 Sep 2019
Cited by 3 | Viewed by 2023
Abstract
The aim of collaborative clustering is to enhance the performance of clustering algorithms by enabling them to work together and exchange information in order to tackle difficult data sets. The fundamental concept of collaboration is that clustering algorithms operate locally but collaborate by exchanging information about the local structures each of them has found. This kind of collaborative learning can be beneficial to a wide range of tasks, including multi-view clustering, clustering of distributed data with privacy constraints, multi-expert clustering, and multi-scale analysis. Within this context, the main difficulty of collaborative clustering is determining how to weight the influence of the different clustering methods so as to maximize the final results and minimize the risk of negative collaborations, where the results are worse after collaboration than before. In this paper, we study how the quality and diversity of the different collaborators, as well as the stability of the partitions, influence the final results. We propose both a theoretical analysis based on mathematical optimization and a second study based on empirical results. Our findings show that, on the one hand, in the absence of a clear criterion to optimize, a low-diversity pool of solutions with high stability is the best option to ensure good performance. On the other hand, if there is a known criterion to maximize, it is best to rely on a higher-diversity pool of solutions with high quality on that criterion. While our approach focuses on entropy-based collaborative clustering, we believe that most of our results could be extended to other collaborative algorithms. Full article
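A hedged illustration of one way the diversity between collaborators can be quantified (the paper's own entropy-based framework is more involved): normalized mutual information between two partitions, where low NMI indicates a high-diversity pair. The function names and toy partitions below are assumptions.

```python
import numpy as np

def partition_entropy(labels):
    """Shannon entropy (nats) of a label assignment."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def normalized_mutual_info(a, b):
    """NMI between two partitions; low NMI = high diversity between collaborators."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    mi = 0.0
    for ca in np.unique(a):
        for cb in np.unique(b):
            n_ab = np.sum((a == ca) & (b == cb))
            if n_ab:
                mi += (n_ab / n) * np.log(n * n_ab / (np.sum(a == ca) * np.sum(b == cb)))
    denom = np.sqrt(partition_entropy(a) * partition_entropy(b))
    return mi / denom if denom else 0.0

# Toy usage: two partitions of six points produced by two collaborators.
print(normalized_mutual_info([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
```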

21 pages, 1321 KiB  
Article
A Hierarchical Gamma Mixture Model-Based Method for Classification of High-Dimensional Data
by Muhammad Azhar, Mark Junjie Li and Joshua Zhexue Huang
Entropy 2019, 21(9), 906; https://0-doi-org.brum.beds.ac.uk/10.3390/e21090906 - 18 Sep 2019
Cited by 3 | Viewed by 2963
Abstract
Data classification is an important research topic in the field of data mining. With the rapid development of social media sites and IoT devices, data have grown tremendously in volume and complexity, resulting in many large and complex high-dimensional data sets. Classifying such high-dimensional complex data with a large number of classes has been a great challenge for current state-of-the-art methods. This paper presents a novel, hierarchical, gamma mixture model-based unsupervised method for classifying high-dimensional data with a large number of classes. In this method, we first partition the features of the dataset into feature strata by using k-means. Then, a set of subspace data sets is generated from the feature strata by using the stratified subspace sampling method. After that, the GMM Tree algorithm is used to identify the number of clusters and the initial clusters in each subspace dataset, and these initial cluster centers are passed to k-means to generate base subspace clustering results. The subspace clustering results are then integrated into an object cluster association (OCA) matrix by using the link-based method. The ensemble clustering result is generated from the OCA matrix by the k-means algorithm, with the number of clusters identified by the GMM Tree algorithm. After producing the ensemble clustering result, the dominant class label is assigned to each cluster after computing its purity. A new object is classified by computing the distance between it and the center of each cluster in the classifier, and the class label of the nearest cluster is assigned to the object. A series of experiments were conducted on twelve synthetic and eight real-world data sets with different numbers of classes, features, and objects. The experimental results show that the new method outperforms other state-of-the-art techniques for classifying data in most of the data sets. Full article
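The final two steps of the pipeline described above (purity-based labeling of clusters and nearest-center classification) can be sketched as follows; this is a hedged toy illustration rather than the authors' implementation, and all names and data are illustrative.

```python
import numpy as np
from collections import Counter

def label_clusters_by_purity(cluster_ids, true_labels):
    """Assign to each cluster the dominant class label among its members and
    report the cluster's purity (fraction of members in the dominant class)."""
    mapping = {}
    for c in np.unique(cluster_ids):
        counts = Counter(true_labels[cluster_ids == c])
        label, count = counts.most_common(1)[0]
        mapping[int(c)] = (label, count / sum(counts.values()))
    return mapping

def classify(x, centers, mapping):
    """Classify a new object by the label of its nearest cluster center."""
    nearest = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
    return mapping[nearest][0]

# Toy usage with two clusters.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
cluster_ids = np.array([0, 0, 1, 1])
true_labels = np.array(["a", "a", "b", "b"])
mapping = label_clusters_by_purity(cluster_ids, true_labels)
print(classify(np.array([4.5, 5.2]), centers, mapping))   # "b"
```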

29 pages, 2239 KiB  
Article
Estimating Topic Modeling Performance with Sharma–Mittal Entropy
by Sergei Koltcov, Vera Ignatenko and Olessia Koltsova
Entropy 2019, 21(7), 660; https://0-doi-org.brum.beds.ac.uk/10.3390/e21070660 - 05 Jul 2019
Cited by 21 | Viewed by 5850
Abstract
Topic modeling is a popular approach for clustering text documents. However, current tools have a number of unsolved problems, such as instability and a lack of criteria for selecting the values of model parameters. In this work, we propose a method to partially solve the problems of optimizing model parameters while simultaneously accounting for semantic stability. Our method is inspired by concepts from statistical physics and is based on Sharma–Mittal entropy. We test our approach on two models, probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) with Gibbs sampling, and on two datasets in different languages. We compare our approach against a number of standard metrics, each of which is able to account for just one of the parameters of interest. We demonstrate that Sharma–Mittal entropy is a convenient tool for selecting both the number of topics and the values of hyper-parameters, simultaneously controlling for semantic stability, which none of the existing metrics can do. Furthermore, we show that concepts from statistical physics can contribute to theory construction for machine learning, a rapidly developing field that currently lacks a consistent theoretical ground. Full article
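For reference, a minimal implementation of the Sharma–Mittal entropy of a discrete distribution is sketched below (for q, r ≠ 1); how the distribution is built from a trained topic model follows the paper and is not reproduced here, and the toy distribution is an assumption.

```python
import numpy as np

def sharma_mittal_entropy(p, q, r):
    """Sharma-Mittal entropy of a discrete distribution p, for q, r != 1.
    Limiting cases: r -> 1 gives Renyi entropy, r -> q gives Tsallis entropy,
    and q, r -> 1 gives Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (np.sum(p ** q) ** ((1 - r) / (1 - q)) - 1) / (1 - r)

# Toy usage on a small topic-style distribution.
p = np.array([0.5, 0.3, 0.2])
print(sharma_mittal_entropy(p, q=2.0, r=0.5))
print(-np.sum(p * np.log(p)))   # Shannon entropy, the q, r -> 1 limit
```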

31 pages, 1480 KiB  
Article
Double-Granule Conditional-Entropies Based on Three-Level Granular Structures
by Taopin Mu, Xianyong Zhang and Zhiwen Mo
Entropy 2019, 21(7), 657; https://0-doi-org.brum.beds.ac.uk/10.3390/e21070657 - 03 Jul 2019
Cited by 5 | Viewed by 2607
Abstract
Rough set theory is an important approach for data mining, and it relies on Shannon's information measures for uncertainty measurement. The existing local conditional-entropies have both a second-order feature and application limitations. By improving hierarchical granulation, this paper establishes double-granule conditional-entropies based on three-level granular structures (i.e., micro-bottom, meso-middle, macro-top), and then investigates the relevant properties. In terms of the decision table and its decision classification, double-granule conditional-entropies are proposed at the micro-bottom by the dual condition-granule system. By virtue of successive granular summation integrations, they hierarchically evolve to the meso-middle and macro-top, so as to have partial and complete condition-granulations, respectively. The new measures then acquire their number distribution, calculation algorithm, three bounds, and granulation non-monotonicity at the three corresponding levels. Finally, the hierarchical constructions and achieved properties are effectively verified by decision table examples and data set experiments. Double-granule conditional-entropies carry the second-order characteristic and hierarchical granulation to deepen both the classical entropy system and local conditional-entropies, and thus they become novel uncertainty measures for information processing and knowledge reasoning. Full article
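As a hedged baseline for the measures discussed above (not the double-granule construction itself), the snippet computes the classical conditional entropy H(D | C) of a decision table, where condition granules are the equivalence classes induced by the condition attributes; the names and toy table are illustrative.

```python
import numpy as np
from collections import Counter, defaultdict

def decision_conditional_entropy(condition, decision):
    """Classical H(D | C) of a decision table in bits: group rows into
    condition granules (equivalence classes of the condition attributes)
    and average the decision entropy over granules."""
    granules = defaultdict(list)
    for row, d in zip(condition, decision):
        granules[tuple(row)].append(d)
    n = len(decision)
    h = 0.0
    for members in granules.values():
        counts = Counter(members)
        m = len(members)
        h += (m / n) * -sum((c / m) * np.log2(c / m) for c in counts.values())
    return h

# Toy decision table: two condition attributes, one binary decision.
condition = [(0, 1), (0, 1), (1, 0), (1, 0), (1, 1)]
decision = ["yes", "no", "yes", "yes", "no"]
print(decision_conditional_entropy(condition, decision))   # 0.4 bits
```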

12 pages, 453 KiB  
Article
Model Selection for Non-Negative Tensor Factorization with Minimum Description Length
by Yunhui Fu, Shin Matsushima and Kenji Yamanishi
Entropy 2019, 21(7), 632; https://0-doi-org.brum.beds.ac.uk/10.3390/e21070632 - 27 Jun 2019
Cited by 2 | Viewed by 3899
Abstract
Non-negative tensor factorization (NTF) is a widely used multi-way analysis approach that factorizes a high-order non-negative data tensor into several non-negative factor matrices. In NTF, the non-negative rank has to be predetermined to specify the model and it greatly influences the factorized matrices. However, its value is conventionally determined by specialists’ insights or trial and error. This paper proposes a novel rank selection criterion for NTF on the basis of the minimum description length (MDL) principle. Our methodology is unique in that (1) we apply the MDL principle on tensor slices to overcome a problem caused by the imbalance between the number of elements in a data tensor and that in factor matrices, and (2) we employ the normalized maximum likelihood (NML) code-length for histogram densities. We employ synthetic and real data to empirically demonstrate that our method outperforms other criteria in terms of accuracies for estimating true ranks and for completing missing values. We further show that our method can produce ranks suitable for knowledge discovery. Full article
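A hedged, simplified sketch of MDL-based rank selection on a matrix (not the paper's tensor-slice NML criterion): each candidate rank is scored by a generic two-part code length, a Gaussian fit term plus a parameter cost. The use of scikit-learn's NMF and the particular cost terms are assumptions made for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

def mdl_rank_selection(X, ranks):
    """Generic two-part MDL score for choosing a non-negative rank:
    Gaussian fit code length plus (k/2) log N with k = r * (rows + cols)."""
    n, m = X.shape
    N = n * m
    scores = {}
    for r in ranks:
        model = NMF(n_components=r, init="random", random_state=0, max_iter=500)
        W = model.fit_transform(X)
        H = model.components_
        mse = np.mean((X - W @ H) ** 2) + 1e-12
        fit_cost = 0.5 * N * np.log(mse)            # data code length (up to constants)
        param_cost = 0.5 * r * (n + m) * np.log(N)  # parameter code length
        scores[r] = fit_cost + param_cost
    return min(scores, key=scores.get), scores

# Toy usage: non-negative data generated with true rank 3 plus small noise.
rng = np.random.default_rng(0)
X = rng.random((60, 3)) @ rng.random((3, 40)) + 0.01 * rng.random((60, 40))
best, scores = mdl_rank_selection(X, ranks=[1, 2, 3, 4, 5])
print(best)   # expected to be close to 3
```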

15 pages, 887 KiB  
Article
A Maximum Entropy Procedure to Solve Likelihood Equations
by Antonio Calcagnì, Livio Finos, Gianmarco Altoé and Massimiliano Pastore
Entropy 2019, 21(6), 596; https://0-doi-org.brum.beds.ac.uk/10.3390/e21060596 - 15 Jun 2019
Cited by 3 | Viewed by 3646
Abstract
In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require setting the score function of the maximum likelihood problem to zero, we propose an alternative strategy where the score is instead used as an external informative constraint on the maximization of the convex Shannon entropy function. The problem involves the reparameterization of the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation where parameters are searched for in a smaller (hyper)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggest that the maximum entropy reformulation of the score problem solves the likelihood equation problem. Similarly, when maximum likelihood estimation is difficult, as in the case of logistic regression under separation, the maximum entropy proposal achieved results (numerically) comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings reveal that a maximum entropy solution can be considered an alternative technique for solving the likelihood equation. Full article
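A minimal sketch of the general maximum-entropy-under-constraints machinery such an approach builds on (not the paper's exact score-constrained reformulation): Shannon entropy is maximized over the probability simplex subject to a moment constraint, using SciPy's SLSQP solver. The support points and target value are toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Estimate a discrete distribution over a fixed support whose mean matches an
# observed moment, by maximizing Shannon entropy inside the probability simplex.
support = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
target_mean = 0.4

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))          # minimize negative entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},            # simplex constraint
    {"type": "eq", "fun": lambda p: support @ p - target_mean},  # moment (score-like) constraint
]
p0 = np.full(len(support), 1.0 / len(support))
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * len(support),
               constraints=constraints, method="SLSQP")
print(res.x, support @ res.x)   # maximum-entropy probabilities and their mean
```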

17 pages, 866 KiB  
Article
SIMIT: Subjectively Interesting Motifs in Time Series
by Junning Deng, Jefrey Lijffijt, Bo Kang and Tijl De Bie
Entropy 2019, 21(6), 566; https://0-doi-org.brum.beds.ac.uk/10.3390/e21060566 - 05 Jun 2019
Cited by 2 | Viewed by 3285
Abstract
Numerical time series data are pervasive, originating from sources as diverse as wearable devices, medical equipment, and sensors in industrial plants. In many cases, time series contain interesting information in the form of subsequences that recur in approximate form, so-called motifs. Major open challenges in this area include how to formalize the interestingness of such motifs and how to find the most interesting ones. We introduce a novel approach that tackles these issues. We formalize the notion of such subsequence patterns in an intuitive manner and present an information-theoretic approach for quantifying their interestingness with respect to any prior expectation a user may have about the time series. The resulting interestingness measure is thus a subjective measure, enabling a user to find motifs that are truly interesting to them. Although finding the best motif appears computationally intractable, we develop relaxations and a branch-and-bound approach implemented in a constraint programming solver. As shown in experiments on synthetic data and two real-world datasets, this enables us to mine interesting patterns in small or mid-sized time series. Full article
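For contrast with the subjective, information-theoretic measure proposed in the paper, a brute-force baseline for motif discovery is sketched below: it simply returns the pair of non-overlapping windows with the smallest z-normalized Euclidean distance. All names and the toy series are illustrative.

```python
import numpy as np

def znorm(w):
    return (w - w.mean()) / (w.std() + 1e-12)

def best_motif_pair(x, m):
    """Brute-force baseline: the pair of non-overlapping length-m windows
    with the smallest z-normalized Euclidean distance."""
    windows = [znorm(x[i:i + m]) for i in range(len(x) - m + 1)]
    best, best_pair = np.inf, None
    for i in range(len(windows)):
        for j in range(i + m, len(windows)):        # skip trivially overlapping matches
            d = np.linalg.norm(windows[i] - windows[j])
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair, best

# Toy usage: the same bump is planted at positions 20 and 120 in noise.
rng = np.random.default_rng(1)
x = rng.normal(0, 0.3, 200)
bump = np.sin(np.linspace(0, np.pi, 15))
x[20:35] += bump
x[120:135] += bump
print(best_motif_pair(x, 15))   # indices near (20, 120)
```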

19 pages, 907 KiB  
Article
An Information Criterion for Auxiliary Variable Selection in Incomplete Data Analysis
by Shinpei Imori and Hidetoshi Shimodaira
Entropy 2019, 21(3), 281; https://0-doi-org.brum.beds.ac.uk/10.3390/e21030281 - 14 Mar 2019
Cited by 2 | Viewed by 3864
Abstract
Statistical inference is considered for variables of interest, called primary variables, when auxiliary variables are observed along with the primary variables. We consider the setting of incomplete data analysis, where some primary variables are not observed. Utilizing a parametric model of the joint distribution of primary and auxiliary variables, it is possible to improve the estimation of the parametric model for the primary variables when the auxiliary variables are closely related to the primary variables. However, the estimation accuracy decreases when the auxiliary variables are irrelevant to the primary variables. To select useful auxiliary variables, we formulate the problem as model selection and propose an information criterion for predicting primary variables by leveraging auxiliary variables. The proposed information criterion is an asymptotically unbiased estimator of the Kullback–Leibler divergence for complete data of primary variables under some reasonable conditions. We also clarify an asymptotic equivalence between the proposed information criterion and a variant of leave-one-out cross-validation. The performance of our method is demonstrated via a simulation study and a real data example. Full article

14 pages, 3604 KiB  
Article
Mixture of Experts with Entropic Regularization for Data Classification
by Billy Peralta, Ariel Saavedra, Luis Caro and Alvaro Soto
Entropy 2019, 21(2), 190; https://0-doi-org.brum.beds.ac.uk/10.3390/e21020190 - 18 Feb 2019
Cited by 4 | Viewed by 4660
Abstract
Today, there is growing interest in the automatic classification of a variety of tasks, such as weather forecasting, product recommendations, intrusion detection, and people recognition. “Mixture-of-experts” is a well-known classification technique; it is a probabilistic model consisting of local expert classifiers weighted by a gate network, typically based on softmax functions, which allows the model to learn complex patterns in data. In this scheme, one data point is influenced by only one expert; as a result, the training process can be misguided in real datasets for which complex data need to be explained by multiple experts. In this work, we propose a variant of the regular mixture-of-experts model. In the proposed model, the classification cost is penalized by the Shannon entropy of the gating network in order to avoid a “winner-takes-all” output of the gating network. Experiments show the advantage of our approach on several real datasets, with improvements in mean accuracy of 3–6% in some datasets. In future work, we plan to embed feature selection into this model. Full article
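A hedged sketch of the entropic regularization idea (the exact objective in the paper may differ in sign conventions and parameterization): the mixture negative log-likelihood is combined with an entropy bonus on the gating distribution so that the gate does not collapse onto a single expert. All names, the weight lam, and the toy values are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropic_moe_loss(gate_logits, expert_probs_true, lam=0.1):
    """Mixture-of-experts negative log-likelihood plus an entropy bonus on the
    gating distribution, discouraging winner-takes-all gating.

    gate_logits: (n, K) gating scores per sample and expert.
    expert_probs_true: (n, K) probability each expert assigns to the true label.
    lam: regularization weight (assumed hyper-parameter)."""
    g = softmax(gate_logits)                              # gate weights
    mix = np.sum(g * expert_probs_true, axis=1)           # mixture likelihood of the true label
    nll = -np.mean(np.log(mix + 1e-12))
    gate_entropy = -np.mean(np.sum(g * np.log(g + 1e-12), axis=1))
    return nll - lam * gate_entropy                       # higher gate entropy lowers the loss

# Toy usage: two experts, three samples.
gate_logits = np.array([[2.0, -1.0], [0.5, 0.5], [-1.0, 2.0]])
expert_probs_true = np.array([[0.9, 0.2], [0.6, 0.7], [0.1, 0.8]])
print(entropic_moe_loss(gate_logits, expert_probs_true, lam=0.1))
```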

16 pages, 9764 KiB  
Article
The Optimized Multi-Scale Permutation Entropy and Its Application in Compound Fault Diagnosis of Rotating Machinery
by Xianzhi Wang, Shubin Si, Yu Wei and Yongbo Li
Entropy 2019, 21(2), 170; https://0-doi-org.brum.beds.ac.uk/10.3390/e21020170 - 12 Feb 2019
Cited by 20 | Viewed by 3645
Abstract
Multi-scale permutation entropy (MPE) is a statistical indicator for detecting nonlinear dynamic changes in time series, with the merits of high computational efficiency, good robustness, and independence from prior knowledge. However, the performance of MPE depends on the selection of the embedding dimension and the time delay. To automate the parameter selection of MPE, a novel parameter optimization strategy is proposed, namely optimized multi-scale permutation entropy (OMPE). In the OMPE method, an improved Cao method is proposed to adaptively select the embedding dimension, while the time delay is determined based on mutual information. To verify the effectiveness of the OMPE method, a simulated signal and two experimental signals are used for validation. The results demonstrate that the proposed OMPE method has better feature extraction ability compared with existing MPE methods. Full article
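For reference, a standard (non-optimized) multi-scale permutation entropy can be sketched as below; the embedding dimension m and time delay tau are fixed by hand here, which is exactly the limitation the OMPE method addresses. Function names and the toy signals are illustrative.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal for embedding dimension m
    and time delay tau (0 = perfectly regular, 1 = fully random)."""
    patterns = {}
    for i in range(len(x) - (m - 1) * tau):
        pattern = tuple(np.argsort(x[i:i + m * tau:tau]))   # ordinal pattern of the window
        patterns[pattern] = patterns.get(pattern, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def multiscale_pe(x, scales=(1, 2, 3, 4), m=3, tau=1):
    """Coarse-grain the signal at each scale (non-overlapping means) and
    compute the permutation entropy per scale."""
    out = []
    for s in scales:
        coarse = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m, tau))
    return out

# Toy usage: white noise stays near 1, a smooth periodic signal stays much lower.
rng = np.random.default_rng(0)
print(multiscale_pe(rng.normal(size=2000)))
print(multiscale_pe(np.sin(0.2 * np.arange(2000))))
```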
