
Theory and Applications of Information Processing Algorithms

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 21605

Special Issue Editors


Guest Editor
Departamento de Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
Interests: signal processing; information theory; machine learning; communications; audio

Guest Editor
Departamento de Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
Interests: latent variable analysis; independent component analysis; blind source separation; applications of signal processing in audio and communications

Guest Editor
Departamento de Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
Interests: digital signal processing; biomedical engineering; digital communications

Guest Editor
Skolkovo Institute of Science and Technology (SKOLTECH), 143026 Moscow, Russia
Interests: biomedical signal processing; brain-computer interface (BCI) and human computer interactions (HCI); tensor decomposition and tensor networks; blind source separation; deep neural networks and AI

Special Issue Information

Dear Colleagues,

Over the last few decades of research, we have witnessed a progressive consolidation of the concept of information at the core of the design and evaluation of many modern algorithms for processing observed data. Information measures and statistical divergences have emerged as transversal tools whose widespread use tends to blur the already diffuse boundaries between interrelated research fields such as artificial intelligence, cybernetics, statistical signal processing, communications, multimedia processing, and biomedical signal analysis.

In this Special Issue, we encourage researchers to present original results on the use of information and divergence measures as building blocks both for the principles and criteria that drive the processing of observations and for the associated performance evaluation. Possible topics include, but are not limited to, advances in the theory and applications of machine learning for signal processing, shallow and deep learning methods, estimation and detection techniques, compression, and model selection or comparison. We also welcome exceptional review contributions covering state-of-the-art research in areas that fall within the scope of this Special Issue.
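As a concrete instance of the divergence measures mentioned above, the Kullback–Leibler divergence between two discrete distributions can be computed in a few lines. This is a minimal illustrative sketch, not tied to any particular paper in this issue:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats between two
    discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Divergence between a fair coin and a heavily biased one.
p = [0.5, 0.5]
q = [0.9, 0.1]
d = kl_divergence(p, q)
```

The divergence is zero exactly when the two distributions coincide, which is what makes it usable as a design criterion.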

Prof. Dr. Sergio Cruces
Dr. Iván Durán-Díaz
Dr. Rubén Martín-Clemente
Prof. Dr. Andrzej Cichocki
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information-theoretic criteria
  • applications of information processing algorithms
  • machine learning for signal processing
  • shallow and deep learning methods
  • estimation and detection techniques
  • Bayesian methods
  • model optimization, compression, and comparison

Published Papers (10 papers)


Research

12 pages, 381 KiB  
Article
Group Testing with Blocks of Positives and Inhibitors
by Thach V. Bui, Isao Echizen, Minoru Kuribayashi, Tetsuya Kojima and Thuc D. Nguyen
Entropy 2022, 24(11), 1562; https://doi.org/10.3390/e24111562 - 30 Oct 2022
Viewed by 1291
Abstract
The main goal of group testing is to identify a small number of specific items among a large population of items. In this paper, we consider specific items as positives and inhibitors and non-specific items as negatives. In particular, we consider a novel model called group testing with blocks of positives and inhibitors. A test on a subset of items is positive if the subset contains at least one positive and does not contain any inhibitors, and negative otherwise. In this model, the input items are linearly ordered, and the positives and inhibitors are subsets of small blocks (at unknown locations) of consecutive items over that order. We also consider two specific instantiations of this model. The first instantiation is the model that contains a single block of consecutive items consisting of exactly known numbers of positives and inhibitors. The second instantiation is the model that contains a single block of consecutive items containing known numbers of positives and inhibitors. Our contribution is to propose efficient encoding and decoding schemes such that the numbers of tests used to identify only positives, or both positives and inhibitors, are smaller than in the state-of-the-art schemes. Moreover, the decoding times mostly scale with the number of tests, which is significantly smaller than in the state-of-the-art schemes, whose decoding times scale with both the number of tests and the number of items.
(This article belongs to the Special Issue Theory and Applications of Information Processing Algorithms)

27 pages, 27456 KiB  
Article
Lossless Medical Image Compression by Using Difference Transform
by Rafael Rojas-Hernández, Juan Luis Díaz-de-León-Santiago, Grettel Barceló-Alonso, Jorge Bautista-López, Valentin Trujillo-Mora and Julio César Salgado-Ramírez
Entropy 2022, 24(7), 951; https://doi.org/10.3390/e24070951 - 08 Jul 2022
Cited by 3 | Viewed by 1925
Abstract
This paper introduces a new method of compressing digital images by using the Difference Transform applied in medical imaging. The Difference Transform algorithm performs the decorrelation process of the image data and in this way improves the encoding process, achieving a file with a smaller size than the original. The proposed method proves to be competitive and, in many cases, better than the standards used for medical images, such as TIFF or PNG. In addition, the Difference Transform can replace other transforms, such as the Cosine or Wavelet transforms.
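The paper's Difference Transform is not reproduced here; as a rough illustration of how differencing decorrelates smooth data before encoding, a one-dimensional lossless delta encoding might look like this (an assumption-laden sketch, not the authors' algorithm):

```python
def forward_difference(row):
    """Keep the first sample and replace each later sample by its
    difference from the previous one; invertible, hence lossless."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def inverse_difference(diff):
    """Undo forward_difference by cumulative summation."""
    out = [diff[0]]
    for d in diff[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 101, 103, 103, 104]   # smooth, correlated samples
diff = forward_difference(row)    # small residuals cluster near zero
restored = inverse_difference(diff)
```

The residuals concentrate around zero, so an entropy coder applied afterwards needs fewer bits than on the raw samples.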

16 pages, 7263 KiB  
Article
A Method for Unsupervised Semi-Quantification of Immunohistochemical Staining with Beta Divergences
by Auxiliadora Sarmiento, Iván Durán-Díaz, Irene Fondón, Mercedes Tomé, Clément Bodineau and Raúl V. Durán
Entropy 2022, 24(4), 546; https://doi.org/10.3390/e24040546 - 13 Apr 2022
Cited by 2 | Viewed by 1899
Abstract
In many research laboratories, it is essential to determine the relative expression levels of some proteins of interest in tissue samples. The semi-quantitative scoring of a set of images consists of establishing a scale of scores, ranging from zero or one to a maximum number set by the researcher, and assigning to each image a score that should represent some predefined characteristic of the IHC staining, such as its intensity. However, manual scoring depends on the judgment of an observer and therefore exposes the assessment to a certain level of bias. In this work, we present a fully automatic and unsupervised method for comparative biomarker quantification in histopathological brightfield images. The method relies on a color separation method that robustly discriminates between two chromogens, expressed as brown and blue colors, independent of color variation or biomarker expression level. For this purpose, we have adopted a two-stage stain separation approach in the optical density space. First, a preliminary separation is performed using a deconvolution method in which the color vectors of the stains are determined after an eigendecomposition of the data. Then, we adjust the separation using the non-negative matrix factorization method with beta divergences, initializing the algorithm with the matrices resulting from the previous step. After that, a feature vector for each image based on the intensity of the two chromogens is determined. Finally, the images are annotated using a systematically initialized k-means clustering algorithm with beta divergences. The method clearly defines the initial boundaries of the categories, although some flexibility is added. Experiments on the semi-quantitative scoring of images in five categories have been carried out, comparing the results with the scores of four expert researchers and yielding accuracies that range between 76.60% and 94.58%. These results show that the proposed automatic scoring system, which is definable and reproducible, produces consistent results.
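The beta divergence that drives both the NMF and the k-means steps has a standard pointwise form; a sketch, assuming the usual parameterization in which beta = 2 recovers half the squared Euclidean distance, beta = 1 the generalized KL divergence, and beta = 0 the Itakura–Saito divergence:

```python
import math

def beta_divergence(x, y, beta):
    """Pointwise beta divergence d_beta(x || y) for x, y > 0.
    beta = 2 gives half the squared Euclidean distance, beta = 1 the
    generalized KL divergence, beta = 0 the Itakura-Saito divergence."""
    if beta == 1:
        return x * math.log(x / y) - x + y
    if beta == 0:
        return x / y - math.log(x / y) - 1
    return (x ** beta + (beta - 1) * y ** beta
            - beta * x * y ** (beta - 1)) / (beta * (beta - 1))
```

Varying beta changes how strongly large versus small intensity values dominate the fit, which is why it is a useful tuning knob for stain separation.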

21 pages, 2613 KiB  
Article
Discrete Infomax Codes for Supervised Representation Learning
by Yoonho Lee, Wonjae Kim, Wonpyo Park and Seungjin Choi
Entropy 2022, 24(4), 501; https://doi.org/10.3390/e24040501 - 02 Apr 2022
Cited by 1 | Viewed by 1716
Abstract
For high-dimensional data such as images, learning an encoder that can output a compact yet informative representation is a key task on its own, in addition to facilitating subsequent processing of the data. We present a model that produces discrete infomax codes (DIMCO); we train a probabilistic encoder that yields k-way d-dimensional codes associated with input data. Our model maximizes the mutual information between codes and ground-truth class labels, with a regularization that encourages entries of a codeword to be statistically independent. In this context, we show that the infomax principle also justifies existing loss functions, such as cross-entropy, as special cases. Our analysis also shows that using shorter codes reduces overfitting in the context of few-shot classification, and our various experiments show this implicit task-level regularization effect of DIMCO. Furthermore, we show that the codes learned by DIMCO are efficient in terms of both memory and retrieval time compared to prior methods.
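The mutual information between discrete codes and class labels that DIMCO maximizes can be estimated from paired samples with a simple plug-in estimator. This is an illustrative sketch, not the authors' training objective, which is optimized through a probabilistic encoder:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

codes = [0, 0, 1, 1]
labels = ['a', 'a', 'b', 'b']
mi_max = mutual_information(codes, labels)          # codes determine labels
mi_zero = mutual_information([0, 1, 0, 1], labels)  # codes carry no label info
```

Codes that perfectly predict the label attain the maximum ln 2 nats for two balanced classes, while label-independent codes score zero.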

10 pages, 431 KiB  
Article
An Information-Theoretic Analysis of the Cost of Decentralization for Learning and Inference under Privacy Constraints
by Sharu Theresa Jose and Osvaldo Simeone
Entropy 2022, 24(4), 485; https://doi.org/10.3390/e24040485 - 30 Mar 2022
Viewed by 1464
Abstract
In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be quantified in terms of conditional mutual information terms involving features and label variables.
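The conditional mutual information quantities in which the cost of decentralization is expressed can, for discrete variables, be estimated with a plug-in estimator; a sketch assuming the standard definition of I(X; Y | Z):

```python
import math
from collections import Counter

def conditional_mi(xs, ys, zs):
    """Plug-in estimate of I(X; Y | Z) in nats from paired discrete samples."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    return sum((c / n) * math.log((pz[z] * c) / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), c in pxyz.items())

# With a constant Z, I(X; Y | Z) reduces to the ordinary I(X; Y).
cmi_dep = conditional_mi([0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0])
cmi_ind = conditional_mi([0, 1, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0])
```

The estimator vanishes when X and Y are conditionally independent given Z, which is exactly the regime in which decentralization costs nothing in the paper's framework.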

14 pages, 305 KiB  
Article
An Efficient Parallel Reverse Conversion of Residue Code to Mixed-Radix Representation Based on the Chinese Remainder Theorem
by Mikhail Selianinau and Yuriy Povstenko
Entropy 2022, 24(2), 242; https://doi.org/10.3390/e24020242 - 05 Feb 2022
Viewed by 1222
Abstract
In this paper, we deal with critical problems in residue arithmetic. The reverse conversion from a Residue Number System (RNS) to positional notation is a main non-modular operation, and it constitutes the basis of other non-modular procedures used to implement various computational algorithms. We present a novel approach to the parallel reverse conversion from the residue code into a weighted number representation in the Mixed-Radix System (MRS). In our proposed method, the calculation of mixed-radix digits reduces to a parallel summation of the small word-length residues in the independent modular channels corresponding to the primary RNS moduli. The computational complexity of the developed method, concerning both the required modular addition operations and one-input lookup tables, is estimated as O(k²/2), where k equals the number of moduli used. The time complexity is O(log₂ k) modular clock cycles. In pipeline mode, the throughput rate of the proposed algorithm is one reverse conversion per modular clock cycle.
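For orientation, the classical sequential RNS-to-MRS conversion (not the paper's parallel scheme) fixes the notation: the mixed-radix digits a1, a2, ..., ak represent X = a1 + a2·m1 + a3·m1·m2 + ...:

```python
def rns_to_mixed_radix(residues, moduli):
    """Sequential conversion of an RNS code to mixed-radix digits
    (a1, a2, ..., ak) with X = a1 + a2*m1 + a3*m1*m2 + ... .
    Moduli must be pairwise coprime."""
    r = list(residues)
    digits = []
    for i, mi in enumerate(moduli):
        a = r[i] % mi
        digits.append(a)
        for j in range(i + 1, len(moduli)):
            # subtract the digit, then divide by mi in channel j
            inv = pow(mi, -1, moduli[j])   # modular inverse (Python 3.8+)
            r[j] = ((r[j] - a) * inv) % moduli[j]
    return digits

moduli = [3, 5, 7]                    # pairwise coprime moduli
x = 52
residues = [x % m for m in moduli]    # RNS code of x
digits = rns_to_mixed_radix(residues, moduli)
value = digits[0] + digits[1] * 3 + digits[2] * 3 * 5   # reconstruct X
```

The sequential method needs O(k) dependent steps, which is the bottleneck the paper's parallel summation scheme is designed to avoid.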
16 pages, 907 KiB  
Article
A Fast Approach to Removing Muscle Artifacts for EEG with Signal Serialization Based Ensemble Empirical Mode Decomposition
by Yangyang Dai, Feng Duan, Fan Feng, Zhe Sun, Yu Zhang, Cesar F. Caiafa, Pere Marti-Puig and Jordi Solé-Casals
Entropy 2021, 23(9), 1170; https://doi.org/10.3390/e23091170 - 06 Sep 2021
Cited by 3 | Viewed by 2939
Abstract
An electroencephalogram (EEG) is an electrophysiological signal reflecting the functional state of the brain. As the control signal of the brain–computer interface (BCI), EEG may build a bridge between humans and computers to improve the quality of life of patients with movement disorders. The collected EEG signals are extremely susceptible to contamination by electromyography (EMG) artifacts, which affect their original characteristics. Therefore, EEG denoising is an essential preprocessing step in any BCI system. Previous studies have confirmed that the combination of ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA) can effectively suppress EMG artifacts. However, the time-consuming iterative process of EEMD may limit the application of the EEMD-CCA method in real-time monitoring of BCI. Compared with the existing EEMD, the recently proposed signal serialization based EEMD (sEEMD) is a good choice to provide effective signal analysis and fast mode decomposition. In this study, an EMG denoising method based on sEEMD and CCA is discussed. All of the analyses are carried out on semi-simulated data. The results show that, in terms of frequency and amplitude, the intrinsic mode functions (IMFs) decomposed by sEEMD are consistent with the IMFs obtained by EEMD. There is no significant difference in the ability to separate EMG artifacts from EEG signals between the sEEMD-CCA method and the EEMD-CCA method (p > 0.05). Even in the case of heavy contamination (signal-to-noise ratio less than 2 dB), the relative root mean squared error is about 0.3, and the average correlation coefficient remains above 0.9. The running speed of the sEEMD-CCA method in removing EMG artifacts is significantly improved in comparison with that of the EEMD-CCA method (p < 0.05). The running time of the sEEMD-CCA method for three lengths of semi-simulated data is shortened by more than 50%. This indicates that sEEMD-CCA is a promising tool for EMG artifact removal in real-time BCI systems.
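The relative root-mean-squared error used as a figure of merit above can be sketched as follows, assuming the common definition (residual RMSE divided by the RMS of the clean reference):

```python
import math

def rrmse(clean, denoised):
    """Relative root-mean-squared error: RMSE of the residual divided
    by the RMS of the clean reference signal."""
    n = len(clean)
    num = math.sqrt(sum((c - d) ** 2 for c, d in zip(clean, denoised)) / n)
    den = math.sqrt(sum(c ** 2 for c in clean) / n)
    return num / den

err_zero = rrmse([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])  # perfect denoising
err_half = rrmse([2.0, 2.0], [1.0, 1.0])            # residual half the signal RMS
```

A value around 0.3, as reported above, means the residual error carries roughly 30% of the reference signal's RMS amplitude.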

26 pages, 645 KiB  
Article
Entropy Estimation Using a Linguistic Zipf–Mandelbrot–Li Model for Natural Sequences
by Andrew D. Back and Janet Wiles
Entropy 2021, 23(9), 1100; https://doi.org/10.3390/e23091100 - 24 Aug 2021
Cited by 3 | Viewed by 2414
Abstract
Entropy estimation faces numerous challenges when applied to various real-world problems. Our interest is in divergence and entropy estimation algorithms that are capable of rapid estimation for natural sequence data such as human and synthetic languages. This typically requires a large amount of data; however, we propose a new approach based on a new rank-based analytic Zipf–Mandelbrot–Li probabilistic model. Unlike previous approaches, which do not consider the nature of the probability distribution in relation to language, here we introduce a novel analytic Zipfian model that includes linguistic constraints. This provides more accurate distributions for natural sequences such as natural or synthetic emergent languages. Results are given which indicate the performance of the proposed ZML model. We derive an entropy estimation method which incorporates the linguistic constraint-based Zipf–Mandelbrot–Li model into a new non-equiprobable coincidence counting algorithm, which is shown to be effective for tasks such as entropy rate estimation with limited data.
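The Li refinement of the model is not reproduced here; the plain Zipf–Mandelbrot law and its Shannon entropy can be sketched as follows (the parameter values are arbitrary placeholders, not the paper's fitted values):

```python
import math

def zipf_mandelbrot(n, alpha=1.0, beta=2.7):
    """Zipf-Mandelbrot rank probabilities p(r) proportional to
    1 / (r + beta)^alpha for ranks r = 1..n."""
    weights = [1.0 / (r + beta) ** alpha for r in range(1, n + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def entropy_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = zipf_mandelbrot(1000)
h = entropy_bits(p)   # well below log2(1000), the uniform maximum
```

Because the rank distribution is heavily skewed, its entropy sits far below the uniform maximum, which is what makes coincidence-counting estimators viable with limited data.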

18 pages, 18064 KiB  
Article
Exploring Neurofeedback Training for BMI Power Augmentation of Upper Limbs: A Pilot Study
by Hongbo Liang, Shota Maedono, Yingxin Yu, Chang Liu, Naoya Ueda, Peirang Li and Chi Zhu
Entropy 2021, 23(4), 443; https://doi.org/10.3390/e23040443 - 09 Apr 2021
Cited by 1 | Viewed by 2082
Abstract
Electroencephalography neurofeedback (EEG-NFB) training can induce changes in the power of targeted EEG bands. The objective of this study is to enhance and evaluate the specific changes of EEG power spectral density that brain–machine interface (BMI) users can reliably generate for power augmentation through EEG-NFB training. First, we constructed an EEG-NFB training system for power augmentation. Then, three subjects were assigned to three NFB training stages, with a 6-day consecutive training session as one stage. The subjects received real-time feedback from their EEG signals by a robotic arm while conducting flexion and extension movements with their elbow and shoulder joints, respectively. EEG signals were compared across the NFB training stages. The training results showed that EEG beta (12–40 Hz) power increased after the NFB training for the movements of both the elbow and the shoulder joints. EEG beta power showed sustained improvements during the 3-stage training, which revealed that even short-term training could improve EEG signals significantly. Moreover, the training effect for the shoulder joints was more obvious than that for the elbow joints. These results suggest that NFB training can improve EEG signals and clarify the specific EEG changes during movement. Our results may even provide insights into how the neural effects of NFB can be better applied to the BMI power augmentation system and improve the performance of healthy individuals.
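Beta-band (12–40 Hz) power of an EEG epoch can be measured with a plain DFT periodogram. This is a self-contained sketch (not the authors' pipeline), exercised on a synthetic 20 Hz tone rather than real EEG:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of the mean-removed signal summed over DFT bins whose
    frequency lies in [f_lo, f_hi] Hz (plain periodogram, no windowing)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    power = 0.0
    for k in range(1, n // 2 + 1):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 256                                                        # sampling rate, Hz
sig = [math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]  # 20 Hz tone, 1 s
beta_pow = band_power(sig, fs, 12, 40)    # beta band captures the tone
alpha_pow = band_power(sig, fs, 8, 12)    # alpha band sees almost nothing
```

A unit-amplitude sinusoid at 20 Hz puts essentially all of its power (0.25 with this normalization, counting only positive-frequency bins) into the beta band, which is the quantity the neurofeedback loop rewards.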

19 pages, 3448 KiB  
Article
Entropy-Based Approach in Selection Exact String-Matching Algorithms
by Ivan Markić, Maja Štula, Marija Zorić and Darko Stipaničev
Entropy 2021, 23(1), 31; https://doi.org/10.3390/e23010031 - 28 Dec 2020
Cited by 5 | Viewed by 3092
Abstract
The string-matching paradigm is applied in every branch of computer science, and in science in general. The existence of a plethora of string-matching algorithms makes it hard to choose the best one for any particular case. Expressing, measuring, and testing algorithm efficiency is a challenging task with many potential pitfalls. Algorithm efficiency can be measured based on the usage of different resources. In software engineering, algorithmic productivity is a property of an algorithm's execution, identified with the computational resources the algorithm consumes. Resource usage in algorithm execution can be determined, and for maximum efficiency the goal is to minimize it. Guided by the fact that standard measures of algorithm efficiency, such as execution time, directly depend on the number of executed actions, and without touching on the problematics of computer power consumption or memory usage, which also depend on the algorithm type and the techniques used in algorithm development, we have developed a methodology that enables researchers to choose an efficient algorithm for a specific domain. The efficiency of string-searching algorithms is usually observed independently of the domain texts being searched. This paper aims to present the idea that algorithm efficiency depends on the properties of the searched string and the properties of the texts being searched, accompanied by a theoretical analysis of the proposed approach. In the proposed methodology, algorithm efficiency is expressed through a character-comparison count metric, a formal quantitative measure independent of algorithm implementation subtleties and computer platform differences. The model is developed for a particular problem domain by using appropriate domain data (patterns and texts) and provides, for that domain, a ranking of algorithms according to the patterns' entropy. The proposed approach is limited to on-line exact string-matching problems, based on the information entropy of a search pattern. Meticulous empirical testing depicts the methodology implementation and supports the soundness of the methodology.
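The character-comparison count metric and the pattern entropy that drive the ranking can be illustrated with the naive matcher (a minimal sketch; the paper evaluates a range of exact string-matching algorithms, not just this one):

```python
import math
from collections import Counter

def naive_match(text, pattern):
    """Naive exact matcher that also counts character comparisons,
    the platform-independent efficiency metric described above."""
    comparisons = 0
    positions = []
    for i in range(len(text) - len(pattern) + 1):
        for j in range(len(pattern)):
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
        else:
            positions.append(i)
    return positions, comparisons

def pattern_entropy_bits(pattern):
    """Shannon entropy of the pattern's character distribution, in bits."""
    n = len(pattern)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pattern).values())

pos, comps = naive_match("abracadabra", "abra")
h = pattern_entropy_bits("abra")
```

Counting comparisons rather than wall-clock time is what makes the metric independent of implementation subtleties and hardware, and the pattern entropy is the feature against which the methodology ranks algorithms.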
