
Entropy, Volume 23, Issue 8 (August 2021) – 174 articles

Cover Story: We perform the coherent, delocalized addition of a single photon to multiple distinct light modes populated by different combinations of multiphoton quantum states. We experimentally demonstrate that such a simple operation may give rise to a wealth of interesting effects, ranging from the generation of a tunable degree of entanglement to the birth of peculiar correlations, both in the number of photons and in the field quadratures, among the resulting multimode light states. Besides their fundamental interest, these investigations open new avenues for applications in technologies involving quantum-enhanced sensing and quantum information processing and communication.
Article
Generalized Ordinal Patterns and the KS-Entropy
Entropy 2021, 23(8), 1097; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081097 - 23 Aug 2021
Abstract
Ordinal patterns classifying real vectors according to the order relations between their components are an interesting basic concept for determining the complexity of a measure-preserving dynamical system. In particular, as shown by C. Bandt, G. Keller and B. Pompe, the permutation entropy based on the probability distributions of such patterns is equal to Kolmogorov–Sinai entropy in simple one-dimensional systems. The general reason for this is that, roughly speaking, the system of ordinal patterns obtained for a real-valued “measuring arrangement” has high potential for separating orbits. Starting from a slightly different approach of A. Antoniouk, K. Keller and S. Maksymenko, we discuss the generalizations of ordinal patterns providing enough separation to determine the Kolmogorov–Sinai entropy. For defining these generalized ordinal patterns, the idea is to substitute the basic binary relation ≤ on the real numbers by another binary relation. Generalizing the former results of I. Stolz and K. Keller, we establish conditions that the binary relation and the dynamical system have to fulfill so that the obtained generalized ordinal patterns can be used for estimating the Kolmogorov–Sinai entropy. Full article
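The core object here, the ordinal pattern, is easy to compute: each length-n window of a series is mapped to the permutation that sorts it, and the permutation entropy is the Shannon entropy of the resulting pattern frequencies. A minimal sketch of the classical Bandt–Pompe construction (not the generalized patterns introduced in the paper):

```python
import math

def permutation_entropy(series, order=3):
    """Permutation entropy (bits) of a series via ordinal patterns.

    Each length-`order` window is mapped to the permutation that sorts
    it; the entropy is taken over the pattern frequencies.
    """
    counts = {}
    n_windows = len(series) - order + 1
    for i in range(n_windows):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n_windows) * math.log2(c / n_windows)
             for c in counts.values())
    return h + 0.0  # +0.0 normalizes IEEE -0.0 to 0.0

# A monotone series realizes a single ordinal pattern, so its
# permutation entropy is zero; a mixed series spreads over patterns.
print(permutation_entropy([1, 2, 3, 4, 5, 6]))        # → 0.0
print(permutation_entropy([2, 1, 4, 3, 6, 5, 8, 7]))  # > 0, two patterns occur
```

Substituting a different binary relation for ≤ in the `sorted` key is exactly the generalization knob the paper studies.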

Article
Joint Lossless Image Compression and Encryption Scheme Based on CALIC and Hyperchaotic System
Entropy 2021, 23(8), 1096; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081096 - 23 Aug 2021
Abstract
For efficiency and security of image transmission and storage, the joint image compression and encryption method that performs compression and encryption in a single step is a promising solution due to better security. Moreover, on some important occasions, it is necessary to save images in high quality by lossless compression. Thus, a joint lossless image compression and encryption scheme based on a context-based adaptive lossless image codec (CALIC) and a hyperchaotic system is proposed to achieve lossless image encryption and compression simultaneously. Making use of the characteristics of CALIC, four encryption locations are designed to realize joint image compression and encryption: encryption for the predicted values of pixels based on gradient-adjusted prediction (GAP), encryption for the final prediction error, encryption for the two lines of pixel values needed by the prediction mode, and encryption for the entropy coding file. Moreover, a new four-dimensional hyperchaotic system and plaintext-related encryption based on table lookup are used to enhance the security. The security tests show that the information entropy, correlation and key sensitivity of the proposed methods reach 7.997, 0.01 and 0.4998, respectively, indicating good security. Meanwhile, compared to the original CALIC without encryption, the proposed methods add security at the cost of reducing the compression ratio by only 6.3%. The test results indicate that the proposed methods have high security and good lossless compression performance. Full article
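The information entropy figure of 7.997 bits refers to the Shannon entropy of the cipher image's pixel-value histogram, whose maximum for 8-bit pixels is 8 bits. A sketch of that metric only (the hyperchaotic cipher itself is not reproduced here):

```python
import math
from collections import Counter

def pixel_entropy(pixels):
    """Shannon entropy (bits) of a sequence of 8-bit pixel values.

    The maximum, reached by a perfectly uniform histogram, is 8 bits;
    values near 8 (such as the reported 7.997) indicate a well-mixed
    cipher image.
    """
    n = len(pixels)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())
    return h + 0.0  # +0.0 normalizes IEEE -0.0 to 0.0

flat = [0] * 1024               # constant image: carries no information
uniform = list(range(256)) * 4  # every 8-bit value equally often
print(pixel_entropy(flat))      # → 0.0
print(pixel_entropy(uniform))   # → 8.0
```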

Article
The Problem of Engines in Statistical Physics
Entropy 2021, 23(8), 1095; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081095 - 22 Aug 2021
Abstract
Engines are open systems that can generate work cyclically at the expense of an external disequilibrium. They are ubiquitous in nature and technology, but the course of mathematical physics over the last 300 years has tended to make their dynamics in time a theoretical blind spot. This has hampered the usefulness of statistical mechanics applied to active systems, including living matter. We argue that recent advances in the theory of open quantum systems, coupled with renewed interest in understanding how active forces result from positive feedback between different macroscopic degrees of freedom in the presence of dissipation, point to a more realistic description of autonomous engines. We propose a general conceptualization of an engine that helps clarify the distinction between its heat and work outputs. Based on this, we show how the external loading force and the thermal noise may be incorporated into the relevant equations of motion. This modifies the usual Fokker–Planck and Langevin equations, offering a thermodynamically complete formulation of the irreversible dynamics of simple oscillating and rotating engines. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)

Article
A Multi-Objective Multi-Label Feature Selection Algorithm Based on Shapley Value
Entropy 2021, 23(8), 1094; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081094 - 22 Aug 2021
Abstract
Multi-label learning is dedicated to learning functions so that each sample is labeled with its true label set. As data accumulate, feature dimensionality keeps increasing, and high-dimensional information may contain noisy data, making the process of multi-label learning difficult. Feature selection is a technical approach that can effectively reduce the data dimension. In the study of feature selection, multi-objective optimization algorithms have shown excellent global optimization performance, and the Pareto relationship can handle contradictory objectives in the multi-objective problem well. Therefore, a Shapley value-fused feature selection algorithm for multi-label learning (SHAPFS-ML) is proposed. The method takes multi-label criteria as the optimization objectives, and the proposed crossover and mutation operators based on Shapley values are conducive to identifying relevant, redundant and irrelevant features. The comparison of experimental results on real-world datasets reveals that SHAPFS-ML is an effective feature selection method for multi-label classification, which can reduce the classification algorithm’s computational complexity and improve the classification accuracy. Full article
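The Shapley value underlying the operators can be computed exactly for small feature sets by averaging a feature's marginal contribution over all coalitions of the remaining features. A toy sketch with a hypothetical additive score function (for an additive score, the Shapley values must recover the per-feature weights):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, score):
    """Exact Shapley values: phi_i averages the marginal contribution of
    feature i over all coalitions S of the remaining features, with the
    classical weight |S|! * (n - |S| - 1)! / n!."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for r in range(n):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            for S in combinations(others, r):
                total += w * (score(set(S) | {i}) - score(set(S)))
        phi[i] = total
    return phi

# Hypothetical additive score: each feature contributes a fixed weight.
weights = {"f1": 0.5, "f2": 0.3, "f3": 0.2}
phi = shapley_values(list(weights), lambda S: sum(weights[f] for f in S))
print(phi)   # ≈ {'f1': 0.5, 'f2': 0.3, 'f3': 0.2}
```

The exact computation is exponential in the number of features, which is why SHAPFS-ML embeds Shapley information inside evolutionary operators rather than evaluating it over all coalitions.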

Article
Research on the Fastest Detection Method for Weak Trends under Noise Interference
Entropy 2021, 23(8), 1093; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081093 - 22 Aug 2021
Abstract
Trend anomaly detection is the practice of comparing and analyzing current and historical data trends to detect real-time abnormalities in online industrial data streams. It has the advantages of tracking concept drift automatically and predicting trend changes in the shortest time, making it important both for algorithmic research and for industry. However, industrial data streams contain considerable noise that interferes with detecting weak anomalies. In this paper, the fastest detection algorithm "sliding nesting" is adopted: it calculates the data weight in each window by applying variable weights while maintaining trend-effective integration accumulation. The new algorithm changes the traditional calculation of the trend anomaly detection score, which computes the score within a short window. The resulting algorithm, SNWFD–DS, can detect weak trend abnormalities in the presence of noise interference and has significant advantages over other methods. A test on on-site oil drilling data shows that this method can significantly reduce detection delays and improve the detection accuracy of weak trend anomalies under noise interference. Full article
(This article belongs to the Special Issue Information Theory for Anomaly Detection in Complex Systems)
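The SNWFD–DS algorithm itself is not specified in the abstract; as a generic illustration of sliding-window trend scoring (an assumption-laden stand-in, not the authors' method), one can compute the least-squares slope of each window and accumulate same-signed slopes into a trend score:

```python
def window_slopes(stream, width=20):
    """Least-squares slope of each sliding window over a data stream;
    a persistent run of same-signed slopes signals a weak trend."""
    xs = list(range(width))
    x_mean = (width - 1) / 2.0
    sxx = sum((x - x_mean) ** 2 for x in xs)
    slopes = []
    for i in range(len(stream) - width + 1):
        w = stream[i:i + width]
        y_mean = sum(w) / width
        sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, w))
        slopes.append(sxy / sxx)
    return slopes

def trend_score(slopes):
    """Accumulate same-signed slopes; reset when the sign flips."""
    score, best = 0.0, 0.0
    for s in slopes:
        score = score + s if score * s >= 0 else s
        best = max(best, abs(score))
    return best

ramp = [0.1 * t for t in range(60)]          # weak linear trend
slopes = window_slopes(ramp, width=10)
print(slopes[0])                             # each ramp window has slope ≈ 0.1
print(trend_score(slopes))                   # accumulation amplifies the weak trend
```

Accumulation across windows is what lets a weak but persistent trend rise above noise whose window-level slopes cancel in sign.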

Article
Self-Organization, Entropy Generation Rate, and Boundary Defects: A Control Volume Approach
Entropy 2021, 23(8), 1092; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081092 - 22 Aug 2021
Abstract
Self-organization that leads to the discontinuous emergence of optimized new patterns is related to entropy generation and the export of entropy. Compared to the original pattern that the new, self-organized pattern replaces, the new features could involve an abrupt change in the pattern-volume. There is no clear principle of pathway selection for self-organization that is known for triggering a particular new self-organization pattern. The new pattern displays different types of boundary-defects necessary for stabilizing the new order. Boundary-defects can contain high entropy regions of concentrated chemical species. On the other hand, the reorganization (or refinement) of an established pattern is a more kinetically tractable process, where the entropy generation rate varies continuously with the imposed variables that enable and sustain the pattern features. The maximum entropy production rate (MEPR) principle is one possibility that may have predictive capability for self-organization. The scale of shapes that form or evolve during self-organization and reorganization are influenced by the export of specific defects from the control volume of study. The control volume (CV) approach must include the texture patterns to be located inside the CV for the MEPR analysis to be applicable. These hypotheses were examined for patterns that are well-characterized for solidification and wear processes. We tested the governing equations for bifurcations (the onset of new patterns) and for reorganization (the fine tuning of existing patterns) with published experimental data, across the range of solidification morphologies and nonequilibrium phases, for metallic glass and featureless crystalline solids. The self-assembling features of surface-texture patterns for friction and wear conditions were also modeled with the entropy generation (MEPR) principle, including defect production (wear debris). 
We found that surface texture and entropy generation in the control volume could be predictive for self-organization. The main results of this study provide support to the hypothesis that self-organized patterns are a consequence of the maximum entropy production rate per volume principle. Patterns at any scale optimize a certain outcome and have utility. We discuss some similarities between the self-organization behavior of both inanimate and living systems, with ideas regarding the optimizing features of self-organized pattern features that impact functionality, beauty, and consciousness. Full article
(This article belongs to the Special Issue Patterns, Entropy, Surface Textures and Related Applications)

Article
An Improved Chinese String Comparator for Bloom Filter Based Privacy-Preserving Record Linkage
Entropy 2021, 23(8), 1091; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081091 - 22 Aug 2021
Abstract
With the development of information technology, sharing data from multiple sources without privacy disclosure has become a popular topic. Privacy-preserving record linkage (PPRL) can link records that truly match without disclosing personal information. Existing PPRL techniques have mostly been studied for alphabetic languages, which differ greatly from the Chinese language environment. In this paper, Chinese characters (identification fields in record pairs) are encoded into strings composed of letters and numbers by using the SoundShape code, according to their shapes and pronunciations. The SoundShape codes are then encrypted with Bloom filters, and the similarity of encrypted fields is calculated with the Dice similarity. In this method, the false positive rate of the Bloom filter and different proportions of sound code and shape code are considered. Finally, we evaluated the above methods on synthetic datasets, and compared the precision, recall, F1-score and computational time for different values of the false positive rate and proportion. The results showed that our method for PPRL in a Chinese language environment improved the quality of the classification results and outperformed others with a relatively low additional computational cost. Full article
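The Bloom-filter/Dice step of such schemes is standard and easy to sketch: each field's q-grams are hashed into a bit array, and two encodings are compared by the Dice coefficient of their set bits. A minimal illustration on romanized strings (the SoundShape encoding itself is specific to the paper and omitted; the filter size, hash count, and sample names below are illustrative assumptions):

```python
import hashlib

def bloom_encode(field, m=128, k=4, q=2):
    """Hash a string field's q-grams into an m-bit Bloom filter,
    using k seeded SHA-256 hashes per gram."""
    bits = [0] * m
    for i in range(len(field) - q + 1):
        gram = field[i:i + q]
        for seed in range(k):
            h = int.from_bytes(
                hashlib.sha256(f"{seed}:{gram}".encode()).digest()[:8], "big")
            bits[h % m] = 1
    return bits

def dice_similarity(a, b):
    """Dice coefficient of two bit vectors: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

# Identical inputs always score 1.0; unrelated inputs score much lower.
print(dice_similarity(bloom_encode("zhang wei"), bloom_encode("zhang wei")))  # → 1.0
print(dice_similarity(bloom_encode("zhang wei"), bloom_encode("li na")))
```

Because similarity is computed on the encodings, the linkage party never sees the underlying identification fields; the filter's false positive rate trades privacy against linkage precision, which is the tuning knob studied in the paper.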

Article
High-Dimensional Separability for One- and Few-Shot Learning
Entropy 2021, 23(8), 1090; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081090 - 22 Aug 2021
Abstract
This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modification of a legacy AI system, we propose special ‘external’ devices, correctors. Elementary correctors consist of two parts: a classifier that separates the situations with a high risk of error from the situations in which the legacy AI system works well, and a new decision that should be recommended for situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and outputs. If the intrinsic dimensionality of the data is high enough, then the classifiers for correcting a small number of errors can be very simple. According to the blessing-of-dimensionality effects, even simple and robust Fisher’s discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of the data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, multi-correctors for granular data are proposed. The advantages of the multi-corrector technology were demonstrated by examples of correcting errors and learning new classes of objects by a deep convolutional neural network on the CIFAR-10 dataset.
The key problems of non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps, including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and a new domain adaptation PCA. Full article
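A Fisher discriminant corrector of the kind described can be built in a few lines: estimate the two class means, solve the regularized pooled covariance for the discriminant direction, and threshold at the midpoint of the projected means. A sketch on synthetic high-dimensional data (the dimension, class shift, and regularization below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def fisher_corrector(error_cases, good_cases, reg=1e-3):
    """One-shot Fisher discriminant: direction w = Sigma^-1 (mu_e - mu_g),
    threshold b at the midpoint of the projected class means;
    w . x > b flags a sample for correction."""
    e = np.asarray(error_cases, float)
    g = np.asarray(good_cases, float)
    mu_e, mu_g = e.mean(axis=0), g.mean(axis=0)
    centered = np.vstack([e - mu_e, g - mu_g])
    cov = np.cov(centered.T) + reg * np.eye(e.shape[1])  # regularized pooled covariance
    w = np.linalg.solve(cov, mu_e - mu_g)
    b = w @ (mu_e + mu_g) / 2.0
    return w, b

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(200, 20))   # situations the legacy AI handles well
errors = rng.normal(2.0, 1.0, size=(5, 20))   # a handful of error situations
w, b = fisher_corrector(errors, good)
print((errors @ w > b).mean(), (good @ w > b).mean())  # flagged fractions
```

Note that the error class is learned from only five samples: in high dimension the mean shift alone makes the two classes nearly linearly separable, which is the blessing-of-dimensionality effect the abstract invokes.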

Article
Unified Generative Adversarial Networks for Multidomain Fingerprint Presentation Attack Detection
Entropy 2021, 23(8), 1089; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081089 - 21 Aug 2021
Abstract
With the rapid growth of fingerprint-based biometric systems, it is essential to ensure the security and reliability of the deployed algorithms. Indeed, the security vulnerability of these systems has been widely recognized. Thus, it is critical to enhance the generalization ability of fingerprint presentation attack detection (PAD) in cross-sensor and cross-material settings. In this work, we propose a novel solution for addressing the case of a single source domain (sensor) with a large set of labeled real/fake fingerprint images and multiple target domains (sensors) with only a few real images obtained from different sensors. Our aim is to build a model that addresses the limited-sample issue in all target domains by transferring knowledge from the source domain. To this end, we train a unified generative adversarial network (UGAN) for multidomain conversion to learn several mappings between all domains. This allows us to generate additional synthetic images for the target domains from the source domain to reduce the distribution shift between fingerprint representations. Then, we train a scale compound network (EfficientNetV2) coupled with multiple head classifiers (one classifier for each domain) using the source domain and the translated images. The outputs of these classifiers are then aggregated using an additional fusion layer with learnable weights. In the experiments, we validate the proposed methodology on the public LivDet2015 dataset. The experimental results show that the proposed method improves the average classification accuracy over twelve classification scenarios from 67.80% to 80.44% after adaptation. Full article

Article
The Truncated Burr X-G Family of Distributions: Properties and Applications to Actuarial and Financial Data
Entropy 2021, 23(8), 1088; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081088 - 21 Aug 2021
Abstract
In this article, the “truncated-composed” scheme was applied to the Burr X distribution to motivate a new family of univariate continuous-type distributions, called the truncated Burr X generated family. It is mathematically simple and provides more modeling freedom for any parental distribution. Additional functionality is conferred on the probability density and hazard rate functions, improving their peak, asymmetry, tail, and flatness levels. These characteristics are represented analytically and graphically with three special distributions of the family derived from the exponential, Rayleigh, and Lindley distributions. Subsequently, we conducted asymptotic, first-order stochastic dominance, series expansion, Tsallis entropy, and moment studies. Useful risk measures were also investigated. The remainder of the study was devoted to the statistical use of the associated models. In particular, we developed an adapted maximum likelihood methodology aiming to efficiently estimate the model parameters. The special distribution extending the exponential distribution was applied as a statistical model to fit two sets of actuarial and financial data. It performed better than a wide variety of selected competing non-nested models. Numerical applications for risk measures are also given. Full article

Article
Causal Information Rate
Entropy 2021, 23(8), 1087; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081087 - 21 Aug 2021
Abstract
Information processing is common in complex systems, and information geometric theory provides a useful tool to elucidate the characteristics of non-equilibrium processes, such as rare, extreme events, from the perspective of geometry. In particular, their time evolutions can be characterized by the rate (the information rate) at which new information is revealed, i.e., at which a new statistical state is accessed. In this paper, we extend this concept and develop a new information-geometric measure of causality by calculating the effect of one variable on the information rate of the other variable. We apply the proposed causal information rate to the Kramers equation and compare it with the entropy-based causality measure (information flow). Overall, the causal information rate is a sensitive method for identifying causal relations. Full article
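The underlying information rate can be estimated numerically as Γ(t)² = ∫ (∂ₜp)²/p dx. As a hedged sanity check of that quantity only (not the causal extension the paper develops): for a unit-variance Gaussian whose mean drifts at speed v, theory gives Γ = v/σ.

```python
import numpy as np

def information_rate(p0, p1, dt, dx):
    """Finite-difference estimate of Gamma, where
    Gamma(t)^2 = integral of (d p / d t)^2 / p over x,
    the rate at which the distribution accesses new statistical states."""
    p_mid = 0.5 * (p0 + p1)
    dp_dt = (p1 - p0) / dt
    mask = p_mid > 1e-12          # avoid dividing by ~0 in the far tails
    return np.sqrt(np.sum(dp_dt[mask] ** 2 / p_mid[mask]) * dx)

def gaussian(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
sigma, v, dt = 1.0, 0.5, 1e-4     # mean drifts at speed v between snapshots
gamma = information_rate(gaussian(x, 0.0, sigma), gaussian(x, v * dt, sigma), dt, dx)
print(gamma)                      # ≈ v / sigma = 0.5
```

The causal measure in the paper then asks how freezing one variable changes this rate for the other, which requires the joint dynamics rather than a single marginal.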

Article
An Analytical Technique, Based on Natural Transform to Solve Fractional-Order Parabolic Equations
Entropy 2021, 23(8), 1086; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081086 - 21 Aug 2021
Abstract
This research article is dedicated to solving fractional-order parabolic equations using an innovative analytical technique. The Adomian decomposition method, supported by the natural transform, is used to establish closed-form solutions for the targeted problems. The procedure is simple and attractive, and is preferred over other methods because it provides closed-form solutions for the given problems. The solution graphs are plotted for both integer and fractional order, showing that the obtained results are in good agreement with the exact solutions of the problems. It is also observed that the solutions of the fractional-order problems converge to the solution of the corresponding integer-order problem. In conclusion, the current technique is an accurate and straightforward approximate method that can be applied to solve other fractional-order partial differential equations. Full article
(This article belongs to the Special Issue Advanced Numerical Methods for Differential Equations)

Article
Dirichlet Polynomials and Entropy
Entropy 2021, 23(8), 1085; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081085 - 21 Aug 2021
Abstract
A Dirichlet polynomial d in one variable y is a function of the form d(y) = a_n n^y + ⋯ + a_2 2^y + a_1 1^y + a_0 0^y for some n, a_0, …, a_n ∈ ℕ. We will show how to think of a Dirichlet polynomial as a set-theoretic bundle, and thus as an empirical distribution. We can then consider the Shannon entropy H(d) of the corresponding probability distribution, and we define its length (or, classically, its perplexity) by L(d) = 2^H(d). On the other hand, we will define a rig homomorphism h: Dir → Rect from the rig of Dirichlet polynomials to the so-called rectangle rig, whose underlying set is ℝ≥0 × ℝ≥0 and whose additive structure involves the weighted geometric mean; we write h(d) = (A(d), W(d)), and call the two components area and width (respectively). The main result of this paper is the following: the rectangle-area formula A(d) = L(d) · W(d) holds for any Dirichlet polynomial d. In other words, the entropy of an empirical distribution can be calculated entirely in terms of the homomorphism h applied to its corresponding Dirichlet polynomial. We also show that similar results hold for the cross entropy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
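The rectangle-area formula can be checked numerically. Assuming the bundle convention that d(y) = Σᵢ aᵢ·iʸ has aᵢ fibers of size i, a uniform draw from the A = d(1) total elements lands in a size-i fiber with probability i/A; the sketch below computes H, L = 2^H, the area A, and the width W (the weighted geometric mean of the fiber sizes) and verifies A = L·W:

```python
import math

def rectangle_invariants(coeffs):
    """Entropy H, length L = 2^H, area A = d(1), and width W for the
    Dirichlet polynomial d(y) = sum_i a_i * i^y, given as {i: a_i}.

    Assumed convention: a_i fibers of size i; a uniform draw from the
    A total elements lands in a size-i fiber with probability i / A.
    """
    A = sum(a * i for i, a in coeffs.items())
    H = sum(a * (i / A) * math.log2(A / i) for i, a in coeffs.items())
    L = 2.0 ** H
    # Width: weighted geometric mean of fiber sizes, weights i / A per fiber.
    W = 2.0 ** sum(a * (i / A) * math.log2(i) for i, a in coeffs.items())
    return H, L, A, W

d = {1: 2, 2: 1, 4: 1}      # d(y) = 4^y + 2^y + 2*1^y, so A = d(1) = 8
H, L, A, W = rectangle_invariants(d)
print(H)                    # → 1.75
print(A, L * W)             # rectangle-area formula: A = L * W
```

Algebraically the identity is immediate from log₂L + log₂W = log₂A, since H = log₂A − Σᵢ aᵢ(i/A)log₂ i.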

Article
A UoI-Optimal Policy for Timely Status Updates with Resource Constraint
Entropy 2021, 23(8), 1084; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081084 - 20 Aug 2021
Abstract
Timely status updates are critical in remote control systems such as autonomous driving and the industrial Internet of Things, where timeliness requirements are usually context dependent. Accordingly, the Urgency of Information (UoI) has been proposed beyond the well-known Age of Information (AoI) by further including context-aware weights which indicate whether the monitored process is in an emergency. However, the optimal updating and scheduling strategies in terms of UoI remain open. In this paper, we propose a UoI-optimal updating policy for timely status information with resource constraint. We first formulate the problem in a constrained Markov decision process and prove that the UoI-optimal policy has a threshold structure. When the context-aware weights are known, we propose a numerical method based on linear programming. When the weights are unknown, we further design a reinforcement learning (RL)-based scheduling policy. The simulation reveals that the threshold of the UoI-optimal policy increases as the resource constraint tightens. In addition, the UoI-optimal policy outperforms the AoI-optimal policy in terms of average squared estimation error, and the proposed RL-based updating policy achieves a near-optimal performance without the advanced knowledge of the system model. Full article
(This article belongs to the Special Issue Age of Information: Concept, Metric and Tool for Network Control)

Article
Schrödinger’s Ballot: Quantum Information and the Violation of Arrow’s Impossibility Theorem
Entropy 2021, 23(8), 1083; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081083 - 20 Aug 2021
Abstract
We study Arrow’s Impossibility Theorem in the quantum setting. Our work is based on the work of Bao and Halpern, in which it is proved that the quantum analogue of Arrow’s Impossibility Theorem is not valid. However, we find the proof presented in Bao and Halpern’s work unsatisfying. Moreover, the definition of Quantum Independence of Irrelevant Alternatives (QIIA) in their work does not seem appropriate to us. We give a better definition of QIIA, which properly captures the idea of the independence of irrelevant alternatives, and a detailed proof of the violation of Arrow’s Impossibility Theorem in the quantum setting with the modified definition. Full article
(This article belongs to the Special Issue Quantum Communication)
Article
Passive Tracking of Multiple Underwater Targets in Incomplete Detection and Clutter Environment
Entropy 2021, 23(8), 1082; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081082 - 20 Aug 2021
Abstract
A major advantage of using passive sonar to track multiple underwater targets is that the tracking platform can be kept covert, which reduces the risk of being attacked. However, the nonlinearity of the passive Doppler and bearing measurements, the range unobservability problem, and the complexity of data association between measurements and targets make underwater passive multiple target tracking challenging. To deal with these problems, the cardinalized probability hypothesis density (CPHD) recursion, which is based on Bayesian information theory, is developed to handle the data association uncertainty and to acquire the number and states (e.g., position and velocity) of the existing targets. The key idea of the CPHD recursion is to simultaneously estimate the targets’ intensity and the probability distribution of the number of targets. The CPHD recursion is the first-moment approximation of the Bayesian multiple target filter, which avoids the data association procedure between the targets and measurements, including clutter. The extended Kalman filter (EKF) is applied to deal with the nonlinear bearing and Doppler measurements. The experimental results show that the EKF-based CPHD recursion works well in the underwater passive multiple target tracking system in cluttered and noisy environments. Full article

Article
Improving Seismic Inversion Robustness via Deformed Jackson Gaussian
Entropy 2021, 23(8), 1081; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081081 - 20 Aug 2021
Viewed by 395
Abstract
The inversion of seismic data from observations contaminated by spurious measurements (outliers) remains a significant challenge for the industrial and scientific communities. This difficulty stems from the slow processing required to mitigate the influence of the outliers. In this work, we introduce a robust formulation to mitigate the influence of spurious measurements in the seismic inversion process. In this regard, we put forth an outlier-resistant seismic inversion methodology for model estimation based on the deformed Jackson Gaussian distribution. To demonstrate the effectiveness of our proposal, we investigated a classic geophysical inverse problem in three different scenarios: (i) in the first, we analyzed the sensitivity of the seismic inversion to incorrect seismic sources; (ii) in the second, we considered a dataset polluted by Gaussian errors of different noise intensities; and (iii) in the last, we considered a dataset contaminated by many outliers. The results reveal that the deformed Jackson Gaussian outperforms the classical approach, which is based on the standard Gaussian distribution. Full article
(This article belongs to the Section Multidisciplinary Applications)
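The deformed Jackson Gaussian itself is not reproduced in the abstract; as a hedged stand-in, the toy comparison below contrasts a standard Gaussian (least-squares) misfit with a generic heavy-tailed Cauchy misfit, to show why heavy tails damp the influence of a single outlier (all residual values are invented for illustration):

```python
import math

def gaussian_misfit(residuals, sigma=1.0):
    # Negative log-likelihood of i.i.d. Gaussian errors (up to a constant):
    # grows quadratically, so one outlier can dominate the objective.
    return sum((r / sigma) ** 2 / 2.0 for r in residuals)

def cauchy_misfit(residuals, sigma=1.0):
    # Heavy-tailed (Cauchy/Lorentzian) misfit: grows only logarithmically,
    # so large residuals are strongly down-weighted.
    return sum(math.log(1.0 + (r / sigma) ** 2) for r in residuals)

clean = [0.1, -0.2, 0.05, 0.15]
with_outlier = clean + [25.0]          # one spurious measurement
# Relative increase of each objective caused by the single outlier:
g_ratio = gaussian_misfit(with_outlier) / gaussian_misfit(clean)
c_ratio = cauchy_misfit(with_outlier) / cauchy_misfit(clean)
```

Under the Gaussian misfit the single outlier inflates the objective by several orders of magnitude, whereas the heavy-tailed misfit barely moves; the paper's deformed distribution pursues the same robustness with a different functional form.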
Article
Using the Relative Entropy of Linguistic Complexity to Assess L2 Language Proficiency Development
Entropy 2021, 23(8), 1080; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081080 - 20 Aug 2021
Viewed by 429
Abstract
This study applies relative entropy to a naturalistic large-scale corpus to calculate the differences among L2 (second language) learners at different levels. We chose lemmas, tokens, POS-trigrams, and conjunctions to represent lexicon and grammar, and used relative entropy to detect the patterns of language proficiency development among different L2 groups. The results show that the information-distribution discrimination regarding lexical and grammatical differences continues to increase from L2 learners at a lower level to those at a higher level. This result is consistent with the assumption that, in the course of second language acquisition, L2 learners develop towards a more complex and diverse use of language. Meanwhile, to compare with the results of relative entropy, this study uses time-series statistical methods to process the data on L2 differences yielded by traditional frequency-based methods applied to the same L2 corpus. However, the results from the traditional methods rarely show regularity. Compared with the algorithms in traditional approaches, relative entropy performs much better in detecting L2 proficiency development. In this sense, we have developed an effective and practical algorithm for stably detecting and predicting developments in L2 learners' language proficiency. Full article
(This article belongs to the Special Issue Statistical Methods for Complex Systems)
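Relative entropy between two frequency distributions can be sketched directly. The snippet below computes the KL divergence between POS-trigram counts of two toy tag sequences; the tag strings and the smoothing constant are illustrative, not the paper's corpus or estimator:

```python
import math
from collections import Counter

def relative_entropy(p_counts, q_counts, smooth=1e-9):
    """KL divergence D(P || Q) in bits between two frequency distributions,
    with a small additive constant to avoid log(0)."""
    keys = set(p_counts) | set(q_counts)
    p_tot = sum(p_counts.values())
    q_tot = sum(q_counts.values())
    d = 0.0
    for k in keys:
        p = p_counts.get(k, 0) / p_tot + smooth
        q = q_counts.get(k, 0) / q_tot + smooth
        d += p * math.log2(p / q)
    return d

def trigrams(tags):
    # Sliding-window POS trigram counts.
    return Counter(zip(tags, tags[1:], tags[2:]))

# Invented tag sequences: a repetitive "low-level" one vs a more varied one.
low  = "DT NN VB DT NN VB DT NN VB".split()
high = "DT JJ NN VB IN DT NN MD VB RB".split()
```

Comparing the two toy groups, the divergence is large and direction-sensitive, while a distribution compared against itself yields zero, which is the property the study exploits to rank proficiency levels.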
Article
Structural and Parametric Optimization of S–CO2 Nuclear Power Plants
Entropy 2021, 23(8), 1079; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081079 - 19 Aug 2021
Cited by 1 | Viewed by 466
Abstract
The transition to supercritical carbon dioxide as a working fluid for power generation units will significantly reduce the equipment's overall dimensions while increasing fuel efficiency and environmental safety. Structural and parametric optimization of S–CO2 nuclear power plants was carried out to ensure the maximum efficiency of electricity production. Based on the results of mathematical modeling, it was found that the transition to a carbon dioxide working fluid for the nuclear power plant with the BREST–OD–300 reactor leads to an increase in efficiency from 39.8% to 43.1%. The transition of a nuclear power plant from the Rankine water cycle to the carbon dioxide Brayton cycle with recompression is reasonable at working fluid temperatures above 455 °C due to the carbon dioxide cycle's more effective regeneration system. Full article
(This article belongs to the Special Issue Supercritical Fluids for Thermal Energy Applications)
Article
Which Physical Quantity Deserves the Name “Quantity of Heat”?
Entropy 2021, 23(8), 1078; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081078 - 19 Aug 2021
Viewed by 724
Abstract
“What is heat?” was the title of a 1954 article by Freeman J. Dyson, published in Scientific American. Apparently, it was appropriate to ask this question at that time. The answer is given in the very first sentence of the article: heat is disordered energy. We will ask the same question again, but with a different expectation for its answer. Let us imagine that all the thermodynamic knowledge is already available: both the theory of phenomenological thermodynamics and that of statistical thermodynamics, including quantum statistics, but that the term “heat” has not yet been attributed to any of the variables of the theory. With the question “What is heat?” we now mean: which of the physical quantities deserves this name? There are several candidates: the quantities Q, H, E_therm and S. We can then formulate a desideratum, or a profile: What properties should such a measure of the quantity or amount of heat ideally have? Then, we evaluate all the candidates for their suitability. It turns out that the winner is the quantity S, which we know by the name of entropy. In the second part of the paper, we examine why entropy has not succeeded in establishing itself as a measure for the amount of heat, and we show that there is a real chance today to make up for what was missed. Full article
(This article belongs to the Special Issue Nature of Entropy and Its Direct Metrology)
Article
Nestedness-Based Measurement of Evolutionarily Stable Equilibrium of Global Production System
Entropy 2021, 23(8), 1077; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081077 - 19 Aug 2021
Viewed by 405
Abstract
A nested structure is a structural feature, conducive to system stability, formed by the coevolution of biological species in mutualistic ecosystems. The coopetition relationships and value flows between industrial sectors in the global value chain are similar to those of mutualistic ecosystems in nature. That is, the global economic system is always changing, forming one dynamic equilibrium after another. In this paper, a nestedness-based analytical framework is used to define the generalist and specialist sectors for the purpose of analyzing changes in the global supply pattern. We study why the global economic system can reach a stable equilibrium, what role different sectors play in the steady state, and how to enhance the stability of the global economic system. In detail, the domestic trade network, export trade network, and import trade network of each country are extracted. Then, an econometric model is designed to analyze how the microstructure of the production system affects a country's macroeconomic performance. Full article
(This article belongs to the Special Issue Structure and Dynamics of Complex Socioeconomic Networks)
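Nestedness is commonly quantified with the NODF metric of Almeida-Neto et al.; the paper's exact measure may differ, so the following is only a minimal sketch of the idea, in which a pair of rows (or columns) scores by overlap only when their fill strictly decreases:

```python
def nodf(matrix):
    """NODF-style nestedness (0-100) of a binary matrix (rows: e.g. countries,
    columns: e.g. sectors). A sketch of Almeida-Neto et al.'s metric."""
    def axis_scores(rows):
        fills = [sum(r) for r in rows]
        scores = []
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                if fills[i] > fills[j] and fills[j] > 0:
                    overlap = sum(a and b for a, b in zip(rows[i], rows[j]))
                    scores.append(100.0 * overlap / fills[j])
                else:
                    scores.append(0.0)   # strictly decreasing fill is required
        return scores
    cols = [list(c) for c in zip(*matrix)]
    pair_scores = axis_scores(matrix) + axis_scores(cols)
    return sum(pair_scores) / len(pair_scores)

# A perfectly nested (triangular) interaction matrix scores 100.
perfectly_nested = [[1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 0]]
```

A perfectly triangular matrix scores 100, while a diagonal (fully specialized) matrix scores 0, which is the contrast the framework uses to separate generalist from specialist sectors.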
Article
Some Interesting Observations on the Free Energy Principle
Entropy 2021, 23(8), 1076; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081076 - 19 Aug 2021
Cited by 8 | Viewed by 523
Abstract
Biehl et al. (2021) present some interesting observations on an early formulation of the free energy principle. We use these observations to scaffold a discussion of the technical arguments that underwrite the free energy principle. This discussion focuses on solenoidal coupling between various (subsets of) states in sparsely coupled systems that possess a Markov blanket—and the distinction between exact and approximate Bayesian inference, implied by the ensuing Bayesian mechanics. Full article
(This article belongs to the Section Entropy and Biology)
Article
A Non-Contact Fault Diagnosis Method for Bearings and Gears Based on Generalized Matrix Norm Sparse Filtering
Entropy 2021, 23(8), 1075; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081075 - 19 Aug 2021
Viewed by 355
Abstract
Fault diagnosis of mechanical equipment is mainly based on the contact measurement and analysis of vibration signals. In some special working conditions, non-contact fault diagnosis methods, represented by the measurement of acoustic signals, can make up for the lack of contact testing. However, their engineering application value is greatly restricted by the low signal-to-noise ratio (SNR) of the acoustic signal. To address this deficiency, a novel fault diagnosis method based on generalized matrix norm sparse filtering (GMNSF) is proposed in this paper. Specifically, the generalized matrix norm is introduced into sparse filtering to seek the optimal sparse feature distribution and overcome the low SNR of acoustic signals. Firstly, the collected acoustic signals are randomly overlapped to form the sample fragment dataset. Then, three constraints are imposed on the multi-period dataset by the GMNSF model to extract the sparse features in the samples. Finally, softmax is used as a classifier to categorize different fault types. The diagnostic performance of the proposed method is verified on bearing and planetary gear datasets. The results show that the GMNSF model has better feature extraction performance and anti-noise ability than other traditional methods. Full article
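For orientation, the standard sparse filtering objective of Ngiam et al. (l2 normalization over features, then over samples, followed by an l1 penalty) fits in a few lines; the paper's generalized matrix norm variant replaces these norms, and is not reproduced here:

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Standard (l2/l1) sparse filtering objective, to be minimized over W.
    X: (n_samples, n_inputs); W: (n_inputs, n_features).
    The paper generalizes the norms used in the two normalization steps."""
    F = np.abs(X @ W)                                         # feature activations
    F = F / (np.linalg.norm(F, axis=0) + eps)                 # normalize each feature
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)  # then each sample
    return F.sum()                                            # l1 sparsity penalty

# Invented random data and weights, just to evaluate the objective once.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
W = rng.normal(size=(16, 8))
obj = sparse_filtering_objective(W, X)
```

In training, `W` would be optimized by gradient descent on this objective; the double normalization is what forces competition among features and yields the sparse distributions the method relies on.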
Article
Low-Resolution ADCs for Two-Hop Massive MIMO Relay System under Rician Channels
Entropy 2021, 23(8), 1074; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081074 - 19 Aug 2021
Viewed by 315
Abstract
This paper works on building an effective massive multi-input multi-output (MIMO) relay system by increasing the achievable sum rate and energy efficiency. First, we design a two-hop massive MIMO relay system instead of a one-hop system to shorten the distance and create a Line-of-Sight (LOS) path between relays. Second, we apply Rician channels between the relays in this system. Third, we apply low-resolution Analog-to-Digital Converters (ADCs) at both relays to quantize signals, and apply Amplify-and-Forward (AF) and Maximum Ratio Combining (MRC) to the processed signal at relay R1 and relay R2, respectively. Fourth, we use higher-order statistics to derive a closed-form expression for the achievable sum rate. Fifth, we derive the power scaling law and obtain the asymptotic expressions under different power scales. Last, we validate the correctness of the theoretical analysis with numerical simulation results and show the superiority of the two-hop relay system over the one-hop relay system. From both the closed-form expressions and the simulation results, we find that the two-hop system has a higher achievable sum rate than the one-hop system. In addition, the energy efficiency of the two-hop system is higher than that of the one-hop system. Moreover, in the two-hop system, the achievable sum rate converges when the number of quantization bits reaches q = 4. Therefore, deploying low-resolution ADCs can improve the energy efficiency while maintaining a considerable achievable sum rate. Full article
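The convergence around q = 4 bits can be illustrated with the widely used additive quantization noise model (AQNM); the distortion factor below is the standard Gaussian-input approximation for a scalar quantizer, not the paper's exact closed form, and the SNR value is invented:

```python
import math

def aqnm_rate(bits, snr_db=10.0):
    """log2(1 + SINR) of a quantized scalar observation under the AQNM:
    y = (1 - rho)(s + w) + q, with var(q) = rho(1 - rho)(P_s + P_w) and
    rho ~= (pi * sqrt(3) / 2) * 2**(-2*bits) for a Gaussian input."""
    snr = 10 ** (snr_db / 10)
    rho = (math.pi * math.sqrt(3) / 2) * 2 ** (-2 * bits)
    sinr = (1 - rho) * snr / ((1 - rho) + rho * (snr + 1))
    return math.log2(1 + sinr)

rates = {b: aqnm_rate(b) for b in (1, 2, 3, 4, 8, 12)}
```

With these assumed values, the 4-bit rate already sits within roughly 0.2 bit of the 12-bit rate, mirroring the convergence behavior reported in the abstract.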
Article
Isospectral Twirling and Quantum Chaos
Entropy 2021, 23(8), 1073; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081073 - 19 Aug 2021
Cited by 5 | Viewed by 497
Abstract
We show that the most important measures of quantum chaos, such as frame potentials, scrambling, the Loschmidt echo and out-of-time-order correlators (OTOCs), can be described by the unified framework of the isospectral twirling, namely the Haar average of a k-fold unitary channel. We show that such measures can then always be cast in the form of an expectation value of the isospectral twirling. In the literature, quantum chaos is investigated sometimes through the spectrum and at other times through the eigenvectors of the Hamiltonian generating the dynamics. We show that, thanks to this technique, we can interpolate smoothly between integrable Hamiltonians and quantum chaotic Hamiltonians. The isospectral twirling of Hamiltonians with stabilizer-state eigenvectors does not possess chaotic features, unlike those Hamiltonians whose eigenvectors are taken from the Haar measure. As an example, OTOCs obtained with Clifford resources decay to higher values compared with universal resources. By doping Hamiltonians with non-Clifford resources, we show a crossover in the OTOC behavior between a class of integrable models and quantum chaos. Moreover, exploiting random matrix theory, we show that these measures of quantum chaos clearly distinguish the finite-time behavior of probes to quantum chaos corresponding to chaotic spectra given by the Gaussian Unitary Ensemble (GUE) from the integrable spectra given by the Poisson distribution and the Gaussian Diagonal Ensemble (GDE). Full article
(This article belongs to the Special Issue Scrambling of Quantum Information in Chaotic Quantum Systems)
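The GUE-versus-Poisson distinction at the end of the abstract is easy to probe numerically with the consecutive-gap ratio, a standard random-matrix diagnostic (not a construct of this paper); the matrix size and seed below are arbitrary:

```python
import numpy as np

def mean_gap_ratio(eigs):
    """Mean consecutive-gap ratio <r> = <min(s_i, s_{i+1}) / max(s_i, s_{i+1})>;
    known to be ~0.53 for GUE (chaotic) spectra vs ~0.39 for Poisson ones."""
    s = np.diff(np.sort(eigs))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

rng = np.random.default_rng(7)
n = 200
# GUE-like spectrum: eigenvalues of a random complex Hermitian matrix.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
gue_r = mean_gap_ratio(np.linalg.eigvalsh((A + A.conj().T) / 2))
# Poisson spectrum: i.i.d. uncorrelated levels.
poisson_r = mean_gap_ratio(rng.uniform(size=n))
```

The gap ratio is insensitive to the local level density, which is why it cleanly separates level repulsion (GUE) from uncorrelated (Poisson) spectra without unfolding.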
Article
Mathematical Models to Measure the Variability of Nodes and Networks in Team Sports
Entropy 2021, 23(8), 1072; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081072 - 19 Aug 2021
Viewed by 631
Abstract
Pattern analysis is a widely researched topic in team sports performance analysis, using information theory as a conceptual framework. Bayesian methods are also used in this research field, but the association between the two is still being developed. The aim of this paper is to present new mathematical concepts that are based on information and probability theory and can be applied to network analysis in team sports. These results are based on the transition matrices of the Markov chain associated with the adjacency matrices of a network with n nodes, allowing for a more robust analysis of the variability of interactions in team sports. The proposed models refer to individual and collective rates and indexes of total variability between players and teams, as well as the overall passing capacity of a network, all of which are demonstrated using data from the UEFA 2020/2021 Champions League Final. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics II)
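The step from an adjacency (passing) matrix to a Markov transition matrix, plus an entropy-style variability measure on it, can be sketched as follows; the pass counts are invented, and the entropy index is a generic stand-in for the paper's specific rates and indexes:

```python
import math

def transition_matrix(adjacency):
    """Row-normalize a passing matrix (adjacency[i][j] = passes from player i
    to player j) into a Markov chain transition matrix."""
    return [[a / s if s else 0.0 for a in row]
            for row, s in ((r, sum(r)) for r in adjacency)]

def node_variability(p_row):
    """Shannon entropy (bits) of one player's pass distribution; higher
    means the player distributes passes more evenly (more variable play)."""
    return -sum(p * math.log2(p) for p in p_row if p > 0)

# Invented 3-player pass counts (not the Champions League Final data).
passes = [[0, 8, 2],
          [5, 0, 5],
          [1, 9, 0]]
T = transition_matrix(passes)
```

Each row of `T` sums to one, so player-level entropies are directly comparable, and team-level indexes can be built by aggregating them over the network.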
Communication
Latent Network Construction for Univariate Time Series Based on Variational Auto-Encode
Entropy 2021, 23(8), 1071; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081071 - 18 Aug 2021
Viewed by 443
Abstract
Time series analysis is an important branch of information processing, and the conversion of time series into complex networks provides a new means of understanding and analyzing them. In this work, using a Variational Auto-Encoder (VAE), we explored the construction of latent networks for univariate time series. We first trained the VAE to obtain the space of latent probability distributions of the time series and then decomposed the multivariate Gaussian distribution into multiple univariate Gaussian distributions. By measuring the distance between univariate Gaussian distributions on a statistical manifold, the latent network construction was finally achieved. The experimental results show that the latent network can effectively retain the original information of the time series and provides a new data structure for downstream tasks. Full article
(This article belongs to the Special Issue Complex Systems Modeling and Analysis)
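Measuring distances between univariate Gaussians and thresholding them into edges can be sketched as below; the 2-Wasserstein closed form is used here as a convenient stand-in for whatever statistical-manifold distance the paper employs, and the latent parameters are invented:

```python
import math

def w2_gaussian(mu1, sigma1, mu2, sigma2):
    """2-Wasserstein distance between univariate Gaussians, closed form:
    W2^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2."""
    return math.hypot(mu1 - mu2, sigma1 - sigma2)

def latent_network(params, threshold):
    """Connect latent dimensions whose Gaussians are closer than threshold.
    params: list of (mu, sigma) per latent dimension."""
    n = len(params)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if w2_gaussian(*params[i], *params[j]) < threshold]

# Invented latent distributions: two similar dimensions and one distant one.
params = [(0.0, 1.0), (0.1, 1.1), (3.0, 0.5)]
```

In the paper's pipeline, the `(mu, sigma)` pairs would come from the trained VAE encoder rather than being hand-picked as here.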
Article
Estimating Phase Amplitude Coupling between Neural Oscillations Based on Permutation and Entropy
Entropy 2021, 23(8), 1070; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081070 - 18 Aug 2021
Viewed by 363
Abstract
Cross-frequency phase–amplitude coupling (PAC) plays an important role in neuronal oscillation networks, reflecting the interaction between the phase of low-frequency oscillations (LFO) and the amplitude of high-frequency oscillations (HFO). We applied four methods based on permutation analysis to measure PAC: multiscale permutation mutual information (MPMI), permutation conditional mutual information (PCMI), symbolic joint entropy (SJE), and weighted-permutation mutual information (WPMI). To verify these four algorithms, a performance test covering the effects of coupling strength, signal-to-noise ratio (SNR), and data length was carried out on simulated data. The performance of SJE was similar to that of the other approaches when measuring PAC strength, but its computational efficiency was the highest of the four methods. Moreover, SJE can also accurately identify the PAC frequency range under the interference of spike noise. All in all, the results demonstrate that SJE is the best choice for evaluating PAC between neural oscillations. Full article
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications II)
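A minimal sketch of the permutation/entropy ingredients: ordinal-pattern symbolization followed by a normalized symbolic joint entropy, where lower values indicate stronger dependence between the two streams. This is a simplified stand-in for the SJE estimator, applied to an invented test signal rather than phase/amplitude series:

```python
import math
import random
from collections import Counter
from itertools import permutations

def ordinal_symbols(x, order=3):
    """Map a series to ordinal-pattern symbols of the given order."""
    pats = {p: k for k, p in enumerate(permutations(range(order)))}
    return [pats[tuple(sorted(range(order), key=lambda k: x[i + k]))]
            for i in range(len(x) - order + 1)]

def symbolic_joint_entropy(sx, sy):
    """Normalized joint Shannon entropy of two symbol streams; lower values
    indicate stronger dependence (a sketch of the SJE idea)."""
    joint = Counter(zip(sx, sy))
    n = sum(joint.values())
    h = -sum(c / n * math.log2(c / n) for c in joint.values())
    return h / math.log2(len(set(sx)) * len(set(sy)))

# Invented demo: a series paired with itself (coupled) vs with a shuffled copy.
random.seed(42)
x = [math.sin(0.3 * i) for i in range(300)]
sx = ordinal_symbols(x)
shuffled = sx[:]
random.shuffle(shuffled)
coupled = symbolic_joint_entropy(sx, sx)
uncoupled = symbolic_joint_entropy(sx, shuffled)
```

A real PAC analysis would symbolize the LFO phase series and the HFO amplitude envelope separately before computing the joint entropy; the coupled/shuffled contrast here only demonstrates the direction of the measure.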
Article
Heat Transfer and Entropy in a Vertical Porous Plate Subjected to Suction Velocity and MHD
Entropy 2021, 23(8), 1069; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081069 - 18 Aug 2021
Viewed by 428
Abstract
This article presents an investigation of heat transfer in a porous medium adjacent to a vertical plate. The porous medium is subjected to a magnetohydrodynamic effect and suction velocity. The governing equations are nondimensionalized and converted into ordinary differential equations. The resulting equations are solved with the help of the finite difference method. The impact of various parameters, such as the Prandtl number, Grashof number, permeability parameter, radiation parameter, Eckert number, viscous dissipation parameter, and magnetic parameter, on fluid flow characteristics inside the porous medium is discussed. Entropy generation in the medium is analyzed with respect to various parameters, including the Brinkman number and Reynolds number. It is noted that the velocity profile decreases in magnitude with the Prandtl number but increases with the radiation parameter. The Eckert number has a marginal effect on the velocity profile. An increased radiation effect leads to a reduced thermal gradient at the hot surface. Full article
(This article belongs to the Special Issue Entropy in Renewable Energy Systems)
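The finite difference step can be illustrated on a model convection-diffusion boundary value problem (a simplified stand-in for the paper's coupled momentum/energy system; the equation and parameter values are invented), solved with the Thomas algorithm:

```python
def solve_bvp_fd(n=101, pr=0.7, suction=1.0, L=10.0):
    """Central finite differences for theta'' + pr*suction*theta' = 0 on [0, L]
    with theta(0) = 1, theta(L) = 0: a model of a suction-cooled thermal
    boundary layer, discretized as
    (1 - a*h/2) theta[i-1] - 2 theta[i] + (1 + a*h/2) theta[i+1] = 0."""
    h = L / (n - 1)
    a = pr * suction
    m = n - 2                                # number of interior unknowns
    lower = [1 - a * h / 2] * m
    diag = [-2.0] * m
    upper = [1 + a * h / 2] * m
    rhs = [0.0] * m
    rhs[0] -= lower[0] * 1.0                 # boundary theta(0) = 1
    # Thomas algorithm: forward elimination ...
    for i in range(1, m):
        f = lower[i] / diag[i - 1]
        diag[i] -= f * upper[i - 1]
        rhs[i] -= f * rhs[i - 1]
    # ... and back substitution.
    theta = [0.0] * m
    theta[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        theta[i] = (rhs[i] - upper[i] * theta[i + 1]) / diag[i]
    return [1.0] + theta + [0.0]

theta = solve_bvp_fd()
```

This toy problem has the exact solution theta ~ exp(-pr*suction*eta) away from the far boundary, which gives a simple accuracy check for the scheme.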
Article
Dissipation-Driven Selection under Finite Diffusion: Hints from Equilibrium and Separation of Time Scales
Entropy 2021, 23(8), 1068; https://0-doi-org.brum.beds.ac.uk/10.3390/e23081068 - 17 Aug 2021
Viewed by 519
Abstract
When exposed to a thermal gradient, reaction networks can convert thermal energy into the chemical selection of states that would be unfavourable at equilibrium. The kinetics of reaction paths, and thus how fast they dissipate available energy, might be dominant in dictating the stationary populations of all chemical states out of equilibrium. This phenomenology has been theoretically explored mainly in the infinite diffusion limit. Here, we show that the regime in which the diffusion rate is finite, and also slower than some chemical reactions, might bring about interesting features, such as the maximisation of selection or the switch of the selected state at stationarity. We introduce a framework, rooted in a time-scale separation analysis, which is able to capture leading non-equilibrium features using only equilibrium arguments under well-defined conditions. In particular, it is possible to identify fast-dissipation sub-networks of reactions whose Boltzmann equilibrium dominates the steady-state of the entire system as a whole. Finally, we also show that the dissipated heat (and so the entropy production) can be estimated, under some approximations, through the heat capacity of fast-dissipation sub-networks. This work provides a tool to develop an intuitive equilibrium-based grasp on complex non-isothermal reaction networks, which are important paradigms to understand the emergence of complex structures from basic building blocks. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)
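The claim that a fast sub-network's Boltzmann equilibrium can dominate the steady state is easy to check on a toy master equation; the three-state network, rates, and energy gap below are invented for illustration (units with k_B T = 1):

```python
import math

def steady_state(K, dt=1e-3, steps=100000):
    """Stationary distribution of dp/dt = K p via explicit Euler stepping;
    K's columns are outflow-balanced, so probability is conserved."""
    n = len(K)
    p = [1.0 / n] * n
    for _ in range(steps):
        p = [p[i] + dt * sum(K[i][j] * p[j] for j in range(n)) for i in range(n)]
    s = sum(p)
    return [x / s for x in p]

# States 0 and 1 exchange FAST with Boltzmann-ratio rates exp(-dE);
# state 2 couples SLOWLY and symmetrically to state 0.
dE = 1.0
fast, slow = 100.0, 0.1
K = [[0.0] * 3 for _ in range(3)]
def add_rate(i, j, k):                 # elementary transition j -> i
    K[i][j] += k
    K[j][j] -= k
add_rate(1, 0, fast * math.exp(-dE))   # 0 -> 1, uphill by dE
add_rate(0, 1, fast)                   # 1 -> 0, downhill
add_rate(2, 0, slow)                   # slow leak 0 <-> 2
add_rate(0, 2, slow)
p = steady_state(K)
```

At stationarity the fast pair satisfies p[1]/p[0] = exp(-dE), i.e. its internal Boltzmann equilibrium, regardless of the slow coupling; in genuinely non-isothermal networks this separation is what lets equilibrium arguments approximate the non-equilibrium steady state.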