
Entropy, Volume 22, Issue 8 (August 2020) – 111 articles

Cover Story: The hermiticity of closed quantum systems is a fundamental principle; however, a real physical system cannot be completely isolated from its environment. The Lamb shift generally describes this gap by quantifying the difference between energy eigenvalues. However, when openness effects are involved, the Lamb shift and the collective Lamb shift between two subsystems can be very weak. Accordingly, we need to consider a different disparity, not of eigenvalues but of eigenfunctions. Here, we exploit the relative entropy to quantify this gap and find that the average relative entropy is large for the collective Lamb shift, while that for the self-energy is small. Furthermore, weak and strong interactions in the non-Hermitian system display an obvious exchange of eigenfunctions. View this paper
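The relative entropy used here to compare eigenfunctions is, in its standard form, the Kullback–Leibler divergence between the probability distributions |ψ|² of two states. A minimal numerical sketch (the two discretized eigenfunctions below are illustrative stand-ins, not taken from the paper):

```python
import numpy as np

def relative_entropy(psi_a, psi_b, eps=1e-12):
    """Kullback-Leibler divergence D(p_a || p_b) between the probability
    distributions p = |psi|^2 of two discretized eigenfunctions."""
    p = np.abs(psi_a) ** 2
    q = np.abs(psi_b) ** 2
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Two illustrative wavefunctions on a 1-D grid
x = np.linspace(-5, 5, 201)
psi1 = np.exp(-x**2 / 2)       # ground-state-like Gaussian profile
psi2 = x * np.exp(-x**2 / 2)   # first-excited-like profile

print(relative_entropy(psi1, psi1))  # 0.0 for identical states
print(relative_entropy(psi1, psi2))  # positive for distinct states
```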
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Creative
How It All Began
Entropy 2020, 22(8), 908; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080908 - 18 Aug 2020
Cited by 13 | Viewed by 1225
Abstract
The first paper published as Finite-Time Thermodynamics is from 1977 [...] Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Review
Polygenic Adaptation in a Population of Finite Size
Entropy 2020, 22(8), 907; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080907 - 18 Aug 2020
Cited by 1 | Viewed by 1015
Abstract
Polygenic adaptation in response to selection on quantitative traits has become an important topic in evolutionary biology. Here we review the recent literature on models of polygenic adaptation. In particular, we focus on a model that includes mutation and both directional and stabilizing selection on a highly polygenic trait in a population of finite size (thus experiencing random genetic drift). Assuming that a sudden environmental shift of the fitness optimum occurs while the population is in a stochastic equilibrium, we analyze the adaptation of the trait to the new optimum. When the shift is not too large relative to the equilibrium genetic variance and this variance is determined by loci with mostly small effects, the approach of the mean phenotype to the optimum can be approximated by a rapid exponential process (whose rate is proportional to the genetic variance). During this rapid phase the underlying changes to allele frequencies, however, may depend strongly on genetic drift. While trait-increasing alleles with intermediate equilibrium frequencies are dominated by selection and contribute positively to changes of the trait mean (i.e., are aligned with the direction of the optimum shift), alleles with low or high equilibrium frequencies show more of a random dynamics, which is expected when drift is dominating. A strong effect of drift is also predicted for population size bottlenecks. Our simulations show that the presence of a bottleneck results in a larger deviation of the population mean of the trait from the fitness optimum, which suggests that more loci experience the influence of drift. Full article
(This article belongs to the Special Issue Statistical Physics of Living Systems)

Article
From Knowledge Transmission to Knowledge Construction: A Step towards Human-Like Active Learning
Entropy 2020, 22(8), 906; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080906 - 18 Aug 2020
Cited by 2 | Viewed by 1238
Abstract
Machines usually employ a guess-and-check strategy to analyze data: they take the data, make a guess, check the answer, adjust it with regard to the correct one if necessary, and try again on a new data set. An active learning environment guarantees better performance while training on less, but carefully chosen, data which reduces the costs of both annotating and analyzing large data sets. This issue becomes even more critical for deep learning applications. Human-like active learning integrates a variety of strategies and instructional models chosen by a teacher to contribute to learners’ knowledge, while machine active learning strategies lack versatile tools for shifting the focus of instruction away from knowledge transmission to learners’ knowledge construction. We approach this gap by considering an active learning environment in an educational setting. We propose a new strategy that measures the information capacity of data using the information function from the four-parameter logistic item response theory (4PL IRT). We compared the proposed strategy with the most common active learning strategies—Least Confidence and Entropy Sampling. The results of computational experiments showed that the Information Capacity strategy shares similar behavior but provides a more flexible framework for building transparent knowledge models in deep learning. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
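The two baseline strategies the authors compare against, Least Confidence and Entropy Sampling, are standard acquisition functions with simple closed forms. A minimal sketch of both (the toy probability matrix is illustrative):

```python
import numpy as np

def least_confidence(probs):
    """Least Confidence score: 1 - max class probability; higher = more informative."""
    return 1.0 - probs.max(axis=1)

def entropy_sampling(probs, eps=1e-12):
    """Shannon entropy of the predicted class distribution; higher = more informative."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Illustrative model predictions for three unlabeled samples over three classes
probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> low acquisition score
    [0.40, 0.35, 0.25],   # uncertain -> high acquisition score
    [0.70, 0.20, 0.10],
])

# Both strategies query the sample the model is least sure about
print(int(np.argmax(least_confidence(probs))))  # 1
print(int(np.argmax(entropy_sampling(probs))))  # 1
```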

Article
Project Management Monitoring Based on Expected Duration Entropy
Entropy 2020, 22(8), 905; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080905 - 18 Aug 2020
Viewed by 966
Abstract
Projects are rarely executed exactly as planned. Often, the actual durations of a project’s activities differ from the planned durations, resulting in costs stemming from the inaccurate estimation of the activity’s completion date. While monitoring a project at various inspection points is costly, it can lead to a better estimation of the project completion time, hence saving costs. Nonetheless, identifying the optimal inspection points is a difficult task, as it requires evaluating a large number of the project’s path options, even for small-scale projects. This paper proposes an analytical method for identifying the optimal project inspection points by using information theory measures. We search for monitoring (inspection) points that can maximize the information about the project’s estimated duration or completion time. The proposed methodology is based on a simulation-optimization scheme using a Monte Carlo engine that simulates potential activity durations. An exhaustive search of all possible monitoring points is performed to find those with the highest expected information gain on the project duration. The proposed algorithm’s complexity is little affected by the number of activities, and the algorithm can address large projects with hundreds or thousands of activities. Numerical experimentation and an analysis of various parameters are presented. Full article
(This article belongs to the Special Issue Applications of Information Theory to Industrial and Service Systems)
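The core idea — picking the inspection activity whose observed duration is expected to reduce uncertainty about the total project duration the most — can be sketched with a Monte Carlo estimate of information gain. The three-activity serial project and duration distributions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(samples, edges):
    """Shannon entropy (nats) of samples discretized onto fixed bin edges."""
    counts, _ = np.histogram(samples, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Illustrative serial project: three activities; the second is the most variable,
# so inspecting it should tell us the most about the total duration.
n = 100_000
durations = np.column_stack([
    rng.normal(10, 1.0, n),
    rng.normal(10, 4.0, n),
    rng.normal(10, 0.5, n),
])
total = durations.sum(axis=1)

edges = np.histogram_bin_edges(total, bins=40)  # shared bins for a fair comparison
h_total = entropy(total, edges)

gains = []
for k in range(durations.shape[1]):
    # Expected remaining entropy of the total after observing activity k's
    # duration (coarsened into deciles of its distribution)
    q = np.quantile(durations[:, k], np.linspace(0, 1, 11))
    idx = np.clip(np.searchsorted(q, durations[:, k]) - 1, 0, 9)
    h_cond = sum((idx == j).mean() * entropy(total[idx == j], edges) for j in range(10))
    gains.append(h_total - h_cond)

best = int(np.argmax(gains))
print(best)  # 1: the most variable activity is the most informative inspection point
```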

Article
Classification of Literary Works: Fractality and Complexity of the Narrative, Essay, and Research Article
Entropy 2020, 22(8), 904; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080904 - 17 Aug 2020
Cited by 1 | Viewed by 925
Abstract
A complex network as an abstraction of a language system has attracted much attention during the last decade. Linguistic typological research using quantitative measures is a current research topic based on the complex network approach. This research examines node degree, betweenness, shortest path length, clustering coefficient, and nearest neighbourhoods’ degree, as well as more complex measures such as the fractal dimension, the complexity of a given network, the Area Under Box-covering, and the Area Under the Robustness Curve. The literary works of Mexican writers were classified according to their genre. Precisely 87% of the full word co-occurrence networks were classified as fractal. Also, empirical evidence is presented that supports the conjecture that lemmatisation of the original text is a renormalisation process of the networks that preserves their fractal property and reveals stylistic attributes by genre. Full article
(This article belongs to the Special Issue Computation in Complex Networks)

Article
E-Bayesian Estimation for the Weibull Distribution under Adaptive Type-I Progressive Hybrid Censored Competing Risks Data
Entropy 2020, 22(8), 903; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080903 - 17 Aug 2020
Cited by 5 | Viewed by 910
Abstract
This article focuses on using E-Bayesian estimation for the Weibull distribution based on adaptive type-I progressive hybrid censored competing risks (AT-I PHCS). The case of the Weibull distribution for the underlying lifetimes is considered assuming a cumulative exposure model. The E-Bayesian estimation is discussed by considering three different prior distributions for the hyper-parameters. The E-Bayesian estimators, as well as the corresponding E-mean square errors, are obtained by using squared and LINEX loss functions. Some properties of the E-Bayesian estimators are also derived. A simulation study comparing the various estimators and a real data application are presented to show the applicability of the proposed estimators. Full article
Article
A Novel Model on Reinforce K-Means Using Location Division Model and Outlier of Initial Value for Lowering Data Cost
Entropy 2020, 22(8), 902; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080902 - 17 Aug 2020
Cited by 3 | Viewed by 1088
Abstract
Today, semi-structured and unstructured data are mainly collected and analyzed for data analysis applicable to various systems. Such data have a dense spatial distribution and usually contain outliers and noise. There have been ongoing research studies on clustering algorithms to classify such data. The K-means algorithm is one of the most investigated clustering algorithms. Researchers have pointed out several problems, such as the number of clusters, K, being chosen at random by an analyst; biased results in data classification through the connection of nodes in dense data; and higher implementation costs and lower accuracy depending on the model used to select the initial centroids. Most K-means researchers have also pointed out that, when K is too large or too small, outliers may be assigned to external or other clusters instead of the appropriate ones. Thus, the present study analyzed problems with the selection of initial centroids in the existing K-means algorithm and investigated a new K-means algorithm for selecting initial centroids. The present study proposed a method of cutting down clustering calculation costs by applying an initial center point approach based on space division and outliers, so that no object would be strongly dependent on the initial cluster center. Since data containing outliers can lead to inappropriate results when they are reflected in the choice of a cluster’s center point, the study proposed an algorithm to minimize the error rates of outliers based on an improved algorithm for space division and distance measurement. The performance experiments show that the proposed algorithm lowered execution costs by about 13–14% compared with those of previous studies when the volume of clustering data or the number of clusters increased. It also recorded a lower frequency of outliers, a lower effectiveness index (which assesses performance deterioration due to outliers), and a reduction of outliers by about 60%. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Article
An Intelligent Multi-View Active Learning Method Based on a Double-Branch Network
Entropy 2020, 22(8), 901; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080901 - 17 Aug 2020
Cited by 1 | Viewed by 1012
Abstract
Artificial intelligence is one of the most popular topics in computer science. The convolutional neural network (CNN), an important deep learning model, has been widely used in many fields. However, training a CNN requires a large amount of labeled data to achieve good performance, and labeling data is time-consuming and laborious. Since active learning can effectively reduce the labeling effort, we propose a new intelligent active learning method for deep learning, called multi-view active learning based on a double-branch network (MALDB). Different from most existing active learning methods, our proposed MALDB first integrates two Bayesian convolutional neural networks (BCNNs) with different structures as two branches of a classifier to learn effective features for each sample. Then, MALDB performs data analysis on the unlabeled dataset and queries useful unlabeled samples based on the different characteristics of the two branches to iteratively expand the training dataset and improve the performance of the classifier. Finally, MALDB combines multi-level information from multiple hidden layers of the BCNNs to further improve the stability of sample selection. Experiments were conducted on five widely used datasets (Fashion-MNIST, Cifar-10, SVHN, Scene-15, and UIUC-Sports), and the experimental results demonstrate the validity of our proposed MALDB. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Article
Understanding of Collective Atom Phase Control in Modified Photon Echoes for a Near-Perfect Storage Time-Extended Quantum Memory
Entropy 2020, 22(8), 900; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080900 - 15 Aug 2020
Cited by 1 | Viewed by 1076
Abstract
A near-perfect storage time-extended photon echo-based quantum memory protocol has been analyzed by solving the Maxwell–Bloch equations for a backward scheme in a three-level system. The backward photon echo scheme is combined with a controlled coherence conversion process via controlled Rabi flopping to a third state, where the control Rabi flopping collectively shifts the phase of the ensemble coherence. The propagation direction of photon echoes is coherently determined by the phase-matching condition between the data (quantum) and the control (classical) pulses. Herein, we discuss the classical controllability of a quantum state for both phase and propagation direction by manipulating the control pulses in both single and double rephasing photon echo schemes of a three-level system. Compared with the well-understood uses of two-level photon echoes, the Maxwell–Bloch equations for a three-level system have a critical limitation regarding the phase change when interacting with an arbitrary control pulse area. Full article
(This article belongs to the Special Issue Quantum Information Processing)

Article
On the Capacity of Amplitude Modulated Soliton Communication over Long Haul Fibers
Entropy 2020, 22(8), 899; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080899 - 15 Aug 2020
Cited by 1 | Viewed by 1184
Abstract
The capacity limits of fiber-optic communication systems in the nonlinear regime are not yet well understood. In this paper, we study the capacity of amplitude modulated first-order soliton transmission, defined as the maximum of the so-called time-scaled mutual information. Such a definition allows us to directly incorporate the dependence of the soliton pulse width on its amplitude into the capacity formulation. The commonly used memoryless channel model based on the noncentral chi-squared distribution is initially considered. Applying a variance normalizing transform, this channel is approximated by a unit-variance additive white Gaussian noise (AWGN) model. Based on a numerical capacity analysis of the approximated AWGN channel, a general form of capacity-approaching input distributions is determined. These optimal distributions are discrete, comprising a mass point at zero (off symbol) and a finite number of mass points almost uniformly distributed away from zero. Using this general form of input distributions, a novel closed-form approximation of the capacity is determined, showing a good match to numerical results. Finally, mismatch capacity bounds are developed based on split-step simulations of the nonlinear Schrödinger equation considering both single soliton and soliton sequence transmissions. This relaxes the initial assumption of a memoryless channel to show the impact of both inter-soliton interaction and Gordon–Haus effects. Our results show that the inter-soliton interaction effect becomes increasingly significant at higher soliton amplitudes and would be the dominant impairment compared to the timing jitter induced by the Gordon–Haus effect. Full article
(This article belongs to the Special Issue Information Theory of Optical Fiber)
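The variance-normalizing transform mentioned in the abstract exploits a standard property: for a noncentral chi-squared variable, the raw variance grows linearly with the noncentrality parameter, but the variance of its square root tends to a constant, so after suitable scaling the channel looks like unit-variance AWGN. An illustrative numerical check (the parameter values are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# For noncentral chi-squared X with noncentrality lam, Var(X) = 2(k + 2*lam)
# grows with lam, while Var(sqrt(X)) stays close to 1: the square root acts
# as a variance-normalizing transform.
k = 2  # degrees of freedom
raw_vars, sqrt_vars = [], []
for lam in [10.0, 40.0, 160.0]:
    x = rng.noncentral_chisquare(k, lam, size=200_000)
    raw_vars.append(float(np.var(x)))
    sqrt_vars.append(float(np.var(np.sqrt(x))))

print([round(v) for v in raw_vars])      # grows roughly linearly with lam
print([round(v, 2) for v in sqrt_vars])  # nearly constant, close to 1
```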

Editorial
Entropy in Image Analysis II
Entropy 2020, 22(8), 898; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080898 - 15 Aug 2020
Viewed by 841
Abstract
Image analysis is a fundamental task for any application where extracting information from images is required [...] Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
Article
Trading Imbalance in Chinese Stock Market—A High-Frequency View
Entropy 2020, 22(8), 897; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080897 - 15 Aug 2020
Cited by 1 | Viewed by 1201
Abstract
Although an imbalance of buying and selling profoundly affects the formation of market trends, a fine-granularity investigation of this complexity of trading behavior is still missing. Instead of using existing entropy measures, this paper proposes a new indicator based on the transaction dataset that enables us to inspect both the direction and the magnitude of this imbalance at high frequency, which we call “polarity”. The polarity aims to measure the unevenness of the underlying trading desire based on the most granular decision-making units. We investigate the relationship between the polarity and the return at both the market level and the stock level and find that the autocorrelated polarities cause a positive relation between lagged polarities and returns, while the relation with the current polarity is the opposite. It is also revealed that these associations shift according to market conditions. In fact, when aggregating the one-minute polarities into daily signals, we find not only significant correlations between the market polarity and market emotion, but also that these signals reliably reflect transitions of market-level behavior. These results imply that the presented polarity can reflect market sentiment and conditions in real time. Indeed, the trading polarity provides a new indicator from a high-frequency perspective to understand and foresee the market’s behavior in a data-driven manner. Full article

Article
Inferring an Observer’s Prediction Strategy in Sequence Learning Experiments
Entropy 2020, 22(8), 896; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080896 - 15 Aug 2020
Cited by 1 | Viewed by 1162
Abstract
Cognitive systems exhibit astounding prediction capabilities that allow them to reap rewards from regularities in their environment. How do organisms predict environmental input and how well do they do it? As a prerequisite to answering that question, we first address the limits on prediction strategy inference, given a series of inputs and predictions from an observer. We study the special case of Bayesian observers, allowing for a probability that the observer randomly ignores data when building her model. We demonstrate that an observer’s prediction model can be correctly inferred for binary stimuli generated from a finite-order Markov model. However, we cannot necessarily infer the model’s parameter values unless we have access to several “clones” of the observer. As stimuli become increasingly complicated, correct inference requires exponentially more data points, computational power, and computational time. These factors place a practical limit on how well we are able to infer an observer’s prediction strategy in an experimental or observational setting. Full article
(This article belongs to the Special Issue Information Theory for Human and Social Processes)

Article
Non-Markovianity of a Central Spin Interacting with a Lipkin–Meshkov–Glick Bath via a Conditional Past–Future Correlation
Entropy 2020, 22(8), 895; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080895 - 15 Aug 2020
Cited by 1 | Viewed by 837
Abstract
Based on conditional past–future (CPF) correlations, we study the non-Markovianity of a central spin coupled to an isotropic Lipkin–Meshkov–Glick (LMG) bath. Although the dynamics of the system is always non-Markovian, it is found that, for a specific process and a particular set of CPF measurement operators, the CPF can be zero at some measurement time intervals, which means that in this case the non-Markovianity of the system could not be detected. Furthermore, the initial system–bath correlations only slightly influence the non-Markovianity of the system in our model. Significantly, it is also found that the dynamics of the system for LMG baths initially in the ground states corresponding to the symmetric phase and the symmetry-broken phase exhibit different properties, and the maximal value of the CPF at the critical point is the smallest, independent of the measurement operator, which means that the criticality can manifest itself through the CPF. Moreover, the effect of bath temperature on the quantum criticality of the CPF depends on the measurement operator. Full article

Article
Dynamic Defense against Stealth Malware Propagation in Cyber-Physical Systems: A Game-Theoretical Framework
Entropy 2020, 22(8), 894; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080894 - 15 Aug 2020
Viewed by 1235
Abstract
Stealth malware is a representative tool of advanced persistent threat (APT) attacks, which poses an increased threat to cyber-physical systems (CPS) today. Due to its use of stealthy and evasive techniques, stealth malware usually renders conventional heavy-weight countermeasures inapplicable. Light-weight countermeasures, on the other hand, can help retard the spread of stealth malware, but the ensuing side effects might violate the primary safety requirement of CPS. Hence, defenders need to find a balance between the gain and loss of deploying light-weight countermeasures, which normally is a challenging task. To address this challenge, we model the persistent anti-malware process as a shortest-path tree interdiction (SPTI) Stackelberg game with both a static version (SSPTI) and a multi-stage dynamic version (DSPTI), and safety requirements of CPS are introduced as constraints in the defender’s decision model. The attacker aims to stealthily penetrate the CPS at the lowest cost (e.g., time, effort) by selecting optimal network links to spread, while the defender aims to retard the malware epidemic as much as possible. Both games are modeled as bi-level integer programs and proved to be NP-hard. We then develop a Benders decomposition algorithm to achieve the Stackelberg equilibrium of SSPTI, and design a Model Predictive Control strategy to solve DSPTI approximately by sequentially solving a 1+δ approximation of SSPTI. Extensive experiments have been conducted by comparing the proposed algorithms and strategies with existing ones on both static and dynamic performance metrics. The evaluation results demonstrate the efficiency of the proposed algorithms and strategies on both simulated and real-case-based CPS networks. Furthermore, the proposed dynamic defense framework shows its advantage of achieving a balance between fail-secure ability and fail-safe ability while retarding stealth malware propagation in CPS. Full article

Article
Separated Channel Attention Convolutional Neural Network (SC-CNN-Attention) to Identify ADHD in Multi-Site Rs-fMRI Dataset
Entropy 2020, 22(8), 893; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080893 - 14 Aug 2020
Cited by 6 | Viewed by 1898
Abstract
The accurate identification of an attention deficit hyperactivity disorder (ADHD) subject has remained a challenge for both neuroscience research and clinical diagnosis. Unfortunately, the traditional methods concerning the classification model and feature extraction usually depend on a single-channel model and static measurements (i.e., functional connectivity, FC) in small, homogeneous single-site datasets, which is limited and may cause the loss of intrinsic information in functional MRI (fMRI). In this study, we proposed a new two-stage network structure by combining a separated channel convolutional neural network (SC-CNN) with an attention-based network (SC-CNN-attention) to discriminate ADHD and healthy controls on a large-scale multi-site database (5 sites and n = 1019). To utilize both the intrinsic temporal features and the temporally dependent interactions in whole-brain resting-state fMRI, in the first stage of our proposed network structure, an SC-CNN is used to learn the temporal feature of each brain region, and an attention network in the second stage is adopted to capture temporally dependent features among regions and extract fusion features. Using a “leave-one-site-out” cross-validation framework, our proposed method obtained a mean classification accuracy of 68.6% on five different sites, which is higher than those reported in previous studies. The classification results demonstrate that our proposed network is robust to data variants and is also replicated across sites. The combination of the SC-CNN with the attention network is powerful in capturing the intrinsic fMRI information to discriminate ADHD across multi-site resting-state fMRI data. Full article

Correction
Correction: Contreras-Reyes, J.E.; Cortés, D.D. Bounds on Rényi and Shannon Entropies for Finite Mixtures of Multivariate Skew-Normal Distributions: Application to Swordfish (Xiphias gladius Linnaeus). Entropy 2016, 18, 382
Entropy 2020, 22(8), 892; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080892 - 14 Aug 2020
Viewed by 1192
Abstract
Section 3.3 of “Contreras-Reyes, J.E.; Cortés, D.D. Bounds on Rényi and Shannon Entropies for Finite Mixtures of Multivariate Skew-Normal Distributions: Application to Swordfish (Xiphias gladius Linnaeus). Entropy 2016, 18, 382” contains errors. Therefore, this section is retracted. However, these changes do not influence the conclusions and the other results of the paper. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
Article
Finite-Time Thermodynamics in Economics
Entropy 2020, 22(8), 891; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080891 - 13 Aug 2020
Cited by 10 | Viewed by 971
Abstract
In this paper, we consider optimal trading processes in economic systems. The analysis is based on accounting for irreversibility factors using the wealth function concept. The existence of the welfare function is proved, the concept of capital dissipation is introduced as a measure of the irreversibility of processes in the microeconomic system, and the economic balances, including capital dissipation, are written down. Problems of finding the kinetic equations that lead to given conditions of minimal dissipation are considered. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
Article
Bayesian3 Active Learning for the Gaussian Process Emulator Using Information Theory
Entropy 2020, 22(8), 890; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080890 - 13 Aug 2020
Cited by 3 | Viewed by 1484
Abstract
Gaussian process emulators (GPE) are a machine learning approach that replicates computationally demanding models using training runs of that model. Constructing such a surrogate is very challenging and, in the context of Bayesian inference, the training runs should be well invested. The current paper offers a fully Bayesian view on GPEs for Bayesian inference accompanied by Bayesian active learning (BAL). We introduce three BAL strategies that adaptively identify training sets for the GPE using information-theoretic arguments. The first strategy relies on the Bayesian model evidence, which indicates how well the GPE matches the measurement data; the second is based on the relative entropy, which indicates the relative information gain for the GPE; and the third is founded on the information entropy, which indicates the information missing from the GPE. We illustrate the performance of the three strategies using analytical and carbon-dioxide benchmarks. The paper shows evidence of convergence against a reference solution and demonstrates quantification of post-calibration uncertainty by comparing the three strategies. We conclude that the Bayesian model evidence-based and relative entropy-based strategies outperform the entropy-based strategy because the latter can be misleading during BAL. The relative entropy-based strategy demonstrates superior performance to the Bayesian model evidence-based strategy. Full article
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)
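The information-entropy strategy contrasted above can be sketched in a few lines of NumPy. The toy zero-mean GP emulator, the 1-D model, the RBF kernel, and the length-scale below are our own illustrative assumptions, not the authors' setup; the sketch only shows the selection rule: pick the next training run where the Gaussian predictive entropy 0.5 ln(2πe σ²) is largest.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_predict(x_tr, y_tr, x_cand, noise=1e-6):
    """Posterior mean and variance of a zero-mean, unit-variance GP."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_cand, x_tr)
    mean = Ks @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 1e-12)

def next_run_by_entropy(x_tr, y_tr, x_cand):
    """Entropy-based BAL: the Gaussian predictive entropy 0.5*ln(2*pi*e*var)
    is monotone in the variance, so choose the candidate the emulator is
    most uncertain about."""
    _, var = gp_predict(x_tr, y_tr, x_cand)
    entropy = 0.5 * np.log(2 * np.pi * np.e * var)
    return x_cand[np.argmax(entropy)]

x_tr = np.array([0.1, 0.5, 0.9])
y_tr = np.sin(2 * np.pi * x_tr)        # stand-in for an expensive model
x_cand = np.linspace(0.0, 1.0, 101)
x_next = next_run_by_entropy(x_tr, y_tr, x_cand)  # lands at a domain edge,
# the region farthest (in kernel distance) from the existing training runs
```

Because the Gaussian entropy is monotone in the variance, this strategy reduces to maximum-variance sampling and ignores the measurement data entirely; the evidence-based and relative entropy-based strategies additionally weight candidates by their fit to the data, which is consistent with the paper's finding that the pure entropy criterion can mislead.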
Article
Is the Free-Energy Principle a Formal Theory of Semantics? From Variational Density Dynamics to Neural and Phenotypic Representations
Entropy 2020, 22(8), 889; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080889 - 13 Aug 2020
Cited by 20 | Viewed by 3015
Abstract
The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance—in relation to the ontological and epistemological status of representations—is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account—an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the ‘aboutness’ or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors. Full article
Article
Data-Dependent Conditional Priors for Unsupervised Learning of Multimodal Data
Entropy 2020, 22(8), 888; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080888 - 13 Aug 2020
Cited by 1 | Viewed by 1138
Abstract
One of the major shortcomings of variational autoencoders is the inability to produce generations from the individual modalities of data originating from mixture distributions. This is primarily due to the use of a simple isotropic Gaussian as the prior for the latent code in the ancestral sampling procedure for data generation. In this paper, we propose a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), with a two-level generative process for the observed data in which a continuous variable z and a discrete variable c are introduced in addition to the observed variables x. By learning data-dependent conditional priors, the new variational objective naturally encourages a better match between the posterior and prior conditionals, as well as the learning of latent categories that encode the major source of variation in the original data in an unsupervised manner. By sampling the continuous latent code from the data-dependent conditional priors, we are able to generate new samples from the individual mixture components corresponding to the multimodal structure of the original data. Moreover, we unify and analyse our objective under different independence assumptions for the joint distribution of the continuous and discrete latent variables. We provide an empirical evaluation on one synthetic dataset and three image datasets, FashionMNIST, MNIST, and Omniglot, illustrating the generative performance of our new model compared to multiple baselines. Full article
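The two-level ancestral sampling described above — first a discrete label c, then a continuous code z from a c-dependent conditional prior — can be illustrated with a toy NumPy sketch. The uniform p(c), the Gaussian form of p(z|c), and every parameter value below are made-up stand-ins, not the trained CP-VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level generative process: c ~ p(c), then z ~ p(z|c), then decode z.
# In the CP-VAE the per-class prior parameters are learned from data; here
# they are random placeholders just to show the sampling structure.
n_classes, latent_dim = 3, 2
prior_mu = rng.normal(size=(n_classes, latent_dim))     # stand-in p(z|c) means
prior_sigma = np.full((n_classes, latent_dim), 0.1)     # tight per-class priors

def sample(n):
    c = rng.integers(n_classes, size=n)                 # discrete mixture label
    z = prior_mu[c] + prior_sigma[c] * rng.normal(size=(n, latent_dim))
    return c, z                                         # z would feed a decoder

c, z = sample(1000)
# Fixing c and sampling z generates from a single mixture component -- the
# single-modality generation that an isotropic Gaussian prior cannot provide.
```

Because each conditional prior is tight around its own mean, the latent samples cluster by class; generating from one modality amounts to clamping c before the ancestral pass.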
Review
Time, Irreversibility and Entropy Production in Nonequilibrium Systems
Entropy 2020, 22(8), 887; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080887 - 13 Aug 2020
Cited by 8 | Viewed by 1528
Abstract
The aim of this review is to shed light on time and irreversibility, in order to link macroscopic to microscopic approaches to these complicated problems. After a brief summary of the standard notions of thermodynamics, we introduce some considerations about certain fundamental aspects of the temporal evolution of out-of-equilibrium systems. Our focus is on the notion of entropy generation as the marked characteristic of irreversible behaviour. The concept of time and the basic aspects of the thermalization of thermal radiation through its interaction with matter are explored concisely from complementary perspectives. The implications and relevance of time for the phenomenon of thermal radiation and irreversible thermophysics are carefully discussed. The concept of time is then treated from a different viewpoint, in order to clarify it in relation to its various fundamental problems. Full article
Article
Adaptive State Fidelity Estimation for Higher Dimensional Bipartite Entanglement
Entropy 2020, 22(8), 886; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080886 - 12 Aug 2020
Viewed by 873
Abstract
An adaptive method for quantum state fidelity estimation in bipartite higher dimensional systems is established. This method employs state verifier operators, which are constructed from local POVM operators and adapted to the measurement statistics in the computational basis. Employing this method, the state verifier operators that stabilize Bell-type entangled states are constructed explicitly. Together with an error operator in the computational basis, one can estimate lower and upper bounds on the state fidelity of Bell-type entangled states in a few measurement configurations. These bounds can be tighter than the fidelity bounds derived in [Bavaresco et al., Nature Physics (2018), 14, 1032–1037], if one constructs more than one local POVM measurement in addition to the measurement in the computational basis. Full article
(This article belongs to the Special Issue Quantum Probability, Statistics and Control)
Essay
Complexity in Biological Organization: Deconstruction (and Subsequent Restating) of Key Concepts
Entropy 2020, 22(8), 885; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080885 - 12 Aug 2020
Cited by 6 | Viewed by 1235
Abstract
The “magic” word complexity evokes a multitude of meanings that obscure its real sense. Here we try to generate a bottom-up reconstruction of the deep sense of complexity by looking at the convergence of different features shared by complex systems. We focus specifically on complexity in biology, while stressing the similarities with analogous features encountered in inanimate and artefactual systems, in order to track an integrative path toward a new “mainstream” of science that overcomes the current fragmentation of scientific culture. Full article
(This article belongs to the Special Issue Biological Statistical Mechanics)
Article
Hybrid Algorithm Based on Ant Colony Optimization and Simulated Annealing Applied to the Dynamic Traveling Salesman Problem
Entropy 2020, 22(8), 884; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080884 - 12 Aug 2020
Cited by 6 | Viewed by 1200
Abstract
The dynamic traveling salesman problem (DTSP) falls under the category of combinatorial dynamic optimization problems. The DTSP is composed of a primary TSP sub-problem and a series of TSP iterations; each iteration is created by changing the previous one. In this article, a novel hybrid metaheuristic algorithm is proposed for the DTSP. This algorithm combines two metaheuristic principles, specifically ant colony optimization (ACO) and simulated annealing (SA). Moreover, the algorithm exploits knowledge about the dynamic changes by transferring the information gathered in previous iterations in the form of a pheromone matrix. The significance of the hybridization, as well as of the use of knowledge about the dynamic environment, is examined and validated on benchmark instances including small, medium, and large DTSP problems. The results are compared to those of four other state-of-the-art metaheuristic approaches, with the conclusion that the proposed algorithm significantly outperforms them. Furthermore, the behavior of the algorithm is analyzed from various points of view, including convergence speed to the local optimum, progress of population diversity during optimization, time dependence, and computational complexity. Full article
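For readers unfamiliar with the SA ingredient, a minimal simulated-annealing 2-opt search for a small static TSP instance is sketched below. In the paper's hybrid, candidate tours would instead be constructed by ants guided by a pheromone matrix carried over between DTSP iterations; this standalone sketch is a simplification under our own parameter choices, not the proposed algorithm.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sa_tsp(dist, t0=10.0, cooling=0.995, steps=2000, seed=0):
    """Simulated annealing with 2-opt moves: accept worse tours with
    probability exp(-delta/T) while the temperature T cools geometrically."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, t = tour[:], t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        t *= cooling
    return best

# Four cities on a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = sa_tsp(dist)
```

In the hybrid scheme, the pheromone matrix plays the role of the transferred knowledge: after a dynamic change, ants start from biased rather than uniform edge probabilities, and SA-style acceptance refines their tours.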
Article
Optimization of a New Design of Molten Salt-to-CO2 Heat Exchanger Using Exergy Destruction Minimization
Entropy 2020, 22(8), 883; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080883 - 12 Aug 2020
Cited by 1 | Viewed by 1304
Abstract
One of the ways to make electricity from concentrated solar thermal energy cost-competitive is to increase the thermoelectric conversion efficiency. To achieve this objective, the most promising scheme is a molten salt central receiver coupled to a supercritical carbon dioxide cycle. A key element to be developed in this scheme is the molten salt-to-CO2 heat exchanger. This paper presents a heat exchanger design that avoids molten salt plugging and the mechanical stress due to the high pressure of the CO2, while improving the heat transfer of the supercritical phase thanks to its compactness and high heat transfer area. The design is based on a honeycomb-like configuration, in which a thermal unit consists of a circular channel for the molten salt surrounded by six smaller trapezoidal ducts for the CO2. Further, an optimization based on exergy destruction minimization has been carried out, yielding the best working conditions of this heat exchanger: a temperature approach of 50 °C between both streams and a CO2 pressure drop of 2.7 bar. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Complex Energy Systems)
Article
Performance Improvement of Discretely Modulated Continuous-Variable Quantum Key Distribution with Untrusted Source via Heralded Hybrid Linear Amplifier
Entropy 2020, 22(8), 882; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080882 - 12 Aug 2020
Viewed by 806
Abstract
In practical quantum communication networks, continuous-variable quantum key distribution (CVQKD) faces the challenge that the entangled source may be controlled by a malicious eavesdropper; although the scheme can still generate a positive key rate securely, its performance needs to be improved, especially in secret key rate and maximum transmission distance. In this paper, we propose a method based on four-state discrete modulation and a heralded hybrid linear amplifier to enhance the performance of CVQKD when the entangled source originates from a malicious eavesdropper. The four-state CVQKD encodes information in nonorthogonal coherent states in phase space. It achieves a better transmission distance than its Gaussian modulation counterpart, especially at low signal-to-noise ratio (SNR). Moreover, the hybrid linear amplifier concatenates a deterministic linear amplifier (DLA) and a noiseless linear amplifier (NLA), which improves the probability of successful amplification and reduces the noise penalty caused by the measurement. Furthermore, the hybrid linear amplifier raises the SNR of CVQKD and can be tuned between two performance regimes, a high-gain mode and a high noise-reduction mode; it can therefore extend the maximal transmission distance even when the entangled source is untrusted. Full article
(This article belongs to the Special Issue Quantum Entanglement)
Article
Characteristics of Shannon’s Information Entropy of Atomic States in Strongly Coupled Plasma
Entropy 2020, 22(8), 881; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080881 - 11 Aug 2020
Viewed by 899
Abstract
The influence of shielding on the Shannon information entropy of atomic states in strongly coupled plasma is investigated using the perturbation method and the Ritz variational method. The analytic expressions for the Shannon information entropies of the ground (1s) and first excited (2p) states, including the radial and angular parts, are derived as functions of the ion-sphere radius. The entropy change is found to be more significant in the excited state than in the ground state. It is also found that the influence of localization on the entropy change is more significant for an ion with a higher charge number. The variations of the 1s and 2p Shannon information entropies are discussed. Full article
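As a concrete reference point, the unscreened hydrogen 1s position-space entropy has the closed form 3 + ln π in atomic units. The short quadrature below is our own numerical check of that limit, not the paper's perturbation or variational calculation; plasma screening via the ion-sphere radius shifts this value, which is the entropy change the abstract describes.

```python
import numpy as np

# Free hydrogen 1s state in atomic units: rho(r) = exp(-2r)/pi, and the
# position-space Shannon entropy S = -∫ rho ln(rho) d^3r = 3 + ln(pi) ≈ 4.1447
# (the angular part is trivial for an s state, so only the radial integral
# with the 4*pi*r^2 shell weight is needed).
r = np.linspace(1e-6, 30.0, 200_000)
rho = np.exp(-2.0 * r) / np.pi
integrand = -rho * np.log(rho) * 4.0 * np.pi * r**2
S = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid
```

The analytic value follows from S = ln π + 2⟨r⟩ with ⟨r⟩ = 3/2 for the 1s state; the quadrature reproduces it to well below a millihartree-scale tolerance.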
Article
A Time-Varying Information Measure for Tracking Dynamics of Neural Codes in a Neural Ensemble
Entropy 2020, 22(8), 880; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080880 - 11 Aug 2020
Viewed by 1300
Abstract
The amount of information carried by differentially correlated spikes in a neural ensemble is not the same; the information of different types of spikes is associated with different features of the stimulus. By calculating a neural ensemble’s information in response to a mixed stimulus comprising slow and fast signals, we show that the entropies of synchronous and asynchronous spikes are different, and their probability distributions are distinctively separable. We further show that these spikes carry different amounts of information. We propose a time-varying entropy (TVE) measure to track the dynamics of a neural code in an ensemble of neurons at each time bin. By applying the TVE to a multiplexed code, we show that synchronous and asynchronous spikes carry information on different time scales. Finally, a decoder based on the Kalman filtering approach is developed to reconstruct the stimulus from the spikes. We demonstrate that the slow and fast features of the stimulus can be entirely reconstructed when this decoder is applied to asynchronous and synchronous spikes, respectively. The significance of this work is that the TVE can identify different types of information (for example, corresponding to synchronous and asynchronous spikes) that might simultaneously exist in a neural code. Full article
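A toy version of a per-bin entropy measure can be sketched as follows: compute the Shannon entropy of the spike-count distribution across trials separately in each time bin. The binning scheme and the synthetic spike trains below are our own assumptions for illustration, not the paper's TVE definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_entropy(spikes, n_bins):
    """Shannon entropy (bits) of the per-trial spike-count distribution,
    computed independently in each time bin of a trials-by-time raster."""
    trials, T = spikes.shape
    edges = np.linspace(0, T, n_bins + 1).astype(int)
    tve = []
    for b in range(n_bins):
        counts = spikes[:, edges[b]:edges[b + 1]].sum(axis=1)  # count per trial
        _, freq = np.unique(counts, return_counts=True)
        p = freq / trials
        tve.append(-np.sum(p * np.log2(p)))
    return np.array(tve)

# Toy raster: 200 trials, 100 time steps; firing is sparse in the first half
# and much more variable in the second, so the entropy trace should rise.
spikes = rng.random((200, 100)) < 0.05
spikes[:, 50:] = rng.random((200, 50)) < 0.5
tve = binned_entropy(spikes, n_bins=4)
```

Tracking this quantity bin by bin is the spirit of a time-varying entropy: a code whose variability changes over time shows up as a non-flat entropy trace, which is what lets differently correlated spike types be separated in time.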
Article
Gintropy: Gini Index Based Generalization of Entropy
Entropy 2020, 22(8), 879; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080879 - 10 Aug 2020
Cited by 3 | Viewed by 1694
Abstract
Entropy is used in physics, mathematics, informatics and related areas to describe equilibration, dissipation, maximal-probability states and optimal compression of information. The Gini index, on the other hand, is an established measure of social and economic inequality in a society. In this paper, we explore the mathematical similarities and connections between these two quantities and introduce a new measure that connects them at an instructive level of analogy. This supports the idea that a generalization of the Gibbs–Boltzmann–Shannon entropy, based on a transformation of the Lorenz curve, can properly serve to quantify different aspects of complexity in socio- and econophysics. Full article
(This article belongs to the Section Multidisciplinary Applications)
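The Gini index side of the analogy is easy to compute from the empirical Lorenz curve. The sketch below is our own estimator, not the paper's construction; it also checks the standard econophysics result that an exponential (Boltzmann–Gibbs-like) wealth distribution has G = 1/2.

```python
import numpy as np

def gini(x):
    """Gini index from the empirical Lorenz curve: G = 1 - 2 * area under L(p),
    where L(p) is the cumulative share of wealth held by the poorest fraction p.
    Perfect equality gives 0; total concentration approaches 1."""
    x = np.sort(np.asarray(x, dtype=float))
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    p = np.linspace(0.0, 1.0, len(lorenz))
    area = np.sum(0.5 * (lorenz[1:] + lorenz[:-1]) * np.diff(p))  # trapezoid
    return 1.0 - 2.0 * area

rng = np.random.default_rng(42)
g_equal = gini(np.ones(1000))                 # perfectly equal society -> 0
g_exp = gini(rng.exponential(size=100_000))   # exponential wealth -> ~0.5
```

It is exactly this Lorenz-curve machinery that the gintropy construction transforms to build the bridge to the Gibbs–Boltzmann–Shannon entropy.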