Editor's Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research field. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Research

Article
Generalised Geometric Brownian Motion: Theory and Applications to Option Pricing
Entropy 2020, 22(12), 1432; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121432 - 18 Dec 2020
Cited by 3
Abstract
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggests that a simple GBM trajectory is not an adequate representation for asset dynamics, due to irregularities found when comparing its properties with empirical distributions. As a solution, we investigate a generalisation of GBM where the introduction of a memory kernel critically determines the behaviour of the stochastic process. We find the general expressions for the moments, log-moments, and the expectation of the periodic log returns, and then obtain the corresponding probability density functions using the subordination approach. In particular, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We utilise the resulting generalised GBM (gGBM) to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness.
(This article belongs to the Special Issue New Trends in Random Walks)

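The classical baseline that the paper generalises can be made concrete with a short Monte Carlo sketch: sample GBM terminal values, average the discounted call payoffs, and check against the Black–Scholes closed form. This covers only the plain GBM case, not the paper's memory-kernel gGBM; all parameter values are illustrative.

```python
import math
import random

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call under plain GBM."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def gbm_call_price_mc(s0, k, r, sigma, t, n_paths=100_000, seed=7):
    """Monte Carlo price: sample S_T = S0 exp((r - sigma^2/2) t + sigma sqrt(t) Z)
    and average the discounted payoff max(S_T - K, 0)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    payoffs = (max(s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0)) - k, 0.0)
               for _ in range(n_paths))
    return math.exp(-r * t) * sum(payoffs) / n_paths

exact = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
mc = gbm_call_price_mc(100.0, 100.0, 0.05, 0.2, 1.0)
```

Introducing a memory kernel amounts to replacing the fixed clock of this simulation with a subordinated operational time, which is where the sGBM variants in the paper depart from this baseline.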
Article
Examining the Causal Structures of Deep Neural Networks Using Information Theory
Entropy 2020, 22(12), 1429; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121429 - 18 Dec 2020
Cited by 1
Abstract
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the “causal plane”, which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable.

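The effective information idea can be illustrated in a deliberately stripped-down setting: for a deterministic map on a finite state space, injecting a maximum-entropy (uniform) intervention on the input makes EI equal to the output entropy H(Y). This toy sketch is a simplification of the paper's layer-level EI (which perturbs continuous activations and bins them); it is meant only to show why degenerate, overlapping mappings lower EI.

```python
import math
from collections import Counter

def effective_information(mapping):
    """EI of a deterministic map given as a list: state i maps to mapping[i].
    Under a uniform (max-entropy) intervention on the input, the mutual
    information I(X; Y) of a deterministic map reduces to the output
    entropy H(Y), computed here in bits."""
    n = len(mapping)
    counts = Counter(mapping)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ei_bijective = effective_information([0, 1, 2, 3])   # nothing collapses: 2 bits
ei_degenerate = effective_information([0, 0, 1, 2])  # two inputs overlap: EI drops
```

The drop from the bijective to the degenerate case is exactly the kind of signal the paper's sensitivity/degeneracy decomposition tracks layer by layer during training.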
Article
A Comprehensive Framework for Uncovering Non-Linearity and Chaos in Financial Markets: Empirical Evidence for Four Major Stock Market Indices
Entropy 2020, 22(12), 1435; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121435 - 18 Dec 2020
Cited by 3
Abstract
The presence of chaos in the financial markets has been the subject of a great number of studies, but the results have been contradictory and inconclusive. This research tests for the existence of nonlinear patterns and a chaotic nature in four major stock market indices, namely the Dow Jones Industrial Average, the Ibex 35, the Nasdaq-100 and the Nikkei 225. To this end, a comprehensive framework has been adopted, encompassing a wide range of techniques and the most suitable methods for the analysis of noisy time series. Using daily closing values from January 1992 to July 2013, this study employs twelve techniques and tools, of which five are specific to detecting chaos. The findings show no clear evidence of chaos, suggesting that the behavior of financial markets is nonlinear and stochastic.
(This article belongs to the Special Issue Complexity in Economic and Social Systems)

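One of the chaos-specific tools used in frameworks of this kind is the largest Lyapunov exponent: a positive value signals sensitive dependence on initial conditions, while a negative value indicates regular dynamics. A minimal sketch on the logistic map (a textbook system, not the stock indices themselves) shows the sign test; estimating the exponent from noisy market data is considerably harder.

```python
import math

def lyapunov_logistic(r, x0=0.4, burn=1_000, n=100_000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log |f'(x)| = log |r (1 - 2x)|."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

chaotic = lyapunov_logistic(4.0)   # known value ln 2, positive: chaos
periodic = lyapunov_logistic(3.2)  # stable 2-cycle: negative exponent
```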
Article
Foundations of the Quaternion Quantum Mechanics
Entropy 2020, 22(12), 1424; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121424 - 17 Dec 2020
Cited by 2
Abstract
We show that quaternion quantum mechanics has well-founded mathematical roots and can be derived from the model of the elastic continuum by the French mathematician Augustin Cauchy, i.e., it can be regarded as representing the physical reality of the elastic continuum. Starting from the Cauchy theory (classical balance equations for isotropic Cauchy-elastic material) and using the Hamilton quaternion algebra, we present a rigorous derivation of the quaternion form of the non-relativistic and relativistic wave equations. The family of wave equations and the Poisson equation are a straightforward consequence of the quaternion representation of the Cauchy model of the elastic continuum. This is the most general kind of quantum mechanics possessing the same kind of calculus of assertions as conventional quantum mechanics. The problem of the Schrödinger equation, namely where the imaginary ‘i’ should emerge, is solved. This interpretation is a serious attempt to describe the ontology of quantum mechanics and demonstrates that, besides Bohmian mechanics, a complete ontological interpretation of quantum theory exists. The model can be generalised and falsified: to allow the theory to be tested, we specify problems that would expose its falsity.
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
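A minimal sketch of the Hamilton algebra the derivation relies on: the quaternion product and its defining relations i² = j² = k² = ijk = −1, with quaternions stored as (w, x, y, z) tuples. This only illustrates the algebra itself, not the paper's derivation of the wave equations.

```python
def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

# Basis quaternions; Hamilton's relations i^2 = j^2 = k^2 = ijk = -1 hold,
# and the product is non-commutative (ij = k = -ji).
ONE = (1, 0, 0, 0)
I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```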
Article
Artificial Intelligence for Modeling Real Estate Price Using Call Detail Records and Hybrid Machine Learning Approach
Entropy 2020, 22(12), 1421; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121421 - 16 Dec 2020
Cited by 6
Abstract
The advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDR) provide excellent opportunities for in-depth investigation of mobility characterization. This study explores the potential of CDR for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers’ entropy, worker gyration, dwellers’ work distance, and workers’ home distance, are used as input variables. The prediction model is developed using a multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), sustainability index (SI), and Willmott’s index (WI). The proposed model showed promising results, revealing that the workers’ entropy and the dwellers’ work distances directly influence the real estate price, whereas the dweller gyration, dweller entropy, workers’ gyration, and workers’ home distance had minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices.

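The mobility features named above can be illustrated with toy definitions: a location-visit entropy and a radius of gyration computed from visited coordinates. These are plausible constructions for "dweller entropy" and "gyration" style CDR features, not the paper's exact formulas.

```python
import math

def location_entropy(visit_counts):
    """Shannon entropy (bits) of a user's location-visit distribution,
    in the style of a 'dweller/worker entropy' feature derived from CDR
    tower-visit counts."""
    total = sum(visit_counts)
    return -sum((c / total) * math.log2(c / total) for c in visit_counts if c > 0)

def radius_of_gyration(points):
    """Radius of gyration of visited (x, y) coordinates: the RMS distance
    of the visits from their center of mass."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return math.sqrt(sum((p[0] - cx)**2 + (p[1] - cy)**2 for p in points) / n)

# A commuter splitting time evenly between two towers has 1 bit of entropy.
h = location_entropy([50, 50])
rg = radius_of_gyration([(0, 0), (0, 0), (4, 0), (4, 0)])
```

Features of this kind would then be fed as inputs to the MLP, with PSO searching the weight space instead of gradient descent.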
Article
Statistical Features in High-Frequency Bands of Interictal iEEG Work Efficiently in Identifying the Seizure Onset Zone in Patients with Focal Epilepsy
Entropy 2020, 22(12), 1415; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121415 - 15 Dec 2020
Abstract
The design of a computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study aims to introduce the statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify the possible seizure onset zone (SOZ) channels. It is known that the activity of HFCs in interictal iEEGs, including ripple and fast ripple bands, is associated with epileptic seizures. This paper proposes to decompose multi-channel interictal iEEG signals into a number of subbands. For every 20 s segment, twelve features are computed from each subband. A mutual information (MI)-based method with grid search was applied to select the most prominent bands and features. A gradient-boosting decision tree-based algorithm called LightGBM was used to score each segment of the channels, and these scores were averaged to achieve a final score for each channel. The possible SOZ channels were localized based on the higher-scoring channels. Experiments with eleven epilepsy patients were conducted to assess the efficiency of the proposed design in comparison with state-of-the-art methods.
(This article belongs to the Section Signal and Data Analysis)

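The per-segment feature extraction step might look like the following sketch, which computes a handful of common statistical features on one subband segment. The paper uses twelve specific features plus MI-based selection; the choices here (variance, skewness, kurtosis, mean absolute amplitude, line length) are hypothetical stand-ins.

```python
import math

def segment_features(x):
    """A few statistical features one might compute on a 20 s iEEG subband
    segment: mean absolute amplitude, variance, skewness, kurtosis, and
    line length (sum of absolute successive differences)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std**3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in x) / (n * var**2) if var else 0.0
    line_length = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))
    return {"mean_abs": sum(abs(v) for v in x) / n, "variance": var,
            "skewness": skew, "kurtosis": kurt, "line_length": line_length}

feats = segment_features([0.0, 1.0, 0.0, -1.0] * 250)  # 1000-sample toy segment
```

In the paper's pipeline, vectors like this (one per subband per segment) are scored by LightGBM and averaged per channel.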
Article
Diffusion Limitations and Translocation Barriers in Atomically Thin Biomimetic Pores
Entropy 2020, 22(11), 1326; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111326 - 20 Nov 2020
Cited by 1
Abstract
Ionic transport in nano- to sub-nano-scale pores is highly dependent on translocation barriers and potential wells. These features in the free-energy landscape are primarily the result of ion dehydration and electrostatic interactions. For pores in atomically thin membranes, such as graphene, other factors come into play. Ion dynamics both inside and outside the geometric volume of the pore can be critical in determining the transport properties of the channel due to several commensurate length scales, such as the effective membrane thickness, radii of the first and the second hydration layers, pore radius, and Debye length. In particular, for biomimetic pores, such as the graphene crown ether we examine here, there are regimes where transport is highly sensitive to the pore size due to the interplay of dehydration and interaction with pore charge. Picometer changes in the size, e.g., due to a minute strain, can lead to a large change in conductance. Outside of these regimes, the small pore size itself gives a large resistance, even when electrostatic factors and dehydration compensate each other to give a relatively flat—e.g., near barrierless—free energy landscape. The permeability, though, can still be large and ions will translocate rapidly after they arrive within the capture radius of the pore. This, in turn, leads to diffusion and drift effects dominating the conductance. The current thus plateaus and becomes effectively independent of pore-free energy characteristics. Measurement of this effect will give an estimate of the magnitude of kinetically limiting features, and experimentally constrain the local electromechanical conditions.

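The diffusion-limited regime described above, where the current is set by arrival at the capture radius rather than by the pore free energy, can be estimated with the classic result for diffusion-limited capture by a perfectly absorbing disk on a wall, J = 4Dca. The numbers below are ballpark SI values for K⁺ near a half-nanometre pore, chosen by me purely to show the order of magnitude, not taken from the paper.

```python
def diffusion_limited_rate(d_coeff, conc, capture_radius):
    """Diffusion-limited arrival rate (ions/s) at a perfectly absorbing
    disk of radius a on a plane: J = 4 * D * c * a."""
    return 4.0 * d_coeff * conc * capture_radius

AVOGADRO = 6.022e23
E_CHARGE = 1.602e-19    # C

d_k = 1.96e-9           # m^2/s, bulk diffusion coefficient of K+ (approximate)
conc = 1000 * AVOGADRO  # ions/m^3 for a 1 mol/L solution
a = 0.5e-9              # m, assumed capture radius

rate = diffusion_limited_rate(d_k, conc, a)  # ions per second
current = rate * E_CHARGE                    # amperes, one charge per ion
```

The resulting sub-nanoampere scale is the plateau the abstract refers to: once the free-energy landscape is flat, changing barrier details barely moves this number.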
Article
Entropy Ratio and Entropy Concentration Coefficient, with Application to the COVID-19 Pandemic
Entropy 2020, 22(11), 1315; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111315 - 18 Nov 2020
Cited by 3
Abstract
In order to study the spread of an epidemic over a region as a function of time, we introduce an entropy ratio U describing the uniformity of infections over various states and their districts, and an entropy concentration coefficient C = 1 − U. The latter is a multiplicative version of the Kullback–Leibler distance, with values between 0 and 1. For product measures and self-similar phenomena, it does not depend on the measurement level. Hence, C is an alternative to Gini’s concentration coefficient for measures with variation on different levels. Simple examples concern population density and gross domestic product. Application to time series patterns is indicated with a Markov chain. For the COVID-19 pandemic, entropy ratios indicate a homogeneous distribution of infections and the potential of local action when compared to measures for a whole region.
(This article belongs to the Special Issue Information Theory and Symbolic Analysis: Theory and Applications)

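One natural reading of the uniformity measure is the normalised Shannon entropy of infection counts across districts, with C = 1 − U. The paper's exact multi-level, KL-based construction may differ, so treat this as an illustrative sketch of the single-level case.

```python
import math

def entropy_ratio(counts):
    """U = H(p) / H_max: Shannon entropy of the observed distribution over
    districts, divided by the entropy of the uniform distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(counts))

def concentration_coefficient(counts):
    """C = 1 - U: 0 for a perfectly uniform spread, approaching 1 when
    infections concentrate in a single district."""
    return 1.0 - entropy_ratio(counts)

uniform = concentration_coefficient([100, 100, 100, 100])  # homogeneous spread
concentrated = concentration_coefficient([397, 1, 1, 1])   # one hotspot
```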
Article
Coherence and Entanglement Dynamics in Training Variational Quantum Perceptron
Entropy 2020, 22(11), 1277; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111277 - 11 Nov 2020
Cited by 1
Abstract
What contributes to the supremacy of quantum computation? One candidate is quantum coherence, since it is a resource used in various quantum algorithms. We reveal that quantum coherence contributes to the training of the variational quantum perceptron proposed by Y. Du et al., arXiv:1809.06056 (2018). In detail, we show that in the first part of the training of the variational quantum perceptron, the quantum coherence of the total system is concentrated in the index register, and in the second part, the Grover algorithm consumes the quantum coherence in the index register. This implies that quantum coherence distribution and quantum coherence depletion are required in the training of the variational quantum perceptron. In addition, we investigate the behavior of entanglement during the training. We show that the bipartite concurrence between the feature and index registers decreases, since the Grover operation is performed only on the index register. Also, we reveal that the concurrence between the two qubits of the index register increases as the variational quantum perceptron is trained.
(This article belongs to the Special Issue Physical Information and the Physical Foundations of Computation)

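For pure two-qubit states, the concurrence tracked in such analyses reduces to a one-liner: C = |⟨ψ|σ_y⊗σ_y|ψ*⟩| = 2|ad − bc| for a state (a, b, c, d) in the computational basis. Mixed states need the full Wootters eigenvalue formula, which is omitted here.

```python
import numpy as np

def concurrence_pure(psi):
    """Concurrence of a two-qubit pure state given as a length-4 vector
    (a, b, c, d) in the basis |00>, |01>, |10>, |11>: C = 2 |a d - b c|."""
    a, b, c, d = psi
    return 2.0 * abs(a * d - b * c)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # maximally entangled: C = 1
product = np.array([1.0, 0.0, 0.0, 0.0])         # separable: C = 0
```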
Article
Quantum Finite-Time Thermodynamics: Insight from a Single Qubit Engine
Entropy 2020, 22(11), 1255; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111255 - 04 Nov 2020
Cited by 14
Abstract
Incorporating time into thermodynamics allows for addressing the tradeoff between efficiency and power. A qubit engine serves as a toy model in order to study this tradeoff from first principles, based on the quantum theory of open systems. We study the quantum origin of irreversibility, originating from heat transport, quantum friction, and thermalization in the presence of external driving. We construct various finite-time engine cycles that are based on the Otto and Carnot templates. Our analysis highlights the role of coherence and the quantum origin of entropy production.
(This article belongs to the Special Issue Finite-Time Thermodynamics)

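The quasi-static qubit Otto cycle provides the frictionless reference point for this efficiency-power tradeoff: compressing the level splitting from ω_h to ω_c gives η = 1 − ω_c/ω_h, bounded by the Carnot efficiency. The finite-time corrections (friction, coherence effects) are the subject of the paper; this sketch is only the ideal limit, with arbitrary parameter values.

```python
def otto_efficiency(omega_cold, omega_hot):
    """Ideal (quasi-static, frictionless) efficiency of a qubit Otto cycle
    whose level splitting changes between omega_hot and omega_cold:
    eta = 1 - omega_cold / omega_hot."""
    return 1.0 - omega_cold / omega_hot

def carnot_efficiency(t_cold, t_hot):
    """Carnot bound for reservoirs at temperatures t_cold < t_hot."""
    return 1.0 - t_cold / t_hot

eta_otto = otto_efficiency(1.0, 2.0)
eta_carnot = carnot_efficiency(1.0, 4.0)
# Positive work extraction requires omega_cold/omega_hot > t_cold/t_hot,
# which forces eta_otto <= eta_carnot.
```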
Article
Entropy Production in Exactly Solvable Systems
Entropy 2020, 22(11), 1252; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111252 - 03 Nov 2020
Cited by 3
Abstract
The rate of entropy production by a stochastic process quantifies how far it is from thermodynamic equilibrium. Equivalently, entropy production captures the degree to which global detailed balance and time-reversal symmetry are broken. Despite abundant references to entropy production in the literature and its many applications in the study of non-equilibrium stochastic particle systems, a comprehensive list of typical examples illustrating the fundamentals of entropy production is lacking. Here, we present a brief, self-contained review of entropy production and calculate it from first principles in a catalogue of exactly solvable setups, encompassing both discrete- and continuous-state Markov processes, as well as single- and multiple-particle systems. The examples covered in this work provide a stepping stone for further studies on entropy production of more complex systems, such as many-particle active matter, as well as a benchmark for the development of alternative mathematical formalisms.
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)

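For a discrete-state Markov process, the entropy production rate has the standard Schnakenberg form, which vanishes exactly when detailed balance holds. A driven three-state cycle is one of the simplest non-equilibrium examples of the kind the catalogue covers; the rates below are arbitrary illustrative values.

```python
import math

def entropy_production_rate(rates, pi):
    """Schnakenberg entropy production rate of a continuous-time Markov
    chain with transition rates rates[i][j] (i -> j) and stationary
    distribution pi:
        sigma = (1/2) * sum_{i != j} (pi_i w_ij - pi_j w_ji)
                        * ln(pi_i w_ij / (pi_j w_ji))."""
    n = len(rates)
    sigma = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and rates[i][j] > 0 and rates[j][i] > 0:
                fwd = pi[i] * rates[i][j]
                rev = pi[j] * rates[j][i]
                sigma += 0.5 * (fwd - rev) * math.log(fwd / rev)
    return sigma

# Driven three-state cycle: clockwise rate 2, counter-clockwise rate 1.
# By symmetry the stationary distribution is uniform, and
# sigma = (2 - 1) * ln(2/1) per unit time.
w = [[0, 2, 1],
     [1, 0, 2],
     [2, 1, 0]]
sigma = entropy_production_rate(w, [1/3, 1/3, 1/3])
```

A symmetric rate matrix (detailed balance) gives sigma = 0, recovering equilibrium as a sanity check.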
Article
Quantum Work Statistics with Initial Coherence
Entropy 2020, 22(11), 1223; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111223 - 27 Oct 2020
Cited by 1
Abstract
The two-point measurement scheme for computing the thermodynamic work performed on a system requires it to be initially in equilibrium. The Margenau–Hill scheme, among others, extends the previous approach to allow for a non-equilibrium initial state. We establish a quantitative comparison between both schemes in terms of the amount of coherence present in the initial state of the system, as quantified by the l1-coherence measure. We show that the difference between the first two moments of work, the variances of work, and the average entropy production obtained in both schemes can be cast in terms of such initial coherence. Moreover, we prove that the average entropy production can take negative values in the Margenau–Hill framework.
(This article belongs to the Special Issue Thermodynamics of Quantum Information)

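The l1-coherence measure used for the comparison is simply the summed magnitude of the off-diagonal density-matrix elements in the relevant (energy) basis, which makes it easy to compute:

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence of a density matrix in a fixed basis:
    C_l1(rho) = sum over i != j of |rho_ij|."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|: maximal qubit coherence
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])  # maximally mixed: zero coherence
```

States with zero coherence in the energy basis make the two-point measurement and Margenau–Hill schemes agree; the paper quantifies their divergence as this measure grows.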
Article
Numerical Investigation into the Development Performance of Gas Hydrate by Depressurization Based on Heat Transfer and Entropy Generation Analyses
Entropy 2020, 22(11), 1212; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111212 - 26 Oct 2020
Cited by 9
Abstract
The purpose of this study is to analyze the dynamic properties of gas hydrate development from a large hydrate simulator through numerical simulation. A mathematical model of heat transfer and entropy production of methane hydrate dissociation by depressurization has been established, and the change behaviors of various heat flows and entropy generations have been evaluated. Simulation results show that most of the heat supplied from outside is assimilated by methane hydrate. The energy loss caused by the fluid production is insignificant in comparison to the heat assimilation of the hydrate reservoir. The entropy generation of gas hydrate can be considered as the entropy flow from the ambient environment to the hydrate particles, and it is favorable from the perspective of efficient hydrate exploitation. On the contrary, the undesirable entropy generations of water, gas and quartz sand are induced by the irreversible heat conduction and thermal convection under notable temperature gradient in the deposit. Although lower production pressure will lead to larger entropy production of the whole system, the irreversible energy loss is always extremely limited when compared with the amount of thermal energy utilized by methane hydrate. The production pressure should be set as low as possible for the purpose of enhancing exploitation efficiency, as the entropy production rate is not sensitive to the energy recovery rate under depressurization.

Article
Thermodynamic Curvature of the Binary van der Waals Fluid
Entropy 2020, 22(11), 1208; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111208 - 26 Oct 2020
Cited by 2
Abstract
The thermodynamic Ricci curvature scalar R has been applied in a number of contexts, mostly for systems characterized by 2D thermodynamic geometries. Calculations of R in thermodynamic geometries of dimension three or greater have been very few, especially in the fluid regime. In this paper, we calculate R for two examples involving binary fluid mixtures: a binary mixture of a van der Waals (vdW) fluid with only repulsive interactions, and a binary vdW mixture with attractive interactions added. In both of these examples, we evaluate R for full 3D thermodynamic geometries. Our finding is that basic physical patterns found for R in the pure fluid are reproduced to a large extent for the binary fluid.
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Article
The Heisenberg Indeterminacy Principle in the Context of Covariant Quantum Gravity
Entropy 2020, 22(11), 1209; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111209 - 26 Oct 2020
Cited by 3
Abstract
This paper deals with the mathematical formulation of the Heisenberg Indeterminacy Principle in the framework of Quantum Gravity. The starting point is the establishment of the so-called time-conjugate momentum inequalities holding for non-relativistic and relativistic Quantum Mechanics. The validity of analogous Heisenberg inequalities in quantum gravity, which must be based on strictly physically observable quantities (i.e., necessarily either 4-scalar or 4-vector in nature), is shown to require the adoption of a manifestly covariant and unitary quantum theory of the gravitational field. Based on the prescription of a suitable notion of Hilbert space scalar product, the relevant Heisenberg inequalities are established. Besides the coordinate-conjugate momentum inequalities, these include a novel proper-time-conjugate extended momentum inequality. Physical implications and the connection with the deterministic limit recovering General Relativity are investigated.
(This article belongs to the Special Issue Axiomatic Approaches to Quantum Mechanics)
Article
The World as a Neural Network
Entropy 2020, 22(11), 1210; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111210 - 26 Oct 2020
Cited by 7
Abstract
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons). We first consider stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from the equilibrium by Hamilton–Jacobi equations (with free energy representing Hamilton’s principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors, with the state vector of neurons representing the hidden variables. We then study stochastic evolution of the hidden variables by considering D non-interacting subsystems with average state vectors x̄1, …, x̄D and an overall average state vector x̄0. In the limit when the weight matrix is a permutation matrix, the dynamics of x̄μ can be described in terms of relativistic strings in an emergent D+1 dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to the entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that were described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other.
(This article belongs to the Section Statistical Physics)
Article
Dynamic Topology Reconfiguration of Boltzmann Machines on Quantum Annealers
Entropy 2020, 22(11), 1202; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111202 - 24 Oct 2020
Cited by 2
Abstract
Boltzmann machines have useful roles in deep learning applications, such as generative data modeling, initializing weights for other types of networks, or extracting efficient representations from high-dimensional data. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex distributions that are difficult to sample. We have used an open-system quantum annealer to sample from complex distributions and implement Boltzmann machines with looping connectivity. Further, we have created policies mapping Boltzmann machine variables to the quantum bits of an annealer. These policies, based on correlation and entropy metrics, dynamically reconfigure the topology of Boltzmann machines during training and improve performance.
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))

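Why looping connectivity is hard to sample can be seen by writing the distribution exactly: for a non-restricted Boltzmann machine the partition function sums over all 2^n states, so brute-force enumeration (below) only works for tiny n, and one turns to samplers such as quantum annealers for larger models. The triangle example is my own toy, not a topology from the paper.

```python
import itertools
import math

def boltzmann_distribution(weights, biases):
    """Exact distribution p(s) proportional to exp(-E(s)) over binary states
    s in {0,1}^n, with E(s) = -sum_i b_i s_i - sum_{i<j} W_ij s_i s_j,
    computed by enumerating all 2^n states. Looping (non-restricted)
    connectivity is allowed; the cost is exponential in n."""
    n = len(biases)
    states = list(itertools.product([0, 1], repeat=n))
    def energy(s):
        e = -sum(biases[i] * s[i] for i in range(n))
        e -= sum(weights[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        return e
    z = sum(math.exp(-energy(s)) for s in states)
    return {s: math.exp(-energy(s)) / z for s in states}

# A fully connected three-unit machine: the triangle forms a loop, so this
# topology lies outside the restricted (bipartite) family.
w = [[0, 1.0, 1.0],
     [0, 0, 1.0],
     [0, 0, 0]]
p = boltzmann_distribution(w, [0.0, 0.0, 0.0])
```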
Article
Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences
Entropy 2020, 22(10), 1186; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101186 - 21 Oct 2020
Cited by 1
Abstract
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has been recently done in improving the accuracy of face liveness detection, the best [...] Read more.
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has been recently done in improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions where nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image, which enhances the edges and surface texture, and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack dataset and Replay-Mobile dataset. The entire architecture is created in such a manner that, once trained, the face liveness detection can be accomplished in real-time. We achieve promising results of 96.03% and 96.21% face liveness detection accuracy with the SCNN, and 94.77% and 95.53% accuracy with the Inception v4, on the Replay-Attack, and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses the diffusion of images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) to classify the video sequence as real or fake. Even though the use of CNN followed by LSTM is not new, combining it with diffusion (that has proven to be the best approach for single image liveness detection) is novel. 
Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and 5.28% HTER. Full article
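The preprocessing step this pipeline relies on is nonlinear anisotropic diffusion. As a rough illustration, one explicit Perona–Malik-style update on a nested-list grayscale image can be sketched as follows (this is a simplified explicit scheme, not the additive operator splitting scheme the paper uses; the function name and the `kappa`/`lam` parameters are illustrative):

```python
def diffuse_step(img, kappa=0.2, lam=0.2):
    """One explicit nonlinear-diffusion step on a grayscale image.

    The edge-stopping function g(s) = 1 / (1 + (s/kappa)^2) slows diffusion
    across strong gradients, so flat regions are smoothed while edge
    locations are preserved.
    """
    def g(s):
        return 1.0 / (1.0 + (s / kappa) ** 2)

    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            total = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    grad = img[ni][nj] - img[i][j]
                    total += g(abs(grad)) * grad
            out[i][j] = img[i][j] + lam * total
    return out

# A step edge: smoothing acts inside each flat half, barely across the edge.
img = [[0.0, 0.0, 1.0, 1.0]] * 4
smoothed = diffuse_step(img)
```

The suppression of flux across large intensity jumps is the property that preserves boundary locations in the diffused image before it is handed to the classifier.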

Article
Two-Qubit Entanglement Generation through Non-Hermitian Hamiltonians Induced by Repeated Measurements on an Ancilla
Entropy 2020, 22(10), 1184; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101184 - 20 Oct 2020
Cited by 2
Abstract
In contrast to classical systems, actual implementation of non-Hermitian Hamiltonian dynamics for quantum systems is a challenge because the processes of energy gain and dissipation are based on the underlying Hermitian system–environment dynamics, which are trace preserving. Recently, a scheme for engineering non-Hermitian Hamiltonians as a result of repetitive measurements on an ancillary qubit has been proposed. The induced conditional dynamics of the main system is described by the effective non-Hermitian Hamiltonian arising from the procedure. In this paper, we demonstrate the effectiveness of such a protocol by applying it to physically relevant multi-spin models, showing that the effective non-Hermitian Hamiltonian drives the system to a maximally entangled stationary state. In addition, we report a new recipe to construct a physical scenario where the quantum dynamics of a physical system represented by a given non-Hermitian Hamiltonian model may be simulated. The physical implications and the broad scope potential applications of such a scheme are highlighted. Full article
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)

Article
The Smoluchowski Ensemble—Statistical Mechanics of Aggregation
Entropy 2020, 22(10), 1181; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101181 - 20 Oct 2020
Abstract
We present a rigorous thermodynamic treatment of irreversible binary aggregation. We construct the Smoluchowski ensemble as the set of discrete finite distributions that are reached in a fixed number of merging events and define a probability measure on this ensemble, such that the mean distribution in the mean-field approximation is governed by the Smoluchowski equation. In the scaling limit this ensemble gives rise to a set of relationships identical to those of familiar statistical thermodynamics. The central element of the thermodynamic treatment is the selection functional, a functional of feasible distributions that connects the probability of a distribution to the details of the aggregation model. We obtain scaling expressions for general kernels and closed-form results for the special cases of the constant, sum, and product kernels. We study the stability of the most probable distribution, provide criteria for the sol-gel transition and obtain the distribution in the post-gel region by simple thermodynamic arguments. Full article
(This article belongs to the Special Issue Generalized Statistical Thermodynamics)
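For the constant kernel, the ensemble of distributions reached after a fixed number of merging events can be sampled directly: every event merges a uniformly random pair of clusters. A minimal Monte Carlo sketch (in the spirit of the Marcus–Lushnikov process, not the authors' formalism; function and parameter names are illustrative):

```python
import random

def aggregate_constant_kernel(masses, events, seed=0):
    """Sample one aggregation trajectory with a constant kernel.

    With a constant kernel every pair is equally likely to merge, so each
    event picks two clusters uniformly at random and replaces them by their
    sum. After k events the ensemble has N - k clusters and the total mass
    is conserved -- the two invariants defining the Smoluchowski ensemble.
    """
    rng = random.Random(seed)
    clusters = list(masses)
    for _ in range(events):
        i, j = rng.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)]
        clusters.append(merged)
    return clusters

# 100 monomers, 60 merging events -> 40 clusters carrying total mass 100.
final = aggregate_constant_kernel([1] * 100, events=60)
```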

Article
Unifying Aspects of Generalized Calculus
Entropy 2020, 22(10), 1180; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101180 - 19 Oct 2020
Cited by 3
Abstract
Non-Newtonian calculus naturally unifies various ideas that have occurred over the years in the field of generalized thermostatistics, or in the borderland between classical and quantum information theory. The formalism, being very general, is as simple as the calculus we know from undergraduate mathematics courses. Its theoretical potential is huge, and yet it remains unknown or unappreciated. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy)
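A concrete entry point to the formalism (assumed here for illustration, since the abstract gives no formulas) is arithmetic generated by a bijection φ: operations are performed in the "hidden" domain of φ⁻¹ and mapped back, which is the building block from which the generalized derivative and integral are constructed:

```python
import math

def deformed_ops(phi, phi_inv):
    """Addition and multiplication projected through a bijection phi:
    x (+) y = phi(phi_inv(x) + phi_inv(y)), and analogously for (*)."""
    def add(x, y):
        return phi(phi_inv(x) + phi_inv(y))

    def mul(x, y):
        return phi(phi_inv(x) * phi_inv(y))

    return add, mul

# With phi = exp, deformed addition of positive reals becomes ordinary
# multiplication: exp(ln x + ln y) = x * y.
add, mul = deformed_ops(math.exp, math.log)
```

Different choices of φ recover different "deformed" thermostatistical structures while the calculus built on top of them keeps its familiar undergraduate form.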

Article
Segmentation of High Dimensional Time-Series Data Using Mixture of Sparse Principal Component Regression Model with Information Complexity
Entropy 2020, 22(10), 1170; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101170 - 17 Oct 2020
Cited by 4
Abstract
This paper presents a novel hybrid modeling method for the segmentation of high dimensional time-series data using the mixture of sparse principal components regression (MIX-SPCR) model with the information complexity (ICOMP) criterion as the fitness function. Our approach encompasses dimension reduction in high dimensional time-series data and, at the same time, determines the number of component clusters (i.e., the number of segments across the time-series data) and selects the best subset of predictors. A large-scale Monte Carlo simulation is performed to show the capability of the MIX-SPCR model to successfully identify the correct structure of the time-series data. The MIX-SPCR model is also applied to high dimensional Standard & Poor’s 500 (S&P 500) index data to uncover the hidden structure of the time series and identify the structural change points. The approach presented in this paper determines both the relationships among the predictor variables and how various predictor variables contribute to the explanatory power of the response variable through cluster-wise sparsity settings. Full article

Article
Non-Equilibrium Living Polymers
Entropy 2020, 22(10), 1130; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101130 - 06 Oct 2020
Cited by 1
Abstract
Systems of “living” polymers are ubiquitous in industry and are traditionally realised using surfactants. Here I first review the theoretical state of the art of living polymers and then discuss non-equilibrium extensions that may be realised with advanced synthetic chemistry or DNA functionalised by proteins. These systems are not only interesting as novel “living” soft matter but can also shed light on how genomes are (topologically) regulated in vivo. Full article
(This article belongs to the Special Issue Statistical Physics of Living Systems)

Article
Exploration of Outliers in If-Then Rule-Based Knowledge Bases
Entropy 2020, 22(10), 1096; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101096 - 29 Sep 2020
Cited by 2
Abstract
The article presents methods for both clustering and outlier detection in complex data, such as rule-based knowledge bases. What distinguishes this work from others is, first, the application of clustering algorithms to rules in domain knowledge bases and, second, the use of outlier detection algorithms to detect unusual rules in knowledge bases. The aim of the paper is the analysis of four algorithms for outlier detection in rule-based knowledge bases: Local Outlier Factor (LOF), Connectivity-based Outlier Factor (COF), K-MEANS, and SMALLCLUSTERS. The subject of outlier mining is very important nowadays. Outliers among If-Then rules are unusual rules, rare in comparison to the others, which should be explored by a domain expert as soon as possible. In the research, the authors use the outlier detection methods to find a given number of outliers in rules (1%, 5%, 10%), while in small groups the number of outliers covers no more than 5% of the rule cluster. Subsequently, the authors analyze which of seven quality indices, computed for all rules and again after removing selected outliers, improve the quality of rule clusters. In the experimental stage, the authors use six different knowledge bases. The best results (cluster quality improved most often) are achieved with two outlier detection algorithms, LOF and COF. Full article
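Of the four detectors compared, LOF is the easiest to sketch. A minimal pure-Python version (ignoring distance ties in the neighbourhood, which the full definition handles, and assuming distinct points; `k` and the sample data are illustrative):

```python
def lof_scores(points, k=2):
    """Local Outlier Factor for a list of numeric tuples.

    Scores near 1 mean a point is about as dense as its neighbours;
    scores well above 1 flag outliers.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    n = len(points)
    neigh, kdist = [], []
    for i in range(n):
        order = sorted((dist(points[i], points[j]), j)
                       for j in range(n) if j != i)
        neigh.append([j for _, j in order[:k]])   # k nearest neighbours
        kdist.append(order[k - 1][0])             # k-distance of point i

    def reach(i, j):  # reachability distance from i to its neighbour j
        return max(kdist[j], dist(points[i], points[j]))

    # local reachability density, then LOF as a ratio of densities
    lrd = [k / sum(reach(i, j) for j in neigh[i]) for i in range(n)]
    return [sum(lrd[j] for j in neigh[i]) / (k * lrd[i]) for i in range(n)]

# Tight cluster plus one far-away point: the outlier gets the largest score.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = lof_scores(pts)
```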

Article
Modified Distribution Entropy as a Complexity Measure of Heart Rate Variability (HRV) Signal
Entropy 2020, 22(10), 1077; https://0-doi-org.brum.beds.ac.uk/10.3390/e22101077 - 24 Sep 2020
Cited by 2
Abstract
The complexity of a heart rate variability (HRV) signal is considered an important nonlinear feature to detect cardiac abnormalities. This work aims at explaining the physiological meaning of a recently developed complexity measurement method, namely, distribution entropy (DistEn), in the context of HRV signal analysis. We thereby propose modified distribution entropy (mDistEn) to remove the physiological discrepancy involved in the computation of DistEn. The proposed method generates a distance matrix that is devoid of over-exerted multi-lag signal changes. Restricted element selection in the distance matrix makes “mDistEn” a computationally inexpensive and physiologically more relevant complexity measure in comparison to DistEn. Full article
(This article belongs to the Special Issue Entropy in Data Analysis)
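For reference, the baseline DistEn that mDistEn modifies can be sketched as follows (a minimal version: embedding dimension, bin count, and function names are illustrative choices, and the restricted element selection proposed in the paper is not implemented):

```python
import math
import random
from itertools import combinations

def distribution_entropy(x, m=2, bins=8):
    """Distribution entropy (DistEn) of a 1-D signal.

    Embeds the signal in dimension m, takes the Chebyshev distance of every
    pair of embedded vectors, histograms those distances into equal-width
    bins, and returns the normalised Shannon entropy of the histogram.
    """
    vectors = [x[i:i + m] for i in range(len(x) - m + 1)]
    dists = [max(abs(a - b) for a, b in zip(u, v))
             for u, v in combinations(vectors, 2)]
    lo, hi = min(dists), max(dists)
    width = (hi - lo) / bins or 1.0      # constant signal: single bin
    counts = [0] * bins
    for d in dists:
        counts[min(int((d - lo) / width), bins - 1)] += 1
    n = len(dists)
    probs = [c / n for c in counts if c]
    return -sum(p * math.log2(p) for p in probs) / math.log2(bins)

# A constant signal has all pairwise distances equal -> entropy 0.
flat = distribution_entropy([1.0] * 20)
# An irregular signal spreads its distances across bins -> value in (0, 1].
rng = random.Random(1)
noisy = distribution_entropy([rng.random() for _ in range(50)])
```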

Article
The Quantum Friction and Optimal Finite-Time Performance of the Quantum Otto Cycle
Entropy 2020, 22(9), 1060; https://0-doi-org.brum.beds.ac.uk/10.3390/e22091060 - 22 Sep 2020
Cited by 7
Abstract
In this work we considered the quantum Otto cycle within an optimization framework. The goal was maximizing the power for a heat engine or maximizing the cooling power for a refrigerator. In the field of finite-time quantum thermodynamics it is common to consider frictionless trajectories since these have been shown to maximize the work extraction during the adiabatic processes. Furthermore, for frictionless cycles, the energy of the system decouples from the other degrees of freedom, thereby simplifying the mathematical treatment. Instead, we considered general limit cycles and we used analytical techniques to compute the derivative of the work production over the whole cycle with respect to the time allocated for each of the adiabatic processes. By doing so, we were able to directly show that the frictionless cycle maximizes the work production, implying that the optimal power production must necessarily allow for some friction generation so that the duration of the cycle is reduced. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Article
Surface-Codes-Based Quantum Communication Networks
Entropy 2020, 22(9), 1059; https://0-doi-org.brum.beds.ac.uk/10.3390/e22091059 - 22 Sep 2020
Abstract
In this paper, we propose surface codes (SCs)-based multipartite quantum communication networks (QCNs). We describe an approach that enables us to simultaneously entangle multiple nodes in an arbitrary network topology based on the SCs. We also describe how to extend the transmission distance between two arbitrary nodes by using the SCs. The numerical results indicate that the transmission distance between nodes can be extended beyond 1000 km by employing simple syndrome decoding. Finally, we describe how to operate the proposed QCN by employing the software-defined networking (SDN) concept. Full article

Article
TMEA: A Thermodynamically Motivated Framework for Functional Characterization of Biological Responses to System Acclimation
Entropy 2020, 22(9), 1030; https://0-doi-org.brum.beds.ac.uk/10.3390/e22091030 - 15 Sep 2020
Cited by 1
Abstract
The objective of gene set enrichment analysis (GSEA) in modern biological studies is to identify functional profiles in huge sets of biomolecules generated by high-throughput measurements of genes, transcripts, metabolites, and proteins. GSEA is based on a two-stage process using classical statistical analysis to score the input data and subsequent testing for overrepresentation of the enrichment score within a given functional coherent set. However, enrichment scores computed by different methods are merely statistically motivated and often elusive to direct biological interpretation. Here, we propose a novel approach, called Thermodynamically Motivated Enrichment Analysis (TMEA), to account for the energy investment in biologically relevant processes. Therefore, TMEA is based on surprisal analysis, which offers a thermodynamic free-energy-based representation of the biological steady state and of the biological change. The contribution of each biomolecule underlying the changes in free energy is used in a Monte Carlo resampling procedure resulting in a functional characterization directly coupled to the thermodynamic characterization of biological responses to system perturbations. To illustrate the utility of our method on real experimental data, we benchmark our approach on plant acclimation to high light and compare the performance of TMEA with the most frequently used method for GSEA. Full article
(This article belongs to the Special Issue Thermodynamics and Information Theory of Living Systems)

Article
Life’s Energy and Information: Contrasting Evolution of Volume- versus Surface-Specific Rates of Energy Consumption
Entropy 2020, 22(9), 1025; https://0-doi-org.brum.beds.ac.uk/10.3390/e22091025 - 13 Sep 2020
Cited by 2
Abstract
As humanity struggles to find a path to resilience amidst global change vagaries, understanding organizing principles of living systems as the pillar for human existence is rapidly growing in importance. However, finding quantitative definitions for order, complexity, information and functionality of living systems remains a challenge. Here, we review and develop insights into this problem from the concept of the biotic regulation of the environment developed by Victor Gorshkov (1935–2019). Life’s extraordinary persistence—despite being a strongly non-equilibrium process—requires a quantum-classical duality: the program of life is written in molecules and thus can be copied without information loss, while life’s interaction with its non-equilibrium environment is performed by macroscopic classical objects (living individuals) that age. Life’s key energetic parameter, the volume-specific rate of energy consumption, is maintained within universal limits by most life forms. Contrary to previous suggestions, it cannot serve as a proxy for “evolutionary progress”. In contrast, ecosystem-level surface-specific energy consumption declines with growing animal body size in stable ecosystems. High consumption by big animals is associated with instability. We suggest that the evolutionary increase in body size may represent a spontaneous loss of information about environmental regulation, a manifestation of life’s algorithm ageing as a whole. Full article
(This article belongs to the Special Issue Evolution and Thermodynamics)

Article
Investigation of Forced Convection Enhancement and Entropy Generation of Nanofluid Flow through a Corrugated Minichannel Filled with a Porous Media
Entropy 2020, 22(9), 1008; https://0-doi-org.brum.beds.ac.uk/10.3390/e22091008 - 09 Sep 2020
Cited by 7
Abstract
Corrugating a channel wall is considered an efficient procedure for achieving improved heat transfer. Further enhancement can be obtained through the utilization of nanofluids and porous media with high thermal conductivity. This paper presents the effect of geometrical parameters for the determination of an appropriate configuration. Furthermore, the optimization of forced convective heat transfer and fluid/nanofluid flow through a sinusoidal wavy channel inside a porous medium is performed through the optimization of entropy generation. The fluid flow in the porous media is considered to be laminar, and the Darcy–Brinkman–Forchheimer model has been utilized. The obtained results were compared with the corresponding numerical data in order to ensure the accuracy and reliability of the numerical procedure. As a result, increasing the Darcy number increases the portion of thermal entropy generation and decreases the portion of frictional entropy generation in all configurations. Moreover, the configuration with a wavelength of 10 mm, an amplitude of 0.5 mm, and a phase shift of 60° was selected as the optimum geometry for further investigations on the addition of nanoparticles. Additionally, an increasing trend of the average Nusselt number and friction factor, alongside a decreasing trend of the performance evaluation criteria (PEC) index, was observed with increasing volume fraction of the nanofluid (Al2O3 and CuO). Full article
(This article belongs to the Special Issue Thermal Radiation and Entropy Analysis)

Article
Using Matrix-Product States for Open Quantum Many-Body Systems: Efficient Algorithms for Markovian and Non-Markovian Time-Evolution
Entropy 2020, 22(9), 984; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090984 - 04 Sep 2020
Cited by 4
Abstract
This paper presents an efficient algorithm for the time evolution of open quantum many-body systems using matrix-product states (MPS), proposing a convenient structure of the MPS architecture which exploits the initial state of system and reservoir. By doing so, numerically expensive re-ordering protocols are circumvented. It is applicable to systems with a Markovian type of interaction, where only the present state of the reservoir needs to be taken into account. Its adaptation to a non-Markovian type of interaction between the many-body system and the reservoir is demonstrated, where the information backflow from the reservoir needs to be included in the computation. Also, the derivation of the basis in the quantum stochastic Schrödinger picture is shown. As a paradigmatic model, the Heisenberg spin chain with nearest-neighbor interaction is used. It is demonstrated that the algorithm allows access to large system sizes. As an example of a non-Markovian type of interaction, the generation of highly unusual steady states in the many-body system with coherent feedback control is demonstrated for a chain length of N=30. Full article
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)

Article
Strong Coupling and Nonextensive Thermodynamics
Entropy 2020, 22(9), 975; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090975 - 01 Sep 2020
Cited by 1
Abstract
We propose a Hamiltonian-based approach to the nonextensive thermodynamics of small systems, where small is a relative term comparing the size of the system to the size of the effective interaction region around it. We show that the effective Hamiltonian approach gives easy accessibility to the thermodynamic properties of systems strongly coupled to their surroundings. The theory does not rely on the classical concept of dividing surface to characterize the system’s interaction with the environment. Instead, it defines an effective interaction region over which a system exchanges extensive quantities with its surroundings, easily producing laws recently shown to be valid at the nanoscale. Full article
(This article belongs to the Section Thermodynamics)

Article
On Products of Random Matrices
Entropy 2020, 22(9), 972; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090972 - 31 Aug 2020
Cited by 4
Abstract
We introduce a family of models, which we name matrix models associated with children’s drawings—the so-called dessin d’enfant. Dessins d’enfant are graphs of a special kind drawn on a closed connected orientable surface (in the sky). The vertices of such a graph are small disks that we call stars. We attach random matrices to the edges of the graph and get multimatrix models. Additionally, to the stars we attach source matrices. They play the role of free parameters or model coupling constants. The answers for our integrals are expressed through quantities that we call the “spectrum of stars”. The answers may also include some combinatorial numbers, such as Hurwitz numbers or characters from group representation theory. Full article
(This article belongs to the Special Issue Random Matrix Approaches in Classical and Quantum Information Theory)

Article
Fractional Lotka-Volterra-Type Cooperation Models: Impulsive Control on Their Stability Behavior
Entropy 2020, 22(9), 970; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090970 - 31 Aug 2020
Cited by 5
Abstract
We present a biological fractional n-species delayed cooperation model of Lotka-Volterra type. The considered fractional derivatives are in the Caputo sense. Impulsive control strategies are applied for several stability properties of the states, namely Mittag-Leffler stability, practical stability and stability with respect to sets. The proposed results extend the existing stability results for integer-order n-species delayed Lotka-Volterra cooperation models to the fractional-order case under impulsive control. Full article
(This article belongs to the Special Issue Dynamics in Complex Neural Networks)

Article
A New Look on Financial Markets Co-Movement through Cooperative Dynamics in Many-Body Physics
Entropy 2020, 22(9), 954; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090954 - 29 Aug 2020
Cited by 4
Abstract
One of the main contributions of the Capital Asset Pricing Model (CAPM) to portfolio theory was to explain the correlation between assets through their relationship with the market index. According to this approach, the market index is expected to explain the co-movement between two different stocks to a great extent. In this paper, we try to verify this hypothesis using a sample of 3000 stocks of the US market (selected according to liquidity, capitalization, and free-float criteria) by using some functions inspired by cooperative dynamics in physical particle systems. We will show that the co-movement among the stocks is completely explained by the market, even without considering the market betas of the stocks. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
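The single-index hypothesis can be illustrated on synthetic data: once each stock's returns are regressed on the market, the residual correlation between two stocks should be close to zero. A self-contained sketch with simulated returns (betas, noise levels, and all names are illustrative, not the paper's dataset or method):

```python
import random

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def market_residuals(r, m):
    """OLS residuals of stock returns r regressed on market returns m."""
    n = len(r)
    mr, mm = sum(r) / n, sum(m) / n
    beta = (sum((a - mr) * (b - mm) for a, b in zip(r, m))
            / sum((b - mm) ** 2 for b in m))
    alpha = mr - beta * mm
    return [a - alpha - beta * b for a, b in zip(r, m)]

rng = random.Random(42)
market = [rng.gauss(0.0, 1.0) for _ in range(500)]
stock_a = [1.0 * x + 0.3 * rng.gauss(0.0, 1.0) for x in market]
stock_b = [0.8 * x + 0.3 * rng.gauss(0.0, 1.0) for x in market]

raw = corr(stock_a, stock_b)                     # dominated by the market factor
resid = corr(market_residuals(stock_a, market),
             market_residuals(stock_b, market))  # market removed: near zero
```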

Article
Effect of Machine Entropy Production on the Optimal Performance of a Refrigerator
Entropy 2020, 22(9), 913; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090913 - 20 Aug 2020
Cited by 8
Abstract
The need for cooling is increasingly important in current applications, as environmental constraints become more and more restrictive. Therefore, the optimization of reverse cycle machines is currently required. This optimization can be split into two parts, namely, (1) the design optimization, leading to an optimal dimensioning to fulfill the specific demand (static or nominal steady-state optimization); and (2) the dynamic optimization, where the demand fluctuates and the system must be continuously adapted. Thus, the variability of the system load (with or without storage) requires its careful control and command. This paper is concerned with part (1) and proposes a novel and more complete model of an irreversible Carnot refrigerator that involves the coupling between the sink (source) and the machine through a heat transfer constraint. Moreover, it induces the choice of a reference heat transfer entropy, which is the heat transfer entropy at the source of a Carnot irreversible refrigerator. The thermodynamic optimization of the refrigerator provides new results regarding the optimal allocation of heat transfer conductances and the minimum energy consumption with the associated coefficient of performance (COP) when various forms of entropy production owing to internal irreversibility are considered. The reported results and their consequences represent a new fundamental step forward regarding the performance upper bound of the irreversible Carnot refrigerator. Full article
(This article belongs to the Special Issue Thermodynamics of Heat Pump and Refrigeration Cycles)

Article
Rate of Entropy Production in Evolving Interfaces and Membranes under Astigmatic Kinematics: Shape Evolution in Geometric-Dissipation Landscapes
Entropy 2020, 22(9), 909; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090909 - 19 Aug 2020
Cited by 2
Abstract
This paper presents theory and simulation of viscous dissipation in evolving interfaces and membranes under kinematic conditions known as astigmatic flow, ubiquitous during growth processes in nature. The essential aim is to characterize and explain the underlying connections between curvedness and shape evolution and the rate of entropy production due to viscous bending and torsion rates. The membrane dissipation model used here is known as the Boussinesq-Scriven fluid model. Since the standard approaches in morphological evolution are based on the average, Gaussian, and deviatoric curvatures, which comingle shape with curvedness, this paper introduces a novel decoupled approach whereby shape is independent of curvedness. In this curvedness-shape landscape, the entropy production surface under constant homogeneous normal velocity decays with growth but oscillates with shape changes. Saddles and spheres are minima, while cylindrical patches are maxima. The astigmatic flow trajectories on the entropy production surface show that only cylinders and spheres grow under constant shape. Small deviations from cylindrical shapes evolve towards spheres or saddles depending on the initial condition, where dissipation rates decrease. Taken together, the results and analysis provide novel and significant relations between shape evolution and viscous dissipation in deforming viscous membranes and surfaces. Full article
(This article belongs to the Special Issue Statistical Physics of Soft Matter and Complex Systems)
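One standard way to decouple shape from curvedness, assumed here purely for illustration (Koenderink's shape index and curvedness, not necessarily the exact parametrisation used in the paper), maps the two principal curvatures to a shape coordinate and a bending magnitude:

```python
import math

def shape_and_curvedness(k1, k2):
    """Koenderink decomposition of principal curvatures.

    curvedness C = sqrt((k1^2 + k2^2) / 2) measures how bent the patch is;
    shape index S in [-1, 1] encodes shape only: +/-1 for spherical
    caps/cups, +/-1/2 for cylindrical ridges/ruts, 0 for a symmetric saddle.
    """
    if k1 < k2:
        k1, k2 = k2, k1
    c = math.sqrt((k1 * k1 + k2 * k2) / 2.0)
    if c == 0.0:
        return 0.0, 0.0  # flat patch: shape index undefined by convention
    s = (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)
    return s, c

# Spherical cap: s = 1; cylindrical ridge: s = 1/2; symmetric saddle: s = 0.
```

In such coordinates, spheres and saddles sit at the extremes and the middle of the shape axis respectively, with cylinders halfway between, matching the minima and maxima of the entropy production surface described in the abstract.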

Article
Finite-Time Thermodynamics in Economics
Entropy 2020, 22(8), 891; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080891 - 13 Aug 2020
Cited by 9
Abstract
In this paper, we consider optimal trading processes in economic systems. The analysis is based on accounting for irreversibility factors using the welfare function concept. The existence of the welfare function is proved, the concept of capital dissipation is introduced as a measure of the irreversibility of processes in the microeconomic system, and the economic balances are recorded, including capital dissipation. Problems in the form of kinetic equations leading to given conditions of minimal dissipation are considered. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
Show Figures

Figure 1

Article
Optimization of a New Design of Molten Salt-to-CO2 Heat Exchanger Using Exergy Destruction Minimization
Entropy 2020, 22(8), 883; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080883 - 12 Aug 2020
Cited by 1
Abstract
One of the ways to make electricity from concentrated solar thermal energy cost-competitive is to increase the thermoelectric conversion efficiency. To achieve this objective, the most promising scheme is a molten salt central receiver coupled to a supercritical carbon dioxide cycle. A key element [...] Read more.
One of the ways to make electricity from concentrated solar thermal energy cost-competitive is to increase the thermoelectric conversion efficiency. To achieve this objective, the most promising scheme is a molten salt central receiver coupled to a supercritical carbon dioxide cycle. A key element to be developed in this scheme is the molten salt-to-CO2 heat exchanger. This paper presents a heat exchanger design that avoids molten salt plugging and the mechanical stress due to the high pressure of the CO2, while improving the heat transfer of the supercritical phase thanks to its compactness and high heat transfer area. This design is based on a honeycomb-like configuration, in which a thermal unit consists of a circular channel for the molten salt surrounded by six smaller trapezoidal ducts for the CO2. Further, an optimization based on exergy destruction minimization has been accomplished, obtaining the best working conditions of this heat exchanger: a temperature approach of 50 °C between both streams and a CO2 pressure drop of 2.7 bar. Full article
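For readers unfamiliar with the objective function, exergy destruction in a heat exchanger is conventionally tied to entropy generation through the Gouy-Stodola theorem. The sketch below is a textbook-level approximation, not the paper's model: it assumes constant specific heats and an incompressible pressure-drop term, both of which are simplifications for supercritical CO2.

```latex
\dot{E}_D = T_0\,\dot{S}_{\mathrm{gen}},
\qquad
\dot{S}_{\mathrm{gen}} \approx
\dot{m}_{\mathrm{salt}}\, c_{p,\mathrm{salt}} \ln\frac{T_{\mathrm{salt,out}}}{T_{\mathrm{salt,in}}}
+ \dot{m}_{\mathrm{CO_2}}\, c_{p,\mathrm{CO_2}} \ln\frac{T_{\mathrm{CO_2,out}}}{T_{\mathrm{CO_2,in}}}
+ \dot{m}_{\mathrm{CO_2}}\, \frac{\Delta p_{\mathrm{CO_2}}}{\rho_{\mathrm{CO_2}}\, \bar{T}_{\mathrm{CO_2}}}
```

Here \(T_0\) is the dead-state temperature; the first two terms capture entropy generated by heat transfer across the finite temperature approach, and the last term the contribution of the CO2 pressure drop, the two quantities the reported optimum (50 °C approach, 2.7 bar) trades off.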
(This article belongs to the Special Issue Thermodynamic Optimization of Complex Energy Systems)
Show Figures

Figure 1

Article
Direct and Indirect Effects—An Information Theoretic Perspective
Entropy 2020, 22(8), 854; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080854 - 31 Jul 2020
Cited by 3
Abstract
Information theoretic (IT) approaches to quantifying causal influences have experienced some popularity in the literature, in both theoretical and applied (e.g., neuroscience and climate science) domains. While these causal measures are desirable in that they are model agnostic and can capture non-linear interactions, [...] Read more.
Information theoretic (IT) approaches to quantifying causal influences have experienced some popularity in the literature, in both theoretical and applied (e.g., neuroscience and climate science) domains. While these causal measures are desirable in that they are model agnostic and can capture non-linear interactions, they are fundamentally different from common statistical notions of causal influence in that they (1) compare distributions over the effect rather than values of the effect and (2) are defined with respect to random variables representing a cause rather than specific values of a cause. We here present IT measures of direct, indirect, and total causal effects. The proposed measures are unlike existing IT techniques in that they enable measuring causal effects that are defined with respect to specific values of a cause while still offering the flexibility and general applicability of IT techniques. We provide an identifiability result and demonstrate application of the proposed measures in estimating the causal effect of the El Niño–Southern Oscillation on temperature anomalies in the North American Pacific Northwest. Full article
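The distinction the abstract draws, measuring the effect of a specific value of a cause by comparing distributions over the effect, can be illustrated with a toy discrete model. Everything below (the Bernoulli structural model, the variable names) is hypothetical and chosen only so that intervening on X coincides with conditioning, since X is exogenous here; the paper's measures are more general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural model (hypothetical, for illustration only):
# X ~ Bernoulli(0.3), Y = X XOR N with noise N ~ Bernoulli(0.1).
n = 200_000
x = rng.random(n) < 0.3
noise = rng.random(n) < 0.1
y = x ^ noise

def dist(v):
    """Empirical distribution [P(0), P(1)] of a binary array."""
    p1 = v.mean()
    return np.array([1 - p1, p1])

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return float(np.sum(p * np.log2(p / q)))

p_y = dist(y)                     # marginal distribution P(Y)
# Because X is exogenous in this toy model, P(Y | do(X=1)) can be
# estimated from the X = 1 subsample.
p_y_do1 = dist(y[x])
effect_of_x1 = kl(p_y_do1, p_y)   # effect of the *specific value* X = 1
```

The quantity compares the whole distribution of Y under the intervention with its marginal, rather than comparing expected values of Y, which is exactly the property the abstract highlights.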
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)
Show Figures

Figure 1

Article
Forecasting Bitcoin Trends Using Algorithmic Learning Systems
Entropy 2020, 22(8), 838; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080838 - 30 Jul 2020
Cited by 3
Abstract
This research examines the ability of two forecasting methods to forecast Bitcoin's price trends. The research is based on Bitcoin–US dollar prices from the beginning of 2012 until the end of March 2020. Such a long period of time that includes volatile [...] Read more.
This research examines the ability of two forecasting methods to forecast Bitcoin's price trends. The research is based on Bitcoin–US dollar prices from the beginning of 2012 until the end of March 2020. Such a long period of time, which includes volatile stretches with strong up- and downtrends, introduces challenges to any forecasting system. We use particle swarm optimization to find the best combinations of forecasting setups. The results show that Bitcoin's price changes do not follow the "random walk" of the efficient market hypothesis and that both the Darvas Box and Linear Regression techniques can help traders predict Bitcoin's price trends. We also find that both methodologies work better at predicting an uptrend than a downtrend. The best setup for the Darvas Box strategy is six days of formation. A Darvas Box uptrend signal was found to be efficient in predicting four sequential daily returns, while a downtrend signal faded after two days on average. The best setup for the Linear Regression model is 42 days with one standard deviation. Full article
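The six-day formation rule can be made concrete with a minimal breakout scanner. This is a hedged sketch of generic Darvas-box logic under common conventions, not the authors' implementation; the function name and the +1/-1 signal encoding are hypothetical.

```python
import numpy as np

def darvas_signals(highs, lows, closes, formation=6):
    """Scan a daily price series for Darvas-box breakouts.

    A box is taken as the running high and low of the previous
    `formation` days; a close above the box top is an uptrend
    signal (+1), a close below the box bottom a downtrend
    signal (-1).  Illustrative sketch only.
    """
    signals = np.zeros(len(closes), dtype=int)
    for t in range(formation, len(closes)):
        box_top = max(highs[t - formation:t])
        box_bottom = min(lows[t - formation:t])
        if closes[t] > box_top:
            signals[t] = 1          # breakout above the box
        elif closes[t] < box_bottom:
            signals[t] = -1         # breakdown below the box
    return signals
```

For example, six flat days followed by a close above the box high produce a single +1 signal on the breakout day; the paper's finding is that such a signal tends to persist for about four daily returns.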
Show Figures

Figure 1

Article
Deep Learning for Stock Market Prediction
Entropy 2020, 22(8), 840; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080840 - 30 Jul 2020
Cited by 35
Abstract
The prediction of stock group values has always been attractive and challenging for shareholders due to its inherent dynamics, non-linearity, and complex nature. This paper concentrates on the future prediction of stock market groups. Four groups, namely diversified financials, petroleum, non-metallic minerals, and [...] Read more.
The prediction of stock group values has always been attractive and challenging for shareholders due to its inherent dynamics, non-linearity, and complex nature. This paper concentrates on the future prediction of stock market groups. Four groups, namely diversified financials, petroleum, non-metallic minerals, and basic metals, from the Tehran Stock Exchange were chosen for experimental evaluations. Data were collected for the groups based on 10 years of historical records. The value predictions are created for 1, 2, 5, 10, 15, 20, and 30 days in advance. Various machine learning algorithms were utilized for prediction of future values of stock market groups. We employed decision tree, bagging, random forest, adaptive boosting (AdaBoost), gradient boosting, and extreme gradient boosting (XGBoost) models, as well as artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM). Ten technical indicators were selected as the inputs into each of the prediction models. Finally, the results of the predictions were presented for each technique based on four metrics. Among all algorithms used in this paper, LSTM shows the most accurate results, with the highest model-fitting ability. In addition, for tree-based models, there is often intense competition between AdaBoost, gradient boosting, and XGBoost. Full article
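The abstract does not list the ten technical indicators used as model inputs; as a hedged illustration of what such features look like, here are two indicators that commonly appear in studies of this kind (simple moving average and RSI). The window lengths are hypothetical choices, not the paper's.

```python
import numpy as np

def sma(prices, window=10):
    """Simple moving average over a trailing window."""
    kernel = np.ones(window) / window
    return np.convolve(prices, kernel, mode="valid")

def rsi(prices, window=14):
    """Relative Strength Index, simple-average variant:
    100 - 100 / (1 + avg_gain / avg_loss) over the last window."""
    deltas = np.diff(prices)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    avg_gain = gains[-window:].mean()
    avg_loss = losses[-window:].mean()
    if avg_loss == 0:
        return 100.0            # pure uptrend saturates the index
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

In a pipeline like the paper's, indicator values computed per trading day would be stacked into the feature vectors fed to each of the nine models.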
(This article belongs to the Special Issue Information Transfer in Multilayer/Deep Architectures)
Show Figures

Figure 1

Article
Hybrid Quantum-Classical Neural Network for Calculating Ground State Energies of Molecules
Entropy 2020, 22(8), 828; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080828 - 29 Jul 2020
Cited by 3
Abstract
We present a hybrid quantum-classical neural network that can be trained to perform electronic structure calculations and generate potential energy curves of simple molecules. The method is based on the combination of parameterized quantum circuits and measurements. With unsupervised training, the neural network [...] Read more.
We present a hybrid quantum-classical neural network that can be trained to perform electronic structure calculations and generate potential energy curves of simple molecules. The method is based on the combination of parameterized quantum circuits and measurements. With unsupervised training, the neural network can generate electronic potential energy curves based on training at certain bond lengths. To demonstrate the power of the proposed method, we present results of using the hybrid quantum-classical neural network to calculate the ground state potential energy curves of simple molecules such as H2, LiH, and BeH2. The results are very accurate, and the approach could potentially be used to generate complex molecular potential energy surfaces. Full article
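The variational idea behind such hybrid schemes, a classical optimizer tuning quantum-circuit parameters to minimize a measured energy, can be sketched with a classical simulation. Everything below is a toy stand-in rather than the paper's architecture: the 2x2 Hamiltonian is made up, the one-parameter ansatz replaces a real circuit, and a grid search replaces the neural network's training loop.

```python
import numpy as np

# Toy 2x2 Hamiltonian standing in for a molecular Hamiltonian
# (hypothetical numbers, illustration only).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter single-qubit trial state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation <psi|H|psi> -- the quantity a real device would
    estimate from repeated measurements."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Classical outer loop: a crude grid search over the circuit parameter.
thetas = np.linspace(0, 2 * np.pi, 2001)
e_min = min(energy(t) for t in thetas)
exact = np.linalg.eigvalsh(H)[0]   # exact ground energy for comparison
```

By the variational principle, `energy(theta)` upper-bounds the ground state energy for every `theta`, so minimizing it recovers the exact value here; the paper's contribution is learning this minimization across bond lengths at once.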
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))
Show Figures

Figure 1

Article
An Efficient Method Based on Framelets for Solving Fractional Volterra Integral Equations
Entropy 2020, 22(8), 824; https://0-doi-org.brum.beds.ac.uk/10.3390/e22080824 - 28 Jul 2020
Cited by 7
Abstract
This paper is devoted to shedding some light on the advantages of using tight frame systems for solving some types of fractional Volterra integral equations (FVIEs) involving the Caputo fractional-order derivative. A tight frame, or simply a framelet, is a generalization of [...] Read more.
This paper is devoted to shedding some light on the advantages of using tight frame systems for solving some types of fractional Volterra integral equations (FVIEs) involving the Caputo fractional-order derivative. A tight frame, or simply a framelet, is a generalization of an orthonormal basis. Many applications are modeled by non-negative functions; taking this into account, we consider framelet systems generated using refinable non-negative functions, namely B-splines. The FVIEs considered are reduced to a linear system of equations and solved numerically based on a collocation discretization technique. We present many important examples of FVIEs for which accurate and efficient numerical solutions have been obtained, with the numerical results converging very rapidly to the exact ones. Full article
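The reduction of a Volterra integral equation to a linear system can be illustrated with ordinary trapezoidal quadrature standing in for the paper's framelet basis. The sketch below treats a classical (non-fractional) second-kind equation with a hypothetical test problem; the paper's framelet collocation is a more sophisticated version of the same reduce-to-linear-system idea.

```python
import numpy as np

def solve_vie(f, K, T=1.0, n=200):
    """Second-kind Volterra integral equation
        u(t) = f(t) + int_0^t K(t, s) u(s) ds,
    discretized step by step with the trapezoidal rule.  Each step is
    one row of the triangular linear system the discretization yields."""
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    u = np.zeros(n + 1)
    u[0] = f(t[0])                  # the integral vanishes at t = 0
    for i in range(1, n + 1):
        acc = 0.5 * K(t[i], t[0]) * u[0]
        acc += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        # implicit trapezoidal endpoint: solve the row for u[i]
        u[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, u

# Test problem: u(t) = exp(t) solves u(t) = 1 + int_0^t u(s) ds.
t, u = solve_vie(f=lambda x: 1.0, K=lambda x, s: 1.0)
```

Because the kernel only looks backward in time, the linear system is lower triangular and can be solved by forward substitution, as the loop above does; a collocation scheme in a framelet basis produces a denser but analogous system.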
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines II)
Show Figures

Figure 1