Entropy, Volume 23, Issue 6 (June 2021) – 148 articles

Cover Story (view full-size image): The thermodynamics of information and computation examines the requirements for a physical process that manipulates information to approach thermodynamic reversibility. A key principle, due to Landauer, tells us that a strict increase in total entropy results from the loss of correlations (i.e., mutual information) between deterministically computed subsystems when information in isolated subsystems is ejected to a thermal environment. Reversible computing refers to computation without ejection of correlated information, which can avoid the associated entropy increase. We show that the classic understanding of these topics from Landauer and Bennett is supported by the modern theory of non-equilibrium quantum thermodynamics, and we outline an approach for deriving fundamental limits of energy efficiency in reversible computing machines as a function of speed. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 9443 KiB  
Article
Are Mobility and COVID-19 Related? A Dynamic Analysis for Portuguese Districts
by António Casa Nova, Paulo Ferreira, Dora Almeida, Andreia Dionísio and Derick Quintino
Entropy 2021, 23(6), 786; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060786 - 21 Jun 2021
Cited by 12 | Viewed by 2877
Abstract
In this research work, we propose to assess the dynamic correlation between different mobility indices, measured on a daily basis, and the new cases of COVID-19 in the different Portuguese districts. The analysis is based on global correlation measures, which capture linear and non-linear relationships in time series, in a robust and dynamic way, in a period without significant changes of non-pharmacological measures. The results show that mobility in retail and recreation, grocery and pharmacy, and public transport shows a higher correlation with new COVID-19 cases than mobility in parks, workplaces or residences. It should also be noted that this relationship is lower in districts with lower population density, which leads to the need for differentiated confinement policies in order to minimize the impacts of a terrible economic and social crisis. Full article
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
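The study's global correlation measures capture non-linear as well as linear dependence; as a much simpler stand-in, a sliding-window Pearson correlation between a daily mobility index and new cases illustrates the idea of a dynamic, window-by-window relationship (the data below are illustrative, not from the study):

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rolling_correlation(x, y, window):
    """Correlation recomputed over each sliding daily window."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

# Toy daily series: a mobility index and new cases that roughly track it.
mobility = [50, 52, 55, 60, 58, 62, 65, 70, 68, 72]
cases    = [10, 11, 12, 14, 13, 15, 16, 18, 17, 19]
print(rolling_correlation(mobility, cases, window=5))
```

The paper's measures additionally handle non-linear dependence robustly; this sketch only conveys the "dynamic" aspect of recomputing the relationship day by day.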
20 pages, 1881 KiB  
Article
Performance Analysis and Optimization of a Cooperative Transmission Protocol in NOMA-Assisted Cognitive Radio Networks with Discrete Energy Harvesting
by Hui Wang, Ronghua Shi, Kun Tang, Jian Dong and Shaowei Liao
Entropy 2021, 23(6), 785; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060785 - 20 Jun 2021
Cited by 8 | Viewed by 2339
Abstract
In this paper, we propose a spectrum-sharing protocol for a cooperative cognitive radio network based on non-orthogonal multiple access technology, where the base station (BS) transmits the superimposed signal to the primary user and secondary user with/without the assistance of a relay station (RS) by adopting the decode-and-forward technique. RS performs discrete-time energy harvesting for opportunistically cooperative transmission. If the RS harvests sufficient energy, the system performs cooperative transmission; otherwise, the system performs direct transmission. Moreover, the outage probabilities and outage capacities of both primary and secondary systems are analyzed, and the corresponding closed-form expressions are derived. In addition, one optimization problem is formulated, where our objective is to maximize the energy efficiency of the secondary system while ensuring that of the primary system exceeds or equals a threshold value. A joint optimization algorithm of power allocation at BS and RS is considered to solve the optimization problem and to realize a mutual improvement in the performance of energy efficiency for both the primary and secondary systems. The simulation results demonstrate the validity of the analysis results and prove that the proposed transmission scheme has a higher energy efficiency than the direct transmission scheme and the transmission scheme with simultaneous wireless information and power transfer technology. Full article
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications)
14 pages, 837 KiB  
Article
Improving Log-Likelihood Ratio Estimation with Bi-Gaussian Approximation under Multiuser Interference Scenarios
by Yu Fu and Hongwen Yang
Entropy 2021, 23(6), 784; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060784 - 20 Jun 2021
Viewed by 2371
Abstract
Accurate estimation of channel log-likelihood ratio (LLR) is crucial to the decoding of modern channel codes like turbo, low-density parity-check (LDPC), and polar codes. Under an additive white Gaussian noise (AWGN) channel, the calculation of LLR is relatively straightforward since the closed-form expression for the channel likelihood function can be perfectly known to the receiver. However, it would be much more complicated for heterogeneous networks where the global noise (i.e., noise plus interference) may be dominated by non-Gaussian interference with an unknown distribution. Although the LLR can still be calculated by approximating the distribution of global noise as Gaussian, it will cause performance loss due to the non-Gaussian nature of global noise. To address this problem, we propose to use bi-Gaussian (BG) distribution to approximate the unknown distribution of global noise, for which the two parameters of BG distribution can easily be estimated from the second and fourth moments of the overall received signals without any knowledge of interfering channel state information (CSI) or signaling format information. Simulation results indicate that the proposed BG approximation can effectively improve the word error rate (WER) performance. The gain of BG approximation over Gaussian approximation depends heavily on the interference structure. For the scenario of a single BPSK interferer with a 5 dB interference-to-noise ratio (INR), we observed a gain of about 0.6 dB. The improved LLR estimation can also accelerate the convergence of iterative decoding, thus incurring a lower overall decoding complexity. In general, the overall decoding complexity can be reduced by 25 to 50%. Full article
(This article belongs to the Special Issue Information Theory for Channel Coding)
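As a rough sketch of the moment-based estimation described in the abstract, one can match the second and fourth moments of an equal-weight, zero-mean two-component Gaussian mixture; the equal-weight zero-mean form and the BPSK mapping below are assumptions for illustration, and the paper's exact estimator may differ:

```python
from math import exp, log, pi, sqrt

def bg_params_from_moments(m2, m4):
    """Recover the two variances (v1, v2) of an assumed equal-weight,
    zero-mean bi-Gaussian mixture from its second and fourth moments.
    For p(n) = 0.5*N(0, v1) + 0.5*N(0, v2):
        m2 = (v1 + v2) / 2,   m4 = 3*(v1**2 + v2**2) / 2
    so v1, v2 are the roots of x**2 - s*x + p with s, p as below."""
    s = 2.0 * m2                  # v1 + v2
    q = 2.0 * m4 / 3.0            # v1**2 + v2**2
    p = (s * s - q) / 2.0         # v1 * v2
    r = sqrt(max(s * s - 4.0 * p, 0.0))
    return (s - r) / 2.0, (s + r) / 2.0

def gauss(x, v):
    return exp(-x * x / (2.0 * v)) / sqrt(2.0 * pi * v)

def llr_bpsk(y, v1, v2):
    """Channel LLR for BPSK symbols (+1/-1) under the bi-Gaussian
    global-noise model: log f(y-1) - log f(y+1)."""
    f = lambda n: 0.5 * gauss(n, v1) + 0.5 * gauss(n, v2)
    return log(f(y - 1.0) / f(y + 1.0))

# Exact moments of a {1, 4}-variance mixture recover those variances.
v1, v2 = bg_params_from_moments(2.5, 25.5)
print(v1, v2, llr_bpsk(0.8, v1, v2))
```

In practice m2 and m4 would be empirical moments of the received samples, which is what makes the scheme blind to the interferer's CSI and signaling format.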
57 pages, 4131 KiB  
Article
The Radically Embodied Conscious Cybernetic Bayesian Brain: From Free Energy to Free Will and Back Again
by Adam Safron
Entropy 2021, 23(6), 783; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060783 - 20 Jun 2021
Cited by 18 | Viewed by 14163
Abstract
Drawing from both enactivist and cognitivist perspectives on mind, I propose that explaining teleological phenomena may require reappraising both “Cartesian theaters” and mental homunculi in terms of embodied self-models (ESMs), understood as body maps with agentic properties, functioning as predictive-memory systems and cybernetic controllers. Quasi-homuncular ESMs are suggested to constitute a major organizing principle for neural architectures due to their initial and ongoing significance for solutions to inference problems in cognitive (and affective) development. Embodied experiences provide foundational lessons in learning curriculums in which agents explore increasingly challenging problem spaces, so answering an unresolved question in Bayesian cognitive science: what are biologically plausible mechanisms for equipping learners with sufficiently powerful inductive biases to adequately constrain inference spaces? Drawing on models from neurophysiology, psychology, and developmental robotics, I describe how embodiment provides fundamental sources of empirical priors (as reliably learnable posterior expectations). If ESMs play this kind of foundational role in cognitive development, then bidirectional linkages will be found between all sensory modalities and frontal-parietal control hierarchies, so infusing all senses with somatic-motoric properties, thereby structuring all perception by relevant affordances, so solving frame problems for embodied agents. Drawing upon the Free Energy Principle and Active Inference framework, I describe a particular mechanism for intentional action selection via consciously imagined (and explicitly represented) goal realization, where contrasts between desired and present states influence ongoing policy selection via predictive coding mechanisms and backward-chained imaginings (as self-realizing predictions). 
This embodied developmental legacy suggests a mechanism by which imaginings can be intentionally shaped by (internalized) partially-expressed motor acts, so providing means of agentic control for attention, working memory, imagination, and behavior. I further describe the nature(s) of mental causation and self-control, and also provide an account of readiness potentials in Libet paradigms wherein conscious intentions shape causal streams leading to enaction. Finally, I provide neurophenomenological handlings of prototypical qualia including pleasure, pain, and desire in terms of self-annihilating free energy gradients via quasi-synesthetic interoceptive active inference. In brief, this manuscript is intended to illustrate how radically embodied minds may create foundations for intelligence (as capacity for learning and inference), consciousness (as somatically-grounded self-world modeling), and will (as deployment of predictive models for enacting valued goals). Full article
(This article belongs to the Special Issue Applying the Free-Energy Principle to Complex Adaptive Systems)
21 pages, 5118 KiB  
Article
Variable-Order Fractional Models for Wall-Bounded Turbulent Flows
by Fangying Song and George Em Karniadakis
Entropy 2021, 23(6), 782; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060782 - 20 Jun 2021
Cited by 3 | Viewed by 2707
Abstract
Modeling of wall-bounded turbulent flows is still an open problem in classical physics, with relatively slow progress in the last few decades beyond the log law, which only describes the intermediate region in wall-bounded turbulence, i.e., 30–50 y+ to 0.1–0.2 R+ in a pipe of radius R. Here, we propose a fundamentally new approach based on fractional calculus to model the entire mean velocity profile from the wall to the centerline of the pipe. Specifically, we represent the Reynolds stresses with a non-local fractional derivative of variable-order that decays with the distance from the wall. Surprisingly, we find that this variable fractional order has a universal form for all Reynolds numbers and for three different flow types, i.e., channel flow, Couette flow, and pipe flow. We first use existing databases from direct numerical simulations (DNSs) to learn the variable-order function and subsequently we test it against other DNS data and experimental measurements, including the Princeton superpipe experiments. Taken together, our findings reveal the continuous change in rate of turbulent diffusion from the wall as well as the strong nonlocality of turbulent interactions that intensify away from the wall. Moreover, we propose alternative formulations, including a divergence variable fractional (two-sided) model for turbulent flows. The total shear stress is represented by a two-sided symmetric variable fractional derivative. The numerical results show that this formulation can lead to smooth fractional-order profiles in the whole domain. This new model improves the one-sided model, which is considered in the half domain (wall to centerline) only. We use a finite difference method for solving the inverse problem, but we also introduce the fractional physics-informed neural network (fPINN) for solving the inverse and forward problems much more efficiently. 
In addition to the aforementioned fully-developed flows, we model turbulent boundary layers and discuss how the streamwise variation affects the universal curve. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)
15 pages, 1289 KiB  
Article
Effect of Ergodic and Non-Ergodic Fluctuations on a Charge Diffusing in a Stochastic Magnetic Field
by Gerardo Aquino, Kristopher J. Chandía and Mauro Bologna
Entropy 2021, 23(6), 781; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060781 - 19 Jun 2021
Cited by 1 | Viewed by 1884
Abstract
In this paper, we study the basic problem of a charged particle in a stochastic magnetic field. We consider dichotomous fluctuations of the magnetic field where the sojourn times in the two states are distributed according to a given waiting-time distribution, either with Poisson or non-Poisson statistics, including the case of distributions with a diverging mean time between changes of the field, corresponding to an ergodicity-breaking condition. We provide analytical and numerical results for all cases, evaluating the average and the second moment of the position and velocity of the particle. We show that the field fluctuations induce diffusion of the charge with either normal or anomalous properties, depending on the statistics of the fluctuations, with distinct regimes from those observed, e.g., in standard Continuous-Time Random Walk models. Full article
(This article belongs to the Special Issue Non-ergodic Stochastic Processes)
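A minimal sketch of the fluctuation model only (not the full charge dynamics): a dichotomous field that flips sign after each sojourn, with sojourn times drawn either from an exponential law (Poisson statistics, finite mean) or from a Pareto law with exponent below one (diverging mean sojourn, the ergodicity-breaking regime). Function names and parameter values are illustrative:

```python
import random

def telegraph_path(waiting_time, total_time, b0=1.0, seed=0):
    """Sample a dichotomous field B(t) in {+b0, -b0}: the field flips
    after each sojourn drawn from waiting_time(rng).
    Returns the list of (switch_time, field_value) pairs."""
    rng = random.Random(seed)
    t, state = 0.0, b0
    switches = [(0.0, state)]
    while t < total_time:
        t += waiting_time(rng)
        state = -state
        switches.append((t, state))
    return switches

# Poisson statistics: exponential sojourns with finite mean.
poisson = lambda rng: rng.expovariate(1.0)
# Non-Poisson statistics with diverging mean sojourn (Pareto, alpha < 1):
# time averages no longer converge, i.e., ergodicity breaking.
heavy = lambda rng: rng.paretovariate(0.8)

print(len(telegraph_path(poisson, 10.0)),
      len(telegraph_path(heavy, 10.0)))
```

Integrating a charge's equations of motion over such a path is what the paper analyzes; this snippet only generates the driving field.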
13 pages, 2132 KiB  
Article
Socioeconomic Patterns of Twitter User Activity
by Jacob Levy Abitbol and Alfredo J. Morales
Entropy 2021, 23(6), 780; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060780 - 19 Jun 2021
Cited by 4 | Viewed by 2415
Abstract
Stratifying behaviors based on demographics and socioeconomic status is crucial for political and economic planning. Traditional methods to gather income and demographic information, like national censuses, require costly large-scale surveys both in terms of the financial and the organizational resources needed for their successful collection. In this study, we use data from social media to expose how behavioral patterns in different socioeconomic groups can be used to infer an individual’s income. In particular, we look at the way people explore cities and use topics of conversation online as a means of inferring individual socioeconomic status. Privacy is preserved by using anonymized data, and abstracting human mobility and online conversation topics as aggregated high-dimensional vectors. We show that mobility and hashtag activity are good predictors of income and that the highest and lowest socioeconomic quantiles have the most differentiated behavior across groups. Full article
(This article belongs to the Special Issue Swarms and Network Intelligence)
19 pages, 1976 KiB  
Article
On Entropy, Information, and Conservation of Information
by Yunus A. Çengel
Entropy 2021, 23(6), 779; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060779 - 19 Jun 2021
Cited by 7 | Viewed by 6659
Abstract
The term entropy is used with different meanings in different contexts, sometimes in contradictory ways, resulting in misunderstandings and confusion. The root cause of the problem is the close resemblance of the defining mathematical expressions of entropy in statistical thermodynamics and information in the communications field, also called entropy, differing only by a constant factor with the unit ‘J/K’ in thermodynamics and ‘bits’ in information theory. The thermodynamic property entropy is closely associated with the physical quantities of thermal energy and temperature, while the entropy used in the communications field is a mathematical abstraction based on probabilities of messages. The terms information and entropy are often used interchangeably in several branches of science. This practice gives rise to the phrase conservation of entropy in the sense of conservation of information, which contradicts the fundamental increase-of-entropy principle of thermodynamics, an expression of the second law. The aim of this paper is to clarify matters and eliminate confusion by putting things into their rightful places within their domains. The notion of conservation of information is also put into a proper perspective. Full article
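The "constant factor" the abstract refers to can be made concrete: Shannon entropy in bits and Gibbs entropy in J/K differ by kB ln 2 per bit, as in this small sketch:

```python
from math import log2, log

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact by SI definition)

def shannon_entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

h = shannon_entropy_bits([0.5, 0.5])  # fair coin: exactly 1 bit
# The same sum taken with k_B * ln(.) instead of log2(.) gives the
# statistical-thermodynamic (Gibbs) entropy in J/K: a factor k_B*ln(2)
# per bit, which is the only difference between the two expressions.
s = h * K_B * log(2)
print(h, s)
```

The numerical gulf between ~1 bit and ~1e-23 J/K is part of why the two uses of "entropy" feel unrelated despite sharing one formula.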
16 pages, 5730 KiB  
Article
Global Sensitivity Analysis Based on Entropy: From Differential Entropy to Alternative Measures
by Zdeněk Kala
Entropy 2021, 23(6), 778; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060778 - 19 Jun 2021
Cited by 11 | Viewed by 6108
Abstract
Differential entropy can be negative, while discrete entropy is always non-negative. This article shows that negative entropy is a significant flaw when entropy is used as a sensitivity measure in global sensitivity analysis. Global sensitivity analysis based on differential entropy cannot have negative entropy, just as Sobol sensitivity analysis does not have negative variance. Entropy is similar to variance but does not have the same properties. An alternative sensitivity measure based on the approximation of the differential entropy using dome-shaped functionals with non-negative values is proposed in the article. Case studies have shown that new sensitivity measures lead to a rational structure of sensitivity indices with a significantly lower proportion of higher-order sensitivity indices compared to other types of distributional sensitivity analysis. In terms of the concept of sensitivity analysis, a decrease in variance to zero means a transition from the differential to discrete entropy. The form of this transition is an open question, which can be studied using other scientific disciplines. The search for new functionals for distributional sensitivity analysis is not closed, and other suitable sensitivity measures may be found. Full article
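The flaw the abstract identifies is easy to demonstrate: the differential entropy of a Gaussian, h(X) = ½ ln(2πeσ²) nats, turns negative once σ drops below 1/√(2πe) ≈ 0.242, whereas variance can never go negative:

```python
from math import log, pi, e, sqrt

def normal_differential_entropy(sigma):
    """Differential entropy h(X) = 0.5 * ln(2*pi*e*sigma^2), in nats,
    for X ~ N(mu, sigma^2); independent of the mean mu."""
    return 0.5 * log(2 * pi * e * sigma ** 2)

sigma_zero = 1.0 / sqrt(2 * pi * e)  # entropy crosses zero here
print(normal_differential_entropy(1.0))   # positive
print(normal_differential_entropy(0.1))   # negative
print(sigma_zero)
```

A sensitivity index built directly from such a quantity can therefore change sign as an input's spread shrinks, which is the motivation for the non-negative dome-shaped functionals proposed in the article.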
19 pages, 978 KiB  
Article
Deep Learning for Walking Behaviour Detection in Elderly People Using Smart Footwear
by Rocío Aznar-Gimeno, Gorka Labata-Lezaun, Ana Adell-Lamora, David Abadía-Gallego, Rafael del-Hoyo-Alonso and Carlos González-Muñoz
Entropy 2021, 23(6), 777; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060777 - 19 Jun 2021
Cited by 13 | Viewed by 4231
Abstract
The increase in the proportion of elderly people in Europe brings with it certain challenges that society needs to address, such as custodial care. We propose a scalable, easily modulated and live assistive technology system, based on comfortable smart footwear capable of detecting walking behaviour, in order to prevent possible health problems in the elderly, facilitating their urban life as independently and safely as possible. This brings with it the challenge of handling the large amounts of data generated, transmitting and pre-processing that information and analysing it with the aim of obtaining useful information in real/near-real time. This is the basis of information theory. This work presents a complete system aimed at elderly people that can detect different user behaviours/events (sitting, standing without imbalance, standing with imbalance, walking, running, tripping) through information acquired from 20 types of sensor measurements (16 piezoelectric pressure sensors, one accelerometer returning readings for the three axes and one temperature sensor) and warn their relatives about possible risks in near-real time. For the detection of these events, a hierarchical structure of cascading binary models is designed and applied using artificial neural network (ANN) algorithms and deep learning techniques. The best models are achieved with convolutional layered ANNs and multilayer perceptrons. The overall event detection performance achieves an average accuracy and area under the ROC curve of 0.84 and 0.96, respectively. Full article
16 pages, 512 KiB  
Article
Multivariable Heuristic Approach to Intrusion Detection in Network Environments
by Marcin Niemiec, Rafał Kościej and Bartłomiej Gdowski
Entropy 2021, 23(6), 776; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060776 - 19 Jun 2021
Cited by 5 | Viewed by 2396
Abstract
The Internet is an inseparable part of our contemporary lives. This means that protection against threats and attacks is crucial for major companies and for individual users. There is a demand for the ongoing development of methods for ensuring security in cyberspace. A crucial cybersecurity solution is intrusion detection systems, which detect attacks in network environments and respond appropriately. This article presents a new multivariable heuristic intrusion detection algorithm based on different types of flags and values of entropy. The data is shared by organisations to help increase the effectiveness of intrusion detection. The authors also propose default values for the parameters of the heuristic algorithm and values regarding detection thresholds. This solution has been implemented in a well-known, open-source system and verified with a series of tests. Additionally, the authors investigated how updating the variables affects the intrusion detection process. The results confirmed the effectiveness of the proposed approach and heuristic algorithm. Full article
(This article belongs to the Special Issue Statistical Methods in Malware Mitigation)
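The article's algorithm combines flag counts with entropy values; as a minimal illustration of the entropy ingredient only (the field choice and traffic windows below are hypothetical, and the article's thresholds differ), the normalized entropy of one header field over a traffic window separates concentrated traffic from scan-like traffic:

```python
from collections import Counter
from math import log2

def field_entropy(values):
    """Normalized Shannon entropy of one header field (e.g. destination
    ports) observed in a traffic window: 0 means a single value,
    1 means a uniform spread over all observed values."""
    counts = Counter(values)
    n = len(values)
    h = -sum(c / n * log2(c / n) for c in counts.values())
    max_h = log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

# Normal window: traffic concentrated on a few service ports.
normal = [80] * 40 + [443] * 50 + [22] * 10
# Port-scan window: many distinct destination ports, entropy near 1.
scan = list(range(1000, 1100))
print(field_entropy(normal), field_entropy(scan))
```

A heuristic detector would track such per-window values for several fields and flags and raise an alert when one crosses its configured threshold.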
10 pages, 2292 KiB  
Article
Confined Quantum Hard Spheres
by Sergio Contreras and Alejandro Gil-Villegas
Entropy 2021, 23(6), 775; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060775 - 18 Jun 2021
Viewed by 2104
Abstract
We present computer simulation and theoretical results for a system of N Quantum Hard Spheres (QHS) particles of diameter σ and mass m at temperature T, confined between parallel hard walls separated by a distance Hσ, with H ≥ 1. Semiclassical Monte Carlo computer simulations were performed, adapted to a confined space, considering effects in terms of the density of particles ρ* = N/V, where V is the accessible volume, the inverse wall separation H⁻¹ and de Broglie's thermal wavelength λB = h/√(2πmkT), where k and h are Boltzmann's and Planck's constants, respectively. For the cases of extreme and maximum confinement, 0.5 < H⁻¹ < 1 and H⁻¹ = 1, respectively, analytical results can be given based on an extension for quantum systems of the Helmholtz free energies for the corresponding classical systems. Full article
(This article belongs to the Section Statistical Physics)
11 pages, 3058 KiB  
Article
GIS Partial Discharge Pattern Recognition Based on a Novel Convolutional Neural Networks and Long Short-Term Memory
by Tingliang Liu, Jing Yan, Yanxin Wang, Yifan Xu and Yiming Zhao
Entropy 2021, 23(6), 774; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060774 - 18 Jun 2021
Cited by 20 | Viewed by 2240
Abstract
Distinguishing the types of partial discharge (PD) caused by different insulation defects in gas-insulated switchgear (GIS) is a great challenge in the power industry, and improving the recognition accuracy of the relevant models is one of the key problems. In this paper, a convolutional neural network and long short-term memory (CNN-LSTM) model is proposed, which can effectively extract and utilize the spatiotemporal characteristics of PD input signals. First, the spatial characteristics of higher-level PD signals can be obtained through the CNN network, but because CNN is a deep feedforward neural network, it does not have the ability to process time-series data. The PD voltage signal is related to the time dimension, so LSTM saves and analyzes the previous voltage signal information, realizes the modeling of the time dependence of the data, and improves the accuracy of the PD signal pattern recognition. Finally, the pattern recognition results based on CNN-LSTM are given and compared with those based on other traditional analysis methods. The results show that the pattern recognition rate of this method is the highest, with an average of 97.9%, and its overall accuracy is better than that of other traditional analysis methods. The CNN-LSTM model provides a reliable reference for GIS PD diagnosis. Full article
22 pages, 934 KiB  
Article
Robust Universal Inference
by Amichai Painsky and Meir Feder
Entropy 2021, 23(6), 773; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060773 - 18 Jun 2021
Cited by 3 | Viewed by 2601
Abstract
Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the train-set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of “reasonable” models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
18 pages, 360 KiB  
Article
Timelessness Strictly inside the Quantum Realm
by Knud Thomsen
Entropy 2021, 23(6), 772; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060772 - 18 Jun 2021
Cited by 5 | Viewed by 3138
Abstract
Time is one of the undisputed foundations of our life in the real world. Here it is argued that inside small isolated quantum systems, time does not pass as we are used to, and it is primarily in this sense that quantum objects enjoy only limited reality. Quantum systems, which we know, are embedded in the everyday classical world. Their preparation as well as their measurement-phases leave durable records and traces in the entropy of the environment. The Landauer Principle then gives a quantitative threshold for irreversibility. With double slit experiments and tunneling as paradigmatic examples, it is proposed that a label of timelessness offers clues for rendering a Copenhagen-type interpretation of quantum physics more “realistic” and acceptable by providing a coarse but viable link from the fundamental quantum realm to the classical world which humans directly experience. Full article
(This article belongs to the Special Issue Time, Causality, and Entropy)
13 pages, 1952 KiB  
Article
Unifying Node Labels, Features, and Distances for Deep Network Completion
by Qiang Wei and Guangmin Hu
Entropy 2021, 23(6), 771; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060771 - 18 Jun 2021
Cited by 3 | Viewed by 2005
Abstract
Collected network data are often incomplete, with both missing nodes and missing edges. Thus, network completion that infers the unobserved part of the network is essential for downstream tasks. Despite the emerging literature related to network recovery, the potential information has not been effectively exploited. In this paper, we propose a novel unified deep graph convolutional network that infers missing edges by leveraging node labels, features, and distances. Specifically, we first construct an estimated network topology for the unobserved part using node labels, then jointly refine the network topology and learn the edge likelihood with node labels, node features and distances. Extensive experiments using several real-world datasets show the superiority of our method compared with the state-of-the-art approaches. Full article
(This article belongs to the Collection Social Sciences)
20 pages, 7952 KiB  
Article
No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis
by Chongchong Jin, Zongju Peng, Wenhui Zou, Fen Chen, Gangyi Jiang and Mei Yu
Entropy 2021, 23(6), 770; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060770 - 18 Jun 2021
Cited by 3 | Viewed by 1953
Abstract
Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free-viewpoint video, which generates virtual 3D synthesized images through a depth-image-based-rendering (DIBR) technique. However, the inaccuracy of depth maps and imperfect DIBR techniques result in different geometric distortions that seriously degrade the users’ visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric based on visual-entropy-guided multi-layer features analysis for 3D synthesized images is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely, the bottom-up layer and the top-down layer. The feature of salient distortion is measured by regional proportion plus a transition threshold on the bottom-up layer. In parallel, the key distribution regions of insignificant geometric distortion are extracted by a relative total variation model, and the features of these distortions are measured by the interaction of decentralized attention and concentrated attention on the top-down layer. By integrating the features of both the bottom-up and top-down layers, a more visually perceptive quality evaluation model is built. Experimental results show that the proposed method is superior to the state-of-the-art in assessing the quality of 3D synthesized images. Full article
18 pages, 5111 KiB  
Article
Robustness of Cyber-Physical Supply Networks in Cascading Failures
by Dong Mu, Xiongping Yue and Huanyu Ren
Entropy 2021, 23(6), 769; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060769 - 18 Jun 2021
Cited by 8 | Viewed by 2226
Abstract
A cyber-physical supply network is composed of an undirected cyber supply network and a directed physical supply network. Such interdependence among firms increases efficiency but creates more vulnerabilities. The adverse effects of any failure can be amplified and propagated throughout the network. This paper investigates the robustness of the cyber-physical supply network against cascading failures. Considering that the cascading failure is triggered by overload in the cyber supply network and by underload in the physical supply network, a realistic cascading model for cyber-physical supply networks is proposed. We conducted a numerical simulation under cyber node and physical node failure with varying parameters. The simulation results demonstrated that there are critical thresholds for both firms’ capacities, which determine whether capacity expansion is helpful; there is also a cascade window for network load distribution, which determines the occurrence and scale of cascading failures. Our work may be beneficial for developing cascade control and defense strategies in cyber-physical supply networks. Full article
(This article belongs to the Special Issue Analysis and Applications of Complex Social Networks)
19 pages, 1003 KiB  
Article
Meta-Tree Random Forest: Probabilistic Data-Generative Model and Bayes Optimal Prediction
by Nao Dobashi, Shota Saito, Yuta Nakahara and Toshiyasu Matsushima
Entropy 2021, 23(6), 768; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060768 - 18 Jun 2021
Cited by 7 | Viewed by 3282
Abstract
This paper deals with a prediction problem of a new targeting variable corresponding to a new explanatory variable given a training dataset. To predict the targeting variable, we consider a model tree, which is used to represent a conditional probabilistic structure of a targeting variable given an explanatory variable, and discuss statistical optimality for prediction based on the Bayes decision theory. The optimal prediction based on the Bayes decision theory is given by weighting all the model trees in the model tree candidate set, where the model tree candidate set is a set of model trees in which the true model tree is assumed to be included. Because the number of model trees in the candidate set increases exponentially with the maximum depth of the model trees, so does the computational complexity of weighting them. To solve this issue, we introduce the notion of a meta-tree and propose an algorithm called MTRF (Meta-Tree Random Forest) that uses multiple meta-trees. Theoretical and experimental analyses of the MTRF show its superiority to previous decision tree-based algorithms. Full article
15 pages, 372 KiB  
Article
Why Dilated Convolutional Neural Networks: A Proof of Their Optimality
by Jonatan Contreras, Martine Ceberio and Vladik Kreinovich
Entropy 2021, 23(6), 767; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060767 - 18 Jun 2021
Cited by 1 | Viewed by 1692
Abstract
One of the most effective image processing techniques is the use of convolutional neural networks that use convolutional layers. In each such layer, the value of the layer’s output signal at each point is a combination of the layer’s input signals corresponding to several neighboring points. To improve the accuracy, researchers have developed a version of this technique, in which only data from some of the neighboring points is processed. It turns out that the most efficient case—called dilated convolution—is when we select the neighboring points whose differences in both coordinates are divisible by some constant. In this paper, we explain this empirical efficiency by proving that for all reasonable optimality criteria, dilated convolution is indeed better than possible alternatives. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)
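The selection rule in this abstract (keeping only neighbors whose coordinate offsets are divisible by a fixed constant, i.e., the dilation rate) can be illustrated with a minimal 1-D sketch; the kernel values and dilation rate below are illustrative, not taken from the paper:

```python
def dilated_conv1d(signal, kernel, d):
    """1-D dilated convolution (valid mode): each output point combines
    input samples whose offsets are multiples of the dilation rate d."""
    k = len(kernel)
    span = (k - 1) * d  # width of the receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * d] for j in range(k)))
    return out

# d = 2 uses only samples at offsets 0, 2, 4 relative to each position.
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 0, -1], 2))  # → [-4, -4]
```

With d = 1 this reduces to ordinary convolution; a larger d widens the receptive field without adding kernel weights.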
21 pages, 4016 KiB  
Article
Development and Analysis of the Novel Hybridization of a Single-Flash Geothermal Power Plant with Biomass Driven sCO2-Steam Rankine Combined Cycle
by Balkan Mutlu, Derek Baker and Feyza Kazanç
Entropy 2021, 23(6), 766; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060766 - 18 Jun 2021
Cited by 4 | Viewed by 3213
Abstract
This study investigates the hybridization scenario of a single-flash geothermal power plant with a biomass-driven sCO2-steam Rankine combined cycle, where a solid local biomass source, olive residue, is used as a fuel. The hybrid power plant is modeled using the simulation software EBSILON®Professional. A topping sCO2 cycle is chosen due to its potential for flexible electricity generation. Synergy between the topping sCO2 and bottoming steam Rankine cycles is achieved by a good temperature match in the coupling heat exchanger, where the waste heat from the topping cycle is utilized in the bottoming cycle. The high-temperature heat-addition problem, common in sCO2 cycles, is also eliminated by utilizing the heat in the flue gas in the bottoming cycle. A combined-cycle thermal efficiency of 24.9% and a biomass-to-electricity conversion efficiency of 22.4% are achieved. The corresponding fuel consumption of the hybridized plant is found to be 2.2 kg/s. Full article
(This article belongs to the Special Issue Thermodynamic Analysis and Process Intensification)
32 pages, 657 KiB  
Article
Statistical Inference for Periodic Self-Exciting Threshold Integer-Valued Autoregressive Processes
by Congmin Liu, Jianhua Cheng and Dehui Wang
Entropy 2021, 23(6), 765; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060765 - 17 Jun 2021
Cited by 2 | Viewed by 1923
Abstract
This paper considers periodic self-exciting threshold integer-valued autoregressive processes under a weaker condition, in which the second moment is finite instead of the innovation distribution being given. The basic statistical properties of the model are discussed, the quasi-likelihood inference of the parameters is investigated, and the asymptotic behaviors of the estimators are obtained. Threshold estimates based on quasi-likelihood and least squares methods are given. Simulation studies show that the quasi-likelihood methods perform well with realistic sample sizes and may be superior to least squares and maximum likelihood methods. The practical application of the processes is illustrated by a time series dataset concerning the monthly counts of claimants collecting short-term disability benefits from the Workers’ Compensation Board (WCB). In addition, the forecasting problem of this dataset is addressed. Full article
(This article belongs to the Special Issue Time Series Modelling)
19 pages, 9676 KiB  
Article
Generalized Correntropy Criterion-Based Performance Assessment for Non-Gaussian Stochastic Systems
by Jinfang Zhang, Guodou Huang and Li Zhang
Entropy 2021, 23(6), 764; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060764 - 17 Jun 2021
Cited by 3 | Viewed by 1511
Abstract
Control loop performance assessment (CPA) is essential in the operation of industrial systems. In this paper, the shortcomings of existing performance assessment methods and indicators are first summarized, and a novel evaluation method based on the generalized correntropy criterion (GCC) is proposed to evaluate the performance of non-Gaussian stochastic systems. This criterion characterizes the statistical properties of non-Gaussian random variables more fully, so it can be directly used as the assessment index. When the expected output of the given system is unknown, generalized correntropy is used to describe the similarity of two random variables in a neighborhood of the joint space, and this similarity is taken as the criterion function of the identification algorithms. To estimate the performance benchmark more quickly and accurately, a hybrid EDA (H-EDA) combined with the idea of the “wading across the stream” algorithm is proposed to obtain the system parameters and the disturbance-noise PDF. Through the simulation of a single-loop feedback control system under different noise disturbances, the effectiveness of the improved algorithm and the new indexes is verified. Full article
23 pages, 6369 KiB  
Article
Assessment of Classification Models and Relevant Features on Nonalcoholic Steatohepatitis Using Random Forest
by Rafael García-Carretero, Roberto Holgado-Cuadrado and Óscar Barquero-Pérez
Entropy 2021, 23(6), 763; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060763 - 17 Jun 2021
Cited by 12 | Viewed by 2803
Abstract
Nonalcoholic fatty liver disease (NAFLD) is the hepatic manifestation of metabolic syndrome and is the most common cause of chronic liver disease in developed countries. Certain conditions, including mild inflammation biomarkers, dyslipidemia, and insulin resistance, can trigger a progression to nonalcoholic steatohepatitis (NASH), a condition characterized by inflammation and liver cell damage. We demonstrate the usefulness of machine learning with a case study to analyze the most important features in random forest (RF) models for predicting patients at risk of developing NASH. We collected data from patients who attended the Cardiovascular Risk Unit of Mostoles University Hospital (Madrid, Spain) from 2005 to 2021. We reviewed electronic health records to assess the presence of NASH, which was used as the outcome. We chose RF as the algorithm to develop six models using different pre-processing strategies. The performance metrics were evaluated to choose an optimized model. Finally, several interpretability techniques, such as feature importance, the contribution of each feature to predictions, and partial dependence plots, were used to understand and explain the model and help obtain a better understanding of machine learning-based predictions. In total, 1525 patients met the inclusion criteria. The mean age was 57.3 years, and 507 patients had NASH (a prevalence of 33.2%). Filter methods (the chi-square and Mann–Whitney–Wilcoxon tests) did not produce additional insight in terms of interactions, contributions, or relationships among variables and their outcomes. The random forest model correctly classified patients with NASH with an accuracy of 0.87 in the best model and 0.79 in the worst one. Four features were the most relevant: insulin resistance, ferritin, serum levels of insulin, and triglycerides. The contribution of each feature was assessed via partial dependence plots. Random forest-based modeling demonstrated that machine learning can be used to improve interpretability, produce an understanding of the modeled behavior, and demonstrate how far certain features can contribute to predictions. Full article
23 pages, 7126 KiB  
Article
Rolling Bearing Fault Diagnosis Based on VMD-MPE and PSO-SVM
by Maoyou Ye, Xiaoan Yan and Minping Jia
Entropy 2021, 23(6), 762; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060762 - 16 Jun 2021
Cited by 59 | Viewed by 3507
Abstract
The goal of the paper is to present a solution to improve the fault detection accuracy of rolling bearings. The method is based on variational mode decomposition (VMD), multiscale permutation entropy (MPE) and the particle swarm optimization-based support vector machine (PSO-SVM). Firstly, the original bearing vibration signal is decomposed into several intrinsic mode functions (IMF) by using the VMD method, and the feature energy ratio (FER) criterion is introduced to reconstruct the bearing vibration signal. Secondly, the multiscale permutation entropy of the reconstructed signal is calculated to construct multidimensional feature vectors. Finally, the constructed multidimensional feature vector is fed into the PSO-SVM classification model for automatic identification of different fault patterns of the rolling bearing. Two experimental cases are adopted to validate the effectiveness of the proposed method. Experimental results show that the proposed method can achieve a higher identification accuracy compared with some similar available methods (e.g., variational mode decomposition-based multiscale sample entropy (VMD-MSE), variational mode decomposition-based multiscale fuzzy entropy (VMD-MFE), empirical mode decomposition-based multiscale permutation entropy (EMD-MPE) and wavelet transform-based multiscale permutation entropy (WT-MPE)). Full article
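As a rough, pure-Python illustration of the MPE step only (not the authors' implementation; the embedding dimension and scale choices here are arbitrary), multiscale permutation entropy coarse-grains the signal and computes the normalized Shannon entropy of ordinal patterns at each scale:

```python
import math
from collections import Counter

def permutation_entropy(x, m):
    """Normalized Shannon entropy of ordinal patterns of length m."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda j: x[i + j]))
        for i in range(len(x) - m + 1)
    )
    n = sum(patterns.values())
    h = sum((c / n) * math.log(n / c) for c in patterns.values())
    return h / math.log(math.factorial(m))  # scaled to [0, 1]

def multiscale_pe(x, m, scales):
    """Coarse-grain by non-overlapping averages, then compute PE per scale."""
    out = []
    for s in scales:
        coarse = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
        out.append(permutation_entropy(coarse, m))
    return out

# A strictly increasing series has a single ordinal pattern, hence PE = 0.
print(permutation_entropy(list(range(10)), 3))  # → 0.0
```

Irregular series produce a richer mix of ordinal patterns and thus values closer to 1, which is what makes MPE a useful feature for the SVM stage.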
13 pages, 1095 KiB  
Article
A Two-Steps-Ahead Estimator for Bubble Entropy
by George Manis, Matteo Bodini, Massimo W. Rivolta and Roberto Sassi
Entropy 2021, 23(6), 761; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060761 - 16 Jun 2021
Cited by 5 | Viewed by 2097
Abstract
Aims: Bubble entropy (bEn) is an entropy metric with a limited dependence on parameters. bEn does not directly quantify the conditional entropy of the series, but it assesses the change in entropy of the ordering of portions of its samples of length m, when adding an extra element. The analytical formulation of bEn for autoregressive (AR) processes shows that, for this class of processes, the relation between the first autocorrelation coefficient and bEn changes for odd and even values of m. While this is not an issue per se, it triggered ideas for further investigation. Methods: Using theoretical considerations on the expected values for AR processes, we examined a two-steps-ahead estimator of bEn, which considered the cost of ordering two additional samples. We first compared it with the original bEn estimator on simulated series. Then, we tested it on real heart rate variability (HRV) data. Results: The experiments showed that both examined alternatives had comparable discriminating power. However, for values of 10 < m < 20, where the statistical significance of the method improved as m increased, the two-steps-ahead estimator presented slightly higher statistical significance and more regular behavior, even if the dependence on the parameter m was still minimal. We also investigated a new normalization factor for bEn, which ensures that bEn = 1 when white Gaussian noise (WGN) is given as the input. Conclusions: The research improved our understanding of bubble entropy, in particular in the context of HRV analysis, and we investigated interesting details regarding the definition of the estimator. Full article
(This article belongs to the Special Issue Entropy in Biomedical Engineering)
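For readers unfamiliar with the metric, the core ingredient of bEn (the number of bubble-sort swaps, i.e., inversions, needed to order each embedded vector of length m) can be sketched as follows; the entropy computation and normalization of the full estimator are omitted, so this is only the swap-counting step:

```python
def bubble_swaps(v):
    """Number of adjacent swaps bubble sort needs to order v
    (equivalently, the number of inversions in v)."""
    v = list(v)
    swaps = 0
    for end in range(len(v) - 1, 0, -1):
        for i in range(end):
            if v[i] > v[i + 1]:
                v[i], v[i + 1] = v[i + 1], v[i]
                swaps += 1
    return swaps

def swap_counts(x, m):
    """Swap count for each embedded vector of length m in the series x."""
    return [bubble_swaps(x[i:i + m]) for i in range(len(x) - m + 1)]

print(bubble_swaps([3, 1, 2]))          # → 2
print(swap_counts([2, 4, 1, 3, 5], 3))  # one count per window of length 3
```

bEn is then derived from the distribution of these counts at lengths m and m + 1; the two-steps-ahead estimator studied in the paper instead compares lengths m and m + 2.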
14 pages, 834 KiB  
Article
Performance Improvement of Atmospheric Continuous-Variable Quantum Key Distribution with Untrusted Source
by Qin Liao, Gang Xiao and Shaoliang Peng
Entropy 2021, 23(6), 760; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060760 - 16 Jun 2021
Cited by 1 | Viewed by 1839
Abstract
Atmospheric continuous-variable quantum key distribution (ACVQKD) has been proven to be secure theoretically under the assumption that the signal source is well protected by the sender so that it cannot be compromised. However, this assumption is quite impractical in realistic quantum communication systems. In this work, we investigate a practical situation in which the signal source is no longer protected by the legitimate parties, but is exposed to the untrusted atmospheric channel. We show that the performance of ACVQKD is reduced by removing the assumption, especially when putting the untrusted source at the middle of the channel. To improve the performance of ACVQKD with an untrusted source, a non-Gaussian operation, called photon subtraction, is subsequently introduced. Numerical analysis shows that the performance of ACVQKD with an untrusted source can be improved by properly adopting the photon subtraction operation. Moreover, a special situation, where the untrusted source is located in the middle of the atmospheric channel, is also considered. Under direct reconciliation, we find that its performance can be significantly improved when the photon subtraction operation is manipulated by the sender. Full article
(This article belongs to the Special Issue Quantum Communication)
17 pages, 4586 KiB  
Article
Silhouette Analysis for Performance Evaluation in Machine Learning with Applications to Clustering
by Meshal Shutaywi and Nezamoddin N. Kachouie
Entropy 2021, 23(6), 759; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060759 - 16 Jun 2021
Cited by 76 | Viewed by 8165
Abstract
Grouping objects based on their similarities is an important and common task in machine learning applications. Many clustering methods have been developed, among which k-means-based methods have been broadly used, and several extensions have been developed to improve the original k-means clustering method, such as k-means++ and kernel k-means. K-means is a linear clustering method; that is, it divides the objects into linearly separable groups, while kernel k-means is a non-linear technique. Kernel k-means projects the elements to a higher-dimensional feature space using a kernel function, and then groups them. Different kernel functions may not perform similarly in clustering a data set and, in turn, choosing the right kernel for an application can be challenging. In our previous work, we introduced a weighted majority voting method for clustering based on normalized mutual information (NMI). NMI is a supervised measure, as the true labels for a training set are required to calculate it. In this study, we extend our previous work on aggregating clustering results to develop an unsupervised weighting function for which a training set is not available. The proposed weighting function is based on the Silhouette index, an unsupervised criterion, so a training set is not required to calculate it. This makes our new method more sensible in terms of the clustering concept. Full article
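A minimal pure-Python sketch of the Silhouette index for 1-D points may help show why it needs no ground-truth labels (the weighted-aggregation scheme built on top of it is described in the paper, not here):

```python
def silhouette_index(points, labels):
    """Mean silhouette over all points: s = (b - a) / max(a, b), where a is
    the mean distance to the point's own cluster and b is the smallest mean
    distance to any other cluster. Only the cluster assignment is needed."""
    clusters = {}
    for idx, l in enumerate(labels):
        clusters.setdefault(l, []).append(idx)
    scores = []
    for i, l in enumerate(labels):
        own = clusters[l]
        if len(own) == 1:  # singleton clusters score 0 by convention
            scores.append(0.0)
            continue
        a = sum(abs(points[i] - points[j]) for j in own if j != i) / (len(own) - 1)
        b = min(
            sum(abs(points[i] - points[j]) for j in other) / len(other)
            for k, other in clusters.items() if k != l
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated clusters score close to 1, with no true labels required.
print(round(silhouette_index([0.0, 0.1, 10.0, 10.1], [0, 0, 1, 1]), 2))  # → 0.99
```

Because the index is computed from the clustering itself, it can weight the votes of different kernel clusterings without a labeled training set, which is the property the abstract emphasizes.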
30 pages, 3468 KiB  
Article
A Coupling Framework for Multi-Domain Modelling and Multi-Physics Simulations
by Dario Amirante, Vlad Ganine, Nicholas J. Hills and Paolo Adami
Entropy 2021, 23(6), 758; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060758 - 16 Jun 2021
Cited by 3 | Viewed by 2337
Abstract
This paper describes a coupling framework for parallel execution of different solvers for multi-physics and multi-domain simulations with an arbitrary number of adjacent zones connected by different physical or overlapping interfaces. The coupling architecture is based on the execution of several instances of the same coupling code and relies on the use of smart edges (i.e., separate processes) dedicated to managing the exchange of information between two adjacent regions. The collection of solvers and coupling sessions forms a flexible and modular system, where the data exchange is handled by independent servers, each dedicated to a single interface connecting two solvers’ sessions. The accuracy and performance of the strategy are considered for turbomachinery applications involving Conjugate Heat Transfer (CHT) analysis and Sliding Plane (SP) interfaces. Full article
(This article belongs to the Special Issue Computational Fluid Dynamics and Conjugate Heat Transfer)
12 pages, 557 KiB  
Article
Thermodynamic Efficiency of Interactions in Self-Organizing Systems
by Ramil Nigmatullin and Mikhail Prokopenko
Entropy 2021, 23(6), 757; https://0-doi-org.brum.beds.ac.uk/10.3390/e23060757 - 16 Jun 2021
Cited by 10 | Viewed by 3059
Abstract
The emergence of global order in complex systems with locally interacting components is most striking at criticality, where small changes in control parameters result in a sudden global reorganization. We study the thermodynamic efficiency of interactions in self-organizing systems, which quantifies the change in the system’s order per unit of work carried out on (or extracted from) the system. We analytically derive the thermodynamic efficiency of interactions for the case of quasi-static variations of control parameters in the exactly solvable Curie–Weiss (fully connected) Ising model, and demonstrate that this quantity diverges at the critical point of a second-order phase transition. This divergence is shown for quasi-static perturbations in both control parameters—the external field and the coupling strength. Our analysis formalizes an intuitive understanding of thermodynamic efficiency across diverse self-organizing dynamics in physical, biological, and social domains. Full article
(This article belongs to the Special Issue What Is Self-Organization?)