
Entropy, Volume 24, Issue 4 (April 2022) – 136 articles

Cover Story: Quantum energy coherences represent a thermodynamic resource, which can be exploited to extract energy from a thermal reservoir and deliver that energy as work. There exists a closely analogous classical thermodynamic resource, namely, energy–shell inhomogeneities in the phase space distribution of a system’s initial state. The amount of work that can be obtained from quantum coherences can be shown to be equal to the amount that can be obtained from classical inhomogeneities in the semiclassical limit. Thus, coherences do not provide a unique thermodynamic advantage of quantum systems over classical systems in situations where a well-defined semiclassical correspondence exists.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Entropy Generation Analysis of the Flow Boiling in Microgravity Field
Entropy 2022, 24(4), 569; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040569 - 18 Apr 2022
Abstract
An entropy generation analysis of flow boiling in a microgravity field is conducted in this paper. A new entropy generation model based on the flow pattern and the phase change process is developed. The velocity ranges from 1 m/s to 4 m/s, and the heat flux ranges from 10,000 W/m² to 50,000 W/m², so as to investigate their influence on irreversibility during flow boiling in the tunnel. A phase-change model verified against the Stefan problem is employed to simulate the phase-change process in boiling. The numerical simulations are carried out in ANSYS FLUENT. The entropy generation produced by heat transfer, viscous dissipation, turbulent dissipation, and phase change is observed at different working conditions. Moreover, the Bejan (Be) number and a new evaluation number, EP, are introduced to investigate the performance of the boiling phenomenon. The following conclusions are obtained: (1) a high local entropy generation is obtained when only heat conduction in vapor occurs near the hot wall, whereas a low local entropy generation is obtained when heat conduction in water or evaporation occurs near the hot wall; (2) the entropy generation and the Be number are positively correlated with the heat flux, which indicates that heat transfer entropy generation becomes the major contributor to the total entropy generation as the heat flux increases; (3) the transition of the boiling status shows different trends at different velocities, which affects the irreversibility in the tunnel; (4) the critical heat flux (CHF) is the optimal choice under comprehensive consideration of the first and second laws of thermodynamics.
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics III)
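The Bejan number used in this abstract has a standard definition: the ratio of heat-transfer entropy generation to total entropy generation. A minimal sketch (variable names are illustrative; the paper's EP number is not reproduced here):

```python
def bejan_number(s_gen_heat, s_gen_friction):
    # Be = S_gen,heat / (S_gen,heat + S_gen,friction); Be -> 1 means
    # heat-transfer irreversibility dominates viscous/turbulent dissipation
    return s_gen_heat / (s_gen_heat + s_gen_friction)
```

Conclusion (2) above corresponds to Be approaching 1 as the heat flux grows.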

Article
Multiview Clustering of Adaptive Sparse Representation Based on Coupled P Systems
Entropy 2022, 24(4), 568; https://doi.org/10.3390/e24040568 - 18 Apr 2022
Abstract
Multiview clustering (MVC) has become a significant technique for addressing data mining issues. Most existing studies on this topic adopt a fixed number of neighbors when constructing the similarity matrix of each view, as in single-view clustering. However, this may reduce the clustering effect due to the diversity of multiview data sources. Moreover, most MVC methods utilize iterative optimization to obtain clustering results, which consumes a significant amount of time. Therefore, this paper proposes a multiview clustering of adaptive sparse representation based on coupled P systems (MVCS-CP) without iteration. The whole algorithm flow runs in the coupled P system. Firstly, the natural neighbor search algorithm, which requires no parameters, automatically determines the number of neighbors of each view. In turn, manifold learning and sparse representation are employed to construct the similarity matrix, which preserves the internal geometry of the views. Next, a soft thresholding operator is introduced to form the unified graph and obtain the clustering results. The experimental results on nine real datasets indicate that MVCS-CP outperforms other state-of-the-art comparison algorithms.
(This article belongs to the Topic Machine and Deep Learning)
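In its standard form, the soft-thresholding operator mentioned in the abstract is the proximal operator of the l1 norm; a minimal NumPy sketch (the threshold tau is an illustrative parameter, not the one from the paper):

```python
import numpy as np

def soft_threshold(x, tau):
    # shrink each entry toward zero by tau; entries smaller than tau vanish,
    # which is what sparsifies the learned similarity graph
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```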

Article
Numerical Solutions of Variable Coefficient Higher-Order Partial Differential Equations Arising in Beam Models
Entropy 2022, 24(4), 567; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040567 - 18 Apr 2022
Abstract
In this work, an efficient and robust numerical scheme is proposed to solve variable-coefficient fourth-order partial differential equations (FOPDEs) that arise in Euler–Bernoulli beam models. When partial differential equations (PDEs) are of higher order and involve variable coefficients, the numerical solution is a tedious and challenging problem, which is our main concern in this paper. The current scheme is hybrid in nature: the second-order finite difference is used for temporal discretization, while spatial derivatives and solutions are approximated via the Haar wavelet. Next, the integration and Haar matrices are used to convert the PDEs to a system of linear equations, which can be handled easily. Besides this, we derive the theoretical result for stability via the Lax–Richtmyer criterion and verify it computationally. Moreover, we address the computational convergence rate, which is close to order two. Several test problems are given to measure the accuracy of the suggested scheme. Computations validate that the present scheme works well for such problems. The calculated results are also compared with earlier work and the exact solutions. The comparison shows that the outcomes are in good agreement with both the exact solutions and the available results in the literature.
(This article belongs to the Special Issue Advanced Numerical Methods for Differential Equations)
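The Haar matrices used for the spatial approximation can be built by a standard recursion over power-of-two sizes; a generic sketch (the paper's operational integration matrices are not reproduced here):

```python
import numpy as np

def haar_matrix(n):
    # orthonormal n x n Haar matrix for n a power of two:
    # averaging rows come from the recursion, differencing rows below them
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    m = np.vstack([np.kron(h, [1.0, 1.0]),
                   np.kron(np.eye(n // 2), [1.0, -1.0])])
    # normalize each row so the matrix is orthonormal
    return m / np.linalg.norm(m, axis=1, keepdims=True)
```

Orthonormality is what makes the conversion of the PDE into a well-conditioned linear system convenient.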

Article
Parallel and Practical Approach of Efficient Image Chaotic Encryption Based on Message Passing Interface (MPI)
Entropy 2022, 24(4), 566; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040566 - 18 Apr 2022
Abstract
Encrypting pictures quickly and securely is required to secure image transmission over the internet and local networks. This may be accomplished by employing a chaotic scheme with ideal properties such as unpredictability and non-periodicity. However, practically every modern-day system is a real-time system, for which time is critical to making the encrypted picture available at the proper moment. We must therefore improve the performance and efficiency of encryption. To this end, we adopted the distributed parallel programming model, namely the message passing interface (MPI), in this study. Using the message passing interface, we created a novel parallel crypto-system. The suggested approach outperforms other models by 1.5 times. The suggested parallel encryption technique is thus practical for real-time use.
(This article belongs to the Special Issue Computational Imaging and Image Encryption with Entropy)
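The abstract does not give the exact chaotic map or the MPI decomposition, so the sketch below is a generic illustration: a logistic-map keystream XORed over pixel bytes. In an MPI version, each rank would process its own block of rows with a rank-dependent seed; all names and parameters here are assumptions:

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    # iterate the chaotic logistic map x <- r * x * (1 - x), one byte per step
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def xor_encrypt(image, x0=0.7):
    # XOR the whole image with the keystream; applying it twice decrypts,
    # and independent pixel blocks could be handled by separate MPI ranks
    flat = image.reshape(-1)
    return (flat ^ logistic_keystream(flat.size, x0)).reshape(image.shape)
```

Because XOR is its own inverse, decryption is the same call with the same seed.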

Article
2D Ising Model for Enantiomer Adsorption on Achiral Surfaces: L- and D-Aspartic Acid on Cu(111)
Entropy 2022, 24(4), 565; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040565 - 18 Apr 2022
Abstract
The 2D Ising model is well-formulated to address problems in adsorption thermodynamics. It is particularly well-suited to describing the adsorption isotherms predicting the surface enantiomeric excess, ee_s, observed during competitive co-adsorption of enantiomers onto achiral surfaces. Herein, we make the direct one-to-one correspondence between the 2D Ising model Hamiltonian and the Hamiltonian used to describe competitive enantiomer adsorption on achiral surfaces. We then demonstrate that adsorption from racemic mixtures of enantiomers and adsorption of prochiral molecules are directly analogous to the Ising model with no applied magnetic field, i.e., the enantiomeric excess on chiral surfaces can be predicted using Onsager’s solution to the 2D Ising model. The implication is that enantiomeric purity on the surface can be achieved during equilibrium exposure of prochiral compounds or racemic mixtures of enantiomers to achiral surfaces.
(This article belongs to the Special Issue Ising Model: Recent Developments and Exotic Applications)
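The mapping the paper describes (spin up/down as L/D enantiomer, zero applied field as racemic exposure) can be illustrated with a standard Metropolis simulation of the zero-field 2D Ising model, whose magnetization plays the role of ee_s. This is a generic sketch, not the authors' calculation:

```python
import numpy as np

def metropolis_ising(n, beta, steps, rng):
    # zero-field 2D Ising model with J = 1 and periodic boundaries;
    # spin +1 stands for an adsorbed L molecule, -1 for a D molecule
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        nb = s[(i + 1) % n, j] + s[(i - 1) % n, j] \
           + s[i, (j + 1) % n] + s[i, (j - 1) % n]
        dE = 2 * s[i, j] * nb                      # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]
    return s

def ee_surface(s):
    # surface enantiomeric excess = magnetization per site
    return s.mean()
```

Below the Onsager critical coupling beta_c = ln(1 + sqrt(2))/2 ≈ 0.4407, ee_s stays near zero; above it, spontaneous symmetry breaking drives |ee_s| toward one, which is the enantiomeric purity discussed in the abstract.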

Article
The Exergy Losses Analysis in Adiabatic Combustion Systems including the Exhaust Gas Exergy
Entropy 2022, 24(4), 564; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040564 - 18 Apr 2022
Abstract
An entropy generation analysis of adiabatic combustion systems was performed to quantify the exergy losses, which are mainly the exergy destroyed during combustion inside the chamber and in the exhaust gases. The purpose of the present work was therefore: (a) to extend the exergy destruction analysis by including the exhaust gas exergy while applying the hybrid filtered Eulerian stochastic field (ESF) method coupled with the FGM chemistry tabulation strategy; (b) to introduce a novel method for evaluating the exergy content of exhaust gases; and (c) to highlight a link between exhaust gas exergy and combustion emissions. In this work, the adiabatic Sandia flames E and F were chosen as application combustion systems. First, the numerical results of the flow and scalar fields were validated by comparison with the experimental data. Under the utilization of eight stochastic fields (SFs), the flow field results and the associated scalar fields for flame E show excellent agreement, contrary to flame F. Then, the different exergy losses were calculated and analyzed. Heat transfer and chemical reaction are the main factors responsible for the exergy destruction during combustion. The chemical exergy of the exhaust gases shows a strong relation between the exergy losses and combustion emissions as well as the exhaust gas temperature.
(This article belongs to the Special Issue Entropy Generation Analysis in Near-Wall Turbulent Flow)
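The link between entropy generation and exergy loss invoked here is the standard Gouy–Stodola relation, and exhaust flow exergy has a textbook form; a minimal sketch with illustrative inputs (the ESF/FGM machinery itself is far beyond a snippet):

```python
def exergy_destroyed(s_gen, t0=298.15):
    # Gouy-Stodola theorem: X_dest = T0 * S_gen
    return t0 * s_gen

def exhaust_flow_exergy(h, h0, s, s0, t0=298.15):
    # physical flow exergy relative to the dead state (h0, s0):
    # x = (h - h0) - T0 * (s - s0); chemical exergy would be added on top
    return (h - h0) - t0 * (s - s0)
```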

Article
Scalable and Transferable Reinforcement Learning for Multi-Agent Mixed Cooperative–Competitive Environments Based on Hierarchical Graph Attention
Entropy 2022, 24(4), 563; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040563 - 18 Apr 2022
Abstract
Most previous studies on multi-agent systems aim to coordinate agents to achieve a common goal, but a lack of scalability and transferability prevents them from being applied to large-scale multi-agent tasks. To deal with these limitations, we propose a deep reinforcement learning (DRL) based multi-agent coordination control method for mixed cooperative–competitive environments. To improve scalability and transferability when applied to large-scale multi-agent systems, we construct inter-agent communication and use hierarchical graph attention networks (HGAT) to process the local observations of agents and the messages received from neighbors. We also adopt gated recurrent units (GRU) to address the partial observability issue by recording historical information. The simulation results based on a cooperative task and a competitive task not only show the superiority of our method, but also indicate its scalability and transferability across tasks of various scales.
(This article belongs to the Special Issue Swarms and Network Intelligence)
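The core of a graph attention layer, hierarchical or not, is a per-neighbor softmax weighting; a single-head NumPy sketch over a fully connected neighborhood (W and a are illustrative learned parameters, and the hierarchy of HGAT is omitted):

```python
import numpy as np

def gat_attention(h, W, a):
    # alpha[i, j] = softmax_j( LeakyReLU( a . [W h_i || W h_j] ) )
    z = h @ W.T                                   # transformed node features
    n = z.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s     # LeakyReLU, slope 0.2
    e = np.exp(e - e.max(axis=1, keepdims=True))  # stable row-wise softmax
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the returned matrix is how strongly one agent attends to every neighbor's message.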

Article
A New Look at Calendar Anomalies: Multifractality and Day-of-the-Week Effect
Entropy 2022, 24(4), 562; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040562 - 17 Apr 2022
Abstract
Stock markets can become inefficient due to calendar anomalies known as the day-of-the-week effect. Calendar anomalies are well known in the financial literature, but the phenomena remain to be explored in econophysics. This paper uses multifractal analysis to evaluate whether the temporal dynamics of market returns also exhibit calendar anomalies such as day-of-the-week effects. We apply multifractal detrended fluctuation analysis (MF-DFA) to the daily returns of market indices worldwide for each day of the week. Our results indicate that distinct multifractal properties characterize individual days of the week. Monday returns tend to exhibit more persistent behavior and richer multifractal structures than other day-resolved returns. Shuffling the series reveals that the multifractality arises from a broad probability density function and long-term correlations. The time-dependent multifractal analysis shows that the multifractal spectra of Monday returns are much wider than those of other days. This behavior is especially persistent during financial crises. The presence of day-of-the-week effects in the multifractal dynamics of market returns motivates further research on calendar anomalies for distinct market regimes.
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
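MF-DFA reduces to estimating a generalized Hurst exponent h(q) from detrended fluctuations at several scales; a compact sketch for q != 0 with linear detrending (the day-of-week splitting and spectrum-width computation of the study are left out):

```python
import numpy as np

def mfdfa_hurst(x, scales, q=2):
    # generalized Hurst exponent h(q): slope of log F_q(s) versus log s
    y = np.cumsum(x - np.mean(x))                 # the profile
    fq = []
    for s in scales:
        var = []
        for v in range(len(y) // s):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            var.append(np.mean((seg - trend) ** 2))   # segment variance
        fq.append(np.mean(np.asarray(var) ** (q / 2)) ** (1 / q))
    slope, _ = np.polyfit(np.log(scales), np.log(fq), 1)
    return slope
```

Uncorrelated returns give h(2) near 0.5; persistent Monday dynamics would show h(2) above 0.5, and a spread of h(q) across q signals multifractality.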

Article
Calorimetric Measurements of Biological Interactions and Their Relationships to Finite Time Thermodynamics Parameters
Entropy 2022, 24(4), 561; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040561 - 16 Apr 2022
Abstract
A description and examination of the potential of calorimetry for exploring the entropy flows in biological and/or reacting systems is presented. Background on calorimeter operation is provided, and two case studies are investigated using a transient numerical simulation. The first case describes a single-cell calorimeter containing a single-phase material excited by a heat generation source function such as Joule heating. The second case is a reacting system. The basic observation parameter, the temperature, cannot be used to separate the entropy property changes and the rate of entropy production in the second case. The calculated transient response can be further analyzed to determine the equilibrium constant once the reaction equation and stoichiometric constants are specified, which allows the entropy property changes and the rate of entropy production to be determined. In a biological community, the equivalent of the reaction equation and a definition of an equilibrium constant are not available for all systems. The results for the two cases illustrate that using calorimetry measurements to identify the entropy flows in biological community activities requires further work to establish a framework, similar to that of chemically reacting systems, based on an equilibrium-type parameter.
(This article belongs to the Special Issue Finite-Time Thermodynamics)
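The equilibrium constant recovered from the transient response connects to thermodynamics through the standard relation dG0 = -R T ln K; a one-liner with illustrative numbers:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_constant(delta_g0, t):
    # K = exp(-dG0 / (R T)); dG0 in J/mol, t in kelvin
    return math.exp(-delta_g0 / (R * t))
```

A negative dG0 (spontaneous reaction) gives K greater than one, which is the equilibrium-type parameter the abstract says is missing for general biological communities.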

Article
There Is No Spooky Action at a Distance in Quantum Mechanics
Entropy 2022, 24(4), 560; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040560 - 16 Apr 2022
Abstract
Einstein became bothered by quantum mechanical action at a distance within two years of Schrödinger’s introduction of his eponymous wave equation. If the wave function represents the “real” physical state of a particle, then the measurement of the particle’s position would result in the instantaneous collapse of the wave function to the single, measured position. Such a process seemingly violates not only the Schrödinger equation but also special relativity. Einstein was not alone in this vexation; however, the dilemma eventually faded as physicists concentrated on using the Schrödinger equation to solve a plethora of pressing problems. For the next 30 years, wave function collapse, while occasionally discussed by physicists, was primarily a topic of interest for philosophers. That is, until 1964, when Bell introduced his famous inequality and maintained that its violation proved that quantum mechanics and, by implication, nature herself are nonlocal. Unfortunately, this brought the topic back to mainstream physics, where it has remained and continues to muddy the waters. To be sure, not all physicists are bothered by the apparent nonlocality of quantum mechanics. So where have those who embrace quantum nonlocality gone wrong? I argue that the answer is a gratuitous belief in the ontic nature of the quantum state.
(This article belongs to the Special Issue Completeness of Quantum Theory: Still an Open Question)
Article
Dynamics of Remote Communication: Movement Coordination in Video-Mediated and Face-to-Face Conversations
Entropy 2022, 24(4), 559; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040559 - 15 Apr 2022
Abstract
The present pandemic forced our daily interactions to move into the virtual world. People had to adapt to new communication media that afford different ways of interaction. Remote communication decreases the availability and salience of some cues but also may enable and highlight others. Importantly, basic movement dynamics, which are crucial for any interaction as they are responsible for the informational and affective coupling, are affected. It is therefore essential to discover exactly how these dynamics change. In this exploratory study of six interacting dyads, we use traditional variability measures and cross-recurrence quantification analysis to compare the movement coordination dynamics in quasi-natural dialogues in four situations: (1) remote video-mediated conversations with a self-view mirror image present, (2) remote video-mediated conversations without a self-view, (3) face-to-face conversations with a self-view, and (4) face-to-face conversations without a self-view. We discovered that in remote interactions movements pertaining to communicative gestures were exaggerated, while the stability of interpersonal coordination was greatly decreased. The presence of the self-view image made the gestures less exaggerated, but did not affect the coordination. The dynamical analyses are helpful in understanding the interaction processes and may be useful in explaining phenomena connected with video-mediated communication, such as “Zoom fatigue”.
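Cross-recurrence quantification starts from a thresholded distance matrix between the two partners' movement series; the simplest CRQA measure, the recurrence rate, can be sketched as follows (the embedding and windowing choices of the study are omitted):

```python
import numpy as np

def cross_recurrence_rate(x, y, radius):
    # fraction of time-pairs (i, j) at which the two 1-D series are
    # closer than `radius`; higher values indicate stronger coupling
    d = np.abs(np.asarray(x, float)[:, None] - np.asarray(y, float)[None, :])
    return float(np.mean(d < radius))
```

Diagonal-line statistics over the same thresholded matrix yield the stability measures the study reports as degraded in remote interaction.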

Article
Suspicion Distillation Gradient Descent Bit-Flipping Algorithm
Entropy 2022, 24(4), 558; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040558 - 15 Apr 2022
Abstract
We propose a novel variant of the gradient descent bit-flipping (GDBF) algorithm for decoding low-density parity-check (LDPC) codes over the binary symmetric channel. The new bit-flipping rule is based on the reliability information passed from neighboring nodes in the corresponding Tanner graph. The name SuspicionDistillation reflects the main feature of the algorithm: in every iteration, we assign a level of suspicion to each variable node about its current bit value. The level of suspicion of a variable node is used to decide whether the corresponding bit will be flipped. In addition, in each iteration, we determine the number of satisfied and unsatisfied checks that connect a suspicious node with other suspicious variable nodes. In this way, over the course of the iterations, we “distill” such suspicious bits and flip them. The deterministic nature of the proposed algorithm results in a low-complexity implementation, as the bit-flipping rule can be obtained by modifying the original GDBF rule using basic logic gates, and the modification is not applied in all decoding iterations. Furthermore, we present a more general framework based on deterministic re-initialization of the decoder input. The performance of the resulting algorithm is analyzed for codes of various lengths, and significant performance improvements are observed compared to state-of-the-art hard-decision-decoding algorithms.
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications)
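The family this paper extends can be illustrated with the classic hard-decision bit-flipping loop on a parity-check matrix: flip the bits touching the most unsatisfied checks, and stop when the syndrome clears. This is the plain Gallager-style rule, not the SuspicionDistillation rule itself:

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    # y: received hard-decision bits; H: binary parity-check matrix
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():        # all parity checks satisfied
            break
        unsat = H.T @ syndrome        # unsatisfied checks touching each bit
        x = np.where(unsat == unsat.max(), x ^ 1, x)
    return x
```

GDBF replaces the unsatisfied-check count with a gradient-derived inversion function, and the paper's variant additionally weights it by the "suspicion" passed between neighboring variable nodes.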

Review
Blockchain Technology, Cryptocurrency: Entropy-Based Perspective
Entropy 2022, 24(4), 557; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040557 - 15 Apr 2022
Abstract
The large-scale application of blockchain technology is expected to be an inevitable trend. This study revolves around published papers and articles related to blockchain technology, performing relevance analysis and sorting the retrieved documents by the six core layers of blockchain: Application Layer, Contract Layer, Actuator Layer, Consensus Layer, Network Layer and Data Layer. Based on the analysis results, this study found that research in China leans toward practical deployment and industry, and toward smart cities with blockchain as the underlying technology. International research is more focused on finance with blockchain as the underlying technology and tries to combine crypto assets with real industries, such as crypto assets and payment systems for traditional industries. This paper studies the impact of monetary entropy on cryptocurrencies in smart cities and uses the monetary entropy formula to measure crypto-economic entropy. We use Kolmogorov entropy to describe the degree of chaos in the cryptocurrency market in a smart city. The study illustrates the current status of blockchain technology and applications from the perspective of cryptocurrency in a smart city. We find that smart cities and cryptocurrencies have a mutually reinforcing effect.
(This article belongs to the Special Issue Signatures of Maturity in Cryptocurrency Market)
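The paper's monetary entropy formula is not reproduced in the abstract; as a generic stand-in, the Shannon entropy of market shares illustrates how the "disorder" of a currency ecosystem can be scored:

```python
import numpy as np

def shannon_entropy(shares):
    # H = -sum p_i log2 p_i over normalized market shares
    p = np.asarray(shares, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # drop zero-share assets; 0 log 0 -> 0
    return float(-(p * np.log2(p)).sum())
```

A market dominated by one asset scores near zero; an evenly split market of n assets scores log2(n).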

Article
Entropy Could Quantify Brain Activation Induced by Mechanical Impedance-Restrained Active Arm Motion: A Functional NIRS Study
Entropy 2022, 24(4), 556; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040556 - 15 Apr 2022
Abstract
Brain activation has been used to understand brain-level events associated with cognitive or physical tasks. As a quantitative measure of brain activation, we propose entropy in place of signal amplitude and beta value, which are widely used but sometimes criticized for their limitations and shortcomings as such measures. To investigate the relevance of our proposition, we provided 22 subjects with physical stimuli through elbow extension–flexion motions using our exoskeleton robot; measured brain activation in terms of entropy, signal amplitude, and beta value; and compared entropy with the other two. The results show that entropy is superior in that its change appeared in limited, well-established motor areas, while signal amplitude and beta value changes appeared in a widespread fashion, contradicting the modularity theory. Entropy can predict the increase in brain activation with task duration, while the other two cannot. When stimuli shifted from the rest state to the task state, entropy exhibited a similar increase to the other two. Although entropy showed only a part of the phenomenon induced by task strength, it showed superiority by revealing a decrease in brain activation that the other two did not show. Moreover, entropy was capable of identifying the physiologically important location.
(This article belongs to the Section Entropy and Biology)
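Entropy of a physiological time series is often operationalized as sample entropy; a small sketch (whether the authors used this exact estimator is not stated in the abstract):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): -log P(two length-m sequences that match also match
    # at length m+1), with a Chebyshev tolerance of r * std(x)
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        emb = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return ((d < tol).sum() - len(emb)) / 2   # matching pairs, self excluded
    return -np.log(matches(m + 1) / matches(m))
```

Regular, predictable signals score low; irregular signals score high, which is the sense in which entropy tracks activation.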

Article
Implicit Subgrid-Scale Modeling of a Mach 2.5 Spatially Developing Turbulent Boundary Layer
Entropy 2022, 24(4), 555; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040555 - 15 Apr 2022
Abstract
We employ numerically implicit subgrid-scale modeling provided by the well-known streamline-upwind/Petrov–Galerkin stabilization for the finite element discretization of advection–diffusion problems in a Large Eddy Simulation (LES) approach. Whereas its original purpose was to provide sufficient algorithmic dissipation for a stable and convergent numerical method, more recently it has been utilized as a subgrid-scale (SGS) model to account for the effect of small scales that are unresolvable by the discretization. The freestream Mach number is 2.5, and direct comparison with a DNS database from our research group, as well as with experiments from the literature on adiabatic supersonic spatially developing turbulent boundary layers, is performed. Turbulent inflow conditions are generated via our dynamic rescaling–recycling approach, recently extended to high-speed flows. Focus is given to the assessment of the resolved Reynolds stresses. In addition, flow visualization is performed to obtain much better insight into the physics of the flow. A weak compressibility effect is observed on thermal turbulent structures based on two-point correlations (IC vs. supersonic). The Reynolds analogy (u vs. t) approximately holds for the supersonic regime, but to a lesser extent than previously observed in incompressible (IC) turbulent boundary layers, where temperature was assumed to be a passive scalar. A much longer power law behavior of the mean streamwise velocity is computed in the outer region when compared to the log law at Mach 2.5. Implicit LES has shown very good performance for Mach 2.5 adiabatic flat plates in terms of the mean flow (i.e., Cf and UVD+). iLES significantly overpredicts the peak values of u, and consequently the Reynolds shear stress peaks, in the buffer layer. However, excellent agreement between the turbulence intensities and Reynolds shear stresses is accomplished in the outer region by the present iLES with respect to the external DNS database at similar Reynolds numbers.
(This article belongs to the Special Issue Computational Fluid Dynamics and Conjugate Heat Transfer)
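The UVD+ profile mentioned above is the Van Driest-transformed mean velocity, which folds the mean density variation into an equivalent incompressible profile; a sketch of the transform by trapezoidal quadrature (the profile arrays are illustrative inputs, not the paper's data):

```python
import numpy as np

def van_driest_transform(u, rho, rho_wall):
    # U_VD = integral of sqrt(rho / rho_wall) dU from the wall outward
    w = np.sqrt(np.asarray(rho, float) / rho_wall)
    u = np.asarray(u, dtype=float)
    du = np.diff(u, prepend=u[0])              # first increment is zero
    w_prev = np.concatenate([[w[0]], w[:-1]])
    return np.cumsum(0.5 * (w + w_prev) * du)  # trapezoidal accumulation
```

For an incompressible profile (rho equal to rho_wall everywhere) the transform returns u - u[0], i.e., it reduces to the identity for a wall-anchored profile.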

Article
Quantum Gravity If Non-Locality Is Fundamental
Entropy 2022, 24(4), 554; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040554 - 15 Apr 2022
Abstract
I take non-locality to be the Michelson–Morley experiment of the early 21st century, assume its universal validity, and try to derive its consequences. Spacetime, with its locality, cannot be fundamental, but must somehow be emergent from entangled coherent quantum variables and their behaviors. There are, then, two immediate consequences: (i). if we start with non-locality, we need not explain non-locality. We must instead explain an emergence of locality and spacetime. (ii). There can be no emergence of spacetime without matter. These propositions flatly contradict General Relativity, which is foundationally local, can be formulated without matter, and in which there is no “emergence” of spacetime. If these be true, then quantum gravity cannot be a minor alteration of General Relativity but must demand its deep reformulation. This will almost inevitably lead to: matter not only curves spacetime, but “creates” spacetime. We will see independent grounds for the assertion that matter both curves and creates spacetime that may invite a new union of quantum gravity and General Relativity. This quantum creation of spacetime consists of: (i) fully non-local entangled coherent quantum variables. (ii) The onset of locality via decoherence. (iii) A metric in Hilbert space among entangled quantum variables by the sub-additive von Neumann entropy between pairs of variables. (iv) Mapping from metric distances in Hilbert space to metric distances in classical spacetime by episodic actualization events. (v) Discrete spacetime is the relations among these discrete actualization events. (vi) “Now” is the shared moment of actualization of one among the entangled variables when the amplitudes of the remaining entangled variables change instantaneously. (vii) The discrete, successive, episodic, irreversible actualization events constitute a quantum arrow of time. (viii) The arrow of time history of these events is recorded in the very structure of the spacetime constructed. (ix) Actual Time is a succession of two or more actual events. The theory inevitably yields a UV cutoff of a new type. The cutoff is a phase transition between continuous spacetime before the transition and discontinuous spacetime beyond the phase transition. This quantum creation of spacetime modifies General Relativity and may account for Dark Energy, Dark Matter, and the possible elimination of the singularities of General Relativity. Relations to Causal Set Theory, faithful Lorentzian manifolds, and past and future light cones joined at “Actual Now” are discussed. Possible observational and experimental tests based on: (i). the existence of sub-Planckian photons, (ii). knee and ankle discontinuities in the high-energy gamma ray spectrum, and (iii). possible experiments to detect a creation of spacetime in the Casimir system are discussed. A quantum actualization enhancement of the repulsive Casimir effect would be anti-gravitational and of possible practical use. The ideas and concepts discussed here are not yet a theory, but at most the start of a framework that may be useful.
Editorial
Nonparametric Statistical Inference with an Emphasis on Information-Theoretic Methods
Entropy 2022, 24(4), 553; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040553 - 15 Apr 2022
Viewed by 407
Abstract
The presented volume addresses some vital problems in contemporary statistical reasoning [...] Full article
Article
Changes in the Complexity of Limb Movements during the First Year of Life across Different Tasks
Entropy 2022, 24(4), 552; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040552 - 15 Apr 2022
Cited by 1 | Viewed by 619
Abstract
Infants’ limb movements evolve from disorganized to more selectively coordinated during the first year of life as they learn to navigate and interact with an ever-changing environment more efficiently. However, how these coordination patterns change during the first year of life and across [...] Read more.
Infants’ limb movements evolve from disorganized to more selectively coordinated during the first year of life as they learn to navigate and interact with an ever-changing environment more efficiently. However, how these coordination patterns change during the first year of life and across different contexts is unknown. Here, we used wearable motion trackers to study the developmental changes in the complexity of limb movements (arms and legs) at 4, 6, 9 and 12 months of age in two different tasks: rhythmic rattle-shaking and free play. We applied Multidimensional Recurrence Quantification Analysis (MdRQA) to capture the nonlinear changes in infants’ limb complexity. We show that the MdRQA parameters (entropy, recurrence rate and mean line) are task-dependent only at 9 and 12 months of age, with higher values in rattle-shaking than free play. Since rattle-shaking elicits more stable and repetitive limb movements than the free exploration of multiple objects, we interpret our data as reflecting an increase in infants’ motor control that allows for stable body positioning and easier execution of limb movements. Infants’ motor system becomes more stable and flexible with age, allowing for flexible adaptation of behaviors to task demands. Full article
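The simplest of the MdRQA parameters reported above, the recurrence rate, counts how often the multidimensional state of the system revisits itself. A minimal pure-Python sketch, using invented toy trajectories (not infant data) only to illustrate why a repetitive, rattle-like movement yields a higher recurrence rate than irregular, free-play-like movement:

```python
from math import dist

def recurrence_rate(series, radius):
    """Fraction of state pairs (i, j) whose multidimensional points lie
    within `radius` of each other -- the simplest MdRQA parameter."""
    n = len(series)
    hits = sum(1 for i in range(n) for j in range(n)
               if dist(series[i], series[j]) <= radius)
    return hits / (n * n)

levels = [-0.75, -0.25, 0.25, 0.75]
# Rattle-like signal: the joint (arm, leg) state cycles through 4 postures.
rhythmic = [(levels[t % 4], levels[(t + 1) % 4]) for t in range(100)]
# Free-play-like signal: quasi-random wandering over the same range.
irregular = [(((31 * t) % 97) / 48.5 - 1, ((57 * t) % 89) / 44.5 - 1)
             for t in range(100)]

print(recurrence_rate(rhythmic, 0.3), recurrence_rate(irregular, 0.3))
```

The cyclic signal revisits only four states, so a quarter of all state pairs recur; the wandering signal rarely returns to the same neighborhood.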
Review
Survey on Self-Supervised Learning: Auxiliary Pretext Tasks and Contrastive Learning Methods in Imaging
Entropy 2022, 24(4), 551; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040551 - 14 Apr 2022
Cited by 1 | Viewed by 837
Abstract
Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates manual annotation. To do this, SSL constructs feature representations [...] Read more.
Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, which allows models trained in these tasks to extract useful latent representations that later improve downstream tasks such as object classification and detection. The early methods of SSL are based on auxiliary pretext tasks as a way to learn representations using pseudo-labels, or labels that were created automatically based on the dataset’s attributes. Furthermore, contrastive learning has also performed well in learning representations via SSL. To succeed, it pushes positive samples closer together, and negative ones further apart, in the latent space. This paper provides a comprehensive literature review of the top-performing SSL methods using auxiliary pretext and contrastive learning techniques. It details the motivation for this research, a general pipeline of SSL, the terminologies of the field, and provides an examination of pretext tasks and self-supervised methods. It also examines how self-supervised methods compare to supervised ones, and then discusses both further considerations and ongoing challenges faced by SSL. Full article
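The "push positives together, negatives apart" objective can be made concrete with an InfoNCE-style loss of the kind used by many contrastive SSL methods. A minimal sketch with hand-picked 2-D embeddings (illustrative values, not taken from any surveyed method):

```python
import math

def info_nce(anchor, positive, negatives, temp=0.5):
    """InfoNCE-style contrastive loss: small when anchor-positive similarity
    is high and every anchor-negative similarity is low."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    logits = [dot(anchor, positive) / temp] + \
             [dot(anchor, n) / temp for n in negatives]
    log_denom = math.log(sum(math.exp(x) for x in logits))
    return log_denom - logits[0]

anchor    = (1.0, 0.0)
positive  = (0.98, 0.199)                  # a nearby "augmented view"
negatives = [(-1.0, 0.0), (0.0, 1.0)]      # unrelated samples

good = info_nce(anchor, positive, negatives)
bad  = info_nce(anchor, (-1.0, 0.0), [positive, (0.0, 1.0)])
print(good < bad)   # True: aligning with the true positive lowers the loss
```

Minimizing this loss over many (anchor, positive, negatives) triples is what shapes the latent space described above.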
Article
Chess AI: Competing Paradigms for Machine Intelligence
Entropy 2022, 24(4), 550; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040550 - 14 Apr 2022
Cited by 2 | Viewed by 733
Abstract
Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly [...] Read more.
Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett’s Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman’s equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work on artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research. Full article
(This article belongs to the Special Issue Bayesian Statistics and Applied Probability for Games and Decisions)
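The Bellman equation for optimizing the probability of winning can be sketched on a toy game graph (the states and transition probabilities below are invented for illustration and have nothing to do with Stockfish or LCZero internals): set the value of a winning state to 1 and a losing state to 0, then repeatedly back up the best expected value over the available moves.

```python
# Toy game graph: states 0-2 are positions, 3 = win, 4 = loss (absorbing).
# Bellman equation for the win probability:
#   V(s) = max_a sum_{s'} P(s' | s, a) V(s'),  with V(3) = 1, V(4) = 0.
transitions = {
    0: {"solid": {1: 1.0},          "gamble": {3: 0.3, 4: 0.7}},
    1: {"solid": {2: 0.6, 4: 0.4},  "gamble": {3: 0.5, 4: 0.5}},
    2: {"push":  {3: 0.9, 4: 0.1}},
}

def win_probability(trans, sweeps=100):
    v = {0: 0.0, 1: 0.0, 2: 0.0, 3: 1.0, 4: 0.0}
    for _ in range(sweeps):                     # value iteration
        for s, actions in trans.items():
            v[s] = max(sum(p * v[t] for t, p in dist.items())
                       for dist in actions.values())
    return v

v = win_probability(transitions)
print(v[0])   # win probability from the start state under best play
```

From state 1, playing "solid" (0.6 × 0.9 = 0.54) beats the 0.5 gamble, and that value propagates back to state 0.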
Article
Research on Product Core Component Acquisition Based on Patent Semantic Network
Entropy 2022, 24(4), 549; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040549 - 14 Apr 2022
Viewed by 506
Abstract
Patent data contain plenty of valuable information. Recently, the lack of innovative ideas has resulted in some enterprises encountering bottlenecks in product research and development (R&D). Some enterprises point out that they do not have enough comprehension of product components. To improve efficiency [...] Read more.
Patent data contain plenty of valuable information. Recently, the lack of innovative ideas has resulted in some enterprises encountering bottlenecks in product research and development (R&D). Some enterprises point out that they do not have enough comprehension of product components. To improve the efficiency of product R&D, this paper introduces natural-language processing (NLP) technology, which includes part-of-speech (POS) tagging and subject–action–object (SAO) classification. Our strategy first extracts patent keywords from products, then applies a complex network to obtain core components based on structural holes and the eigenvector centrality algorithm. Finally, we use the example of US shower patents to verify the effectiveness and feasibility of the methodology. As a result, this paper examines the acquisition of core components and how they can help enterprises and designers clarify their R&D ideas and design priorities. Full article
(This article belongs to the Topic Complex Systems and Artificial Intelligence)
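Eigenvector centrality, one of the two network measures named above, can be computed by power iteration on the adjacency structure. A sketch on a hypothetical keyword co-occurrence network for a shower product (the keywords and edges are invented; the structural-hole measure is omitted):

```python
# Toy patent-keyword co-occurrence network (undirected, symmetric).
graph = {
    "nozzle":  ["hose", "valve", "head"],
    "hose":    ["nozzle", "valve"],
    "valve":   ["nozzle", "hose", "head"],
    "head":    ["nozzle", "valve", "bracket"],
    "bracket": ["head"],
}

def eigenvector_centrality(adj, iters=200):
    """Power iteration: a node is central if its neighbors are central."""
    c = {v: 1.0 for v in adj}
    for _ in range(iters):
        new = {v: sum(c[u] for u in adj[v]) for v in adj}
        top = max(new.values())
        c = {v: x / top for v, x in new.items()}   # normalize each sweep
    return c

c = eigenvector_centrality(graph)
core = max(c, key=c.get)
print(core, c)
```

Nodes embedded in the well-connected cluster ("nozzle", "valve") score highest and would be flagged as core-component candidates, while the peripheral "bracket" scores lowest.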
Article
Effect of Quantum Coherence on Landauer’s Principle
Entropy 2022, 24(4), 548; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040548 - 13 Apr 2022
Viewed by 599
Abstract
Landauer’s principle provides a fundamental lower bound for energy dissipation occurring with information erasure in the quantum regime. While most studies have related the entropy reduction incorporated with the erasure to the lower bound (entropic bound), recent efforts have also provided another lower [...] Read more.
Landauer’s principle provides a fundamental lower bound for energy dissipation occurring with information erasure in the quantum regime. While most studies have related the entropy reduction incorporated with the erasure to the lower bound (entropic bound), recent efforts have also provided another lower bound associated with the thermal fluctuation of the dissipated energy (thermodynamic bound). The coexistence of the two bounds has stimulated comparative studies of their properties; however, these studies were performed for systems where the time evolutions of the diagonal (population) and off-diagonal (coherence) elements of the density matrix are decoupled. In this paper, we aimed to broaden the comparative study to include the influence of quantum coherence induced by the tilted system–reservoir interaction direction. By examining their dependence on the initial state of the information-bearing system, we find that the following properties of the bounds hold generically, regardless of whether the influence of the coherence is present: the entropic bound serves as the tighter bound for a sufficiently mixed initial state, while the thermodynamic bound is tighter when the purity of the initial state is sufficiently high. The exception is the case where the system dynamics involve only phase relaxation; in this case, the two bounds coincide when the initial coherence is zero; otherwise, the thermodynamic bound serves as the tighter bound. We also find that quantum information erasure is inevitably accompanied by a constant energy dissipation caused by the creation of system–reservoir correlation, which may constitute an additional source of energetic cost for the erasure. Full article
(This article belongs to the Special Issue Quantum Information Concepts in Open Quantum Systems)
Article
A Dynamic Autocatalytic Network Model of Therapeutic Change
Entropy 2022, 24(4), 547; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040547 - 13 Apr 2022
Viewed by 525
Abstract
Psychotherapy involves the modification of a client’s worldview to reduce distress and enhance well-being. We take a human dynamical systems approach to modeling this process, using Reflexively Autocatalytic foodset-derived (RAF) networks. RAFs have been used to model the self-organization of adaptive networks associated [...] Read more.
Psychotherapy involves the modification of a client’s worldview to reduce distress and enhance well-being. We take a human dynamical systems approach to modeling this process, using Reflexively Autocatalytic foodset-derived (RAF) networks. RAFs have been used to model the self-organization of adaptive networks associated with both the origin and early evolution of biological life and the evolution and development of the kind of cognitive structure necessary for cultural evolution. The RAF approach is applicable in these seemingly disparate cases because it provides a theoretical framework for formally describing under what conditions systems composed of elements that interact and ‘catalyze’ the formation of new elements collectively become integrated wholes. In our application, the elements are mental representations, and the whole is a conceptual network. The initial components—referred to as foodset items—are mental representations that are innate, or were acquired through social learning or individual learning (of pre-existing information). The new elements—referred to as foodset-derived items—are mental representations that result from creative thought (resulting in new information). In clinical psychology, a client’s distress may be due to, or exacerbated by, one or more beliefs that diminish self-esteem. Such beliefs may be formed and sustained through distorted thinking, and the tendency to interpret ambiguous events as confirmation of these beliefs. We view psychotherapy as a creative collaborative process between therapist and client, in which the output is not an artwork or invention but a more well-adapted worldview and approach to life on the part of the client. In this paper, we model a hypothetical albeit representative example of the formation and dissolution of such beliefs over the course of a therapist–client interaction using RAF networks.
We show how the therapist is able to elicit this worldview from the client and create a conceptualization of the client’s concerns. We then formally demonstrate four distinct ways in which the therapist is able to facilitate change in the client’s worldview: (1) challenging the client’s negative interpretations of events, (2) providing direct evidence that runs contrary to and counteracts the client’s distressing beliefs, (3) using self-disclosure to provide examples of strategies one can use to diffuse a negative conclusion, and (4) reinforcing the client’s attempts to assimilate such strategies into their own ways of thinking. We then discuss the implications of such an approach to expanding our knowledge of the development of mental health concerns and the trajectory of the therapeutic change. Full article
Article
A Method for Unsupervised Semi-Quantification of Immunohistochemical Staining with Beta Divergences
Entropy 2022, 24(4), 546; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040546 - 13 Apr 2022
Viewed by 446
Abstract
In many research laboratories, it is essential to determine the relative expression levels of some proteins of interest in tissue samples. The semi-quantitative scoring of a set of images consists of establishing a scale of scores ranging from zero or one to a [...] Read more.
In many research laboratories, it is essential to determine the relative expression levels of some proteins of interest in tissue samples. The semi-quantitative scoring of a set of images consists of establishing a scale of scores ranging from zero or one to a maximum number set by the researcher and assigning a score to each image that should represent some predefined characteristic of the IHC staining, such as its intensity. However, manual scoring depends on the judgment of an observer and therefore exposes the assessment to a certain level of bias. In this work, we present a fully automatic and unsupervised method for comparative biomarker quantification in histopathological brightfield images. The method relies on a color separation method that discriminates between two chromogens expressed as brown and blue colors robustly, independent of color variation or biomarker expression level. For this purpose, we have adopted a two-stage stain separation approach in the optical density space. First, a preliminary separation is performed using a deconvolution method in which the color vectors of the stains are determined after an eigendecomposition of the data. Then, we adjust the separation using the non-negative matrix factorization method with beta divergences, initializing the algorithm with the matrices resulting from the previous step. After that, a feature vector of each image based on the intensity of the two chromogens is determined. Finally, the images are annotated using a systematically initialized k-means clustering algorithm with beta divergences. The method clearly defines the initial boundaries of the categories, although some flexibility is added. Experiments for the semi-quantitative scoring of images in five categories have been carried out by comparing the results with the scores of four expert researchers yielding accuracies that range between 76.60% and 94.58%. 
These results show that the proposed automatic scoring system, which is definable and reproducible, produces consistent results. Full article
(This article belongs to the Special Issue Theory and Applications of Information Processing Algorithms)
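The non-negative matrix factorization step can be sketched for the simplest member of the beta-divergence family, beta = 2 (the Euclidean case), using the classic Lee–Seung multiplicative updates. The six "pixels" and two stain vectors below are invented toy data, and this is not the paper's full two-stage pipeline:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nmf(V, rank, iters=500, seed=0):
    """Multiplicative-update NMF minimising the beta = 2 (Euclidean)
    beta divergence: V is approximated by W @ H with W, H >= 0."""
    rng = random.Random(seed)
    m, n, eps = len(V), len(V[0]), 1e-12
    W = [[rng.uniform(0.1, 1.0) for _ in range(rank)] for _ in range(m)]
    H = [[rng.uniform(0.1, 1.0) for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        WT = [list(r) for r in zip(*W)]
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        HT = [list(r) for r in zip(*H)]
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

# Six "pixels" in optical-density space, each a mix of two stain vectors.
V = matmul([[1, 0], [0, 1], [1, 1], [2, 1], [1, 2], [0, 2]],
           [[0.9, 0.1, 0.2], [0.1, 0.8, 0.3]])
W, H = nmf(V, 2)
R = matmul(W, H)
residual = sum((V[i][j] - R[i][j]) ** 2 for i in range(6) for j in range(3))
print(residual)
```

The multiplicative form keeps every entry non-negative at each step, which is why it suits stain concentrations; other beta values (e.g. the Kullback–Leibler case, beta = 1) only change the update ratios.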
Article
Measurement Uncertainty, Purity, and Entanglement Dynamics of Maximally Entangled Two Qubits Interacting Spatially with Isolated Cavities: Intrinsic Decoherence Effect
Entropy 2022, 24(4), 545; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040545 - 13 Apr 2022
Viewed by 410
Abstract
In a system of two charge-qubits that are initially prepared in a maximally entangled Bell’s state, the dynamics of quantum memory-assisted entropic uncertainty, purity, and negative entanglement are investigated. Isolated external cavity fields are considered in two different configurations: coherent-even coherent and even [...] Read more.
In a system of two charge-qubits that are initially prepared in a maximally entangled Bell’s state, the dynamics of quantum memory-assisted entropic uncertainty, purity, and negative entanglement are investigated. Isolated external cavity fields are considered in two different configurations: coherent-even coherent and even coherent cavity fields. For different initial cavity configurations, the temporal evolution of the final state of qubits and cavities is solved analytically. The effects of intrinsic decoherence and detuning strength on the dynamics of bipartite entropic uncertainty, purity and entanglement are explored. Depending on the field parameters, nonclassical correlations can be preserved. Nonclassical correlations and revival aspects appear to be significantly inhibited when intrinsic decoherence increases. Nonclassical correlations stay longer and have greater revivals due to the high detuning of the two qubits and the coherence strength of the initial cavity fields. Quantum memory-assisted entropic uncertainty and entropy have similar dynamics while the negativity presents fewer revivals in contrast. Full article
(This article belongs to the Topic Quantum Information and Quantum Computing)
Article
Simplification of the Gram Matrix Eigenvalue Problem for Quadrature Amplitude Modulation Signals
Entropy 2022, 24(4), 544; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040544 - 13 Apr 2022
Viewed by 597
Abstract
In quantum information science, it is very important to solve the eigenvalue problem of the Gram matrix for quantum signals. This allows various quantities to be calculated, such as the error probability, mutual information, channel capacity, and the upper and lower bounds of [...] Read more.
In quantum information science, it is very important to solve the eigenvalue problem of the Gram matrix for quantum signals. This allows various quantities to be calculated, such as the error probability, mutual information, channel capacity, and the upper and lower bounds of the reliability function. Solving the eigenvalue problem also provides a matrix representation of quantum signals, which is useful for simulating quantum systems. In the case of symmetric signals, analytic solutions to the eigenvalue problem of the Gram matrix have been obtained, and efficient computations are possible. However, for asymmetric signals, there is no analytic solution, and universal numerical algorithms must be used, rendering the computations inefficient. Recently, we have shown that, for asymmetric signals such as amplitude-shift keying coherent-state signals, the Gram matrix eigenvalue problem can be simplified by exploiting its partial symmetry. In this paper, we clarify a method for simplifying the eigenvalue problem of the Gram matrix for quadrature amplitude modulation (QAM) signals, which are extremely important for applications in quantum communication and quantum ciphers. The results presented in this paper are applicable to ordinary QAM signals as well as modified QAM signals, which enhance the security of quantum cryptography. Full article
(This article belongs to the Special Issue Quantum Communication, Quantum Radar, and Quantum Cipher)
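How symmetry makes the Gram matrix eigenproblem analytic can be seen in the fully symmetric case of a 4-PSK coherent-state constellation (a deliberately simpler example than the QAM signals treated in the paper, whose partial symmetry is the actual subject): the Gram matrix is circulant, so its eigenvalues are just the DFT of its first row.

```python
import cmath

def coherent_overlap(a, b):
    """Inner product <a|b> of two coherent states |a>, |b>."""
    return cmath.exp(-abs(a) ** 2 / 2 - abs(b) ** 2 / 2 + a.conjugate() * b)

alpha = 0.6
states = [alpha * 1j ** m for m in range(4)]       # 4-PSK constellation
row = [coherent_overlap(states[0], s) for s in states]

# G[m][n] depends only on (n - m) mod 4, so G is circulant and its
# eigenvalues are the discrete Fourier transform of the first row.
eigs = [sum(row[m] * cmath.exp(-2j * cmath.pi * k * m / 4)
            for m in range(4)).real
        for k in range(4)]
print(eigs)
```

The eigenvalues come out real and positive (the Gram matrix is positive definite for distinct states) and sum to the trace, 4; for asymmetric constellations this shortcut fails, which is what motivates the partial-symmetry methods above.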
Article
A Uzawa-Type Iterative Algorithm for the Stationary Natural Convection Model
Entropy 2022, 24(4), 543; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040543 - 13 Apr 2022
Viewed by 473
Abstract
In this study, a Uzawa-type iterative algorithm is introduced and analyzed for solving the stationary natural convection model, where physical variables are discretized by utilizing a mixed finite element method. Compared with the common Uzawa iterative algorithm, the main finding is that the [...] Read more.
In this study, a Uzawa-type iterative algorithm is introduced and analyzed for solving the stationary natural convection model, where physical variables are discretized by utilizing a mixed finite element method. Compared with the common Uzawa iterative algorithm, the main finding is that the proposed algorithm produces weakly divergence-free velocity approximation. In addition, the convergence results of the proposed algorithm are provided, and numerical tests supporting the theory are presented. Full article
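The key property claimed above, a (weakly) divergence-free velocity approximation, can be seen in miniature on a generic saddle-point system of the kind mixed finite element discretizations produce. The 2 × 2 system below is an invented toy, not the paper's discretized convection model:

```python
# Uzawa iteration for the saddle-point system
#   A u + B^T p = f,   B u = 0
# (u: "velocity", p: "pressure", B u = 0: discrete incompressibility).
A = [[2.0, 0.0], [0.0, 2.0]]    # SPD "viscous" block (diagonal for brevity)
B = [[1.0, 1.0]]                # discrete divergence operator
f = [2.0, 0.0]
tau = 0.7                       # relaxation parameter, small enough to converge

p = 0.0
for _ in range(60):
    # velocity solve: u = A^{-1} (f - B^T p)  (trivial since A is diagonal)
    u = [(f[i] - B[0][i] * p) / A[i][i] for i in range(2)]
    # pressure update driven by the divergence residual B u
    p += tau * sum(B[0][i] * u[i] for i in range(2))

print(u, p)   # approaches u = (0.5, -0.5), p = 1, with B u -> 0
```

Each sweep enforces the momentum equation exactly and pushes the divergence residual toward zero, which is why the converged velocity is divergence-free up to iteration tolerance.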
Article
On the Age of Information in a Two-User Multiple Access Setup
Entropy 2022, 24(4), 542; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040542 - 12 Apr 2022
Viewed by 535
Abstract
This work considers a two-user multiple access channel in which both users have Age of Information (AoI)-oriented traffic with different characteristics. More specifically, the first user has external traffic and cannot control the generation of status updates, and the second user monitors a [...] Read more.
This work considers a two-user multiple access channel in which both users have Age of Information (AoI)-oriented traffic with different characteristics. More specifically, the first user has external traffic and cannot control the generation of status updates, and the second user monitors a sensor and transmits status updates to the receiver according to a generate-at-will policy. The receiver is equipped with multiple antennas and the transmitters have single antennas; the channels are subject to Rayleigh fading and path loss. We analyze the average AoI of the first user for a discrete-time first-come-first-served (FCFS) queue, last-come-first-served (LCFS) queue, and queue with packet replacement. We derive the AoI distribution and the average AoI of the second user for a threshold policy. Then, we formulate an optimization problem to minimize the average AoI of the first user for the FCFS and LCFS-with-preemption queue disciplines while maintaining the average AoI of the second user below a given level. The constraints of the optimization problem are shown to be convex. The objective function of the problem under the FCFS queue discipline is shown to be non-convex, and a suboptimal technique is introduced to solve the problem effectively using algorithms developed for convex optimization. Numerical results illustrate the performance of the considered optimization algorithm versus the different parameters of the system. Finally, we discuss how the analytical results of this work can be extended to capture larger setups with more than two users. Full article
(This article belongs to the Special Issue Age of Information: Concept, Metric and Tool for Network Control)
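A feel for why the average AoI of an FCFS queue has a non-trivial optimum comes from the classic continuous-time single-source baseline (Kaul, Yates and Gruteser's M/M/1 FCFS result, which is a simpler setting than the paper's discrete-time two-user system): updating too rarely leaves information stale, while updating too often clogs the queue.

```python
def avg_aoi_mm1_fcfs(lam, mu):
    """Average AoI of a continuous-time M/M/1 FCFS status-update queue:
    delta = (1/mu) * (1 + 1/rho + rho**2 / (1 - rho)),  rho = lam/mu < 1."""
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))

mu = 1.0
best_aoi, best_lam = min((avg_aoi_mm1_fcfs(l / 1000, mu), l / 1000)
                         for l in range(1, 1000))
print(best_aoi, best_lam)   # minimum near rho ~ 0.53, AoI ~ 3.48/mu
```

The grid search recovers the well-known optimal utilization of roughly 0.53, strictly between the idle and saturated extremes.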
Article
An Information Quantity in Pure State Models
Entropy 2022, 24(4), 541; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040541 - 12 Apr 2022
Viewed by 423
Abstract
When we consider an error model in a quantum computing system, we assume a parametric model where a prepared qubit belongs. Keeping this in mind, we focus on the evaluation of the amount of information we obtain when we know the system belongs [...] Read more.
When we consider an error model in a quantum computing system, we assume a parametric model where a prepared qubit belongs. Keeping this in mind, we focus on the evaluation of the amount of information we obtain when we know the system belongs to the model within the parameter range. Excluding classical fluctuations, uncertainty still remains in the system. We propose an information quantity called purely quantum information to evaluate this and give it an operational meaning. For the qubit case, it is relevant to the facility location problem on the unit sphere, which is well known in operations research. For general cases, we extend this to the facility location problem in complex projective spaces. Purely quantum information reflects the uncertainty of a quantum system and is related to the minimum entropy rather than the von Neumann entropy. Full article
(This article belongs to the Topic Quantum Information and Quantum Computing)
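The facility location problem invoked above is the minimax (1-center) problem: place one point so that its largest geodesic distance to the given points is as small as possible. A brute-force sketch on the unit circle, the simplest analog of the sphere and complex-projective settings discussed in the paper:

```python
import math

def one_center_on_circle(angles, grid=3600):
    """Minimax facility location on the unit circle: grid-search for the
    point whose largest geodesic distance to the given points is smallest."""
    def geo(a, b):                        # geodesic (arc-length) distance
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    return min((max(geo(t, a) for a in angles), t)
               for t in (2 * math.pi * k / grid for k in range(grid)))

radius, center = one_center_on_circle([0.0, 1.0, 2.0])
print(radius, center)   # 1-center near angle 1.0 with covering radius ~1.0
```

For the three points at angles 0, 1 and 2, the optimal facility sits midway at angle 1 with covering radius 1; in the paper's setting the "points" are quantum states and the distances live in complex projective space.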
Article
The Analysis of Mammalian Hearing Systems Supports the Hypothesis That Criticality Favors Neuronal Information Representation but Not Computation
Entropy 2022, 24(4), 540; https://0-doi-org.brum.beds.ac.uk/10.3390/e24040540 - 12 Apr 2022
Viewed by 388
Abstract
In the neighborhood of critical states, distinct materials exhibit the same physical behavior, expressed by common simple laws among measurable observables, hence rendering a more detailed analysis of the individual systems obsolete. It is a widespread view that critical states are fundamental to [...] Read more.
In the neighborhood of critical states, distinct materials exhibit the same physical behavior, expressed by common simple laws among measurable observables, hence rendering a more detailed analysis of the individual systems obsolete. It is a widespread view that critical states are fundamental to neuroscience and directly favor computation. We argue here that from an evolutionary point of view, critical points seem indeed to be a natural phenomenon. Using mammalian hearing as our example, we show, however, explicitly that criticality does not describe the proper computational process and thus is only indirectly related to the computation in neural systems. Full article
(This article belongs to the Special Issue The Principle of Dynamical Criticality)