
Entropy, Volume 24, Issue 2 (February 2022) – 168 articles

Cover Story: The nonequilibrium thermodynamics of polymeric liquids poses a demanding intellectual and practical challenge for scientists and engineers. Unlike simple, isotropic liquids, the molecular configurational microstructure of long, chain-like polymers adds a deeper level of complexity to an already challenging problem. Various frameworks for the nonequilibrium thermodynamics of polymers have been proposed over the past 75 years, but explicit evidence of their validity and usefulness has yet to be provided, principally because the experimental measurement of the entropy of a polymeric liquid is not currently possible. With the advent of realistic macromolecular simulations of melts and solutions over the last decade, it has finally become possible to calculate entropy for these complicated liquids.
Article
Generalizations of Talagrand Inequality for Sinkhorn Distance Using Entropy Power Inequality
Entropy 2022, 24(2), 306; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020306 - 21 Feb 2022
Abstract
The distance that compares the difference between two probability distributions plays a fundamental role in statistics and machine learning. Optimal transport (OT) theory provides a theoretical framework to study such distances. Recent advances in OT theory include a generalization of classical OT with an extra entropic constraint or regularization, called entropic OT. Despite its convenience in computation, entropic OT still lacks sufficient theoretical support. In this paper, we show that the quadratic cost in entropic OT can be upper-bounded using entropy power inequality (EPI)-type bounds. First, we prove an HWI-type inequality by making use of the infinitesimal displacement convexity of the OT map. Second, we derive two Talagrand-type inequalities using the saturation of EPI that corresponds to a numerical term in our expressions. These two new inequalities are shown to generalize two previous results obtained by Bolley et al. and Bai et al. Using the new Talagrand-type inequalities, we also show that the geometry observed by Sinkhorn distance is smoothed in the sense of measure concentration. Finally, we corroborate our results with various simulation studies. Full article
(This article belongs to the Special Issue Distance in Information and Statistical Physics III)
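The entropic OT studied above is typically computed with Sinkhorn–Knopp matrix scaling; a minimal pure-Python sketch (the toy cost matrix, uniform marginals, and regularization strength are illustrative assumptions, not the paper's setup):

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=200):
    """Entropic-OT transport cost via Sinkhorn-Knopp scaling iterations."""
    n, m = len(a), len(b)
    # Gibbs kernel K_ij = exp(-C_ij / eps)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # transport plan P_ij = u_i K_ij v_j; return the transport cost <P, C>
    return sum(u[i] * K[i][j] * v[j] * cost[i][j]
               for i in range(n) for j in range(m))

# toy example: two 2-point distributions with uniform weights
cost = [[0.0, 1.0], [1.0, 0.0]]
a = [0.5, 0.5]
b = [0.5, 0.5]
print(round(sinkhorn(cost, a, b), 4))
```

For identical marginals the plan concentrates near the diagonal, so the regularized transport cost stays close to zero.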

Article
Optimal UAV Formation Tracking Control with Dynamic Leading Velocity and Network-Induced Delays
Entropy 2022, 24(2), 305; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020305 - 21 Feb 2022
Abstract
With the rapid development of UAV technology, optimal UAV formation tracking has been extensively studied. However, the high maneuverability and dynamic network topology of UAVs make formation tracking control much more difficult. In this paper, considering the highly dynamic features of uncertain time-varying leader velocity and network-induced delays, optimal formation control algorithms are developed for both near-equilibrium and general dynamic control cases. First, the discrete-time error dynamics of UAV leader–follower models are analyzed. Next, a linear quadratic optimization problem is formulated with the objective of minimizing the errors between the desired and actual states, consisting of the velocity and position information of the follower. The optimal formation tracking problem in the near-equilibrium case is addressed by using a backward recursion method, and the results are then extended to the general dynamic case, where the leader moves at an uncertain time-varying velocity. Additionally, angle deviations are investigated, and it is proved that state dynamics similar to those of the general case can be derived and that the principle of the control strategy design can be maintained. Using real-world data, numerical experiments verify the effectiveness of the proposed optimal UAV formation-tracking algorithm in both near-equilibrium and dynamic control cases in the presence of network-induced delays. Full article
(This article belongs to the Section Signal and Data Analysis)
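The backward recursion used for the near-equilibrium case is in the spirit of a finite-horizon linear quadratic regulator; a scalar toy sketch (the dynamics coefficients and cost weights below are illustrative, not the paper's UAV model):

```python
def lqr_backward(a, b, q, r, qT, horizon):
    """Finite-horizon scalar LQR: backward Riccati recursion returning
    the optimal feedback gains, ordered forward in time."""
    p = qT                                    # terminal cost-to-go weight
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)     # optimal gain at this step
        p = q + a * p * a - a * p * b * k     # Riccati update (backward)
        gains.append(k)
    gains.reverse()
    return gains

gains = lqr_backward(a=1.0, b=0.5, q=1.0, r=0.1, qT=1.0, horizon=20)

# simulate the tracking error x_{t+1} = (a - b*k_t) x_t from x0 = 1
x = 1.0
for k in gains:
    x = (1.0 - 0.5 * k) * x
print(abs(x))  # error is driven close to zero by the feedback
```

The same recursion generalizes to the vector case with matrix Riccati updates.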

Article
Correlations, Information Backflow, and Objectivity in a Class of Pure Dephasing Models
Entropy 2022, 24(2), 304; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020304 - 21 Feb 2022
Abstract
We critically examine the role that correlations established between a system and fragments of its environment play in characterising the ensuing dynamics. We employ a dephasing model with different initial conditions, where the state of the initial environment represents a tunable degree of freedom that qualitatively and quantitatively affects the correlation profiles, but nevertheless results in the same reduced dynamics for the system. We apply recently developed tools for the characterisation of non-Markovianity to carefully assess the role that correlations, as quantified by the (quantum) Jensen–Shannon divergence and relative entropy, as well as changes in the environmental state, play in whether the conditions for classical objectivity within the quantum Darwinism paradigm are met. We demonstrate that for precisely the same non-Markovian reduced dynamics of the system arising from different microscopic models, some exhibit quantum Darwinistic features, while others show that no meaningful notion of classical objectivity is present. Furthermore, our results highlight that the non-Markovian nature of an environment does not a priori prevent a system from redundantly proliferating relevant information, but rather it is the system’s ability to establish the requisite correlations that is the crucial factor in the manifestation of classical objectivity. Full article
(This article belongs to the Special Issue Quantum Information Concepts in Open Quantum Systems)

Article
Linear and Nonlinear Effects in Connectedness Structure: Comparison between European Stock Markets
Entropy 2022, 24(2), 303; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020303 - 21 Feb 2022
Abstract
The purpose of this research is to compare the risk transfer structure in Central and Eastern European and Western European stock markets during the 2007–2009 financial crisis and the COVID-19 pandemic. Similar to the global financial crisis (GFC), the spread of coronavirus (COVID-19) created a significant level of risk, causing investors to suffer losses in a very short period of time. We use a variety of methods, including nonstandard ones such as mutual information and transfer entropy. The results we obtained indicate that there are significant nonlinear correlations in the capital markets that can be practically applied to investment portfolio optimization. From an investor's perspective, our findings suggest that in the wake of a global crisis or pandemic outbreak, the benefits of diversification will be limited by the transfer of funds between developed and developing country markets. Our study provides an insight into risk transfer theory in developed and emerging markets, as well as a cutting-edge methodology designed for analyzing the connectedness of markets. We contribute to the studies which have examined different stock markets' responses to different turbulences. The study confirms that specific market effects can still play a significant role because of the interconnection of different sectors of the global economy. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Economics, Finance, and Management)
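The mutual information used here as a nonstandard dependence measure can be estimated from histograms; a plug-in sketch on synthetic return-like series (the bin count and toy data are illustrative assumptions):

```python
import math
import random

def mutual_information(x, y, bins=4):
    """Histogram (plug-in) estimate of mutual information I(X;Y) in nats."""
    def to_bins(v):
        lo, hi = min(v), max(v)
        return [min(bins - 1, int((u - lo) / (hi - lo + 1e-12) * bins))
                for u in v]
    bx, by = to_bins(x), to_bins(y)
    n = len(x)
    pxy, px, py = {}, {}, {}
    for i, j in zip(bx, by):        # accumulate empirical probabilities
        pxy[(i, j)] = pxy.get((i, j), 0) + 1 / n
        px[i] = px.get(i, 0) + 1 / n
        py[j] = py.get(j, 0) + 1 / n
    return sum(p * math.log(p / (px[i] * py[j])) for (i, j), p in pxy.items())

random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]
y_dep = [v + 0.1 * random.gauss(0, 1) for v in x]     # strongly coupled series
y_ind = [random.gauss(0, 1) for _ in range(2000)]     # independent series
print(mutual_information(x, y_dep) > mutual_information(x, y_ind))
```

A dependent pair yields a markedly larger estimate than an independent one, which is the basis for detecting nonlinear market linkages.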

Article
A Novel Epidemic Model Based on Pulse Charging in Wireless Rechargeable Sensor Networks
Entropy 2022, 24(2), 302; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020302 - 21 Feb 2022
Abstract
As wireless rechargeable sensor networks (WRSNs) are gradually being widely accepted and recognized, the security issues of WRSNs have become a focus of research discussion. In the existing WRSNs research, few studies have introduced the idea of pulse charging. Taking into account the utilization rate of nodes' energy, this paper proposes a novel pulse infectious disease model (SIALS-P), which is composed of susceptible, infected, anti-malware, and low-energy susceptible states under pulse charging, to deal with the security issues of WRSNs. At each periodic pulse point, a portion of the low-energy states (LS nodes, LI nodes) is converted into the normal-energy states (S nodes, I nodes) to control the numbers of susceptible and infected nodes. This paper first analyzes the local stability of the SIALS-P model by Floquet theory. Then, a suitable comparison system is given via the comparison theorem to analyze the stability of the malware-free T-period solution and the persistence of malware transmission. Additionally, the optimal control of the proposed model is analyzed. Finally, a comparative simulation analysis of the proposed model, the non-charging model, and the continuous charging model is given, and the effects of parameters on the basic reproduction number of the three models are shown. Meanwhile, the sensitivity of each parameter and the optimal control theory are further verified. Full article
(This article belongs to the Topic Complex Systems and Artificial Intelligence)
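The pulse-charging idea, in which low-energy nodes are periodically returned to the active pool, can be illustrated with a toy susceptible–infected simulation (the compartments and rates below are illustrative assumptions, not the paper's SIALS-P equations):

```python
def simulate_pulse_si(beta, drain, recharge_frac, period, steps, dt=0.01):
    """Toy SI model with energy drain; at each pulse point a fraction of
    low-energy nodes is recharged back into the susceptible pool."""
    S, I, L = 0.9, 0.1, 0.0           # susceptible, infected, low-energy
    for step in range(1, steps + 1):
        new_inf = beta * S * I * dt   # infection transfer S -> I
        to_low = drain * (S + I) * dt # energy drain into low-energy state
        S += -new_inf - drain * S * dt
        I += new_inf - drain * I * dt
        L += to_low
        if step % period == 0:        # periodic pulse charging
            back = recharge_frac * L
            S += back                 # recharged nodes rejoin as susceptible
            L -= back
    return S, I, L

S, I, L = simulate_pulse_si(beta=0.5, drain=0.1, recharge_frac=0.8,
                            period=100, steps=2000)
print(round(S + I + L, 6))  # the total node fraction is conserved
```

The pulse term is what makes the model a periodically forced (impulsive) system, which is why Floquet theory is the natural stability tool.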

Review
The Free Energy Principle for Perception and Action: A Deep Learning Perspective
Entropy 2022, 24(2), 301; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020301 - 21 Feb 2022
Abstract
The free energy principle, and its corollary active inference, constitute a bio-inspired theory that assumes biological agents act to remain in a restricted set of preferred states of the world, i.e., they minimize their free energy. Under this principle, biological agents learn a generative model of the world and plan future actions that will maintain the agent in a homeostatic state that satisfies its preferences. This framework lends itself to being realized in silico, as it comprises important aspects that make it computationally affordable, such as variational inference and amortized planning. In this work, we investigate how deep learning can be used to design and realize artificial agents based on active inference, providing a deep-learning-oriented presentation of the free energy principle, surveying works relevant to both the machine learning and active inference areas, and discussing the design choices involved in the implementation process. This manuscript probes newer perspectives for the active inference framework, grounding its theoretical aspects in more pragmatic matters and offering a practical guide for active inference newcomers and a starting point for deep learning practitioners who would like to investigate implementations of the free energy principle. Full article
(This article belongs to the Special Issue Emerging Methods in Active Inference)

Article
Energy-Efficient Optimization for Energy-Harvesting-Enabled mmWave-UAV Heterogeneous Networks
Entropy 2022, 24(2), 300; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020300 - 20 Feb 2022
Abstract
Energy Harvesting (EH) is a promising paradigm for 5G heterogeneous communication. EH-enabled Device-to-Device (D2D) communication can assist devices in overcoming the disadvantage of limited battery capacity and improving the Energy Efficiency (EE) by performing EH from ambient wireless signals. Although numerous research works have been conducted on EH-based D2D communication scenarios, the features of EH-based D2D communication underlying Air-to-Ground (A2G) millimeter-Wave (mmWave) networks have not been fully studied. In this paper, we considered a scenario where multiple Unmanned Aerial Vehicles (UAVs) are deployed to provide energy for D2D Users (DUs) and data transmission for Cellular Users (CUs). We aimed to improve the network EE of EH-enabled D2D communications while reducing the time complexity of beam alignment for mmWave-enabled DUs. We considered a scenario where multiple EH-enabled DUs and CUs coexist, sharing the full mmWave frequency band and adopting high-directive beams for transmitting. To improve the network EE, we propose a joint beamwidth selection, power control, and EH time ratio optimization algorithm for DUs based on alternating optimization. We iteratively optimized one of the three variables, fixing the other two. During each iteration, we first used a game-theoretic approach to adjust the beamwidths of the DUs to achieve the sub-optimal EE. Then, the problem with regard to power optimization was solved by the Dinkelbach method and Successive Convex Approximation (SCA). Finally, we performed the optimization of the EH time ratio using linear fractional programming to further increase the EE. By performing extensive simulation experiments, we validated the convergence and effectiveness of our algorithm. The results showed that our proposed algorithm outperformed the fixed-beamwidth and fixed-power strategy and could closely approach the performance of exhaustive search, particle swarm optimization, and the genetic algorithm, but with much lower time complexity. Full article
(This article belongs to the Special Issue Information Theory and Game Theory)

Article
Magnetic Entropic Forces Emerging in the System of Elementary Magnets Exposed to the Magnetic Field
Entropy 2022, 24(2), 299; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020299 - 20 Feb 2022
Abstract
A temperature-dependent entropic force acting between a straight direct current I and a linear system (a string of length L) of N elementary non-interacting magnets/spins μ is reported. The system of elementary magnets is supposed to be in thermal equilibrium with an infinite thermal bath at temperature T. The entropic force at large distances from the current scales as F_magn^en ~ 1/r^3, where r is the distance between the edge of the string and the current I, and k_B is the Boltzmann constant (r ≫ L is adopted). The entropic magnetic force is a repulsive force. It scales as F_magn^en ~ 1/T, which is unusual for entropic forces. The effect of "entropic pressure" is predicted for the situation in which the source of the magnetic field is embedded in a continuous medium comprising elementary magnets/spins. The interrelation between bulk and entropic magnetic forces is analyzed. Entropic forces acting on a 1D string of elementary magnets exposed to the magnetic field produced by a magnetic dipole are also addressed. Full article
(This article belongs to the Special Issue Entropic Forces in Complex Systems II)

Article
Networking Feasibility of Quantum Key Distribution Constellation Networks
Entropy 2022, 24(2), 298; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020298 - 20 Feb 2022
Abstract
Quantum key distribution constellations are the key to achieving global quantum networking. However, the networking feasibility of a quantum constellation that combines satellite-to-ground access selection and inter-satellite routing has received little research attention. In this paper, satellite-to-ground access selection is modeled as the problem of finding longest paths in directed acyclic graphs, and inter-satellite routing is interpreted as a maximum-flow problem in graph theory. To the best of our knowledge, this is the first time these problems have been treated from the perspective of graph theory. Corresponding algorithms to solve the problems are provided. Although the classical discrete-variable quantum key distribution protocol, i.e., the BB84 protocol, is applied in the simulation, the methods proposed in our paper can also be applied to other secure key distribution schemes. The simulation results for a low-Earth-orbit constellation scenario show that the Sun is the leading factor restricting the networking. Due to the solar influence, inter-planar links block the network periodically and, thus, the inter-continental delivery of keys is restricted significantly. Full article
(This article belongs to the Special Issue Quantum Communication)
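The longest-path formulation for access selection is tractable on directed acyclic graphs via a topological-order dynamic program; a minimal sketch on a toy contact graph (edge weights standing in for deliverable key volume are an assumption for illustration):

```python
from collections import deque

def longest_path(n, edges):
    """Maximum-weight path in a DAG: topological sort (Kahn) + DP."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    order, q = [], deque(i for i in range(n) if indeg[i] == 0)
    while q:                          # Kahn's algorithm
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    best = [0] * n                    # best path weight ending at each node
    for u in order:                   # relax edges in topological order
        for v, w in adj[u]:
            best[v] = max(best[v], best[u] + w)
    return max(best)

# toy access graph: node 0 -> {1, 2} -> 3, weights as key volumes
edges = [(0, 1, 3), (0, 2, 5), (1, 3, 4), (2, 3, 1)]
print(longest_path(4, edges))  # -> 7, via the path 0 -> 1 -> 3
```

Longest path is NP-hard on general graphs but linear-time on DAGs, which is what makes the access-selection formulation practical.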

Article
The Random Plots Graph Generation Model for Studying Systems with Unknown Connection Structures
Entropy 2022, 24(2), 297; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020297 - 20 Feb 2022
Abstract
We consider the problem of modeling complex systems where little or nothing is known about the structure of the connections between the elements. In particular, when such systems are to be modeled by graphs, it is unclear what vertex degree distributions these graphs should have. We propose that, instead of attempting to guess the appropriate degree distribution for a poorly understood system, one should model the system via a set of sample graphs whose degree distributions cover a representative range of possibilities and account for a variety of possible connection structures. To construct such a representative set of graphs, we propose a new random graph generator, Random Plots, in which we (1) generate a diversified set of vertex degree distributions and (2) target a graph generator at each of the constructed distributions, one by one, to obtain the ensemble of graphs. To assess the diversity of the resulting ensembles, we (1) formalize the vague notion of diversity in a graph ensemble as the diversity of the numerical characteristics of the graphs within this ensemble and (2) compare such formalized diversity for the proposed model with that of three other common models (Erdős–Rényi–Gilbert (ERG), scale-free, and small-world). Computational experiments show that, in most cases, our approach produces more diverse sets of graphs compared with the three other models, including the entropy-maximizing ERG. The corresponding Python code is available at GitHub. Full article
(This article belongs to the Section Complexity)
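Targeting a graph generator at a prescribed degree distribution, as in step (2), is commonly done with the configuration model; a minimal stub-matching sketch (the degree sequence is illustrative, and this is not necessarily the generator the authors use):

```python
import random

def configuration_model(degrees, seed=0):
    """Pair up edge stubs uniformly at random to realize a degree sequence."""
    assert sum(degrees) % 2 == 0, "degree sum must be even"
    # each node contributes one stub per unit of degree
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    rng = random.Random(seed)
    rng.shuffle(stubs)
    # pair consecutive stubs; self-loops / multi-edges may occur, as usual
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

degrees = [3, 2, 2, 2, 1]
edges = configuration_model(degrees)

realized = [0] * len(degrees)
for u, v in edges:
    realized[u] += 1
    realized[v] += 1
print(realized == degrees)  # stub pairing reproduces the target degrees exactly
```

Sampling many such graphs from many target distributions is one way to build the kind of diversified ensemble the abstract describes.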

Article
TLP-CCC: Temporal Link Prediction Based on Collective Community and Centrality Feature Fusion
Entropy 2022, 24(2), 296; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020296 - 20 Feb 2022
Abstract
In the domain of network science, predicting future links between nodes is a significant problem in social network analysis. Recently, temporal network link prediction has attracted many researchers due to its valuable real-world applications. However, methods based on network structure similarity are generally limited to static networks, and methods based on deep neural networks often have high computational costs. This paper fully mines the network structure information and time-domain attenuation information and proposes a novel temporal link prediction method. Firstly, the network collective influence (CI) method is used to calculate the weights of nodes and edges. Then, the graph is divided into several community subgraphs by removing weak links. Moreover, a biased random walk method is proposed, and the embedded representation vector is obtained by a modified Skip-gram model. Finally, this paper proposes a novel temporal link prediction method named TLP-CCC, which integrates collective influence, community walk features, and centrality features. Experimental results on nine real dynamic network data sets show that the proposed method performs better in terms of area under the curve (AUC) evaluation compared with classical link prediction methods. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)

Article
Numerical Study of Entropy Generation in Fully Developed Turbulent Circular Tube Flow Using an Elliptic Blending Turbulence Model
Entropy 2022, 24(2), 295; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020295 - 19 Feb 2022
Abstract
As computational fluid dynamics (CFD) advances, entropy generation minimization based on CFD becomes attractive for optimizing complex heat-transfer systems. This optimization depends on the accuracy of CFD results, such that accurate turbulence models, such as elliptic relaxation or elliptic blending turbulence models, become important. The performance of a previously developed elliptic blending turbulence model (the SST k-ω-φ-α model) in predicting the rate of entropy generation in fully developed turbulent circular tube flow with constant heat flux was studied to provide some guidelines for using this class of turbulence model to calculate entropy generation in complex systems. The flow and temperature fields were simulated by using a CFD package, and then the rate of entropy generation was calculated in post-processing. Analytical correlations and the results of two popular turbulence models (the realizable k-ε and the shear stress transport (SST) k-ω models) were used as references to demonstrate the accuracy of the SST k-ω-φ-α model. The findings indicate that the turbulent Prandtl number (Pr_t) influences the entropy generation rate due to heat-transfer irreversibility. Pr_t = 0.85 produces the best results for the SST k-ω-φ-α model. For the realizable k-ε and SST k-ω models, Pr_t = 0.85 and Pr_t = 0.92 produce the best results, respectively. For the realizable k-ε and SST k-ω models, the two methods used to predict the rate of entropy generation due to friction irreversibility produce the same results. However, for the SST k-ω-φ-α model, the rates of entropy generation due to friction irreversibility predicted by the two methods are different; the difference at a Reynolds number of 100,000 is about 14%. The method that incorporates the effective turbulent viscosity should be used to predict the rate of entropy generation due to friction irreversibility for the SST k-ω-φ-α model. Furthermore, when the temperature in the flow field changes dramatically, the temperature-dependent fluid properties must be considered. Full article

Article
TPFusion: Texture Preserving Fusion of Infrared and Visible Images via Dense Networks
Entropy 2022, 24(2), 294; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020294 - 19 Feb 2022
Abstract
In this paper, we design an infrared (IR) and visible (VIS) image fusion method based on unsupervised dense networks, termed TPFusion. Activity level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. In recent years, deep learning-based methods have been proposed to handle this problem. However, for multi-modality image fusion, using the same network cannot extract effective feature maps from source images that are obtained by different image sensors. TPFusion avoids this issue. First, we extract the textural information of the source images. Then, two densely connected networks are trained to fuse the textural information and the source images, respectively. In this way, we can preserve more textural details in the fused image. Moreover, the loss functions we designed to constrain the two densely connected convolutional networks are tailored to the characteristics of textural information and source images, so the fused image retains more of the source images' textural information. To demonstrate the validity of our method, we conduct comparison and ablation experiments with qualitative and quantitative assessments. The ablation experiments prove the effectiveness of TPFusion. Compared to existing advanced IR and VIS image fusion methods, our method achieves better fusion results in both objective and subjective aspects. Specifically, in qualitative comparisons, our fusion results show better contrast and abundant textural details; in quantitative comparisons, TPFusion outperforms existing representative fusion methods. Full article
(This article belongs to the Special Issue Advances in Image Fusion)

Article
Identifying Influential Nodes in Complex Networks Based on Multiple Local Attributes and Information Entropy
Entropy 2022, 24(2), 293; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020293 - 18 Feb 2022
Abstract
Identifying influential nodes in complex networks has attracted the attention of many researchers in recent years. However, due to the high time complexity, methods based on global attributes have become unsuitable for large-scale complex networks. In addition, compared with methods considering only a single attribute, considering multiple attributes can enhance the performance of the method used. Therefore, this paper proposes a new multiple local attributes-weighted centrality (LWC) based on information entropy, combining degree and clustering coefficient; both one-step and two-step neighborhood information are considered for evaluating the influence of nodes and identifying influential nodes in complex networks. Firstly, the influence of a node in a complex network is divided into direct influence and indirect influence. The degree and clustering coefficient are selected as direct influence measures. Secondly, based on the two direct influence measures, we define two indirect influence measures: two-hop degree and two-hop clustering coefficient. Then, the information entropy is used to weight the above four influence measures, and the LWC of each node is obtained by calculating the weighted sum of these measures. Finally, all the nodes are ranked based on the value of the LWC, and the influential nodes can be identified. The proposed LWC method is applied to identify influential nodes in four real-world networks and is compared with five well-known methods. The experimental results demonstrate the good performance of the proposed method on discrimination capability and accuracy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
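Weighting several influence measures by information entropy, as in the LWC construction, can be sketched with the standard entropy weight method (the toy measures below are illustrative; the paper's exact normalization may differ):

```python
import math

def entropy_weights(measures):
    """Entropy weight method: measures maps a name to per-node values.
    Measures with lower entropy (more discriminative) get larger weights."""
    n = len(next(iter(measures.values())))
    raw = {}
    for name, vals in measures.items():
        total = sum(vals)
        p = [v / total for v in vals]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw[name] = 1 - e             # divergence degree of this measure
    z = sum(raw.values())
    return {name: w / z for name, w in raw.items()}

def weighted_centrality(measures):
    """Rank score per node: entropy-weighted sum of all measures."""
    w = entropy_weights(measures)
    n = len(next(iter(measures.values())))
    return [sum(w[name] * measures[name][i] for name in measures)
            for i in range(n)]

# toy star-like graph: node 0 has high degree; clustering is uninformative
measures = {"degree": [4, 1, 1, 1], "clustering": [0.2, 0.2, 0.2, 0.2]}
scores = weighted_centrality(measures)
print(scores.index(max(scores)))  # node 0 ranks highest
```

Because the clustering column is uniform, its entropy is maximal and its weight collapses toward zero, so the ranking is driven by degree, exactly the behavior entropy weighting is meant to provide.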

Article
Residual Learning and Multi-Path Feature Fusion-Based Channel Estimation for Millimeter-Wave Massive MIMO System
Entropy 2022, 24(2), 292; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020292 - 18 Feb 2022
Abstract
Channel estimation is a challenging task in a millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) system. Existing deep learning schemes, which learn the mapping from the input to the target channel, have great difficulty in estimating the exact channel state information (CSI). In this paper, we consider the quantized received measurements as a low-resolution image, and we adopt the deep learning-based image super-resolution technique to reconstruct the mmWave channel. Specifically, we exploit a state-of-the-art channel estimation framework based on residual learning and multi-path feature fusion (RL-MFF-Net). Firstly, residual learning makes the channel estimator focus on learning the high-frequency residual information between the quantized received measurements and the mmWave channel, while abundant low-frequency information is bypassed through skip connections. Moreover, to address the estimator's gradient dispersion problem, dense connections are added to the residual blocks to ensure maximum information flow between the layers. Furthermore, the underlying mmWave channel local features extracted from different residual blocks are preserved by multi-path feature fusion. The simulation results demonstrate that the proposed scheme outperforms traditional methods as well as existing deep learning methods, especially in the low signal-to-noise ratio (SNR) region. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
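As a generic illustration of the residual learning idea described above (not the paper's RL-MFF-Net architecture; the layer sizes and the ReLU activation are assumptions), a residual block can be sketched in NumPy:

```python
import numpy as np

def residual_block(x, w1, w2):
    """Toy residual block: the skip connection passes the input through
    unchanged, so the weighted branch only has to model the residual
    (high-frequency) part of the mapping."""
    h = np.maximum(0.0, x @ w1)   # ReLU branch learns the residual
    return x + h @ w2             # skip connection adds the input back

# With all-zero weights the block reduces to the identity map:
# the skip connection alone carries the (low-frequency) signal.
x = np.arange(4.0)
out = residual_block(x, np.zeros((4, 8)), np.zeros((8, 4)))
```

This is why skip connections "bypass" low-frequency content: the network only has to learn the correction on top of the identity.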

Article
Multiscale Geometric Analysis Fusion-Based Unsupervised Change Detection in Remote Sensing Images via FLICM Model
Entropy 2022, 24(2), 291; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020291 - 18 Feb 2022
Cited by 1 | Viewed by 410
Abstract
Remote sensing image change detection is widely used in land use and natural disaster detection. To improve the accuracy of change detection, a robust change detection method based on nonsubsampled contourlet transform (NSCT) fusion and the fuzzy local information C-means clustering (FLICM) model is introduced in this paper. First, the log-ratio and mean-ratio operators are used to generate two difference images (DIs); then, the NSCT fusion model is utilized to fuse the two DIs into one new DI. The fused DI not only reflects the real change trend but also suppresses the background. FLICM is then performed on the new DI to obtain the final change detection map. Four groups of homogeneous remote sensing images are selected for simulation experiments, and the experimental results demonstrate that the proposed homogeneous change detection method performs better than other state-of-the-art algorithms. Full article
(This article belongs to the Special Issue Advances in Image Fusion)
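The two difference operators named above have standard forms; a hedged NumPy sketch (the paper's exact normalizations may differ):

```python
import numpy as np

def log_ratio_di(x1, x2, eps=1e-6):
    """Log-ratio difference image: 0 where the images agree,
    robust to multiplicative (speckle-like) noise."""
    return np.abs(np.log((x2 + eps) / (x1 + eps)))

def mean_ratio_di(x1, x2, eps=1e-6):
    """Mean-ratio difference image: 0 where the images agree,
    approaching 1 where they differ strongly."""
    return 1.0 - np.minimum(x1, x2) / (np.maximum(x1, x2) + eps)

# Two toy acquisitions: only the bottom-right pixel changes.
t1 = np.array([[10.0, 10.0], [10.0, 50.0]])
t2 = np.array([[10.0, 10.0], [10.0, 10.0]])
dlog = log_ratio_di(t1, t2)
dmean = mean_ratio_di(t1, t2)
```

Fusing the two DIs (here via NSCT in the paper) combines the complementary sensitivity of the two operators before clustering.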

Article
Entropy-Variance Curves of Binary Sequences Generated by Random Substitutions of Constant Length
Entropy 2022, 24(2), 290; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020290 - 18 Feb 2022
Viewed by 351
Abstract
We study some properties of binary sequences generated by random substitutions of constant length. Specifically, assuming the alphabet {0,1}, we consider the following asymmetric substitution rule of length k: 0 → 0,0,…,0 and 1 → Y1,Y2,…,Yk, where Yi is a Bernoulli random variable with parameter p ∈ [0,1]. We obtain by recurrence the discrete probability distribution of the stochastic variable that counts the number of ones in the sequence formed after i substitutions (iterations). We derive its first two statistical moments, mean and variance, and the entropy of the generated sequences as a function of the substitution length k for any iteration i, and characterize the values of p where the maxima of these measures occur. Finally, we obtain the parametric entropy-variance curves for each iteration and substitution length. We find two regimes of dependence between these two variables that, to our knowledge, have not been previously described. Moreover, this allows us to compare sequences with the same entropy but different variance and vice versa. Full article
(This article belongs to the Section Complexity)
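The counting recurrence can be sketched directly: given N_i = m ones, each '1' expands into k Bernoulli(p) symbols and each '0' into k zeros, so N_{i+1} ~ Binomial(mk, p). A stdlib sketch starting from the single symbol '1' (the starting word is an assumption made for illustration):

```python
from math import comb, log2

def ones_distribution(k, p, iters):
    """Exact distribution of the number of ones after `iters`
    substitutions, starting from the single symbol '1'."""
    dist = {1: 1.0}
    for _ in range(iters):
        nxt = {}
        for m, pm in dist.items():
            n = m * k  # a word with m ones yields Binomial(m*k, p) ones
            for x in range(n + 1):
                nxt[x] = nxt.get(x, 0.0) + pm * comb(n, x) * p**x * (1 - p)**(n - x)
        dist = nxt
    return dist

def moments_and_entropy(dist):
    mean = sum(x * px for x, px in dist.items())
    var = sum((x - mean) ** 2 * px for x, px in dist.items())
    ent = -sum(px * log2(px) for px in dist.values() if px > 0)
    return mean, var, ent

d = ones_distribution(k=2, p=0.5, iters=3)
mean, var, ent = moments_and_entropy(d)
```

Note that the mean obeys E[N_{i+1}] = kp·E[N_i], so for kp = 1 (here k = 2, p = 0.5) the expected count stays at 1 while the variance and entropy grow.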

Article
Estimating Non-Gaussianity of a Quantum State by Measuring Orthogonal Quadratures
Entropy 2022, 24(2), 289; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020289 - 18 Feb 2022
Cited by 1 | Viewed by 472
Abstract
We derive the lower bounds for a non-Gaussianity measure based on quantum relative entropy (QRE). Our approach draws on the observation that the QRE-based non-Gaussianity measure of a single-mode quantum state is lower bounded by a function of the negentropies for quadrature distributions with maximum and minimum variances. We demonstrate that the lower bound can outperform the previously proposed bound by the negentropy of a quadrature distribution. Furthermore, we extend our method to establish lower bounds for the QRE-based non-Gaussianity measure of a multimode quantum state that can be measured by homodyne detection, with or without leveraging a Gaussian unitary operation. Finally, we explore how our lower bound finds application in non-Gaussian entanglement detection. Full article
(This article belongs to the Collection Quantum Information)

Article
Exploring and Selecting Features to Predict the Next Outcomes of MLB Games
Entropy 2022, 24(2), 288; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020288 - 17 Feb 2022
Viewed by 475
Abstract
(1) Background and Objective: Major League Baseball (MLB) is one of the most popular international sporting events worldwide. Many people are very interested in the related activities and are curious about the outcome of the next game. Many factors affect the outcome of a baseball game, and predicting it precisely is very difficult. At present, relevant research predicts the outcome of the next game with accuracies between 55% and 62%. (2) Methods: This research collected MLB game data from 2015 to 2019 and organized a total of 30 datasets, one per team, to predict the outcome of the next game. The prediction methods used include a one-dimensional convolutional neural network (1DCNN) and three machine-learning methods, namely an artificial neural network (ANN), a support vector machine (SVM), and logistic regression (LR). (3) Results: The prediction results show that, among the four prediction models, the SVM obtains the highest prediction accuracies of 64.25% and 65.75% without and with feature selection, respectively, and the best AUCs of 0.6495 and 0.6501, respectively. (4) Conclusions: This study used feature selection and optimized parameter combinations to increase the prediction performance to around 65%, surpassing the prediction accuracies reported in state-of-the-art works in the literature. Full article
(This article belongs to the Topic Machine and Deep Learning)
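The AUC figures quoted above have a simple rank interpretation; a stdlib sketch of that computation:

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive example
    receives a higher score than a randomly chosen negative one
    (ties count half) -- the quantity reported as ~0.65 above."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if ps > ns else 0.5 if ps == ns else 0.0
               for ps in pos for ns in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives AUC = 1.0; an uninformative one gives 0.5.
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])
```

An AUC around 0.65 thus means a won game outranks a lost game about 65% of the time under the model's scores.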

Article
A Hybrid Domain Image Encryption Algorithm Based on Improved Henon Map
Entropy 2022, 24(2), 287; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020287 - 17 Feb 2022
Cited by 1 | Viewed by 487
Abstract
A hybrid domain image encryption algorithm is developed by integrating an improved Henon map, the integer wavelet transform (IWT), bit-plane decomposition, and deoxyribonucleic acid (DNA) sequence operations. First, we improve the classical two-dimensional Henon map. The improved Henon map is called 2D-ICHM, and its chaotic performance is analyzed. Compared with some existing chaotic maps, 2D-ICHM has a larger parameter space, a continuous chaotic range, and more complex dynamic behavior. Second, an image encryption structure based on diffusion–scrambling–diffusion and spatial domain–frequency domain–spatial domain is proposed, which we call the double sandwich structure. In the encryption process, the diffusion and scrambling operations are performed in the spatial and frequency domains, respectively. In addition, the initial values and system parameters of the 2D-ICHM are obtained from the secure hash algorithm-512 (SHA-512) hash value of the plain image together with the given parameters. Consequently, the proposed algorithm is highly sensitive to the plain image. Finally, simulation experiments and security analysis show that the proposed algorithm has a high level of security and strong robustness against various cryptanalytic attacks. Full article
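The abstract does not give the 2D-ICHM equations, so as a hedged sketch here is the classical Henon map it improves upon, seeded from a SHA-512 hash of the plain image as the abstract describes (the parameters `a`, `b` and the byte-to-seed mapping are illustrative assumptions):

```python
import hashlib

def seed_from_image(img_bytes):
    """Map the SHA-512 hash of the plain image to initial values in
    [0, 1), making the keystream sensitive to the plaintext."""
    h = hashlib.sha512(img_bytes).digest()
    return (int.from_bytes(h[:8], 'big') / 2**64,
            int.from_bytes(h[8:16], 'big') / 2**64)

def henon_orbit(x, y, n, a=1.4, b=0.3):
    """Classical 2D Henon map; the paper's 2D-ICHM modifies this
    recurrence to widen the chaotic range."""
    orbit = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        orbit.append((x, y))
    return orbit

x0, y0 = seed_from_image(b'example plain image bytes')
# Sensitivity to initial conditions: a 1e-8 perturbation separates
# the trajectories within a few dozen iterations.
orbit1 = henon_orbit(0.1, 0.1, 60)
orbit2 = henon_orbit(0.1 + 1e-8, 0.1, 60)
```

This sensitivity is what makes the scheme plaintext-dependent: any change to the image changes the hash, hence the seeds, hence the entire keystream.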

Article
Missing Value Imputation Method for Multiclass Matrix Data Based on Closed Itemset
Entropy 2022, 24(2), 286; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020286 - 16 Feb 2022
Cited by 1 | Viewed by 648
Abstract
Handling missing values in matrix data is an important step in data analysis. To date, many methods to estimate missing values based on data pattern similarity have been proposed. Most previously proposed methods perform missing value imputation based on data trends over the entire feature space. However, individual missing values are likely to show similarity to data patterns in local feature space. In addition, most existing methods focus on single class data, while multiclass analysis is frequently required in various fields. Missing value imputation for multiclass data must consider the characteristics of each class. In this paper, we propose two methods based on closed itemsets, CIimpute and ICIimpute, to achieve missing value imputation using local feature space for multiclass matrix data. CIimpute estimates missing values using closed itemsets extracted from each class. ICIimpute is an improved method of CIimpute in which an attribute reduction process is introduced. Experimental results demonstrate that attribute reduction considerably reduces computational time and improves imputation accuracy. Furthermore, it is shown that, compared to existing methods, ICIimpute provides superior imputation accuracy but requires more computational time. Full article
(This article belongs to the Special Issue Advances in Information Sciences and Applications)

Article
BMEFIQA: Blind Quality Assessment of Multi-Exposure Fused Images Based on Several Characteristics
Entropy 2022, 24(2), 285; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020285 - 16 Feb 2022
Viewed by 528
Abstract
A multi-exposure fused (MEF) image is generated from multiple images with different exposure levels, but the fusion process inevitably introduces various distortions. Therefore, it is worth discussing how to evaluate the visual quality of MEF images. This paper proposes a new blind quality assessment method for MEF images that considers their characteristics, dubbed BMEFIQA. More specifically, multiple features that represent different image attributes are extracted to perceive the various distortions of MEF images. Among them, structural, naturalness, and colorfulness features are utilized to describe the phenomena of structure destruction, unnatural presentation, and color distortion, respectively. All the captured features constitute a final feature vector for quality regression via random forest. Experimental results on a publicly available database show the superiority of the proposed BMEFIQA method over several blind quality assessment methods. Full article

Brief Report
A Confidential QR Code Approach with Higher Information Privacy
Entropy 2022, 24(2), 284; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020284 - 16 Feb 2022
Cited by 1 | Viewed by 764
Abstract
Nowadays, barcode decoders on mobile phones can extract the data content of QR codes. However, this convenience raises security concerns when QR codes are used to transmit confidential information, such as e-tickets, coupons, and other private data. Moreover, current secret hiding techniques are unsuitable for QR code applications, since QR codes are module-oriented, which differs from the pixel-oriented hiding manner. In this article, we propose an algorithm that conceals confidential information by changing the modules of the QR code. This new scheme designs triple module groups based on the concept of the error correction capability. Additionally, the scheme can conceal two secret bits by changing only one module, so the amount of hidden confidential information can be twice the original amount. As a result, the ordinary data content (such as a URL) can be extracted correctly from the generated QR code by any barcode decoder, which does not affect the readability of scanning. Furthermore, only authorized users with the secret key can further extract the concealed confidential information. The designed scheme can provide secure and reliable applications for the QR system. Full article
(This article belongs to the Special Issue Information Hiding and Coding Theory)

Article
Learning Competitive Swarm Optimization
Entropy 2022, 24(2), 283; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020283 - 16 Feb 2022
Viewed by 529
Abstract
Particle swarm optimization (PSO) is a popular method widely used in solving different optimization problems. Unfortunately, in the case of complex multidimensional problems, PSO encounters some troubles associated with the excessive loss of population diversity and exploration ability. This leads to a deterioration in the effectiveness of the method and premature convergence. In order to prevent these inconveniences, in this paper, a learning competitive swarm optimization algorithm (LCSO) based on the particle swarm optimization method and the competition mechanism is proposed. In the first phase of LCSO, the swarm is divided into sub-swarms, each of which can work in parallel. In each sub-swarm, particles participate in a tournament, and the participants update their knowledge by learning from their competitors. In the second phase, information is exchanged between the sub-swarms. The new algorithm was examined on a set of test functions. To evaluate the effectiveness of the proposed LCSO, the test results were compared with those achieved by the competitive swarm optimizer (CSO), the comprehensive learning particle swarm optimizer (CLPSO), PSO, the fully informed particle swarm (FIPS), the covariance matrix adaptation evolution strategy (CMA-ES) and heterogeneous comprehensive learning particle swarm optimization (HCLPSO). The experimental results indicate that the proposed approach enhances the entropy of the particle swarm and improves the search process. Moreover, the LCSO algorithm is statistically significantly more efficient than the other tested methods. Full article
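A minimal sketch of the pairwise-tournament competition mechanism (CSO-style; the update coefficients, the sphere objective and the single-swarm setting are illustrative assumptions and omit LCSO's sub-swarm phase and information exchange):

```python
import random

def sphere(x):
    """Toy objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def cso_iteration(swarm, vel, fit=sphere, phi=0.1):
    """One round of pairwise tournaments: the loser of each pair learns
    from the winner (and from the swarm mean); winners stay unchanged."""
    dim = len(swarm[0])
    mean = [sum(p[d] for p in swarm) / len(swarm) for d in range(dim)]
    idx = list(range(len(swarm)))
    random.shuffle(idx)
    for i, j in zip(idx[::2], idx[1::2]):
        w, l = (i, j) if fit(swarm[i]) <= fit(swarm[j]) else (j, i)
        for d in range(dim):
            r1, r2, r3 = random.random(), random.random(), random.random()
            vel[l][d] = (r1 * vel[l][d]
                         + r2 * (swarm[w][d] - swarm[l][d])
                         + phi * r3 * (mean[d] - swarm[l][d]))
            swarm[l][d] += vel[l][d]

random.seed(0)
dim, n = 5, 40
swarm = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
start = min(sphere(p) for p in swarm)
for _ in range(200):
    cso_iteration(swarm, vel)
best = min(sphere(p) for p in swarm)
```

Because tournament winners never move, the swarm's best fitness is non-increasing, while losers learning from winners keeps diversity higher than a single global-best attractor would.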

Article
A Bounded Measure for Estimating the Benefit of Visualization (Part II): Case Studies and Empirical Evaluation
Entropy 2022, 24(2), 282; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020282 - 16 Feb 2022
Cited by 1 | Viewed by 561
Abstract
Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, hence, undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently-proposed information-theoretic measure for analyzing the cost–benefit ratio of visualization processes can explain such usefulness experienced in practice and postulate that the viewers’ knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers’ knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost–benefit measure consists of an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost–benefit measure. Part I of the paper provided a theoretical discourse about the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost–benefit of visualization processes as well as the impact of human knowledge in the processes. The real world data about visualization provides practical evidence for evaluating the usability and intuitiveness of the candidate measures. 
The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost–benefit measure. Full article
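As a generic illustration of why an unbounded divergence term is hard to interpret (KL and Jensen-Shannon here merely stand in for the candidate measures; the paper evaluates its own set of nine):

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence (bits): unbounded as q_i -> 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence (bits): bounded above by 1."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5]
q = [1.0 - 1e-9, 1e-9]
unbounded = kl(p, q)   # grows without limit as q[1] -> 0
bounded = jsd(p, q)    # stays within [0, 1] bits
```

A bounded measure lets numerical distortion values be compared across visualizations on a fixed scale, which is exactly the usability property the case studies evaluate.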

Article
Analysis of Multi-Path Fading and the Doppler Effect for Reconfigurable-Intelligent-Surface-Assisted Wireless Networks
Entropy 2022, 24(2), 281; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020281 - 16 Feb 2022
Viewed by 543
Abstract
The randomness of wireless channels restricts the performance achievable in wireless networks. As a novel solution to this problem, the reconfigurable intelligent surface (RIS) was introduced to reshape wireless physical environments. First, the multi-path and Doppler effects are discussed for the case in which a plain reflector reflects the incident signal for wireless communication. Subsequently, the results for the transmission signal are analyzed when the reflector is coated with an RIS. Specifically, the multi-path fading stemming from the movement of the mobile transmitter is eliminated or mitigated by utilizing an RIS. Meanwhile, the Doppler effect is also reduced, restraining rapid fluctuations in the transmission signal, by tuning the RIS in real time. The simulation results demonstrate that the magnitude and spectrum of the received signal can be regulated by an RIS. The multi-path fading and Doppler effect can be effectively mitigated when the reflector is coated with an RIS in wireless networks. Full article
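The Doppler shift that an RIS must compensate follows directly from the mobile's speed and carrier frequency; a stdlib sketch (the 28 GHz carrier and 30 m/s speed are illustrative values, not taken from the paper):

```python
import math

C = 3.0e8  # speed of light, m/s

def doppler_shift(v, fc, theta):
    """Doppler shift f_d = (v * fc / c) * cos(theta), where theta is the
    angle between the motion and the signal's arrival direction."""
    return v * fc / C * math.cos(theta)

# A transmitter moving at 30 m/s on a 28 GHz mmWave carrier:
fd_head_on = doppler_shift(30.0, 28e9, 0.0)        # worst case, ~2.8 kHz
fd_side = doppler_shift(30.0, 28e9, math.pi / 2)   # perpendicular path
```

Because the shift scales with the carrier frequency, mmWave links see far larger Doppler spreads than sub-6 GHz links, which is why real-time RIS retuning matters.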

Article
Frequency, Informativity and Word Length: Insights from Typologically Diverse Corpora
Entropy 2022, 24(2), 280; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020280 - 16 Feb 2022
Viewed by 1035
Abstract
Zipf’s law of abbreviation, which posits a negative correlation between word frequency and length, is one of the most famous and robust cross-linguistic generalizations. At the same time, it has been shown that contextual informativity (average surprisal given the previous context) is more strongly correlated with word length, although this tendency is not observed consistently and depends on several methodological choices. The present study examines a more diverse sample of languages than previous studies (Arabic, Finnish, Hungarian, Indonesian, Russian, Spanish and Turkish). I use large web-based corpora from the Leipzig Corpora Collection to estimate word lengths in UTF-8 characters and in phonemes (for some of the languages), as well as word frequency, informativity given the previous word and informativity given the next word, applying different methods of bigram processing. The results show different correlations between word length and the corpus-based measures for different languages. I argue that these differences can be explained by the properties of noun phrases in a language, most importantly by the order of heads and modifiers and their relative morphological complexity, as well as by orthographic conventions. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches to Explaining Linguistic Structure)
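A toy stdlib sketch of the frequency-length correlation that the law of abbreviation predicts (the study uses large Leipzig corpora and also surprisal-based measures; the miniature corpus here is fabricated so that frequent words are short):

```python
from collections import Counter
from math import log, sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy corpus in which frequent words are short, as the law predicts:
corpus = ("a a a a a a the the the the cat cat sat mat "
          "elephant hippopotamus").split()
freq = Counter(corpus)
types = sorted(freq)
r = pearson([log(freq[w]) for w in types], [len(w) for w in types])
```

The study's point is that the strength (and sometimes direction) of this correlation varies by language once informativity is measured instead of raw frequency.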

Article
Reinforcement Learning-Based Reactive Obstacle Avoidance Method for Redundant Manipulators
Entropy 2022, 24(2), 279; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020279 - 15 Feb 2022
Cited by 1 | Viewed by 597
Abstract
Redundant manipulators are widely used in fields such as human-robot collaboration due to their good flexibility. To ensure efficiency and safety, the manipulator is required to avoid obstacles while tracking a desired trajectory in many tasks. Conventional methods for obstacle avoidance of redundant manipulators may encounter joint singularity or exceed joint position limits while tracking the desired trajectory. By integrating deep reinforcement learning (DRL) into the gradient projection method, a reactive obstacle avoidance method for redundant manipulators is proposed. We establish a general DRL framework for obstacle avoidance, and a reinforcement learning agent is then applied to learn motion in the null space of the redundant manipulator's Jacobian matrix. The reward function of reinforcement learning is redesigned to handle multiple constraints automatically. Specifically, the manipulability index is introduced into the reward function, so the manipulator can maintain high manipulability and avoid joint singularity while executing tasks. To show the effectiveness of the proposed method, a simulation of a 4-degree-of-freedom planar manipulator is given. Compared with the gradient projection method, the proposed method achieves a higher obstacle avoidance success rate, higher average manipulability, and better time efficiency. Full article
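The null-space learning described above builds on the gradient projection method's projector: joint motion projected into the Jacobian null space leaves the end-effector velocity unchanged. A minimal NumPy sketch (the Jacobian values are made up for illustration):

```python
import numpy as np

# Illustrative 2x4 Jacobian of a planar 4-DOF arm at some configuration.
J = np.array([[0.5, -1.2, 0.3, 0.8],
              [1.1, 0.4, -0.7, 0.2]])

# Null-space projector: P = I - pinv(J) @ J. Any joint velocity P @ z
# produces zero end-effector velocity, freeing z for secondary goals
# such as obstacle avoidance or maximizing manipulability.
P = np.eye(4) - np.linalg.pinv(J) @ J

z = np.array([1.0, -2.0, 0.5, 3.0])  # arbitrary secondary-objective term
qdot_null = P @ z
```

In the paper, the RL agent effectively learns which z to apply, instead of hand-crafting the secondary objective gradient.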

Article
Approximate Entropy in Canonical and Non-Canonical Fiction
Entropy 2022, 24(2), 278; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020278 - 15 Feb 2022
Viewed by 714
Abstract
Computational textual aesthetics aims at studying observable differences between aesthetic categories of text. We use Approximate Entropy to measure the (un)predictability in two aesthetic text categories, i.e., canonical fiction (‘classics’) and non-canonical fiction (with lower prestige). Approximate Entropy is determined for series derived from sentence-length values and the distribution of part-of-speech-tags in windows of texts. For comparison, we also include a sample of non-fictional texts. Moreover, we use Shannon Entropy to estimate degrees of (un)predictability due to frequency distributions in the entire text. Our results show that the Approximate Entropy values can better differentiate canonical from non-canonical texts compared with Shannon Entropy, which is not true for the classification of fictional vs. expository prose. Canonical and non-canonical texts thus differ in sequential structure, while inter-genre differences are a matter of the overall distribution of local frequencies. We conclude that canonical fictional texts exhibit a higher degree of (sequential) unpredictability compared with non-canonical texts, corresponding to the popular assumption that they are more ‘demanding’ and ‘richer’. In using Approximate Entropy, we propose a new method for text classification in the context of computational textual aesthetics. Full article
(This article belongs to the Section Signal and Data Analysis)
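A stdlib sketch of Approximate Entropy with self-matches included, as in Pincus's original definition (the paper applies it to sentence-length and POS-tag series; the absolute tolerance r used here is a simplifying assumption, where in practice r is often set relative to the series' standard deviation):

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """ApEn = phi(m) - phi(m+1), where phi(m) averages the log of how
    often each length-m template recurs within tolerance r."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        total = 0.0
        for t in templates:
            c = sum(1 for u in templates
                    if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(c / n)
        return total / n
    return phi(m) - phi(m + 1)

flat = approx_entropy([1.0] * 30)        # fully predictable: ApEn = 0
alt = approx_entropy([0.0, 2.0] * 15)    # strictly periodic: ApEn near 0
```

Higher ApEn means that knowing a length-m window helps less in predicting the next value, which is the sense in which canonical texts are found to be less predictable.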

Article
Emergence of Objectivity for Quantum Many-Body Systems
Entropy 2022, 24(2), 277; https://0-doi-org.brum.beds.ac.uk/10.3390/e24020277 - 14 Feb 2022
Viewed by 609
Abstract
We examine the emergence of objectivity for quantum many-body systems in a setting without an environment to decohere the system’s state, but where observers can only access small fragments of the whole system. We extend the result of Reidel (2017) to the case where the system is in a mixed state, measurements are performed through POVMs, and imprints of the outcomes are imperfect. We introduce a new condition on states and measurements to recover full classicality for any number of observers. We further show that evolutions of quantum many-body systems can be expected to yield states that satisfy this condition whenever the corresponding measurement outcomes are redundant. Full article
(This article belongs to the Special Issue Quantum Darwinism and Friends)