Entropy doi: 10.3390/e24050721

Authors: Yiwei Wang Chun Liu

In this paper, we use several examples to summarize some recent advances related to the energetic variational approach (EnVarA), a general variational framework for building thermodynamically consistent models of complex fluids. Particular focus is placed on how to model systems involving chemo-mechanical couplings and non-isothermal effects.

Entropy doi: 10.3390/e24050720

Authors: Lei Xie Jiahui Zhu Yuqing Jia Huifang Chen

In order to meet the requirements of communication security and concealment, as well as to protect marine life, bionic covert communication has become a hot research topic in underwater acoustic communication (UAC). In this paper, we propose a bionic covert UAC (BC-UAC) method based on the time–frequency contour (TFC) of the bottlenose dolphin whistle, which overcomes the safety problem of traditional low signal-to-noise ratio (SNR) covert communication and causes any detected communication signal to be dismissed as marine biological noise. In the proposed BC-UAC method, the TFC of the bottlenose dolphin whistle is segmented to improve the transmission rate. Two BC-UAC schemes based on the segmented TFC of the whistle are addressed: one using the whistle signal with time delay (BC-UAC-TD) and one using the whistle signal with frequency shift (BC-UAC-FS). The original whistle signal is used as a synchronization signal. Moreover, the virtual time reversal mirror (VTRM) technique is adopted to equalize the channel and mitigate the multipath effect. The performance of the proposed BC-UAC method, in terms of the Pearson correlation coefficient (PCC) and bit error rate (BER), is evaluated under simulated and measured underwater channels. Numerical results show that the proposed BC-UAC method performs well in covertness and reliability. Furthermore, the covertness of the bionic modulated signal in BC-UAC-TD is better than that of BC-UAC-FS, whereas the reliability of BC-UAC-FS is better than that of BC-UAC-TD.

Entropy doi: 10.3390/e24050719

Authors: Leonardo dos Santos Lima

We study the nonlinear fractional stochastic differential equation approach, with Hurst parameter H within the interval H∈(0,1), to the time evolution of the number of people infected by the coronavirus in countries where the number of cases is large, such as Brazil. The daily rises and falls of novel cases, i.e., the fluctuations in the official data, are treated as a random term in the stochastic differential equation for fractional Brownian motion. The projection of novel cases into the future is treated as the quadratic mean deviation in the official data of daily novel cases from the beginning of the pandemic up to the present. Moreover, rescaled range analysis (R/S) is employed to determine the Hurst index for the time series of novel cases, and some statistical tests are performed with the aim of determining the shape of the probability density of novel cases in the future.
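The rescaled range (R/S) analysis mentioned in the abstract is straightforward to sketch. The following is a minimal pure-Python illustration (not the author's code), assuming the input is a plain list of daily fluctuations; the Hurst estimate is the slope of log(R/S) against log(n):

```python
import math
import random

def rs_hurst(series, min_chunk=8):
    """Estimate the Hurst exponent via rescaled range (R/S) analysis.

    For each window size n, the series is split into chunks; for each
    chunk we compute the range of the cumulative mean-adjusted sum
    divided by the standard deviation, then fit log(R/S) vs log(n).
    """
    N = len(series)
    xs, ys = [], []
    n = min_chunk
    while n <= N // 2:
        rs_chunk = []
        for start in range(0, N - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = [x - mean for x in chunk]
            # cumulative deviation from the chunk mean
            cum, c = [], 0.0
            for d in dev:
                c += d
                cum.append(c)
            r = max(cum) - min(cum)
            s = math.sqrt(sum(d * d for d in dev) / n)
            if s > 0:
                rs_chunk.append(r / s)
        if rs_chunk:
            xs.append(math.log(n))
            ys.append(math.log(sum(rs_chunk) / len(rs_chunk)))
        n *= 2
    # least-squares slope of log(R/S) vs log(n) is the Hurst estimate
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

random.seed(1)
white_noise = [random.gauss(0, 1) for _ in range(4096)]
H = rs_hurst(white_noise)  # uncorrelated noise should give H near 0.5
```

For persistent (trending) series the estimate rises toward 1; for anti-persistent series it falls toward 0, which is what makes H a useful diagnostic for case-count time series.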

Entropy doi: 10.3390/e24050718

Authors: Sebastian Raubitzek Thomas Neubauer Jan Friedrich Andreas Rauber

We present a novel method for interpolating univariate time series data. The proposed method combines multi-point fractional Brownian bridges, a genetic algorithm, and Takens' theorem for reconstructing a phase space from univariate time series data. The basic idea is to first generate a population of stochastically interpolated time series and, secondly, to use a genetic algorithm to find the members of the population which generate the smoothest reconstructed phase space trajectory. A smooth trajectory curve is hereby defined as one with a low variance of second derivatives along the curve. For simplicity, we refer to the developed method as PhaSpaSto-interpolation, which is an abbreviation for phase-space-trajectory-smoothing stochastic interpolation. The proposed approach is tested and validated with a univariate time series of the Lorenz system and five non-model data sets, and compared to cubic spline and linear interpolation. We find that the smoothness criterion guarantees low errors on known model and non-model data. Finally, we interpolate the discussed non-model data sets and show the corresponding improved phase space portraits. The proposed method is useful for interpolating sparsely sampled time series data sets for, e.g., machine learning, regression analysis, or time series prediction. Furthermore, the results suggest that the variance of second derivatives along a given phase space trajectory is a valuable tool for the phase space analysis of non-model time series data, and we expect it to be useful for future research.
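The smoothness criterion described in the abstract, the variance of second derivatives along the delay-embedded trajectory, can be illustrated with a short sketch (helper names are hypothetical, not the published implementation):

```python
def delay_embed(series, dim=3, tau=1):
    """Takens' delay embedding: map a univariate series to points
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}) in phase space."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[t + k * tau] for k in range(dim)) for t in range(n)]

def trajectory_roughness(points):
    """Variance of the (vector) second differences along the trajectory;
    a genetic algorithm would minimise this quantity over candidates."""
    sec = []
    for a, b, c in zip(points, points[1:], points[2:]):
        # discrete second derivative, componentwise
        sec.append([z - 2 * y + x for x, y, z in zip(a, b, c)])
    flat = [v for vec in sec for v in vec]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

smooth = [0.01 * t * t for t in range(50)]  # parabola: constant 2nd derivative
jagged = [(-1) ** t for t in range(50)]     # alternating: wildly varying
r_smooth = trajectory_roughness(delay_embed(smooth))
r_jagged = trajectory_roughness(delay_embed(jagged))
```

A candidate interpolation with the lower roughness score yields the smoother phase space portrait, which is the fitness the genetic-algorithm step optimises.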

Entropy doi: 10.3390/e24050717

Authors: Soham Jariwala Norman J. Wagner Antony N. Beris

In this work, we outline the development of a thermodynamically consistent microscopic model for a suspension of aggregating particles under arbitrary, inertia-less deformation. As a proof of concept, we show how the combination of a simplified population-balance-based description of the aggregating particle microstructure with the single-generator bracket description of nonequilibrium thermodynamics leads naturally to the formulation of the model equations. Notable elements of the model are a lognormal distribution for the aggregate size population, a population-balance-based model of the aggregation and breakup processes, and a conformation-tensor-based viscoelastic description of the elastic network of the particle aggregates. The resulting example model is evaluated in steady and transient shear and elongational flows and is shown to offer predictions consistent with the observed rheological behavior of typical systems of aggregating particles. Additionally, an expression for the total entropy production is provided, which allows one to judge the thermodynamic consistency of the model and to evaluate the importance of the various dissipative phenomena involved in given flow processes.

Entropy doi: 10.3390/e24050716

Authors: Jiajia Zhou Masao Doi

Using the Onsager variational principle, we study the dynamic coupling between the stress and the composition in a polymer solution. In the original derivation of the two-fluid model of Doi and Onuki, the polymer stress was introduced a priori; therefore, a constitutive equation was required to close the equations. Based on our previous study of viscoelastic fluids with homogeneous composition, we instead start with a dumbbell model for the polymer and derive all dynamic equations using the Onsager variational principle.

Entropy doi: 10.3390/e24050715

Authors: Konstantin Beyer Kimmo Luoma Tim Lenz Walter T. Strunz

We investigate a composite quantum collision model with measurements on the memory part, which effectively probe the system. The framework allows us to adjust the measurement strength, thereby tuning the dynamical map of the system. For a two-qubit setup with a symmetric and informationally complete measurement on the memory, we study the divisibility of the resulting dynamics as a function of the measurement strength. The measurements give rise to quantum trajectories of the system, and we show that the average asymptotic purity depends on the specific form of the measurement. With the help of numerical simulations, we demonstrate that the differing performance of the measurements is generic and holds for almost all interaction gates between the system and the memory in the composite collision model. The discrete model is then extended to a time-continuous limit.

Entropy doi: 10.3390/e24050714

Authors: Alexander Ziller Tamara T. Mueller Rickmer Braren Daniel Rueckert Georgios Kaissis

The increasing prevalence of large-scale data collection in modern society represents a potential threat to individual privacy. Addressing this threat, for example through privacy-enhancing technologies (PETs), requires a rigorous definition of what exactly is being protected, that is, of privacy itself. In this work, we formulate an axiomatic definition of privacy based on quantifiable and irreducible information flows. Our definition synthesizes prior work from the domain of social science with a contemporary understanding of PETs such as differential privacy (DP). Our work highlights the fact that the inevitable difficulties of protecting privacy in practice are fundamentally information-theoretic. Moreover, it enables quantitative reasoning about PETs based on what they are protecting, thus fostering objective policy discourse about their societal implementation.

Entropy doi: 10.3390/e24050713

Authors: Szilárd Nemes Andreas Gustavsson Alexandra Jauhiainen

Restricted Mean Survival Time (RMST), the average time without an event of interest up to a specific time point, is a model-free, easy-to-interpret statistic. The heavy reliance on non-parametric or semi-parametric methods in survival analysis has drawn criticism due to the loss of efficiency compared to parametric methods. The latter assume that the parametric family used is the true one; otherwise, the gain in efficiency might be lost to interpretability problems due to bias. The Focused Information Criterion (FIC) considers the trade-off between bias and variance and offers an objective framework for the selection of the optimal non-parametric or parametric estimator of scalar statistics. Herein, we present the FIC framework for the selection of the RMST estimator with the best bias-variance trade-off. The aim is not to identify the true underlying distribution that generated the data, but to identify families of distributions that best approximate this process. Through simulation studies and theoretical reasoning, we highlight the effect of censoring on the performance of FIC. Applicability is illustrated with a real-life example. Censoring has a non-linear effect on FIC's performance that can be traced back to the asymptotic relative efficiency of the estimators. FIC's performance is sample-size dependent; however, with censoring percentages common in practical applications, FIC selects the true model with a nominal probability (0.843) even with small or moderate sample sizes.
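RMST itself, the area under the survival curve up to a horizon tau, is easy to compute from a Kaplan-Meier estimate. A minimal non-parametric sketch (illustrative only, separate from the FIC machinery of the paper):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; returns (time, S(t)) steps.
    `events` holds 1 for an observed event, 0 for censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, s = [], 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            n += 1
            d += data[i][1]
            i += 1
        if d:
            s *= 1 - d / n_at_risk
            surv.append((t, s))
        n_at_risk -= n
    return surv

def rmst(times, events, tau):
    """Restricted mean survival time: area under S(t) from 0 to tau."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in kaplan_meier(times, events):
        if t > tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)
    return area

# with no censoring and the horizon at the last event, RMST equals the mean
val = rmst([1, 2, 3, 4], [1, 1, 1, 1], 4)
```

A parametric estimator would instead integrate a fitted survival function up to tau; FIC arbitrates between the two by the estimated bias-variance trade-off.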

Entropy doi: 10.3390/e24050712

Authors: Igal Sason

Data science, information theory, probability theory, statistical learning, statistical signal processing, and other related disciplines greatly benefit from non-negative measures of dissimilarity between pairs of probability measures [...]

Entropy doi: 10.3390/e24050711

Authors: Dawei Li Zhi-Ping Liu

The accurate prediction of gross box-office markets is of great benefit for investment and management in the movie industry. In this work, we propose a machine learning-based method for predicting the movie box-office revenue of a country, based on empirical comparisons of eight methods with diverse combinations of economic factors. Specifically, in time-series forecasting experiments from 2013 to 2016, we achieved relative root mean squared errors of 0.056 in the US and 0.183 in China for the two case-study movie markets. We conclude that the support-vector-machine-based method using gross domestic product reaches the best prediction performance while relying only on easily available economic factors. The computational experiments and comparison studies provide evidence for the effectiveness and advantages of our proposed prediction strategy. In the validation of the predicted total box-office markets in 2017, the error rates were 0.044 in the US and 0.066 in China. In the consecutive predictions of nationwide box-office markets in 2018 and 2019, the mean relative absolute percentage errors achieved were 0.041 and 0.035 in the US and China, respectively. The precise predictions, on both the training and validation data, demonstrate the efficiency and versatility of our proposed method.

Entropy doi: 10.3390/e24050710

Authors: Thomas Doctor Olaf Witkowski Elizaveta Solomonova Bill Duane Michael Levin

Intelligence is a central feature of human beings’ primary and interpersonal experience. Understanding how intelligence originated and scaled during evolution is a key challenge for modern biology. Some of the most important approaches to understanding intelligence are the ongoing efforts to build new intelligences in computer science (AI) and bioengineering. However, progress has been stymied by a lack of multidisciplinary consensus on what is central about intelligence regardless of the details of its material composition or origin (evolved vs. engineered). We show that Buddhist concepts offer a unique perspective and facilitate a consilience of biology, cognitive science, and computer science toward understanding intelligence in truly diverse embodiments. In coming decades, chimeric and bioengineering technologies will produce a wide variety of novel beings that look nothing like familiar natural life forms; how shall we gauge their moral responsibility and our own moral obligations toward them, without the familiar touchstones of standard evolved forms as comparison? Such decisions cannot be based on what the agent is made of or how much design vs. natural evolution was involved in their origin. We propose that the scope of our potential relationship with, and so also our moral duty toward, any being can be considered in the light of Care—a robust, practical, and dynamic lynchpin that formalizes the concepts of goal-directedness, stress, and the scaling of intelligence; it provides a rubric that, unlike other current concepts, is likely to not only survive but thrive in the coming advances of AI and bioengineering. We review relevant concepts in basal cognition and Buddhist thought, focusing on the size of an agent’s goal space (its cognitive light cone) as an invariant that tightly links intelligence and compassion. Implications range across interpersonal psychology, regenerative medicine, and machine learning. 
The Bodhisattva’s vow (“for the sake of all sentient life, I shall achieve awakening”) is a practical design principle for advancing intelligence in our novel creations and in ourselves.

Entropy doi: 10.3390/e24050709

Authors: Richard Le Blanc

The univariate noncentral distributions can be derived by multiplying their central distributions with translation factors. When constructed in terms of translated uniform distributions on unit-radius hyperspheres, these translation factors become generating functions for classical families of orthogonal polynomials. The ultraspherical noncentral t, normal N, F, and χ2 distributions are thus found to be associated with the Gegenbauer, Hermite, Jacobi, and Laguerre polynomial families, respectively, with the corresponding central distributions standing for the polynomial family-defining weights. Obtained through an unconstrained minimization of the Gibbs potential, Jaynes' maximal entropy priors are formally expressed in terms of the empirical densities' entropic convex duals. Expanding these duals on orthogonal polynomial bases allows for the expedient determination of the Jaynes–Gibbs priors. Invoking the moment problem and the duality principle, modelization can be reduced to the direct determination of the prior moments in parametric space in terms of the Bayes factor's orthogonal polynomial expansion coefficients in random variable space. Genomics and geophysics examples are provided.

Entropy doi: 10.3390/e24050708

Authors: Tiancheng Wang Tsuyoshi Sasaki Usuda

We propose an amplitude-shift-keying-type asymmetric quantum communication (AQC) system that uses an entangled state. As a first step toward the development of this system, we evaluated the communication performance of the proposed receiver when applied to the AQC system using a two-mode squeezed vacuum state (TSVS), the maximum quasi-Bell state, and the non-maximum quasi-Bell state, as well as to an asymmetric classical communication (ACC) system using the coherent state. Specifically, we derived an analytical expression for the error probability of the AQC system using the quasi-Bell state. Comparison of the error probabilities of the ACC system and the AQC systems using the TSVS and the quasi-Bell state shows that the AQC system using the quasi-Bell state offers a clear performance advantage under specific conditions. Additionally, we clarified that there are cases where the universal lower bound on the error probability for the AQC system is almost achieved when using the quasi-Bell state, unlike the case in which the TSVS is used.

Entropy doi: 10.3390/e24050704

Authors: Koduru Hajarathaiah Murali Krishna Enduri Satish Anamalamudi Tatireddy Subba Reddy Srilatha Tokala

Computing influential nodes receives much attention from researchers studying information spreading in complex networks. It has vast applications, such as viral marketing, social leader creation, rumor control, and opinion monitoring. The information-spreading ability of influential nodes is greater than that of other nodes in the network. Several researchers have proposed centrality measures to compute the influential nodes in a complex network, such as degree, betweenness, closeness, semi-local centralities, and PageRank. These centrality methods are defined based on the local and/or global information of nodes in the network. However, due to their high time complexity, centrality measures based on the global information of nodes are unsuitable for large-scale networks. Very few centrality measures exist that are based on both the attributes between nodes and the structure of the network. We propose the nearest neighborhood trust PageRank (NTPR), based on the structural attributes of the neighbors and nearest neighbors of each node. We define the measure based on the degree ratio, the similarity between nodes, and the trust values of neighbors and nearest neighbors. We computed the influential nodes in various real-world networks using the proposed centrality method, and found the maximum influence achieved by these nodes under the SIR and independent cascade models. We also compare the maximum influence obtained with our centrality measure against that of existing basic centrality measures.
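The SIR-based influence evaluation used for such comparisons can be sketched as a toy Monte Carlo simulation on an adjacency-list graph (parameter values and the star-graph example are illustrative, not from the paper):

```python
import random

def sir_spread(adj, seeds, beta=0.3, trials=200):
    """Average outbreak size of an SIR epidemic seeded at `seeds`:
    each infected node infects each susceptible neighbour with
    probability `beta`, then recovers. Larger averages indicate
    more influential seed nodes."""
    rng = random.Random(0)  # fixed seed for reproducibility
    total = 0
    for _ in range(trials):
        infected, recovered = set(seeds), set()
        while infected:
            new = set()
            for u in infected:
                for v in adj[u]:
                    if v not in infected and v not in recovered and rng.random() < beta:
                        new.add(v)
            recovered |= infected
            infected = new
        total += len(recovered)
    return total / trials

# toy star graph: the hub (node 0) should out-spread any leaf
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
hub_score = sir_spread(star, [0])
leaf_score = sir_spread(star, [1])
```

Ranking nodes by a centrality measure and then scoring the top-ranked nodes with such a spread simulation is the standard way to compare centrality measures' ability to find influential spreaders.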

Entropy doi: 10.3390/e24050707

Authors: Ping Xie Qingya Chang Yuanyuan Zhang Xiaojiao Dong Jinxu Yu Xiaoling Chen

Muscle synergy analysis is a kind of modularized decomposition of the muscles engaged during exercise, controlled by the central nervous system (CNS). It can not only extract the synergistic muscles in exercise, but also obtain the activation states of muscles, reflecting the coordination and control relationships between them. However, previous studies have mainly focused on time-domain synergy without considering the frequency-specific characteristics within synergy structures. Therefore, this study proposes a novel method, named time-frequency non-negative matrix factorization (TF-NMF), to explore the time-varying regularity of muscle synergy characteristics of multi-channel surface electromyogram (sEMG) signals at different frequency bands. In this method, the wavelet packet transform (WPT) is used to transform the time-scale signals into the time-frequency dimension. Then, NMF is applied in each time-frequency window to extract the synergy modules. Finally, this method is used to analyze the sEMG signals recorded from 8 muscles during the conversion between wrist flexion (WF stage) and wrist extension (WE stage) movements in 12 healthy people. The experimental results show that the number of synergy modules during the transition from wrist flexion to wrist extension (Motion Conversion, MC stage) is larger than that in the WF stage and the WE stage. Furthermore, the number of flexor and extensor muscle synergies in the frequency band of 0–125 Hz during the MC stage is larger than that in the frequency band of 125–250 Hz. Further analysis shows that the flexion muscle synergies mostly exist in the frequency band of 140.625–156.25 Hz during the WF stage, and the extension muscle synergies appear in the frequency band of 125–156.25 Hz during the WE stage. These results can help to better understand the time-frequency features of muscle synergy and expand the study of perspectives related to motor control in the nervous system.
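The NMF step applied inside each time-frequency window can be sketched with the classic Lee-Seung multiplicative updates (a minimal pure-Python version for illustration; the WPT stage of the pipeline is omitted here). Rows of H play the role of synergy modules and W holds their activations:

```python
def nmf(V, rank, iters=500, eps=1e-9):
    """Non-negative matrix factorisation V ~ W H by Lee-Seung
    multiplicative updates (Frobenius-norm objective)."""
    import random
    rng = random.Random(0)
    m, n = len(V), len(V[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() for _ in range(n)] for _ in range(rank)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]  # rank-1 test matrix
W, H = nmf(V, rank=1)
recon = [[sum(W[i][k] * H[k][j] for k in range(1)) for j in range(2)]
         for i in range(2)]
```

In the TF-NMF setting, V would be the (non-negative) muscle-by-time activity matrix within one wavelet-packet frequency band, and the chosen rank is the number of synergy modules.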

Entropy doi: 10.3390/e24050706

Authors: John C. Baez

The Rényi entropy is a generalization of the usual concept of entropy which depends on a parameter q. In fact, Rényi entropy is closely related to free energy. Suppose we start with a system in thermal equilibrium and then suddenly divide the temperature by q. Then the maximum amount of work the system can perform as it moves to equilibrium at the new temperature, divided by the change in temperature, equals the system's Rényi entropy in its original state. This result applies to both classical and quantum systems. Mathematically, we can express this result as follows: the Rényi entropy of a system in thermal equilibrium is minus the 'q−1-derivative' of its free energy with respect to the temperature. This shows that Rényi entropy is a q-deformation of the usual concept of entropy.
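In symbols (notation assumed here for illustration: F(T) is the equilibrium free energy at temperature T and S the ordinary entropy), the stated result reads:

```latex
S_q \;=\; -\,\frac{F(T/q) - F(T)}{T/q - T},
\qquad
\lim_{q \to 1} S_q \;=\; -\,\frac{\partial F}{\partial T} \;=\; S .
```

The q → 1 limit recovers the usual thermodynamic relation between entropy and free energy, which is the sense in which the Rényi entropy is a q-deformation of S.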

Entropy doi: 10.3390/e24050705

Authors: Haihui Yang Shiguo Huang Shengwei Guo Guobing Sun

With the widespread use of emotion recognition, cross-subject emotion recognition based on EEG signals has become a hot topic in affective computing. Electroencephalography (EEG) can be used to detect the brain's electrical activity associated with different emotions. The aim of this research is to improve the accuracy by enhancing the generalization of features. A Multi-Classifier Fusion method based on mutual information with sequential forward floating selection (MI_SFFS) is proposed. The dataset used in this paper is DEAP, which is a multi-modal open dataset containing 32 EEG channels and multiple other physiological signals. First, high-dimensional features are extracted from 15 EEG channels of DEAP after using a 10 s time window for data slicing. Second, MI and SFFS are integrated as a novel feature-selection method. Then, support vector machine (SVM), k-nearest neighbor (KNN) and random forest (RF) are employed to classify positive and negative emotions to obtain the output probabilities of classifiers as weighted features for further classification. To evaluate the model performance, leave-one-out cross-validation is adopted. Finally, cross-subject classification accuracies of 0.7089, 0.7106 and 0.7361 are achieved by the SVM, KNN and RF classifiers, respectively. The results demonstrate the feasibility of the model by splicing different classifiers' output probabilities as a portion of the weighted features.

Entropy doi: 10.3390/e24050703

Authors: Marco Tomassini

The local optima network model has proved useful in the past in connection with combinatorial optimization problems. Here we examine its extension to the real continuous function domain. Through a sampling process, the model builds a weighted directed graph which captures the function's minima basin structure and its interconnection and which can be easily manipulated with the help of complex networks metrics. We show that the model provides a complementary view of function spaces that is easier to analyze and visualize, especially at higher dimensions. In particular, we show that function hardness as represented by algorithm performance is strongly related to several graph properties of the corresponding local optima network, opening the way for a classification of problem difficulty according to the corresponding graph structure and with possible extensions in the design of better metaheuristic approaches.

Entropy doi: 10.3390/e24050702

Authors: Xiaoteng Yang Zhenqiang Wu Shumaila Javaid

The interdependence of financial institutions is primarily responsible for creating a systemic hierarchy in the industry. In this paper, an Adaptive Hierarchical Network Model is proposed to study the hierarchical relationships arising among different individuals in the economic domain. In the presented dynamically evolving network model, new directed edges are generated depending on the existing nodes and the hierarchical structures of the network, and these edges decay over time. When the preference of nodes in the network for higher ranks exceeds a certain threshold value, the equality state of the network becomes unstable and rank states emerge. We select four real data sets for model evaluation and observe resilience in the evolution of the network hierarchy, as well as the differences produced by different hierarchy preference mechanisms, which helps us better understand network dynamics and their evolution.

Entropy doi: 10.3390/e24050701

Authors: Loretta Mastroeni Pierluigi Vellucci

As pointed out by many researchers, replication plays a key role in the credibility of the applied sciences and the confidence in all research findings. With regard, in particular, to energy finance and economics, replication papers are rare, probably because they are hampered by inaccessible data, but their aim is crucial. We consider two ways to avoid misleading results on the ostensible chaoticity of price series. The first is the proper mathematical definition of chaos and the related theoretical background, while the second is the hybrid approach that we propose here, namely, treating the dynamical system underlying the price time series as a deterministic system with noise. We find that both chaotic and stochastic features coexist in the energy commodity markets, although the misuse of some tests, an established practice in the literature, may suggest otherwise.

Entropy doi: 10.3390/e24050700

Authors: Jiansheng Bai Jinjie Yao Juncheng Qi Liming Wang

AMC (automatic modulation classification) plays a vital role in spectrum monitoring and electromagnetic abnormal-signal detection. Up to now, few studies have focused on the complementarity between features of different modalities and the importance of the feature fusion mechanism in AMC methods. This paper proposes a dual-modal feature fusion convolutional neural network (DMFF-CNN) for AMC that fully uses the complementarity between different modal features. DMFF-CNN combines Gramian angular field (GAF) image coding and in-phase/quadrature (IQ) data with CNNs. Firstly, the original signal is converted into images by the GAF, and the GAF images are used as the input of ResNet50. Secondly, the signal is converted into IQ data and used as the input of a complex-valued network (CV-CNN) to extract features. Furthermore, a dual-modal feature fusion mechanism (DMFF) is proposed to fuse the dual-modal features extracted by GAF-ResNet50 and the CV-CNN. The fused feature is used as the input of DMFF-CNN for model training to achieve AMC of multi-type signals. In the evaluation stage, the advantages of the proposed DMFF mechanism and the accuracy improvement over other feature fusion algorithms are discussed. The experiments show that our method performs better than others, including some state-of-the-art methods, and has superior robustness at a low signal-to-noise ratio (SNR); the average classification accuracy over the dataset signals reaches 92.1%. The DMFF-CNN proposed in this paper provides a new path for the AMC field.
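The GAF image coding used as one input modality has a compact standard definition: rescale the signal to [-1, 1], map each sample to an angle, and form a matrix of pairwise trigonometric sums or differences. A minimal sketch (illustrative, not the paper's code; `summation=True` gives the Gramian angular summation field):

```python
import math

def gramian_angular_field(series, summation=True):
    """Gramian angular field image of a 1-D signal: rescale to [-1, 1],
    take phi_i = arccos(x_i), and form G[i][j] = cos(phi_i + phi_j)
    (summation field) or sin(phi_i - phi_j) (difference field)."""
    lo, hi = min(series), max(series)
    scaled = [2 * (x - lo) / (hi - lo) - 1 for x in series]
    # clamp against floating-point drift before arccos
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]
    if summation:
        return [[math.cos(a + b) for b in phi] for a in phi]
    return [[math.sin(a - b) for b in phi] for a in phi]

sig = [math.sin(0.2 * t) for t in range(32)]
gaf = gramian_angular_field(sig)  # 32 x 32 image, values in [-1, 1]
```

The resulting matrix preserves temporal correlations as a 2-D texture, which is what makes it a suitable image input for ResNet50.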

Entropy doi: 10.3390/e24050699

Authors: Anna Wawrzaszek Renata Modzelewska Agata Krasińska Agnieszka Gil Vasile Glavan

We analyse the fractal nature of the geomagnetic field's northward and eastward horizontal components, measured with 1 min resolution at the four stations Belsk, Hel, Sodankylä and Hornsund during the period of 22 August–1 September 2018, which includes the 26 August 2018 geomagnetic storm. To reveal and quantitatively describe the fractal scaling of the considered data, three selected methods, structure function scaling, Higuchi analysis, and detrended fluctuation analysis, are applied. The obtained results show temporal variation of the fractal dimension of the geomagnetic field components, revealing differences in their irregularity (complexity). The values of the fractal dimension seem to be sensitive to the physical conditions connected with the interplanetary shock, the coronal mass ejection, the corotating interaction region, and the high-speed stream passage during the storm development. In particular, just after the interplanetary shock occurrence, a decrease in the fractal dimension is observed for all stations, an effect not straightforwardly visible in the geomagnetic field component data themselves.
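Of the three methods, Higuchi's algorithm is the most compact to illustrate. A pure-Python sketch (the choice of `kmax` and the test signals are illustrative, not from the paper):

```python
import math
import random

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a time series.

    For each lag k and offset m, the normalised curve length L_m(k)
    is computed; the mean L(k) scales as k^(-D), and D is the slope
    of log L(k) against log(1/k)."""
    N = len(x)
    pts = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            n_max = (N - 1 - m) // k
            if n_max < 1:
                continue
            length = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                         for i in range(1, n_max + 1))
            # normalisation factor from Higuchi's definition
            Lk.append(length * (N - 1) / (n_max * k * k))
        pts.append((math.log(1.0 / k), math.log(sum(Lk) / len(Lk))))
    # least-squares slope of log L(k) vs log(1/k)
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(2000)]  # white noise: D near 2
line = [0.001 * t for t in range(2000)]            # smooth curve: D near 1
d_noise = higuchi_fd(noise)
d_line = higuchi_fd(line)
```

Sliding this estimator over consecutive windows of a geomagnetic component yields the kind of time-varying fractal dimension curve the abstract describes, with drops marking intervals of reduced irregularity.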

Entropy doi: 10.3390/e24050698

Authors: Marco Favretti

In this paper we introduce a class of statistical models consisting of exponential families depending on additional parameters, called external parameters. The main source for these statistical models resides in the Maximum Entropy framework, where we have thermal parameters, corresponding to the natural parameters of an exponential family, and mechanical parameters, here called external parameters. In the first part we study the geometry of these models, introducing a fibration of the parameter space over the external parameters. In the second part we investigate a class of evolution problems driven by a Fokker-Planck equation whose stationary distribution is an exponential family with external parameters. We discuss applications of these statistical models to thermodynamic length, to the isentropic evolution of thermodynamic systems, and to a problem in the dynamics of quantitative traits in genetics.

Entropy doi: 10.3390/e24050697

Authors: Petr Andriushchenko Dmitrii Kapitan Vitalii Kapitan

Spin glass is the simplest disordered system that preserves the full range of complex collective behavior of interacting frustrated elements. In this paper, we propose a novel approach for calculating the values of thermodynamic averages of a frustrated spin glass model using custom deep neural networks. The spin glass system is considered as a specific weighted graph whose spatial distribution of edge values determines the fundamental characteristics of the system. Special neural network architectures that mimic the structure of spin lattices are proposed, which increases the speed of learning and the accuracy of the predictions compared to a baseline of fully connected neural networks. At the same time, the use of trained neural networks can reduce simulation time by orders of magnitude compared to other classical methods. The validity of the results is confirmed by comparison with numerical simulations using the replica-exchange Monte Carlo method.

Entropy doi: 10.3390/e24050696

Authors: Tatiana Mihaescu Aurelian Isar

The Markovian time evolution of the entropy production rate is studied as a measure of irreversibility generated in a bipartite quantum system consisting of two coupled bosonic modes immersed in a common thermal environment. The dynamics of the system is described in the framework of the formalism of the theory of open quantum systems based on completely positive quantum dynamical semigroups, for initial two-mode squeezed thermal states, squeezed vacuum states, thermal states and coherent states. We show that the rate of the entropy production of the initial state and nonequilibrium stationary state, and the time evolution of the rate of entropy production, strongly depend on the parameters of the initial Gaussian state (squeezing parameter and average thermal photon numbers), frequencies of modes, parameters characterising the thermal environment (temperature and dissipation coefficient), and the strength of coupling between the two modes. We also provide a comparison of the behaviour of entropy production rate and Rényi-2 mutual information present in the considered system.

Entropy doi: 10.3390/e24050695

Authors: Pedro Vega-Jorquera Erick De la Barra Héctor Torres Yerko Vásquez

Mathai's pathway model is playing an increasingly prominent role in statistical distributions. As a generalization of a great variety of distributions, the pathway model allows the study of several non-linear dynamics of complex systems. Here, we construct a model, called the Pareto–Mathai distribution, using the fact that the earthquake magnitudes of full catalogues are well modeled by a Mathai distribution. The Pareto–Mathai distribution is used to study artificially induced microseisms in the mining industry. Fitting a distribution over the entire range of magnitudes allows us to calculate the completeness magnitude (Mc). Mathematical properties of the new distribution are studied. In addition, applying this model to data recorded at a Chilean mine, the magnitude Mc is estimated for several mine sectors as well as for the entire mine.

]]>Entropy doi: 10.3390/e24050694

Authors: Jinzhuo Liu Yunchen Peng Peican Zhu Yong Yu

We introduce a mixed network coupling mechanism and study its effects on how cooperation evolves in interdependent networks. This mechanism allows some players (conservative-driven) to establish a fixed-strength coupling, while other players (radical-driven) adjust their coupling strength through the evolution of strategy. By means of numerical simulation, a hump-like relationship between the level of cooperation and conservative participant density is revealed. Interestingly, interspecies interactions stimulate polarization of the coupling strength of radical-driven players, promoting cooperation between two types of players. We thus demonstrate that a simple mixed network coupling mechanism substantially expands the scope of cooperation among structured populations.

]]>Entropy doi: 10.3390/e24050693

Authors: Yan Yan Feng Jiang Xinan Zhang Tianhai Tian

One of the key challenges in systems biology and molecular sciences is how to infer regulatory relationships between genes and proteins using high-throughput omics datasets. Although a wide range of methods have been designed to reverse engineer regulatory networks, recent studies show that the inferred network may depend on the variable order in the dataset. In this work, we develop a new algorithm, called the statistical path-consistency algorithm (SPCA), to resolve this dependence on variable order. The method generates a number of different variable orders using random samples, and then infers a network using the path-consistency algorithm for each variable order. We propose measures to determine the edge weights using the corresponding edge weights in the inferred networks, and choose the edges with the largest weights as the putative regulations between genes or proteins. The developed method is rigorously assessed on the six benchmark networks from the DREAM challenges, the mitogen-activated protein (MAP) kinase pathway, and a cancer-specific gene regulatory network. The inferred networks are compared with those obtained using two up-to-date inference methods. The accuracy of the inferred networks shows that the developed method is effective for discovering molecular regulatory systems.

]]>Entropy doi: 10.3390/e24050692

Authors: Lihui Sun Ya Liu Chen Li Kaikai Zhang Wenxing Yang Zbigniew Ficek

Interesting coherence and correlations appear between superpositions of two bosonic modes when the modes are parametrically coupled to a third intermediate mode and are also coupled to external modes which are in thermal states of unequal mean photon numbers. Under such conditions, it is found that one of the linear superpositions of the modes, which is effectively decoupled from the other modes, can be perfectly coherent with the other, orthogonal superposition of the modes and can simultaneously exhibit anticoherence with the intermediate mode, which can give rise to entanglement between the modes. It is shown that the coherence effects have a substantial influence on the population distribution between the modes, which may result in lowering the population of the intermediate mode. This shows that the system can be employed to cool modes to lower temperatures. Furthermore, for appropriate thermal photon numbers and coupling strengths between the modes, it is found that entanglement between the directly coupled superposition and the intermediate modes may occur in a less restricted range of the number of thermal photons, such that the modes can be strongly entangled even at large thermal photon numbers.

]]>Entropy doi: 10.3390/e24050691

Authors: Lihua Yang Xiaofei Qi Jinchuan Hou

In the last decade, much attention has been focused on examining the nonlocality of various quantum networks, which are fundamental for long-distance quantum communications. In this paper, we consider the nonlocality of any forked tree-shaped network, where each node shares an arbitrary number of bipartite sources with the nodes in the next "layer". Bell-type inequalities for such quantum networks are obtained, which are satisfied by all (t_n - 1)-local correlations and by all local correlations, respectively, where t_n denotes the total number of nodes in the network. The maximal quantum violations of these inequalities and the robustness to noise in these networks are also discussed. Our network can be seen as a generalization of some known quantum networks.

]]>Entropy doi: 10.3390/e24050690

Authors: Bjarne Andresen Peter Salamon

Finite-time thermodynamics was created 45 years ago as a slight modification of classical thermodynamics, by adding the constraint that the process in question goes to completion within a finite length of time [...]

]]>Entropy doi: 10.3390/e24050689

Authors: Paul B. Badcock Maxwell J. D. Ramstead Zahra Sheikhbahaee Axel Constant

The free energy principle (FEP) is a formulation of the adaptive, belief-driven behaviour of self-organizing systems that gained prominence in the early 2000s as a unified model of the brain [...]

]]>Entropy doi: 10.3390/e24050688

Authors: Fábio Mendonça Sheikh Shanawaz Mostafa Diogo Freitas Fernando Morgado-Dias Antonio G. Ravelo-García

Methodologies for automatic non-rapid eye movement and cyclic alternating pattern analysis were proposed to examine the signal from one electroencephalogram monopolar derivation for the A phase, cyclic alternating pattern cycles, and cyclic alternating pattern rate assessments. A population composed of subjects free of neurological disorders and subjects diagnosed with sleep-disordered breathing was studied. Parallel classifications were performed for non-rapid eye movement and A phase estimations, examining a one-dimension convolutional neural network (fed with the electroencephalogram signal), a long short-term memory (fed with the electroencephalogram signal or with proposed features), and a feed-forward neural network (fed with proposed features), along with a finite state machine for the cyclic alternating pattern cycle scoring. Two hyper-parameter tuning algorithms were developed to optimize the classifiers. The model with long short-term memory fed with proposed features was found to be the best, with accuracy and area under the receiver operating characteristic curve of 83% and 0.88, respectively, for the A phase classification, while for the non-rapid eye movement estimation, the results were 88% and 0.95, respectively. The cyclic alternating pattern cycle classification accuracy was 79% for the same model, while the cyclic alternating pattern rate percentage error was 22%.

]]>Entropy doi: 10.3390/e24050687

Authors: Afek Ilay Adler Amichai Painsky

Gradient Boosting Machines (GBM) are among the go-to algorithms for tabular data, producing state-of-the-art results in many prediction tasks. Despite its popularity, the GBM framework suffers from a fundamental flaw in its base learners. Specifically, most implementations utilize decision trees that are typically biased towards categorical variables with large cardinalities. The effect of this bias has been extensively studied over the years, mostly in terms of predictive performance. In this work, we extend the scope and study the effect of biased base learners on GBM feature importance (FI) measures. We demonstrate that although these implementations achieve highly competitive predictive performance, they still, surprisingly, suffer from bias in FI. By utilizing cross-validated (CV) unbiased base learners, we fix this flaw at a relatively low computational cost. We demonstrate the suggested framework in a variety of synthetic and real-world setups, showing a significant improvement in all GBM FI measures while maintaining essentially the same level of prediction accuracy.

]]>Entropy doi: 10.3390/e24050686

Authors: Cen-Jhih Li Pin-Han Huang Yi-Ting Ma Hung Hung Su-Yun Huang

Federated learning is a framework for multiple devices or institutions, called local clients, to collaboratively train a global model without sharing their data. For federated learning with a central server, an aggregation algorithm integrates model information sent from local clients to update the parameters of a global model. The sample mean is the simplest and most commonly used aggregation method. However, it is not robust for data with outliers or under the Byzantine problem, where Byzantine clients send malicious messages to interfere with the learning process. Some robust aggregation methods have been introduced in the literature, including the marginal median, the geometric median and the trimmed mean. In this article, we propose an alternative robust aggregation method, named γ-mean, which is the minimum divergence estimation based on a robust density power divergence. The γ-mean aggregation mitigates the influence of Byzantine clients by assigning them smaller weights. This weighting scheme is data-driven and controlled by the γ value. Robustness from the viewpoint of the influence function is discussed and some numerical results are presented.
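For concreteness, the classical robust aggregators mentioned in the abstract can be sketched in a few lines. The γ-mean itself requires the full density-power-divergence machinery; the reweighted mean below only loosely illustrates the down-weighting idea (all function names and the weighting form are ours, not the paper's):

```python
import math

def marginal_median(updates):
    """Coordinate-wise (upper) median of the clients' parameter vectors."""
    n = len(updates)
    return [sorted(u[k] for u in updates)[n // 2] for k in range(len(updates[0]))]

def trimmed_mean(updates, trim=1):
    """Per coordinate, drop the `trim` smallest and largest values, then average."""
    n = len(updates)
    out = []
    for k in range(len(updates[0])):
        vals = sorted(u[k] for u in updates)[trim:n - trim]
        out.append(sum(vals) / len(vals))
    return out

def reweighted_mean(updates, gamma=0.1, iters=25):
    """Iteratively reweighted mean whose weights decay exponentially with the
    squared distance to the current estimate: a loose stand-in for the
    data-driven down-weighting that the gamma-mean induces on Byzantine clients."""
    d = len(updates[0])
    mu = [sum(u[k] for u in updates) / len(updates) for k in range(d)]
    for _ in range(iters):
        w = [math.exp(-gamma * sum((u[k] - mu[k]) ** 2 for k in range(d)))
             for u in updates]
        s = sum(w)
        mu = [sum(wi * u[k] for wi, u in zip(w, updates)) / s for k in range(d)]
    return mu
```

With one Byzantine client sending a wildly wrong update, the plain mean is dragged far off while all three robust variants stay near the honest clients' consensus.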

]]>Entropy doi: 10.3390/e24050685

Authors: Thibaud Brochet Jérôme Lapuyade-Lahorgue Alexandre Huat Sébastien Thureau David Pasquier Isabelle Gardin Romain Modzelewski David Gibon Juliette Thariat Vincent Grégoire Pierre Vera Su Ruan

Alexandre Huat, Sébastien Thureau, David Pasquier, Isabelle Gardin, Romain Modzelewski, David Gibon, Juliette Thariat and Vincent Grégoire were not included as authors in the original publication [...]

]]>Entropy doi: 10.3390/e24050684

Authors: Loreta Saunoriene Kamilija Jablonskaite Jurate Ragulskiene Minvydas Ragulskis

A computational technique for the determination of optimal hiding conditions of a digital image in a self-organizing pattern is presented in this paper. Three statistical features of the developing pattern (the Wada index based on the weighted and truncated Shannon entropy, the mean of the brightness of the pattern, and the p-value of the Kolmogorov-Smirnov criterion for the normality testing of the distribution function) are used for that purpose. The transition from the small-scale chaos of the initial conditions to the large-scale chaos of the developed pattern is observed during the evolution of the self-organizing system. Computational experiments are performed with the stripe-type patterns, spot-type patterns, and unstable patterns. It appears that optimal image hiding conditions are secured when the Wada index stabilizes after the initial decline, the mean of the brightness of the pattern remains stable before dropping down significantly below the average, and the p-value indicates that the distribution becomes Gaussian.

]]>Entropy doi: 10.3390/e24050683

Authors: Jialin Zhang Jingyi Shi

Shannon's entropy is one of the building blocks of information theory and an essential aspect of Machine Learning (ML) methods (e.g., Random Forests). Yet, it is only finitely defined for distributions with fast decaying tails on a countable alphabet. The unboundedness of Shannon's entropy over the general class of all distributions on an alphabet prevents its potential utility from being fully realized. To fill the void in the foundation of information theory, Zhang (2020) proposed generalized Shannon's entropy, which is finitely defined everywhere. The plug-in estimator, adopted in almost all entropy-based ML method packages, is one of the most popular approaches to estimating Shannon's entropy. The asymptotic distribution for Shannon's entropy's plug-in estimator was well studied in the existing literature. This paper studies the asymptotic properties for the plug-in estimator of generalized Shannon's entropy on countable alphabets. The developed asymptotic properties require no assumptions on the original distribution. The proposed asymptotic properties allow for interval estimation and statistical tests with generalized Shannon's entropy.
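As a point of reference, the plug-in estimator discussed here simply substitutes empirical frequencies into Shannon's formula. A minimal sketch (in nats; the function name is ours):

```python
import math
from collections import Counter

def plug_in_entropy(sample):
    """Plug-in (maximum likelihood) estimator of Shannon entropy in nats:
    replace the unknown letter probabilities by empirical frequencies."""
    n = len(sample)
    return -sum((c / n) * math.log(c / n) for c in Counter(sample).values())
```

Its simplicity explains its ubiquity in ML packages; it is also known to be biased downward for small samples, which is part of why its asymptotic behaviour deserves the careful study the paper undertakes.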

]]>Entropy doi: 10.3390/e24050682

Authors: Lorenzo Squadrani Nico Curti Enrico Giampieri Daniel Remondini Brian Blais Gastone Castellani

Purpose: In this work, we propose an implementation of the Bienenstock–Cooper–Munro (BCM) model, obtained by combining the classical framework with modern deep learning methodologies. The BCM model remains one of the most promising approaches to modeling the synaptic plasticity of neurons, but its application has remained mainly confined to neuroscience simulations and a few applications in data science. Methods: To improve the convergence efficiency of the BCM model, we combine the original plasticity rule with the optimization tools of modern deep learning. Through numerical simulations on standard benchmark datasets, we demonstrate the efficiency of the BCM model in learning, memorization capacity, and feature extraction. Results: In all the numerical simulations, the visualization of neuronal synaptic weights confirms the memorization of human-interpretable subsets of patterns. We numerically show that the selectivity obtained by BCM neurons is indicative of an internal feature extraction procedure, useful for pattern clustering and classification. The introduction of competitiveness between neurons in the same BCM network allows the network to modulate the memorization capacity of the model and the consequent model selectivity. Conclusions: The proposed improvements make the BCM model a suitable alternative to standard machine learning techniques for both feature selection and classification tasks.

]]>Entropy doi: 10.3390/e24050681

Authors: Meng Tang Yaxuan Liao Fan Luo Xiangshun Li

When rotating machinery fails, the consequent vibration signal contains rich fault feature information. However, the vibration signal bears the characteristics of nonlinearity and nonstationarity, and is easily disturbed by noise, thus it may be difficult to accurately extract hidden fault features. To extract effective fault features from the collected vibration signals and improve the diagnostic accuracy of weak faults, a novel method for fault diagnosis of rotating machinery is proposed. The new method is based on Fast Iterative Filtering (FIF) and Parameter Adaptive Refined Composite Multiscale Fluctuation-based Dispersion Entropy (PARCMFDE). Firstly, the collected original vibration signal is decomposed by FIF to obtain a series of intrinsic mode functions (IMFs), and the IMFs with a large correlation coefficient are selected for reconstruction. Then, a PARCMFDE is proposed for fault feature extraction, where its embedding dimension and class number are determined by Genetic Algorithm (GA). Finally, the extracted fault features are input into Fuzzy C-Means (FCM) to classify different states of rotating machinery. The experimental results show that the proposed method can accurately extract weak fault features and realize reliable fault diagnosis of rotating machinery.
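The dispersion entropy at the core of PARCMFDE is, in its basic single-scale form, the Shannon entropy of "dispersion patterns". The sketch below shows only that building block under common default choices (normal-CDF mapping to c classes, embedding dimension m), not the full parameter-adaptive refined composite multiscale fluctuation-based version proposed in the paper:

```python
import math
from collections import Counter
from statistics import NormalDist, mean, stdev

def dispersion_entropy(x, m=2, c=3, delay=1):
    """Basic dispersion entropy: map each sample to one of c classes via the
    normal CDF fitted to the signal, count embedding patterns of length m,
    and take the Shannon entropy of the pattern distribution (in nats)."""
    nd = NormalDist(mean(x), stdev(x))
    # Class labels 1..c from the normal CDF of each sample.
    z = [min(c, max(1, math.ceil(c * nd.cdf(v)))) for v in x]
    patterns = Counter(tuple(z[i + k * delay] for k in range(m))
                       for i in range(len(z) - (m - 1) * delay))
    total = sum(patterns.values())
    return -sum((p / total) * math.log(p / total) for p in patterns.values())
```

A strictly periodic signal concentrates on a few patterns (low entropy), while a noisy signal spreads over many (higher entropy), which is why the measure separates healthy from faulty vibration regimes.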

]]>Entropy doi: 10.3390/e24050680

Authors: Peter Grassberger

We present a new class of estimators of Shannon entropy for severely undersampled discrete distributions. It is based on a generalization of an estimator proposed by T. Schürmann, which is itself a generalization of an estimator I proposed earlier. For a special set of parameters, these estimators are completely free of bias and have a finite variance, something which is widely believed to be impossible. We also present detailed numerical tests, where we compare them with other recent estimators and with exact results, and point out a clash with Bayesian estimators for mutual information.

]]>Entropy doi: 10.3390/e24050679

Authors: Richard D. Gill

In 2016, Steve Gull outlined a proof of Bell's theorem using Fourier theory. Gull's philosophy is that Bell's theorem (or perhaps a key lemma in its proof) can be seen as a no-go theorem for a project in distributed computing with classical, not quantum, computers. We present his argument, correcting misprints and filling gaps. In his argument, there were two completely separated computers in the network. We need three in order to fill all the gaps in his proof: a third computer supplies a stream of random numbers to the two computers representing the two measurement stations in Bell's work. One could also imagine that computer being replaced by a cloned, virtual computer, generating the same pseudo-random numbers within each of Alice's and Bob's computers. Either way, we need to assume the presence of shared i.i.d. randomness in the form of a synchronised sequence of realisations of i.i.d. hidden variables underlying the otherwise deterministic physics of the sequence of trials. Gull's proof then just needs a third step: rewriting an expectation as the expectation of a conditional expectation given the hidden variables.
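The third step can be stated compactly. Writing A(a, Λ) and B(b, Λ) for the ±1-valued outcomes at settings a and b (standard Bell-experiment notation; the symbols here are ours, not Gull's), the tower property of conditional expectation gives:

```latex
\mathbb{E}\left[A(a,\Lambda)\,B(b,\Lambda)\right]
  \;=\; \mathbb{E}\Bigl[\,\mathbb{E}\left[A(a,\Lambda)\,B(b,\Lambda)\mid\Lambda\right]\Bigr]
```

Conditioning on the shared hidden variables Λ reduces the inner expectation, in a local deterministic model, to a product of local functions, which is exactly where the classical no-go argument takes hold.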

]]>Entropy doi: 10.3390/e24050678

Authors: Yu Chen Jieyu Zhao Qilu Qiu

Learning the relationship between the parts and the whole of an object, as humans do when recognizing objects, is a challenging task. In this paper, we design a novel neural network to explore the local-to-global cognition of 3D models and the aggregation of structural contextual features in 3D space, inspired by the recent success of the Transformer in natural language processing (NLP) and by impressive strides in image analysis tasks such as image classification and object detection. We build a 3D shape Transformer based on local shape representation, which provides relation learning between local patches on 3D mesh models. Similar to token (word) states in NLP, we propose local shape tokens to encode local geometric information. On this basis, we design a shape-Transformer-based capsule routing algorithm. By applying an iterative capsule routing algorithm, local shape information can be further aggregated into high-level capsules containing deeper contextual information, so as to realize cognition from the local to the whole. We performed classification tasks on the deformable 3D object datasets SHREC10 and SHREC15 and on the large dataset ModelNet40, and obtained convincing results, which show that our model performs well in complex 3D model recognition and big-data feature learning.

]]>Entropy doi: 10.3390/e24050677

Authors: Takuto Isoyama Shunsuke Kidani Masashi Unoki

State-of-the-art speech watermarking techniques enable speech signals to be authenticated and protected against any malicious attack to ensure secure speech communication. In general, reliable speech watermarking methods must satisfy four requirements: inaudibility, robustness, blind-detectability, and confidentiality. We previously proposed a method of non-blind speech watermarking based on direct spread spectrum (DSS) using a linear prediction (LP) scheme to solve the first two issues (inaudibility and robustness) due to distortion by spread spectrum. This method not only effectively embeds watermarks with small distortion but also has the same robustness as the DSS method. There are, however, two remaining issues with blind-detectability and confidentiality. In this work, we attempt to resolve these issues by developing an approach called the LP-DSS scheme, which takes two forms of data embedding for blind detection and frame synchronization. We incorporate blind detection with frame synchronization into the scheme to satisfy blind-detectability and incorporate two forms of data embedding process, front-side and back-side embedding for blind detection and frame synchronization, to satisfy confidentiality. We evaluated these improved processes by carrying out four objective tests (PESQ, LSD, Bit-error-rate, and accuracy of frame synchronization) to determine whether inaudibility and blind-detectability could be satisfied. We also evaluated all combinations with the two forms of data embedding for blind detection with frame synchronization by carrying out BER tests to determine whether confidentiality could be satisfied. Finally, we comparatively evaluated the proposed method by carrying out ten robustness tests against various processing and attacks. Our findings showed that an inaudible, robust, blindly detectable, and confidential speech watermarking method based on the proposed LP-DSS scheme could be achieved.
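To make the DSS idea concrete: a watermark bit is spread over many chips of a keyed pseudo-noise sequence, added weakly to the host signal, and later recovered from the sign of a correlation. The sketch below shows plain DSS only, without the LP shaping, blind detection, or frame-synchronization machinery of the proposed LP-DSS scheme (function names and parameters are illustrative):

```python
import random

def embed(host, bits, chip_rate, alpha=0.05):
    """Direct spread spectrum: spread each bit over `chip_rate` chips of a
    pseudo-noise (PN) sequence generated from a shared secret seed, and add
    the spread signal to the host at low amplitude alpha."""
    rng = random.Random(42)  # the PN seed plays the role of the secret key
    pn = [rng.choice((-1.0, 1.0)) for _ in range(len(bits) * chip_rate)]
    watermarked = [h + alpha * (1 if bits[i // chip_rate] else -1) * pn[i]
                   for i, h in enumerate(host)]
    return watermarked, pn

def detect(signal, n_bits, chip_rate, pn):
    """Recover each bit from the sign of the correlation with the PN chips."""
    bits = []
    for b in range(n_bits):
        corr = sum(signal[i] * pn[i]
                   for i in range(b * chip_rate, (b + 1) * chip_rate))
        bits.append(1 if corr > 0 else 0)
    return bits
```

Because the host's correlation with the PN sequence averages toward zero while the embedded bit's contribution grows linearly with the chip rate, detection is reliable even at small alpha, which is the root of both the robustness and the inaudibility trade-off discussed above.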

]]>Entropy doi: 10.3390/e24050676

Authors: Silvin P. Knight Mark Ward Louise Newman James Davis Eoin Duggan Rose Anne Kenny Roman Romero-Ortuno

In this study, the relationship between cardiovascular signal entropy and the risk of seven-year all-cause mortality was explored in a large sample of community-dwelling older adults from The Irish Longitudinal Study on Ageing (TILDA). The hypothesis under investigation was that physiological dysregulation might be quantifiable by the level of sample entropy (SampEn) in continuously noninvasively measured resting-state systolic (sBP) and diastolic (dBP) blood pressure (BP) data, and that this SampEn measure might be independently predictive of mortality. Participants' date of death up to 2017 was identified from official death registration data and linked to their TILDA baseline survey and health assessment data (2010). BP was continuously monitored during supine rest at baseline, and SampEn values were calculated for one-minute and five-minute sections of this data. In total, 4543 participants were included (mean (SD) age: 61.9 (8.4) years; 54.1% female), of whom 214 died. Cox proportional hazards regression models were used to estimate the hazard ratios (HRs) with 95% confidence intervals (CIs) for the associations between BP SampEn and all-cause mortality. Results revealed that higher SampEn in BP signals was significantly predictive of mortality risk, with an increase of one standard deviation in sBP SampEn and dBP SampEn corresponding to HRs of 1.19 and 1.17, respectively, in models comprehensively controlled for potential confounders. The quantification of SampEn in short-length BP signals could provide a novel and clinically useful predictor of mortality risk in older adults.
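SampEn itself admits a compact definition: with template length m and tolerance r, it is the negative logarithm of the conditional probability that sequences matching for m points also match for m + 1 points. One common variant is sketched below (self-matches excluded, Chebyshev distance; r here is an absolute tolerance, whereas in practice it is typically set relative to the signal's standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B), where B counts pairs of length-m templates
    matching within tolerance r (Chebyshev distance) and A counts the same
    for length m + 1. Self-matches are excluded (pairs i < j only)."""
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(templates))
                     for j in range(i + 1, len(templates))
                   if max(abs(a - b) for a, b in zip(templates[i],
                                                     templates[j])) <= r)
    b, a = count(m), count(m + 1)
    return float('inf') if a == 0 else -math.log(a / b)
```

Regular signals score near zero (matching templates almost always keep matching), while irregular signals score higher, which is why elevated SampEn in BP traces can serve as a marker of physiological dysregulation.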

]]>Entropy doi: 10.3390/e24050675

Authors: Visnja Ognjenovic Vladimir Brtka Jelena Stojanov Eleonora Brtka Ivana Berkovic

The preprocessing of data is an important task in rough set theory as well as in entropy-based methods. The discretization of data, as part of preprocessing, is a very influential process. Is there a connection between the segmentation of the data histogram and data discretization? The authors propose a novel data segmentation technique based on the histogram with regard to the quality of a data discretization. The significance of a cut's position has been researched on several groups of histograms. A data set reduct was observed with respect to the histogram type. Connections between the data histograms and cuts, the reduct and the classification rules have been researched. The result is that the reduct attributes have a more irregular histogram than attributes outside the reduct. The following discretization algorithms were used: the entropy algorithm and the Maximal Discernibility (MD) algorithm developed in rough set theory. This article presents the Cuts Selection Method based on histogram segmentation, the reduct of data and the MD algorithm of discretization. An application on the selected database shows that the benefits of a selection of cuts rely on histogram segmentation. The results of the classification were compared with the results of the Naïve Bayes algorithm.

]]>Entropy doi: 10.3390/e24050674

Authors: William Stuckey Michael Silberstein Timothy McDevitt Ian Kohler

The authors wish to make the following correction to this paper [...]

]]>Entropy doi: 10.3390/e24050673

Authors: Tahir Kerem Oğuz Elif Tuğçe Ceran Elif Uysal Tolga Girici

As communication systems evolve to better cater to the needs of machine-type applications such as remote monitoring and networked control, advanced perspectives are required for the design of link layer protocols. The age of information (AoI) metric has firmly taken its place in the literature as a metric and tool to measure and control the data freshness demands of various applications. AoI measures the timeliness of transferred information from the point of view of the destination. In this study, we experimentally investigate AoI of multiple packet flows on a wireless multi-user link consisting of a transmitter (base station) and several receivers, implemented using software-defined radios (SDRs). We examine the performance of various scheduling policies under push-based and pull-based communication scenarios. For the push-based communication scenario, we implement age-aware scheduling policies from the literature and compare their performance with those of conventional scheduling methods. Then, we investigate the query age of information (QAoI) metric, an adaptation of the AoI concept for pull-based scenarios. We modify the former age-aware policies to propose variants that have a QAoI minimization objective. We share experimental results obtained in a simulation environment as well as on the SDR testbed.
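The AoI metric itself is easy to state: at time t, the age is t minus the generation timestamp of the freshest update delivered so far, giving the familiar sawtooth process. A small discretized sketch of its time average (our own helper, not the testbed code):

```python
def average_aoi(deliveries, horizon, dt=0.001):
    """Time-average age of information over [0, horizon).
    `deliveries` is an iterable of (delivery_time, generation_time) pairs;
    at time t the age is t minus the freshest delivered generation time."""
    pending = sorted(deliveries)
    age_sum, freshest = 0.0, None
    for k in range(int(horizon / dt)):
        t = k * dt
        while pending and pending[0][0] <= t:
            freshest = pending.pop(0)[1]
        age_sum += (t - freshest if freshest is not None else t) * dt
    return age_sum / horizon
```

For periodic generation with period T and a fixed delivery delay d, the steady-state average age is d + T/2, which the discretized sum approaches; scheduling policies differ precisely in how they shape this sawtooth across competing flows.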

]]>Entropy doi: 10.3390/e24050668

Authors: Miguel Flores Diego Heredia Roberto Andrade Mariam Ibrahim

A risk assessment model for a smart home Internet of Things (IoT) network is implemented using a Bayesian network. The directed acyclic graph of the Bayesian network is constructed from an attack graph that details the paths through which different attacks can occur in the IoT network. The parameters of the Bayesian network are estimated with the maximum likelihood method applied to a data set obtained from the simulation of attacks, in five simulation scenarios. For the risk assessment, inferences in the Bayesian network and the impact of the attacks are considered, focusing on DoS attacks, MitM attacks, and both combined, directed at the devices that enable the automation of the smart home and that are generally the ones that individually have the lowest levels of security.

]]>Entropy doi: 10.3390/e24050672

Authors: Yeganeh Zamiri-Jafarian Konstantinos N. Plataniotis

This article proposes the Bayesian surprise as the main methodology that drives the cognitive radar to estimate a target's future state (i.e., velocity, distance) from noisy measurements and execute a decision to minimize the estimation error over time. The research aims to demonstrate whether the cognitive radar, as an autonomous system, can modify its internal model (i.e., waveform parameters) to gain consecutive informative measurements based on the Bayesian surprise. By assuming that the radar measurements are constructed from linear Gaussian state-space models, the paper applies Kalman filtering to perform state estimation for a simple vehicle-following scenario. According to the filter's estimate, the sensor measures the contribution of prospective waveforms (available from the sensor profile library) to state estimation and selects the one that maximizes the expected Bayesian surprise. Numerous experiments examine the estimation performance of the proposed cognitive radar for single-target tracking in practical highway and urban driving environments. The robustness of the proposed method is compared to the state of the art for various error measures. Results indicate that the Bayesian surprise outperforms its competitors with respect to the mean square relative error when one-step and multiple-step planning is considered.
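The state-estimation core here is a standard Kalman filter. For a scalar linear Gaussian state-space model, one predict/update cycle looks as follows (the parameters a, q, h, r are illustrative defaults, not values from the paper):

```python
def kalman_step(x, p, z, a=1.0, q=0.01, h=1.0, r=1.0):
    """One predict/update cycle of a scalar Kalman filter for the model
    x' = a*x + process noise (variance q),  z = h*x + meas. noise (var r).
    Takes the current estimate x with variance p and a measurement z;
    returns the updated estimate and variance."""
    # Predict
    x_pred = a * x
    p_pred = a * p * a + q
    # Update
    k = p_pred * h / (h * p_pred * h + r)   # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new
```

The posterior variance p is what a surprise-driven waveform selection acts on: waveforms whose measurements are expected to be more informative shrink p faster.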

]]>Entropy doi: 10.3390/e24050671

Authors: Kambale Mondo Senda Agrebi Fathi Hamdi Fatma Lakhal Amsini Sadiki Mouldi Chrigui

Even though there is a pressing interest in clean energy sources, compression ignition (CI) engines, also called diesel engines, will remain of great importance for the transportation sector as well as for power generation in stationary applications in the foreseeable future. In order to promote applications dealing with complex diesel alternative fuels by facilitating their integration in numerical simulation, this paper targets three objectives. First, to generate novel diesel fuel surrogates with more than one component. Here, five surrogates are generated using an advanced chemistry solver and are compared against three mechanisms from the literature. Second, to validate the suggested reaction mechanisms (RMs) with experimental data. For this purpose, an engine configuration is used which features a reacting spray flow evolving in a direct-injection (DI), single-cylinder, four-stroke motor. The RNG k-epsilon model coupled to power-law combustion models is applied to describe the complex in-cylinder turbulent reacting flow, while the hybrid Eulerian-Lagrangian Kelvin-Helmholtz/Rayleigh-Taylor (KH-RT) spray model is employed to capture the spray breakup. Third, to highlight the impact of these surrogate fuels on the combustion properties along with the exergy of the engine. The results include distributions of temperature, pressure, heat release rate (HRR), vapor penetration length, and exergy efficiency. The effect of the surrogates on pollutant formation (NOx, CO, CO2) is also highlighted. The fifth surrogate showed 47% exergy efficiency. The fourth surrogate agreed well with the maximum experimental pressure, which equaled 85 MPa. The first, second, and third surrogates registered 400, 316, and 276 g/kg fuel, respectively, of the total CO mass fraction at the outlet. These quantities were relatively high compared to those of the fourth and fifth RMs.

]]>Entropy doi: 10.3390/e24050670

Authors: Kurt A. Pflughoeft Ehsan S. Soofi Refik Soyer

Preserving confidentiality of individuals in data disclosure is a prime concern for public and private organizations. The main challenge in the data disclosure problem is to release data such that misuse by intruders is avoided while providing useful information to legitimate users for analysis. We propose an information theoretic architecture for the data disclosure problem. The proposed framework consists of developing a maximum entropy (ME) model based on statistical information of the actual data, testing the adequacy of the ME model, producing disclosure data from the ME model and quantifying the discrepancy between the actual and the disclosure data. The architecture can be used both for univariate and multivariate data disclosure. We illustrate the implementation of our approach using financial data.
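As a sketch of the maximum entropy building block (the abstract does not specify which constraints the authors impose, so take a generic mean constraint on a finite support as an illustrative stand-in): the maximizing distribution has exponential-family form p_i ∝ exp(λ·x_i), with the Lagrange multiplier λ found numerically.

```python
import math

def max_entropy_pmf(support, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum entropy distribution on a finite support subject to a mean
    constraint. The solution is p_i proportional to exp(lam * x_i); lam is
    found by bisection (the model mean is monotone increasing in lam)."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        return sum(x * wi for x, wi in zip(support, w)) / sum(w)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]
```

This is the classic loaded-die construction: constraining a die's mean to 4.5 tilts probability smoothly toward the high faces while committing to nothing beyond the stated statistic, which is exactly the property that makes ME models attractive for generating disclosure data.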

]]>Entropy doi: 10.3390/e24050669

Authors: Jesús Gutiérrez-Gutiérrez Fernando M. Villar-Rosety Xabier Insausti Marta Zárraga-Rodríguez

In the era of the Internet of Things, there are many applications where numerous devices are deployed to acquire information and transmit it so that the data can be analysed and informed decisions made. In these applications, the power consumption and price of the devices are often an issue. In this work, analog coding schemes are considered, so that an ADC is not needed, allowing the size and power consumption of the devices to be reduced. In addition, linear and DFT-based transmission schemes are proposed, so that the complexity of the operations involved is lowered, thus reducing the requirements in terms of processing capacity and the price of the hardware. The proposed schemes are proved to be asymptotically optimal among linear schemes for WSS, MA, AR and ARMA sources.

]]>Entropy doi: 10.3390/e24050667

Authors: Masaki Sohma Osamu Hirota

In this review paper, we first introduce the basic concept of quantum computer-resistant cryptography, which is the cornerstone of security technology for the network of a new era. Then, we describe the positioning of mathematical cryptography and quantum cryptography, which are currently being researched and developed. Quantum cryptography includes QKD and the quantum stream cipher, and we point out that the latter is expected to become the core technology of next-generation communication systems. Various ideas have been proposed for QKD quantum cryptography, but most of them use a single-photon or similar signal. Although such technologies are applicable to special situations, these methods still face several difficulties in providing functions that surpass conventional technologies for social systems in the real environment. Thus, the quantum stream cipher has come to be expected as one promising countermeasure, which artificially creates quantum properties using special modulation techniques based on the macroscopic coherent state. In addition, it has the possibility of providing security performance superior to that of the one-time pad cipher. Finally, we introduce detailed research activity aimed at putting the quantum stream cipher into practical use in social network technology.

]]>Entropy doi: 10.3390/e24050666

Authors: Pierre Nazé Marcus V. S. Bonança Sebastian Deffner

While quantum phase transitions share many characteristics with thermodynamic phase transitions, they are also markedly different as they occur at zero temperature. Hence, it is not immediately clear whether tools and frameworks that capture the properties of thermodynamic phase transitions also apply in the quantum case. For crossing thermodynamic critical points and describing the associated non-equilibrium dynamics, the Kibble&ndash;Zurek mechanism and linear response theory have been demonstrated to be among the most successful approaches. In the present work, we show that these two approaches are also consistent in the description of quantum phase transitions, and that linear response theory can even inform arguments of the Kibble&ndash;Zurek mechanism. In particular, we show that the relaxation time provided by linear response theory gives a rigorous argument for identifying the &ldquo;gap&rdquo; as a relaxation rate, and we verify that the excess work computed from linear response theory exhibits Kibble&ndash;Zurek scaling.

]]>Entropy doi: 10.3390/e24050665

Authors: Ricard Solé Luís F. Seoane

When computers started to become a dominant part of technology around the 1950s, fundamental questions about reliable designs and robustness were of great relevance. Their development gave rise to the exploration of new questions, such as what made brains reliable (since neurons can die) and how computers could take inspiration from neural systems. In parallel, the first artificial neural networks came to life. Since then, the comparative view between brains and computers has developed in new, sometimes unexpected directions. With the rise of deep learning and the development of connectomics, an evolutionary look at how both hardware and neural complexity have evolved or been designed is required. In this paper, we argue that important similarities have resulted both from convergent evolution (the inevitable outcome of architectural constraints) and from inspiration of hardware and software principles guided by toy pictures of neurobiology. Moreover, dissimilarities and gaps originate from the lack of major innovations that paved the way to biological computing (including brains) but are completely absent within the artificial domain. As occurs within synthetic biocomputation, we can also ask whether alternative minds can emerge from A.I. designs. Here, we take an evolutionary view of the problem and discuss the remarkable convergences between living and artificial designs and the pre-conditions needed to achieve artificial intelligence.

]]>Entropy doi: 10.3390/e24050664

Authors: Bowen Shi Ke Xu Jichang Zhao

The boom in social media with regard to producing and consuming information simultaneously implies the crucial role of online user influence in determining content popularity. In particular, understanding behavior variations between the influential elites and the mass grassroots is an important issue in communication. However, how their behavior varies across user categories and content domains and how these differences influence content popularity are rarely addressed. From a novel view of seven content domains, a detailed picture of the behavior variations among five user groups, from the views of both the elites and the mass, is drawn on Weibo, one of the most popular Twitter-like services in China. Interestingly, the elites post more diverse content with video links, while the mass possess more loyal retweeters. According to these variations, user-oriented actions for enhancing content popularity are discussed and tested. The most surprising finding is that diverse content does not always bring more retweets; the mass and the elites should promote content popularity by increasing their retweeter counts and retweeter loyalty, respectively. For the first time, our results demonstrate the possibility of highly individualized strategies of popularity promotion in social media, instead of a universal principle.

]]>Entropy doi: 10.3390/e24050663

Authors: Jacek Siódmiak Adam Gadomski

This communication addresses the question of the far-from-equilibrium growth of spherulites with different growing modes. The growth occurs in defect-containing condensed-matter environments relevant to (bio)polymers and biominerals. It turns out that, according to our considerations, spherulites can be anticipated to emerge prior to pure diffusion-controlled (poly)crystal growth. Specifically, we have shown how the emergence factors of two different types of spherulitic growth modes, namely diffusion-controlled growth and mass convection-controlled growth, appear. The unimodal crystalline Mullins&ndash;Sekerka type mode of growth, as we name it, characteristic of the presence of local curvatures, seems to be more entropy-productive in its emerging (structural) nature than the so-named bimodal or Goldenfeld type mode of growth, in which local curvatures do not play any crucial role. In turn, a liaison of amorphous and crystalline phases makes the system far better adjusted to the thermodynamic-kinetic conditions it actually, and concurrently, follows. The dimensionless character of the modeling suggests that the system does not directly depend upon experimental details, manifesting its quasi-universal, i.e., scaling, character.

]]>Entropy doi: 10.3390/e24050662

Authors: Ke Xue Zhigang Shen Shengmei Zhao Qianping Mao

Twin-field quantum key distribution (TF-QKD) has attracted considerable attention because it can exceed the basic rate-distance limit without quantum repeaters. Its variant protocol, sending-or-not-sending quantum key distribution (SNS-QKD), not only fixes the security vulnerability of TF-QKD, but can also tolerate large misalignment errors. However, the current SNS-QKD protocol is based on the active decoy-state method, which may lead to side channel information leakage when multiple light intensities are modulated in practice. In this work, we propose a passive decoy-state SNS-QKD protocol to further enhance the security of SNS-QKD. Numerical simulation results show that the protocol not only improves the security of the source, but also retains the advantage of tolerating large misalignment errors. Therefore, it may provide further guidance for the practical application of SNS-QKD.

]]>Entropy doi: 10.3390/e24050661

Authors: Abdulrahman Alenezi Abdulrahman Almutairi Hamad Alhajeri Saad F. Almekmesh Bashar B. Alzuwayer

In this paper, a numerical investigation of an air jet impinging normally on a horizontal heated plate was performed. Analysis of flow physics and entropy generation due to heat and friction is included using a simple, easy-to-manufacture surface roughening element: a circular rib concentric with the air jet. This study shows how varying the location and dimensions of the rib can deliver a favorable trade-off between entropy generation and flow parameters, such as vortex generation and heat transfer. The performance of the roughness element was tested at three different radii, R/D = 1, 1.5 and 2, where D was the jet hydraulic diameter and R was the radial distance from the geometric center. At each location, the normalized rib height (e/D) was increased from 0.019 to 0.074 in increments of (e/D) = 0.019. The jet-to-target distance was H/D = 6, and the jet Reynolds number (Re), obtained from the jet hydraulic diameter (D) and the jet exit velocity (U), ranged from 10,000 to 50,000. All results are presented in the form of entropy generation due to friction and heat exchange, as well as the total entropy generated. A detailed comparison of flow physics is presented for all ribs and compared with the baseline case of a smooth surface. The results show that at higher Reynolds numbers, adding a rib of a suitable height reduced the total entropy (St) by 31% compared to the no-rib case. In addition, with ribs of heights 0.019, 0.037 and 0.054, the entropy generated by friction (Sf) was greater than that due to heat exchange (Sh) by about 42%, 26% and 4%, respectively. The rib of height e/D = 0.074 produced the minimum St at R/D = 1. As for varying R/D, the rib location and Re values had a noticeable impact on Sh, Sf and St. Placing the rib at R/D = 1 gave the highest total entropy generation (St), followed by R/D = 1.5, for all Re. Finally, the Bejan number increased as both rib height and rib location increased.
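The final observation can be made concrete with the standard definition of the Bejan number, Be = Sh/(Sh + Sf), i.e., the fraction of the total entropy generation due to heat transfer. A toy computation with purely illustrative numbers (not the paper's data):

```python
def entropy_split(S_heat, S_fric):
    """Total entropy generation and Bejan number Be = S_h / (S_h + S_f)."""
    S_total = S_heat + S_fric
    return S_total, S_heat / S_total

# Illustrative values only: friction exceeding heat-exchange generation by 42%,
# as reported for the smallest rib height (e/D = 0.019).
S_h = 1.0
S_f = 1.42
S_t, Be = entropy_split(S_h, S_f)
print(round(S_t, 2), round(Be, 3))  # Be < 0.5: friction dominates
```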

]]>Entropy doi: 10.3390/e24050660

Authors: Yajie Yu Shaojun Xia Ming Zhao

The use of olefin oligomerization in the synthesis of liquid fuel has broad application prospects in military and civil fields. Here, based on finite time thermodynamics (FTT), an ethylene oligomerization chemical process (EOCP) model with a constant temperature heat source outside the heat exchanger and reactor pipes was established. The process was first optimized with the minimum specific entropy generation rate (SEGR) as the optimization objective, then multi-objective optimization was further performed by utilizing the NSGA-II algorithm with the minimization of the entropy generation rate (EGR) and the maximization of the C10H20 yield as the optimization objectives. The results showed that the point of the minimum EGR was the same as that of SEGR in the Pareto optimal frontier. The solution obtained using the Shannon entropy decision method had the lowest deviation index, the C10H20 yield was reduced by 49.46% compared with the point of reference and the EGR and SEGR were reduced by 59.01% and 18.88%, respectively.

]]>Entropy doi: 10.3390/e24050659

Authors: Pu Wang Zhihua Guo Huaixin Cao

Quantum coherence is known as an important resource in many quantum information tasks and is a basis-dependent property of quantum states. In this paper, we discuss quantum incoherence based simultaneously on k bases using the matrix theory method. First, by defining a correlation function m(e,f) of two orthonormal bases e and f, we investigate the relationships between the sets I(e) and I(f) of incoherent states with respect to e and f. We prove that I(e)=I(f) if and only if the rank-one projective measurements generated by e and f are identical. We give a necessary and sufficient condition for the intersection I(e)&#8898;I(f) to include a state other than the maximally mixed state. In particular, if two bases e and f are mutually unbiased, then the intersection contains only the maximally mixed state. Second, we introduce the concepts of strong incoherence and weak coherence of a quantum state with respect to a set B of k bases and propose a measure for the weak coherence. In the two-qubit system, we prove that there exists a maximally coherent state with respect to B when k=2, but not when k=3.
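The correlation function m(e,f) is defined in the paper itself; the standard notion it builds on can be checked directly: two orthonormal bases e and f of a d-dimensional space are mutually unbiased when every overlap satisfies |<e_i|f_j>|^2 = 1/d. A small numerical sketch (illustrative only):

```python
import numpy as np

def is_mutually_unbiased(e, f, tol=1e-12):
    """Bases given as columns of unitary matrices e, f on C^d.
    Mutually unbiased iff |<e_i|f_j>|^2 == 1/d for all i, j."""
    d = e.shape[0]
    overlaps = np.abs(e.conj().T @ f) ** 2
    return bool(np.allclose(overlaps, 1.0 / d, atol=tol))

d = 2
computational = np.eye(d, dtype=complex)                       # qubit basis {|0>, |1>}
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # {|+>, |->}

print(is_mutually_unbiased(computational, hadamard))       # True
print(is_mutually_unbiased(computational, computational))  # False
```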

]]>Entropy doi: 10.3390/e24050658

Authors: Sankar Das Ganesh Ghorai Qin Xin

In this study, a novel concept of the picture fuzzy threshold graph (PFTG) is introduced. It has been shown that PFTGs are free of alternating 4-cycles and can be constructed by repeatedly adding a dominating or an isolated node. Several properties of PFTGs are discussed, and it is shown that every picture fuzzy graph (PFG) is equivalent to a PFTG under certain conditions. Also, the underlying crisp graph (UCG) of a PFTG is a split graph (SG), and conversely, a given SG can be used to construct a PFTG. A PFTG can be decomposed in a unique way, and it generates three distinct fuzzy threshold graphs (FTGs). Furthermore, two important parameters of PFGs are defined: the picture fuzzy (PF) threshold dimension (TD) and the PF partition number (PN). Several properties of the TD and PN are also discussed. Lastly, an application of these developed results is presented in controlling medicine resources.

]]>Entropy doi: 10.3390/e24050657

Authors: Kady Sako Berthine Nyunga Mpinda Paulo Canas Rodrigues

Financial and economic time series forecasting has never been an easy task due to its sensitivity to political, economic and social factors. For this reason, people who invest in financial markets and currency exchange are usually looking for robust models that can help them maximize their profits and minimize their losses as much as possible. Fortunately, various recent studies have suggested that a special type of Artificial Neural Network (ANN) called the Recurrent Neural Network (RNN) could improve the predictive accuracy for the behavior of financial data over time. This paper aims to forecast: (i) the closing price of eight stock market indexes; and (ii) the closing price of six currency exchange rates related to the USD, using the RNN model and its variants: the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). The results show that the GRU gives the overall best results, especially for univariate out-of-sample forecasting of the currency exchange rates and multivariate out-of-sample forecasting of the stock market indexes.
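The GRU's gating mechanism can be sketched in a few lines of NumPy; the following is a single forward pass with random weights (biases omitted for brevity), an illustration of the update rule rather than the trained models evaluated in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: gates decide how much past state to keep."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde        # blend old and new state

rng = np.random.default_rng(42)
n_in, n_hid = 1, 8                            # univariate closing-price input
W = {k: rng.normal(0, 0.1, (n_hid, n_in)) for k in "zrh"}
U = {k: rng.normal(0, 0.1, (n_hid, n_hid)) for k in "zrh"}

h = np.zeros(n_hid)
for price in [1.02, 0.99, 1.05, 1.01]:        # toy normalized price series
    h = gru_step(np.array([price]), h,
                 W["z"], U["z"], W["r"], U["r"], W["h"], U["h"])
print(h.shape)  # hidden state summarizing the sequence
```

In a forecasting setup, the final hidden state would feed a linear output layer predicting the next closing price, and the weights would be trained by backpropagation through time.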

]]>Entropy doi: 10.3390/e24050656

Authors: Ting-Ting Wang Shu-Chuan Chu Chia-Cheng Hu Han-Dong Jia Jeng-Shyang Pan

Manually designing a convolutional neural network (CNN) is an important deep learning method for solving the problem of image classification. However, most existing CNN structure designs consume a significant amount of time and computing resources. Over the years, the demand for neural architecture search (NAS) methods has been on the rise. Therefore, we propose a novel deep architecture generation model based on Aquila optimization (AO) and a genetic algorithm (GA). The main contributions of this paper are as follows: Firstly, a new encoding strategy representing the CNN structure is proposed, so that the evolutionary computing algorithm can be combined with the CNN. Secondly, a new mechanism for updating location is proposed, which cleverly incorporates three typical operators from the GA into our model so that it can find the optimal solution in the limited search space. Thirdly, the proposed method can deal with variable-length CNN structures by adding skip connections. Fourthly, combining traditional CNN layers and residual blocks and introducing a grouping strategy provides greater possibilities for searching for the optimal CNN structure. Additionally, we use two notable datasets, MNIST and CIFAR-10, for model evaluation. The experimental results show that our proposed model achieves good results in terms of search accuracy and time.

]]>Entropy doi: 10.3390/e24050652

Authors: Liuhai Wang Xin Du Bo Jiang Weifeng Pan Hua Ming Dongsheng Liu

Software maintenance is indispensable in the software development process. Developers need to spend a lot of time and energy understanding the software when maintaining it, which increases the difficulty of software maintenance. Understanding the software through its key classes is a feasible method, and identifying the key classes can help developers understand the software more quickly. Existing techniques for key class identification mainly use static analysis to extract software structure information. Such structure information may contain redundant relationships that do not exist when the software runs, and it ignores the actual number of interactions between classes. In this paper, we propose an approach based on dynamic analysis and entropy-based metrics to identify key classes in Java GUI software systems, called KEADA (identifying KEy clAsses based on Dynamic Analysis and entropy-based metrics). First, KEADA extracts software structure information by recording the calling relationships between classes while the software runs; this structure information takes into account the actual interactions of classes. Second, KEADA represents the structure information as a weighted directed network and calculates the importance of each node using an entropy-based metric, the One-order Structural Entropy (OSE). Third, KEADA ranks classes in descending order of their OSE values and selects a small number of classes as key class candidates. To verify the effectiveness of our approach, we conducted experiments on three Java GUI software systems and compared our approach with seven state-of-the-art approaches. We used the Friedman test to evaluate all approaches, and the results demonstrate that our approach performs best on all software systems.
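The OSE metric itself is defined in the paper; as an illustrative stand-in for how an entropy-based metric can rank nodes of a weighted directed call network, one can score each class by the Shannon entropy of its outgoing call-weight distribution. This is a hypothetical toy metric, not the authors' OSE:

```python
import numpy as np

def interaction_entropy(graph, node):
    """Shannon entropy of the node's outgoing call-weight distribution.
    graph: {caller: {callee: call_count}} recorded at runtime."""
    weights = np.array(list(graph.get(node, {}).values()), dtype=float)
    if weights.size == 0:
        return 0.0
    p = weights / weights.sum()
    return float(-(p * np.log2(p)).sum())

# Toy runtime call graph: class A interacts broadly, class B narrowly
calls = {
    "A": {"B": 10, "C": 10, "D": 10, "E": 10},
    "B": {"C": 40},
}
ranking = sorted(calls, key=lambda n: interaction_entropy(calls, n), reverse=True)
print(ranking)  # ['A', 'B']: A's interactions are more spread out
```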

]]>Entropy doi: 10.3390/e24050655

Authors: Zhiheng Zeng Bin Li Chongyang Han Weibin Wu Xiaoming Wang Jian Xu Zefeng Zheng Baoqi Ma Zhibiao Hu

The performance evaluation and optimization of the energy conversion system design of an energy-intensive drying system, using a method combining exergy and economic analysis, is a theme of global concern. In this study, a gas-fired industrial drying system for black tea with a capacity of 100 kg/h is used to investigate the exergetic and economic performance through the exergy and exergoeconomic methodology. The results show that the drying rate of tea varies from a maximum of 3.48 g water/(g dry matter&middot;h) to a minimum of 0.18 g water/(g dry matter&middot;h). The highest exergy destruction rate is found for the drying chamber (74.92 kW), followed by the combustion chamber (20.42 kW) in the initial drying system, and 51.83 kW and 21.15 kW, respectively, in the redrying system. Similarly, the highest cost of the exergy destruction rate is found for the drying chamber (18.497 USD/h), followed by the combustion chamber (5.041 USD/h) in the initial drying system, and 12.796 USD/h and 5.222 USD/h, respectively, in the redrying system. Furthermore, we analyzed the unit exergy rate consumed and the unit exergy cost of water removal in different drying sections of the drying system, and determined the optimal ordering of each component. These results indicate that, whether from an energy or an economic perspective, component improvements should prioritize the drying chamber. Accordingly, minimizing exergy destruction and the cost of the exergy destruction rate can be considered as a strategy for improving energy and economic performance. Overall, the main results provide a more intuitive judgment for system improvement and optimization, and the exergy and exergoeconomic methodology can be recommended for agricultural product industrial drying from the perspective of exergoeconomics.

]]>Entropy doi: 10.3390/e24050654

Authors: Ignazio Blanco Gianluca Cicala Giuseppe Recca Claudio Tosto

This research focuses on the thermal characterization of 3D-printed parts obtained via fused filament fabrication (FFF) technology, which uses a poly(lactic acid) (PLA)-based filament filled with milled carbon fibers (MCF) from pyrolysis at different percentages by weight (10, 20, 30 wt%). Differential scanning calorimetry (DSC) and thermal conductivity measurements were used to evaluate the thermal characteristics, morphological features, and heat transport behavior of the printed specimens. The experimental results showed that the addition of MCF to the PLA matrix improved the conductive properties. Scanning electron microscopy (SEM) micrographs were used to obtain further information about the porosity of the systems.

]]>Entropy doi: 10.3390/e24050653

Authors: Hongyan Liu Daokui Qu Fang Xu Zhenjun Du Kai Jia Mingmin Liu

With the rapid development of robot perception and planning technology, robots are gradually getting rid of fixed fences and working closely with humans in shared workspaces. The safety of human-robot coexistence has become critical. Traditional motion planning methods perform poorly in dynamic environments where obstacle motion is highly uncertain. In this paper, we propose an efficient online trajectory generation method to help manipulators plan autonomously in dynamic environments. Our approach starts with an efficient kinodynamic path search algorithm that considers the link constraints and finds a safe and feasible initial trajectory with minimal control effort and time. To increase the clearance between the trajectory and obstacles and to improve smoothness, a trajectory optimization method using the B-spline convex hull property is adopted to minimize a penalty combining collision cost, smoothness, and dynamical feasibility. To avoid collisions between the links and obstacles, as well as collisions among the links themselves, a constraint-relaxed link collision avoidance method is developed by solving a quadratic programming problem. Compared with an existing state-of-the-art planning method for dynamic environments and an advanced trajectory optimization method, our method can generate a smoother, collision-free trajectory in less time with a higher success rate. Detailed simulation comparison experiments, as well as real-world experiments, are reported to verify the effectiveness of our method.
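The B-spline convex hull property the optimization relies on can be verified numerically: every point of a cubic B-spline segment is a convex combination of its four control points, so keeping the control points clear of obstacles keeps the whole trajectory clear. A sketch with the uniform cubic basis matrix (illustrative, not the authors' planner):

```python
import numpy as np

# Uniform cubic B-spline basis matrix: the basis functions are non-negative
# and sum to 1, so curve points are convex combinations of 4 control points.
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def bspline_segment(P, ts):
    """Evaluate one cubic B-spline segment for 4 control points P (4 x dim)."""
    T = np.stack([ts**3, ts**2, ts, np.ones_like(ts)], axis=1)
    return T @ M @ P

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])  # 2D waypoints
pts = bspline_segment(ctrl, np.linspace(0, 1, 200))

# Convex hull property: every curve point lies inside the control-point box,
# so clearance constraints on control points bound the whole segment.
inside = bool(np.all(pts >= ctrl.min(axis=0)) and np.all(pts <= ctrl.max(axis=0)))
print(inside)  # True
```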

]]>Entropy doi: 10.3390/e24050651

Authors: Ilias Kalouptsoglou Miltiadis Siavvas Dionysios Kehagias Alexandros Chatzigeorgiou Apostolos Ampatzoglou

Software security is a very important aspect for software development organizations who wish to provide high-quality and dependable software to their consumers. A crucial part of software security is the early detection of software vulnerabilities. Vulnerability prediction is a mechanism that facilitates the identification (and, in turn, the mitigation) of vulnerabilities early enough during the software development cycle. The scientific community has recently focused a lot of attention on developing Deep Learning models using text mining techniques for predicting the existence of vulnerabilities in software components. However, there are also studies that examine whether the utilization of statically extracted software metrics can lead to adequate Vulnerability Prediction Models. In this paper, both software metrics- and text mining-based Vulnerability Prediction Models are constructed and compared. A combination of software metrics and text tokens using deep-learning models is examined as well in order to investigate if a combined model can lead to more accurate vulnerability prediction. For the purposes of the present study, a vulnerability dataset containing vulnerabilities from real-world software products is utilized and extended. The results of our analysis indicate that text mining-based models outperform software metrics-based models with respect to their F2-score, whereas enriching the text mining-based models with software metrics was not found to provide any added value to their predictive performance.

]]>Entropy doi: 10.3390/e24050650

Authors: Fathi Hamdi Senda Agrebi Mohamed Salah Idrissi Kambale Mondo Zeineb Labiadh Amsini Sadiki Mouldi Chrigui

The behavior of the spray in a Reactivity Controlled Compression Ignition (RCCI) dual fuel engine and the subsequent emissions formation are numerically addressed. Five spray cone angles ranging between 5&deg; and 25&deg; with an advanced injection timing of 22&deg; Before Top Dead Center (BTDC) are considered. The objective of this paper is twofold: (a) to enhance engine behavior in terms of performance and consequent emissions by adjusting the spray cone angle and (b) to determine the exergy efficiency for each case. The simulations are conducted using the ANSYS Forte tool. The turbulence model is the Renormalization Group (RNG) k-epsilon model, selected for its effectiveness in strongly sheared flows. The spray breakup is governed by the hybrid Kelvin&ndash;Helmholtz and Rayleigh&ndash;Taylor spray models. An n-heptane surrogate mechanism, which contains 425 species and 3128 reactions, is used for diesel combustion modeling. The obtained results for methane/diesel engine combustion, under low load operating conditions, include the distribution of heat transfer flux, pressure, temperature, Heat Release Rate (HRR), and Sauter Mean Diameter (SMD). An exergy balance analysis is conducted to quantify the engine performance. Output emissions at the outlet of the combustion chamber are also monitored in this work. Investigations show a pressure decrease of roughly 8% for a cone angle &theta; = 5&deg;, compared to the experimental measurement (&theta; = 10&deg;). A broader cone angle produces a higher mass of NOx. The optimum spray cone angle, in terms of exergy efficiency, performance, and consequent emissions, is found to lie at 15&deg; &le; &theta; &le; 20&deg;.

]]>Entropy doi: 10.3390/e24050649

Authors: Adrián A. Budini

Quantum memory effects can be qualitatively understood as a consequence of an environment-to-system backflow of information. Here, we analyze and compare how this concept is interpreted and implemented in different approaches to quantum non-Markovianity. We study a nonoperational approach, defined by the distinguishability between two system states characterized by different initial conditions, and an operational approach, which is defined by the correlation between different outcomes associated with successive measurement processes performed on the system of interest. The differences, limitations, and advantages of each approach are characterized in detail by considering diverse system&ndash;environment models and dynamics. As a specific example, we study a non-Markovian depolarizing map induced by the interaction of the system of interest with an environment characterized by incoherent and coherent self-dynamics.

]]>Entropy doi: 10.3390/e24050648

Authors: Xinyang Deng Tianhan Gao Nan Guo Cong Zhao Jiayu Qi

In vehicular ad hoc networks (VANETs), pseudonym change is considered a vital mechanism to support vehicles&rsquo; anonymity. Due to the complicated road conditions and network environment, it is a challenge to design an efficient and adaptive pseudonym change protocol. In this paper, a pseudonym change protocol for location privacy preserving (PCP) is proposed. We first present the requirements of pseudonym change in different scenarios. According to variable network states and road conditions, vehicles are able to take different pseudonym change strategies to resist tracking by global passive adversaries. Furthermore, the registration protocol, authentication protocol, pseudonym issuance protocol, and pseudonym revocation protocol are introduced for the pseudonym management mechanism. As a consequence, it is not feasible for global passive adversaries to track a vehicle for a long time and obtain its trajectory. The analysis results show that the security and performance of PCP are improved compared with traditional protocols.

]]>Entropy doi: 10.3390/e24050647

Authors: Bikramaditya Ghosh Elie Bouri

The Bitcoin mining process is energy intensive, which can hamper the much-desired ecological balance. Given that the persistence of high levels of energy consumption of Bitcoin could have permanent policy implications, we examine the presence of long memory in the daily data of the Bitcoin Energy Consumption Index (BECI) (BECI upper bound, BECI lower bound, and BECI average) covering the period 25 February 2017 to 25 January 2022. Employing fractionally integrated GARCH (FIGARCH) and multifractal detrended fluctuation analysis (MFDFA) models to estimate the order of fractional integrating parameter and compute the Hurst exponent, which measures long memory, this study shows that distant series observations are strongly autocorrelated and long memory exists in most cases, although mean-reversion is observed at the first difference of the data series. Such evidence for the profound presence of long memory suggests the suitability of applying permanent policies regarding the use of alternate energy for mining; otherwise, transitory policy would quickly become obsolete. We also suggest the replacement of &lsquo;proof-of-work&rsquo; with &lsquo;proof-of-space&rsquo; or &lsquo;proof-of-stake&rsquo;, although with a trade-off (possible security breach) to reduce the carbon footprint, the implementation of direct tax on mining volume, or the mandatory use of carbon credits to restrict the environmental damage.
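The idea behind the long-memory estimation can be conveyed with a minimal monofractal DFA estimator of the Hurst exponent (H near 0.5 means no long memory; H above 0.5 means persistence). This is an illustration on synthetic noise, not the MFDFA or FIGARCH implementation used in the study:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: slope of log F(s) vs log s."""
    y = np.cumsum(x - x.mean())                    # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Detrend each segment with a linear fit; collect squared fluctuations
        sq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(7)
white = rng.normal(size=4096)                      # i.i.d. noise: H ~ 0.5
H = dfa_hurst(white)
print(round(H, 2))
```

Applied to an energy-consumption series instead of `white`, an exponent well above 0.5 would be the signature of the persistence the authors report.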

]]>Entropy doi: 10.3390/e24050646

Authors: Hans Fuchs Michele D’Anna Federico Corni

We discuss how to construct a direct and experientially natural path to entropy as an extensive quantity of a macroscopic theory of thermal systems and processes. The scientific aspects of this approach are based upon continuum thermodynamics. We ask what the roots of an experientially natural approach might be. To this end, we investigate and describe in some detail (a) how humans experience and conceptualize an extensive thermal quantity (i.e., an amount of heat), and (b) how this concept evolved during the early development of the science of thermal phenomena (beginning with the Experimenters of the Accademia del Cimento and ending with Sadi Carnot). We show that a direct approach to entropy, as the extensive quantity of models of thermal systems and processes, is possible, and how it can be applied to the teaching of thermodynamics for various audiences.

]]>Entropy doi: 10.3390/e24050645

Authors: Sfundo C. Gumede Keshlan S. Govinder Sunil D. Maharaj

The equation y_xx = f(x)y^2 + g(x)y^3 is the charged generalization of the Emden&ndash;Fowler equation that is crucial in the study of spherically symmetric shear-free spacetimes. This version arises from the Einstein&ndash;Maxwell system for a charged shear-free matter distribution. We integrate this equation and find a new first integral. For this solution to exist, two integral equations arise as integrability conditions. The integrability conditions can be transformed to nonlinear differential equations, which give explicit forms for f(x) and g(x) in terms of elementary and special functions. The explicit forms f(x) ~ (1/x^5)(1 &minus; 1/x)^(&minus;11/5) and g(x) ~ (1/x^6)(1 &minus; 1/x)^(&minus;12/5) arise as repeated roots of a fourth-order polynomial. This is a new solution to the Einstein&ndash;Maxwell equations. Our result complements earlier work in neutral and charged matter showing that the complexity of a charged self-gravitating fluid is connected to the existence of a first integral.

]]>Entropy doi: 10.3390/e24050644

Authors: Lu Li Zhong-Xiao Man Yun-Jie Xia

We study the steady-state thermodynamics of a cascaded collision model where two subsystems S1 and S2 collide successively with an environment R in a cascaded fashion. We first formulate general expressions for the thermodynamic quantities and identify the nonlocal forms of work and heat that result from the cascaded interactions of the system with the common environment. Focusing on a concrete system of two qubits, we then show that, to be able to unidirectionally influence the thermodynamics of S2, the preceding S1&minus;R interaction should not be energy-conserving. We finally demonstrate that the steady-state coherence generated in the cascaded model is a useful resource for extracting work, quantified by ergotropy, from the system. Our results provide a comprehensive understanding of the thermodynamics of the cascaded model and a possible way to achieve unidirectional control of the thermodynamic process in the steady-state regime.

]]>Entropy doi: 10.3390/e24050643

Authors: Shideh Rezaeifar Slava Voloshynovskiy Meisam Asgari Jirhandeh Vitaliy Kinakh

With the recent developments of Machine Learning as a Service (MLaaS), various privacy concerns have been raised. Having access to the user&rsquo;s data, an adversary can design attacks with different objectives, namely, reconstruction or attribute inference attacks. In this paper, we propose two different training frameworks for an image classification task while preserving user data privacy against the two aforementioned attacks. In both frameworks, an encoder is trained with a contrastive loss, providing a superior utility&ndash;privacy trade-off. In the reconstruction attack scenario, a supervised contrastive loss is employed to provide maximal discrimination for the targeted classification task. The encoded features are further perturbed by the obfuscator module to remove all redundant information. Moreover, the obfuscator module is jointly trained with a classifier to minimize the correlation between the private feature representation and the original data while retaining the model utility for classification. For the attribute inference attack, we aim to provide a representation of the data that is independent of the sensitive attribute. Therefore, the encoder is trained with supervised and private contrastive losses. Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of sensitive attributes while maintaining the classification performance on the target attribute. The reported results on the CelebA dataset validate the effectiveness of the proposed frameworks.

]]>Entropy doi: 10.3390/e24050642

Authors: Constanta Zoie Radulescu Marius Radulescu Radu Boncea

The COVID-19 pandemic caused serious health and societal damage across the world in 2020&ndash;2022. Its study represents a tremendous challenge for the scientific community. A correct evaluation and analysis of the situation can lead to the elaboration of the most efficient strategies and policies to control and mitigate its propagation. The paper proposes a Multi-Criteria Decision Support (MCDS) approach based on the combination of three methods: the Group Analytic Hierarchy Process (GAHP), which is a subjective group weighting method; the Extended Entropy Weighting Method (EEWM), which is an objective weighting method; and the COmplex PRoportional ASsessment (COPRAS), which is a multi-criteria method. COPRAS uses the combined weights calculated by the GAHP and EEWM. Sum normalization (SN) is used for COPRAS and EEWM. An extended entropy is proposed in the EEWM. The MCDS approach is implemented for the development of a complex COVID-19 indicator called COVIND, which aggregates several countries&rsquo; COVID-19 indicators over the fourth COVID-19 wave for a group of European countries. Based on these indicators, a ranking of the countries is obtained. An analysis of the obtained rankings is realized by varying two parameters: a parameter that describes the combination of weights obtained with the EEWM and GAHP, and the parameter of the extended entropy function. A correlation analysis between the new indicator and the general country indicators is performed. The MCDS approach provides policy makers with decision support able to synthesize the available information on the fourth wave of the COVID-19 pandemic.
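The weighting-plus-ranking pipeline the abstract describes can be illustrated with the classical (non-extended) entropy weighting method feeding a standard COPRAS ranking. The sketch below is generic: it does not reproduce the paper's extended entropy, the GAHP combination, or the COVID-19 data, and the toy decision matrix is invented.

```python
import math

def entropy_weights(X):
    """Classical entropy weighting: rows are alternatives, columns criteria.
    Criteria whose values are more spread out receive larger weights."""
    m = len(X)
    div = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        s = sum(col)
        p = [v / s for v in col]                      # sum normalization
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        div.append(1.0 - e)                           # divergence degree
    total = sum(div)
    return [d / total for d in div]

def copras_scores(X, w, benefit):
    """Standard COPRAS relative significance Q_i; benefit[j] marks
    maximized criteria, the rest are treated as cost criteria."""
    D = [[w[j] * X[i][j] / sum(r[j] for r in X) for j in range(len(w))]
         for i in range(len(X))]
    Sp = [sum(v for j, v in enumerate(row) if benefit[j]) for row in D]
    Sm = [sum(v for j, v in enumerate(row) if not benefit[j]) for row in D]
    tot, inv = sum(Sm), sum(1.0 / s for s in Sm)
    return [sp + tot / (sm * inv) for sp, sm in zip(Sp, Sm)]

# toy matrix: 3 "countries" x 3 indicators, the last one a cost criterion
X = [[100.0, 0.8, 20.0], [80.0, 0.9, 35.0], [120.0, 0.7, 25.0]]
w = entropy_weights(X)
Q = copras_scores(X, w, benefit=[True, True, False])
ranking = sorted(range(len(Q)), key=lambda i: -Q[i])  # best alternative first
```

In the paper's approach the entropy-based weights are additionally blended with subjective GAHP weights before entering COPRAS; here only the objective half of that combination is sketched.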

]]>Entropy doi: 10.3390/e24050641

Authors: Fei Luo Cheng Chen Joel Fuentes Yong Li Weichao Ding

As a non-deterministic polynomial-time hard (NP-hard) problem, the shortest common supersequence (SCS) problem is normally solved by heuristic or metaheuristic algorithms. One type of metaheuristic algorithm with relatively good performance on SCS problems is the chemical reaction optimization (CRO) algorithm. Several CRO-based proposals exist; however, they suffer from problems such as unstable molecular population quality, uneven distribution, and premature convergence to locally optimal solutions. To overcome these problems, we propose a new approach for the search mechanism of CRO-based algorithms. It combines the opposition-based learning (OBL) mechanism with the previously studied improved chemical reaction optimization (IMCRO) algorithm. This upgraded version is dubbed OBLIMCRO. In its initialization phase, the opposite population is constructed from a random population based on OBL; the initial population is then generated by selecting the molecules with the lowest potential energy from the random and opposite populations. In the iterative phase, reaction operators create new molecules, after which the final population update is performed. Experiments show that the average running time of OBLIMCRO is more than 50% less than the average running times of CRO_SCS and its baseline algorithm, IMCRO, on deoxyribonucleic acid (DNA) and protein datasets.
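The opposition-based initialization step described above has a standard generic form. The paper's molecules encode candidate supersequences; the continuous search space and the sphere "potential energy" below are simplifying assumptions for illustration only.

```python
import random

def opposite(x, lo, hi):
    """OBL: the opposite of x_i within bounds [lo_i, hi_i] is lo_i + hi_i - x_i."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def obl_init(pop_size, lo, hi, energy, rng):
    """Generate a random population and its opposite population, then keep
    the pop_size molecules with the lowest potential energy from their union."""
    rand = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    opp = [opposite(x, lo, hi) for x in rand]
    return sorted(rand + opp, key=energy)[:pop_size]

rng = random.Random(0)
lo, hi = [0.0] * 4, [1.0] * 4
sphere = lambda x: sum(xi * xi for xi in x)   # stand-in "potential energy"
pop = obl_init(10, lo, hi, sphere, rng)       # initial population, sorted best-first
```

Evaluating both a candidate and its opposite costs one extra energy evaluation per molecule but, on average, starts the search closer to an optimum than purely random initialization.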

]]>Entropy doi: 10.3390/e24050640

Authors: Zhongqi Cai Enrico Gerding Markus Brede

Using observational data to infer the coupling structure or parameters in dynamical systems is important in many real-world applications. In this paper, we propose a framework of strategically influencing a dynamical process that generates observations with the aim of making hidden parameters more easily inferable. More specifically, we consider a model of networked agents who exchange opinions subject to voting dynamics. Agent dynamics are subject to peer influence and to the influence of two controllers. One of these controllers is treated as passive and we presume its influence is unknown. We then consider a scenario in which the other, active controller attempts to infer the passive controller&rsquo;s influence from observations. Moreover, we explore how the active controller can strategically deploy its own influence to manipulate the dynamics with the aim of accelerating the convergence of its estimates of the opponent. Along with benchmark cases, we propose two heuristic algorithms for designing optimal influence allocations. We establish that the proposed algorithms accelerate the inference process by strategically interacting with the network dynamics. Investigating configurations in which optimal control is deployed, we first find that agents with higher degrees and larger opponent allocations are harder to predict. Second, even factoring in strategic allocations, the opponent&rsquo;s influence is typically harder to predict the more degree-heterogeneous the social network is.
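A minimal sketch of the kind of controlled voting dynamics described above: a random agent copies either a random network neighbour or one of two external controllers, chosen with probability proportional to peer degree and the controllers' influence allocations. The graph, allocations, and update rule are invented for illustration; the paper's benchmark cases and heuristic allocation algorithms are not reproduced.

```python
import random

def voter_step(opinions, neighbors, a_alloc, b_alloc, rng):
    """One asynchronous voter-model update. Controller A pushes opinion 1
    and controller B pushes opinion 0; each acts on agent i with weight
    a_alloc[i] / b_alloc[i] relative to i's network neighbours."""
    i = rng.randrange(len(opinions))
    peers = neighbors[i]
    total = len(peers) + a_alloc[i] + b_alloc[i]
    r = rng.uniform(0.0, total)
    if r < len(peers):
        opinions[i] = opinions[rng.choice(peers)]   # peer influence
    elif r < len(peers) + a_alloc[i]:
        opinions[i] = 1                             # active controller A
    else:
        opinions[i] = 0                             # passive controller B
    return opinions

rng = random.Random(7)
neighbors = [[1, 2], [0, 2], [0, 1], [2]]   # small directed graph
opinions = [0, 1, 0, 1]
a_alloc = [2.0, 0.0, 0.0, 0.0]              # A concentrates influence on node 0
b_alloc = [0.5, 0.5, 0.5, 0.5]              # B spreads influence uniformly
for _ in range(100):
    voter_step(opinions, neighbors, a_alloc, b_alloc, rng)
```

In the inference setting, the active controller would observe trajectories of such opinion flips and estimate b_alloc from them, while choosing a_alloc to make that estimate converge faster.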

]]>Entropy doi: 10.3390/e24050639

Authors: Tyler S. Barker Massimiliano Pierobon Peter J. Thomas

Information transmission and storage have gained traction as unifying concepts to characterize biological systems and their chances of survival and evolution at multiple scales. Despite the potential for an information-based mathematical framework to offer new insights into life processes and ways to interact with and control them, the main legacy is that of Shannon, where a purely syntactic characterization of information scores systems on the basis of their maximum information efficiency. Such metrics seem not entirely suitable for biological systems, where the transmission and storage of different pieces of information (carrying different semantics) can result in different chances of survival. Based on an abstract mathematical model able to capture the parameters and behaviors of a population of single-celled organisms whose survival is correlated with information retrieval from the environment, this paper explores the aforementioned disconnect between classical information theory and biology. We present a model, specified as a computational state machine, which is then utilized in a simulation framework constructed specifically to reveal the emergence of &ldquo;subjective information&rdquo;, i.e., a trade-off between a living system&rsquo;s capability to maximize the acquisition of information from the environment and the maximization of its growth and survival over time. Simulations clearly show that a strategy that maximizes information efficiency results in a lower growth rate than a strategy that gains less information but whose information carries a higher meaning for survival.

]]>Entropy doi: 10.3390/e24050638

Authors: Yixiang Ren Zhenhui Ye Guanghua Song Xiaohong Jiang

Mobile crowdsensing (MCS) has attracted considerable attention in the past few years as a new paradigm for large-scale information sensing. Unmanned aerial vehicles (UAVs) have played a significant role in MCS tasks and served as crucial nodes in the newly proposed space-air-ground integrated network (SAGIN). In this paper, we incorporate SAGIN into the MCS task and present a Space-Air-Ground integrated Mobile CrowdSensing (SAG-MCS) problem. Based on multi-source observations from embedded sensors and satellites, an aerial UAV swarm is required to carry out energy-efficient data collection and recharging tasks. To date, few studies have explored such a multi-task MCS problem with the cooperation of a UAV swarm and satellites. To address this multi-agent problem, we propose a novel deep reinforcement learning (DRL) based method called Multi-Scale Soft Deep Recurrent Graph Network (ms-SDRGN). Our ms-SDRGN approach incorporates a multi-scale convolutional encoder to process multi-source raw observations for better feature exploitation. We also use a graph attention mechanism to model inter-UAV communications and aggregate extra neighboring information, and utilize a gated recurrent unit for long-term performance. In addition, a stochastic policy can be learned through a maximum-entropy method with an adjustable temperature parameter. Specifically, we design a heuristic reward function to encourage the agents to achieve global cooperation under partial observability. We train the model to convergence and conduct a series of case studies. Evaluation results show statistical significance and that ms-SDRGN outperforms three state-of-the-art DRL baselines in SAG-MCS. Compared with the best-performing baseline, ms-SDRGN improves the reward by 29.0% and the CFE score by 3.8%. We also investigate the scalability and robustness of ms-SDRGN in DRL environments with diverse observation scales or demanding communication conditions.

]]>Entropy doi: 10.3390/e24050637

Authors: Homa Nikbakht Michèle Wigger Malcolm Egan Shlomo Shamai (Shitz) Jean-Marie Gorce H. Vincent Poor

Fifth generation mobile communication systems (5G) have to accommodate both Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) services. While eMBB applications support high data rates, URLLC services aim at guaranteeing low latency and high reliability. eMBB and URLLC services are scheduled on the same frequency band, where the different latency requirements of the communications render their coexistence challenging. In this survey, we review, from an information-theoretic perspective, coding schemes that simultaneously accommodate URLLC and eMBB transmissions and show that they outperform traditional scheduling approaches. Various communication scenarios are considered, including point-to-point channels, broadcast channels, interference networks, cellular models, and cloud radio access networks (C-RANs). The main focus is on the set of rate pairs that can simultaneously be achieved for URLLC and eMBB messages, which captures well the tension between the two types of communications. We also discuss finite-blocklength results, where the measure of interest is the set of error probability pairs that can simultaneously be achieved in the two communication regimes.

]]>Entropy doi: 10.3390/e24050636

Authors: Edgar Vicente Torres González Sergio Castro Hernández Helen Denise Lugo Méndez Fernando Gabriel Arroyo Cabañas Javier Valencia López Raúl Lugo Leyte

Nowadays, in Mexico, most of the installed electricity generation capacity corresponds to combined cycles, representing 37.1%. For this reason, it is important to maintain these cycles in good operating condition, with the least environmental impact. An exergoeconomic and environmental analysis is carried out to compare the operation of the combined cycle with and without postcombustion, through the comparison of exergoeconomic and environmental indicators. With the productive structure of the energy system, the process of formation of the final products and the residues is identified, and an allocation criterion is also used to impute the formation cost of residues to the productive components related to their formation. This criterion considers the irreversibilities generated in each productive component that participates in the formation of a residue. The compositions of the pollutant gases emitted are obtained, and their environmental impact is determined. The unit exergoeconomic cost of the power output of the gas turbine is lower in the combined cycle with postcombustion, indicating greater efficiency in the process of obtaining this energy stream, while the environmental indicators of global warming, smog formation and acid rain formation are higher in the combined cycle with postcombustion, these differences being 5.22%, 5.53% and 5.30%, respectively.

]]>Entropy doi: 10.3390/e24050635

Authors: Hiroto Kuramata Hideki Yagi

We consider a binary classification problem for a test sequence to determine from which source the sequence is generated. The system classifies the test sequence based on empirically observed (training) sequences obtained from unknown sources P1 and P2. We analyze the asymptotic fundamental limits of statistical classification for sources with multiple subclasses. We investigate the first- and second-order maximum error exponents under the constraint that the type-I error probability for all pairs of distributions decays exponentially fast and the type-II error probability is upper bounded by a small constant. In this paper, we first give a classifier which achieves the asymptotically maximum error exponent in the class of deterministic classifiers for sources with multiple subclasses, and then provide a characterization of the first-order error exponent. We next provide a characterization of the second-order error exponent in the case where only P2 has multiple subclasses but P1 does not. We generalize our results to classification in the case that P1 and P2 are a stationary and memoryless source and a mixed memoryless source with general mixture, respectively.

]]>Entropy doi: 10.3390/e24050634

Authors: Junlan Xian Jingyi Zhang

In this work, we study the Hawking temperature of the global monopole spacetime (non-spherical symmetrical black hole) based on the topological method proposed by Robson, Villari, and Biancalana (RVB). By connecting the Hawking temperature with the topological properties of black holes, the Hawking temperature of the global monopole spacetime can be obtained by the RVB method. We also discuss the Hawking temperature in massive gravity, and find that the effect of the mass term cannot be ignored in the calculation of the Hawking temperature; the corrected Hawking temperature in massive gravity can be derived by adding an integral constant, which can be determined by the standard definition.

]]>Entropy doi: 10.3390/e24050632

Authors: Eugene Korotkov Konstantin Zaytsev Alexey Fedorov

In this paper, we attempted to find a relation between bacterial living conditions and the algorithmic complexity of their genomes. We developed a probabilistic mathematical method for evaluating the irregularity of k-word (6-base) occurrence in bacterial gene coding sequences. For this, coding sequences from different bacterial genomes were analyzed, and as an index of k-word occurrence irregularity we used W, which has a distribution similar to normal. The research results for bacterial genomes show that they can be divided into two uneven groups. The first, smaller one has W in the interval from 170 to 475, while for the second it is from 475 to 875. Plant, metazoan and virus genomes also have W in the same interval as the first bacterial group. We suggest that coding sequences of the second bacterial group are much less susceptible to evolutionary changes than those of the first group. The use of the W index as a measure of biological stress is also discussed.
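The W index itself is defined in the paper; as a generic illustration of what a k-word (k = 6) occurrence-irregularity score measures, one can compare a chi-square deviation of 6-mer counts from a uniform expectation between a repetitive sequence and a shuffled copy of it. This stand-in score is not the paper's W, and the sequences are invented.

```python
import random
from collections import Counter

def kword_irregularity(seq, k=6):
    """Chi-square deviation of k-word counts from a uniform expectation;
    a generic stand-in for an occurrence-irregularity index (not the paper's W).
    Higher values mean counts are concentrated in fewer words."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = sum(counts.values())
    expected = n / 4 ** k                 # uniform expectation over all 4^k words
    dev = sum((c - expected) ** 2 / expected for c in counts.values())
    return dev + (4 ** k - len(counts)) * expected   # words never observed

rng = random.Random(1)
repetitive = "ACGT" * 300                 # only four distinct 6-mers ever occur
shuffled = "".join(rng.sample(repetitive, len(repetitive)))
```

A strongly periodic sequence concentrates its windows on very few 6-mers and scores far higher than its shuffled counterpart, which spreads counts over many words.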

]]>Entropy doi: 10.3390/e24050633

Authors: Ginestra Bianconi

Maximum entropy network ensembles have been very successful in modelling sparse network topologies and in solving challenging inference problems. However, the sparse maximum entropy network models proposed so far have a fixed number of nodes and are typically not exchangeable. Here we consider hierarchical models for exchangeable networks in the sparse limit, i.e., with the total number of links scaling linearly with the total number of nodes. The approach is grand canonical, i.e., the number of nodes of the network is not fixed a priori: it is finite but can be arbitrarily large. In this way, the grand canonical network ensembles circumvent the difficulties in treating infinite sparse exchangeable networks, which according to the Aldous&ndash;Hoover theorem must vanish. The approach can treat networks with a given degree distribution or networks with a given distribution of latent variables. When only a subgraph induced by a subset of nodes is known, this model allows a Bayesian estimation of the network size and the degree sequence (or the sequence of latent variables) of the entire network, which can be used for network reconstruction.

]]>Entropy doi: 10.3390/e24050631

Authors: Mahnaz Ashrafi Hamid Soltanian-Zadeh

Recognition of interactions between brain regions is an important topic in neuroscience. Most studies use the Pearson correlation to find the interaction between regions. According to the experimental evidence, there is a nonlinear dependence between the activities of different brain regions that is ignored by the Pearson correlation as a linear measure. Typically, the average activity of each region is used as input because it is a univariate measure. This dimensional reduction, i.e., averaging, leads to a loss of spatial information across voxels within a region. In this study, we propose using an information-theoretic measure, multivariate mutual information (mvMI), as a nonlinear dependence measure to find the interaction between regions. This recently proposed measure simplifies the mutual information calculation using the Gaussian copula. Using simulated data, we show that using this measure overcomes the mentioned limitations. Additionally, using real resting-state fMRI data, we compare the level of significance and randomness of graphs constructed using different methods. Our results indicate that the proposed method estimates the functional connectivity more significantly and leads to a smaller number of random connections than the common measure, the Pearson correlation. Moreover, we find that the similarity of the estimated functional networks across individuals is higher when the proposed method is used.
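For the scalar two-signal case, the Gaussian-copula simplification amounts to rank-transforming each signal to standard-normal scores and applying the Gaussian mutual-information formula I = &minus;&frac12; ln(1 &minus; r&sup2;), with r the correlation of the transformed data. The sketch below covers only that scalar case; the paper's mvMI is the multivariate generalization (full covariance matrices over voxels), which is not shown, and the test signals are synthetic.

```python
import math
import random
from statistics import NormalDist

def _gaussianize(x):
    """Rank-transform a sample to standard-normal scores (copula transform)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    for r, i in enumerate(order):
        ranks[i] = (r + 1) / (len(x) + 1)          # uniform quantiles in (0, 1)
    nd = NormalDist()
    return [nd.inv_cdf(u) for u in ranks]

def gc_mutual_info(x, y):
    """Gaussian-copula MI estimate between two scalar signals, in nats."""
    gx, gy = _gaussianize(x), _gaussianize(y)
    n = len(gx)
    mx, my = sum(gx) / n, sum(gy) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(gx, gy)) / n
    vx = sum((a - mx) ** 2 for a in gx) / n
    vy = sum((b - my) ** 2 for b in gy) / n
    r = cov / math.sqrt(vx * vy)
    return -0.5 * math.log(1.0 - r * r)

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(500)]
noise = [rng.gauss(0, 1) for _ in range(500)]
y_dep = [math.exp(xi) + 0.1 * ni for xi, ni in zip(x, noise)]  # monotone nonlinear
```

Because the copula transform only looks at ranks, the strongly nonlinear (but monotone) dependence of y_dep on x yields a large MI estimate, while the independent noise signal yields an estimate near zero.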

]]>Entropy doi: 10.3390/e24050630

Authors: Shuai Chen Jinglin Li Chengpeng Jiang Wendong Xiao

Energy storage is an important adjustment method to improve the economy and reliability of a power system. Due to the complexity of the coupling relationships among elements such as the power source, load, and energy storage in a microgrid, there are problems of insufficient performance in terms of economic operation and efficient dispatching. In view of this, this paper proposes an energy storage configuration optimization model based on reinforcement learning and battery state-of-health assessment. Firstly, a quantitative assessment of battery health life loss based on deep learning was performed. Secondly, on the basis of considering comprehensive energy complementarity, a two-layer optimal configuration model was designed to optimize the capacity configuration and dispatch operation. Finally, the feasibility of the proposed method in microgrid energy storage planning and operation was verified experimentally. By integrating reinforcement learning and traditional optimization methods, the proposed method does not rely on accurate prediction of the power supply and load and can make decisions based only on the real-time information of the microgrid. In this paper, the advantages and disadvantages of the proposed method and existing methods are analyzed, and the results show that the proposed method can effectively improve the performance of dynamic planning for energy storage in microgrids.

]]>Entropy doi: 10.3390/e24050628

Authors: Xin Xing Demin Liu

In this paper, three iterative methods (Stokes, Newton and Oseen iterative methods) based on finite element discretization for the stationary micropolar fluid equations are proposed, analyzed and compared. The stability and error estimation for the Stokes and Newton iterative methods are obtained under the strong uniqueness conditions. In addition, the stability and error estimation for the Oseen iterative method are derived under the uniqueness condition of the weak solution. Finally, numerical examples test the applicability and the effectiveness of the three iterative methods.
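The trade-off among the three linearizations parallels the classic fixed-point versus Newton trade-off: cheaper linearized steps with linear convergence (Stokes/Oseen-type iterations) against costlier steps with quadratic convergence near the solution (Newton). A scalar analogy, purely illustrative and not the discretized micropolar system:

```python
import math

def picard(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard-type) iteration x_{k+1} = g(x_k); linear convergence."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k); quadratic convergence."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# solve x = cos(x), i.e. f(x) = x - cos(x) = 0, from the same starting point
xp, kp = picard(math.cos, 0.5)
xn, kn = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 0.5)
```

Both iterations reach the same root, but Newton needs far fewer (if more expensive) steps, mirroring why Newton-type iterations for the micropolar equations pay off only under suitable uniqueness/smallness conditions.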

]]>Entropy doi: 10.3390/e24050629

Authors: Yarden Katz Walter Fontana

Probabilistic inference&mdash;the process of estimating the values of unobserved variables in probabilistic models&mdash;has been used to describe various cognitive phenomena related to learning and memory. While the study of biological realizations of inference has focused on animal nervous systems, single-celled organisms also show complex and potentially &ldquo;predictive&rdquo; behaviors in changing environments. Yet, it is unclear how the biochemical machinery found in cells might perform inference. Here, we show how inference in a simple Markov model can be approximately realized, in real-time, using polymerizing biochemical circuits. Our approach relies on assembling linear polymers that record the history of environmental changes, where the polymerization process produces molecular complexes that reflect posterior probabilities. We discuss the implications of realizing inference using biochemistry, and the potential of polymerization as a form of biological information-processing.
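The "ideal" computation that such a polymerizing circuit would approximate is standard recursive Bayesian filtering in a hidden Markov model of the environment. A minimal two-state sketch, with invented transition and emission probabilities:

```python
def hmm_filter(observations, trans, emit, prior):
    """Recursive Bayesian filtering in a 2-state hidden Markov model:
    posterior_t(s) is proportional to emit[s][o_t] * sum over s' of
    trans[s'][s] * posterior_{t-1}(s')."""
    post = list(prior)
    history = []
    for o in observations:
        pred = [sum(post[sp] * trans[sp][s] for sp in range(2)) for s in range(2)]
        unnorm = [emit[s][o] * pred[s] for s in range(2)]
        z = sum(unnorm)
        post = [u / z for u in unnorm]
        history.append(post)
    return history

# environment switches state rarely; observations are noisy binary readouts
trans = [[0.9, 0.1], [0.1, 0.9]]    # P(next state | current state)
emit = [[0.8, 0.2], [0.2, 0.8]]     # P(observation | state)
posts = hmm_filter([0, 0, 0, 1, 1, 1], trans, emit, prior=[0.5, 0.5])
```

In the paper's scheme, the role of the normalized posterior vector is played by the relative abundances of molecular complexes produced as the polymer records environmental history; the explicit arithmetic above is what those abundances approximate.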

]]>Entropy doi: 10.3390/e24050626

Authors: Soroosh Shalileh Boris Mirkin

This paper proposes a meaningful and effective extension of the celebrated K-means algorithm to detect communities in feature-rich networks, based on our assumption of a non-summability mode. We least-squares approximate given matrices of inter-node links and feature values, leading to a straightforward extension of the conventional K-means clustering method as an alternating minimization strategy for the criterion. This works in a two-fold space, embracing both the network nodes and features. The metric used is a weighted sum of the squared Euclidean distances in the feature and network spaces. To tackle the so-called curse of dimensionality, we extend this to a version that uses the cosine distances between entities and centers. One more version of our method is based on the Manhattan distance metric. We conduct computational experiments to test our method and compare its performance with that of popular competing algorithms on synthetic and real-world datasets. The cosine-based version of the extended K-means typically wins on high-dimensional real-world datasets. In contrast, the Manhattan-based version wins on most synthetic datasets.
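The criterion's metric, a weighted sum of squared Euclidean distances in the network and feature spaces, can be sketched together with one assignment step of the alternating minimization. The toy adjacency rows, feature values, and weight symbols (rho for the network space, xi for the feature space) are invented for illustration:

```python
def combined_distance(node_links, node_feats, c_links, c_feats, rho=1.0, xi=1.0):
    """Weighted sum of squared Euclidean distances in the network (link)
    space and the feature space; rho and xi are the respective weights."""
    d_net = sum((a - b) ** 2 for a, b in zip(node_links, c_links))
    d_feat = sum((a - b) ** 2 for a, b in zip(node_feats, c_feats))
    return rho * d_net + xi * d_feat

def assign(nodes, centers, rho=1.0, xi=1.0):
    """One K-means assignment step over (link-row, feature-row) node pairs."""
    labels = []
    for links, feats in nodes:
        dists = [combined_distance(links, feats, cl, cf, rho, xi)
                 for cl, cf in centers]
        labels.append(min(range(len(centers)), key=dists.__getitem__))
    return labels

# toy network: adjacency rows plus one feature per node, two obvious communities
nodes = [([1, 1, 0, 0], [0.1]), ([1, 1, 0, 0], [0.2]),
         ([0, 0, 1, 1], [0.9]), ([0, 0, 1, 1], [1.0])]
centers = [([1, 1, 0, 0], [0.15]), ([0, 0, 1, 1], [0.95])]
labels = assign(nodes, centers)
```

The full method alternates this assignment step with recomputing both the link and feature components of each center; the cosine and Manhattan variants replace only the two per-space distance terms.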

]]>Entropy doi: 10.3390/e24050627

Authors: Ryszard Kutner Christophe Schinckus Harry Eugene Stanley

The Special Issue comes out amid an increasing accumulation of negative global tensions in many areas [...]

]]>Entropy doi: 10.3390/e24050625

Authors: Angus Leung Naotsugu Tsuchiya

How a system generates conscious experience remains an elusive question. One approach towards answering this is to consider the information available in the system from the perspective of the system itself. Integrated information theory (IIT) proposes a measure to capture this integrated information (&Phi;). While &Phi; can be computed at any spatiotemporal scale, IIT posits that it be applied at the scale at which the measure is maximised. Importantly, &Phi; in conscious systems should emerge to be maximal not at the smallest spatiotemporal scale, but at some macro scale where system elements or timesteps are grouped into larger elements or timesteps. Emergence in this sense has been demonstrated in simple example systems composed of logic gates, but it remains unclear whether it occurs in real neural recordings, which are generally continuous and noisy. Here we first utilise a computational model to confirm that &Phi; becomes maximal at the temporal scales underlying its generative mechanisms. Second, we search for emergence in local field potentials from the fly brain recorded during wakefulness and anaesthesia, finding that normalised &Phi; (wake/anaesthesia), but not raw &Phi; values, peaks at 5 ms. Lastly, we extend our model to investigate why raw &Phi; values themselves did not peak. This work extends the application of &Phi; from simple artificial systems consisting of logic gates towards searching for the emergence of a macro spatiotemporal scale in real neural systems.

]]>Entropy doi: 10.3390/e24050624

Authors: Zhiwen Zhang Xiaosen Li Zhaoyang Chen Yu Zhang Hao Peng

The phase fraction measurement of gas-water-sand fluids downhole is an important prerequisite for the safe and stable exploitation of natural gas hydrates, but the existing phase fraction measurement devices for oil and natural gas exploitation cannot be directly applied to hydrate exploitation. In this work, the electrical resistivity properties of different gas-water-sand fluids were experimentally investigated using a multiphase flow loop and static solution experiments. The effects of gas phase fraction and gas bubble distribution, sand fraction and sand particle size on the relative resistivity of the multiphase fluid were systematically studied. The measurement devices and operating parameters were also optimized. A novel combined resistivity method was developed, which demonstrated a good effect for the measurement of phase fractions of gas-water-sand fluids and has good application potential in marine natural gas hydrate exploitation.

]]>Entropy doi: 10.3390/e24050623

Authors: Claudiu Vințe Marcel Ausloos

To take into account the temporal dimension of uncertainty in stock markets, this paper introduces a cross-sectional estimation of stock market volatility based on the intrinsic entropy model. The proposed cross-sectional intrinsic entropy (CSIE) is defined and computed as a daily volatility estimate for the entire market, grounded on the daily traded prices&mdash;open, high, low, and close prices (OHLC)&mdash;along with the daily traded volume for all symbols listed on The New York Stock Exchange (NYSE) and The National Association of Securities Dealers Automated Quotations (NASDAQ). We perform a comparative analysis between the time series obtained from the CSIE and the historical volatility as provided by the following estimators: close-to-close, Parkinson, Garman&ndash;Klass, Rogers&ndash;Satchell, Yang&ndash;Zhang, and intrinsic entropy (IE), defined and computed from historical OHLC daily prices of the Standard &amp; Poor&rsquo;s 500 index (S&amp;P500), Dow Jones Industrial Average (DJIA), and the NASDAQ Composite index, respectively, for various time intervals. Our study uses an approximately 6000-day reference period, from 1 January 2001 until 23 January 2022, for both the NYSE and the NASDAQ. We found that the CSIE market volatility estimator is consistently at least 10 times more sensitive to market changes than the volatility estimate captured through the market indices. Furthermore, beta values confirm a consistently lower volatility risk for market indices overall, between 50% and 90% lower, compared to the volatility risk of the entire market in various time intervals and rolling windows.
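Two of the benchmark range-based estimators mentioned above have well-known closed forms. A sketch of the Parkinson and Garman&ndash;Klass daily-variance estimators on a few invented OHLC rows (this is not the paper's CSIE, and the prices are not real NYSE/NASDAQ data):

```python
import math

def parkinson(highs, lows):
    """Parkinson range-based daily variance: (1 / (4 ln 2)) * mean(ln(H/L)^2)."""
    n = len(highs)
    return sum(math.log(h / l) ** 2
               for h, l in zip(highs, lows)) / (4 * math.log(2) * n)

def garman_klass(opens, highs, lows, closes):
    """Garman-Klass estimator: mean of 0.5*ln(H/L)^2 - (2 ln 2 - 1)*ln(C/O)^2."""
    n = len(opens)
    return sum(0.5 * math.log(h / l) ** 2
               - (2 * math.log(2) - 1) * math.log(c / o) ** 2
               for o, h, l, c in zip(opens, highs, lows, closes)) / n

# a few illustrative OHLC rows (invented, small intraday ranges)
O = [100.0, 101.0, 100.5]
H = [102.0, 102.5, 101.8]
L = [99.5, 100.2, 99.9]
C = [101.0, 100.5, 101.2]
var_p = parkinson(H, L)
var_gk = garman_klass(O, H, L, C)
```

Both return a per-day variance on the log-price scale; annualizing or converting to a volatility (square root) is left to the caller, and the paper's CSIE instead aggregates OHLC and volume cross-sectionally over all listed symbols per day.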

]]>Entropy doi: 10.3390/e24050622

Authors: Chia-Hung Yang Samuel V. Scarpino

Fitness landscapes are a powerful metaphor for understanding the evolution of biological systems. These landscapes describe how genotypes are connected to each other through mutation and related through fitness. Empirical studies of fitness landscapes have increasingly revealed conserved topographical features across diverse taxa, e.g., the accessibility of genotypes and &ldquo;ruggedness&rdquo;. As a result, theoretical studies are needed to investigate how evolution proceeds on fitness landscapes with such conserved features. Here, we develop and study a model of evolution on fitness landscapes using the lens of Gene Regulatory Networks (GRNs), where the regulatory products are computed from multiple genes and collectively treated as phenotypes. With the assumption that regulation is a binary process, we prove the existence of empirically observed topographical features such as accessibility and connectivity. We further show that these results hold across arbitrary fitness functions and that a trade-off between accessibility and ruggedness need not exist. Then, using graph theory and a coarse-graining approach, we deduce a mesoscopic structure underlying GRN fitness landscapes where the information necessary to predict a population&rsquo;s evolutionary trajectory is retained with minimal complexity. Using this coarse-graining, we develop a bottom-up algorithm to construct such mesoscopic backbones, which does not require computing the genotype network and is therefore far more efficient than brute-force approaches. Altogether, this work provides mathematical results on high-dimensional fitness landscapes and a path toward connecting theory to empirical studies.

]]>