Entropy doi: 10.3390/e23101269

Authors: Abdul Jawad Shahid Chaudhary Kazuharu Bamba

We investigate the influence of the first-order correction of entropy caused by thermal quantum fluctuations on the thermodynamics of a logarithmic corrected charged black hole in massive gravity. For this black hole, we explore the thermodynamic quantities, such as entropy, Helmholtz free energy, internal energy, enthalpy, Gibbs free energy and specific heat. We discuss the influence of the topology of the event horizon, dimensions and nonlinearity parameter on the local and global stability of the black hole. As a result, it is found that the holographic dual parameter vanishes. This means that the thermal corrections play no significant role in disturbing the holographic duality of the logarithmic charged black hole in massive gravity, although they have a substantial impact on the thermodynamic quantities in the high-energy limit and the stability conditions of black holes.

Entropy doi: 10.3390/e23101268

Authors: Kong Zhang Zhao Wei

In this paper, variational sparse Bayesian learning is utilized to estimate the multipath parameters for wireless channels. Due to its flexibility to fit any probability density function (PDF), the Gaussian mixture model (GMM) is introduced to represent the complicated fading phenomena in various communication scenarios. First, the expectation-maximization (EM) algorithm is applied to the parameter initialization. Then, the variational update scheme is proposed and implemented for the channel parameters’ posterior PDF approximation. Finally, in order to prevent the derived channel model from overfitting, an effective pruning criterion is designed to eliminate the virtual multipath components. The numerical results show that the proposed method outperforms the variational Bayesian scheme with Gaussian prior in terms of root mean squared error (RMSE) and selection accuracy of model order.
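As a rough sketch of the component-pruning idea (not the authors' variational update scheme), the snippet below fits a deliberately overcomplete variational Bayesian GMM with scikit-learn and drops the "virtual" components whose posterior weights collapse toward zero; the data, component count, and pruning threshold are illustrative assumptions.

```python
# Sketch: variational Bayesian GMM fitting with pruning of collapsed components.
# The data, component count (8), and threshold (1e-2) are illustrative only.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for fading amplitudes: a two-component mixture.
samples = np.concatenate([rng.normal(0.0, 0.3, 500),
                          rng.normal(2.0, 0.5, 500)]).reshape(-1, 1)

gmm = BayesianGaussianMixture(n_components=8,  # deliberately overcomplete
                              weight_concentration_prior=1e-2,
                              max_iter=500, random_state=0).fit(samples)

# Prune "virtual" components whose posterior weights collapsed toward zero.
kept = gmm.weights_ > 1e-2
print(f"components kept: {kept.sum()} of {len(kept)}")
```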

Entropy doi: 10.3390/e23101267

Authors: Arash Sioofy Khoojine Mahdi Shadabfar Vahid Reza Hosseini Hadi Kordestani

Predicting the way diseases spread in different societies has been thus far documented as one of the most important tools for control strategies and policy-making during a pandemic. This study proposes a network autoregressive (NAR) model to forecast the number of total currently infected cases with coronavirus disease 2019 (COVID-19) in Iran until the end of December 2021, in view of the disease interactions within the neighboring countries in the region. For this purpose, the COVID-19 data were initially collected for seven regional nations, including Iran, Turkey, Iraq, Azerbaijan, Armenia, Afghanistan, and Pakistan. Thenceforth, a network was established over these countries, and the correlation of the disease data was calculated. Upon introducing the main structure of the NAR model, a mathematical platform was subsequently provided to further incorporate the correlation matrix into the prediction process. In addition, maximum likelihood estimation (MLE) was utilized to determine the model parameters and optimize the forecasting accuracy. Thereafter, the number of infected cases up to December 2021 in Iran was predicted by importing the correlation matrix into the NAR model to observe the impact of the disease interactions in the neighboring countries. Moreover, the autoregressive integrated moving average (ARIMA) model was used as a benchmark to compare and validate the NAR model outcomes. The results reveal that COVID-19 data in Iran have passed the fifth peak and continue on a downward trend to bring the number of total currently infected cases below 480,000 by the end of 2021. Additionally, 20%, 50%, 80% and 95% quantiles are provided along with the point estimation to model the uncertainty in the forecast.
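To make the network-autoregressive idea concrete, here is a toy recursion in which each country's next value depends on its own lag and a weighted average of its neighbors' lags; the weight matrix and the coefficients alpha and beta are hypothetical placeholders, not the fitted MLE values.

```python
# Toy NAR recursion: y_{t+1} = alpha * y_t + beta * W @ y_t.
# W and the coefficients are hypothetical, not the paper's fitted values.
import numpy as np

countries = ["Iran", "Turkey", "Iraq", "Azerbaijan",
             "Armenia", "Afghanistan", "Pakistan"]
n = len(countries)
rng = np.random.default_rng(1)

W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)   # row-normalized network weights

alpha, beta = 0.8, 0.15             # own-lag and network-lag coefficients
y = rng.random(n) * 1e5             # synthetic current infected counts

for _ in range(10):                 # 10-step-ahead recursion
    y = alpha * y + beta * W @ y

print(dict(zip(countries, np.round(y))))
```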

Entropy doi: 10.3390/e23101266

Authors: Lam Lam Jaaman

Investors wish to obtain the best trade-off between the return and risk. In portfolio optimization, the mean-absolute deviation (MAD) model has been used to achieve the target rate of return and minimize the risk. However, according to past studies, the maximization of entropy is not considered in the MAD model. In fact, higher entropy values give higher portfolio diversification, which can reduce portfolio risk. Therefore, this paper aims to propose a multi-objective optimization model, namely a mean-absolute deviation-entropy model for portfolio optimization, by incorporating the maximization of entropy. In addition, the proposed model incorporates the optimal value of each objective function using a goal-programming approach. The objective functions of the proposed model are to maximize the mean return, minimize the absolute deviation and maximize the entropy of the portfolio. The proposed model is illustrated using returns of stocks of the Dow Jones Industrial Average that are listed on the New York Stock Exchange. This study will be of significant impact to investors because the results show that the proposed model outperforms the MAD model and the naive diversification strategy by giving a higher performance ratio. Furthermore, the proposed model generates higher portfolio mean returns than the MAD model and the naive diversification strategy. Investors will be able to generate a well-diversified portfolio in order to minimize unsystematic risk with the proposed model.
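A minimal sketch of the combined objective follows, assuming a simple weighted-sum surrogate for the paper's goal-programming formulation; the synthetic returns, trade-off weights lam_mad and lam_ent, and stock count are all illustrative.

```python
# Sketch: weighted-sum surrogate of a mean-absolute-deviation-entropy portfolio.
# Returns are synthetic; lam_mad and lam_ent are illustrative trade-off weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
returns = rng.normal(0.001, 0.02, size=(250, 5))   # 250 days, 5 stocks

def objective(w, lam_mad=1.0, lam_ent=0.1):
    port = returns @ w
    mad = np.mean(np.abs(port - port.mean()))      # absolute-deviation risk
    ent = -np.sum(w * np.log(w + 1e-12))           # diversification entropy
    return lam_mad * mad - lam_ent * ent - port.mean()

n = returns.shape[1]
res = minimize(objective, np.full(n, 1 / n),
               bounds=[(0, 1)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print(np.round(res.x, 3))                          # portfolio weights
```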

Entropy doi: 10.3390/e23101265

Authors: Sicong Liu Rui Cai

Interval type-2 fuzzy sets (IT2 FS) play an important part in dealing with uncertain applications. However, how to measure the uncertainty of IT2 FS is still an open issue. The specific objective of this study is to present a new entropy, named fuzzy belief entropy, to solve the problem based on the relation among IT2 FS, belief structure, and Z-valuations. The interval of the membership function can be transformed to the interval BPA [Bel,Pl]. Then, Bel and Pl are put into the proposed entropy to calculate the uncertainty from the three aspects of fuzziness, discord, and nonspecificity, respectively, which makes the result more reasonable. Compared with other methods, fuzzy belief entropy is more reasonable because it can measure the uncertainty caused by multielement fuzzy subsets. Furthermore, when the membership function belongs to type-1 fuzzy sets, fuzzy belief entropy degenerates to Shannon entropy. Several numerical examples demonstrate that the proposed entropy is feasible and persuasive compared with other methods.

Entropy doi: 10.3390/e23101264

Authors: Rahimi Eassa Elrefaei

Recently, deep learning (DL) has been utilized successfully in different fields, achieving remarkable results. Thus, there is a noticeable focus on DL approaches to automate software engineering (SE) tasks such as maintenance, requirement extraction, and classification. An advanced utilization of DL is the ensemble approach, which aims to reduce error rates and learning time and improve performance. In this research, three ensemble approaches were applied: accuracy as a weight ensemble, mean ensemble, and accuracy per class as a weight ensemble with a combination of four different DL models—long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), a gated recurrent unit (GRU), and a convolutional neural network (CNN)—in order to classify the software requirement (SR) specification: the binary classification of SRs into functional requirements (FRs) or non-functional requirements (NFRs), and the multi-label classification of both FRs and NFRs into further experimental classes. The models were trained and tested on the PROMISE dataset. A one-phase classification system was developed to classify SRs directly into one of the 17 multi-classes of FRs and NFRs. In addition, a two-phase classification system was developed to classify SRs first into FRs or NFRs and to pass the output to the second phase of multi-class classification into 17 classes. The experimental results demonstrated that the proposed classification systems can lead to a competitive classification performance compared to the state-of-the-art methods. The two-phase classification system proved more robust than the one-phase classification system, as it obtained a 95.7% accuracy in the binary classification phase and a 93.4% accuracy in the second phase of NFR and FR multi-class classification.

Entropy doi: 10.3390/e23101263

Authors: Zhu Dou

In this study, an extended model for describing the temporal evolution of a characteristic floc size of cohesive sediment particles, when the flocculation system is subject to a piecewise varied turbulent shear rate, was derived by probability methods based on Shannon entropy theory, following Zhu (2018). This model contains only three important parameters: the initial and steady-state values of the floc size, and a parameter characterizing the maximum capacity for floc size increase (or decay); it can be adopted to capture well a monotonic pattern in which floc size increases (or decays) with flocculation time. Comparison with 13 experimental data sets from the literature, regarding floc size variation under a varied shear rate, showed the validity of the entropic model, with high correlation coefficients and small errors. Furthermore, for the case of tapered shear flocculation, it was found that there was a power decay of the capacity parameter with the shear rate, which is similar to the dependence of the steady-state floc size on the shear rate. The entropic model was further parameterized by introducing these two empirical relations into it, and the finally obtained model was found to be more sensitive to the two empirical coefficients incorporated into the capacity parameter than to those in the steady-state floc size. The proposed entropic model could have the potential, as an addition to existing flocculation models, to be coupled into present mature hydrodynamic models to model cohesive sediment transport in estuarine and coastal regions.

Entropy doi: 10.3390/e23101262

Authors: Chiara Bardelli

The need to provide accurate predictions in the evolution of the COVID-19 epidemic has motivated the development of different epidemiological models. These models require a careful calibration of their parameters to capture the dynamics of the phenomena and the uncertainty in the data. This work analyzes different parameters related to the personal evolution of COVID-19 (i.e., time of recovery, length of stay in hospital and delay in hospitalization). A Bayesian Survival Analysis is performed considering the age factor and period of the epidemic as fixed predictors to understand how these features influence the evolution of the epidemic. These results can be easily included in the epidemiological SIR model to make prediction results more stable.

Entropy doi: 10.3390/e23101261

Authors: Ricardo Espinosa Raquel Bailón Pablo Laguna

Image processing has played a relevant role in various industries, where the main challenge is to extract specific features from images. Specifically, texture characterizes the phenomenon of the occurrence of a pattern along the spatial distribution, taking into account the intensities of the pixels; it has been applied in classification and segmentation tasks. Therefore, several feature extraction methods have been proposed in recent decades, but few of them rely on entropy, which is a measure of uncertainty. Moreover, entropy algorithms have been little explored for bidimensional data. Nevertheless, there is growing interest in developing algorithms to overcome current limits, since Shannon entropy does not consider spatial information, and SampEn2D generates unreliable values for small image sizes. We introduce a proposed algorithm, EspEn (Espinosa Entropy), to measure the irregularity present in two-dimensional data, where the calculation requires setting the following parameters: m (length of square window), r (tolerance threshold), and ρ (percentage of similarity). Three experiments were performed; the first two were on simulated images contaminated with different noise levels. The last experiment was with grayscale images from the Normalized Brodatz Texture database (NBT). First, we compared the performance of EspEn against Shannon entropy and SampEn2D. Second, we evaluated the dependence of EspEn on variations in the values of the parameters m, r, and ρ. Third, we evaluated the EspEn algorithm on NBT images. The results revealed that EspEn could discriminate images with different sizes and degrees of noise. Finally, EspEn provides an alternative algorithm to quantify the irregularity in 2D data; the recommended parameters for better performance are m = 3, r = 20, and ρ = 0.7.
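For illustration, the sketch below shows one plausible reading of the three parameters: two m-by-m windows count as similar when at least a fraction ρ of their pixel differences fall within the tolerance r. It is a generic window-matching sketch, not the authors' EspEn implementation.

```python
# Generic window-matching sketch (one reading of m, r, rho), not EspEn itself.
import numpy as np

def window_similarity_fraction(img, m=3, r=20, rho=0.7):
    h, w = img.shape
    # All m-by-m windows, flattened into one array.
    wins = np.array([img[i:i + m, j:j + m]
                     for i in range(h - m + 1)
                     for j in range(w - m + 1)], dtype=float)
    n = len(wins)
    similar = 0
    for a in range(n - 1):
        close = np.abs(wins[a + 1:] - wins[a]) <= r          # pixelwise tolerance
        similar += np.sum(close.mean(axis=(1, 2)) >= rho)    # rho-fraction rule
    return similar / (n * (n - 1) // 2)

img = np.random.default_rng(3).integers(0, 256, (32, 32))
print(window_similarity_fraction(img))
```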

Entropy doi: 10.3390/e23101260

Authors: Dong Hwan Kim Su-Yong Lee Yonggi Jo Duk Y. Kim Zaeill Kim Taek Jeong

Quantum illumination uses entangled light that consists of signal and idler modes to achieve a higher detection rate for a low-reflectivity object in noisy environments. The best performance of quantum illumination can be achieved by measuring the returned signal mode together with the idler mode. Thus, it is necessary to prepare a quantum memory that can keep the idler mode ideal. To send a signal towards a long-distance target, entangled light in the microwave regime is used. A microwave quantum memory using microwave cavities coupled with a transmon qubit was recently demonstrated. We propose an ordering of bosonic operators to efficiently compute the Schrieffer–Wolff transformation generator to analyze the quantum memory. Our proposed method is applicable to a wide class of systems described by bosonic operators whose interaction part represents a transfer of a definite number of quanta.

Entropy doi: 10.3390/e23101259

Authors: Joao Florindo Konradin Metze

Here we present a study on the use of non-additive entropy to improve the performance of convolutional neural networks for texture description. More precisely, we introduce the use of a local transform that associates each pixel with a measure of local entropy and use this alternative representation as the input to a pretrained convolutional network that performs feature extraction. We compare the performance of our approach in texture recognition over well-established benchmark databases and on a practical task of identifying Brazilian plant species based on the scanned image of the leaf surface. In both cases, our method achieved competitive performance, outperforming several state-of-the-art methods in texture analysis. Among the notable results, we obtained an accuracy of 84.4% in the classification of the KTH-TIPS-2b database and 77.7% on FMD. In the identification of plant species, we also achieved a promising accuracy of 88.5%. Considering the challenges posed by these tasks and the results of other approaches in the literature, our method demonstrates the potential of computing deep learning features over an entropy representation.
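A minimal sketch of the entropy-image preprocessing step follows, using scikit-image's local rank entropy (which is Shannon-based) as a stand-in for the paper's non-additive entropy transform; the window radius is an arbitrary choice.

```python
# Sketch: per-pixel local entropy image as network input. The rank-entropy
# filter here is Shannon-based, a stand-in for the paper's non-additive entropy.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

img = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
ent_img = entropy(img, disk(5))   # entropy over a radius-5 circular window
print(ent_img.shape, ent_img.max())
# ent_img would then be fed to a pretrained CNN feature extractor.
```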

Entropy doi: 10.3390/e23101258

Authors: Al-Shehari Alsowail

Insider threats are malicious acts that can be carried out by an authorized employee within an organization. Insider threats represent a major cybersecurity challenge for private and public organizations, as an insider attack can cause far more extensive damage to organization assets than external attacks. Most existing approaches in the field of insider threat focus on detecting general insider attack scenarios. However, insider attacks can be carried out in different ways, and the most dangerous one is a data leakage attack that can be executed by a malicious insider before leaving an organization. This paper proposes a machine learning-based model for detecting such serious insider threat incidents. The proposed model addresses the possible bias of detection results that can occur due to an inappropriate encoding process by employing feature scaling and one-hot encoding techniques. Furthermore, the imbalance issue of the utilized dataset is addressed using the synthetic minority oversampling technique (SMOTE). Well-known machine learning algorithms are employed to determine the most accurate classifier that can detect data leakage events executed by malicious insiders during the sensitive period before they leave an organization. We provide a proof of concept for our model by applying it to the CMU-CERT Insider Threat Dataset and comparing its performance with the ground truth. The experimental results show that our model detects insider data leakage events with an AUC-ROC value of 0.99, outperforming the existing approaches that are validated on the same dataset. The proposed model provides effective methods to address possible bias and class imbalance issues for the aim of devising an effective insider data leakage detection system.
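A minimal sketch of the rebalancing-and-classification pipeline on synthetic stand-in data follows (the CMU-CERT features are not reproduced here); it assumes scikit-learn and imbalanced-learn are installed, and the imbalance ratio is an arbitrary choice.

```python
# Sketch: scale features, rebalance the minority "leakage" class with SMOTE,
# train a classifier, report AUC-ROC. Data are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=2000, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # rebalance

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("AUC-ROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```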

Entropy doi: 10.3390/e23101256

Authors: Abdullah M. Almarashi Ali Algarni Amal S. Hassan Ahmed N. Zaky Mohammed Elgarhy

Dynamic cumulative residual (DCR) entropy is a valuable randomness metric that may be used in survival analysis. The Bayesian estimator of the DCR Rényi entropy (DCRRéE) for the Lindley distribution using the gamma prior is discussed in this article. Using a number of selective loss functions, the Bayesian estimator and the Bayesian credible interval are calculated. In order to compare the theoretical results, a Monte Carlo simulation experiment is proposed. Generally, we note that for a small true value of the DCRRéE, the Bayesian estimates under the linear exponential loss function are favorable compared to the others based on this simulation study. Furthermore, for large true values of the DCRRéE, the Bayesian estimate under the precautionary loss function is more suitable than the others. The Bayesian estimates of the DCRRéE work well when increasing the sample size. Real-world data is evaluated for further clarification, allowing the theoretical results to be validated.

Entropy doi: 10.3390/e23101257

Authors: Dimitri Meunier Pierre Alquier

Online learning methods, such as the online gradient algorithm (OGA) and exponentially weighted aggregation (EWA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario, and we propose a meta-strategy to learn these parameters from past tasks. Our strategy is based on the minimization of a regret bound. It allows us to learn the initialization and the step size in OGA with guarantees. It also allows us to learn the prior or the learning rate in EWA. We provide a regret analysis of the strategy. It allows us to identify settings where meta-learning indeed improves on learning each task in isolation.

Entropy doi: 10.3390/e23101255

Authors: Yuheng Bu Weihao Gao Shaofeng Zou Venugopal V. Veeravalli

It has been reported in many recent works on deep model compression that the population risk of a compressed model can be even better than that of the original model. In this paper, an information-theoretic explanation for this population risk improvement phenomenon is provided by jointly studying the decrease in the generalization error and the increase in the empirical risk that results from model compression. It is first shown that model compression reduces an information-theoretic bound on the generalization error, which suggests that model compression can be interpreted as a regularization technique to avoid overfitting. The increase in empirical risk caused by model compression is then characterized using rate distortion theory. These results imply that the overall population risk could be improved by model compression if the decrease in generalization error exceeds the increase in empirical risk. A linear regression example is presented to demonstrate that such a decrease in population risk due to model compression is indeed possible. Our theoretical results further suggest a way to improve a widely used model compression algorithm, i.e., Hessian-weighted K-means clustering, by regularizing the distance between the clustering centers. Experiments with neural networks are provided to validate our theoretical assertions.
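As a toy illustration of the Hessian-weighted K-means idea mentioned above, the sketch below clusters scalar weights using importance-weighted (Hessian-diagonal) cluster means; the center-distance regularization the paper suggests is omitted for brevity, and all data are synthetic.

```python
# Toy Hessian-weighted K-means for weight quantization: each weight w_i has
# an importance h_i, and centers are importance-weighted means. The suggested
# regularization of distances between centers is omitted here.
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=1000)            # model weights to quantize (synthetic)
h = rng.random(1000) + 1e-3          # surrogate Hessian-diagonal importances
k = 8
centers = rng.choice(w, k, replace=False)

for _ in range(50):                  # Lloyd iterations
    assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
    for j in range(k):
        mask = assign == j
        if mask.any():
            centers[j] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])

print(np.sort(np.round(centers, 3)))
```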

Entropy doi: 10.3390/e23101254

Authors: Matthew A. Morena Kevin M. Short

In chaotic entanglement, pairs of interacting classically-chaotic systems are induced into a state of mutual stabilization that can be maintained without external controls and that exhibits several properties consistent with quantum entanglement. In such a state, the chaotic behavior of each system is stabilized onto one of the system’s many unstable periodic orbits (generally located densely on the associated attractor), and the ensuing periodicity of each system is sustained by the symbolic dynamics of its partner system, and vice versa. Notably, chaotic entanglement is an entropy-reversing event: the entropy of each member of an entangled pair decreases to zero when each system collapses onto a given periodic orbit. In this paper, we discuss the role that entropy plays in chaotic entanglement. We also describe the geometry that arises when pairs of entangled chaotic systems organize into coherent structures that range in complexity from simple tripartite lattices to more involved patterns. We conclude with a discussion of future research directions.

Entropy doi: 10.3390/e23101252

Authors: Jie Yang Shimin Hu Qichao Wang Simon Fong

The university curriculum is a systematic and organic study complex with a series of closely associated steps; the initial learning of each semester’s course is crucial, and significantly impacts the learning process of subsequent courses and further studies. However, the low teacher–student ratio makes it difficult for teachers to consistently follow up on the detailed learning situation of individual students. Extant learning early-warning systems are committed to automatically detecting whether students have potential difficulties—or even the risk of failing, or non-pass reports—before starting the course. Previous related research has the following three problems: first of all, it mainly focused on e-learning platforms and relied on online activity data, which is not suitable for traditional teaching scenarios; secondly, most current methods can only offer predictions when the course is in progress, or even approaching the end; thirdly, few studies have focused on the feature redundancy in these learning data. Aiming at the traditional classroom teaching scenario, this paper transforms the pre-class student performance prediction problem into a multi-label learning model, and uses the attribute reduction method to scientifically streamline the characteristic information of the courses taken and explore the important relationship between the characteristics of the previously learned courses and the attributes of the courses to be taken, in order to detect high-risk students in each course before the course begins. Extensive experiments were conducted on 10 real-world datasets, and the results proved that the proposed approach achieves better performance than most other advanced methods in multi-label classification evaluation metrics.

Entropy doi: 10.3390/e23101253

Authors: Xianhang Xu Mohd Anuar Arshad Arshad Mahmood

Based on an analysis and measurement of the overall situation, the import and export structure, and the international competitiveness of the various sectors of service trade in the Guangdong–Hong Kong–Macao Greater Bay Area, a synergy degree model was established, with the help of MATLAB and Gray System Modeling software, to quantitatively analyze the synergy level of service trade in the Greater Bay Area using the grey correlation analysis method and the entropy weight method. The results show that the overall development trend of service trade in the Guangdong–Hong Kong–Macao Greater Bay Area is good. The service trade industries in different regions are highly complementary and have a high degree of correlation. The potential for the coordinated development of internal service trade is excellent, and the overall situation of service trade in the Greater Bay Area is in a stage of transition from a moderate level of synergy to a high level of synergy. The Greater Bay Area can achieve industrial synergy by accelerating industrial integration and green transformation, establishing a coordinated development mechanism, sharing market platforms, strengthening personnel security, and further enhancing the international competitiveness of service trade. The established model reflects well the current coordination of service trade in the Guangdong–Hong Kong–Macao Greater Bay Area and has good applicability. In the future, more economic, technological, geographic, and policy data and information can be comprehensively used to study the spatial pattern, evolution rules, and mechanisms of coordinated development in the broader area.

Entropy doi: 10.3390/e23101250

Authors: Yin-Ting Zhang Wei-Xing Zhou

With increasing global demand for food, international food trade is playing a critical role in balancing the food supply and demand across different regions. Here, using trade datasets of four crops that provide more than 50% of the calories consumed globally, we constructed four international crop trade networks (iCTNs). We observed increasing globalization in the international crop trade and different trade patterns in different iCTNs. The distributions of node degrees deviate from power laws, while the distributions of link weights follow power laws. We also found that the in-degree is positively correlated with the out-degree, but negatively correlated with the clustering coefficient. This indicates that the number of trade partners affects the tendency of economies to form clusters. In addition, each iCTN exhibits a unique topology that differs from the whole food network studied by many researchers. Our analysis of the microstructural characteristics of different iCTNs provides highly valuable insights into distinctive features of specific crop trades and has potential implications for model construction and food security.

Entropy doi: 10.3390/e23101251

Authors: Ghada Atteia Nagwan Abdel Samee Hassan Zohair Hassan

Diabetic macular edema (DME) is the most common cause of irreversible vision loss in diabetes patients. Early diagnosis of DME is necessary for effective treatment of the disease. Visual detection of DME in retinal screening images by ophthalmologists is a time-consuming process. Recently, many computer-aided diagnosis systems have been developed to assist doctors by detecting DME automatically. In this paper, a new deep feature transfer-based stacked autoencoder neural network system is proposed for the automatic diagnosis of DME in fundus images. The proposed system integrates the power of pretrained convolutional neural networks as automatic feature extractors with the power of stacked autoencoders in feature selection and classification. Moreover, the system enables extracting a large set of features from a small input dataset using four standard pretrained deep networks: ResNet-50, SqueezeNet, Inception-v3, and GoogLeNet. The most informative features are then selected by a stacked autoencoder neural network. The stacked network is trained in a semi-supervised manner and is used for the classification of DME. It is found that the introduced system achieves a maximum classification accuracy of 96.8%, sensitivity of 97.5%, and specificity of 95.5%. The proposed system shows a superior performance over the original pretrained network classifiers and state-of-the-art findings.
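A minimal sketch of the deep-feature-transfer stage follows, using one of the four named backbones (ResNet-50) as a fixed feature extractor; it assumes a recent torchvision, and the stacked-autoencoder selection stage is not reproduced here.

```python
# Sketch: pretrained ResNet-50 with its classifier head removed, used as a
# fixed feature extractor; the autoencoder selection stage is not shown.
import torch
import torchvision.models as models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()   # drop the classifier; keep 2048-d features
resnet.eval()

with torch.no_grad():
    feats = resnet(torch.randn(4, 3, 224, 224))   # 4 dummy fundus images
print(feats.shape)                                 # torch.Size([4, 2048])
```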

Entropy doi: 10.3390/e23101249

Authors: Jinwon Heo Jangsun Baek

Along with advances in technology, matrix data, such as medical/industrial images, have emerged in many practical fields. These data usually have high dimensions and are not easy to cluster due to their intrinsic correlated structure among rows and columns. Most approaches convert matrix data to multi-dimensional vectors and apply conventional clustering methods to them and, thus, suffer from an extreme high-dimensionality problem as well as a lack of interpretability of the correlated structure among row/column variables. Recently, a regularized model was proposed for clustering matrix-valued data by imposing a sparsity structure on the mean signal of each cluster. We extend this approach by further regularizing the covariance to cope better with the curse of dimensionality for large images. A penalized matrix normal mixture model with lasso-type penalty terms in both the mean and covariance matrices is proposed, and an expectation-maximization algorithm is developed to estimate the parameters. The proposed method combines parsimonious modeling with the ability to reflect the proper conditional correlation structure. The estimators are consistent, and their limiting distributions are derived. We applied the proposed method to simulated data as well as real datasets and measured its clustering performance with the clustering accuracy (ACC) and the adjusted Rand index (ARI). The experimental results show that the proposed method performed better, with higher ACC and ARI, than conventional methods.

Entropy doi: 10.3390/e23101248

Authors: Eleana Hatzidaki Aggelos Iliopoulos Ioannis Papasotiriou

Colorectal cancer (CRC) is one of the most common types of cancer, and it can have a high mortality rate if left untreated or undiagnosed. The fact that CRC becomes symptomatic at advanced stages highlights the importance of early screening. The reference screening method for CRC is colonoscopy, an invasive, time-consuming procedure that requires sedation or anesthesia and is recommended from a certain age and above. The aim of this study was to build a machine learning classifier that can distinguish cancer from non-cancer samples. For this, circulating tumor cells were enumerated using flow cytometry. Their numbers were used as a training set for building an optimized SVM classifier that was subsequently used on a blind set. The SVM classifier’s accuracy on the blind samples was found to be 90.0%, sensitivity was 80.0%, specificity was 100.0%, precision was 100.0% and AUC was 0.98. Finally, in order to test the generalizability of our method, we also compared the performances of different classifiers developed by various machine learning models, using over-sampled datasets generated by the SMOTE algorithm. The results showed that SVM achieved the best performances according to the validation accuracy metric. Overall, our results demonstrate that CTCs enumerated by flow cytometry can provide significant information, which can be used in machine learning algorithms to successfully discriminate between healthy and colorectal cancer patients. The clinical significance of this method could be the development of a simple, fast, non-invasive cancer screening tool based on blood CTC enumeration by flow cytometry and machine learning algorithms.

Entropy doi: 10.3390/e23101247

Authors: Mingyang Liu Jin Yang Wei Zheng

Numerous novel improved support vector machine (SVM) methods are currently used in leak detection for water pipelines. The least square twin K-class support vector machine (LST-KSVC) is a novel, simple, and fast multi-classification method. However, LST-KSVC has a non-negligible drawback: it assigns the same classification weights to all leak samples, including outliers that affect classification and are often situated away from the main leak samples. To overcome this shortcoming, the maximum entropy (MaxEnt) version of LST-KSVC is proposed in this paper, called the MLT-KSVC algorithm. In this classification approach, the classification weights of leak samples are calculated based on the MaxEnt model. Different sample points are assigned different weights: large weights are assigned to primary leak samples and small weights to outliers, hence the outliers can be ignored in the classification process. Leak recognition experiments prove that the proposed MLT-KSVC algorithm can reduce the impact of outliers on the classification process and avoid the misclassification color block drawback of linear LST-KSVC. MLT-KSVC is more accurate compared with LST-KSVC, TwinSVC, TwinKSVC, and classic Multi-SVM.

Entropy doi: 10.3390/e23101246

Authors: Candy Olivia Mawalim Masashi Unoki

Speech watermarking has become a promising solution for protecting the security of speech communication systems. We propose a speech watermarking method that uses the McAdams coefficient, which is commonly used for frequency harmonics adjustment. The embedding process was conducted, using bit-inverse shifting. We also developed a random forest classifier, using features related to frequency harmonics for blind detection. An objective evaluation was conducted to analyze the performance of our method in terms of the inaudibility and robustness requirements. The results indicate that our method satisfies the speech watermarking requirements with a 16 bps payload under normal conditions and numerous non-malicious signal processing operations, e.g., conversion to Ogg or MP4 format.

Entropy doi: 10.3390/e23101245

Authors: Ivan V. Prikhodko Georgy Th. Guria

Nucleation theory has been widely applied for the interpretation of critical phenomena in nonequilibrium systems. Ligand-induced receptor clustering is a critical step of cellular activation. Receptor clusters on the cell surface are treated from the nucleation theory point of view. The authors propose that the redistribution of energy over the degrees of freedom is crucial for forming each new bond in the growing cluster. The expression for a kinetic barrier for new bond formation in a cluster was obtained. The shape of critical receptor clusters seems to be very important for the clustering on the cell surface. The von Neumann entropy of the graph of bonds is used to determine the influence of the cluster shape on the kinetic barrier. Numerical studies were carried out to assess the dependence of the barrier on the size of the cluster. The asymptotic expression, reflecting the conditions necessary for the formation of receptor clusters, was obtained. Several dynamic effects were found. A slight increase of the ligand mass has been shown to significantly accelerate the nucleation of receptor clusters. The possible meaning of the obtained results for medical applications is discussed.

Entropy doi: 10.3390/e23101244

Authors: Wanlong Zhao Huifeng Zhao Deyue Zou Lu Liu

Cooperative localization (CL) of underwater multi-AUVs is vital for numerous underwater operations. Single-transponder-aided cooperative localization (STCL) is regarded as a promising scheme for multi-AUV CL, benefiting from the fact that an accurate reference is adopted. To improve the positioning accuracy and robustness of STCL, a novel Factor Graph and Cubature Kalman Filter (FGCKF)-integrated algorithm is proposed in this paper. In the proposed FGCKF, historical information can be efficiently used in measurement updating to overcome uncertain observation environments, which greatly helps to improve the performance of the filtering process. Furthermore, Adaptive CKF, sum product, and Maximum Correntropy Criterion (MCC) methods are designed to deal with outliers in acoustic transmission delay, sound velocity, and motion velocity, respectively. Simulations and experiments were conducted, and it is verified that the proposed FGCKF algorithm can improve positioning accuracy and robustness greatly compared with traditional filtering methods.

Entropy doi: 10.3390/e23101243

Authors: Maciej Nowak Tadeusz Trzaskalik Sebastian Sitarz

A problem that appears in many decision models is that of the simultaneous occurrence of deterministic, stochastic, and fuzzy values in the set of multidimensional evaluations. Such problems will be called mixed problems. They lead to the formulation of optimization problems in ordered structures and their scalarization. The aim of the paper is to present an interactive procedure with trade-offs for mixed problems, which helps the decision-maker to make a final decision. Its basic advantage consists of simplicity: after having obtained the solution proposed, the decision-maker should determine whether it is satisfactory and if not, how it should be improved by indicating the criteria whose values should be improved, the criteria whose values cannot be made worse, and the criteria whose values can be made worse. The procedure is applied in solving capacity planning treated as a mixed dynamic programming problem.

Entropy doi: 10.3390/e23101242

Authors: Sihao Zhang Jingyang Liu Guigen Zeng Chunhui Zhang Xingyu Zhou Qin Wang

In most realistic measurement-device-independent quantum key distribution (MDI-QKD) systems, efficient, real-time feedback controls are required to maintain system stability when facing disturbance from either the external environment or imperfect internal components. Traditionally, people either use a “scanning-and-transmitting” program or insert an extra device to perform a phase reference frame calibration for stable high-visibility interference, resulting in higher system complexity and lower transmission efficiency. In this work, we build a machine learning-assisted MDI-QKD system, where a machine learning model—the long short-term memory (LSTM) network—is applied for the first time to the MDI-QKD system for reference frame calibrations. In this machine learning-assisted MDI-QKD system, one can predict the phase drift between the two users in advance and actively perform real-time phase compensations, dramatically increasing the key transmission efficiency. Furthermore, we carry out corresponding experimental demonstrations over 100 km and 250 km of commercial standard single-mode fiber, verifying the effectiveness of the approach.
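A toy sketch of the LSTM's forecasting role follows: a small network is trained to predict the next value of a synthetic, slowly drifting phase signal from a short history window. The window length, architecture, and training settings are illustrative, not those of the experiment.

```python
# Toy LSTM forecaster for a synthetic phase-drift series; all settings are
# illustrative placeholders, not the paper's configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(0, 20, 400)
phase = torch.sin(0.7 * t) + 0.05 * torch.randn(400)   # synthetic slow drift

win = 16
X = torch.stack([phase[i:i + win] for i in range(len(phase) - win)]).unsqueeze(-1)
y = phase[win:].unsqueeze(-1)   # next value after each window

lstm, head = nn.LSTM(1, 32, batch_first=True), nn.Linear(32, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)

for _ in range(200):
    out, _ = lstm(X)
    pred = head(out[:, -1])     # prediction from the last hidden state
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final MSE:", float(loss))
```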

Entropy doi: 10.3390/e23101241

Authors: Xin Zhao Xiaokai Nie

This research explores some theory about decision trees that gives theoretical support to applications based on decision trees. The first point concerns the many splitting criteria to choose from in the tree-growing process. The splitting bias that influences the chosen criterion, due to missing values and variables with many possible values, has been studied. Results show that the Gini index is superior to entropy information, as it is less subject to this bias. The second point is that noise variables with more missing values have a better chance of being chosen than informative variables. The third point is that when many noise variables are involved in the tree-building process, they influence the corresponding computational complexity. Results show that the increase in computational complexity is linear in the number of noise variables. Thus, methods that decompose more information from the original data but increase the variable dimension can also be considered in real applications.
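For reference, the two splitting criteria compared above can be evaluated directly; the snippet below computes Gini impurity and entropy for two example class distributions.

```python
# Worked comparison of the two splitting criteria: Gini impurity vs. entropy.
import numpy as np

def gini(p):
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    p = p[p > 0]                      # avoid log(0)
    return -np.sum(p * np.log2(p))

for p in (np.array([0.5, 0.5]), np.array([0.9, 0.1])):
    print(p, "gini:", round(gini(p), 3), "entropy:", round(entropy(p), 3))
```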

Entropy doi: 10.3390/e23101240

Authors: Denis Sh. Sabirov Igor S. Shepelevich

Basic applications of the information entropy concept to chemical objects are reviewed. These applications deal with quantifying chemical and electronic structures of molecules, signal processing, structural studies on crystals, and molecular ensembles. Recent advances in the mentioned areas make information entropy a central concept in interdisciplinary studies on digitalizing chemical reactions, chemico-information synthesis, crystal engineering, as well as digitally rethinking basic notions of structural chemistry in terms of informatics.

Entropy doi: 10.3390/e23101239

Authors: Rafael Cação Lucas Cortez Ismael de Farias Ernee Kozyreff Jalil Khatibi Moqadam Renato Portugal

We study discrete-time quantum walks on generalized Birkhoff polytope graphs (GBPGs), which arise in the solution-set to certain transportation linear programming problems (TLPs). It is known that quantum walks mix at most quadratically faster than random walks on cycles, two-dimensional lattices, hypercubes, and bounded-degree graphs. In contrast, our numerical results show that it is possible to achieve a greater than quadratic quantum speedup for the mixing time on a subclass of GBPG (TLP with two consumers and m suppliers). We analyze two types of initial states. If the walker starts on a single node, the quantum mixing time does not depend on m, even though the graph diameter increases with it. To the best of our knowledge, this is the first example of its kind. If the walker is initially spread over a maximal clique, the quantum mixing time is O(m/ϵ), where ϵ is the threshold used in the mixing times. This result is better than the classical mixing time, which is O(m^1.5/ϵ).

Entropy doi: 10.3390/e23101238

Authors: Humaira Kalsoom Miguel Vivas-Cortez Muhammad Amer Latif

In this paper, we establish new (p,q)κ1-integral and (p,q)κ2-integral identities. By employing these new identities, we establish new (p,q)κ1 and (p,q)κ2-trapezoidal integral-type inequalities through strongly convex and quasi-convex functions. Finally, some examples are given to illustrate the investigated results.

Entropy doi: 10.3390/e23101237

Authors: Zehba A. S. Raizah Ammar I. Alsabery Abdelraheem M. Aly Ishak Hashim

The flow and heat transfer fields of a nanofluid within a horizontal annulus partly saturated with a porous region are examined by the Galerkin weighted residual finite element technique. The inner and outer circular boundaries have hot and cold temperatures, respectively. The impacts of wide ranges of the Darcy number, porosity, dimensionless length of the porous layer, and nanoparticle volume fractions on the streamlines, isotherms, and isentropic distributions are investigated. The primary outcomes revealed that the stream function value is enhanced by increasing the Darcy parameter and porosity and reduced by growing the porous region’s area. The Bejan number and the average temperature are reduced by the increase in Da, porosity ε, and nanoparticle volume fraction ϕ. The heat transfer through the nanofluid-porous layer was determined to be best at high rates of the Darcy number, porosity, and nanofluid volume fraction. Further, the local velocity and local temperature at the interface surface between the nanofluid and porous layers obtain high values for the smallest area of the porous region (D=0.4); in contrast, the local heat transfer takes lower values.

Entropy doi: 10.3390/e23091236

Authors: Seungsik Min Gyuchang Lim

In this work, a Korean peninsula earthquake network, constructed via event-sequential linking known as the Abe–Suzuki method, was investigated in terms of network properties. A significance test for these network properties was performed via comparisons with those of two random networks, constructed from two approaches, that is, event (sequence) shuffling and network (matrix) shuffling. The Abe–Suzuki earthquake network has a clear difference from the two random networks. However, the two shuffled networks exhibited completely different functions, and some network properties of one shuffled dataset are significantly higher, and those of the other significantly lower, than those of the actual data. For most cases, the event-shuffled network showed a functional similarity to the real network, but with different exponents/parameters. This result strongly suggests that the Korean peninsula earthquake network has a spatiotemporal causal relation. Additionally, the Korean peninsula network properties are mostly similar to those found in previous studies on the US and Japan. Further, the Korean earthquake network showed strong linearity in a specific range of spatial resolution, that is, 0.20°~0.80°, implying that macroscopic properties of the Korean earthquake network are highly regular in this range of resolution.

Entropy doi: 10.3390/e23091235

Authors: Shaojuan Lei Xiaodong Zhang Suhui Liu

A large amount of semantic content is generated during designer collaboration in open-source projects (OSPs). Based on the characteristics of knowledge collaboration behavior in OSPs, we constructed a directed, weighted, semantic-based knowledge collaborative network. Four social network analysis indexes were created to identify the key opinion leader nodes in the network using the entropy weight and TOPSIS method. Further, three degradation modes were designed for (1) the collaborative behavior of opinion leaders, (2) main knowledge dissemination behavior, and (3) main knowledge contribution behavior. Regarding the degradation model of the collaborative behavior of opinion leaders, we considered the propagation characteristics of opinion leaders to other nodes, and we created a susceptible–infected–removed (SIR) propagation model of the influence of opinion leaders’ behaviors. Finally, based on empirical data from the Local Motors open-source vehicle design community, a dynamic robustness analysis experiment was carried out. The results showed that the robustness of our constructed network varied for different degradation modes: the degradation of the opinion leaders’ collaborative behavior had the lowest robustness; this was followed by the main knowledge dissemination behavior and the main knowledge contribution behavior; the degradation of random behavior had the highest robustness. Our method revealed the influence of the degradation of collaborative behavior of different types of nodes on the robustness of the network. This could be used to formulate the management strategy of the open-source design community, thus promoting the stable development of OSPs.

Entropy doi: 10.3390/e23091234

Authors: Kyungwon Kim Minhyuk Lee

The global economy is under great shock again in 2020 due to the COVID-19 pandemic; it has not been long since the global financial crisis of 2008. Therefore, we investigate the evolution of the complexity of the cryptocurrency market and analyze its characteristics from the past bull market in 2017 to the present COVID-19 pandemic. To confirm the evolving complexity of the cryptocurrency market, three general complexity analyses based on nonlinear measures were used: approximate entropy (ApEn), sample entropy (SampEn), and Lempel-Ziv complexity (LZ). We analyzed the market complexity/unpredictability for 43 cryptocurrency prices that have been trading until recently. In addition, three non-parametric tests suitable for comparing non-normal distributions were used for quantitative cross-checking. Finally, using sliding time window analysis, we observed the change in the complexity of the cryptocurrency market according to events such as the COVID-19 pandemic and vaccination. This study is the first to confirm the complexity/unpredictability of the cryptocurrency market from the bull market to the COVID-19 pandemic outbreak. We find that the ApEn, SampEn, and LZ complexity metrics could not generalize the effect of COVID-19 on complexity, as the markets exhibited different patterns. However, market unpredictability is increasing due to the ongoing health crisis.
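For concreteness, here is a compact sample-entropy (SampEn) sketch of the kind used in such analyses, applied to a random series; m = 2 and r = 0.2·std are the common defaults, not necessarily the settings of this study.

```python
# Compact SampEn sketch for a 1-D series: -ln(A/B), where A and B count
# template matches of lengths m+1 and m under a Chebyshev tolerance r.
import numpy as np

def sampen(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2   # exclude self-matches
    return -np.log(count(m + 1) / count(m))

rng = np.random.default_rng(5)
print(sampen(rng.normal(size=300)))
```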

Entropy doi: 10.3390/e23091233

Authors: Bing Sun Xiaofeng Liu

As an extension of the support vector machine, support vector regression (SVR) plays a significant role in image denoising. However, by ignoring the spatial distribution information of noisy pixels, the conventional SVR denoising model faces the bottleneck of overfitting in the case of serious noise interference, which leads to a degradation of the denoising effect. For this problem, this paper proposes a significance measurement framework for evaluating sample significance with sample spatial density information. Based on an analysis of the penalty factor in SVR, significance SVR (SSVR) is presented by assigning a sample significance factor to each sample. The refined penalty factor enables SSVR to be less susceptible to outliers in the solution process. This overcomes the drawback of SVR imposing the same penalty factor on all samples, which leads to the objective function paying too much attention to outliers, resulting in poorer regression results. As an example of the proposed framework applied to image denoising, a cutoff distance-based significance factor is instantiated to estimate the samples’ importance in SSVR. Experiments conducted on three image datasets showed that SSVR demonstrates excellent performance compared to best-in-class image denoising techniques in terms of a commonly used denoising evaluation index and visual observation.

Entropy doi: 10.3390/e23091232

Authors: Hui Wen Nies Mohd Saberi Mohamad Zalmiyah Zakaria Weng Howe Chan Muhammad Akmal Remli Yong Hui Nies

Artificial intelligence in healthcare can potentially identify the probability of contracting a particular disease more accurately. There are five common molecular subtypes of breast cancer: luminal A, luminal B, basal, ERBB2, and normal-like. Previous investigations showed that pathway-based microarray analysis could help in the identification of prognostic markers from gene expressions. For example, directed random walk (DRW) can infer a greater reproducibility power of the pathway activity between two classes of samples with a higher classification accuracy. However, most of the existing methods (including DRW) ignored the characteristics of different cancer subtypes and considered all of the pathways to contribute equally to the analysis. Therefore, an enhanced DRW (eDRW+) is proposed to identify breast cancer prognostic markers from multiclass expression data. An improved weight strategy using one-way ANOVA (F-test) and pathway selection based on the greatest reproducibility power are proposed in eDRW+. The experimental results show that eDRW+ outperforms other methods in terms of AUC. Besides this, eDRW+ identifies 294 gene markers and 45 pathway markers from the breast cancer datasets with better AUC. Therefore, the prognostic markers (pathway markers and gene markers) can be used to identify drug targets and to search for cancer subtypes with clinically distinct outcomes.

Entropy doi: 10.3390/e23091231

Authors: Xiangde Zhang Jian Zhang

Mode collapse has always been a fundamental problem in generative adversarial networks. The recently proposed Zero Gradient Penalty (0GP) regularization can alleviate the mode collapse, but it will exacerbate a discriminator’s misjudgment problem, that is, the discriminator judges that some generated samples are more real than real samples. In actual training, the discriminator will direct the generated samples to point to samples with higher discriminator outputs. The serious misjudgment problem of the discriminator will cause the generator to generate unnatural images and reduce the quality of the generation. This paper proposes Real Sample Consistency (RSC) regularization. In the training process, we randomly divided the samples into two parts and minimized the loss of the discriminator’s outputs corresponding to these two parts, forcing the discriminator to output the same value for all real samples. We analyzed the effectiveness of our method. The experimental results showed that our method can alleviate the discriminator’s misjudgment and perform better with a more stable training process than 0GP regularization. Our real sample consistency regularization improved the FID score for the conditional generation of Fake-As-Real GAN (FARGAN) from 14.28 to 9.8 on CIFAR-10. Our RSC regularization improved the FID score from 23.42 to 17.14 on CIFAR-100 and from 53.79 to 46.92 on ImageNet2012. Our RSC regularization improved the average distance between the generated and real samples from 0.028 to 0.025 on synthetic data. The loss of the generator and discriminator in standard GAN with our regularization was close to the theoretical loss and kept stable during the training process.
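A minimal sketch of one plausible reading of the RSC penalty follows: split a real batch into two random halves and penalize the squared difference between the discriminator's outputs on them. The toy discriminator and the weighting of the penalty are placeholders.

```python
# Sketch of one reading of the RSC penalty; the discriminator D and the
# penalty weighting are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

D = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

def rsc_penalty(disc, real):
    perm = torch.randperm(real.size(0))          # random split of the batch
    half = real.size(0) // 2
    a, b = real[perm[:half]], real[perm[half:2 * half]]
    return ((disc(a) - disc(b)) ** 2).mean()     # push outputs together

real = torch.randn(64, 10)                       # stand-in real batch
print(rsc_penalty(D, real).item())
# In training: d_loss = gan_loss + lam * rsc_penalty(D, real_images)
```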

Entropy doi: 10.3390/e23091230

Authors: Pamela Ercegovac Gordan Stojić Miloš Kopić Željko Stević Feta Sinani Ilija Tanackov

There is not a single country in the world that is so rich that it can remove all level crossings or provide their denivelation in order to absolutely avoid the possibility of accidents at the intersections of railways and road traffic. In the Republic of Serbia alone, the largest number of accidents occur at passive crossings, which make up three-quarters of the total number of crossings. Therefore, it is necessary to constantly find solutions to the problem of priorities when choosing level crossings where it is necessary to raise the level of security, primarily by analyzing the risk and reliability at all level crossings. This paper presents a model that enables this. The calculation of the maximal risk of a level crossing is achieved under the conditions of generating the maximum entropy in the virtual operating mode. The basis of the model is a heterogeneous queuing system. Maximum entropy is based on the mandatory application of an exponential distribution. The system is Markovian and is solved by a standard analytical concept. The basic input parameters for the calculation of the maximal risk are the geometric characteristics of the level crossing and the intensities and structure of the flows of road and railway vehicles. The real risk is based on statistical records of accidents and flow intensities. The exact reliability of the level crossing is calculated from the ratio of real and maximal risk, which enables their further comparison in order to raise the level of safety, and that is the basic idea of this paper.

Entropy doi: 10.3390/e23091229

Authors: Rabih Mezher Jack Arayro Nicolas Hascoet Francisco Chinesta

The present study addresses the discrete simulation of the flow of concentrated suspensions encountered in forming processes involving reinforced polymers, and more particularly the statistical characterization and description of the effects of the intense fiber interactions, occurring during the development of the flow-induced orientation, on the fibers’ geometrical center trajectories. The number of interactions as well as the interaction intensity depend on the fiber volume fraction and the applied shear, which should affect the stochastic trajectory. Topological data analysis (TDA) is applied to the geometrical center trajectories of the simulated fibers to prove that a characteristic pattern can be extracted depending on the flow conditions (concentration and shear rate). This work proves that TDA allows capturing and extracting, from the so-called persistence image, a pattern that characterizes the dependence of the fiber trajectory on the flow kinematics and the suspension concentration. Such a pattern could be used for classification and modeling purposes, in rheology or during processing monitoring.

Entropy doi: 10.3390/e23091228

Authors: Qifan Deng Ji Pei Wenjie Wang Bin Lin Chenying Zhang Jiantao Zhao

Impeller trimming is an economical method for broadening the range of application of a given pump, but it can degrade operational stability and efficiency. In this study, entropy production theory was utilized to analyze the variation of energy loss caused by impeller trimming based on computational fluid dynamics. Experiments and numerical simulations were conducted to investigate the energy loss and fluid-induced radial forces. The pump’s performance seriously deteriorated after impeller trimming, especially under overload conditions. Energy loss in the volute decreased after trimming under part-load conditions but increased under overload conditions, and this phenomenon meant that the pump head could not be accurately predicted by empirical equations. With the help of entropy production theory, high-energy dissipation regions were found to be mainly located in the volute discharge diffuser under overload conditions because of the flow separation and the mixing of the main flow and the stalled fluid. The increased incidence angle at the volute’s tongue after impeller trimming resulted in more serious flow separation and higher energy loss. Furthermore, the radial forces and their fluctuation amplitudes decreased under all the investigated conditions. The horizontal components of the radial forces in all cases were much higher than the vertical components.

Entropy doi: 10.3390/e23091227

Authors: Xian Ma Yongxian Wang Xiaoqian Zhu Wei Liu Wenbin Xiao Qiang Lan

The accuracy and efficiency of sound field calculations are of great concern in hydroacoustics. Recently, one-dimensional spectral methods have shown high-precision characteristics when solving the sound field, but they can solve only simplified models of underwater acoustic propagation, so their application range is small. Therefore, it is necessary to directly calculate the two-dimensional Helmholtz equation of ocean acoustic propagation. Here, we use the Chebyshev–Galerkin and Chebyshev collocation methods to solve the two-dimensional Helmholtz model equation. The Chebyshev collocation method is then used to model ocean acoustic propagation because, unlike the Galerkin method, the collocation method does not need stringent boundary conditions. Compared with the mature Kraken program, the Chebyshev collocation method exhibits higher numerical accuracy. However, the shortcoming of the collocation method is that its computational efficiency cannot satisfy the requirements of real-time applications due to the large number of calculations. We therefore implemented a parallel version of the collocation method, which effectively improves computational efficiency.
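As a one-dimensional illustration of the collocation machinery (the paper solves the two-dimensional Helmholtz equation), the sketch below builds the standard Chebyshev differentiation matrix and solves u'' + k²u = f with homogeneous Dirichlet ends; n, k, and f are arbitrary choices.

```python
# 1-D Chebyshev collocation for u'' + k^2 u = f on [-1, 1] with u(±1) = 0.
# Uses the standard differentiation matrix on Chebyshev points.
import numpy as np

def cheb(n):
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))    # diagonal from negative row sums
    return D, x

n, k = 32, 5.0
D, x = cheb(n)
L = D @ D + k**2 * np.eye(n + 1)   # Helmholtz operator
f = np.exp(x)                      # arbitrary forcing
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(L[1:-1, 1:-1], f[1:-1])   # interior solve
print(u[:5])
```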

]]>Entropy doi: 10.3390/e23091226

Authors: Garrett Mindt

The hard problem of consciousness has been a perennially vexing issue for the study of consciousness, particularly in giving a scientific and naturalized account of phenomenal experience. At the heart of the hard problem is an often-overlooked argument: the structure and dynamics (S&D) argument. In this essay, I will argue that we have good reason to suspect that the S&D argument given by David Chalmers rests on a limited conception of S&D properties, what in this essay I’m calling extrinsic structure and dynamics. I argue that if we take recent insights from the complexity sciences and from recent developments in the Integrated Information Theory (IIT) of Consciousness, we get a more nuanced picture of S&D, specifically, a class of properties I’m calling intrinsic structure and dynamics. This, I think, opens the door to a broader class of properties with which we might naturally and scientifically explain phenomenal experience, as well as the relationship between syntactic, semantic, and intrinsic notions of information. I argue that Chalmers’ characterization of structure and dynamics in his S&D argument paints them with too broad a brush and fails to account for important nuances, especially when considering a system’s intrinsic properties. Ultimately, my hope is to vindicate a certain species of explanation from the S&D argument, and by extension dissolve the hard problem of consciousness at its core, by showing that not all structure and dynamics are equal.

]]>Entropy doi: 10.3390/e23091225

Authors: Yan Yang Haoping Peng Chuang Wen

Massive numbers of droplets can be generated to form two-phase flow in steam turbines, leading to blade erosion and reduced component reliability. A condensing two-phase flow model was developed to assess the flow structure and losses, considering the nonequilibrium condensation phenomenon due to the high expansion rates of the transonic flow in linear blade cascades. A novel dehumidification strategy was proposed by introducing turbulent disturbances on the suction side. The results show that the Wilson point of the nonequilibrium condensation process was delayed by increasing the inlet superheat level at the entrance of the blade cascade. With an increase in the inlet superheat level of 25 K, the liquid fraction and condensation loss were significantly reduced, by 79% and 73%, respectively. The newly designed turbine blades not only kept the liquid-phase region away from the blade walls but also reduced the averaged liquid fraction by 28.1% and the condensation loss by 47.5% compared to the original geometry. The results provide insight into the formation and evaporation of condensed droplets inside steam turbines.

]]>Entropy doi: 10.3390/e23091224

Authors: Tianyi Wu Qing Pan Chushan Lin Lei Shi Shanghong Zhao Yijun Zhang Xingyu Wang Chen Dong

Polarization encoding has been extensively used in quantum key distribution (QKD) implementations over free-space links. However, the calculation model used to characterize channel transmittance and the quantum bit error rate (QBER) for free-space QKD has not been systematically studied. As a result, the misalignment error is often assumed to be a fixed value, which is not theoretically rigorous. In this paper, we investigate the depolarization and rotation of the signal beams resulting from the spatially dependent polarization effects of using curved optics in an off-axis configuration, where decoherence can be characterized by the Huygens–Fresnel principle and the cross-spectral density matrix (CSDM). The transmittance and misalignment error in a practical free-space QKD system can thus be estimated using this method. Furthermore, the numerical simulations clearly show that the polarization effects caused by turbulence can be effectively mitigated when good beam coherence properties are maintained.

]]>Entropy doi: 10.3390/e23091223

Authors: Chengji Liu Changhua Zhu Zhihui Li Min Nie Hong Yang Changxing Pei

We propose a continuous-variable quantum secret sharing (CVQSS) scheme based on thermal terahertz (THz) sources in inter-satellite wireless links (THz-CVQSS). In this scheme, each player first locally performs Gaussian modulation to prepare a thermal THz state and then couples it into a circulating spatiotemporal mode using a highly asymmetric beam splitter. Finally, the dealer measures the quadrature components of the received spatiotemporal mode by performing heterodyne detection, in order to share secure keys with all the players of a group. This design ensures that the key can be recovered only through the cooperative knowledge of the whole group of players; neither a single player nor any subset of the players in the group can recover the key correctly. We analyze both the security and the performance of THz-CVQSS in inter-satellite links. The results show that a long-distance inter-satellite THz-CVQSS scheme with multiple players is feasible. This work provides an effective way of building an inter-satellite quantum communication network.

]]>Entropy doi: 10.3390/e23091222

Authors: Fanghui Huang Yu Zhang Ziqing Wang Xinyang Deng

Dempster–Shafer theory (DST), which is widely used in information fusion, can process uncertain information without prior information; however, when the evidence to combine is highly conflicting, it may lead to counter-intuitive results. Moreover, the existing methods are not robust enough to process real-time and online conflicting evidence. In order to solve the above problems, a novel information fusion method is proposed in this paper. The proposed method combines the uncertainty of evidence with reinforcement learning (RL). Specifically, we consider two uncertainty degrees: the uncertainty of the original basic probability assignment (BPA) and the uncertainty of its negation. Deng entropy is used to measure the uncertainty of the BPAs, and the two uncertainty degrees are taken as the condition for measuring information quality. Adaptive conflict processing is then performed by RL together with the combination of the two uncertainty degrees. The next step is to apply Dempster’s combination rule (DCR) to achieve multi-sensor information fusion. Finally, a decision scheme based on the correlation coefficient is used to make the decision. The proposed method not only realizes adaptive management of conflicting evidence, but also improves the accuracy of multi-sensor information fusion and reduces information loss. Numerical examples verify the effectiveness of the proposed method.
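Deng entropy, the uncertainty measure used here, has a closed form: E_d(m) = -Σ_A m(A) log2( m(A) / (2^|A| - 1) ), where |A| is the cardinality of the focal element A. A minimal sketch (the example BPAs are ours, purely for illustration):

```python
import numpy as np

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.
    bpa: dict mapping focal elements (frozensets) to masses m(A)."""
    e = 0.0
    for A, m in bpa.items():
        if m > 0:
            e -= m * np.log2(m / (2 ** len(A) - 1))
    return e

# Two example BPAs over the frame {a, b, c}.
m1 = {frozenset({'a'}): 0.6, frozenset({'b', 'c'}): 0.4}
m2 = {frozenset({'a', 'b', 'c'}): 1.0}   # total ignorance: maximal uncertainty
print(deng_entropy(m1), deng_entropy(m2))
```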

]]>Entropy doi: 10.3390/e23091221

Authors: Wenhao Yan Zijing Jiang Xin Huang Qun Ding

Chaos is considered a natural candidate for encryption systems owing to its sensitivity to initial values and the unpredictability of its orbits. However, some encryption schemes based on low-dimensional chaotic systems exhibit various security defects due to their relatively simple dynamic characteristics. In order to enhance the dynamic behaviors of chaotic maps, a novel 3D infinite collapse map (3D-ICM) is proposed, and the performance of the chaotic system is analyzed from three aspects: the phase diagram, the Lyapunov exponent, and Sample Entropy. The results show that the chaotic system exhibits complex chaotic behavior and high complexity. Furthermore, an image encryption scheme based on the 3D-ICM is presented, whose security analysis indicates that the proposed image encryption scheme can resist brute-force attacks, correlation analysis, and differential attacks, so it has a higher security level.
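Sample Entropy, one of the three complexity indicators mentioned, can be estimated as in the sketch below (a common simplified variant; the parameters m = 2 and r = 0.2·std are conventional defaults, not necessarily the paper’s, and the logistic map stands in for the 3D-ICM, whose equations are not reproduced here):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy: -ln(A/B), with B and A the counts of template matches
    of lengths m and m+1 under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A chaotic logistic-map orbit scores higher than a regular sine wave.
t = np.arange(1000)
logistic = np.empty(1000); logistic[0] = 0.4
for i in range(999):
    logistic[i + 1] = 4.0 * logistic[i] * (1.0 - logistic[i])
print(sample_entropy(np.sin(0.1 * t)), sample_entropy(logistic))
```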

]]>Entropy doi: 10.3390/e23091220

Authors: Karl Friston Conor Heins Kai Ueltzhöffer Lancelot Da Costa Thomas Parr

In this treatment of random dynamical systems, we consider the existence—and identification—of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions—and the functional form of the underlying densities—have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition—and polynomial expansions—to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified—using the accompanying Hessian—to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.

]]>Entropy doi: 10.3390/e23091219

Authors: Luis Herrera Alicia Di Prisco Justo Ospino

We study fluid distributions endowed with hyperbolic symmetry, which share many common features with Lemaitre–Tolman–Bondi (LTB) solutions (e.g., they are geodesic, shearing, and nonconformally flat, and the energy density is inhomogeneous). As such, they may be considered as hyperbolic symmetric versions of LTB, with spherical symmetry replaced by hyperbolic symmetry. We start by considering pure dust models, and afterwards, we extend our analysis to dissipative models with anisotropic pressure. In the former case, the complexity factor is necessarily nonvanishing, whereas in the latter cases, models with a vanishing complexity factor are found. The remarkable fact is that all solutions satisfying the vanishing complexity factor condition are necessarily nondissipative and satisfy the stiff equation of state.

]]>Entropy doi: 10.3390/e23091218

Authors: Adrian Moldovan Angel Caţaron Răzvan Andonie

Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, the TE can be used to quantify the relationships between neuron output pairs located in different layers. Our focus is on how to include the TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures which integrates the TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed. On the flip side, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, to achieve a reasonable computational overhead–accuracy trade-off, it is efficient to consider only the inter-neural information transfer of the neuron pairs between the last two fully connected layers. The TE acts as a smoothing factor, generating stability and becoming active only periodically, not after processing each input sample. Therefore, we can consider the TE in our model to be a slowly changing meta-parameter.
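For orientation, TE from X to Y (lag 1) is TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. The sketch below is a generic binned plug-in estimator for two scalar series, not the authors’ CNN training mechanism; the bin count and toy data are assumptions.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate (in bits) of lag-1 transfer entropy from x to y."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    singles = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        # p * log[ p(y1|y0,x0) / p(y1|y0) ], written with raw counts (n cancels).
        te += (c / n) * np.log2((c * singles[y0]) / (pairs_yy[(y1, y0)] * pairs_yx[(y0, x0)]))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)   # y is driven by the past of x
print(transfer_entropy(x, y), transfer_entropy(y, x))   # first value clearly larger
```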

]]>Entropy doi: 10.3390/e23091217

Authors: Jindong Wang Xin Chen Haiyang Zhao Yanyang Li Zujian Liu

In practical engineering applications, the vibration signals collected by sensors often contain outliers, which seriously affects the accuracy with which source signals can be separated from the observed signals. Mixing matrix estimation is crucial to underdetermined blind source separation (UBSS), as it determines the accuracy of source signal recovery. Therefore, a two-stage clustering method combining hierarchical clustering and K-means is proposed in this paper to improve the reliability of the estimated mixing matrix. The proposed method addresses the two major problems of the K-means algorithm: the random selection of initial cluster centers and the algorithm’s sensitivity to outliers. Firstly, the observed signals are clustered by hierarchical clustering to obtain the cluster centers. Secondly, the cosine distance is used to eliminate the outliers deviating from the cluster centers. Then, the initial cluster centers are obtained by calculating the mean value of each remaining cluster. Finally, the mixing matrix is estimated with the improved K-means, and the sources are recovered using the least squares method. Simulations and reciprocating compressor fault experiments demonstrate the effectiveness of the proposed method.
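A rough sketch of the two-stage pipeline on synthetic sparse sources follows; the cosine threshold, source sparsity, and problem sizes are our illustrative assumptions, and the direction-folding preprocessing is a common UBSS convention rather than a detail taken from the paper.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))                                       # unknown 2x3 mixing matrix
S = rng.laplace(size=(3, 2000)) * (rng.random((3, 2000)) > 0.9)   # sparse sources
X = A @ S

# Keep informative samples; fold unit-norm observation directions onto a half-plane.
cols = X[:, np.linalg.norm(X, axis=0) > 1e-3]
U = cols / np.linalg.norm(cols, axis=0)
U = U * np.where(U[0] >= 0, 1.0, -1.0)
P = U.T

# Stage 1: hierarchical clustering provides provisional centers.
lab = AgglomerativeClustering(n_clusters=3).fit_predict(P)
centers = np.vstack([P[lab == k].mean(axis=0) for k in range(3)])

# Discard outliers whose cosine similarity to their provisional center is low.
cos = np.einsum('ij,ij->i', P, centers[lab]) / np.linalg.norm(centers[lab], axis=1)
keep = cos > 0.95
init = np.vstack([P[keep][lab[keep] == k].mean(axis=0) for k in range(3)])

# Stage 2: K-means initialized with the cleaned cluster means.
A_hat = KMeans(n_clusters=3, init=init, n_init=1).fit(P[keep]).cluster_centers_.T
print(A_hat)   # columns estimate mixing-matrix directions up to sign and scale
```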

]]>Entropy doi: 10.3390/e23091216

Authors: Jedidiah Yanez-Sierra Arturo Diaz-Perez Victor Sosa-Sosa

One of the main problems in graph analysis is the correct identification of relevant nodes for spreading processes. Spreaders are crucial for accelerating/hindering information diffusion, increasing product exposure, controlling diseases, rumors, and more. Correct identification of spreaders in graph analysis is a relevant task to optimally use the network structure and ensure a more efficient flow of information. Additionally, network topology has been shown to play a relevant role in spreading processes. In this sense, most of the existing methods based on local, global, or hybrid centrality measures select relevant nodes based only on their ranking values and do not intentionally consider their distribution on the graph. In this paper, we propose a simple yet effective method that takes advantage of the underlying graph topology to guarantee that the selected nodes are not only relevant but also well scattered. Our proposal also suggests how to define the number of spreaders to select. The approach is composed of two phases: first, graph partitioning; and second, identification and distribution of relevant nodes. We tested our approach by applying the SIR spreading model over nine real complex networks. The experimental results showed that the sets of relevant nodes identified by our approach were more influential and better scattered than those selected by several reference algorithms, including degree, closeness, betweenness, VoteRank, HybridRank, and IKS. The results further showed an improvement in the propagation influence value when combining our distribution strategy with classical metrics, such as degree, outperforming computationally more complex strategies. Moreover, our proposal has a good computational complexity and can be applied to large-scale networks.
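The SIR evaluation protocol used here is straightforward to reproduce in spirit; the sketch below estimates the average outbreak size seeded by the top-degree nodes on a synthetic graph (graph model, infection probability, and seed-set size are our assumptions, not the paper’s benchmark settings).

```python
import random
import networkx as nx

def sir_spread(G, seeds, beta=0.1, trials=200, rng=random.Random(7)):
    """Average final outbreak size of a discrete-time SIR process (recovery after one step)."""
    total = 0
    for _ in range(trials):
        infected, recovered = set(seeds), set()
        while infected:
            new = set()
            for u in infected:
                for v in G.neighbors(u):
                    if v not in infected and v not in recovered and rng.random() < beta:
                        new.add(v)
            recovered |= infected
            infected = new - recovered
        total += len(recovered)
    return total / trials

G = nx.barabasi_albert_graph(1000, 3, seed=1)
top_degree = [n for n, _ in sorted(G.degree, key=lambda t: -t[1])[:5]]
print(sir_spread(G, top_degree))
```

Comparing this value across seed sets chosen by different centralities (or by a partition-and-scatter strategy) is the kind of experiment the abstract describes.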

]]>Entropy doi: 10.3390/e23091215

Authors: Gil Cohen Mahmoud Qadan

The popularity of SPACs (Special Purpose Acquisition Companies) has grown dramatically in recent years as a substitute for the traditional IPO (Initial Public Offering). We modeled the average annual return for SPAC investors and found that this financial tool produced an annual return of 17.3%. We then constructed an information model that examined a SPAC's excess returns during the 60 days after a potential merger or acquisition had been announced. We found that the announcement had a major impact on the SPAC's share price over the 60 days, delivering on average 0.69% daily excess returns over the IPO portfolio and 31.6% cumulative excess returns for the entire period. Relative to IPOs, the cumulative excess returns of SPACs rose dramatically in the first few days after the potential merger or acquisition announcement, up to the 26th day. They then declined but rose again until the 48th day after the announcement. Finally, the SPAC's structure reduced the investors' risk. Thus, if investors buy a SPAC stock immediately after a potential merger or acquisition has been announced and hold it for 48 days, they can reap substantial short-term returns.

]]>Entropy doi: 10.3390/e23091214

Authors: Yihao Luo Shiqiang Zhang Yueqi Cao Huafei Sun

The Wasserstein distance, especially between symmetric positive-definite (SPD) matrices, has had a broad and deep influence on the development of artificial intelligence (AI) and other branches of computer science. In this paper, by employing the Wasserstein metric on SPD(n), we obtain computationally feasible expressions for some geometric quantities, including geodesics, exponential maps, the Riemannian connection, Jacobi fields and curvatures, particularly the scalar curvature. Furthermore, we discuss the behavior of geodesics and prove that the manifold is globally geodesically convex. Finally, we design algorithms for point cloud denoising and edge detection in a noisy image based on the Wasserstein curvature on SPD(n). The experimental results show the efficiency and robustness of our curvature-based methods.
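For context, the underlying (Bures–)Wasserstein distance on SPD(n) admits the well-known closed form d(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}); the geometric quantities in the paper are built on this metric. A minimal check (the example matrices are ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein_spd(A, B):
    """2-Wasserstein (Bures) distance between zero-mean Gaussians with covariances A, B."""
    rA = sqrtm(A)
    cross = sqrtm(rA @ B @ rA)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return np.sqrt(max(d2.real, 0.0))   # clip tiny negative values from round-off

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.eye(2)
print(wasserstein_spd(A, B))
```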

]]>Entropy doi: 10.3390/e23091213

Authors: Adina Criste Iulia Lupu Radu Lupu

The pattern of financial cycles in the European Union has direct impacts on financial stability and economic sustainability in view of the adoption of the euro. The purpose of the article is to identify the degree of coherence between the credit cycles of the countries potentially seeking to adopt the euro and the credit cycle inside the Eurozone. We first estimate the credit cycles in the selected countries and in the euro area (at the aggregate level), filtering the series with the Hodrick–Prescott filter for the period 1999Q1–2020Q4. Based on these values, we compute the indicators that define credit cycle similarity and synchronicity in the selected countries, as well as a set of entropy measures (block entropy, entropy rate, Bayesian entropy), to show the high degree of heterogeneity, noting that the global financial crisis changed the credit cycle patterns in some countries. Our novel approach provides analytical tools to support euro adoption decisions, showing how the coherence of credit cycles can be increased among European countries and how national macroprudential policies can be better coordinated, especially in light of the changes caused by the pandemic crisis.
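The cycle-extraction step is standard and easy to sketch; below, Hodrick–Prescott filtering with the usual quarterly smoothing parameter λ = 1600 on synthetic data, plus a simple sign-concordance synchronicity index (an illustration of the idea, not the paper's exact similarity/synchronicity indicators).

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
t = np.arange(88)                                   # 1999Q1-2020Q4 = 88 quarters
credit = 100 + 0.5 * t + 5 * np.sin(2 * np.pi * t / 32) + rng.normal(0, 1, 88)

# HP filter with lambda = 1600, the standard choice for quarterly data.
cycle_a, trend_a = hpfilter(credit, lamb=1600)
cycle_b, _ = hpfilter(credit + rng.normal(0, 3, 88), lamb=1600)

def synchronicity(c1, c2):
    """Share of quarters in which two credit cycles have the same sign."""
    return np.mean(np.sign(c1) == np.sign(c2))

print(synchronicity(cycle_a, cycle_b))
```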

]]>Entropy doi: 10.3390/e23091211

Authors: Peter Tsung-Wen Yen Kelin Xia Siew Ann Cheong

In econophysics, the achievements of information filtering methods over the past 20 years, such as the minimal spanning tree (MST) by Mantegna and the planar maximally filtered graph (PMFG) by Tumminello et al., should be celebrated. Here, we show how one can systematically improve upon this paradigm along two separate directions. First, we used topological data analysis (TDA) to extend the notions of nodes and links in networks to faces, tetrahedrons, or k-simplices in simplicial complexes. Second, we used the Ollivier-Ricci curvature (ORC) to acquire geometric information that cannot be provided by simple information filtering. In this sense, MSTs and PMFGs are but first steps to revealing the topological backbones of financial networks. This is something that TDA can elucidate more fully, following which the ORC can help us flesh out the geometry of financial networks. We applied these two approaches to a recent stock market crash in Taiwan and found that, beyond fusions and fissions, other non-fusion/fission processes such as cavitation, annihilation, rupture, healing, and puncture might also be important. We also successfully identified neck regions that emerged during the crash, based on their negative ORCs, and performed a case study on one such neck region.

]]>Entropy doi: 10.3390/e23091212

Authors: Roland Riek Atanu Chatterjee

Causality describes the process and consequences of an action: a cause has an effect. Causality is preserved in classical physics as well as in the special and general theories of relativity. Surprisingly, causality as a relationship between a cause and its effect is not considered a law or a principle in any of these theories. Its existence in physics has even been challenged by prominent opponents, in part due to the time-symmetric nature of the physical laws. With the use of the reduced action and the least action principle of Maupertuis, along with a discrete dynamical time physics yielding an arrow of time, causality is defined as the partial spatial derivative of the reduced action; as such, it is position- and momentum-dependent and requires the presence of space. With this definition the system evolves from one step to the next without the need of time, while (discrete) time can be reconstructed.

]]>Entropy doi: 10.3390/e23091210

Authors: Elzbieta Turska Szymon Jurga Jaroslaw Piskorski

We apply tree-based classification algorithms, namely classification trees (with the use of the rpart algorithm), random forests, and XGBoost, to detect mood disorder in a group of 2508 lower secondary school students. The dataset presents many challenges, the most important of which are the large amount of missing data and the heavy class imbalance (there are few severe mood disorder cases). We find that all algorithms are specific, but only the rpart algorithm is sensitive, i.e., able to detect real cases of mood disorder. We conclude that this is because the rpart algorithm uses surrogate variables to handle missing data. The most important social-studies-related result is that the adolescents' relationships with their parents are the single most important factor in developing mood disorders, far more important than other factors such as socio-economic status or school success.

]]>Entropy doi: 10.3390/e23091209

Authors: Fuwang Wang Bin Lu Xiaogang Kang Rongrong Fu

The accurate detection and alleviation of driving fatigue are of great significance to traffic safety. In this study, we apply the modified multi-scale entropy (MMSE) approach, based on variational mode decomposition (VMD), to driving fatigue detection. Firstly, VMD was used to decompose the EEG into multiple intrinsic mode functions (IMFs); then, the best IMFs and scale factors were selected using the least squares method (LSM). Finally, the MMSE features were extracted. Compared with the traditional sample entropy (SampEn), the VMD-MMSE method identifies the characteristics of driving fatigue more effectively. The VMD-MMSE characteristics, combined with a subjective questionnaire (SQ), were used to analyze the change trends of driving fatigue under two driving modes: a normal driving mode and an interesting auditory stimulation mode. The results show that the interesting auditory stimulation adopted in this paper, which simply involves playing interesting auditory information on the vehicle-mounted player, effectively relieves driving fatigue. Compared with traditional fatigue-relieving methods, such as sleeping and drinking coffee, this method can relieve fatigue in real time while the driver is driving normally.

]]>Entropy doi: 10.3390/e23091208

Authors: Wantao Jia Yong Xu Dongxi Li Rongchun Hu

In the present paper, the statistical responses of two special prey–predator-type ecosystem models excited by combined Gaussian and Poisson white noise are investigated by generalizing the stochastic averaging method. First, we unify the deterministic models for the two cases where prey are abundant and where the predator population is large, respectively. Then, under some natural assumptions of small perturbations and system parameters, the stochastic models are introduced. The stochastic averaging method is generalized to compute the statistical responses, described by stationary probability density functions (PDFs) and moments, for the population densities in the ecosystems using a perturbation technique. Based on these statistical responses, the effects of the ecosystem parameters and the noise parameters on the stationary PDFs and moments are discussed. Additionally, we calculate the Gaussian approximate solution to illustrate the effectiveness of the perturbation results. The results show that the larger the mean arrival rate, the smaller the difference between the perturbation solution and the Gaussian approximate solution. In addition, direct Monte Carlo simulation is performed to validate the above results.

]]>Entropy doi: 10.3390/e23091207

Authors: Qisong Song Shaobo Li Qiang Bai Jing Yang Ansi Zhang Xingxing Zhang Longxuan Zhe

Robot manipulator trajectory planning is one of the core robot technologies, and the design of controllers can improve the trajectory accuracy of manipulators. However, most of the controllers designed so far have not been able to effectively solve the nonlinearity and uncertainty problems of high-degree-of-freedom manipulators. In order to overcome these problems and improve the trajectory performance of high-degree-of-freedom manipulators, a manipulator trajectory planning method based on a radial basis function (RBF) neural network is proposed in this work. Firstly, a 6-DOF robot experimental platform was designed and built. Secondly, the overall manipulator trajectory planning framework was designed, which included the manipulator kinematics and dynamics and a quintic polynomial interpolation algorithm. Then, an adaptive robust controller based on an RBF neural network was designed to deal with the nonlinearity and uncertainty problems, and Lyapunov theory was used to ensure the stability of the manipulator control system and the convergence of the tracking error. Finally, to test the method, a simulation and an experiment were carried out. The simulation results showed that the proposed method improved the response and tracking performance, reduced the adjustment time and chattering, and ensured the smooth operation of the manipulator during trajectory planning. The experimental results verified the effectiveness and feasibility of the method proposed in this paper.
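The core approximation idea (an RBF network learning an unknown model term online from the tracking error) can be sketched as follows; this is a toy scalar illustration with an assumed disturbance and a gradient-style update mimicking the usual adaptive law, not the paper’s 6-DOF controller.

```python
import numpy as np

class RBFNet:
    """Gaussian RBF network f(x) = W^T h(x), with h_j(x) = exp(-|x - c_j|^2 / b^2)."""
    def __init__(self, centers, width, n_out, lr=0.05):
        self.c, self.b, self.lr = centers, width, lr
        self.W = np.zeros((len(centers), n_out))
    def hidden(self, x):
        return np.exp(-np.sum((self.c - x) ** 2, axis=1) / self.b ** 2)
    def __call__(self, x):
        return self.hidden(x) @ self.W
    def adapt(self, x, err):
        # Gradient-style weight update, mimicking an adaptive law W_dot ~ h(x) e^T.
        self.W += self.lr * np.outer(self.hidden(x), err)

# Learn an unknown scalar disturbance d(q) = 0.5 sin(q) online.
net = RBFNet(centers=np.linspace(-np.pi, np.pi, 15)[:, None], width=0.8, n_out=1)
rng = np.random.default_rng(4)
for _ in range(5000):
    q = rng.uniform(-np.pi, np.pi)
    err = 0.5 * np.sin(q) - net(np.array([q]))
    net.adapt(np.array([q]), err)
print(float(net(np.array([1.0]))), 0.5 * np.sin(1.0))   # learned vs. true value
```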

]]>Entropy doi: 10.3390/e23091206

Authors: Guoxin Zuo Kang Fu Xianhua Dai Liwei Zhang

For count data, a zero-inflated model can work perfectly well with an excess of zeroes, and the generalized Poisson model can tackle over- or under-dispersion; however, most models cannot simultaneously deal with both zero-inflated or zero-deflated data and over- or under-dispersion. Ear diseases are important in healthcare, and ear disease data fall into this category of count data. This paper introduces a generalized Poisson Hurdle model that works with count data having both too many/few zeroes and a sample variance not equal to the mean. To estimate the parameters, we use the generalized method of moments. In addition, the asymptotic normality and efficiency of these estimators are established. Moreover, the model is applied to ear disease data obtained from the New South Wales Health Research Council in 1990. The model performs better than both the generalized Poisson model and the Hurdle model.

]]>Entropy doi: 10.3390/e23091205

Authors: Amnon Moalem Alexander Gersten

Quantum equations for massless particles of any spin are considered in stationary uncharged axially symmetric spacetimes. It is demonstrated that up to a normalization function, the angular wave function does not depend on the metric and practically is the same as in the Minkowskian case. The radial wave functions satisfy second order nonhomogeneous differential equations with three nonhomogeneous terms, which depend in a unique way on time and space curvatures. In agreement with the principle of equivalence, these terms vanish locally, and the radial equations reduce to the same homogeneous equations as in Minkowski spacetime.

]]>Entropy doi: 10.3390/e23091204

Authors: Adèle Helena Ribeiro Maciel Calebe Vidal João Ricardo Sato André Fujita

Graphs/networks have become a powerful analytical approach for data modeling. Moreover, with the advances in sensor technology, dynamic time-evolving data have become more common. In this context, one point of interest is a better understanding of the information flow within and between networks. Thus, we aim to infer Granger causality (G-causality) between networks' time series. In this case, the straightforward application of the well-established vector autoregressive model is not feasible. Consequently, we require a theoretical framework for modeling time-varying graphs. One possibility is to consider a mathematical graph model with time-varying parameters (assumed to be random variables) that generates the network. Suppose we identify G-causality between the graph models' parameters. In that case, we could use it to define G-causality between graphs. Here, we show that even if the model is unknown, the spectral radius is a reasonable estimate of some random graph model parameters. We illustrate the application of our proposal in a study of the relationship between the brain hemispheres of controls and children diagnosed with Autism Spectrum Disorder (ASD). We show that the intensity of G-causality from the brain's right to left hemisphere differs between ASD and controls.
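The pipeline reduces each graph snapshot to its spectral radius and runs a standard G-causality test on the resulting scalar series. A sketch under assumed toy dynamics (Erdős–Rényi snapshots whose density in network B follows the past radius of network A; all constants are illustrative):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
T, n = 200, 30

def spectral_radius(p):
    """Largest absolute eigenvalue of a random undirected Erdos-Renyi adjacency matrix."""
    A = np.triu(rng.random((n, n)) < p, 1)
    A = (A + A.T).astype(float)
    return np.max(np.abs(np.linalg.eigvals(A)))

rho_a = np.array([spectral_radius(rng.uniform(0.05, 0.15)) for _ in range(T)])
# Network B's edge density is driven by the past spectral radius of network A.
rho_b = np.array([spectral_radius(0.01 + 0.03 * rho_a[max(t - 1, 0)]) for t in range(T)])

# Test whether the spectral-radius series of A Granger-causes that of B
# (the second column is tested as a cause of the first).
grangercausalitytests(np.column_stack([rho_b, rho_a]), maxlag=2)
```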

]]>Entropy doi: 10.3390/e23091203

Authors: Qirui Gong Yanlin Ge Lingen Chen Shuangshaung Shi Huijun Feng

Based on the model of the irreversible rectangular cycle established in the previous literature, in this paper, finite-time thermodynamics theory is applied to analyze the performance characteristics of an irreversible rectangular cycle, firstly taking power density and effective power as the objective functions. Then, four performance indicators of the cycle, namely the thermal efficiency, dimensionless power output, dimensionless effective power, and dimensionless power density, are optimized with the cycle expansion ratio as the optimization variable by applying the nondominated sorting genetic algorithm II (NSGA-II) and considering four-objective, three-objective, and two-objective optimization combinations. Finally, the optimal results are selected through three decision-making methods. The results show that although the efficiency of the irreversible rectangular cycle at the maximum power density point is lower than that at the maximum power output point, the cycle at the maximum power density point acquires a smaller size parameter. The efficiency at the maximum effective power point is always larger than that at the maximum power output point. When multi-objective optimization is performed on the dimensionless power output, dimensionless effective power, and dimensionless power density, the deviation index obtained from the technique for order preference by similarity to an ideal solution (TOPSIS) decision-making method is the smallest, which means this result is the best.
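For readers unfamiliar with TOPSIS, the sketch below computes the standard closeness coefficient for a toy Pareto set (the matrix, weights, and equal weighting are our assumptions; the paper’s specific deviation index is not reproduced here).

```python
import numpy as np

def topsis(M, weights, benefit):
    """TOPSIS ranking: closeness of each alternative (row) to the ideal solution.
    benefit[j] is True if criterion j is to be maximized."""
    R = M / np.linalg.norm(M, axis=0)              # vector-normalized decision matrix
    V = R * weights
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Toy Pareto set: columns = (power output, effective power, power density), all benefits.
pareto = np.array([[0.90, 0.70, 0.60],
                   [0.80, 0.85, 0.75],
                   [0.60, 0.90, 0.95]])
score = topsis(pareto, weights=np.full(3, 1 / 3), benefit=np.array([True] * 3))
print(score.argmax())   # index of the compromise design
```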

]]>Entropy doi: 10.3390/e23091202

Authors: Luca Spolladore Michela Gelfusa Riccardo Rossi Andrea Murari

Model selection criteria are widely used to identify the model that best represents the data among a set of potential candidates. Among the different model selection criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC) are the most popular and best understood. In the derivation of these indicators, it was assumed that the model's dependent variables have already been properly identified and that the entries are not affected by significant uncertainties. These issues can become quite serious when investigating complex systems, especially when variables are highly correlated and the measurement uncertainties associated with them are not negligible. More sophisticated versions of these criteria, capable of better detecting spurious relations between variables when non-negligible noise is present, are proposed in this paper. Their derivation starts from a Bayesian statistics framework and adds an a priori Chi-squared probability distribution function of the model, dependent on a specifically defined information-theoretic quantity that takes into account the redundancy between the dependent variables. The performance of the proposed versions of these criteria is assessed through a series of systematic simulations, using synthetic data for various classes of functions and noise levels. The results show that the upgraded formulation of the criteria clearly outperforms the traditional one in most of the cases reported.
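For reference, the baseline criteria being upgraded are AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L, with k parameters, n samples, and maximized likelihood L. A minimal sketch of the standard versions on synthetic data (the polynomial family and noise level are our assumptions; the paper’s noise-aware variants are not reproduced here):

```python
import numpy as np

def aic_bic(y, y_hat, k):
    """AIC and BIC for a least-squares fit with k parameters and Gaussian residuals."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)   # maximized log-likelihood
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 100)     # true model is linear

for deg in (1, 2, 5):
    coef = np.polyfit(x, y, deg)
    aic, bic = aic_bic(y, np.polyval(coef, x), deg + 1)
    print(deg, round(aic, 1), round(bic, 1))    # both criteria should favour deg = 1
```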

]]>Entropy doi: 10.3390/e23091201

Authors: Mohamed Kayid

As an alternative to well-known survival models such as proportional hazard rates and proportional mean residual lives, the proportional vitalities model has been introduced in the literature. In this paper, further stochastic ordering properties of a dynamic version of the model with a random vitality growth parameter are investigated. Examples are presented to illustrate the established properties of the model. The potential for inference about the parameters of the proportional vitalities model, possibly with time-varying effects, is also discussed.

]]>Entropy doi: 10.3390/e23091200

Authors: Yong Shen Wangzhen Cai Hongwei Kang Xingping Sun Qingyi Chen Haigang Zhang

Particle swarm optimization (PSO) has the disadvantages of easily getting trapped in local optima and low search accuracy. Scores of approaches have been used to improve the diversity, search accuracy, and results of PSO, but the balance between exploration and exploitation remains sub-optimal. Many scholars have divided the population into multiple sub-populations with the aim of managing it in space. In this paper, a multi-stage search strategy, dominated by mutual repulsion among particles and supplemented by attraction, is proposed to control the traits of the population. From the angle of iteration time, the algorithm is able to adequately enhance the entropy of the population while still satisfying convergence, creating a more balanced search process. The strategy achieved satisfactory results on the CEC2017 test functions when used to improve both the standard PSO and an improved PSO.

]]>Entropy doi: 10.3390/e23091199

Authors: Lina Zhao Jianqing Li Xiangkui Wan Shoushui Wei Chengyu Liu

The entropy algorithm is an important nonlinear method for cardiovascular disease detection due to its power in analyzing short-term time series. In a previous study, we proposed a new entropy-based atrial fibrillation (AF) detector, i.e., EntropyAF, which showed a high classification accuracy in identifying AF and non-AF rhythms. As a variant of entropy measures, EntropyAF has two parameters that need to be initialized before the calculation: (1) the tolerance threshold r and (2) the similarity weight n. In this study, a comprehensive analysis for determining these two parameters is presented, aiming to achieve a high detection accuracy for AF events. Data were from the MIT-BIH AF database. RR interval recordings were segmented using a 30-beat time window. The parameters r and n were initialized at relatively small values and then gradually increased, and the best parameter combination was finally determined using grid searching. AUC (area under curve) values from the receiver operating characteristic (ROC) curve were compared under different combinations of the parameters r and n, and the results demonstrated that the selection of these two parameters plays an important role in AF/non-AF classification. Small values of the parameters r and n lead to a better detection accuracy than other selections. The best AUC value for AF detection was 98.15%, and the corresponding parameter combinations for EntropyAF were as follows: r = 0.01 and n = 0.0625, 0.125, 0.25, or 0.5; r = 0.05 and n = 0.0625, 0.125, or 0.25; and r = 0.10 and n = 0.0625 or 0.125.

]]>Entropy doi: 10.3390/e23091198

Authors: Stefano Cusumano Łukasz Rudnicki

Recent years have seen the flourishing of research devoted to quantum effects on mesoscopic and macroscopic scales. In this context, in Entropy 2019, 21, 705, a formalism aiming at describing macroscopic quantum fields, dubbed the Reduced State of the Field (RSF), was envisaged. While, in the original work, a proper notion of entropy for macroscopic fields, together with their dynamical equations, was derived, here we expand the thermodynamic analysis of the RSF, discussing the notion of heat, solving the dynamical equations in various regimes of interest, and showing the thermodynamic implications of these solutions.

]]>Entropy doi: 10.3390/e23091197

Authors: Arkady Plotnitsky

This article reconsiders the concept of physical reality in quantum theory and the concept of quantum measurement, following Bohr, whose analysis of quantum measurement led him to his concept of a (quantum) “phenomenon,” referring to “the observations obtained under the specified circumstances” in the interaction between quantum objects and measuring instruments. This situation makes the terms “observation” and “measurement,” as conventionally understood, inapplicable. These terms are remnants of classical physics, or of still earlier history from which classical physics inherited them. As defined here, a quantum measurement does not measure any preexisting property of the ultimate constitution of the reality responsible for quantum phenomena. An act of measurement establishes a quantum phenomenon by an interaction between the instrument and the quantum object or, in the present view, the ultimate constitution of the reality responsible for quantum phenomena and, at the time of measurement, also quantum objects. In the view advanced in this article, in contrast to that of Bohr, quantum objects, such as electrons or photons, are assumed to exist only at the time of measurement and not independently, a view that redefines the concept of a quantum object as well. This redefinition becomes especially important in high-energy quantum regimes and quantum field theory, and allows this article to define a new concept of quantum field. The article also considers, now following Bohr, the quantum measurement as the entanglement between quantum objects and measuring instruments. The argument of the article is grounded in the concept of “reality without realism” (RWR), as underlying quantum measurement thus understood, and the view, the RWR view, of quantum theory defined by this concept. The RWR view places a stratum of physical reality thus designated, here the reality ultimately responsible for quantum phenomena, beyond representation or knowledge, or even conception, and defines the corresponding set of interpretations of quantum mechanics or quantum field theory, such as the one assumed in this article, in which, again, not only quantum phenomena but also quantum objects are (idealizations) defined by measurement. As such, the article also offers a broadly conceived response to J. Bell’s argument “against ‘measurement’”.

]]>Entropy doi: 10.3390/e23091196

Authors: Jianhua Song Zhe Zhang

Magnetic resonance imaging (MRI) segmentation is a fundamental and significant task since it can guide subsequent clinical diagnosis and treatment. However, images are often corrupted by defects such as low contrast, noise, intensity inhomogeneity, and so on. Therefore, a weighted level set model (WLSM) is proposed in this study to segment MRI images of inhomogeneous intensity degraded by noise and weak boundaries. First, in order to segment the intertwined regions of brain tissue accurately, a weighted neighborhood information measure scheme based on local multi-information and a kernel function is designed. Then, the membership function of fuzzy c-means clustering is used as the spatial constraint of the level set model to overcome the sensitivity of the level set to initialization, so that the evolution of the level set function can adapt to different tissue information. Finally, the distance regularization term in the level set function is replaced by a double potential function to ensure the stability of the energy function in the evolution process. Both real and synthetic MRI images show the effectiveness and performance of WLSM. In addition, compared with several state-of-the-art models, the segmentation accuracy and Jaccard similarity coefficient obtained by WLSM are increased by 0.0586, 0.0362 and 0.1087, 0.0703, respectively.

]]>Entropy doi: 10.3390/e23091193

Authors: Toufik Boubehziz Carlos Quesada-Granja Claire Dupont Pierre Villon Florian De Vuyst Anne-Virginie Salsac

An innovative data-driven model-order reduction technique is proposed to model dilute micrometric or nanometric suspensions of microcapsules, i.e., microdrops protected by a thin hyperelastic membrane, which are used in healthcare as innovative drug vehicles. We consider a microcapsule flowing in a similarly sized microfluidic channel and systematically vary the governing parameters, namely the capillary number (the ratio of the viscous to elastic forces) and the confinement ratio (the ratio of the capsule to tube size). The resulting space-time-parameter problem is solved using two global POD reduced bases, determined in the offline stage for the space and parameter variables, respectively. A suitable low-order spatial reduced basis is then computed in the online stage for any new parameter instance. The time evolution of the capsule dynamics is obtained by identifying the nonlinear low-order manifold of the reduced variables; for that, a point cloud of reduced data is computed and a diffuse approximation method is used. Numerical comparisons between the full-order fluid-structure interaction model and the reduced-order one confirm both the accuracy and stability of the reduction technique over the whole admissible parameter domain. We believe that such an approach can be applied to a broad range of coupled problems, especially those involving quasistatic models of structural mechanics.
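The offline/online POD split can be illustrated with a plain SVD of a snapshot matrix; the synthetic snapshots, the 99% energy threshold, and the problem sizes below are our assumptions, not the capsule model’s.

```python
import numpy as np

rng = np.random.default_rng(2)
# Snapshot matrix: each column is a flattened state at one (time, parameter) sample.
modes = np.outer(np.sin(np.linspace(0, 3, 3000)), rng.normal(size=200)) * 10
snapshots = modes + 0.1 * rng.normal(size=(3000, 200))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1      # modes capturing 99% of the snapshot energy
basis = U[:, :r]                                # offline-stage global POD basis

# Online stage: project a new snapshot onto the reduced basis and reconstruct.
x_new = snapshots[:, 0]
x_rec = basis @ (basis.T @ x_new)
print(r, np.linalg.norm(x_new - x_rec) / np.linalg.norm(x_new))
```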

]]>Entropy doi: 10.3390/e23091195

Authors: Tai-Danae Bradley

We share a small connection between information theory, algebra, and topology—namely, a correspondence between Shannon entropy and derivations of the operad of topological simplices. We begin with a brief review of operads and their representations with topological simplices and the real line as the main example. We then give a general definition for a derivation of an operad in any category with values in an abelian bimodule over the operad. The main result is that Shannon entropy defines a derivation of the operad of topological simplices, and that for every derivation of this operad there exists a point at which it is given by a constant multiple of Shannon entropy. We show this is compatible with, and relies heavily on, a well-known characterization of entropy given by Faddeev in 1956 and a recent variation given by Leinster.

]]>Entropy doi: 10.3390/e23091189

Authors: Rehab Ali Ibrahim Laith Abualigah Ahmed A. Ewees Mohammed A. A. Al-qaness Dalia Yousri Samah Alshathri Mohamed Abd Elaziz

With the widespread use of intelligent information systems, massive amounts of data with many irrelevant, noisy, and redundant features are collected, and many features must be handled. Therefore, introducing an efficient feature selection (FS) approach becomes a challenging aim. In the recent decade, various artificial methods and swarm models inspired by biological and social systems have been proposed to solve different problems, including FS. Thus, in this paper, an innovative approach is proposed based on a hybrid integration of two intelligent algorithms, electric fish optimization (EFO) and the arithmetic optimization algorithm (AOA), to boost the exploration stage of EFO so as to process high-dimensional FS problems with a remarkable convergence speed. The proposed EFOAOA is examined on eighteen datasets for different real-life applications. The EFOAOA results are compared with a set of recent state-of-the-art optimizers using a set of statistical metrics and the Friedman test. The comparisons show the positive impact of integrating the AOA operator into the EFO, as the proposed EFOAOA can identify the most important features with high accuracy and efficiency: compared to the other FS methods, it obtained the lowest number of features and the highest accuracy in 50% and 67% of the datasets, respectively.

]]>Entropy doi: 10.3390/e23091194

Authors: Ge Zhang Qiong Yang Guotong Li Jiaxing Leng Mubiao Yan

Detection of faults at the incipient stage is critical to improving the availability and continuity of satellite services. The application of a local optimum projection vector and the Kullback–Leibler (KL) divergence can improve the detection rate of incipient faults. However, this suffers from the problem of high time complexity. We propose decomposing the KL divergence in the original optimization model and applying the property of the generalized Rayleigh quotient to reduce time complexity. Additionally, we establish two distribution models for subfunctions F1(w) and F3(w) to detect the slight anomalous behavior of the mean and covariance. The effectiveness of the proposed method was verified through a numerical simulation case and a real satellite fault case. The results demonstrate the advantages of low computational complexity and high sensitivity to incipient faults.

]]>Entropy doi: 10.3390/e23091192

Authors: Mark P. Holland Alef E. Sterk

Suppose (f, X, μ) is a measure-preserving dynamical system and ϕ: X → R a measurable observable. Let X_i = ϕ ∘ f^(i−1) denote the time series of observations on the system, and consider the maxima process M_n := max{X_1, …, X_n}. Under linear scaling of M_n, its asymptotic statistics are usually captured by a three-parameter generalised extreme value distribution. This assumes certain regularity conditions on the measure density and the observable. We explore an alternative parametric distribution that can be used to model the extreme behaviour when the observables (or measure density) lack certain regular variation assumptions. The relevant distribution we study arises naturally as the limit for max-semistable processes. For piecewise uniformly expanding dynamical systems, we show that a max-semistable limit holds for the (linearly) scaled maxima process.

]]>Entropy doi: 10.3390/e23091191

Authors: Colin Shea-Blymyer Subhradeep Roy Benjamin Jantzen

Many problems in the study of dynamical systems—including identification of effective order, detection of nonlinearity or chaos, and change detection—can be reframed in terms of assessing the similarity between dynamical systems or between a given dynamical system and a reference. We introduce a general metric of dynamical similarity that is well posed for both stochastic and deterministic systems and is informative of the aforementioned dynamical features even when only partial information about the system is available. We describe methods for estimating this metric in a range of scenarios that differ with respect to control over the systems under study, the deterministic or stochastic nature of the underlying dynamics, and whether or not a fully informative set of variables is available. Through numerical simulation, we demonstrate the sensitivity of the proposed metric to a range of dynamical properties, its utility in mapping the dynamical properties of the parameter space of a given model, and its power for detecting structural changes through time series data.

]]>Entropy doi: 10.3390/e23091190

Authors: Liang Liu Jinchuan Hou Xiaofei Qi

Generally speaking, it is difficult to compute the values of the Gaussian quantum discord and Gaussian geometric discord for Gaussian states, which limits their application. In the present paper, for any (n+m)-mode continuous-variable system, a computable Gaussian quantum correlation M is proposed. For any state ρAB of the system, M(ρAB) depends only on the covariant matrix of ρAB without any measurements performed on a subsystem or any optimization procedures, and thus is easily computed. Furthermore, M has the following attractive properties: (1) M is independent of the mean of states, is symmetric about the subsystems and has no ancilla problem; (2) M is locally Gaussian unitary invariant; (3) for a Gaussian state ρAB, M(ρAB)=0 if and only if ρAB is a product state; and (4) 0≤M((ΦA⊗ΦB)ρAB)≤M(ρAB) holds for any Gaussian state ρAB and any Gaussian channels ΦA and ΦB performed on the subsystem A and B, respectively. Therefore, M is a nice Gaussian correlation which describes the same Gaussian correlation as Gaussian quantum discord and Gaussian geometric discord when restricted on Gaussian states. As an application of M, a noninvasive quantum method for detecting intracellular temperature is proposed.

]]>Entropy doi: 10.3390/e23091188

Authors: Alexander Sobol Peter Güntert Roland Riek

A one-dimensional gas comprising N point particles undergoing elastic collisions within a finite space, described by a Sinai billiard generating identical dynamical trajectories, is calculated and analyzed with regard to the strict extensivity of the Boltzmann–Gibbs entropy definitions. Due to the collisions, the trajectories of the gas particles are strongly correlated and exhibit both chaotic and periodic properties. Probability distributions for the position of each particle in the one-dimensional gas can be obtained analytically, elucidating that the entropy in this special case is extensive for any given number N. Furthermore, the entropy obtained can be interpreted as a measure of the extent of interactions between molecules. The results obtained for the non-mixable one-dimensional system are generalized to mixable one- and two-dimensional systems, the latter only through a simple example, which provides similar findings.

]]>Entropy doi: 10.3390/e23091187

Authors: Xinchao Ruan Wenhao Shi Guojun Chen Wei Zhao Hang Zhang Ying Guo

The secret key rate is one of the main obstacles to the practical application of continuous-variable quantum key distribution (CVQKD). In this paper, we propose a multiplexing scheme to increase the secret key rate of the CVQKD system with orbital angular momentum (OAM). The propagation characteristics of a typical vortex beam, involving the Laguerre–Gaussian (LG) beam, are analyzed in an atmospheric channel for the Kolmogorov turbulence model. Discrete modulation is utilized to extend the maximal transmission distance. We show the effect of the transmittance of the beam over the turbulent channel on the secret key rate and the transmission distance. Numerical simulations indicate that the OAM multiplexing scheme can improve the performance of the CVQKD system and hence has potential use for practical high-rate quantum communications.

]]>Entropy doi: 10.3390/e23091186

Authors: Dmitri Sokolovski Alexandre Matzkin

Wigner’s friend scenarios involve an Observer, or Observers, measuring a Friend, or Friends, who themselves make quantum measurements. In recent discussions, it has been suggested that quantum mechanics may not always be able to provide a consistent account of a situation involving two Observers and two Friends. We investigate this problem by invoking the basic rules of quantum mechanics as outlined by Feynman in the well-known “Feynman Lectures on Physics”. We show here that these “Feynman rules” constrain the a priori assumptions which can be made in generalised Wigner’s friend scenarios, because the existence of the probabilities of interest ultimately depends on the availability of physical evidence (material records) of the system’s past. With these constraints obeyed, a non-ambiguous and consistent account of all measurement outcomes is obtained for all agents, taking part in various Wigner’s Friend scenarios.

]]>Entropy doi: 10.3390/e23091185

Authors: Silvia Ghirga Letizia Chiodo Riccardo Marrocchio Javier G. Orlandi Alessandro Loppini

The comprehension of neuronal network functioning, from the most basic mechanisms of signal transmission to complex patterns of memory and decision making, is at the basis of modern research in experimental and computational neurophysiology. While mechanistic knowledge of the structure of neurons and synapses has increased, the study of functional and effective networks is more complex, involving emergent phenomena, nonlinear responses, collective waves, and correlation and causal interactions. Refined data analysis may help in inferring functional/effective interactions and connectivity from neuronal activity. The Transfer Entropy (TE) technique is, among other things, well suited to predict structural interactions between neurons, and to infer both effective and structural connectivity in small- and large-scale networks. To efficiently disentangle the excitatory and inhibitory neural activities, we present in this article a revised version of TE, split into two contributions and characterized by a suitable delay time. The method is tested on in silico small neuronal networks, built to simulate the calcium activity as measured via calcium imaging in two-dimensional neuronal cultures. The inhibitory connections are well characterized, while a high accuracy is preserved for the prediction of excitatory connections. The method could be applied to study effective and structural interactions in systems of excitable cells, both in physiological and in pathological conditions.

]]>Entropy doi: 10.3390/e23091184

Authors: Wei Wang Jianming Wang Jianhua Chen

Setting the measurement number for each block is very important in a block-based compressed sensing system. However, in practical applications, on the sampling side we only have the initial measurement results of the original signal rather than the original signal itself; therefore, we cannot directly allocate the appropriate measurement number for each block without knowing the sparsity of the original signal. To solve this problem, we propose an adaptive block-based compressed video sensing scheme based on saliency detection and side information. According to the Johnson–Lindenstrauss lemma, we can use the initial measurement results to perform saliency detection and then obtain a saliency value for each block. Meanwhile, a side information frame, which is an estimate of the current frame, is generated on the reconstruction side by the proposed probability fusion model, and the significant-coefficient proportion of each block is estimated through the side information frame. Both the saliency value and the significant-coefficient proportion can reflect the sparsity of the block. Finally, these two estimates of block sparsity are fused, so that intra-frame and inter-frame correlation can be used simultaneously for block sparsity estimation. The measurement number of each block can then be allocated according to the fused sparsity. In addition, we propose a global recovery model based on weighting, which can reduce the block effect in reconstructed frames. The experimental results show that, compared with existing schemes, the proposed scheme achieves a significant improvement in peak signal-to-noise ratio (PSNR) at the same sampling rate.

]]>Entropy doi: 10.3390/e23091182

Authors: Maciej Stankiewicz Karol Horodecki Omer Sakarya Danuta Makowiec

We investigate whether the heart rate can be treated as a semi-random source with the aim of amplification by quantum devices. We use a semi-random source model called the ε-Santha–Vazirani source, which can be amplified via quantum protocols to obtain a fully private random sequence. We analyze time intervals between consecutive heartbeats obtained from Holter electrocardiogram (ECG) recordings of people of different sex and age. We propose several transformations of the original time series into binary sequences. We have performed different statistical randomness tests and estimated quality parameters. We find that the heart can be treated as a sufficiently good, and inherently private, source of randomness that every human possesses. As such, in principle, it can be used as input to quantum device-independent randomness amplification protocols. The properly interpreted ε parameter can potentially serve as a new characteristic of the human heart from the perspective of medicine.
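As an illustration of the general workflow (one possible binarization plus one standard randomness test, not necessarily the transformations or test battery used by the authors), the sketch below converts a synthetic RR-interval series to bits and applies the NIST SP 800-22 frequency (monobit) test:

```python
import math
import numpy as np

def rr_to_bits(rr):
    """One simple binarization: 1 if an RR interval exceeds its predecessor, else 0."""
    return (np.diff(rr) > 0).astype(int)

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test p-value."""
    s = np.sum(2 * bits - 1)                     # map bits to +/-1 and sum
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

rng = np.random.default_rng(11)
rr = 800 + np.cumsum(rng.normal(0, 10, 5000))    # synthetic RR series in ms
bits = rr_to_bits(rr)
print(monobit_p_value(bits))                     # p >= 0.01 passes the test
```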

]]>Entropy doi: 10.3390/e23091183

Authors: Wen-Li Yu Tao Li Hai Li Yun Zhang Jian Zou Ying-Dan Wang

We study a scheme of thermal management in which a three-qubit system assisted by a coherent auxiliary bath (CAB) is employed to implement heat management on a target thermal bath (TTB). We consider the CAB/TTB to be an ensemble of coherent/thermal two-level atoms (TLAs) and, within the framework of the collision model, investigate the characteristics of the steady heat current (also called the target heat current (THC)) between the system and the TTB. We demonstrate that, with the help of the quantum coherence of the ancillae, the magnitude and direction of the heat current can be controlled merely by adjusting the system-CAB coupling strength. Meanwhile, we also show that the influence of the quantum coherence of the ancillae on the heat current strongly depends on the system-CAB coupling strength, and the THC becomes positively/negatively correlated with the coherence magnitude of the ancillae when the coupling strength is below/above a critical value. Besides, the system with the CAB can serve as a multifunctional device integrating the thermal functions of a heat amplifier, suppressor, switcher and refrigerator, whereas with a thermal auxiliary bath it can only work as a thermal suppressor. Our work provides a new perspective for the design of multifunctional thermal devices utilizing the resource of quantum coherence from the CAB.

]]>Entropy doi: 10.3390/e23091180

Authors: Kai Liu Fanwei Meng Shengya Meng Chonghui Wang

The coupling between variables in multi-input multi-output (MIMO) systems complicates controller design. To address this problem, this paper combines particle swarm optimization (PSO) with the coefficient diagram method (CDM) and proposes a robust controller design strategy for MIMO systems. The decoupling problem is transformed into a compensator parameter optimization problem, and PSO optimizes the compensator parameters to reduce the coupling effects in the MIMO system. For MIMO systems with measurement noise, the effectiveness of CDM in handling measurement noise is analyzed. The paper presents the design steps for the control of MIMO systems. Finally, simulation experiments on four typical MIMO systems demonstrate the effectiveness of the proposed method.
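
The optimization stage can be illustrated with a generic particle swarm loop that tunes compensator parameters to suppress coupling. The cost function below (off-diagonal mismatch of a static 2x2 plant) is a placeholder; a real CDM design would evaluate the compensated closed-loop response.

    import numpy as np

    def pso(cost, dim, n_particles=30, iters=200, bounds=(-5, 5),
            w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer; `cost` maps a compensator
        parameter vector to a scalar coupling measure."""
        lo, hi = bounds
        x = np.random.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(cost, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # hypothetical coupling cost: drive the compensated static plant G @ K(p)
    # toward the identity, i.e. suppress the off-diagonal (coupling) terms
    G = np.array([[1.0, 0.6], [0.4, 1.0]])
    cost = lambda p: np.sum((G @ p.reshape(2, 2) - np.eye(2)) ** 2)
    p_opt, f_opt = pso(cost, dim=4)
    print(p_opt.reshape(2, 2), f_opt)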

]]>Entropy doi: 10.3390/e23091181

Authors: Qiang Chen Yong Zhao Lixia Yan

Pulsars, especially X-ray pulsars detectable by small-size detectors, are highly accurate natural clocks, suggesting potential applications such as interplanetary navigation control. Due to complex cosmic background noise, the original pulsar signals, namely photon sequences, observed by detectors have low signal-to-noise ratios (SNRs), which obstructs their practical use. This paper presents a pulsar denoising strategy based on the variational mode decomposition (VMD) approach; it constitutes the initial stage of our research on interplanetary navigation control. The original pulsar signals are decomposed into intrinsic mode functions (IMFs) via VMD, whereby the Gaussian noise contaminating the pulsar signals is attenuated by the filtering effect of signal decomposition and reconstruction. Comparison experiments based on both simulated and HEASARC-archived X-ray pulsar signals are carried out to validate the effectiveness of the proposed pulsar denoising strategy.
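
A minimal sketch of this decomposition-and-reconstruction idea, using the third-party vmdpy package (whose modes are returned sorted by center frequency), is given below. The signal, the number of modes K, and the bandwidth penalty alpha are illustrative choices, not the values used in the paper.

    import numpy as np
    from vmdpy import VMD   # pip install vmdpy

    # synthetic "pulsar-like" periodic pulse train buried in Gaussian noise
    t = np.linspace(0, 1, 2000)
    clean = np.exp(-((t % 0.1) - 0.05) ** 2 / 2e-4)
    noisy = clean + 0.5 * np.random.randn(t.size)

    # decompose into K intrinsic mode functions (IMFs); parameter values
    # are illustrative
    alpha, tau, K, DC, init, tol = 2000, 0.0, 6, 0, 1, 1e-7
    u, u_hat, omega = VMD(noisy, alpha, tau, K, DC, init, tol)

    # reconstruct from the low-frequency modes; broadband Gaussian noise is
    # attenuated because it is spread over the discarded high-frequency IMFs
    denoised = u[:3].sum(axis=0)
    print("residual std before/after:",
          np.std(noisy - clean), np.std(denoised[:clean.size] - clean[:denoised.size]))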

]]>Entropy doi: 10.3390/e23091179

Authors: Saulo V. Moreira Breno Marques Fernando L. Semião

The investigation of dephasing-assisted quantum transport, which occurs when the presence of dephasing benefits the efficiency of this process, has mainly focused on Markovian scenarios associated with constant, positive dephasing rates in the corresponding Lindblad master equations. What happens if we consider a more general framework in which time-dependent dephasing rates are allowed, thereby permitting non-Markovian scenarios? Does dephasing-assisted transport still manifest for non-Markovian dephasing? Here, we address these open questions in a setup of coupled two-level systems. Our results show that the manifestation of non-Markovian dephasing-assisted transport depends on the way in which the incoherent energy sources are locally coupled to the chain. This is illustrated with two different configurations, namely non-symmetric and symmetric. Specifically, we verify that non-Markovian dephasing-assisted transport manifests only in the non-symmetric configuration. This allows us to draw a parallel with the conditions under which time-independent Markovian dephasing-assisted transport manifests. Finally, we find similar results for a controllable and experimentally implementable system, which highlights the significance of our findings for quantum technologies.
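
A two-site version of such a setup can be integrated directly. The sketch below evolves a single excitation on a two-site chain under a Lindblad equation with a time-dependent dephasing rate and a trap attached to site 2; the rate profile, couplings and trapping constant are illustrative assumptions (a genuinely non-Markovian master equation would also admit transiently negative rates).

    import numpy as np

    # two-site chain in the single-excitation subspace; site 2 leaks into a
    # trap at rate kappa, each site dephases with time-dependent rate gamma(t)
    J, eps, kappa = 1.0, 0.5, 0.2
    H = np.array([[eps, J], [J, 0.0]], dtype=complex)
    L = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
    P2 = np.diag([0.0, 1.0]).astype(complex)

    def gamma(t):
        # illustrative time-dependent dephasing rate (kept positive here)
        return 0.5 * (1 + np.cos(2.0 * t))

    def drho(t, rho):
        d = -1j * (H @ rho - rho @ H)
        for Lj in L:   # pure dephasing: Lj are projectors, so Lj^2 = Lj
            d += gamma(t) * (Lj @ rho @ Lj - 0.5 * (Lj @ rho + rho @ Lj))
        d -= kappa * (P2 @ rho + rho @ P2)          # trapping from site 2
        return d

    rho = np.zeros((2, 2), dtype=complex); rho[0, 0] = 1.0  # excitation on site 1
    dt, T, eff = 0.001, 50.0, 0.0
    for k in range(int(T / dt)):                    # RK4 integration
        t = k * dt
        k1 = drho(t, rho); k2 = drho(t + dt/2, rho + dt/2 * k1)
        k3 = drho(t + dt/2, rho + dt/2 * k2); k4 = drho(t + dt, rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
        eff += 2 * kappa * np.real(rho[1, 1]) * dt  # population delivered to trap
    print("transport efficiency:", eff)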

]]>Entropy doi: 10.3390/e23091178

Authors: Hector Freytes Giuseppe Sergioli

A holistic extension of classical propositional logic is introduced in the framework of quantum computation with mixed states. The extension is obtained by applying the quantum Fredkin gate to non-factorizable bipartite states. In particular, an extended notion of classical contradiction is studied in this holistic framework.
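
For reference, the quantum Fredkin (controlled-SWAP) gate acts as the 8x8 permutation sketched below; applying it to targets prepared in a non-factorizable state is the basic ingredient of the construction. The snippet only illustrates the gate on pure states, not the mixed-state formalism of the paper.

    import numpy as np

    # Fredkin gate: swap the two target qubits when the control is |1>;
    # in the basis |c t1 t2> this exchanges |101> (index 5) and |110> (index 6)
    F = np.eye(8)
    F[[5, 6]] = F[[6, 5]]

    bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
    ctrl = np.array([0.0, 1.0])                             # control in |1>
    psi = np.kron(ctrl, bell)
    print(np.allclose(F @ psi, psi))   # True: the Bell pair is SWAP-symmetric

    t01 = np.zeros(4); t01[1] = 1.0                         # factorizable targets |01>
    print(np.nonzero(F @ np.kron(ctrl, t01))[0])            # [6]: targets swapped to |10>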

]]>Entropy doi: 10.3390/e23091177

Authors: Ning Luan Ke Xiong Zhifei Zhang Haina Zheng Yu Zhang Pingyi Fan Gang Qu

This article investigates a relay-assisted wireless powered communication network (WPCN), where the access point (AP) incentivizes auxiliary nodes to participate in charging the sensor, and the sensor then uses its harvested energy to send status update packets to the AP. An incentive mechanism is designed to overcome the selfishness of the auxiliary node. To further improve system performance, we establish a Stackelberg game to model the cooperation between the AP–sensor pair and the auxiliary node. Specifically, we formulate a utility function for each of the two parties and pose the corresponding maximization problems. As the former problem is non-convex, we transform it into a convex problem by introducing an extra slack variable and then, using the Lagrangian method, obtain the optimal solution in closed form. Numerical experiments show that the larger the transmit power of the AP, the smaller the age of information (AoI) of the AP–sensor pair and the less the location of the auxiliary node influences the AoI. In addition, when the distance between the AP and the sensor node exceeds a certain threshold, employing the relay achieves better AoI performance than non-relaying systems.
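
The leader-follower structure of such a Stackelberg game can be illustrated by backward induction: the follower's best response is computed first and then substituted into the leader's objective. The utility shapes below are hypothetical stand-ins chosen so the follower's response is closed-form; they are not the paper's expressions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def follower_best_response(r, c=0.5):
        # hypothetical follower utility r*log(1+p) - c*p  ->  p* = max(r/c - 1, 0)
        return max(r / c - 1.0, 0.0)

    def leader_cost(r, w=2.0):
        # hypothetical leader objective: reward paid plus weighted AoI,
        # with AoI decreasing in the follower's relaying power
        p = follower_best_response(r)
        aoi = 1.0 / (1.0 + np.log1p(p))
        return r + w * aoi

    res = minimize_scalar(leader_cost, bounds=(0, 10), method="bounded")
    print("leader's optimal reward:", res.x,
          "follower's response:", follower_best_response(res.x))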

]]>Entropy doi: 10.3390/e23091176

Authors: Fairouz Tchier Ghous Ali Muhammad Gulzar Dragan Pamučar Ganesh Ghorai

As an extension of intuitionistic fuzzy sets, the theory of picture fuzzy sets not only deals with the degrees of rejection and acceptance but also considers the degree of refusal during a decision-making process. Incorporating this capability of picture fuzzy sets, this study proposes a novel hybrid model called picture fuzzy soft expert sets, obtained by combining picture fuzzy sets with soft expert sets, for dealing with uncertainties in different real-world group decision-making problems. The proposed hybrid model is a more general form of intuitionistic fuzzy soft expert sets. Some desirable properties of the proposed model, namely subset, equality, complement, union and intersection, are investigated together with corresponding examples. The two well-known operations AND and OR are also studied for the developed model. Further, a decision-making method supported by an algorithm is presented under the proposed approach. Moreover, an illustrative application concerning the selection of a suitable virtual reality device company is provided to demonstrate the method. Finally, the proposed method is compared with some existing models, including intuitionistic fuzzy soft expert sets.
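
For concreteness, a picture fuzzy set can be coded as a map from alternatives to triples (mu, eta, nu) of acceptance, refusal and rejection degrees. The sketch below implements the standard complement, union and intersection of picture fuzzy sets; the full soft expert model additionally indexes these triples by parameter-expert-opinion combinations, which is omitted here.

    from typing import Dict, Tuple

    # x -> (mu, eta, nu): acceptance, refusal (neutrality) and rejection
    # degrees, with mu + eta + nu <= 1
    PFS = Dict[str, Tuple[float, float, float]]

    def pfs_complement(A: PFS) -> PFS:
        # standard picture fuzzy complement: swap acceptance and rejection
        return {x: (nu, eta, mu) for x, (mu, eta, nu) in A.items()}

    def pfs_union(A: PFS, B: PFS) -> PFS:
        return {x: (max(A[x][0], B[x][0]), min(A[x][1], B[x][1]),
                    min(A[x][2], B[x][2])) for x in A}

    def pfs_intersection(A: PFS, B: PFS) -> PFS:
        return {x: (min(A[x][0], B[x][0]), min(A[x][1], B[x][1]),
                    max(A[x][2], B[x][2])) for x in A}

    A = {"c1": (0.5, 0.2, 0.2), "c2": (0.3, 0.1, 0.5)}
    B = {"c1": (0.4, 0.3, 0.1), "c2": (0.6, 0.2, 0.1)}
    print(pfs_union(A, B))       # {'c1': (0.5, 0.2, 0.1), 'c2': (0.6, 0.1, 0.1)}
    print(pfs_complement(A))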

]]>Entropy doi: 10.3390/e23091174

Authors: Chen Yang Yan Liu Changqing Yin

Source Code Generation (SCG) is a prevalent research field in the automation software engineering sector that maps specific descriptions to various sorts of executable code. Along with numerous intensive studies, diverse SCG types that integrate different scenarios and contexts continue to emerge. As the ultimate aim of SCG, Natural Language-based Source Code Generation (NLSCG) is growing into an attractive and challenging field, owing to the expressiveness and extremely high abstraction of its natural-language input. The booming large-scale datasets generated by open-source code repositories and Q&A resources, innovations in machine learning algorithms, and the growth of computing capacity make the NLSCG field promising and give more opportunities for model implementation and refinement. Moreover, we have recently observed an increasing stream of NLSCG-related studies representing quite diverse technical schools. However, many studies are bound to specific datasets and customization issues, producing occasional successful solutions with tentative technical methods, and there is no systematic study to explore and promote the further development of this field. We carried out a systematic literature survey and tool research to identify potential improvement directions. First, we position the role of NLSCG among various SCG genres and specify the generation context empirically via software development domain knowledge and programming experience. Second, we explore the selected studies collected by a carefully designed snowballing process, clarify the NLSCG field and characterize the NLSCG problem, which lays a foundation for our subsequent investigation. Third, we model the research problems in terms of technical focus and adaptive challenges, and elaborate on insights gained from the NLSCG research backlog. Finally, we summarize the latest technology landscape over the transformation model and depict the critical tactics used in the essential components and their correlations. This research addresses the challenges of bridging the gap between natural language processing and source code analytics, outlines different dimensions of NLSCG research concerns and technical utilities, and delineates a bounded technical context of NLSCG to facilitate future studies in this promising area.

]]>Entropy doi: 10.3390/e23091175

Authors: Mariana Krasnytska Bertrand Berche Yurij Holovatch Ralph Kenna

We consider a recently introduced generalization of the Ising model in which individual spin strength can vary. The model is intended for the analysis of ordering in systems comprising agents which, although matching in their binarity (i.e., maintaining the iconic Ising features of ‘+’ or ‘−’, ‘up’ or ‘down’, ‘yes’ or ‘no’), differ in their strength. To investigate the interplay between the variable properties of nodes and the interactions between them, we study the model on a complex network where both the spin strength and degree distributions are governed by power laws. We show that in the annealed network approximation, the thermodynamic functions of the model are self-averaging, and we obtain an exact solution for the partition function. This allows us to derive the leading temperature and field dependencies of the thermodynamic functions, their critical behavior, and logarithmic corrections at the interface of different phases. We find that the delicate interplay of the two power laws leads to new universality classes.
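
A toy Metropolis simulation conveys the flavor of the model: each node carries a spin sigma_i = +-1 and a strength s_i drawn from a power law, on a network with power-law expected degrees. This is only an illustrative finite-size sampler under assumed conventions (annealed-style couplings proportional to k_i k_j), not the exact annealed-network solution of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, q_s, q_k = 1000, 3.5, 3.0          # node count and power-law exponents

    # spin strengths and expected degrees sampled by inverse-CDF from
    # P(x) ~ x^(-q), x >= 1
    s = (1 - rng.random(N)) ** (-1 / (q_s - 1))
    k = (1 - rng.random(N)) ** (-1 / (q_k - 1))
    w = k / np.sqrt(k.sum())              # coupling J_ij = w_i w_j ~ k_i k_j / sum(k)

    def sweep(sigma, beta):
        """Metropolis sweep for H = -sum_{i<j} w_i w_j s_i s_j sigma_i sigma_j,
        tracked through the weighted magnetization M = sum_j w_j s_j sigma_j."""
        M = np.dot(w * s, sigma)
        for i in rng.integers(0, N, N):
            h = w[i] * s[i] * (M - w[i] * s[i] * sigma[i])   # local field
            dE = 2 * sigma[i] * h
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                M -= 2 * w[i] * s[i] * sigma[i]
                sigma[i] = -sigma[i]
        return sigma, M

    sigma = rng.choice([-1.0, 1.0], N)
    for beta in (0.1, 0.5, 1.0, 2.0):
        for _ in range(100):
            sigma, M = sweep(sigma, beta)
        print(beta, abs(M) / np.dot(w, s))   # ordering grows with beta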

]]>Entropy doi: 10.3390/e23091173

Authors: Kaichun Yang Chunxin Yang Han Yang Chenglong Zhou

During manned space missions, an environmental control and life-support system (ECLSS) is employed to meet the life-supporting requirements of astronauts. The ECLSS is a hierarchical system, organized into subsystems, components and single machines, which together form a complex structure. Therefore, system-level conceptual design and performance evaluation of the ECLSS must be conducted. This study reports the top-level scheme of the ECLSS, including the subsystems of atmosphere revitalization, water management, and waste management. We propose two schemes based on the design criteria of improving closure and reducing power consumption. We use the structural entropy method (SEM) to calculate the system order degree and thereby quantitatively evaluate the ECLSS complexity at the top level. The system complexity evaluated by directed SEM and by undirected SEM follows different rules. The results show that changes in the system structure caused by replacing some single technologies do not have a great impact on the overall system complexity. The top-level scheme design and complexity evaluation presented in this study may provide technical support for the development of ECLSS in future manned spaceflights.
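
The SEM order degree can be illustrated with a simplified computation of R = 1 - H/H_max, where H is the Shannon entropy of the connectivity distribution; directed and undirected adjacency generally give different values, echoing the different rules noted above. The formulation below is a simplified stand-in for the paper's exact SEM procedure.

    import numpy as np

    def order_degree(adj):
        """Order degree R = 1 - H/H_max from an adjacency matrix
        (simplified illustration of the structural entropy method)."""
        deg = adj.sum(axis=1).astype(float)        # (out-)degree per node
        p = deg / deg.sum()                        # connectivity distribution
        p = p[p > 0]
        H = -np.sum(p * np.log(p))                 # structural (Shannon) entropy
        H_max = np.log(adj.shape[0])               # fully disordered reference
        return 1.0 - H / H_max

    # toy hierarchy: node 0 = subsystem, 1-2 = components, 3-5 = single machines
    A_directed = np.array([[0, 1, 1, 0, 0, 0],
                           [0, 0, 0, 1, 1, 0],
                           [0, 0, 0, 0, 0, 1],
                           [0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0],
                           [0, 0, 0, 0, 0, 0]])
    A_undirected = A_directed + A_directed.T
    print(order_degree(A_directed), order_degree(A_undirected))  # they differ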

]]>Entropy doi: 10.3390/e23091172

Authors: Xunfa Lu Kai Liu Kin Keung Lai Hairong Cui

Combining the B-P (breakpoint) test with the VAR–DCC–GARCH model, the relationships between WTI crude oil futures and S&P 500 index futures or CSI 300 index futures were investigated and compared. The results show that breakpoints exist in the mean relationship between the WTI crude oil futures market and the Chinese or US stock index futures market. The mean relationship between WTI crude oil futures prices and S&P 500 or CSI 300 stock index futures is weakening. Meanwhile, the dynamic conditional correlation between the WTI crude oil futures market and the Chinese or US stock index futures market decreases after the breakpoint in the price series. Chinese stock index futures are less affected by short-term fluctuations in crude oil futures returns than US stock index futures.
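
As a hedged illustration of the breakpoint step, the sketch below detects a structural break in a synthetic co-movement series with the third-party ruptures package; ruptures implements related change-point methods, not the exact Bai-Perron (B-P) procedure, and the data are simulated.

    import numpy as np
    import ruptures as rpt   # pip install ruptures

    rng = np.random.default_rng(1)
    # synthetic daily returns whose co-movement weakens after a breakpoint,
    # a stand-in for the WTI / stock-index futures relationship
    n, brk = 1000, 600
    oil = rng.normal(0, 1, n)
    stock = np.where(np.arange(n) < brk,
                     0.8 * oil + rng.normal(0, 0.5, n),    # strong linkage
                     0.2 * oil + rng.normal(0, 0.5, n))    # weakened linkage

    # detect a break in the pointwise co-movement proxy (oil * stock)
    signal = (oil * stock).reshape(-1, 1)
    algo = rpt.Binseg(model="l2").fit(signal)
    print(algo.predict(n_bkps=1))   # should report a break near index 600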

]]>Entropy doi: 10.3390/e23091171

Authors: Premkumar Leishangthem Faizyab Ahmad Shankar Das

We study the role of disorder in producing metastable states in which the extent of mass localization is intermediate between that of a liquid and a crystal with long-range order. We estimate the corresponding entropy using the coarse-grained description of a many-particle system adopted in the classical density functional model. We demonstrate that intermediate localization of the particles changes the entropy from what is obtained in a microscopic approach using sharply localized vibrational modes following a Debye distribution. An additional contribution is included in the density of vibrational states g(ω) to account for this excess entropy. A corresponding peak in g(ω)/ω² vs. frequency ω matches the characteristic boson peak seen in amorphous solids. In the present work, we also compare the shear modulus of the inhomogeneous solid with localized density profiles to the corresponding elastic response of the uniform liquid in the limit of high frequencies.
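
The boson peak signature is easy to reproduce numerically: a Debye density of states gives a flat g(ω)/ω², so any excess contribution at intermediate frequencies shows up as a peak in that ratio. The excess below is modeled as a log-normal hump purely for illustration; the paper derives it from intermediately localized modes.

    import numpy as np

    omega = np.linspace(0.01, 10.0, 1000)
    omega_D = 8.0                                   # Debye cutoff (arbitrary units)

    g_debye = 3 * omega**2 / omega_D**3             # Debye density of states
    g_debye[omega > omega_D] = 0.0

    # excess contribution from intermediately localized modes, modeled
    # as a log-normal hump around omega ~ 1.5 purely for illustration
    g_excess = 0.05 * np.exp(-(np.log(omega) - np.log(1.5)) ** 2 / (2 * 0.4 ** 2))

    ratio = (g_debye + g_excess) / omega**2
    print("boson-peak position:", omega[np.argmax(ratio)])
    # the Debye part alone gives a constant ratio 3/omega_D^3; the excess
    # produces a low-frequency peak, the boson-peak signature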

]]>Entropy doi: 10.3390/e23091170

Authors: Yangyang Dai Feng Duan Fan Feng Zhe Sun Yu Zhang Cesar F. Caiafa Pere Marti-Puig Jordi Solé-Casals

An electroencephalogram (EEG) is an electrophysiological signal reflecting the functional state of the brain. As the control signal of the brain–computer interface (BCI), EEG may build a bridge between humans and computers to improve the quality of life for patients with movement disorders. Collected EEG signals are extremely susceptible to contamination by electromyography (EMG) artifacts, which affect their original characteristics. Therefore, EEG denoising is an essential preprocessing step in any BCI system. Previous studies have confirmed that the combination of ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA) can effectively suppress EMG artifacts. However, the time-consuming iterative process of EEMD may limit the application of the EEMD-CCA method to real-time monitoring in BCI. Compared with the existing EEMD, the recently proposed signal-serialization-based EEMD (sEEMD) is a good choice for providing effective signal analysis and fast mode decomposition. In this study, an EMG denoising method based on sEEMD and CCA is discussed. All analyses are carried out on semi-simulated data. The results show that, in terms of frequency and amplitude, the intrinsic mode functions (IMFs) decomposed by sEEMD are consistent with those obtained by EEMD. There is no significant difference in the ability to separate EMG artifacts from EEG signals between the sEEMD-CCA and EEMD-CCA methods (p > 0.05). Even in the case of heavy contamination (signal-to-noise ratio less than 2 dB), the relative root mean squared error is about 0.3, and the average correlation coefficient remains above 0.9. The running speed of the sEEMD-CCA method in removing EMG artifacts is significantly improved compared with that of the EEMD-CCA method (p < 0.05); the running time of the sEEMD-CCA method for three lengths of semi-simulated data is shortened by more than 50%. This indicates that sEEMD-CCA is a promising tool for EMG artifact removal in real-time BCI systems.
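
For orientation, a baseline EEMD-CCA pass can be sketched with the third-party PyEMD and scikit-learn packages: decompose an epoch into IMFs, run CCA between the IMF matrix and its one-sample delay, and discard canonical components with low lag-1 autocorrelation, which is characteristic of noise-like EMG. The threshold and the delay-embedding are illustrative choices (the output is one sample shorter than the input), and the paper's contribution, sEEMD, replaces the EEMD step with a faster serialized decomposition not reproduced here.

    import numpy as np
    from PyEMD import EEMD                       # pip install EMD-signal
    from sklearn.cross_decomposition import CCA

    def eemd_cca_denoise(x, ac_thresh=0.9):
        """Baseline EEMD-CCA artifact removal for one single-channel epoch."""
        imfs = EEMD().eemd(x)                    # (n_imfs, n_samples)
        X, Y = imfs[:, :-1].T, imfs[:, 1:].T     # IMF matrix and its delay
        n_comp = imfs.shape[0]
        cca = CCA(n_components=n_comp, max_iter=1000).fit(X, Y)
        S = cca.transform(X)                     # canonical components
        ac = np.array([np.corrcoef(S[:-1, i], S[1:, i])[0, 1]
                       for i in range(n_comp)])
        S[:, ac < ac_thresh] = 0.0               # drop noise-like components
        imfs_clean = cca.inverse_transform(S)    # back to IMF space
        return imfs_clean.sum(axis=1)            # reconstruct the epoch

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 512)
    eeg = np.sin(2 * np.pi * 10 * t)             # 10 Hz alpha-like rhythm
    contaminated = eeg + 0.8 * rng.normal(size=t.size)   # EMG-like noise
    clean = eemd_cca_denoise(contaminated)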

]]>