Special Issue "MaxEnt 2020/2021—The 40th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (20 December 2021).

Special Issue Editors

Prof. Dr. Wolfgang von der Linden
Guest Editor
Institute of Theoretical and Computational Physics, Graz University of Technology, 8010 Graz, Austria
Interests: Bayesian probability theory; stochastic processes; maximum entropy; condensed matter physics; quantum physics; quantum Monte Carlo; machine learning; neural networks
Dr. Sascha Ranftl
Guest Editor
Institute of Theoretical and Computational Physics, Graz University of Technology, Petersgasse 16, 8010 Graz, Austria
Interests: Bayesian probability theory; stochastic processes; surrogate modelling; learning simulations; physics-informed machine learning; uncertainty quantification/uncertainty propagation of computer simulations; computational biomechanics; aortic dissection

Special Issue Information

Dear colleagues,

This Special Issue invites contributions on all aspects of probabilistic inference, such as foundations, novel methods, and novel applications. We welcome the submission of extended papers on contributions presented at the 40th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2021). In light of the special circumstances and challenges of this year’s edition, we also welcome submissions that could not be presented at the workshop.

Contributions on foundations and methodology have previously addressed subjects such as approximate or variational inference, experimental design, and computational techniques such as MCMC, amongst other topics. Due to the broad applicability of Bayesian inference, previous editions have featured contributions to and from many diverse disciplines, such as physics (e.g., plasma physics, astrophysics, statistical mechanics, foundations of quantum mechanics), chemistry, geodesy, biology, medicine, econometrics, hydrology, image reconstruction, communication theory, computational engineering (e.g., uncertainty quantification), machine learning (e.g., Gaussian processes, neural networks), and, quite timely, epidemiology.

Prof. Dr. Wolfgang von der Linden
Dr. Sascha Ranftl
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

Article
Regularization, Bayesian Inference, and Machine Learning Methods for Inverse Problems
Entropy 2021, 23(12), 1673; https://0-doi-org.brum.beds.ac.uk/10.3390/e23121673 - 13 Dec 2021
Abstract
Classical methods for inverse problems are mainly based on regularization theory, in particular those based on optimizing a criterion with two parts: a data-model matching term and a regularization term. Different choices for these two terms, and a great number of optimization algorithms, have been proposed. When these two terms are distance or divergence measures, they admit a Bayesian maximum a posteriori (MAP) interpretation, in which they correspond to the likelihood and prior-probability models, respectively. The Bayesian approach gives more flexibility in choosing these terms and, in particular, the prior term via hierarchical models and hidden variables. However, Bayesian computations can become very heavy. Machine learning (ML) methods such as classification, clustering, segmentation, and regression, based on neural networks (NN), particularly convolutional NNs, deep NNs, and physics-informed neural networks, can help obtain approximate practical solutions to inverse problems. In this tutorial article, examples of image denoising, image restoration, and computed-tomography (CT) image reconstruction illustrate this cooperation between ML and inversion.

Article
Update of Prior Probabilities by Minimal Divergence
Entropy 2021, 23(12), 1668; https://0-doi-org.brum.beds.ac.uk/10.3390/e23121668 - 11 Dec 2021
Abstract
The present paper investigates the update of an empirical probability distribution with the results of a new set of observations. The update reproduces the new observations and interpolates using prior information. The optimal update is obtained by minimizing either the Hellinger distance or the quadratic Bregman divergence. The results obtained by the two methods differ. Updates with information about conditional probabilities are considered as well.
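A toy version of the Hellinger-based update can be sketched numerically (a hypothetical illustration, not the paper's own derivation): a prior joint distribution over four states is updated so that a newly observed marginal is reproduced, while the squared Hellinger distance to the prior is minimized. The numbers `p` and `m0` are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Prior joint distribution over states (x, y) in {0,1}^2,
# ordered (0,0), (0,1), (1,0), (1,1). Values are hypothetical.
p = np.array([0.4, 0.2, 0.3, 0.1])
m0 = 0.75  # newly observed marginal P(x=0); the prior gives 0.6

def hellinger_sq(q):
    # Squared Hellinger distance H^2(p, q) = (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

cons = [
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},     # normalization
    {"type": "eq", "fun": lambda q: q[0] + q[1] - m0},  # reproduce new marginal
]
res = minimize(hellinger_sq, p, bounds=[(1e-9, 1.0)] * 4, constraints=cons)
q = res.x

# For block constraints, the Hellinger-optimal update rescales each block,
# so the prior's conditional probabilities P(y | x) are preserved.
print(np.isclose(q[0] / q[1], p[0] / p[1], atol=1e-2))
```

Swapping the objective for a quadratic Bregman divergence would shift probability additively rather than multiplicatively within each block, which is one way the two methods' results differ.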

Article
Entropy-Based Temporal Downscaling of Precipitation as Tool for Sediment Delivery Ratio Assessment
Entropy 2021, 23(12), 1615; https://0-doi-org.brum.beds.ac.uk/10.3390/e23121615 - 01 Dec 2021
Abstract
Many regions around the globe are subject to precipitation-data scarcity that often hinders the capacity for hydrological modeling. Entropy theory and the principle of maximum entropy can help hydrologists extract useful information from the scarce data available. In this work, we propose a new method to assess sub-daily precipitation features, such as duration and intensity, from daily precipitation using the principle of maximum entropy. Particularly in arid and semiarid regions, such sub-daily features are of central importance for modeling sediment transport and deposition. The obtained features were used as input to the SYPoME model (sediment yield using the principle of maximum entropy). The combined method was applied to seven catchments in Northeast Brazil, with drainage areas ranging from 10⁻³ to 10² km², to assess sediment yield and delivery ratio. The results show significant improvement over conventional deterministic modeling, with a Nash–Sutcliffe efficiency (NSE) of 0.96 and an absolute error of 21% for our method, against an NSE of −4.49 and an absolute error of 105% for the deterministic approach.
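The maximum-entropy idea behind such downscaling can be sketched in its simplest form (an assumption-laden illustration, not the SYPoME formulation): if only the mean sub-daily rainfall intensity is known, the maximum-entropy distribution on the nonnegative reals with that fixed mean is the exponential distribution. The daily total and duration below are invented values.

```python
import numpy as np

# Hypothetical inputs: a daily precipitation total and an assumed effective
# rainfall duration, giving a mean sub-daily intensity in mm/h.
daily_total_mm = 24.0
duration_h = 6.0
mu = daily_total_mm / duration_h  # mean intensity = 4 mm/h

# With only the mean constrained, the maximum-entropy density on [0, inf)
# is exponential: p(i) = (1/mu) * exp(-i / mu). Sample it to represent
# plausible sub-daily intensities consistent with the daily observation.
rng = np.random.default_rng(42)
intensities = rng.exponential(mu, size=100_000)

# The sample mean should reproduce the constrained daily-mean intensity.
print(np.isclose(intensities.mean(), mu, rtol=0.05))
```

A practical downscaling scheme would add further constraints (e.g., on duration or peak intensity), each tightening the maximum-entropy distribution away from this single-constraint exponential case.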
