
MaxEnt 2020/2021—The 40th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (20 December 2021) | Viewed by 20719

Special Issue Editors


Prof. Dr. Wolfgang von der Linden
Guest Editor
Institute of Theoretical and Computational Physics, Graz University of Technology, 8010 Graz, Austria
Interests: Bayesian probability theory; stochastic processes; maximum entropy; condensed matter physics; quantum physics; quantum Monte Carlo; machine learning; neural networks

Dr. Sascha Ranftl
Guest Editor
Institute of Theoretical and Computational Physics, Graz University of Technology, Petersgasse 16, 8010 Graz, Austria
Interests: Bayesian probability theory; stochastic processes; surrogate modelling; learning simulations; physics-informed machine learning; uncertainty quantification/uncertainty propagation of computer simulations; computational biomechanics; aortic dissection

Special Issue Information

Dear colleagues,

This Special Issue invites contributions on all aspects of probabilistic inference, such as foundations, novel methods, and novel applications. We welcome the submission of extended papers on contributions presented at the 40th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2021). In light of the special circumstances and challenges of this year’s edition, we also welcome submissions that could not be presented at the workshop.

Contributions on foundations and methodology have previously addressed subjects such as approximate or variational inference, experimental design, and computational techniques such as MCMC, amongst other topics. Due to the broad applicability of Bayesian inference, previous editions have featured contributions to and from many diverse disciplines, such as physics (e.g., plasma physics, astrophysics, statistical mechanics, foundations of quantum mechanics), chemistry, geodesy, biology, medicine, econometrics, hydrology, image reconstruction, communication theory, computational engineering (e.g., uncertainty quantification), machine learning (e.g., Gaussian processes, neural networks), and, quite timely, epidemiology.

Prof. Dr. Wolfgang von der Linden
Dr. Sascha Ranftl
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

16 pages, 1277 KiB  
Article
Information Field Theory and Artificial Intelligence
by Torsten Enßlin
Entropy 2022, 24(3), 374; https://doi.org/10.3390/e24030374 - 07 Mar 2022
Cited by 5 | Viewed by 3131
Abstract
Information field theory (IFT), the information theory for fields, is a mathematical framework for signal reconstruction and non-parametric inverse problems. Artificial intelligence (AI) and machine learning (ML) aim at generating intelligent systems, including systems for perception, cognition, and learning. This overlaps with IFT, which is designed to address perception, reasoning, and inference tasks. Here, the relation between concepts and tools in IFT and those in AI and ML research is discussed. In the context of IFT, fields denote physical quantities that change continuously as a function of space (and time), and information theory refers to Bayesian probabilistic logic equipped with the associated entropic information measures. Reconstructing a signal with IFT is a computational problem similar to training a generative neural network (GNN) in ML. In this paper, the process of inference in IFT is reformulated in terms of GNN training. In contrast to classical neural networks, IFT-based GNNs can operate without pre-training, thanks to the incorporation of expert knowledge into their architecture. Furthermore, the cross-fertilization of variational inference methods used in IFT and ML is discussed. These discussions suggest that IFT is well suited to address many problems in AI and ML research and application.
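The analogy between IFT inference and training a generative model can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: a 1-D signal is generated from standard-normal latent coordinates through an assumed correlation kernel, observed through a linear masking response with Gaussian noise, and reconstructed by minimizing the corresponding information Hamiltonian.

```python
# A minimal sketch (not the paper's implementation): MAP reconstruction of a
# 1-D signal in standardized latent coordinates xi ~ N(0, 1). The generative
# map s = A @ xi, linear response R, and noise level sigma are all assumptions.
# Loss: H(xi) = 0.5 * ||d - R A xi||^2 / sigma^2 + 0.5 * ||xi||^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 64
x = np.arange(n)
# Assumed prior correlations: squared-exponential kernel with length scale 5.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)
A = np.linalg.cholesky(K + 1e-6 * np.eye(n))   # generative map: xi -> signal
R = np.eye(n)[::4]                             # response: observe every 4th pixel
sigma = 0.1

xi_true = rng.standard_normal(n)
d = R @ (A @ xi_true) + sigma * rng.standard_normal(R.shape[0])

def hamiltonian(xi):
    residual = d - R @ (A @ xi)
    return 0.5 * residual @ residual / sigma**2 + 0.5 * xi @ xi

xi_map = minimize(hamiltonian, np.zeros(n), method="L-BFGS-B").x
s_map = A @ xi_map                             # reconstruction on the full grid
print("reconstructed signal, first 5 values:", s_map[:5])
```

In a generative-network reading, the fixed map from the latent vector to the signal plays the role of a physics-informed network, which is why no pre-training is required.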

18 pages, 3298 KiB  
Article
Neural Network Structure Optimization by Simulated Annealing
by Chun Lin Kuo, Ercan Engin Kuruoglu and Wai Kin Victor Chan
Entropy 2022, 24(3), 348; https://doi.org/10.3390/e24030348 - 28 Feb 2022
Cited by 13 | Viewed by 4466
Abstract
A critical problem in large neural networks is over-parameterization, with a large number of weight parameters, which limits their use on edge devices due to prohibitive computational power and memory/storage requirements. To make neural networks more practical on edge devices and in real-time industrial applications, they need to be compressed in advance. Since edge devices cannot train or access trained networks when internet resources are scarce, preloading smaller networks is essential. Various works in the literature have shown that redundant branches can be pruned strategically in a fully connected network without significantly sacrificing performance. However, the majority of these methodologies need high computational resources to integrate weight training via the back-propagation algorithm during the process of network compression. In this work, we draw attention to the optimization of the network structure for preserving performance despite aggressive pruning. The structure optimization is performed using the simulated annealing algorithm only, without utilizing back-propagation for branch weight training. Being a heuristic-based, non-convex optimization method, simulated annealing provides a globally near-optimal solution to this NP-hard problem for a given percentage of branch pruning. Our simulation results show that simulated annealing can significantly reduce the complexity of a fully connected network while maintaining performance without the help of back-propagation.
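As a rough illustration of the strategy described above, the sketch below runs simulated annealing over a binary pruning mask at a fixed sparsity. The evaluation function is a placeholder (retained weight magnitude), whereas the paper would score the masked, fixed-weight network on a validation set; this is not the authors' code.

```python
# A minimal sketch (assumptions, not the authors' code): simulated annealing
# over a binary pruning mask for a fixed pruning percentage, no back-propagation.
import numpy as np

def evaluate(mask, weights):
    # Placeholder objective; in the paper's setting this would be the masked
    # network's validation performance.
    return np.sum(np.abs(weights) * mask)

def anneal(weights, prune_frac=0.8, steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = np.random.default_rng(seed)
    n = weights.size
    keep = int(round(n * (1.0 - prune_frac)))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, keep, replace=False)] = True      # random initial mask
    score, temp = evaluate(mask, weights), t0
    for _ in range(steps):
        # Propose a swap that preserves the pruning percentage.
        on = rng.choice(np.flatnonzero(mask))
        off = rng.choice(np.flatnonzero(~mask))
        cand = mask.copy()
        cand[on], cand[off] = False, True
        cand_score = evaluate(cand, weights)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_score > score or rng.random() < np.exp((cand_score - score) / temp):
            mask, score = cand, cand_score
        temp *= cooling
    return mask, score

w = np.random.default_rng(1).standard_normal(1000)
mask, best = anneal(w)
print(f"kept {mask.sum()} of {w.size} weights, score = {best:.2f}")
```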

32 pages, 2702 KiB  
Article
A Bayesian Approach to the Estimation of Parameters and Their Interdependencies in Environmental Modeling
by Christopher G. Albert, Ulrich Callies and Udo von Toussaint
Entropy 2022, 24(2), 231; https://doi.org/10.3390/e24020231 - 03 Feb 2022
Cited by 5 | Viewed by 2333
Abstract
We present a case study for Bayesian analysis and the proper representation of distributions and dependence among parameters when calibrating process-oriented environmental models. A simple water quality model for the Elbe River (Germany) serves as an example, but the approach is applicable to a wide range of environmental models with time-series output. Model parameters are estimated by Bayesian inference via Markov chain Monte Carlo (MCMC) sampling. While the best-fit solution matches the usual least-squares model calibration (with a penalty term for excessive parameter values), the Bayesian approach has the advantage of yielding a joint probability distribution for the parameters. This posterior distribution encompasses all possible parameter combinations that produce a simulation output fitting the observed data within measurement and modeling uncertainty. Bayesian inference further permits the introduction of prior knowledge, e.g., the positivity of certain parameters. The estimated distribution shows to what extent model parameters are controlled by observations through the process of inference, highlighting issues that cannot be settled unless more information becomes available. An interactive interface enables tracking of how the ranges of parameter values consistent with observations change during a step-by-step assignment of fixed parameter values. Based on an initial analysis of the posterior via an undirected Gaussian graphical model, a directed Bayesian network (BN) is constructed. The BN transparently conveys information on the interdependence of parameters after calibration. Finally, a strategy to reduce the number of expensive model runs in MCMC sampling for the presented purpose is introduced, based on a newly developed variant of delayed acceptance sampling with a Gaussian process surrogate and linear dimensionality reduction to support function-valued outputs.
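The delayed-acceptance idea mentioned at the end of the abstract can be sketched as a two-stage Metropolis step: proposals are first screened with a cheap surrogate posterior, and the expensive model is run only for proposals that survive the screen, with a correction step so the chain still targets the exact posterior. The sketch below is illustrative and omits the paper's Gaussian-process surrogate and linear dimensionality reduction for function-valued outputs.

```python
# A minimal sketch of two-stage delayed-acceptance Metropolis sampling with a
# symmetric random-walk proposal (illustrative only, not the paper's variant).
import numpy as np

def delayed_acceptance(log_post, log_post_surrogate, theta0, steps=10000,
                       prop_std=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp, lp_s = log_post(theta), log_post_surrogate(theta)
    chain = []
    for _ in range(steps):
        prop = theta + prop_std * rng.standard_normal(theta.shape)
        lp_s_prop = log_post_surrogate(prop)
        # Stage 1: cheap screening with the surrogate posterior.
        if np.log(rng.random()) < lp_s_prop - lp_s:
            lp_prop = log_post(prop)              # expensive model run
            # Stage 2: exact correction, so the chain targets the true posterior.
            if np.log(rng.random()) < (lp_prop - lp) - (lp_s_prop - lp_s):
                theta, lp, lp_s = prop, lp_prop, lp_s_prop
        chain.append(theta)
    return np.array(chain)

# Toy check: exact posterior N(0, 1), slightly biased surrogate N(0.2, 1.1^2).
exact = lambda t: -0.5 * np.sum(t**2)
surrogate = lambda t: -0.5 * np.sum((t - 0.2)**2) / 1.1**2
samples = delayed_acceptance(exact, surrogate, np.zeros(1))
print("posterior mean ~", samples.mean(), "sd ~", samples.std())
```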

13 pages, 611 KiB  
Article
Haphazard Intentional Sampling in Survey and Allocation Studies on COVID-19 Prevalence and Vaccine Efficacy
by Miguel G. R. Miguel, Rafael P. Waissman, Marcelo S. Lauretto and Julio M. Stern
Entropy 2022, 24(2), 225; https://doi.org/10.3390/e24020225 - 31 Jan 2022
Viewed by 2034
Abstract
Haphazard intentional sampling is a method developed by our research group for two main purposes: (i) sampling design, where the interest is to select small samples that accurately represent the general population with regard to a set of covariates of interest; or (ii) experimental design, where the interest is to assemble treatment groups that are similar to each other with regard to a set of covariates of interest. Rerandomization is a similar method proposed by K. Morgan and D. Rubin. Both methods intentionally select good samples but, in slightly different ways, also introduce some noise into the selection procedure, aiming to obtain a decoupling effect that avoids systematic bias or other confounding effects. This paper compares the performance of the aforementioned methods and the standard randomization method in two benchmark problems concerning SARS-CoV-2 prevalence and vaccine efficacy. Numerical simulation studies show that haphazard intentional sampling can either reduce operating costs by up to 80% while achieving the same estimation errors as the standard randomization method or, the other way around, reduce estimation errors by up to 80% using the same sample sizes.
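For context, rerandomization in the spirit of Morgan and Rubin can be sketched as follows: treatment assignments are redrawn until the Mahalanobis imbalance of the covariate means falls below a threshold. This is illustrative only; the authors' haphazard intentional sampling instead perturbs an optimization-based allocation, which is not reproduced here, and the covariates and threshold below are made up.

```python
# A minimal rerandomization sketch (illustrative, not the authors' method).
import numpy as np

def rerandomize(X, n_treat, threshold, max_tries=100000, seed=0):
    """Redraw treatment/control assignments until the Mahalanobis distance
    between group covariate means falls below `threshold`."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    for _ in range(max_tries):
        idx = rng.permutation(n)
        treat, ctrl = idx[:n_treat], idx[n_treat:]
        diff = X[treat].mean(axis=0) - X[ctrl].mean(axis=0)
        imbalance = diff @ cov_inv @ diff
        if imbalance < threshold:
            return treat, ctrl, imbalance
    raise RuntimeError("no acceptable allocation found; relax the threshold")

# Toy example with three hypothetical covariates per unit.
X = np.random.default_rng(1).standard_normal((200, 3))
treat, ctrl, imbalance = rerandomize(X, n_treat=100, threshold=0.05)
print(f"accepted allocation with Mahalanobis imbalance {imbalance:.4f}")
```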

25 pages, 5936 KiB  
Article
Regularization, Bayesian Inference, and Machine Learning Methods for Inverse Problems
by Ali Mohammad-Djafari
Entropy 2021, 23(12), 1673; https://doi.org/10.3390/e23121673 - 13 Dec 2021
Cited by 13 | Viewed by 4175
Abstract
Classical methods for inverse problems are mainly based on regularization theory, in particular those based on the optimization of a criterion with two parts: a data-model matching term and a regularization term. Different choices for these two terms and a great number of optimization algorithms have been proposed. When these two terms are distance or divergence measures, they can have a Bayesian maximum a posteriori (MAP) interpretation, where the two terms correspond to the likelihood and the prior-probability model, respectively. The Bayesian approach gives more flexibility in choosing these terms and, in particular, the prior term via hierarchical models and hidden variables. However, the Bayesian computations can become very heavy. Machine learning (ML) methods such as classification, clustering, segmentation, and regression, based on neural networks (NN) and particularly convolutional NN, deep NN, physics-informed neural networks, etc., can help to obtain approximate practical solutions to inverse problems. In this tutorial article, particular examples of image denoising, image restoration, and computed-tomography (CT) image reconstruction illustrate this cooperation between ML and inversion.
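The two-term criterion described above can be illustrated with a small, generic example: Tikhonov-regularized 1-D deconvolution, where the data-model matching term is a least-squares misfit and the regularization term penalizes roughness. Under Gaussian noise and a Gaussian smoothness prior this coincides with the MAP estimate; the example is an assumption for illustration and is not taken from the article.

```python
# A minimal sketch of the two-term criterion for 1-D deconvolution:
# J(x) = ||y - H x||^2 + lam * ||D x||^2  (illustrative only).
import numpy as np

n = 128
t = np.arange(n)
x_true = np.sin(2 * np.pi * t / 32) * (t > 20) * (t < 100)    # toy signal

# Forward model H: convolution with a short blur kernel, built as a dense matrix.
kernel = np.array([0.25, 0.5, 0.25])
H = sum(np.eye(n, k=k - 1) * w for k, w in enumerate(kernel))

rng = np.random.default_rng(0)
y = H @ x_true + 0.05 * rng.standard_normal(n)                # noisy data

# Regularization: first-difference operator D penalizes roughness.
D = np.eye(n) - np.eye(n, k=1)
lam = 0.5

# MAP / Tikhonov solution of the normal equations (H^T H + lam D^T D) x = H^T y.
x_map = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)
print("relative reconstruction error:",
      np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```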

12 pages, 288 KiB  
Article
Update of Prior Probabilities by Minimal Divergence
by Jan Naudts
Entropy 2021, 23(12), 1668; https://doi.org/10.3390/e23121668 - 11 Dec 2021
Cited by 1 | Viewed by 1516
Abstract
The present paper investigates the update of an empirical probability distribution with the results of a new set of observations. The update reproduces the new observations and interpolates using prior information. The optimal update is obtained by minimizing either the Hellinger distance or the quadratic Bregman divergence. The results obtained by the two methods differ. Updates with information about conditional probabilities are considered as well.
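For reference, the two divergences named in the abstract have the following standard forms for discrete distributions p (the update) and q (the prior); the paper's exact constrained formulation and conventions may differ in detail.

```latex
% Squared Hellinger distance (one common convention) and quadratic Bregman
% divergence between discrete distributions p and q:
\[
  d_{\mathrm{H}}^{2}(p, q) \;=\; \tfrac{1}{2}\sum_i \bigl(\sqrt{p_i} - \sqrt{q_i}\bigr)^{2},
  \qquad
  D_{F}(p, q) \;=\; \tfrac{1}{2}\sum_i (p_i - q_i)^{2},
\]
% where D_F is the Bregman divergence generated by F(p) = (1/2) \sum_i p_i^2.
% The optimal update minimizes one of these over all p that reproduce the
% newly observed frequencies.
```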

18 pages, 2827 KiB  
Article
Entropy-Based Temporal Downscaling of Precipitation as Tool for Sediment Delivery Ratio Assessment
by Pedro Henrique Lima Alencar, Eva Nora Paton and José Carlos de Araújo
Entropy 2021, 23(12), 1615; https://doi.org/10.3390/e23121615 - 01 Dec 2021
Cited by 1 | Viewed by 1626
Abstract
Many regions around the globe are subject to precipitation-data scarcity that often hinders the capacity of hydrological modeling. Entropy theory and the principle of maximum entropy can help hydrologists to extract useful information from the scarce data available. In this work, we propose a new method to assess sub-daily precipitation features such as duration and intensity based on daily precipitation, using the principle of maximum entropy. Particularly in arid and semiarid regions, such sub-daily features are of central importance for modeling sediment transport and deposition. The obtained features were used as input to the SYPoME model (sediment yield using the principle of maximum entropy). The combined method was implemented in seven catchments in Northeast Brazil, with drainage areas ranging from 10⁻³ to 10² km², to assess sediment yield and delivery ratio. The results show significant improvement compared with conventional deterministic modeling, with a Nash–Sutcliffe efficiency (NSE) of 0.96 and an absolute error of 21% for our method against an NSE of −4.49 and an absolute error of 105% for the deterministic approach.
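The Nash–Sutcliffe efficiency (NSE) quoted in the abstract is a standard goodness-of-fit metric; the short helper below computes it and is not part of the SYPoME model, and the example values are made up.

```python
# Nash–Sutcliffe efficiency: NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# NSE = 1 is a perfect fit; NSE < 0 means the model is worse than the observed mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance

# Toy usage with made-up sediment-yield values, not data from the paper.
obs = [1.2, 0.8, 2.5, 0.3, 1.9]
sim = [1.0, 0.9, 2.2, 0.4, 2.1]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```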
