Mathematics, Volume 10, Issue 14 (July-2 2022) – 185 articles

Cover Story: Of the five human senses, sight offers a complete perception of the environment and is thus considered one of the most important. Computer vision studies the sense of sight in order to apply it to machines; put simply, it is the discipline of "teaching machines how to see" the world behind images. Environment perception and environment understanding are the core tasks of computer vision. In recent years, owing to the rapid development of deep learning techniques and dedicated hardware, vision-based solutions supporting these tasks have achieved outstanding results. However, more complex solutions for environment perception and understanding, such as semantic instance segmentation, remain an open challenge in the computer vision field.
19 pages, 4012 KiB  
Article
Deep Learning for Forecasting Electricity Demand in Taiwan
by Cheng-Hong Yang, Bo-Hong Chen, Chih-Hsien Wu, Kuo-Chang Chen and Li-Yeh Chuang
Mathematics 2022, 10(14), 2547; https://doi.org/10.3390/math10142547 - 21 Jul 2022
Cited by 6 | Viewed by 1765
Abstract
According to the World Energy Investment 2018 report, the global annual investment in renewable energy exceeded USD 200 billion for eight consecutive years until 2017. In this paper, a deep-learning-based time-series prediction method, namely a gated recurrent unit (GRU)-based prediction method, is proposed to predict energy generation in Taiwan. Data on thermal power (coal, oil, and gas power), renewable energy (conventional hydropower, solar power, and wind power), pumped hydropower, and nuclear power generation for 1991 to 2020 were obtained from the Bureau of Energy, Ministry of Economic Affairs, Taiwan, and the Taiwan Power Company. The proposed GRU-based method was compared with six common forecasting methods: autoregressive integrated moving average, exponential smoothing (ETS), Holt–Winters ETS, support vector regression (SVR), whale-optimization-algorithm-based SVR, and long short-term memory. Among the methods compared, the proposed method had the lowest mean absolute percentage error and root mean square error and thus the highest accuracy. Government agencies and power companies in Taiwan can use the predictions of accurate energy forecasting models as references to formulate energy policies and design plans for the development of alternative energy sources. Full article
(This article belongs to the Special Issue Advanced Aspects of Computational Intelligence with Its Applications)
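As a point of reference for the accuracy metrics named in this abstract, the following minimal Python sketch computes MAPE and RMSE for a forecast against actual values; the numbers are made up for illustration and this is not the authors' code.

```python
# Illustrative only: MAPE and RMSE, the two accuracy metrics used to rank the
# forecasting methods in the abstract. Array values are hypothetical.
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error (in %)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

def rmse(actual, predicted):
    """Root mean square error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

actual = [220.5, 231.8, 245.1, 250.3]      # e.g., yearly generation (TWh), made up
forecast = [218.9, 235.0, 241.7, 252.6]
print(f"MAPE = {mape(actual, forecast):.2f}%, RMSE = {rmse(actual, forecast):.2f}")
```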
12 pages, 1097 KiB  
Article
OMPEGAS: Optimized Relativistic Code for Multicore Architecture
by Elena N. Akimova, Vladimir E. Misilov, Igor M. Kulikov and Igor G. Chernykh
Mathematics 2022, 10(14), 2546; https://doi.org/10.3390/math10142546 - 21 Jul 2022
Cited by 2 | Viewed by 1190
Abstract
The paper presents a new hydrodynamical code, OMPEGAS, for the 3D simulation of astrophysical flows on shared memory architectures. It provides a numerical method for solving the three-dimensional equations of gravitational hydrodynamics based on Godunov’s method for solving the Riemann problem and the piecewise parabolic approximation with a local stencil. It achieves a high order of accuracy and low dissipation of the solution. The code is implemented for multicore processors with vector instructions using OpenMP, the Intel SDLT library, and compiler auto-vectorization tools. The model problem of simulating a star explosion was used to study the developed code. The experiments show that the presented code reproduces the behavior of the explosion correctly. Experiments for the model problem with a grid size of 128×128×128 were performed on a 16-core Intel Core i9-12900K CPU to study the efficiency and performance of the developed code. By using auto-vectorization, we achieved a 3.3-fold increase in speed in comparison with the non-vectorized program on a processor with AVX2 support. By using multithreading with OpenMP, we achieved a 2.6-fold increase in speed on the 16-core processor in comparison with the vectorized single-threaded program. The total increase in speed was up to ninefold. Full article
(This article belongs to the Special Issue Parallel Computing and Applications)
21 pages, 2972 KiB  
Article
LLAKEP: A Low-Latency Authentication and Key Exchange Protocol for Energy Internet of Things in the Metaverse Era
by Xin Zhang, Xin Huang, Haotian Yin, Jiajia Huang, Sheng Chai, Bin Xing, Xiaohua Wu and Liangbin Zhao
Mathematics 2022, 10(14), 2545; https://doi.org/10.3390/math10142545 - 21 Jul 2022
Cited by 7 | Viewed by 1912
Abstract
The authenticated key exchange (AKE) protocol can ensure secure communication between a client and a server in the electricity transaction of the Energy Internet of things (EIoT). Park proposed a two-factor authentication protocol 2PAKEP, whose computational burden of authentication is evenly shared by both sides. However, the computing capability of the client device is weaker than that of the server. Therefore, based on 2PAKEP, we propose an authentication protocol that transfers computational tasks from the client to the server. The client has fewer computing tasks in this protocol than the server, and the overall latency will be greatly reduced. Furthermore, the security of the proposed protocol is analyzed by using the ROR model and GNY logic. We verify the low-latency advantage of the proposed protocol through various comparative experiments and use it for EIoT electricity transaction systems in a Metaverse scenario. Full article
(This article belongs to the Special Issue Codes, Designs, Cryptography and Optimization, 2nd Edition)
22 pages, 6732 KiB  
Review
Traffic Missing Data Imputation: A Selective Overview of Temporal Theories and Algorithms
by Tuo Sun, Shihao Zhu, Ruochen Hao, Bo Sun and Jiemin Xie
Mathematics 2022, 10(14), 2544; https://doi.org/10.3390/math10142544 - 21 Jul 2022
Cited by 4 | Viewed by 2199
Abstract
A great challenge for intelligent transportation systems (ITS) is missing traffic data. Traffic data serve as the input for various transportation applications. In the past few decades, several methods for traffic temporal data imputation have been proposed. A key issue is that temporal information collected by neighboring detectors can make the imputation of missing traffic data more accurate. This review analyzes traffic temporal data imputation methods. Research methods, missing patterns, assumptions, imputation styles, application conditions, limitations, and public datasets are reviewed. Then, five representative methods are tested under different missing patterns and missing ratios. California performance measurement system (PeMS) data, including traffic volume and speed, are selected to conduct the test. Probabilistic principal component analysis performs the best under most conditions. Full article
16 pages, 14690 KiB  
Article
Techno-Economic Evaluation of Optimal Integration of PV Based DG with DSTATCOM Functionality with Solar Irradiance and Loading Variations
by Ahmed Amin, Mohamed Ebeed, Loai Nasrat, Mokhtar Aly, Emad M. Ahmed, Emad A. Mohamed, Hammad H. Alnuman and Amal M. Abd El Hamed
Mathematics 2022, 10(14), 2543; https://doi.org/10.3390/math10142543 - 21 Jul 2022
Cited by 10 | Viewed by 1360
Abstract
Nowadays, countries and their electricity sectors are moving towards the inclusion of renewable distributed generators (RDGs) to diminish the use of fossil-fuel-based DGs. The solar photovoltaic-based DG (PV-DG) is widely used as a clean and sustainable energy resource. Determining the best placements and ratings of PV-DGs is a significant task for electrical systems to realize the PV-DG potential. The capability of PV-DG inverters to inject the required reactive power into the system at night or during cloudy weather adds static compensation (STATCOM) functionality to the PV unit, which is then known as a distributed static compensator (DSTATCOM). In the literature, there is a research gap concerning the optimal allocation of PV-DGs under the seasonal variation of solar irradiance. Therefore, the aim of this paper is to determine the optimal allocation and sizing of PV-DGs along with the optimal reactive power injected by their inverters. An efficient optimization technique, the gorilla troops optimizer (GTO), is used to solve the optimal allocation problem of PV-DGs with DSTATCOM functionality on a 94-bus distribution network. Three objectives are combined into a multi-objective function: the total annual cost, the system voltage deviations, and the system stability. The simulation results show that integrating PV-DGs with DSTATCOM functionality reduces the total system cost and considerably enhances system performance in terms of voltage deviations and stability compared with including PV-DGs without DSTATCOM functionality. The optimal integration of PV-DGs with DSTATCOM functionality can reduce the total cost and the voltage deviations by 15.05% and 77.05%, respectively, while the total voltage stability is enhanced by 25.43% compared with the base case. Full article
27 pages, 707 KiB  
Article
Using Value-Based Potentials for Making Approximate Inference on Probabilistic Graphical Models
by Pedro Bonilla-Nadal, Andrés Cano, Manuel Gómez-Olmedo, Serafín Moral and Ofelia Paula Retamero
Mathematics 2022, 10(14), 2542; https://doi.org/10.3390/math10142542 - 21 Jul 2022
Cited by 1 | Viewed by 1372
Abstract
The computerization of many everyday tasks generates vast amounts of data, and this has led to the development of machine-learning methods which are capable of extracting useful information from the data so that the data can be used in future decision-making processes. For a long time now, a number of fields, such as medicine (and all healthcare-related areas) and education, have been particularly interested in obtaining relevant information from this stored data. This interest has resulted in the need to deal with increasingly complex problems which involve many different variables with a high degree of interdependency. This produces models (and in our case probabilistic graphical models) that are difficult to handle and that require very efficient techniques to store and use the information that quantifies the relationships between the problem variables. It has therefore been necessary to develop efficient structures, such as probability trees or value-based potentials, to represent the information. Even so, there are problems that must be treated using approximation since this is the only way that results can be obtained, despite the corresponding loss of information. The aim of this article is to show how the approximation can be performed with value-based potentials. Our experimental work is based on checking the behavior of this approximation technique on several Bayesian networks related to medical problems, and our experiments show that in some cases there are notable savings in memory space with limited information loss. Full article
11 pages, 269 KiB  
Article
Differential Game for an Infinite System of Two-Block Differential Equations
by Gafurjan Ibragimov, Sarvinoz Kuchkarova, Risman Mat Hasim and Bruno Antonio Pansera
Mathematics 2022, 10(14), 2541; https://doi.org/10.3390/math10142541 - 21 Jul 2022
Cited by 1 | Viewed by 1243
Abstract
We present a pursuit differential game for an infinite system of two-block differential equations in Hilbert space l2. The pursuer and evader control functions are subject to integral constraints. The differential game is said to be completed if the state of the system falls into the origin of l2 at some finite time. The purpose of the pursuer is to bring the state of the controlled system to the origin of the space l2, whereas the evader’s aim is to prevent this. For the optimal pursuit time, we obtain an equation and construct the optimal strategies for the players. Full article
(This article belongs to the Special Issue Differential Games and Its Applications)
17 pages, 469 KiB  
Article
Modeling of Territorial and Managerial Aspects of Robotization of Agriculture in Russia
by Yury B. Melnikov, Egor Skvortsov, Natalia Ziablitckaia and Alexander Kurdyumov
Mathematics 2022, 10(14), 2540; https://doi.org/10.3390/math10142540 - 21 Jul 2022
Cited by 2 | Viewed by 979
Abstract
In the context of labor shortages and objective patterns in the development of the means of production in a number of agricultural sectors, farmers are increasingly using robotics. Despite significant positive economic effects, the robotization of agriculture in Russia is proceeding slowly and very unevenly. This suggests that the robotization of agriculture is influenced by the socio-economic characteristics of the regions. The methods are based on a systematic approach to research and an algebraic approach to modelling, which, in our view, comprises several components. To build the models, data on the introduction of robotics in Russian agriculture for 2006–2020 and the socio-economic characteristics of the regions during the period of the most intensive introduction of robots (2013–2017) were used. We conclude that the robotization of agriculture in the Russian Federation is at the implementation stage, which is confirmed by a significant spread in the correlation coefficients between robotization indicators and various socio-economic characteristics of the regions, including the share of organizations using the Internet, the availability of road infrastructure, the share of the rural population, and a number of other indicators. It is shown that, at this stage of the robotization of agriculture, models of the management process are the most important, with priority given to the subjective component of decision-making about introducing robotics at both the micro and regional levels. We propose models that reflect various aspects of the robotization process, and three mathematical models for implementing the strategy are built, forming a model triad. Three theorems on the existence of an optimal realization of the strategy are proved. Full article
19 pages, 3472 KiB  
Article
An Efficient Method for Breast Mass Classification Using Pre-Trained Deep Convolutional Networks
by Ebtihal Al-Mansour, Muhammad Hussain, Hatim A. Aboalsamh and Fazal-e-Amin
Mathematics 2022, 10(14), 2539; https://doi.org/10.3390/math10142539 - 21 Jul 2022
Cited by 3 | Viewed by 1386
Abstract
Masses are the early indicators of breast cancer, and distinguishing between benign and malignant masses is a challenging problem. Many machine learning- and deep learning-based methods have been proposed to distinguish benign masses from malignant ones on mammograms. However, their performance is not satisfactory. Though deep learning has been shown to be effective in a variety of applications, it is challenging to apply it for mass classification since it requires a large dataset for training and the number of available annotated mammograms is limited. A common approach to overcome this issue is to employ a pre-trained model and fine-tune it on mammograms. Though this works well, it still involves fine-tuning a huge number of learnable parameters with a small number of annotated mammograms. To tackle the small set problem in the training or fine-tuning of CNN models, we introduce a new method, which uses a pre-trained CNN without any modifications as an end-to-end model for mass classification, without fine-tuning the learnable parameters. The training phase only identifies the neurons in the classification layer, which yield higher activation for each class, and later on uses the activation of these neurons to classify an unknown mass ROI. We evaluated the proposed approach using different CNN models on the public domain benchmark datasets, such as DDSM and INbreast. The results show that it outperforms the state-of-the-art deep learning-based methods. Full article
(This article belongs to the Special Issue Advanced Aspects of Computational Intelligence with Its Applications)
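To make the idea described in this abstract concrete, here is a deliberately simplified NumPy sketch (random activations standing in for a pre-trained CNN's classification layer, not the authors' pipeline or the DDSM/INbreast data): the "training" step picks the neuron with the highest mean activation per class, and a new ROI is labeled by whichever selected neuron responds more strongly.

```python
# Simplified stand-in for the activation-based labeling idea; all values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
act_benign = rng.normal(0.0, 1.0, size=(40, 1000))      # 40 ROIs x 1000 class-layer neurons
act_malignant = rng.normal(0.0, 1.0, size=(40, 1000))
act_benign[:, 87] += 3.0                                 # pretend neuron 87 prefers benign ROIs
act_malignant[:, 321] += 3.0                             # pretend neuron 321 prefers malignant ROIs

# "Training": pick, for each class, the neuron with the highest mean activation.
neuron_benign = int(act_benign.mean(axis=0).argmax())
neuron_malignant = int(act_malignant.mean(axis=0).argmax())

# Classify a new ROI by whichever selected neuron fires more strongly.
new_roi = rng.normal(0.0, 1.0, size=1000)
new_roi[321] += 3.0                                      # this synthetic ROI "looks malignant"
label = "malignant" if new_roi[neuron_malignant] > new_roi[neuron_benign] else "benign"
print(f"selected neurons: benign={neuron_benign}, malignant={neuron_malignant}; prediction: {label}")
```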
14 pages, 484 KiB  
Article
Impact of Regressand Stratification in Dataset Shift Caused by Cross-Validation
by José A. Sáez and José L. Romero-Béjar
Mathematics 2022, 10(14), 2538; https://doi.org/10.3390/math10142538 - 21 Jul 2022
Cited by 1 | Viewed by 1265
Abstract
Data that have not been modeled cannot be correctly predicted. Under this assumption, this research studies how k-fold cross-validation can introduce dataset shift in regression problems. This shift means that the data distributions in the training and test sets differ and, therefore, that the estimation of model performance deteriorates. Even though stratification of the output variable is widely used in classification to reduce the impact of dataset shift induced by cross-validation, its use in regression is not widespread in the literature. This paper analyzes the consequences for dataset shift of including different regressand stratification schemes in cross-validation with regression data. The results obtained show that these schemes create more similar training and test sets, reducing the presence of dataset shift related to cross-validation. The bias and deviation of the performance estimates obtained by regression algorithms improve with the highest numbers of strata, as does the number of cross-validation repetitions necessary to obtain these better results. Full article
(This article belongs to the Special Issue Applied Statistical Modeling and Data Mining)
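As a rough illustration of regressand stratification in cross-validation (one simple scheme, assuming scikit-learn; the paper studies several), a continuous target can be binned into quantile strata whose labels are passed to a stratified k-fold splitter:

```python
# Sketch only: stratify a continuous regressand by quantile bins so that each
# fold sees a similar target distribution. Data are synthetic.
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

# Bin the continuous target into quantile strata and stratify the folds on the bins.
n_strata = 5
edges = np.quantile(y, np.linspace(0, 1, n_strata + 1)[1:-1])
strata = np.digitize(y, edges)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, strata)):
    print(f"fold {fold}: train mean y = {y[train_idx].mean():.3f}, "
          f"test mean y = {y[test_idx].mean():.3f}")
```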
12 pages, 1578 KiB  
Article
Reconstructing the Local Volatility Surface from Market Option Prices
by Soobin Kwak, Youngjin Hwang, Yongho Choi, Jian Wang, Sangkwon Kim and Junseok Kim
Mathematics 2022, 10(14), 2537; https://doi.org/10.3390/math10142537 - 21 Jul 2022
Cited by 1 | Viewed by 1776
Abstract
We present an efficient and accurate computational algorithm for reconstructing a local volatility surface from given market option prices. The local volatility surface is dependent on the values of both the time and underlying asset. We use the generalized Black–Scholes (BS) equation and finite difference method (FDM) to numerically solve the generalized BS equation. We reconstruct the local volatility function, which provides the best fit between the theoretical and market option prices by minimizing a cost function that is a quadratic representation of the difference between the two option prices. This is an inverse problem in which we want to calculate a local volatility function consistent with the observed market prices. To achieve robust computation, we place the sample points of the unknown volatility function in the middle of the expiration dates. We perform various numerical experiments to confirm the simplicity, robustness, and accuracy of the proposed method in reconstructing the local volatility function. Full article
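To make the calibration idea concrete, the toy sketch below fits a single constant volatility by minimizing a quadratic cost between model and market call prices; the paper instead reconstructs a full local volatility surface with a finite difference solver, and the quotes here are hypothetical.

```python
# Toy calibration: minimize a quadratic cost between model and market prices.
# The "model" here is a plain Black-Scholes call with one constant volatility,
# standing in for the paper's local volatility surface and FDM solver.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, r = 100.0, 0.02
strikes = np.array([90.0, 100.0, 110.0])
maturities = np.array([0.5, 0.5, 0.5])
market = np.array([12.10, 5.30, 1.70])       # hypothetical market quotes

def cost(sigma):
    model = bs_call(S, strikes, maturities, r, sigma)
    return np.sum((model - market) ** 2)      # quadratic cost, as in the abstract

res = minimize_scalar(cost, bounds=(0.01, 1.0), method="bounded")
print(f"fitted volatility = {res.x:.4f}")
```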
24 pages, 1352 KiB  
Article
Behavioral Study of Software-Defined Network Parameters Using Exploratory Data Analysis and Regression-Based Sensitivity Analysis
by Mobayode O. Akinsolu, Abimbola O. Sangodoyin and Uyoata E. Uyoata
Mathematics 2022, 10(14), 2536; https://doi.org/10.3390/math10142536 - 21 Jul 2022
Cited by 3 | Viewed by 1567
Abstract
To provide a low-cost methodical way for inference-driven insight into the assessment of SDN operations, a behavioral study of key network parameters that predicate the proper functioning and performance of software-defined networks (SDNs) is presented to characterize their alterations or variations, given various emulated SDN scenarios. It is standard practice to use simulation environments to investigate the performance characteristics of SDNs, quantitatively and qualitatively; hence, the use of emulated scenarios to typify the investigated SDN in this paper. The key parameters studied analytically are the jitter, response time and throughput of the SDN. These network parameters provide the most vital metrics in SDN operations according to literature, and they have been behaviorally studied in the following popular SDN states: normal operating condition without any incidents on the SDN, hypertext transfer protocol (HTTP) flooding, transmission control protocol (TCP) flooding, and user datagram protocol (UDP) flooding, when the SDN is subjected to a distributed denial-of-service (DDoS) attack. The behavioral study is implemented primarily via univariate and multivariate exploratory data analysis (EDA) to characterize and visualize the variations of the SDN parameters for each of the emulated scenarios, and linear regression-based analysis to draw inferences on the sensitivity of the SDN parameters to the emulated scenarios. Experimental results indicate that the SDN performance metrics (i.e., jitter, latency and throughput) vary as the SDN scenario changes given a DDoS attack on the SDN, and they are all sensitive to the respective attack scenarios with some level of interactions between them. Full article
(This article belongs to the Special Issue Sensitivity Analysis)
29 pages, 7723 KiB  
Article
MSGWO-MKL-SVM: A Missing Link Prediction Method for UAV Swarm Network Based on Time Series
by Mingyu Nan, Yifan Zhu, Jie Zhang, Tao Wang and Xin Zhou
Mathematics 2022, 10(14), 2535; https://doi.org/10.3390/math10142535 - 21 Jul 2022
Cited by 2 | Viewed by 1068
Abstract
Missing link prediction (MLP) has long been an active research area in the field of complex networks, and it has recently been extensively used in UAV swarm network reconstruction. A UAV swarm is an artificial network with strong randomness, for which prediction methods based on network similarity often perform poorly. To address these problems, this paper proposes a Multiple Kernel Learning algorithm with a multi-strategy grey wolf optimizer based on time series (MSGWO-MKL-SVM). The Multiple Kernel Learning (MKL) method is adopted in this algorithm to extract advanced features of time series, and the Support Vector Machine (SVM) algorithm is used to determine the hyperplane of the threshold value in a nonlinear high-dimensional space. In addition, we propose a new cluster-based measurable indicator for Multiple Kernel Learning, transforming the Multiple Kernel Learning problem into a multi-objective optimization problem. Adaptive neighborhood strategies are used to enhance the global searching ability of the grey wolf optimizer (GWO) algorithm. Comparison experiments were conducted on standard UCI datasets and professional UAV swarm datasets. The classification accuracy of MSGWO-MKL-SVM on the UCI datasets is improved by 6.2% on average, and its link prediction accuracy on the professional UAV swarm datasets is improved by 25.9% on average. Full article
(This article belongs to the Special Issue Evolutionary Multi-Criteria Optimization: Methods and Applications)
24 pages, 1604 KiB  
Article
The Effects of Cognitive and Skill Learning on the Joint Vendor–Buyer Model with Imperfect Quality and Fuzzy Random Demand
by Kaifang Fu, Zhixiang Chen and Guolin Zhou
Mathematics 2022, 10(14), 2534; https://doi.org/10.3390/math10142534 - 21 Jul 2022
Viewed by 814
Abstract
This study investigates the optimization of an integrated production–inventory system that consists of an original equipment manufacturer (OEM) supplier and an OEM brand company. The cognitive and skill learning effect, imperfect quality, and fuzzy random demand are incorporated into the integrated two-echelon supply chain model to minimize the total cost. We contribute by dividing the learning effect into cognitive learning and skill learning and by building a new learning curve that resembles real complexity more closely and avoids the problem of production time tending towards zero once production is stable. In total, five production–inventory models are constructed. Furthermore, a solution procedure is designed to solve the model and obtain the optimal order quantities and the optimal shipment size. Additionally, the symbolic distance method is used to deal with the inverse fuzzification. Numerical analysis then shows that increasing the cognitive learning coefficient and the skill learning coefficient reduces the total cost of the production–inventory system. As the cognitive learning coefficient increases, the gap between the total cost under cognitive and skill learning and that under Wright learning decreases consistently; as the skill learning coefficient increases, this gap increases consistently. The total cost under cognitive and skill learning shows hyperbolic characteristics. The important insights of this study for managers are that employees’ knowledge plays an important role in reducing costs in the early learning stage and that humanistic management measures should be taken to reduce employee turnover. Compared with skill learning training for production technicians, more attention should be paid to training in cognitive learning. Full article
31 pages, 3106 KiB  
Article
Calculating the Segmented Helix Formed by Repetitions of Identical Subunits thereby Generating a Zoo of Platonic Helices
by Robert L. Read
Mathematics 2022, 10(14), 2533; https://doi.org/10.3390/math10142533 - 21 Jul 2022
Viewed by 1436
Abstract
Eric Lord has observed: “In nature, helical structures arise when identical structural subunits combine sequentially, the orientational and translational relation between each unit and its predecessor remaining constant.” This paper proves Lord’s observation. Constant-time algorithms are given for the segmented helix generated from the intrinsic properties of a stacked object and its conjoining rule. Standard results from screw theory and previous work are combined with corollaries of Lord’s observation to allow calculations of segmented helices from either transformation matrices or four known consecutive points. The construction of these from the intrinsic properties of the rule for conjoining repeated subunits of arbitrary shape is provided, allowing the complete parameters describing the unique segmented helix generated by arbitrary stackings to be easily calculated. Free/Libre open-source interactive software and a website which performs this computation for arbitrary prisms along with interactive 3D visualization is provided. We prove that any subunit can produce a toroid-like helix or a maximally-extended helix, forming a continuous spectrum based on joint-face normal twist. This software, website and paper, taken together, compute, render, and catalog an exhaustive “zoo” of 28 uniquely-shaped platonic helices, such as the Boerdijk–Coxeter tetrahelix and various species of helices formed from dodecahedra. Full article
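A tiny numerical illustration of Lord's observation quoted in this abstract: iterating one fixed rigid motion (a rotation plus a translation) traces points that lie on a helix. The particular rotation and translation below are arbitrary and unrelated to the paper's platonic solids.

```python
# Repeatedly applying one fixed rigid motion p -> R p + t produces points on a helix.
import numpy as np

theta = np.deg2rad(25.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis
t = np.array([0.3, 0.0, 0.4])                           # fixed translation per subunit

p = np.array([1.0, 0.0, 0.0])
points = [p]
for _ in range(12):
    p = R @ p + t
    points.append(p)
print(np.round(np.array(points), 3))   # xy-components circle a fixed center, z grows linearly
```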
22 pages, 6645 KiB  
Article
Opinion Mining of Green Energy Sentiment: A Russia-Ukraine Conflict Analysis
by Raquel Ibar-Alonso, Raquel Quiroga-García and Mar Arenas-Parra
Mathematics 2022, 10(14), 2532; https://doi.org/10.3390/math10142532 - 21 Jul 2022
Cited by 18 | Viewed by 3263
Abstract
In this paper, we assess sentiment and emotion regarding green energy by employing a social listening analysis on Twitter. Knowing the sentiment and attitude of the population is important because it will help to promote policies and actions that favor the development of green or renewable energies. We chose to study a crucial period that coincides with the onset of the 2022 Russo-Ukrainian conflict, which has undoubtedly affected energy policies worldwide. We searched for messages containing the term “green energy” during the days before and after the conflict started. We then performed a semantic analysis of the most frequent words, a comparative analysis of sentiments and emotions in both periods, a dimensionality reduction analysis, and an analysis of the variance of tweets versus retweets. The results of the analysis show that the conflict has changed society’s sentiments about an energy transition to green energy. In addition, we found that negative feelings and emotions emerged among green energy tweeters once the conflict started. However, the emotion of confidence also increased, as the conflict, intimately linked to energy, has driven all countries to promote a rapid transition to greener energy sources. Finally, we observed that of the two latent variables identified for social opinion, one of them, pessimism, was maintained, while the other, optimism, was subdivided into optimism and expectation. Full article
14 pages, 2828 KiB  
Article
Deep Transfer Learning Method Based on Automatic Domain Alignment and Moment Matching
by Jingui Zhang, Chuangji Meng, Cunlu Xu, Jingyong Ma and Wei Su
Mathematics 2022, 10(14), 2531; https://doi.org/10.3390/math10142531 - 21 Jul 2022
Cited by 2 | Viewed by 1109
Abstract
Domain discrepancy is a key research problem in the field of deep domain adaptation. Two main strategies are used to reduce the discrepancy: the parametric method and the nonparametric method. Both methods have achieved good results in practical applications. However, research on whether the combination of the two can further reduce domain discrepancy has not been conducted. Therefore, in this paper, a deep transfer learning method based on automatic domain alignment and moment matching (DA-MM) is proposed. First, an automatic domain alignment layer is embedded in the front of each domain-specific layer of a neural network structure to preliminarily align the source and target domains. Then, a moment matching measure (such as MMD distance) is added between every domain-specific layer to map the source and target domain features output by the alignment layer to a common reproduced Hilbert space. The results of an extensive experimental analysis over several public benchmarks show that DA-MM can reduce the distribution discrepancy between the two domains and improve the domain adaptation performance. Full article
(This article belongs to the Section Mathematics and Computer Science)
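For context on the moment-matching ingredient mentioned in this abstract, the sketch below computes a generic RBF-kernel MMD distance between source and target feature batches; it is not the authors' DA-MM network, and the feature arrays are random stand-ins.

```python
# Generic (biased) estimate of squared MMD with an RBF kernel between two feature batches.
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD with an RBF kernel."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(128, 16))   # hypothetical source-domain features
target = rng.normal(0.5, 1.0, size=(128, 16))   # hypothetical target-domain features
print(f"MMD^2(source, target) = {rbf_mmd2(source, target):.4f}")
```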
11 pages, 285 KiB  
Article
General Relativistic Space-Time with η1-Einstein Metrics
by Yanlin Li, Fatemah Mofarreh, Santu Dey, Soumendu Roy and Akram Ali
Mathematics 2022, 10(14), 2530; https://doi.org/10.3390/math10142530 - 21 Jul 2022
Cited by 21 | Viewed by 1451
Abstract
The present research paper studies an η1-Einstein soliton in general relativistic space-time with a torse-forming potential vector field. Besides this, we evaluate the characterization of the metrics when the space-time with a semi-symmetric energy-momentum tensor admits an η1-Einstein soliton whose potential vector field is torse-forming. In addition, certain curvature conditions on the space-time that admit an η1-Einstein soliton are explored, and the importance of the Laplace equation on the space-time is established in terms of the η1-Einstein soliton. Lastly, we give some physical interpretations connected with the dust fluid, dark fluid and radiation eras in general relativistic space-time admitting an η1-Einstein soliton. Full article
(This article belongs to the Special Issue New Advances in Differential Geometry and Optimizations on Manifolds)
17 pages, 3320 KiB  
Article
HJ-Biplot as a Tool to Give an Extra Analytical Boost for the Latent Dirichlet Assignment (LDA) Model: With an Application to Digital News Analysis about COVID-19
by Luis Pilacuan-Bonete, Purificación Galindo-Villardón and Francisco Delgado-Álvarez
Mathematics 2022, 10(14), 2529; https://doi.org/10.3390/math10142529 - 20 Jul 2022
Viewed by 1630
Abstract
The objective of this work is to generate an HJ-biplot representation of the content analysis obtained by latent Dirichlet assignment (LDA) of the headlines of three Spanish newspapers, in their web versions, referring to the pandemic caused by the SARS-CoV-2 virus (COVID-19), which has affected more than 500 million people and caused almost six million deaths to date. The HJ-biplot is used to give the model an extra analytical boost: it is an easy-to-interpret multivariate technique that does not require in-depth knowledge of statistics and captures the relationship between the COVID-19 news topics and the three digital newspapers. Compared with LDAvis and heatmap representations, the HJ-biplot provides a better representation and visualization, allowing us to analyze the relationship between each newspaper analyzed (column markers, represented by vectors) and the 14 topics obtained from the LDA model (row markers, represented by points) in the plane with the greatest informative capacity. It is concluded that the newspapers El Mundo and 20 M present greater homogeneity between the topics published during the pandemic, while El País presents topics that are less related to those of the other two newspapers, highlighting topics such as t_12 (Government_Madrid) and t_13 (Government_millions). Full article
(This article belongs to the Special Issue Multivariate Statistics: Theory and Its Applications)
13 pages, 273 KiB  
Article
Fully Degenerating of Daehee Numbers and Polynomials
by Sahar Albosaily, Waseem Ahmad Khan, Serkan Araci and Azhar Iqbal
Mathematics 2022, 10(14), 2528; https://doi.org/10.3390/math10142528 - 20 Jul 2022
Cited by 2 | Viewed by 1133
Abstract
In this paper, we consider fully degenerate Daehee numbers and polynomials by using the degenerate logarithm function. We investigate some properties of these numbers and polynomials. We also introduce higher-order multiple fully degenerate Daehee polynomials and numbers, which can be represented in terms of Riemann integrals on the interval [0,1]. Finally, we derive their summation formulae. Full article
19 pages, 2884 KiB  
Article
Thermal-Economic Optimization of Plate–Fin Heat Exchanger Using Improved Gaussian Quantum-Behaved Particle Swarm Algorithm
by Joo Hyun Moon, Kyun Ho Lee, Haedong Kim and Dong In Han
Mathematics 2022, 10(14), 2527; https://doi.org/10.3390/math10142527 - 20 Jul 2022
Cited by 6 | Viewed by 1190
Abstract
Heat exchangers are usually designed using a sophisticated process of trial and error to find proper values of unknown parameters which satisfy given requirements. Recently, the design of heat exchangers using evolutionary optimization algorithms has received attention. The major aim of the present study is to propose an improved Gaussian quantum-behaved particle swarm optimization (GQPSO) algorithm for enhanced optimization performance and to verify it through application to a multivariable thermal-economic optimization problem of a crossflow plate–fin heat exchanger (PFHE). Three single objective functions, the number of entropy generation units (NEGUs), total annual cost (TAC), and heat exchanger surface area (A), were minimized separately by evaluating optimal values of seven unknown variables using four different PSO-based methods. By comparing the obtained best fitness values, the improved GQPSO approach could search quickly for better global optimal solutions by preventing particles from falling into local minima thanks to its modified local attractor scheme based on Gaussian distributed random numbers. For example, the proposed GQPSO improved the best fitness values by 40% for NEGUs, 17% for TAC, and 4.5% for A, respectively. Consequently, the present study suggests that the improved GQPSO approach with the modified local attractor scheme can be efficient in rapidly finding more suitable solutions for optimizing the thermal-economic problem of the crossflow PFHE. Full article
19 pages, 7650 KiB  
Article
Multimode Process Monitoring Based on Modified Density Peak Clustering and Parallel Variational Autoencoder
by Feng Yu, Jianchang Liu and Dongming Liu
Mathematics 2022, 10(14), 2526; https://doi.org/10.3390/math10142526 - 20 Jul 2022
Cited by 5 | Viewed by 1420
Abstract
Clustering algorithms and deep learning methods have been widely applied in multimode process monitoring. However, for process data with unknown modes, traditional clustering methods can hardly identify the number of modes automatically. Further, deep learning methods can learn effective features from nonlinear process data, but the extracted features do not necessarily follow the Gaussian distribution, which may lead to an incorrect control limit for fault detection. In this paper, a comprehensive monitoring method based on modified density peak clustering and a parallel variational autoencoder (MDPC-PVAE) is proposed for multimode processes. Firstly, a novel clustering algorithm, named MDPC, is presented for mode identification and division. MDPC can identify the number of modes without prior knowledge of mode information and divide the whole process data into multiple modes. Then, the PVAE is established based on the distinguished multimode data to generate deep nonlinear features, in which the generated features in each VAE follow the Gaussian distribution. Finally, the Gaussian feature representations obtained by PVAE are used to construct the H2 statistics, and the control limits are determined by the kernel density estimation (KDE) method. The effectiveness of the proposed method is evaluated on the Tennessee Eastman process and a semiconductor etching process. Full article
(This article belongs to the Special Issue Engineering Calculation and Data Modeling)
20 pages, 9549 KiB  
Article
Knowledge-Based Scene Graph Generation with Visual Contextual Dependency
by Lizong Zhang, Haojun Yin, Bei Hui, Sijuan Liu and Wei Zhang
Mathematics 2022, 10(14), 2525; https://doi.org/10.3390/math10142525 - 20 Jul 2022
Cited by 3 | Viewed by 1925
Abstract
Scene graph generation is the basis of various computer vision applications, including image retrieval, visual question answering, and image captioning. Previous studies have relied on visual features or incorporated auxiliary information to predict object relationships. However, the rich semantics of external knowledge have not yet been fully utilized, and the combination of visual and auxiliary information can lead to visual dependencies, which impacts relationship prediction among objects. Therefore, we propose a novel knowledge-based model with adjustable visual contextual dependency. Our model has three key components. The first module extracts the visual features and bounding boxes in the input image. The second module uses two encoders to fully integrate visual information and external knowledge. Finally, visual context loss and visual relationship loss are introduced to adjust the visual dependency of the model. The difference between the initial prediction results and the visual dependency results is calculated to generate the dependency-corrected results. The proposed model can obtain better global and contextual information for predicting object relationships, and the visual dependencies can be adjusted through the two loss functions. The results of extensive experiments show that our model outperforms most existing methods. Full article
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)
13 pages, 493 KiB  
Article
Output Tracking Control of Random Nonlinear Time-Varying Systems
by Ruitao Wang, Hui Wang, Wuquan Li and Ben Niu
Mathematics 2022, 10(14), 2524; https://doi.org/10.3390/math10142524 - 20 Jul 2022
Viewed by 943
Abstract
This paper is concerned with the output tracking control problem for random nonlinear systems with time-varying powers. A distinct feature of this paper is that we consider time-varying powers and the second-order moment process simultaneously, which is more practical in real applications than the existing results where only one factor is considered. We propose a new design scheme, which ensures that the fourth moment of the tracking error can be adjusted to be arbitrarily small and all the states of the closed-loop system are bounded in probability. Finally, a numerical simulation is given to demonstrate the feasibility of the control idea. Full article
(This article belongs to the Section Network Science)
20 pages, 5700 KiB  
Article
Learning to Utilize Curiosity: A New Approach of Automatic Curriculum Learning for Deep RL
by Zeyang Lin, Jun Lai, Xiliang Chen, Lei Cao and Jun Wang
Mathematics 2022, 10(14), 2523; https://doi.org/10.3390/math10142523 - 20 Jul 2022
Cited by 1 | Viewed by 1419
Abstract
In recent years, reinforcement learning algorithms based on automatic curriculum learning have been increasingly applied to multi-agent system problems. However, in the sparse reward environment, the reinforcement learning agents get almost no feedback from the environment during the whole training process, which leads to a decrease in the convergence speed and learning efficiency of the curriculum reinforcement learning algorithm. Based on the automatic curriculum learning algorithm, this paper proposes a curriculum reinforcement learning method based on the curiosity model (CMCL). The method divides the curriculum sorting criteria into temporal-difference error and curiosity reward, uses the K-fold cross validation method to evaluate the difficulty priority of task samples, uses the Intrinsic Curiosity Module (ICM) to evaluate the curiosity priority of the task samples, and uses the curriculum factor to adjust the learning probability of the task samples. This study compares the CMCL algorithm with other baseline algorithms in cooperative-competitive environments, and the experimental simulation results show that the CMCL method can improve the training performance and robustness of multi-agent deep reinforcement learning algorithms. Full article
20 pages, 1598 KiB  
Article
Exact Solutions and Non-Traveling Wave Solutions of the (2+1)-Dimensional Boussinesq Equation
by Lihui Gao, Chunxiao Guo, Yanfeng Guo and Donglong Li
Mathematics 2022, 10(14), 2522; https://doi.org/10.3390/math10142522 - 20 Jul 2022
Cited by 2 | Viewed by 1102
Abstract
By the extended (G′/G) method and the improved tanh function method, the exact solutions of the (2+1)-dimensional Boussinesq equation are studied. Firstly, with the help of the solutions of the nonlinear ordinary differential equation, we obtain the new traveling wave exact solutions of the equation by the homogeneous equilibrium principle and the extended (G′/G) method. Secondly, by constructing the new ansatz solutions and applying the improved tanh function method, many non-traveling wave exact solutions of the equation are given. The solutions mainly include hyperbolic, trigonometric and rational functions, which reflect different types of solutions for nonlinear waves. Finally, we discuss the effects of these solutions on the formation of rogue waves according to the numerical simulation. Full article
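For readers unfamiliar with the method, the standard (G′/G)-expansion ansatz has the textbook form below; the paper uses an extended variant, so this is background rather than the exact ansatz employed there.

```latex
u(\xi) = a_0 + \sum_{i=1}^{m} a_i \left(\frac{G'(\xi)}{G(\xi)}\right)^{i},
\qquad G''(\xi) + \lambda G'(\xi) + \mu G(\xi) = 0,
```

where ξ is the traveling-wave variable, the a_i, λ and μ are constants to be determined, and the positive integer m is fixed by balancing the highest-order derivative against the nonlinear term.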
18 pages, 885 KiB  
Article
Intrinsic Correlation with Betweenness Centrality and Distribution of Shortest Paths
by Yelai Feng, Huaixi Wang, Chao Chang and Hongyi Lu
Mathematics 2022, 10(14), 2521; https://doi.org/10.3390/math10142521 - 20 Jul 2022
Cited by 6 | Viewed by 1329
Abstract
Betweenness centrality evaluates the importance of nodes and edges in networks and is one of the most pivotal indices in complex network analysis; for example, it is widely used in centrality ordering, failure cascading modeling, and path planning. Existing algorithms are based on single-source shortest path techniques, which cannot show how betweenness centrality changes as paths grow, preventing deeper analysis. We propose a novel algorithm that calculates betweenness centrality hierarchically and accelerates the computation via GPUs. Based on the novel algorithm, we find that the distribution of shortest paths has an intrinsic correlation with betweenness centrality. Furthermore, we find that the betweenness centrality indices of some nodes are 0 even though these nodes are not edge nodes, and they are of critical significance in real networks. Experimental evidence shows that betweenness centrality is closely related to the distribution of the shortest paths. Full article
(This article belongs to the Special Issue Complex Network Modeling: Theory and Applications)
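The two quantities the abstract relates can be reproduced on a toy graph with networkx (a generic CPU computation, not the authors' hierarchical GPU algorithm):

```python
# Betweenness centrality and the shortest-path length distribution on a small random graph.
import networkx as nx
from collections import Counter

G = nx.erdos_renyi_graph(n=50, p=0.08, seed=42)
bc = nx.betweenness_centrality(G)

path_lengths = Counter(
    d for lengths in dict(nx.all_pairs_shortest_path_length(G)).values()
    for d in lengths.values() if d > 0
)
print("shortest-path length distribution:", dict(sorted(path_lengths.items())))
print("nodes with zero betweenness:", [v for v, c in bc.items() if c == 0.0])
```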
32 pages, 7868 KiB  
Article
Queueing Theory-Based Mathematical Models Applied to Enterprise Organization and Industrial Production Optimization
by Laurentiu Rece, Sorin Vlase, Daniel Ciuiu, Giorgian Neculoiu, Stefan Mocanu and Arina Modrea
Mathematics 2022, 10(14), 2520; https://doi.org/10.3390/math10142520 - 20 Jul 2022
Cited by 5 | Viewed by 1885
Abstract
In this paper, a new method is presented that uses queueing theory models to ensure an optimal production department size, optimized production costs and optimal provisioning. Queueing/waiting mathematical models form the development matrix for an experimental algorithm and, implicitly, a numerical approach, both successfully applied (and confirmed in practice) in the design of a production section for a real industrial engineering unit, with the compatibility of the method’s technological flow and equipment schemes discussed. The total costs for a queueing system with S servers depend on the number of servers. The problem of minimizing this cost with respect to S was the main aim of the paper. In order to solve it, we estimated all the variables of the system that influence the cost using the Monte Carlo method. For a Jackson queueing network, the involved linear system has good properties such that it can be solved by iterative methods such as Jacobi and Gauss–Seidel. Full article
(This article belongs to the Special Issue Applied Mathematics and Continuum Mechanics)
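A minimal sketch of the cost-versus-servers trade-off described in this abstract: for an M/M/S queue, take the total cost as a server cost times S plus a waiting cost times the mean number in the system, and minimize over S. The rates and unit costs below are made up, and the paper's Jackson-network and Monte Carlo machinery is not reproduced here.

```python
# Toy M/M/S cost minimization over the number of servers S.
import math

def mms_mean_in_system(lam, mu, S):
    """Mean number of customers in an M/M/S queue (requires lam < S*mu)."""
    rho = lam / (S * mu)
    a = lam / mu
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(S))
                + a**S / (math.factorial(S) * (1 - rho)))
    lq = p0 * a**S * rho / (math.factorial(S) * (1 - rho) ** 2)
    return lq + a                       # L = Lq + lambda/mu

lam, mu = 8.0, 3.0                      # arrival and service rates (hypothetical)
c_server, c_wait = 20.0, 15.0           # cost per server and per job in the system (hypothetical)
costs = {S: c_server * S + c_wait * mms_mean_in_system(lam, mu, S)
         for S in range(3, 11)}
best = min(costs, key=costs.get)
print(f"optimal S = {best}, total cost = {costs[best]:.2f}")
```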
17 pages, 1830 KiB  
Article
A Two-Stage Model with an Improved Clustering Algorithm for a Distribution Center Location Problem under Uncertainty
by Jun Wu, Xin Liu, Yuanyuan Li, Liping Yang, Wenyan Yuan and Yile Ba
Mathematics 2022, 10(14), 2519; https://doi.org/10.3390/math10142519 - 20 Jul 2022
Cited by 4 | Viewed by 1873
Abstract
Distribution centers are quite important for logistics. In order to save costs, reduce energy consumption and deal with increasingly uncertain demand, it is necessary for distribution centers to select the location strategically. In this paper, a two-stage model based on an improved clustering algorithm and the center-of-gravity method is proposed to deal with the multi-facility location problem arising from a real-world case. First, a distance function used in clustering is redefined to include both the spatial indicator and the socio-economic indicator. Then, an improved clustering algorithm is used to determine the optimal number of distribution centers needed and the coverage of each center. Third, the center-of-gravity method is used to determine the final location of each center. Finally, the improved method is compared with the traditional clustering method by testing data from 12 cities in Inner Mongolia Autonomous Region in China. The comparison result proves the proposed method’s effectiveness. Full article
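As a small illustration of the center-of-gravity step mentioned in this abstract, once a cluster of demand points is fixed the facility can be placed at the demand-weighted centroid; the coordinates and demand weights below are invented.

```python
# Demand-weighted centroid (center-of-gravity method) for one cluster of demand points.
import numpy as np

coords = np.array([[111.7, 40.8], [113.1, 41.0], [112.5, 40.2]])  # (lon, lat), hypothetical
demand = np.array([120.0, 80.0, 200.0])                            # e.g., tons per week, hypothetical

center = (demand[:, None] * coords).sum(axis=0) / demand.sum()
print(f"center-of-gravity location: lon={center[0]:.3f}, lat={center[1]:.3f}")
```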
21 pages, 337 KiB  
Article
Preplay Negotiations with Unconditional Offers of Side Payments in Two-Player Strategic-Form Games: Towards Non-Cooperative Cooperation
by Valentin Goranko
Mathematics 2022, 10(14), 2518; https://doi.org/10.3390/math10142518 - 20 Jul 2022
Viewed by 1150
Abstract
I consider strategic-form games with transferable utility extended with a phase of negotiations before the actual play of the game, where players can exchange a series of alternating (turn-based) unilaterally binding offers to each other for incentive payments of utilities after the play, conditional only on the recipients playing the strategy indicated in the offer. Every such offer transforms the game payoff matrix by accordingly transferring the offered amount from the offering player’s payoff to the recipient’s in all outcomes where the indicated strategy is played by the latter. That exchange of offers generates an unbounded-horizon, extensive-form preplay negotiations game, which is the focus of this study. In this paper, I study the case where the players assume that their opponents can terminate the preplay negotiations phase at any stage. Consequently, in their negotiation strategies, the players are guided by myopic rationality reasoning and aim at optimising each of their offers. The main results and findings include a concrete algorithmic procedure for computing players’ best offers in the preplay negotiations phase and using it to demonstrate that these negotiations can generally lead to substantial improvement of the payoffs for both players in the transformed game, but they do not always lead to optimal outcomes, as one might expect. Full article
(This article belongs to the Special Issue Cooperative Game Theory and Mathematical Structures)