Intelligent Optimization in Big Data, Machine Learning and Artificial Intelligence

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: closed (20 March 2022) | Viewed by 29873

Special Issue Editor


Prof. Dr. Jong Soo Kim
Guest Editor
Department of Industrial and Management Engineering, ERICA campus, Hanyang University, Ansan 15558, Korea
Interests: operations research; optimization; big data; machine learning

Special Issue Information

Dear Colleagues,

The importance of efficient algorithms capable of solving real-world problems is increasingly being recognized as Industry 4.0 is realized. As a result, intelligent optimization has emerged as a fundamental tool for handling the big data, machine learning, and artificial intelligence tasks of the fourth industrial revolution. It has also become a promising tool for dealing with complex optimization problems encountered in a variety of industries, including manufacturing, logistics, and services. For Industry 4.0 to succeed on a wide scale, fast and accurate intelligent algorithms capable of handling its varied optimization tasks will be essential from a technological perspective.

The purpose of this Special Issue is to gather a collection of articles reflecting the latest developments in intelligent optimization theory and applications. The fields of interest span operations research and computer science, including big data, machine learning, deep learning, reinforcement learning, artificial intelligence, and related topics.

Prof. Dr. Jong Soo Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Classical intelligent optimization algorithms
  • Population-based intelligent algorithms
  • Hybrid optimization
  • Multi-objective optimization
  • Big data
  • Machine learning
  • Artificial intelligence
  • Operations research

Published Papers (11 papers)


Research

19 pages, 1638 KiB  
Article
Agent-Based Recommendation in E-Learning Environment Using Knowledge Discovery and Machine Learning Approaches
by Zeinab Shahbazi and Yung-Cheol Byun
Mathematics 2022, 10(7), 1192; https://0-doi-org.brum.beds.ac.uk/10.3390/math10071192 - 06 Apr 2022
Cited by 26 | Viewed by 3822
Abstract
E-learning is a popular way of learning from social media websites, across many topics and content types, for people all over the world with different knowledge backgrounds and jobs. E-learning sites help users such as students, business workers, instructors, and those searching among different educational institutions. Despite the benefits of such systems, users face various challenges on online platforms. One important challenge is finding accurate information and relevant content among these resources and search results. This research proposes a virtual, intelligent agent-based recommendation system, which uses users' profile information and preferences to recommend appropriate content and search results based on their search history. We applied Natural Language Processing (NLP) techniques and semantic analysis approaches to recommend course selections to e-learners and tutors. Moreover, machine learning performance analysis was applied to improve user rating results in the e-learning environment. The system automatically learns and analyzes learner characteristics and processes each learning style through a clustering strategy. Compared with the recent state of the art in this field, the simulation results show that the proposed system minimizes metric errors relative to other works. The presented approach provides a comfortable platform for course selection and recommendation, avoids recommending the same contents and courses repeatedly, and analyzes user preferences to provide highly related content based on the user's profile. The prediction accuracy of the proposed system is 98%, compared with hybrid filtering, self-organization systems, and ensemble modeling.

33 pages, 1019 KiB  
Article
Performance of a Novel Chaotic Firefly Algorithm with Enhanced Exploration for Tackling Global Optimization Problems: Application for Dropout Regularization
by Nebojsa Bacanin, Ruxandra Stoean, Miodrag Zivkovic, Aleksandar Petrovic, Tarik A. Rashid and Timea Bezdan
Mathematics 2021, 9(21), 2705; https://0-doi-org.brum.beds.ac.uk/10.3390/math9212705 - 25 Oct 2021
Cited by 86 | Viewed by 2995
Abstract
Swarm intelligence techniques have been created to respond to theoretical and practical global optimization problems. This paper puts forward an enhanced version of the firefly algorithm that corrects acknowledged drawbacks of the original method through an explicit exploration mechanism and a chaotic local search strategy. The resulting augmented approach was theoretically tested on two sets of bound-constrained benchmark functions from the CEC suites and practically validated for automatically selecting the optimal dropout rate for the regularization of deep neural networks. Despite their successful applications in a wide spectrum of fields, one important problem that deep learning algorithms face is overfitting. The traditional way of preventing overfitting is to apply regularization; the first option in this sense is the choice of an adequate value for the dropout parameter. To demonstrate its ability to find an optimal dropout rate, the boosted version of the firefly algorithm was validated on the deep learning subfield of convolutional neural networks, with respect to five standard benchmark datasets for image processing: MNIST, Fashion-MNIST, Semeion, USPS and CIFAR-10. The performance of the proposed approach in both types of experiments was compared with other recent state-of-the-art methods, and statistical tests were conducted to confirm that the improvements are significant. Based on the experimental data, it can be concluded that the proposed algorithm clearly outperforms the other approaches.
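As a rough illustration of the mechanism described above (a sketch of the general idea, not the authors' implementation), the code below pairs the standard firefly attraction update with a logistic-map chaotic perturbation of the current best solution. All parameter values (`beta0`, `gamma`, `alpha`, the 0.1 perturbation scale) are illustrative assumptions.

```python
import random
import math

def sphere(x):
    """Benchmark objective: sphere function, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def chaotic_firefly(obj, dim=2, n=15, iters=100, lb=-5.0, ub=5.0, seed=1):
    """Firefly search with a logistic-map chaotic local step around the best firefly."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    beta0, gamma, alpha = 1.0, 1.0, 0.2   # illustrative firefly parameters
    c = rng.random()                       # chaotic variable for the logistic map
    for _ in range(iters):
        fit = [obj(p) for p in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:        # firefly i moves toward brighter firefly j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [
                        min(ub, max(lb, xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)))
                        for xi, xj in zip(pop[i], pop[j])
                    ]
        # chaotic local search: perturb the current best with a logistic-map sequence,
        # accepting the trial only if it improves the objective
        best = min(pop, key=obj)
        c = 4.0 * c * (1.0 - c)
        trial = [min(ub, max(lb, xi + (c - 0.5) * 0.1)) for xi in best]
        if obj(trial) < obj(best):
            pop[pop.index(best)] = trial
    return min(pop, key=obj)

best = chaotic_firefly(sphere)
```

The chaotic step gives the incumbent best occasional cheap refinements that the attraction moves alone would not provide, which is the exploitation role the abstract attributes to the chaotic local search.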

21 pages, 5633 KiB  
Article
A Distributed Quantum-Behaved Particle Swarm Optimization Using Opposition-Based Learning on Spark for Large-Scale Optimization Problem
by Zhaojuan Zhang, Wanliang Wang and Gaofeng Pan
Mathematics 2020, 8(11), 1860; https://0-doi-org.brum.beds.ac.uk/10.3390/math8111860 - 23 Oct 2020
Cited by 7 | Viewed by 1941
Abstract
In the era of big data, the size and complexity of data are increasing, especially for data stored in remote locations, and the difficulty is compounded by the ongoing rapid growth of data scale. Real-world optimization problems present new challenges to traditional intelligent optimization algorithms, since a serial optimization algorithm incurs a high computational cost, or cannot cope at all, when faced with large-scale distributed data. Responding to these challenges, a distributed cooperative evolutionary algorithm framework using Spark (SDCEA) is first proposed; the SDCEA can be applied where computing resources are insufficient. Second, a distributed quantum-behaved particle swarm optimization algorithm (SDQPSO) based on the SDCEA is proposed, in which an opposition-based learning scheme is incorporated to initialize the population, and a parallel search is conducted over distributed spaces. Finally, the performance of the proposed SDQPSO is tested. In comparison with SPSO, SCLPSO, and SALCPSO, SDQPSO not only improves search efficiency but also finds a better optimum at almost the same computational cost for large-scale distributed optimization problems. In conclusion, the proposed SDQPSO based on the SDCEA framework has high scalability and can be applied to solve large-scale optimization problems.
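The opposition-based initialization mentioned above admits a compact sketch (function names are hypothetical, not the paper's SDQPSO/Spark code): generate a random population, reflect each candidate to its opposite lb + ub − x, and keep the fitter half of the combined set.

```python
import random

def obl_initialize(obj, dim, pop_size, lb, ub, seed=0):
    """Opposition-based learning (OBL) initialization.

    For each random candidate x, its opposite is lb + ub - x; the fitter half of
    the combined set (2 * pop_size candidates) seeds the swarm.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    opposites = [[lb + ub - x for x in ind] for ind in pop]
    combined = sorted(pop + opposites, key=obj)  # best (lowest objective) first
    return combined[:pop_size]                   # keep the fitter half

sphere = lambda x: sum(v * v for v in x)
seeded = obl_initialize(sphere, dim=3, pop_size=10, lb=-10.0, ub=10.0)
```

The idea is that for any candidate, either it or its opposite is likely closer to the optimum, so the seeded swarm starts with better average fitness than a purely random one.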

17 pages, 461 KiB  
Article
Efficient Algorithms for a Large-Scale Supplier Selection and Order Allocation Problem Considering Carbon Emissions and Quantity Discounts
by Shin Hee Baek and Jong Soo Kim
Mathematics 2020, 8(10), 1659; https://0-doi-org.brum.beds.ac.uk/10.3390/math8101659 - 25 Sep 2020
Cited by 8 | Viewed by 2030
Abstract
This paper considers a multi-period supplier selection and order allocation problem for a green supply chain system that consists of a single buyer and multiple heterogeneous suppliers. The buyer sells multiple products to end customers and periodically replenishes each item’s inventory using a periodic inventory control policy. The periodic inventory control policy used by the buyer starts every period with an order size determination of each item and the subsequent supplier selection to fulfill the orders. Because each supplier in the system is different from other suppliers in the types of carrying items, delivery distance, item price, and quantity discount schedule, the buyer’s problem becomes a complicated optimization problem. For the described order size and supplier selection problem of the buyer, we propose a nonlinear integer programming model and develop two different algorithms to enhance the usability of the model in a real business environment with a large amount of data. The algorithms are developed to considerably cut computational time and at the same time to generate a good feasible solution to a given supplier selection and order allocation problem. Computational experiments that were conducted to test the efficiency of the algorithms showed that they can cut as much as 99% of the computational time and successfully find feasible solutions, deviating not more than 3.4% from the optimal solutions.

21 pages, 2477 KiB  
Article
A Self-Care Prediction Model for Children with Disability Based on Genetic Algorithm and Extreme Gradient Boosting
by Muhammad Syafrudin, Ganjar Alfian, Norma Latif Fitriyani, Muhammad Anshari, Tony Hadibarata, Agung Fatwanto and Jongtae Rhee
Mathematics 2020, 8(9), 1590; https://0-doi-org.brum.beds.ac.uk/10.3390/math8091590 - 15 Sep 2020
Cited by 13 | Viewed by 3773
Abstract
Detecting self-care problems is one of the most important and challenging issues for occupational therapists, since it requires a complex and time-consuming process. Machine learning algorithms have recently been applied to overcome this issue. In this study, we propose a self-care prediction model called GA-XGBoost, which combines genetic algorithms (GAs) with extreme gradient boosting (XGBoost) for predicting self-care problems of children with disability. Because the selected feature subset affects model performance, we utilize GA to find optimal feature subsets that improve the model’s performance. To validate the effectiveness of GA-XGBoost, we present six experiments: comparing GA-XGBoost with other machine learning models and previous study results, a statistical significance test, an impact analysis of feature selection, a comparison with other feature selection methods, and a sensitivity analysis of the GA parameters. During the experiments, we use accuracy, precision, recall, and f1-score to measure the performance of the prediction models. The results show that GA-XGBoost performs better than the other prediction models and the previous study results. In addition, we design and develop a web-based self-care prediction system to help therapists diagnose the self-care problems of children with disabilities, so that appropriate treatment or therapy can be chosen for each child to improve their therapeutic outcome.
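A minimal sketch of GA-based feature-subset selection follows (illustrative only: in GA-XGBoost the fitness would be XGBoost's cross-validated score on the selected columns, for which a toy fitness stands in here).

```python
import random

# Toy stand-in for the real fitness: reward selecting the truly informative
# features (indices 0-3 here) and penalize subset size. In the paper, the
# fitness would be XGBoost's validation accuracy on the selected columns.
INFORMATIVE = {0, 1, 2, 3}

def fitness(mask):
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)  # accuracy proxy minus a size penalty

def ga_feature_select(n_features=10, pop_size=20, gens=40, p_mut=0.05, seed=3):
    """Minimal GA over binary feature masks: tournament selection,
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_features)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best_mask = ga_feature_select()
```

Each individual is a 0/1 mask over the feature columns, so the GA explores subsets directly rather than ranking features one at a time, which is what lets it capture interactions between features.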

27 pages, 2166 KiB  
Article
Design and Comparative Analysis of New Personalized Recommender Algorithms with Specific Features for Large Scale Datasets
by S. Bhaskaran, Raja Marappan and B. Santhi
Mathematics 2020, 8(7), 1106; https://0-doi-org.brum.beds.ac.uk/10.3390/math8071106 - 06 Jul 2020
Cited by 14 | Viewed by 2229
Abstract
Nowadays, because of the tremendous amount of information that humans and machines produce every day, it has become increasingly hard to choose the most relevant content across a broad range of choices. This research focuses on the design of two intelligent optimization methods, using Artificial Intelligence and Machine Learning, for real-life applications that improve the generation of recommendations. In the first method, modified cluster-based intelligent collaborative filtering is applied with sequential clustering that operates on the values of the dataset, the user's neighborhood set, and the size of the recommendation list. This strategy splits the given dataset into clusters, and a recommendation list is extracted from each cluster to construct a better overall recommendation list. In the second method, a specific-features-based customized recommender works in training and recommendation steps by applying a split-and-conquer strategy: the problem datasets are clustered into a minimum number of clusters, and the better recommendation list is created among all the clusters. This strategy automatically tunes the parameter λ, which serves the role of supervised learning in generating better recommendation lists for large datasets. The quality of the proposed recommenders on several large-scale datasets is improved compared with some well-known existing methods. The proposed methods work well when λ = 0.5 with a recommendation list size |L| = 30 and a neighborhood size |S| < 30. For large values of |S|, the difference in root mean square error between the proposed methods becomes smaller.
In simulations on large-scale datasets with varying user sizes, when the user size exceeds 500, the experimental results show better metric values, and proposed method 2 performs better than proposed method 1. The differences arise because the computational structure of the methods depends on the number of user attributes, λ, the number of bipartite graph edges, and |L|. The best (Precision, Recall) values obtained with size 3000 on the large-scale Book-Crossing dataset are (0.0004, 0.0042) and (0.0004, 0.0046) for the two proposed methods, respectively. The average computational time of the proposed methods is under 10 seconds for the large-scale datasets, yielding better performance than well-known existing methods.

12 pages, 298 KiB  
Article
Attribute Reduction in Soft Contexts Based on Soft Sets and Its Application to Formal Contexts
by Won Keun Min
Mathematics 2020, 8(5), 689; https://0-doi-org.brum.beds.ac.uk/10.3390/math8050689 - 02 May 2020
Cited by 4 | Viewed by 1378
Abstract
We introduce the notion of the reduct of soft contexts, which is a special notion of a consistent set for soft contexts. Then, we study its properties and show that this notion is well explained by the two classes, 1₀ and 2₀, of independent attributes. In particular, we describe in detail how to extract a reduct from a given consistent set. Based on this extraction process, we propose a six-step method for constructing a reduct from a given consistent set. Additionally, to apply this method to formal contexts, we examine the relationship between the reducts of a given formal context and the reducts of the associated soft context. We finally illustrate, with an example, the process of obtaining reducts in a formal context using this relationship and the six-step method.
37 pages, 2594 KiB  
Article
Niching Multimodal Landscapes Faster Yet Effectively: VMO and HillVallEA Benefit Together
by Ricardo Navarro and Chyon Hae Kim
Mathematics 2020, 8(5), 665; https://0-doi-org.brum.beds.ac.uk/10.3390/math8050665 - 27 Apr 2020
Cited by 3 | Viewed by 2572
Abstract
Variable Mesh Optimization with Niching (VMO-N) is a framework for multimodal problems (those with multiple optima in several search subspaces). Its only two instances are restricted, though. Although a potent multimodal optimizer, the Hill-Valley Evolutionary Algorithm (HillVallEA) uses large populations that prolong its execution. This study strives to revise VMO-N, to contrast it with related approaches, to instantiate it effectively, to make HillVallEA faster, and to indicate methods (previous or new) for practical use. We hypothesize that extra pre-niching search in HillVallEA may reduce the overall population, and that if such a diminution is substantial, the algorithm runs more rapidly while remaining effective. After refining VMO-N, we introduce a new instance of it, dubbed Hill-Valley-Clustering-based VMO (HVcMO), which also extends HillVallEA. Results show it to be the first competitive variant of VMO-N, also ahead of the VMO-based niching strategies. Regarding the number of optima found, HVcMO performs statistically similarly to the latest HillVallEA version. However, it comes with a pivotal benefit for HillVallEA: a severe reduction of the population, which leads to an estimated drastic speed-up when the volume of the search space is in a certain range.

14 pages, 1592 KiB  
Article
Gated Recurrent Unit with Genetic Algorithm for Product Demand Forecasting in Supply Chain Management
by Jiseong Noh, Hyun-Ji Park, Jong Soo Kim and Seung-June Hwang
Mathematics 2020, 8(4), 565; https://0-doi-org.brum.beds.ac.uk/10.3390/math8040565 - 11 Apr 2020
Cited by 20 | Viewed by 3555
Abstract
Product demand forecasting plays a vital role in supply chain management, since it is directly related to the profit of the company. In response to companies’ concerns about product demand forecasting, many researchers have developed forecasting models to improve accuracy. We propose a hybrid forecasting model called GA-GRU, which combines a Genetic Algorithm (GA) with a Gated Recurrent Unit (GRU). Because many hyperparameters of the GRU affect its performance, we utilize GA to find five of them: window size, number of neurons in the hidden state, batch size, epoch size, and initial learning rate. To validate the effectiveness of GA-GRU, this paper includes three experiments: comparing GA-GRU with other forecasting models, k-fold cross-validation, and a sensitivity analysis of the GA parameters. In each experiment, we use root mean square error and mean absolute error to measure the accuracy of the forecasting models. The results show that GA-GRU obtains smaller percent deviations than the other forecasting models and suggest setting the mutation factor to 0.015 and the crossover probability to 0.70. In short, we observe that GA-GRU can optimally set the five types of hyperparameters and obtain the highest forecasting accuracy.
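The GA-over-hyperparameters idea can be sketched as follows (illustrative: `val_error` stands in for training a GRU and returning its validation RMSE, and the candidate values in `SPACE` are assumptions, not the paper's search grid).

```python
import random

# Search space mirroring the five GRU hyperparameters named in the abstract.
SPACE = {
    "window": [4, 8, 12, 16],
    "neurons": [16, 32, 64, 128],
    "batch": [16, 32, 64],
    "epochs": [50, 100, 200],
    "lr": [0.1, 0.01, 0.001],
}

def val_error(cfg):
    """Stand-in for training a GRU and returning validation RMSE: a toy
    function whose minimum sits at one plausible configuration."""
    return (abs(cfg["window"] - 12) + abs(cfg["neurons"] - 64) / 16
            + abs(cfg["batch"] - 32) / 16 + abs(cfg["epochs"] - 100) / 50
            + abs(cfg["lr"] - 0.01) * 10)

def ga_tune(gens=30, pop_size=12, p_mut=0.2, seed=7):
    """Elitist GA over hyperparameter dicts: truncation selection,
    uniform crossover, per-gene mutation."""
    rng = random.Random(seed)
    keys = list(SPACE)
    pop = [{k: rng.choice(SPACE[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=val_error)
        survivors = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = {k: rng.choice((a[k], b[k])) for k in keys}  # uniform crossover
            for k in keys:                           # per-gene mutation
                if rng.random() < p_mut:
                    child[k] = rng.choice(SPACE[k])
            children.append(child)
        pop = survivors + children
    return min(pop, key=val_error)

best_cfg = ga_tune()
```

Because each fitness evaluation is a full model training run in the real setting, the GA's appeal over grid search is that it concentrates the limited evaluation budget around promising configurations.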

14 pages, 2261 KiB  
Article
Clustering and Dispatching Rule Selection Framework for Batch Scheduling
by Gilseung Ahn and Sun Hur
Mathematics 2020, 8(1), 80; https://0-doi-org.brum.beds.ac.uk/10.3390/math8010080 - 03 Jan 2020
Cited by 3 | Viewed by 2083
Abstract
In this study, a batch scheduling problem with job grouping and batch sequencing is considered. A clustering algorithm and a dispatching rule selection model are developed to minimize total tardiness. The model and algorithm are based on the constrained k-means algorithm and a neural network. We also develop a method to generate a training dataset from historical data to train the neural network. We use numerical examples to demonstrate that the proposed algorithm and model solve batch scheduling problems efficiently and effectively.

25 pages, 4872 KiB  
Article
Using Dynamic Adjusting NGHS-ANN for Predicting the Recidivism Rate of Commuted Prisoners
by Po-Chou Shih, Chui-Yu Chiu and Chi-Hsun Chou
Mathematics 2019, 7(12), 1187; https://0-doi-org.brum.beds.ac.uk/10.3390/math7121187 - 04 Dec 2019
Cited by 12 | Viewed by 2056
Abstract
Commutation is a judicial policy that is implemented in most countries. The recidivism rate of commuted prisoners directly affects people’s perceptions and trust of commutation. Hence, if the recidivism rate of a commuted prisoner could be accurately predicted before the person returns to society, the number of reoffences could be reduced, thereby enhancing trust in the process. It is therefore of considerable importance that the recidivism rates of commuted prisoners are accurately predicted. The dynamic adjusting novel global harmony search (DANGHS) algorithm, proposed in 2018, is an improved algorithm that combines dynamic parameter adjustment strategies with the novel global harmony search (NGHS). The DANGHS algorithm improves the searching ability of the NGHS algorithm by using dynamic adjustment strategies for the genetic mutation probability. In this paper, we combined the DANGHS algorithm and an artificial neural network (ANN) into a DANGHS-ANN forecasting system to predict the recidivism rate of commuted prisoners. To verify the prediction performance of the DANGHS-ANN algorithm, we compared the experimental results with five other forecasting systems. The results showed that the proposed DANGHS-ANN algorithm gave more accurate predictions. In addition, using the threshold linear posterior decreasing strategy with the DANGHS-ANN forecasting system resulted in more accurate predictions of recidivism. Finally, the metaheuristic algorithm searches better with the dynamic parameter adjustment strategy than without it.
