
AI, Volume 2, Issue 1 (March 2021) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF form; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Using Convolutional Neural Networks to Map Houses Suitable for Electric Vehicle Home Charging
AI 2021, 2(1), 135-149; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010009 - 16 Mar 2021
Viewed by 1163
Abstract
With Electric Vehicles (EVs) emerging as the dominant form of green transport in the UK, it is critical that we better understand the existing infrastructure in place to support the uptake of these vehicles. In this multi-disciplinary paper, we demonstrate a novel end-to-end workflow using deep learning to perform automated surveys of urban areas and identify residential properties suitable for EV charging. A unique dataset comprising open-source Google Street View images was used to train and compare three deep neural networks, and represents the first attempt to classify residential driveways from streetscape imagery. We demonstrate the full system workflow on two urban areas and achieve accuracies of 87.2% and 89.3%, respectively. This proof of concept demonstrates a promising new application of deep learning in the fields of remote sensing, geospatial analysis, and urban planning, as well as a major step towards fully autonomous, artificially intelligent surveying techniques for the built environment. Full article
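The workflow above rests on convolutional filters extracting visual features, such as driveway and kerb edges, from street-level imagery. As a toy illustration of that building block (not the authors' actual networks), here is a minimal 2D convolution in NumPy whose vertical-edge kernel responds strongly where an image changes from dark to light:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN libraries):
    slide the kernel over the image and sum the elementwise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel: positive on the left, negative on the right.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a sharp vertical boundary, a crude stand-in for a kerb line

features = conv2d(image, edge_kernel)
print(features.shape)  # → (4, 4)
```

In a real network, many such kernels are learned from labelled data rather than hand-picked, and the resulting feature maps feed into further layers that produce the driveway/no-driveway classification.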

Article
A Combination of Multilayer Perceptron, Radial Basis Function Artificial Neural Networks and Machine Learning Image Segmentation for the Dimension Reduction and the Prognosis Assessment of Diffuse Large B-Cell Lymphoma
AI 2021, 2(1), 106-134; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010008 - 08 Mar 2021
Cited by 3 | Viewed by 1098
Abstract
The prognosis of diffuse large B-cell lymphoma (DLBCL) is heterogeneous. Therefore, we aimed to highlight predictive biomarkers. First, artificial intelligence was applied to a discovery series of gene expression data from 414 patients (GSE10846). A dimension reduction algorithm aimed to correlate with overall survival and other clinicopathological variables, and included a combination of Multilayer Perceptron (MLP) and Radial Basis Function (RBF) artificial neural networks, gene-set enrichment analysis (GSEA), Cox regression, and other machine learning and predictive analytics models [the C5.0 algorithm, logistic regression, Bayesian network, discriminant analysis, random trees, tree-AS, Chi-squared Automatic Interaction Detection (CHAID) tree, QUEST, classification and regression (C&R) tree, and neural net]. From an initial 54,613 gene-probes, a set of 488 genes and a final set of 16 genes were defined. Secondly, two identified markers of the immune checkpoint, PD-L1 (CD274) and IKAROS (IKZF4), were validated in an independent series from Tokai University, and their immunohistochemical expression was quantified using a machine-learning-based Weka segmentation. High PD-L1 was associated with poor overall and progression-free survival, non-GCB phenotype, Epstein–Barr virus infection (EBER+), high RGS1 expression, and several clinicopathological variables, such as a high IPI and absence of clinical response. Conversely, high expression of IKAROS was associated with good overall and progression-free survival, GCB phenotype, and a positive clinical response to treatment. Finally, the set of 16 genes (PAF1, USP28, SORT1, MAP7D3, FITM2, CENPO, PRCC, ALDH6A1, CSNK2A1, TOR1AIP1, NUP98, UBE2H, UBXN7, SLC44A2, NR2C2AP and LETM1), in combination with PD-L1, IKAROS, BCL2, MYC, CD163 and TNFAIP8, predicted the survival outcome of DLBCL with an overall accuracy of 82.1%. In conclusion, building predictive models of DLBCL is a feasible analytical strategy. Full article
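Logistic regression is one of the predictive models listed above. As a self-contained sketch of the idea — with synthetic data standing in for real gene-expression values, not the GSE10846 series — a minimal gradient-descent logistic regression separating a good- from an adverse-outcome class looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 100 "patients", 2 expression features.
# Feature 0 plays the role of an adverse-prognosis marker: high values
# are made to associate with the positive (adverse) class.
X = rng.normal(size=(100, 2))
y = (X[:, 0] + 0.2 * rng.normal(size=100) > 0).astype(float)

# Logistic regression fitted by plain batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient step on the weights
    b -= 0.5 * np.mean(p - y)            # gradient step on the bias

acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(acc)
```

The study combines many such models (plus neural networks and tree ensembles) and validates the resulting markers immunohistochemically; this fragment only shows the simplest member of that family.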
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)

Opinion
The Ouroboros Model, Proposal for Self-Organizing General Cognition Substantiated
AI 2021, 2(1), 89-105; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010007 - 26 Feb 2021
Viewed by 880
Abstract
The Ouroboros Model has been proposed as a biologically inspired, comprehensive cognitive architecture for general intelligence, encompassing both natural and artificial manifestations. The approach addresses very diverse fundamental desiderata of research in natural cognition and in artificial intelligence (AI). Here, it is described how the postulated structures have met with supportive evidence over recent years. The associated hypothesized processes could remedy pressing problems plaguing many, even the most powerful, current implementations of AI, including in particular deep neural networks. Selected recent findings from very different fields are drawn upon to illustrate the status of the model and substantiate the proposal. Full article
(This article belongs to the Special Issue Frontiers in Artificial Intelligence)

Article
Using Machine Learning and Feature Selection for Alfalfa Yield Prediction
AI 2021, 2(1), 71-88; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010006 - 14 Feb 2021
Cited by 2 | Viewed by 1526
Abstract
Predicting alfalfa biomass and crop yield for livestock feed is important to the daily lives of virtually everyone, and many features of data from this domain, combined with corresponding weather data, can be used to train machine learning models for yield prediction. In this work, we used yield data of different alfalfa varieties from multiple years in Kentucky and Georgia, and we compared the impact of different feature selection methods on machine learning (ML) models trained to predict alfalfa yield. Linear regression, regression trees, support vector machines, neural networks, Bayesian regression, and nearest neighbors were all developed with cross-validation. The features used included weather data, historical yield data, and the sown date. The feature selection methods compared included a correlation-based method, the ReliefF method, and a wrapper method. We found that the correlation-based method performed best, and the feature set it found consisted of the Julian day of the harvest, the number of days between the sown and harvest dates, cumulative solar radiation since the previous harvest, and cumulative rainfall since the previous harvest. Using these features, the k-nearest neighbor and random forest methods achieved an average R value over 0.95, and an average mean absolute error of less than 200 lbs./acre. Our top R2 of 0.90 beats a previous work's best R2 of 0.87. Our primary contribution is the demonstration that ML, with feature selection, shows promise in predicting crop yields even on simple datasets with a handful of features, and that reporting accuracies in R and R2 offers an intuitive way to compare results among various crops. Full article
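The winning correlation-based method ranks candidate features by how strongly each correlates with the target and keeps the top-ranked ones. A minimal sketch in NumPy — with a synthetic yield signal and hypothetical proxies for the solar-radiation and rainfall features, not the Kentucky/Georgia data — shows the mechanism:

```python
import numpy as np

def correlation_select(X, y, k):
    """Rank features by |Pearson correlation| with the target; keep the top k."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    top = np.argsort(scores)[::-1][:k]
    return sorted(top.tolist())

rng = np.random.default_rng(42)
n = 200
solar = rng.uniform(0, 30, n)   # proxy: cumulative solar radiation since last harvest
rain = rng.uniform(0, 10, n)    # proxy: cumulative rainfall since last harvest
noise = rng.normal(size=n)      # an irrelevant column the selector should drop

# Synthetic yield driven by solar and rainfall, plus measurement noise.
yield_lbs = 300 * solar + 500 * rain + rng.normal(0, 100, n)

X = np.column_stack([solar, rain, noise])
print(correlation_select(X, yield_lbs, k=2))  # → [0, 1]
```

Wrapper methods like the one also compared in the paper instead retrain the downstream model for each candidate subset, which is more expensive but can capture feature interactions that per-feature correlations miss.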
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

Article
Remaining Useful Life Prediction Using Temporal Convolution with Attention
AI 2021, 2(1), 48-70; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010005 - 14 Feb 2021
Viewed by 860
Abstract
Prognostic techniques attempt to predict the Remaining Useful Life (RUL) of a subsystem or a component. Such techniques often use sensor data that are periodically measured and recorded as a time series. These multivariate data sets exhibit complex, non-linear inter-dependencies across recorded time steps and between sensors. Many existing prognostic algorithms have started to explore Deep Neural Networks (DNNs) and their effectiveness in the field. Although Deep Learning (DL) techniques outperform traditional prognostic algorithms, the networks are generally complex to deploy and train. This paper proposes a Multi-variable Time Series (MTS)-focused approach to prognostics that implements a lightweight Convolutional Neural Network (CNN) with an attention mechanism. The convolution filters extract abstract temporal patterns from the multiple time series, while the attention mechanism reviews the information across the time axis and selects the relevant information. The results suggest that the proposed method not only produces superior RUL-estimation accuracy but also trains many times faster than previously reported works. Its suitability for deployment is also demonstrated on a lightweight hardware platform, where the network is not only more compact but also more efficient in resource-restricted environments. Full article
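The attention step described above scores each time step, normalises the scores with a softmax, and pools the sequence into a weighted summary. A hedged NumPy sketch of that pooling — hypothetical features and query, not the paper's trained network — makes the mechanism concrete:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

def attention_pool(features, query):
    """Score each time step against a query vector, softmax the scores,
    and return the attention-weighted sum across the time axis."""
    scores = features @ query           # one score per time step, shape (T,)
    weights = softmax(scores)           # how much each step contributes
    return weights, weights @ features  # pooled summary, shape (D,)

# 5 time steps of 3-dim convolutional features; step 3 is made salient,
# mimicking a degradation signature the attention should latch onto.
T, D = 5, 3
feats = np.zeros((T, D))
feats[3] = [4.0, 4.0, 4.0]
query = np.ones(D)

weights, pooled = attention_pool(feats, query)
print(int(np.argmax(weights)))  # → 3
```

In the full model the query itself is learned, and the pooled vector feeds a small regression head that outputs the RUL estimate.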

Article
Testing the Suitability of Automated Machine Learning for Weeds Identification
AI 2021, 2(1), 34-47; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010004 - 09 Feb 2021
Cited by 1 | Viewed by 1025
Abstract
In the past years, several machine-learning-based techniques have emerged to provide effective crop protection. For instance, deep neural networks have been used to identify different types of weeds under different real-world conditions. However, these techniques usually require extensive involvement of experts working iteratively to develop the most suitable machine learning system. To support this task and save resources, a new technique called Automated Machine Learning (AutoML) has started being studied. In this work, a complete open-source AutoML system was evaluated on two different datasets covering the weed identification problem: (i) the Early Crop Weeds dataset and (ii) the Plant Seedlings dataset. Different configurations, such as the use of plant segmentation, the use of classifier ensembles instead of Softmax, and training with noisy data, were compared. The results showed promising performances of 93.8% and 90.74% F1 score, depending on the dataset used. These performances are in line with other related AutoML work, but they fall short of machine-learning-based systems manually fine-tuned by human experts. From these results, it can be concluded that finding a balance between manual expert work and Automated Machine Learning will be an interesting path to pursue in order to increase efficiency in plant protection. Full article
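At its core, AutoML automates the search over model configurations that an expert would otherwise tune by hand. A drastically simplified stand-in — a hand-rolled k-NN classifier on toy two-class "weed vs. crop" feature clouds, with a loop over candidate hyperparameters in place of a real AutoML system — illustrates the search idea:

```python
import numpy as np

def knn_accuracy(X_tr, y_tr, X_te, y_te, k):
    """Plain k-NN: majority vote among the k nearest training points."""
    correct = 0
    for x, label in zip(X_te, y_te):
        d = np.linalg.norm(X_tr - x, axis=1)
        votes = y_tr[np.argsort(d)[:k]]
        correct += (np.bincount(votes).argmax() == label)
    return correct / len(y_te)

rng = np.random.default_rng(1)
# Two synthetic feature clouds standing in for weed/crop image features.
X0 = rng.normal(0, 1, (60, 2))
X1 = rng.normal(3, 1, (60, 2))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 60)

idx = rng.permutation(120)
X_tr, y_tr = X[idx[:80]], y[idx[:80]]
X_te, y_te = X[idx[80:]], y[idx[80:]]

# Tiny automated search: evaluate candidate configurations, keep the best.
best = max(((k, knn_accuracy(X_tr, y_tr, X_te, y_te, k)) for k in (1, 3, 5, 7)),
           key=lambda kv: kv[1])
print(best)
```

Real AutoML systems search over whole pipelines (preprocessing, architecture, ensembling) with far smarter strategies than this grid, but the evaluate-and-keep-the-best loop is the same.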
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

Editorial
Acknowledgment to Reviewers of AI in 2020
AI 2021, 2(1), 32-33; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010003 - 27 Jan 2021
Viewed by 1121
Abstract
Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that AI maintains its standards for the high quality of its published papers [...] Full article
Article
Surface Defect Inspection in Images Using Statistical Patches Fusion and Deeply Learned Features
AI 2021, 2(1), 17-31; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010002 - 17 Jan 2021
Cited by 1 | Viewed by 1193
Abstract
Defect detection in images is a challenging task due to the existence of tiny, noisy patterns on surface images. To tackle this challenge, a defect detection approach using statistical data fusion is proposed in this paper. First, the proposed approach breaks a large image containing multiple separate defects into smaller overlapping patches and detects the existence of defects in each patch using a conventional convolutional neural network. Then, a statistical data fusion approach is proposed to maintain the spatial coherence of cracks in the image and aggregate the information extracted from overlapping patches, enhancing the overall performance and robustness of the system. The proposed approach is evaluated on three benchmark datasets, demonstrating superior performance in terms of both individual patch inspection and whole-image inspection. Full article
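The overlap-and-fuse step can be sketched independently of the CNN: split the image into overlapping patches, score each patch, and average the scores back onto the pixel grid. In this hedged NumPy sketch, a simple per-patch maximum stands in for the CNN's defect score:

```python
import numpy as np

def patches(image, size, stride):
    """Yield (row, col, patch) for overlapping square patches."""
    h, w = image.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            yield i, j, image[i:i + size, j:j + size]

def fuse_scores(image, size, stride, score_fn):
    """Accumulate per-patch scores back onto the image grid and average
    where patches overlap, preserving the spatial coherence of defects."""
    acc = np.zeros(image.shape)
    cnt = np.zeros(image.shape)
    for i, j, p in patches(image, size, stride):
        acc[i:i + size, j:j + size] += score_fn(p)
        cnt[i:i + size, j:j + size] += 1
    return acc / cnt

img = np.zeros((8, 8))
img[4, 2:6] = 1.0  # a thin horizontal "crack"

# Stand-in score: max intensity in the patch (a CNN would go here).
score = fuse_scores(img, size=4, stride=2, score_fn=np.max)
print(score[4, 3] > score[0, 0])  # → True
```

Because every pixel is voted on by several overlapping patches, isolated false positives from a single noisy patch get averaged down, which is the robustness gain the abstract describes.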

Article
Automated Source Code Generation and Auto-Completion Using Deep Learning: Comparing and Discussing Current Language Model-Related Approaches
AI 2021, 2(1), 1-16; https://0-doi-org.brum.beds.ac.uk/10.3390/ai2010001 - 16 Jan 2021
Cited by 1 | Viewed by 1570
Abstract
In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that can be interpreted as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable in applying this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code programmed by humans. Considering the increasing popularity of the deep learning-enabled language models approach, we found a lack of empirical papers that compare different deep learning architectures to create and use language models based on programming code. This paper compares different neural network architectures, such as Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and the Transformer, while using transfer learning and different forms of tokenization to see how they behave in building language models using a Python dataset for code-generation and mask-filling tasks. Considering the results, we discuss each approach's different strengths and weaknesses and what gaps we found to evaluate the language models or to apply them in a real programming context. Full article
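Tokenization — how source code is split into units the model predicts — is one of the axes the paper compares. As a quick illustration using Python's standard-library lexer (not the sub-word/BPE tokenizers such papers typically evaluate), code can be broken into model-ready tokens like so:

```python
import io
import tokenize

source = "def add(a, b):\n    return a + b\n"

# Keep names, operators, and numbers; drop layout tokens
# (NEWLINE, INDENT, DEDENT, ENDMARKER) for a compact stream.
tokens = [
    t.string
    for t in tokenize.generate_tokens(io.StringIO(source).readline)
    if t.type in (tokenize.NAME, tokenize.OP, tokenize.NUMBER)
]
print(tokens[:4])  # → ['def', 'add', '(', 'a']
```

A language model over such a stream is trained to predict the next token (or a masked one) given its context; the choice between lexer-level tokens, sub-word units, and characters changes both vocabulary size and how well the model handles rare identifiers.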
(This article belongs to the Section AI in Autonomous Systems)
