Machine Learning in Electronic and Biomedical Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence Circuits and Systems (AICAS)".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 32636

Special Issue Editors


Prof. Dr. Claudio Turchetti
Guest Editor
Department of Information Engineering - DII, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
Interests: microelectronics; analog and mixed-signal integrated circuits; electronic device modeling; statistical IC design; machine learning signal processing; pattern recognition; bio-signal analysis and classification; system identification; neural networks; stochastic processes

Dr. Laura Falaschetti
Guest Editor
Department of Information Engineering - DII, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
Interests: embedded systems; machine learning; neural networks; pattern recognition; tensor learning; system identification; signal processing; image processing; speech recognition/synthesis; speaker identification; bio-signal analysis and classification

Special Issue Information

Dear Colleagues,

In recent years, machine learning techniques have proven extremely useful in a wide variety of applications, and they are now rapidly gaining interest in both electronics and biomedical engineering.

The Special Issue seeks to collect contributions from researchers involved in developing and using machine learning techniques applied to:

  • Embedded systems for artificial intelligence (AI) applications, in which the interest is focused on implementing these algorithms directly in the devices, thus reducing latency, communication costs, and privacy concerns;
  • Edge computing, where the aim is to process AI algorithms locally on the device, i.e., where the data are generated, by focusing on compression techniques, dimensionality reduction, and parallel computation;
  • Wearable sensors for collecting biological data;
  • Human activity detection as well as the diagnosis and prognosis of patients based on the investigation of data collected from sensors;
  • Intelligent decision systems and automatic computer-aided diagnosis systems for the early detection and classification of diseases;
  • Neuroimaging techniques, such as magnetic resonance imaging, ultrasound imaging, and computed tomography, to aid in the diagnosis and prediction of diseases.

Topics of Interest:

The aim of this Special Issue is to publish original research articles that cover recent advances in the theory and application of machine learning for electronic and biomedical engineering.

The topics of interest include, but are not limited to:

  • Machine learning applications for embedded systems;
  • Machine learning for edge computing;
  • Deep learning model compression and acceleration;
  • Image classification, detection, and semantic segmentation;
  • Machine learning for autonomous driving;
  • Machine learning for agriculture;
  • Machine learning for industry;
  • Deep neural networks for biomedical image processing;
  • Machine learning methods for computer-aided diagnosis;
  • Machine learning-based healthcare applications, such as sensor-based behavior analysis, human activity recognition, disease prediction, biomedical signal processing, and data monitoring.

Prof. Dr. Claudio Turchetti
Dr. Laura Falaschetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning
  • Neural networks
  • Edge computing
  • Sensors for IoT
  • Vision sensors
  • Autonomous driving
  • Medical image classification
  • Computer-aided diagnosis
  • Human activity recognition
  • Biosignals

Published Papers (9 papers)


Editorial


4 pages, 168 KiB  
Editorial
Machine Learning in Electronic and Biomedical Engineering
by Claudio Turchetti and Laura Falaschetti
Electronics 2022, 11(15), 2438; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11152438 - 04 Aug 2022
Viewed by 1491
Abstract
In recent years, machine learning (ML) algorithms have become of paramount importance in computer science research, both in the electronic and biomedical fields [...] Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)

Research


20 pages, 5566 KiB  
Article
Machine Learning-Based Feature Selection and Classification for the Experimental Diagnosis of Trypanosoma cruzi
by Nidiyare Hevia-Montiel, Jorge Perez-Gonzalez, Antonio Neme and Paulina Haro
Electronics 2022, 11(5), 785; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11050785 - 03 Mar 2022
Cited by 5 | Viewed by 1936
Abstract
Chagas disease, caused by the Trypanosoma cruzi (T. cruzi) parasite, is the third most common parasitosis worldwide. Most infected subjects can remain asymptomatic when opportune, early detection is lacking or an objective diagnostic work-up is not conducted. Frequently, the disease manifests itself after a long time, accompanied by severe heart disease or by sudden death. The diagnosis is therefore a complex and challenging process in which several factors must be considered. In this paper, a novel pipeline is presented that integrates temporal data from four modalities (electrocardiography signals, echocardiography images, Doppler spectrum, and ELISA antibody titers) with multiple feature selection analyses, namely a univariate analysis and a machine learning-based selection. The method includes an automatic dichotomous classification of animal status (control vs. infected) based on Random Forest, Extremely Randomized Trees, Decision Trees, and Support Vector Machine classifiers. The most relevant multimodal attributes found were ELISA (IgGT, IgG1, IgG2a), electrocardiography (SR mean, QT and ST intervals), ascending aorta Doppler signals, and echocardiography (left ventricle diameter during diastole). Concerning automatic classification from the selected features, the best accuracy for the control vs. acute infection groups was 93.3 ± 13.3% in cross-validation and 100% in the final test; for the control vs. chronic infection groups, it was 100% in both cases. We conclude that the proposed machine learning-based approach can help obtain a robust and objective diagnosis in early T. cruzi infection stages. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
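The abstract above describes a two-stage feature selection (univariate filtering followed by model-based selection) feeding dichotomous classifiers. As a purely illustrative sketch of that workflow, not the authors' code, the snippet below wires comparable stages together with scikit-learn; the data, feature count, and hyperparameters are placeholders.

```python
# Illustrative sketch (not the authors' code): univariate + model-based feature
# selection followed by control-vs-infected classification, in scikit-learn.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))        # placeholder multimodal features (ECG, echo, Doppler, ELISA)
y = rng.integers(0, 2, size=60)      # placeholder labels: 0 = control, 1 = infected

pipe = Pipeline([
    ("univariate", SelectKBest(f_classif, k=20)),                 # stage 1: univariate filter
    ("model_based", SelectFromModel(                              # stage 2: tree-based selection
        ExtraTreesClassifier(n_estimators=200, random_state=0))),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(pipe, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Swapping the final classifier for an SVM is a one-line change:
pipe.set_params(clf=SVC(kernel="rbf"))
```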

13 pages, 916 KiB  
Article
Bidimensional and Tridimensional Poincaré Maps in Cardiology: A Multiclass Machine Learning Study
by Leandro Donisi, Carlo Ricciardi, Giuseppe Cesarelli, Armando Coccia, Federica Amitrano, Sarah Adamo and Giovanni D’Addio
Electronics 2022, 11(3), 448; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11030448 - 02 Feb 2022
Cited by 14 | Viewed by 1914
Abstract
Heart rate is a nonstationary signal, and its variation may contain indicators of current disease or warnings about impending cardiac diseases. Hence, heart rate variation analysis has become a noninvasive tool to further study the activities of the autonomic nervous system. In this scenario, Poincaré plot analysis has proven to be a valuable tool to support the diagnosis of cardiac diseases. The study's aim is a preliminary exploration of the feasibility of machine learning to classify subjects belonging to five cardiac states (healthy, hypertension, myocardial infarction, congestive heart failure and heart transplanted) using ten unconventional quantitative parameters extracted from bidimensional and three-dimensional Poincaré maps. The KNIME Analytics Platform was used to implement several machine learning algorithms: Gradient Boosting, Adaptive Boosting, k-Nearest Neighbor and Naïve Bayes. Accuracy, sensitivity and specificity were computed to assess the performances of the predictive models using leave-one-out cross-validation. The Synthetic Minority Oversampling Technique (SMOTE) was applied beforehand for data augmentation, considering the small size of the dataset and the number of features. Feature importance, ranked on the basis of the Information Gain values, was computed. Preliminarily, a univariate statistical analysis was performed through a one-way Kruskal–Wallis test plus post hoc analysis for all the features. The machine learning analysis achieved interesting results in terms of evaluation metrics, as demonstrated by Adaptive Boosting and k-Nearest Neighbor (accuracies greater than 90%). Gradient Boosting and k-Nearest Neighbor even reached a 100% score in sensitivity and specificity, respectively. The most important features according to information gain are in line with the results obtained from the statistical analysis, confirming their predictive power. The study shows that the proposed combination of unconventional features extracted from Poincaré maps and well-known machine learning algorithms represents a valuable approach to automatically classify patients with different cardiac diseases. Future investigations on enriched datasets will further confirm the potential application of this methodology in diagnostics. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
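As a rough illustration of this kind of pipeline, the sketch below computes the conventional Poincaré descriptors SD1 and SD2 from RR-interval series (standing in for the paper's ten unconventional parameters), applies SMOTE oversampling, and evaluates AdaBoost and k-NN classifiers; the data and settings are placeholders, not the study's actual features.

```python
# Illustrative sketch: conventional Poincaré descriptors (SD1/SD2) from RR intervals,
# SMOTE oversampling, and multiclass classification - placeholders throughout.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def poincare_sd(rr):
    """SD1/SD2 of the Poincaré plot built from successive RR intervals (ms)."""
    x, y = rr[:-1], rr[1:]
    sd1 = np.sqrt(np.var(y - x) / 2.0)   # dispersion perpendicular to the identity line
    sd2 = np.sqrt(np.var(y + x) / 2.0)   # dispersion along the identity line
    return sd1, sd2

rng = np.random.default_rng(0)
rr_series = rng.normal(800, 50, size=(50, 300))   # placeholder: 50 subjects x 300 RR intervals
labels = rng.integers(0, 5, size=50)              # placeholder: 5 cardiac classes

X = np.array([poincare_sd(rr) for rr in rr_series])
X_res, y_res = SMOTE(k_neighbors=3, random_state=0).fit_resample(X, labels)

for clf in (AdaBoostClassifier(random_state=0), KNeighborsClassifier(n_neighbors=5)):
    acc = cross_val_score(clf, X_res, y_res, cv=5).mean()
    print(type(clf).__name__, f"accuracy: {acc:.3f}")
```

Note that oversampling before cross-validation, as done here for brevity, leaks information between folds; the paper's leave-one-out protocol would be the safer evaluation.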

20 pages, 7985 KiB  
Article
A Novel Hybrid Approach Based on Deep CNN to Detect Glaucoma Using Fundus Imaging
by Rabbia Mahum, Saeed Ur Rehman, Ofonime Dominic Okon, Amerah Alabrah, Talha Meraj and Hafiz Tayyab Rauf
Electronics 2022, 11(1), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11010026 - 22 Dec 2021
Cited by 46 | Viewed by 3990
Abstract
Glaucoma is an eye disease caused by increased fluid pressure in the eye, which damages the optic nerve and causes partial or complete vision loss. As glaucoma manifests in later stages and progresses slowly, detailed screening and analysis of retinal images is required to avoid vision loss. This study aims to detect glaucoma at early stages with the help of deep learning-based feature extraction. Retinal fundus images are utilized for the training and testing of the proposed model. In the first step, images are pre-processed before the region of interest (ROI) is extracted by segmentation. Then, features of the optic disc (OD) are extracted from the images containing the optic cup (OC) utilizing hybrid feature descriptors, i.e., a convolutional neural network (CNN), local binary patterns (LBP), histogram of oriented gradients (HOG), and speeded up robust features (SURF). Low-level features are extracted using HOG, texture features using the LBP and SURF descriptors, and high-level features using the CNN. Additionally, a feature selection and ranking technique, the MR-MR (minimum redundancy–maximum relevance) method, is employed to select the most representative features. In the end, multi-class classifiers, i.e., support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN), are employed for the classification of fundus images as healthy or diseased. To assess the performance of the proposed system, various experiments were performed using combinations of the aforementioned algorithms; they show that the proposed model based on the RF algorithm with the HOG, CNN, LBP, and SURF feature descriptors provides up to 99% accuracy on benchmark datasets and 98.8% on k-fold cross-validation for the early detection of glaucoma. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
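The hybrid-descriptor idea can be sketched with off-the-shelf tools. The snippet below concatenates HOG features and an LBP histogram (omitting the CNN and SURF branches) and classifies with a random forest; the images, parameters, and labels are placeholders rather than the paper's actual pipeline.

```python
# Illustrative sketch: hand-crafted hybrid features (HOG + LBP histogram) from fundus
# image crops, classified with a random forest. CNN and SURF branches are omitted.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hybrid_features(img):
    """Concatenate HOG and uniform-LBP histogram features of a grayscale ROI."""
    h = hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    img_u8 = (img * 255).astype(np.uint8)                     # LBP expects integer intensities
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

rng = np.random.default_rng(0)
images = rng.random(size=(40, 128, 128))     # placeholder ROIs around the optic disc
labels = rng.integers(0, 2, size=40)         # placeholder: 0 = healthy, 1 = glaucoma

X = np.array([hybrid_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```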

16 pages, 3851 KiB  
Article
Predicting Regional Outbreaks of Hepatitis A Using 3D LSTM and Open Data in Korea
by Kwangok Lee, Munkyu Lee and Inseop Na
Electronics 2021, 10(21), 2668; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10212668 - 31 Oct 2021
Cited by 4 | Viewed by 1671
Abstract
In 2020 and 2021, humanity lived in fear due to the COVID-19 pandemic. However, with the development of artificial intelligence technology, mankind is attempting to tackle the many challenges posed by currently unpredictable epidemics. Korean society has been exposed to various infectious diseases since the Korean War in 1950, and to overcome them, the six most serious diseases in category I of the National Notifiable Infectious Diseases (NNIDs) were defined. Although most infectious diseases have been overcome, viral hepatitis A has been on the rise in Korean society since 2010. Therefore, in this paper, regional outbreaks of viral hepatitis A, which is spreading rapidly in Korean society, were predicted using a deep learning technique and publicly available datasets. For this study, we gathered information from five organizations under the open data policy: the Korea Centers for Disease Control and Prevention (KCDC), the National Institute of Environmental Research (NIER), the Korea Meteorological Administration (KMA), the Public Open Data Portal, and the Korea Environment Corporation (KECO). Patient, water environment, weather, population, and air pollution information was acquired, and correlations were identified. Next, epidemic outbreak prediction was performed using data preprocessing and a 3D LSTM. The experimental results were compared with those of various machine learning methods in terms of RMSE. In this paper, we attempted to predict regional epidemic outbreaks of hepatitis A by linking the open data environment with deep learning. It is expected that the experimental process and results will help demonstrate the importance and usefulness of establishing an open data environment. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
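A plain sequence-to-one LSTM regressor illustrates the forecasting setup; it is a simplified stand-in for the paper's 3D LSTM, with synthetic tensors in place of the KCDC/NIER/KMA/KECO data and RMSE as the evaluation metric.

```python
# Illustrative sketch: multivariate sequence-to-one forecasting of weekly case counts
# with a standard LSTM (a simplification of the paper's 3D LSTM), synthetic data only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 12, 6            # 12 past weeks; weather, water, air, population, ... (placeholders)
rng = np.random.default_rng(0)
X = rng.random(size=(200, timesteps, n_features)).astype("float32")
y = rng.random(size=(200, 1)).astype("float32")   # next-week case count (scaled, placeholder)

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2, verbose=0)

rmse = model.evaluate(X, y, verbose=0)[1]         # RMSE, the comparison metric used in the paper
print(f"training RMSE: {rmse:.4f}")
```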

19 pages, 1956 KiB  
Article
A Comprehensive Analysis of Deep Neural-Based Cerebral Microbleeds Detection System
by Maria Anna Ferlin, Michał Grochowski, Arkadiusz Kwasigroch, Agnieszka Mikołajczyk, Edyta Szurowska, Małgorzata Grzywińska and Agnieszka Sabisz
Electronics 2021, 10(18), 2208; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10182208 - 09 Sep 2021
Cited by 10 | Viewed by 2197
Abstract
Machine learning-based systems are gaining interest in the field of medicine, mostly in medical imaging and diagnosis. In this paper, we address the problem of automatic cerebral microbleed (CMB) detection in magnetic resonance images. It is challenging due to the difficulty of distinguishing a true CMB from its mimics; however, if successfully solved, it would streamline radiologists' work. To deal with this complex three-dimensional problem, we propose a machine learning approach based on a 2D Faster RCNN network. We aimed to achieve a reliable system, i.e., one with balanced sensitivity and precision. Therefore, we researched and analysed, among other factors, the impact of the way the training data are provided to the system, their pre-processing, the choice of the model and its structure, and the means of regularisation. Furthermore, we carefully analysed the network predictions and proposed an algorithm for their post-processing. The proposed approach achieved high precision (89.74%), sensitivity (92.62%), and F1 score (90.84%). The paper presents the main challenges connected with automatic cerebral microbleed detection, a deep analysis of the problem, and the developed system. The conducted research may significantly contribute to automatic medical diagnosis. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
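The detection backbone mentioned in the abstract, a 2D Faster R-CNN, is available off the shelf in torchvision. The sketch below only shows how such a model might be instantiated for a single foreground class ("CMB"); the training loop, pre-processing, and the authors' post-processing algorithm are omitted.

```python
# Illustrative sketch: a 2D Faster R-CNN configured for one foreground class ("CMB"),
# as a starting point for slice-wise detection; training and post-processing omitted.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2                                              # background + cerebral microbleed
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    # One fake single-channel MR slice replicated to 3 channels (placeholder input).
    mr_slice = torch.rand(1, 512, 512).repeat(3, 1, 1)
    predictions = model([mr_slice])                          # list of dicts: boxes, labels, scores
    print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)
```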

13 pages, 2235 KiB  
Article
Modeling Radio-Frequency Devices Based on Deep Learning Technique
by Zhimin Guan, Peng Zhao, Xianbing Wang and Gaofeng Wang
Electronics 2021, 10(14), 1710; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10141710 - 16 Jul 2021
Cited by 7 | Viewed by 2733
Abstract
An advanced method of modeling radio-frequency (RF) devices based on a deep learning technique is proposed for the accurate prediction of S parameters. The S parameters of RF devices calculated by full-wave electromagnetic solvers, along with the metallic geometry of the structure and the permittivity and thickness of the dielectric layers, are used partly as training and partly as testing data for the deep learning structure. To implement the training procedure efficiently, a novel method for selecting training data that considers critical points is introduced. In order to rapidly and accurately map the geometrical parameters of the RF devices to the S parameters, deep neural networks are used to establish the multiple non-linear transforms. The hidden layers of the neural networks are adaptively chosen based on the frequency response of the RF devices to guarantee the accuracy of the generated model. The Adam optimization algorithm is utilized to accelerate training. With the established deep learning model of a parameterized device, the S parameters can be obtained efficiently when the device's geometrical parameters change. Compared with the traditional modeling method that uses shallow neural networks, the proposed method achieves better accuracy, especially when the training data are non-uniform. Three RF devices, including a rectangular inductor, an interdigital capacitor, and two coupled transmission lines, are used for building and verifying the deep neural network. It is shown that the deep neural network has good robustness and excellent generalization ability. Even for a very wide frequency band (0–100 GHz), the maximum relative error of the coupled transmission lines obtained with the proposed method is below 3%. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
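As a toy illustration of the geometry-to-S-parameter mapping (not the paper's adaptive architecture or critical-point data selection), the sketch below fits a small fully connected network with the Adam optimizer to synthetic samples; the input and output dimensions are assumptions.

```python
# Illustrative sketch: fully connected regressor mapping (geometry params, frequency)
# to real/imaginary parts of two S parameters; synthetic data, fixed architecture.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.random(size=(1000, 5)).astype("float32")   # e.g. width, spacing, thickness, permittivity, frequency
y = rng.random(size=(1000, 4)).astype("float32")   # Re/Im of S11 and S21 (placeholders)

model = keras.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(4),                                # S-parameter outputs
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)

pred = model.predict(X, verbose=0)
print("max relative error on placeholder data:",
      float(np.max(np.abs(pred - y) / (np.abs(y) + 1e-8))))
```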

14 pages, 2150 KiB  
Article
GateRL: Automated Circuit Design Framework of CMOS Logic Gates Using Reinforcement Learning
by Hyoungsik Nam, Young-In Kim, Jina Bae and Junhee Lee
Electronics 2021, 10(9), 1032; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10091032 - 26 Apr 2021
Cited by 3 | Viewed by 4897
Abstract
This paper proposes GateRL, an automated circuit design framework for CMOS logic gates based on reinforcement learning. Because there are constraints on the connection of circuit elements, an action masking scheme is employed. This also reduces the size of the action space, leading to an improvement in learning speed. GateRL consists of an agent for the action and an environment for the state, mask, and reward. The state and reward are generated from a connection matrix that describes the current circuit configuration, and the mask is obtained from a masking matrix based on the constraints and the current connection matrix. The action is produced by the agent's deep Q-network, composed of four fully connected layers. In particular, separate replay buffers are devised for success transitions and failure transitions to expedite the training process. The proposed network is trained with 2 inputs, 1 output, 2 NMOS transistors, and 2 PMOS transistors to design all the target logic gates, such as buffer, inverter, AND, OR, NAND, and NOR. Consequently, GateRL outputs a one-transistor buffer, a two-transistor inverter, a two-transistor AND, a two-transistor OR, a three-transistor NAND, and a three-transistor NOR. The operation of these resulting logic circuits is verified by SPICE simulation. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
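The action masking scheme mentioned in the abstract can be shown in isolation: actions forbidden by the constraints receive -inf Q-values before the greedy selection, so they can never be chosen. The network shape, state encoding, and mask below are placeholders, not GateRL's actual design.

```python
# Illustrative sketch: masked greedy action selection for a DQN agent.
# Invalid actions get -inf Q-values so argmax can never pick them.
import torch
import torch.nn as nn

n_state, n_actions = 32, 16                      # placeholder sizes, not GateRL's encoding

q_net = nn.Sequential(                           # 4 fully connected layers, as in the abstract
    nn.Linear(n_state, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)

state = torch.rand(1, n_state)                               # flattened connection-matrix state (placeholder)
mask = torch.randint(0, 2, (1, n_actions), dtype=torch.bool) # True = action allowed by the constraints

with torch.no_grad():
    q_values = q_net(state)
    q_values = q_values.masked_fill(~mask, float("-inf"))    # action masking
    action = int(q_values.argmax(dim=1))
print("selected action:", action)
```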

Review


19 pages, 375 KiB  
Review
Bringing Emotion Recognition Out of the Lab into Real Life: Recent Advances in Sensors and Machine Learning
by Stanisław Saganowski
Electronics 2022, 11(3), 496; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11030496 - 08 Feb 2022
Cited by 33 | Viewed by 7738
Abstract
Bringing emotion recognition (ER) out of the controlled laboratory setup into everyday life can enable applications targeted at a broader population, e.g., helping people with psychological disorders, assisting kids with autism, monitoring the elderly, and general improvement of well-being. This work reviews progress in sensors and machine learning methods and techniques that have made it possible to move ER from the lab to the field in recent years. In particular, the commercially available sensors collecting physiological data, signal processing techniques, and deep learning architectures used to predict emotions are discussed. A survey on existing systems for recognizing emotions in real-life scenarios—their possibilities, limitations, and identified problems—is also provided. The review is concluded with a debate on what challenges need to be overcome in the domain in the near future. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
