Machine Learning in Sensors and Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (10 January 2022) | Viewed by 60806

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor


Dr. Hyoungsik Nam
Guest Editor
Department of Information Display, Kyung Hee University, Seoul, Korea
Interests: display electronics; low power; machine learning; UI/UX

Special Issue Information

Dear Colleagues,

Machine learning keeps extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. In particular, because data are required to build machine learning networks, sensors are one of the most important enabling technologies. In turn, machine learning networks can contribute to improving sensor performance and to creating new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging.

Topics of interest may include but are not limited to the following:

  • Machine learning for improving sensor performance;
  • New sensor applications using machine learning;
  • Machine-learning-based HCI;
  • Machine-learning-based localization and object tracking;
  • Machine learning for sensor signal processing;
  • Machine-learning-based analysis of big data collected from sensors.

Dr. Hyoungsik Nam
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (15 papers)

Research

17 pages, 5432 KiB  
Article
Computer Vision-Based Path Planning for Robot Arms in Three-Dimensional Workspaces Using Q-Learning and Neural Networks
by Ali Abdi, Mohammad Hassan Ranjbar and Ju Hong Park
Sensors 2022, 22(5), 1697; https://0-doi-org.brum.beds.ac.uk/10.3390/s22051697 - 22 Feb 2022
Cited by 19 | Viewed by 5303
Abstract
Computer vision-based path planning can play a crucial role in numerous technologically driven smart applications. Although various path planning methods have been proposed, limitations, such as unreliable three-dimensional (3D) localization of objects in a workspace, time-consuming computational processes, and limited two-dimensional workspaces, remain. Studies to address these problems have achieved some success, but many of these problems persist. Therefore, in this study, which is an extension of our previous paper, a novel path planning approach that combined computer vision, Q-learning, and neural networks was developed to overcome these limitations. The proposed computer vision-neural network algorithm was fed by two images from two views to obtain accurate spatial coordinates of objects in real time. Next, Q-learning was used to determine a sequence of simple actions: up, down, left, right, backward, and forward, from the start point to the target point in a 3D workspace. Finally, a trained neural network was used to determine a sequence of joint angles according to the identified actions. Simulation and experimental test results revealed that the proposed combination of 3D object detection, an agent-environment interaction in the Q-learning phase, and simple joint angle computation by trained neural networks considerably alleviated the limitations of previous studies.
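
The authors' pipeline is not reproduced here; the following is only a minimal sketch of the tabular Q-learning stage described in the abstract, run on a coarse discretized 3D workspace. The grid size, rewards, and start/target cells are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = (6, 6, 6)                                   # assumed discretization of the 3D workspace
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),       # right, left, up,
           (0, -1, 0), (0, 0, 1), (0, 0, -1)]      # down, forward, backward
START, TARGET = (0, 0, 0), (5, 5, 5)               # illustrative start/target cells

Q = np.zeros(GRID + (len(ACTIONS),))               # tabular Q-values per (state, action)
alpha, gamma, eps = 0.1, 0.95, 0.2                 # learning rate, discount, exploration rate

def step(state, a):
    """Apply one action, clip to the workspace, and return (next_state, reward, done)."""
    nxt = tuple(int(v) for v in np.clip(np.add(state, ACTIONS[a]), 0, np.array(GRID) - 1))
    if nxt == TARGET:
        return nxt, 100.0, True                    # reaching the target ends the episode
    return nxt, -1.0, False                        # small step penalty favors short paths

for episode in range(2000):
    s = START
    for _ in range(500):                           # cap episode length
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[s][a])   # Q-learning update
        s = nxt
        if done:
            break

# Greedy rollout: the action sequence that would be handed to the joint-angle network.
s, path = START, [START]
for _ in range(200):
    s, _, done = step(s, int(np.argmax(Q[s])))
    path.append(s)
    if done:
        break
print(path)
```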

20 pages, 1462 KiB  
Article
Comparing Sampling Strategies for Tackling Imbalanced Data in Human Activity Recognition
by Fayez Alharbi, Lahcen Ouarbya and Jamie A Ward
Sensors 2022, 22(4), 1373; https://0-doi-org.brum.beds.ac.uk/10.3390/s22041373 - 11 Feb 2022
Cited by 12 | Viewed by 2936
Abstract
Human activity recognition (HAR) using wearable sensors is an increasingly active research topic in machine learning, aided in part by the ready availability of detailed motion capture data from smartphones, fitness trackers, and smartwatches. The goal of HAR is to use such devices to assist users in their daily lives in application areas such as healthcare, physical therapy, and fitness. One of the main challenges for HAR, particularly when using supervised learning methods, is obtaining balanced data for algorithm optimisation and testing. As people perform some activities more than others (e.g., walk more than run), HAR datasets are typically imbalanced. The lack of dataset representation from minority classes hinders the ability of HAR classifiers to sufficiently capture new instances of those activities. We introduce three novel hybrid sampling strategies to generate more diverse synthetic samples to overcome the class imbalance problem. The first strategy, which we call the distance-based method (DBM), combines the Synthetic Minority Oversampling Technique (SMOTE) with Random_SMOTE, both of which are built around the k-nearest neighbors (KNN) algorithm. The second technique, referred to as the noise detection-based method (NDBM), combines SMOTE Tomek links (SMOTE_Tomeklinks) and the modified synthetic minority oversampling technique (MSMOTE). The third approach, which we call the cluster-based method (CBM), combines Cluster-Based Synthetic Oversampling (CBSO) and the Proximity Weighted Synthetic Oversampling Technique (ProWSyn). We compare the performance of the proposed hybrid methods to the individual constituent methods and a baseline using accelerometer data from three commonly used benchmark datasets. We show that DBM, NDBM, and CBM reduce the impact of class imbalance and enhance F1 scores by 9–20 percentage points compared to their constituent sampling methods. CBM performs significantly better than the others under a Friedman test; however, DBM has lower computational requirements.
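
The specific variants combined in DBM, NDBM, and CBM (Random_SMOTE, MSMOTE, CBSO, ProWSyn) are not all available in the standard imbalanced-learn package, so the sketch below only illustrates the general idea of a hybrid strategy by pooling synthetic samples from two readily available samplers (SMOTE and SMOTE-Tomek) on synthetic data standing in for accelerometer features.

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTETomek

# Synthetic imbalanced "activity" data standing in for windowed accelerometer features.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=3, weights=[0.8, 0.15, 0.05], random_state=0)
print("original class counts:", Counter(y))

# Two constituent resampling methods (stand-ins for the variants used in the paper).
X_a, y_a = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
X_b, y_b = SMOTETomek(random_state=0).fit_resample(X, y)

# A simple "hybrid" strategy: pool the outputs so a classifier sees synthetic
# minority samples generated by both methods, increasing sample diversity.
X_hybrid = np.vstack([X_a, X_b])
y_hybrid = np.concatenate([y_a, y_b])
print("hybrid class counts:", Counter(y_hybrid))
```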

15 pages, 758 KiB  
Article
Fuzzy Overclustering: Semi-Supervised Classification of Fuzzy Labels with Overclustering and Inverse Cross-Entropy
by Lars Schmarje, Johannes Brünger, Monty Santarossa, Simon-Martin Schröder, Rainer Kiko and Reinhard Koch
Sensors 2021, 21(19), 6661; https://0-doi-org.brum.beds.ac.uk/10.3390/s21196661 - 07 Oct 2021
Cited by 10 | Viewed by 2655
Abstract
Deep learning has been successfully applied to many classification problems including underwater challenges. However, a long-standing issue with deep learning is the need for large and consistently labeled datasets. Although current approaches in semi-supervised learning can decrease the required amount of annotated data by a factor of 10 or even more, this line of research still uses distinct classes. For underwater classification, and uncurated real-world datasets in general, clean class boundaries often cannot be given due to the limited information content of the images and the transitional stages of the depicted objects. This leads to different experts having different opinions and thus producing fuzzy labels, which could also be considered ambiguous or divergent. We propose a novel framework for handling semi-supervised classification of such fuzzy labels. It is based on the idea of overclustering to detect substructures in these fuzzy labels. We propose a novel loss to improve the overclustering capability of our framework and show the benefit of overclustering for fuzzy labels. We show that our framework is superior to previous state-of-the-art semi-supervised methods when applied to real-world plankton data with fuzzy labels. Moreover, we acquire 5 to 10% more consistent predictions of substructures.
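
The full framework (semi-supervised training with an inverse cross-entropy loss) is beyond a short snippet; the sketch below only illustrates the core overclustering idea, clustering with more clusters than nominal classes and inspecting cluster purity to surface substructures, using k-means on synthetic data as a stand-in for learned image embeddings.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Synthetic stand-in for image embeddings: 3 nominal classes, two of them spread out ("fuzzy").
X, y = make_blobs(n_samples=900, centers=3, cluster_std=[1.0, 2.5, 2.5], random_state=0)

N_CLASSES, N_OVERCLUSTERS = 3, 9                  # deliberately more clusters than classes
km = KMeans(n_clusters=N_OVERCLUSTERS, n_init=10, random_state=0).fit(X)

# Map each overcluster to the majority class among its members and report its purity;
# impure overclusters indicate substructures where labels are fuzzy.
for c in range(N_OVERCLUSTERS):
    members = y[km.labels_ == c]
    counts = np.bincount(members, minlength=N_CLASSES)
    print(f"overcluster {c}: majority class {counts.argmax()}, "
          f"purity {counts.max() / counts.sum():.2f}")
```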

16 pages, 2528 KiB  
Article
Simultaneous Burr and Cut Interruption Detection during Laser Cutting with Neural Networks
by Benedikt Adelmann and Ralf Hellmann
Sensors 2021, 21(17), 5831; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175831 - 30 Aug 2021
Cited by 5 | Viewed by 2097
Abstract
In this contribution, we compare basic neural networks with convolutional neural networks for cut failure classification during fiber laser cutting. The experiments are performed by cutting thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for the classification. The quality is grouped into the categories of good cuts, cuts with burr formation, and cut interruptions. Indeed, our results reveal that both cut failures can be detected with one system. Independent of the neural network design and size, a minimum classification accuracy of 92.8% is achieved, which could be increased with more complex networks to 95.8%. Thus, convolutional neural networks show a slight performance advantage over basic neural networks, albeit at a higher calculation time, which nevertheless remains below 2 ms. In a separate examination, cut interruptions can be detected with much higher accuracy than burr formation. Overall, the results demonstrate the possibility of detecting burr formation and cut interruptions simultaneously during laser cutting with high accuracy, as is desirable for industrial applications.
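
The network architectures, image resolution, and camera data used in the paper are not reproduced; the following is a minimal PyTorch sketch of a three-class classifier (good cut, burr formation, cut interruption) trained on random tensors standing in for coaxial camera frames.

```python
import torch
import torch.nn as nn

# Tiny CNN for three quality classes: good cut, burr formation, cut interruption.
# The input size (1, 64, 64) is an assumption standing in for the coaxial camera frames.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

images = torch.randn(32, 1, 64, 64)            # dummy batch of camera images
labels = torch.randint(0, 3, (32,))            # dummy quality labels
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5):                          # a few dummy training steps
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```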

17 pages, 4042 KiB  
Article
Machine Learning for Sensorless Temperature Estimation of a BLDC Motor
by Dariusz Czerwinski, Jakub Gęca and Krzysztof Kolano
Sensors 2021, 21(14), 4655; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144655 - 07 Jul 2021
Cited by 19 | Viewed by 4529
Abstract
In this article, the authors propose two models for BLDC motor winding temperature estimation using machine learning methods. For the purposes of the research, measurements were made over 160 h of motor operation and then preprocessed. The algorithms of linear regression, ElasticNet, stochastic gradient descent regressor, support vector machines, decision trees, and AdaBoost were used for predictive modeling. The ability of the models to generalize was achieved by hyperparameter tuning with the use of cross-validation. The conducted research led to promising winding temperature estimation accuracy. In the case of sensorless temperature prediction (model 1), the mean absolute percentage error (MAPE) was below 4.5% and the coefficient of determination (R2) was above 0.909. In addition, extending the model with a temperature measurement on the casing (model 2) allowed the error to be reduced to about 1% and R2 to be increased to 0.990. The results obtained for the first proposed model show that overheating protection of the motor can be ensured without direct temperature measurement. In addition, the introduction of a simple casing temperature measurement system allows for estimation with an accuracy suitable for compensating temperature-related changes in motor output torque.
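
The measured features and hyperparameter grids are not reproduced; the sketch below only shows the general pattern the abstract describes, several scikit-learn regressors tuned by cross-validated grid search, on synthetic data standing in for the preprocessed measurements. The grids and data ranges are assumptions.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, SGDRegressor
from sklearn.svm import SVR
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_percentage_error, r2_score

# Synthetic stand-in for the preprocessed drive signals and the winding temperature target,
# shifted to a positive "temperature-like" range so that MAPE is meaningful.
X, y = make_regression(n_samples=3000, n_features=8, noise=5.0, random_state=0)
y = y - y.min() + 20.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "elasticnet": (ElasticNet(max_iter=5000), {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
    "sgd": (SGDRegressor(max_iter=2000), {"alpha": [1e-4, 1e-3]}),
    "svr": (SVR(), {"C": [1, 10], "epsilon": [0.1, 1.0]}),
    "adaboost": (AdaBoostRegressor(random_state=0), {"n_estimators": [50, 200]}),
}

# Tune each regressor with cross-validated grid search and report test-set MAPE and R2.
for name, (est, grid) in candidates.items():
    search = GridSearchCV(est, grid, cv=5, scoring="neg_mean_absolute_error")
    search.fit(X_tr, y_tr)
    pred = search.predict(X_te)
    print(f"{name:10s} MAPE={mean_absolute_percentage_error(y_te, pred):.3f} "
          f"R2={r2_score(y_te, pred):.3f}")
```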

13 pages, 7622 KiB  
Article
Constrained Multiple Planar Reconstruction for Automatic Camera Calibration of Intelligent Vehicles
by Sang Jun Lee, Jae-Woo Lee, Wonju Lee and Cheolhun Jang
Sensors 2021, 21(14), 4643; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144643 - 06 Jul 2021
Cited by 1 | Viewed by 1830
Abstract
In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to deal with unpredictable mechanical changes or variations in weight load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential to implement high-level functions in intelligent vehicles such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point problem, require laborious work to measure the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method to recover 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces and, finally, significantly reduces errors in the camera extrinsic parameters. Experiments were conducted in synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.
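
The joint multi-plane optimization itself is more involved than a snippet; the sketch below illustrates only its core ingredient, refining extrinsic parameters (rotation vector and translation) by minimizing reprojection error with scipy.optimize.least_squares on synthetic 3D–2D correspondences. The intrinsics and point layout are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0],        # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, rvec, tvec):
    """Project world points into the image with a pinhole model."""
    cam = points_3d @ Rotation.from_rotvec(rvec).as_matrix().T + tvec
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Synthetic scene: points on a vertical plane in front of the camera, observed
# under a "true" extrinsic pose that the optimization then tries to recover.
rng = np.random.default_rng(0)
pts_world = np.column_stack([rng.uniform(-2, 2, 40), rng.uniform(0, 2, 40), np.full(40, 8.0)])
rvec_true, tvec_true = np.array([0.02, -0.01, 0.005]), np.array([0.1, -1.4, 0.0])
observed = project(pts_world, rvec_true, tvec_true) + rng.normal(0, 0.3, (40, 2))

def residuals(params):
    """Stacked reprojection errors for the current extrinsic estimate."""
    return (project(pts_world, params[:3], params[3:]) - observed).ravel()

result = least_squares(residuals, np.zeros(6))    # start from an identity pose
print("estimated rvec:", result.x[:3])
print("estimated tvec:", result.x[3:])
```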

19 pages, 5550 KiB  
Article
Piston Error Measurement for Segmented Telescopes with an Artificial Neural Network
by Dan Yue, Yihao He and Yushuang Li
Sensors 2021, 21(10), 3364; https://0-doi-org.brum.beds.ac.uk/10.3390/s21103364 - 12 May 2021
Cited by 3 | Viewed by 1864
Abstract
A piston error detection method is proposed based on the broadband intensity distribution on the image plane using a back-propagation (BP) artificial neural network. By setting a mask with a sparse circular clear multi-subaperture configuration in the exit pupil plane of a segmented telescope to fragment the pupil, the relation between the piston error of the segments and the amplitude of the modulation transfer function (MTF) sidelobes is strictly derived according to Fourier optics principles. Then, the BP artificial neural network is utilized to establish the mapping between them, where the amplitudes of the MTF sidelobes calculated directly from the theoretical relationship and the introduced piston errors are used as inputs and outputs, respectively, to train the network. With the well-trained network, the piston errors are measured with good precision using a single in-focus broadband image without defocus division as input, and a capture range reaching the coherence length of the broadband light is achieved. Simulations demonstrate the effectiveness and accuracy of the proposed method; the results show that the trained network has high measurement accuracy, a wide detection range, good noise immunity, and good generalization ability. This method provides a feasible and easily implemented way to measure piston error and can simultaneously detect the multiple piston errors of the entire aperture of the segmented telescope.
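
The derived relation between MTF sidelobe amplitudes and piston errors is not reproduced; the sketch below substitutes an assumed smooth nonlinear mapping purely to illustrate the BP-network regression step with scikit-learn's MLPRegressor.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: piston errors of 6 segments (targets) and 12 "MTF sidelobe amplitudes"
# (inputs) generated from an assumed smooth monotonic relation, not the paper's derivation.
pistons = rng.uniform(-1.0, 1.0, size=(3000, 6))
mixing = np.abs(rng.normal(size=(6, 12)))
sidelobes = np.sin(0.5 * np.pi * pistons) @ mixing + 0.01 * rng.normal(size=(3000, 12))

X_tr, X_te, y_tr, y_te = train_test_split(sidelobes, pistons, test_size=0.2, random_state=0)

# Back-propagation network mapping sidelobe amplitudes -> multiple piston errors at once.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1500, random_state=0)
net.fit(X_tr, y_tr)
print("test R^2:", round(net.score(X_te, y_te), 3))
```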

14 pages, 4607 KiB  
Article
Object Detection Combining CNN and Adaptive Color Prior Features
by Peng Gu, Xiaosong Lan and Shuxiao Li
Sensors 2021, 21(8), 2796; https://0-doi-org.brum.beds.ac.uk/10.3390/s21082796 - 15 Apr 2021
Cited by 6 | Viewed by 2471
Abstract
Compared with traditional manually designed methods, convolutional neural networks have the advantages of strong expressive ability and insensitivity to scale, illumination, and deformation, so they have become the mainstream approach in the object detection field. In order to further improve the accuracy of existing object detection methods based on convolutional neural networks, this paper draws on the characteristics of the attention mechanism to model color priors. Firstly, it proposes a cognitive-driven color prior model to obtain the color prior features for the known types of target samples and the overall scene, respectively. Subsequently, the acquired color prior features and the color features of the test image are adaptively weighted and compete to produce prior-based saliency images. Finally, the obtained saliency images are treated as feature maps and further fused with those extracted by the convolutional neural network to complete the subsequent object detection task. The proposed algorithm does not need training parameters, has strong generalization ability, and is fused directly with convolutional neural network features at the feature extraction stage, and thus has strong versatility. Experiments on the VOC2007 and VOC2012 benchmark data sets show that the utilization of cognitive-driven color priors can further improve the performance of existing object detection algorithms.

25 pages, 25970 KiB  
Article
An Efficient Plaintext-Related Chaotic Image Encryption Scheme Based on Compressive Sensing
by Zhen Li, Changgen Peng, Weijie Tan and Liangrong Li
Sensors 2021, 21(3), 758; https://0-doi-org.brum.beds.ac.uk/10.3390/s21030758 - 23 Jan 2021
Cited by 26 | Viewed by 3362
Abstract
With the development of mobile communication networks, especially 5G today and 6G in the future, the security and privacy of digital images are important in network applications. Meanwhile, high-resolution images take up considerable bandwidth and storage space in cloud applications. To meet these demands, an efficient and secure plaintext-related chaotic image encryption scheme is proposed based on compressive sensing, achieving compression and encryption simultaneously. In the proposed scheme, the internal keys that control the whole compression and encryption process are first generated from the plain image and an initial key. Subsequently, the discrete wavelet transform is used to convert the plain image into a coefficient matrix. After that, permutation controlled by the two-dimensional Sine improved Logistic iterative chaotic map (2D-SLIM) is applied to the coefficient matrix to disperse its energy. Furthermore, plaintext-related compressive sensing is performed using a measurement matrix generated by 2D-SLIM. To lower the correlation of the cipher image and make its distribution uniform, the measurement results are quantized to the range 0∼255, and permutation and diffusion operations are performed under the control of the two-dimensional Logistic-Sine-coupling map (2D-LSCM). Finally, some common compression and security performance analysis methods are used to test the scheme. The test and comparison results show that the proposed scheme achieves both excellent security and compression performance when compared with other recent works, thus supporting digital image applications in networks.
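
The 2D-SLIM and 2D-LSCM maps and the full scheme are not reproduced here; the sketch below uses a plain 1D logistic map as a stand-in to derive a plaintext-dependent permutation, followed by a random Gaussian measurement matrix and 0–255 quantization, only to illustrate the compress-then-scramble idea.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Iterate a 1D logistic map; used here as a simple stand-in for 2D-SLIM/2D-LSCM."""
    seq, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # dummy plain image

# Plaintext-related key: seed the chaotic map with a value derived from the image itself.
x0 = (image.sum() % 9973) / 9973.0 * 0.9 + 0.05
perm = np.argsort(logistic_sequence(x0, image.size))          # chaotic permutation order
scrambled = image.flatten()[perm].reshape(image.shape).astype(float)

# Compressive-sensing measurement: compress each column with a random Gaussian matrix
# (the paper instead derives its measurement matrix from 2D-SLIM).
m = 40                                                        # measurements per 64-sample column
phi = rng.normal(size=(m, 64)) / np.sqrt(m)
measurements = phi @ scrambled                                # shape (40, 64)

# Quantize to 0-255 so the result can be handled like an ordinary 8-bit cipher image.
q = measurements - measurements.min()
cipher = np.round(255 * q / q.max()).astype(np.uint8)
print(cipher.shape, cipher.dtype)
```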

16 pages, 15778 KiB  
Article
Wildfire Risk Assessment of Transmission-Line Corridors Based on Naïve Bayes Network and Remote Sensing Data
by Weijie Chen, You Zhou, Enze Zhou, Zhun Xiang, Wentao Zhou and Junhan Lu
Sensors 2021, 21(2), 634; https://0-doi-org.brum.beds.ac.uk/10.3390/s21020634 - 18 Jan 2021
Cited by 23 | Viewed by 3145
Abstract
Considering the complexity of the physical model of wildfire occurrence, this paper develops a method to evaluate the wildfire risk of transmission-line corridors based on a Naïve Bayes Network (NBN). First, data on 14 wildfire-related factors, including anthropogenic, physiographic, and meteorological factors, were collected and analyzed. Then, the Relief algorithm was used to rank the importance of the factors according to their impact on wildfire occurrence. After eliminating the least important factors in turn, an optimal wildfire risk assessment model for transmission-line corridors was constructed based on the NBN. Finally, this model was applied and visualized for Guangxi province in southern China, and a cost function was proposed to further verify the applicability of the wildfire risk distribution map. The fire events monitored by satellites during the first season of 2020 show that 81.8% of fires fell in high- and very-high-risk regions.
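
The 14 real factors and the Relief-based ranking are not reproduced; the sketch below uses synthetic factor data, ranks features with mutual information as an easily available stand-in for Relief, and fits a Gaussian Naïve Bayes model whose fire probabilities are binned into risk classes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 14 anthropogenic/physiographic/meteorological factors
# along transmission-line corridors, with a binary fire/no-fire label.
X, y = make_classification(n_samples=4000, n_features=14, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)

# Rank factor importance (the paper uses the Relief algorithm; mutual information is
# used here only as a stand-in) and keep the most important 8 factors.
scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:8]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
nbn = GaussianNB().fit(X_tr, y_tr)

# Continuous fire probabilities can be binned into low/medium/high/very-high risk classes.
proba = nbn.predict_proba(X_te)[:, 1]
risk = np.digitize(proba, bins=[0.25, 0.5, 0.75])
print("accuracy:", nbn.score(X_te, y_te), "risk class counts:", np.bincount(risk, minlength=4))
```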

26 pages, 9997 KiB  
Article
Shelf Auditing Based on Image Classification Using Semi-Supervised Deep Learning to Increase On-Shelf Availability in Grocery Stores
by Ramiz Yilmazer and Derya Birant
Sensors 2021, 21(2), 327; https://0-doi-org.brum.beds.ac.uk/10.3390/s21020327 - 06 Jan 2021
Cited by 23 | Viewed by 10351
Abstract
Providing high on-shelf availability (OSA) is a key factor in increasing profits in grocery stores. Recently, there has been growing interest in computer vision approaches to monitor OSA. However, the largest and best-known computer vision datasets do not provide annotations for store products, and therefore a huge effort is needed to manually label products in images. To tackle the annotation problem, this paper proposes a new method that combines the two concepts of “semi-supervised learning” and “on-shelf availability” (SOSA) for the first time. Moreover, it is the first time that the “You Only Look Once” (YOLOv4) deep learning architecture has been used to monitor OSA. Furthermore, this paper provides the first demonstration of explainable artificial intelligence (XAI) on OSA. It presents a new software application, called SOSA XAI, with its capabilities and advantages. In the experimental studies, the effectiveness of the proposed SOSA method was verified on image datasets with different ratios of labeled samples, varying from 20% to 80%. The experimental results show that the proposed approach outperforms the existing approaches (RetinaNet and YOLOv3) in terms of accuracy.
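
SOSA couples semi-supervised learning with a YOLOv4 detector, which cannot be condensed into a snippet; the sketch below shows only the underlying semi-supervised idea, training on a small labeled fraction plus pseudo-labels for the rest, using scikit-learn's SelfTrainingClassifier on synthetic features. The 20% labeled ratio mirrors the smallest setting in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for product-image features with two classes (e.g., product present / shelf gap).
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep labels for only 20% of the training data; mark the rest as unlabeled (-1).
rng = np.random.default_rng(0)
y_semi = y_tr.copy()
y_semi[rng.random(len(y_semi)) > 0.2] = -1

# Self-training: confident predictions on unlabeled samples become pseudo-labels.
model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X_tr, y_semi)
print("labeled fraction used:", np.mean(y_semi != -1).round(2))
print("test accuracy:", model.score(X_te, y_te).round(3))
```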

23 pages, 8675 KiB  
Article
Estimating the Growing Stem Volume of Coniferous Plantations Based on Random Forest Using an Optimized Variable Selection Method
by Fugen Jiang, Mykola Kutia, Arbi J. Sarkissian, Hui Lin, Jiangping Long, Hua Sun and Guangxing Wang
Sensors 2020, 20(24), 7248; https://0-doi-org.brum.beds.ac.uk/10.3390/s20247248 - 17 Dec 2020
Cited by 25 | Viewed by 3101
Abstract
Forest growing stem volume (GSV) reflects the richness of forest resources as well as the quality of forest ecosystems. Remote sensing technology enables robust and efficient GSV estimation, as it greatly reduces survey time and cost while facilitating periodic monitoring. Given their red edge bands and short revisit period, Sentinel-2 images were selected for GSV estimation in the Wangyedian forest farm, Inner Mongolia, China. The variable combination was shown to significantly affect the accuracy of the estimation model. After extracting spectral variables, texture features, and topographic factors, a stepwise random forest (SRF) method was proposed to select variable combinations and establish random forest regressions (RFR) for GSV estimation. The linear stepwise regression (LSR), Boruta, Variable Selection Using Random Forests (VSURF), and random forest (RF) methods were then used as references for comparison with the proposed SRF for the selection of predictors and GSV estimation. Combined with the observed GSV data and the Sentinel-2 images, the distributions of GSV were generated by the RFR models with the variable combinations determined by LSR, RF, Boruta, VSURF, and SRF. The results show that the texture features of Sentinel-2’s red edge bands can significantly improve the accuracy of GSV estimation. The SRF method can effectively select the optimal variable combination, and the SRF-based model yields the highest estimation accuracy, with decreases in relative root mean square error of 16.4%, 14.4%, 16.3%, and 10.6% compared with the LSR-, RF-, Boruta-, and VSURF-based models, respectively. The GSV distribution generated by the SRF-based model matched the field observations well. The results of this study are expected to provide a reference for the GSV estimation of coniferous plantations.
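
This is not the authors' SRF implementation; it is a minimal forward-stepwise sketch that repeatedly adds the candidate variable that most improves the cross-validated RMSE of a RandomForestRegressor, on synthetic data standing in for the spectral, texture, and topographic predictors.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for spectral, texture, and topographic predictors and observed GSV.
X, y = make_regression(n_samples=300, n_features=12, n_informative=5, noise=10.0, random_state=0)

def cv_rmse(cols):
    """Cross-validated RMSE of a random forest restricted to the selected columns."""
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    scores = cross_val_score(rf, X[:, cols], y, cv=5, scoring="neg_root_mean_squared_error")
    return -scores.mean()

selected, remaining, best_rmse = [], list(range(X.shape[1])), np.inf
while remaining:
    # Try adding each remaining variable and keep the one that helps most.
    trial = {f: cv_rmse(selected + [f]) for f in remaining}
    f_best, rmse_best = min(trial.items(), key=lambda kv: kv[1])
    if rmse_best >= best_rmse:                     # stop when no variable improves the model
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_rmse = rmse_best
    print(f"added variable {f_best}, CV RMSE = {rmse_best:.2f}")

print("selected variable combination:", selected)
```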

17 pages, 5939 KiB  
Article
Classification of Variable Foundation Properties Based on Vehicle–Pavement–Foundation Interaction Dynamics
by Robin Eunju Kim
Sensors 2020, 20(21), 6263; https://0-doi-org.brum.beds.ac.uk/10.3390/s20216263 - 03 Nov 2020
Cited by 2 | Viewed by 1864
Abstract
The dynamic interaction between vehicle, road roughness, and foundation is a fundamental problem in road management, and a complex one owing to their coupled, nonlinear behavior. Thus, in this study, a vehicle–pavement–foundation interaction model was formulated to incorporate the mass inertia of the vehicle, stochastic roughness, and a non-uniform, deformable foundation. Herein, a quarter-car model was considered, a filtered white noise model was formulated to represent the road roughness, and a two-layered foundation was employed to simulate the road structure. To represent the non-uniform foundation, stiffness and damping coefficients were assumed to vary in either a linear or a quadratic manner. Subsequently, an augmented state-space representation was formulated for the entire system. The time-varying equation governing the covariance of the response was solved to examine the vehicle response subject to various foundation properties. Finally, a linear discriminant analysis method was employed to classify the foundation types. The performance of the classifier was validated on test sets containing 100 cases for each foundation type. The results showed an accuracy of over 90%, indicating that machine learning-based classification of the foundation from vehicle responses has potential for road management.
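
The state-space simulation is not reproduced; the sketch below only illustrates the final classification step, using scikit-learn's LinearDiscriminantAnalysis to assign synthetic "vehicle response feature" vectors to three assumed foundation types.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features derived from vehicle response covariances, with three
# foundation classes (e.g., uniform, linearly varying, quadratically varying properties).
X, y = make_classification(n_samples=1500, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("per-class test accuracy:")
for c in range(3):
    mask = y_te == c
    print(f"  foundation type {c}: {lda.score(X_te[mask], y_te[mask]):.2f}")
```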

16 pages, 1628 KiB  
Article
Multi-Frame Star Image Denoising Algorithm Based on Deep Reinforcement Learning and Mixed Poisson–Gaussian Likelihood
by Ming Xie, Zhenduo Zhang, Wenbo Zheng, Ying Li and Kai Cao
Sensors 2020, 20(21), 5983; https://0-doi-org.brum.beds.ac.uk/10.3390/s20215983 - 22 Oct 2020
Cited by 9 | Viewed by 2608
Abstract
Mixed Poisson–Gaussian noise exists in star images and is difficult to suppress effectively via the maximum likelihood estimation (MLE) method due to its complicated likelihood function. In this article, the MLE method is incorporated with a state-of-the-art machine learning algorithm in order to achieve accurate restoration results. By applying the mixed Poisson–Gaussian likelihood function as the reward function of a reinforcement learning algorithm, an agent is able to form a restored image that achieves the maximum value of the complex likelihood function through a Markov Decision Process (MDP). In order to provide appropriate parameter settings for the denoising model, the key hyperparameters of the model and their influence on the denoising results are tested through simulated experiments. The model is then compared with two existing star image denoising methods to verify its performance. The experimental results indicate that this reinforcement learning-based algorithm is able to suppress the mixed Poisson–Gaussian noise in star images more accurately than the traditional MLE method, as well as the method based on the deep convolutional neural network (DCNN).

Review

26 pages, 2318 KiB  
Review
Review of Capacitive Touchscreen Technologies: Overview, Research Trends, and Machine Learning Approaches
by Hyoungsik Nam, Ki-Hyuk Seol, Junhee Lee, Hyeonseong Cho and Sang Won Jung
Sensors 2021, 21(14), 4776; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144776 - 13 Jul 2021
Cited by 22 | Viewed by 10451
Abstract
Touchscreens have been studied and developed for a long time to provide user-friendly and intuitive interfaces on displays. This paper describes touchscreen technologies in four categories: resistive, capacitive, acoustic wave, and optical methods. It then addresses the main studies on SNR improvement and stylus support for the capacitive touchscreens that have been widely adopted in most consumer electronics, such as smartphones, tablet PCs, and notebook PCs. In addition, machine learning approaches for capacitive touchscreens are explained for four applications: user identification/authentication, gesture detection, accuracy improvement, and input discrimination.
