
Recent Trends in Embedded Technologies and Wearable Systems: Artificial Intelligence Solutions

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (28 February 2021) | Viewed by 12848

Special Issue Editor


Guest Editor
School of Computing, Gachon University, Seongnam 13120, Republic of Korea
Interests: deep learning; computer vision; image processing; brain science; pattern recognition

Special Issue Information

Dear Colleagues,

With the rapid advancement of real-time control for embedded and wearable systems, Artificial Intelligence (AI) approaches have come to play a crucial role in many applications. Today's embedded technologies and wearable systems are becoming ever more intimate and central to human lives. As with any technology, a number of challenges surround the use of wearable and embedded technology, such as security and privacy, energy consumption, application development platforms, and human-computer interaction. Further, given the latest advances in health monitoring and related applications, we believe it is timely and important to reveal the extent to which embedded and wearable system developments in AI might offer a paradigm shift in this context. This Special Issue will bring together recent trends in AI, focusing on real-world applications for next-generation embedded and wearable technologies, to address the problem of how to handle uncertainty (e.g., noisy sensors) with probabilistic, machine learning, and adaptive methodologies. The relevant AI branches include, but are not limited to, expert systems, artificial immune systems, swarm intelligence, fuzzy systems, (deep) neural networks, evolutionary computing, and various hybrid systems that combine two or more of these branches.

The main idea of this Special Issue is to explore the capabilities of AI methods and their significant impact on embedded and wearable systems. Moreover, this Special Issue aims to address the comprehensive nature of embedded wearable computing systems and to emphasize the role of AI in the modelling, identification, optimization, prediction, forecasting, and control of future generations of such systems. Thus, this Special Issue solicits state-of-the-art research findings from both academia and industry, with a particular emphasis on novel techniques that demonstrate the impact of AI in wearable embedded technologies and related applications. Submissions should be original, unpublished, and present novel, in-depth, fundamental research contributions, from either a methodological or an application perspective, toward accomplishing embedded wearable systems for society.

The topics relevant to this special issue include the following:

 

  • Artificial intelligence solutions (data fusion, association, classification)
  • Challenges to data analysis arising from the miniature size of sensors and supporting electronics, such as data power levels and artefacts
  • Access control for wearable systems
  • Adaptive and hybrid AI systems for wearable computing systems
  • Fully integrated ultra-wearable sensing platforms
  • Privacy and security challenges in wearable embedded systems
  • Multimodal sensors: next-generation space-saving and unobtrusive solutions
  • Wearable embedded technologies and their applications (examples: auditory and visual brain-computer interfaces, fatigue, sleep, physiological stress, etc.)
  • Energy harvesting and management for pervasive sensing
  • Specification, validation, and verification of wearable embedded systems and software
  • Datasets, measurement, and performance evaluation
  • Signal processing solutions for wearable physiological sensing (data conditioning, detection, estimation)
  • Deep learning methods for inferring health-related information

Papers must be tailored to the emerging fields of AI paradigms in embedded wearable computing systems and explicitly consider recent deployment models, challenges, and novel solutions. The Guest Editor reserves the right to reject papers deemed out of the scope of this Special Issue. Only original, unpublished contributions and invited articles will be considered for the issue. Papers should be formatted according to the journal guidelines.

Dr. Sang-Woong Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent embedded system
  • AI solutions
  • inferring health-related information
  • wearable physiological sensing

Published Papers (5 papers)


Research

14 pages, 3728 KiB  
Communication
Study on Human Activity Recognition Using Semi-Supervised Active Transfer Learning
by Seungmin Oh, Akm Ashiquzzaman, Dongsu Lee, Yeonggwang Kim and Jinsul Kim
Sensors 2021, 21(8), 2760; https://0-doi-org.brum.beds.ac.uk/10.3390/s21082760 - 14 Apr 2021
Cited by 18 | Viewed by 3101
Abstract
In recent years, various studies have begun to use deep learning models to conduct research in the field of human activity recognition (HAR). However, the development of such models has lagged, since training deep learning models requires a lot of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high costs and effort. Existing methods rely heavily on manual data collection and proper labeling of the data by human administrators, which often makes the data gathering process slow and prone to human-biased labeling. To address these problems, we propose a new solution for the existing data gathering methods that reduces the labeling tasks required for new data by using a semi-supervised active transfer learning method. This method achieved 95.9% performance while also reducing labeling compared to the random sampling or active transfer learning methods.
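The selection step at the heart of active learning can be illustrated with a short sketch: the model pseudo-labels samples it is confident about and routes only the least-confident ones to a human annotator. This is a generic least-confidence sampler, not code from the paper; the function name, class counts, and probabilities below are illustrative only.

```python
def least_confident(probabilities, budget):
    """Pick the `budget` samples whose top-class probability is lowest,
    i.e. the ones the model is least sure about, for manual labeling."""
    confidences = [(max(p), i) for i, p in enumerate(probabilities)]
    confidences.sort()  # least confident first
    return [i for _, i in confidences[:budget]]

# Example: softmax outputs for five unlabeled windows of sensor data
probs = [
    [0.90, 0.05, 0.05],  # confident -> could be pseudo-labeled automatically
    [0.40, 0.35, 0.25],  # uncertain -> send to annotator
    [0.80, 0.10, 0.10],
    [0.34, 0.33, 0.33],  # most uncertain
    [0.70, 0.20, 0.10],
]
print(least_confident(probs, 2))  # prints [3, 1]
```

In a full active transfer learning loop, the labeled picks would be added to the training set and the model fine-tuned before the next selection round.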

19 pages, 1481 KiB  
Article
Accelerating On-Device Learning with Layer-Wise Processor Selection Method on Unified Memory
by Donghee Ha, Mooseop Kim, KyeongDeok Moon and Chi Yoon Jeong
Sensors 2021, 21(7), 2364; https://0-doi-org.brum.beds.ac.uk/10.3390/s21072364 - 29 Mar 2021
Cited by 3 | Viewed by 2211
Abstract
Recent studies have applied the superior performance of deep learning to mobile devices, enabling deep learning models to run on devices with limited computing power. However, a deep learning model's performance degrades when it is deployed across mobile devices, due to the different sensors of each device. To solve this issue, it is necessary to train a network model specific to each mobile device. Therefore, herein, we propose an acceleration method for on-device learning to mitigate device heterogeneity. The proposed method efficiently utilizes unified memory to reduce the latency of data transfer during network model training. In addition, we propose a layer-wise processor selection method that accounts for the latency incurred when the forward propagation step and the backpropagation step of the same layer run on different processors. The experiments were performed on an ODROID-XU4 with the ResNet-18 model, and the results indicate that the proposed method reduces latency by at most 28.4% compared to the central processing unit (CPU) and at most 21.8% compared to the graphics processing unit (GPU). Through experiments using various batch sizes to measure average power consumption, we confirmed that device heterogeneity is alleviated by performing on-device learning using the proposed method.
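The layer-wise processor selection described in the abstract can be sketched as a small dynamic program over per-layer CPU/GPU latencies, with a transfer penalty whenever execution switches processors (a penalty of zero models unified memory, where no copy is needed). This is a hedged illustration of the general idea, not the paper's implementation; all latencies are made up.

```python
def select_processors(cpu_ms, gpu_ms, switch_ms):
    """Choose a processor per layer to minimize total latency.
    cpu_ms/gpu_ms: per-layer latency on each processor; switch_ms:
    data-transfer penalty whenever execution moves between processors
    (0 models a unified-memory device with no copy cost)."""
    # best[p] = (total cost, per-layer choices) ending on processor p
    best = {"cpu": (cpu_ms[0], ["cpu"]), "gpu": (gpu_ms[0], ["gpu"])}
    for i in range(1, len(cpu_ms)):
        nxt = {}
        for proc, layer_ms in (("cpu", cpu_ms[i]), ("gpu", gpu_ms[i])):
            options = [
                (cost + layer_ms + (0 if proc == prev else switch_ms),
                 path + [proc])
                for prev, (cost, path) in best.items()
            ]
            nxt[proc] = min(options)
        best = nxt
    return min(best.values())

# With a 2 ms copy penalty, switching is too expensive; with unified
# memory (penalty 0), mixing processors per layer becomes profitable.
print(select_processors([5, 2, 6], [1, 4, 1], 2))  # (6, ['gpu', 'gpu', 'gpu'])
print(select_processors([5, 2, 6], [1, 4, 1], 0))  # (4, ['gpu', 'cpu', 'gpu'])
```

The second call shows why unified memory matters: removing the transfer cost lets the scheduler pick the faster processor for each layer independently.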

18 pages, 1358 KiB  
Article
CCA: Cost-Capacity-Aware Caching for In-Memory Data Analytics Frameworks
by Seongsoo Park, Minseop Jeong and Hwansoo Han
Sensors 2021, 21(7), 2321; https://0-doi-org.brum.beds.ac.uk/10.3390/s21072321 - 26 Mar 2021
Cited by 1 | Viewed by 1705
Abstract
To process data from IoT and wearable devices, analysis tasks are often offloaded to the cloud. As the amount of sensing data ever increases, optimizing the data analytics frameworks is critical to the performance of processing sensed data. A key approach to speeding up data analytics frameworks in the cloud is caching intermediate data that are used repeatedly in iterative computations. Existing analytics engines implement caching with various approaches: some use run-time mechanisms with dynamic profiling, while others rely on programmers to decide which data to cache. Even though caching has long been investigated in computer systems research, recent data analytics frameworks still leave room for optimization. As sophisticated caching should consider complex execution contexts such as cache capacity, the size of the data to cache, and the victims to evict, no general solution exists for data analytics frameworks. In this paper, we propose an application-specific, cost-capacity-aware caching scheme for in-memory data analytics frameworks. We use a cost model, built from multiple representative inputs, and an execution flow analysis, extracted from the DAG schedule, to select primary caching candidates among the intermediate data. After the caching candidates are determined, the optimal caching is selected automatically during execution, so programmers no longer need to manually determine caching for the intermediate data. We implemented our scheme in Apache Spark and experimentally evaluated it on the HiBench benchmarks. Compared to the caching decisions in the original benchmarks, our scheme increases performance by 27% with sufficient cache memory and by 11% with insufficient cache memory.
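The cost-capacity trade-off can be illustrated with a minimal greedy sketch: given each intermediate dataset's memory footprint and the recomputation cost that caching it would save, pick candidates by cost saved per unit of memory until the capacity runs out. This is an illustrative simplification, not the paper's scheme, which derives its costs from profiled representative inputs and DAG-based execution flow analysis; the dataset names and numbers are invented.

```python
def choose_cache(candidates, capacity):
    """Greedy cost-capacity-aware pick: cache the datasets with the best
    recompute-cost-saved per unit of memory until capacity is exhausted.
    candidates: list of (name, size, cost_saved) tuples."""
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    chosen, used = [], 0
    for name, size, saved in ranked:
        if used + size <= capacity:  # skip anything that no longer fits
            chosen.append(name)
            used += size
    return chosen

# Three hypothetical intermediate datasets: (name, size in GB,
# recomputation time saved if cached)
candidates = [("rdd_a", 4, 40), ("rdd_b", 3, 9), ("rdd_c", 2, 10)]
print(choose_cache(candidates, capacity=6))  # prints ['rdd_a', 'rdd_c']
```

With ample capacity everything worth caching gets cached; the interesting decisions, as in the paper's insufficient-memory experiments, appear only when candidates compete for space.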

13 pages, 3679 KiB  
Article
A Study on User Recognition Using the Generated Synthetic Electrocardiogram Signal
by Min-Gu Kim and Sung Bum Pan
Sensors 2021, 21(5), 1887; https://0-doi-org.brum.beds.ac.uk/10.3390/s21051887 - 8 Mar 2021
Cited by 5 | Viewed by 2012
Abstract
Electrocardiogram (ECG) signals are time series data acquired over time. A problem with these signals is that comparison data of the same size as the registration data must be acquired every time. To resolve this data size inconsistency problem, a network model based on an auxiliary classifier generative adversarial network that is capable of generating synthetic ECG signals is proposed. After constructing comparison data with various combinations of real and generated synthetic ECG signal cycles, a user recognition experiment was performed by applying them to an ensemble network with a parallel structure. A recognition performance of 98.5% was demonstrated when five cycles of real ECG signals were used. Moreover, accuracies of 98.7% and 97% were obtained when, in addition to four cycles of real ECG, the last cycle was a synthetic ECG signal cycle or a repetition of the fourth real cycle, respectively. When two cycles of synthetic ECG signals were used with three cycles of real ECG signals, the accuracy was 97.2%. When the third cycle was repeatedly used with the three cycles of real ECG signals, the accuracy was 96%, which was 1.2% lower than the performance obtained using the synthetic ECG. Therefore, even if the sizes of the registration data and the comparison data are not consistent, the generated synthetic ECG signals can be applied in a real-life environment, because a high recognition performance is demonstrated when they are applied to an ensemble network with a parallel structure.
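The padding idea, matching a short ECG comparison sample to the registration length by appending either a generated synthetic cycle or a repeat of the last real cycle (the two strategies the experiments compare), can be sketched as follows. The helper function is hypothetical, not from the paper, and the integer cycle values stand in for real per-cycle signal segments.

```python
def pad_cycles(real_cycles, target_len, synth_cycle=None):
    """Pad a short ECG comparison sample to the registration length,
    either with a generated synthetic cycle (if given) or by repeating
    the last real cycle."""
    padded = list(real_cycles)
    filler = synth_cycle if synth_cycle is not None else real_cycles[-1]
    while len(padded) < target_len:
        padded.append(filler)
    return padded

# Four real cycles padded to the five-cycle registration length:
print(pad_cycles([1, 2, 3, 4], 5))                 # prints [1, 2, 3, 4, 4]
print(pad_cycles([1, 2, 3, 4], 5, synth_cycle=9))  # prints [1, 2, 3, 4, 9]
```

The paper's result that synthetic padding slightly outperforms repetition suggests the generated cycles add information rather than redundancy.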

28 pages, 14354 KiB  
Article
Design and Implementation of a Video/Voice Process System for Recognizing Vehicle Parts Based on Artificial Intelligence
by Kapyol Kim, Incheol Jeong and Jinsoo Cho
Sensors 2020, 20(24), 7339; https://0-doi-org.brum.beds.ac.uk/10.3390/s20247339 - 21 Dec 2020
Cited by 6 | Viewed by 2643
Abstract
With the recent development of artificial intelligence along with information and communications infrastructure, a new paradigm of online services is emerging. Whereas in the past a service system could only exchange the service provider's information at the user's request, information can now be provided by automatically analyzing a particular need, even without a direct user request. This also holds for online platforms for used-vehicle sales. In the past, consumers had to inconveniently determine and classify the quality of information through static data provided by service and information providers. As a result, this service field has been harmful to consumers owing to problems such as false sales, fraud, and exaggerated advertising. Despite the significant efforts of platform providers, human resources for censoring the vast amounts of data uploaded by sellers are limited. Therefore, in this study, an algorithm called YOLOv3+MSSIM Type 2 was developed for automatically censoring used-vehicle sales data on an online platform. To this end, an artificial intelligence system that can automatically analyze an object in a vehicle video uploaded by a seller, and an artificial intelligence system that can filter vehicle-specific terms and profanity from the seller's video presentation, were also developed. In an evaluation of the developed system, the average execution speed of the proposed YOLOv3+MSSIM Type 2 algorithm was 78.6 ms faster than that of the pure YOLOv3 algorithm, and the average frame rate was improved by 40.22 fps. In addition, the average GPU utilization rate was improved by 23.05%, proving its efficiency.
